Yang Yang · Jing Xu · Guang Shi · Cheng-Xiang Wang
5G Wireless Systems
Simulation and Evaluation Techniques
Wireless Networks
Series editor
Xuemin Sherman Shen
University of Waterloo, Waterloo, Ontario, Canada
More information about this series at http://www.springer.com/series/14180
Yang Yang
CAS Key Lab of Wireless Sensor Network and Communication, Shanghai Institute of Microsystem and Information Technology, Shanghai, China

Jing Xu
Shanghai Research Center for Wireless Communications, Shanghai, China
It is anticipated that the Fifth Generation (5G) mobile communication systems will
start to be commercialized and deployed in 2018 for new mobile services in three
key scenarios, i.e., enhanced Mobile Broadband (eMBB), massive Machine Type
Communications (mMTC), and Ultra-Reliable and Low Latency Communications
(uRLLC). Based on the preliminary research for 5G standardization by telecom
industry leaders since 2012, the International Telecommunication Union (ITU) has
identified and announced the 5G vision and Key Performance Indicators (KPI) on
spectrum efficiency, energy efficiency, peak data rate, traffic density, device con-
nectivity, radio latency and reliability for achieving more comprehensive and better
service provisioning and user experience. At present, the telecom industry is
actively developing a variety of enabling technologies, such as massive MIMO, Ultra-Dense Network (UDN), and mmWave, to accelerate the ongoing 5G standardization and pre-commercial trial processes. On the other hand, new 5G technologies drive the design and development of corresponding simulation platforms, evaluation methods, field trials, and application scenarios to compare, screen, and
improve 5G candidate technologies for the ongoing 5G standardization, system
development, and performance enhancement. In view of these 5G technological
trends and testing requirements, this book aims at addressing the technical chal-
lenges and sharing our practical experiences in the simulation and evaluation of a
series of 5G candidate technologies. In particular, it reviews the latest research and
development activities of 5G candidate technologies in the literature, analyzes the
real challenges in testing and evaluating these technologies, proposes different
technical approaches by combining advanced software and hardware capabilities,
and presents the convincing evaluation results in realistic mobile environments and
application scenarios, which are based on our long-term dedicated research and
practical experiences in wireless technology evaluation and testbed development.
This book reviews and provides key testing and evaluation methods for 5G candi-
date technologies from the perspective of technical R&D engineers. To help readers with different backgrounds better understand important concepts and methods, we include in this book many examples of various simulation
2.4 Summary  43
References  43
3 Channel Measurement and Modeling  45
3.1 Requirements for 5G Wireless Channel Model  45
3.2 Channel Modeling Method  50
3.2.1 Measurement-Based GSCM  51
3.2.2 Regular-shape GSCM  54
3.2.3 CSCM  56
3.2.4 Extended SV Model  57
3.2.5 Ray Tracing based Model  58
3.2.6 Comparison of Modeling Methods  60
3.3 Channel Measurement  61
3.3.1 Channel Measurement Methods  62
3.3.2 Channel Measurement Activities  71
3.4 Channel Data Processing  90
3.4.1 Path Parameters Extraction  90
3.4.2 Channel Statistical Analysis  103
3.5 Existing Channel Models  113
3.5.1 3GPP SCM/3D/D2D/HF  113
3.5.2 WINNER I/II/+  114
3.5.3 ITU IMT-Advanced, IMT-2020  116
3.5.4 COST 259/273/2100/IC1004  117
3.5.5 IEEE 802.11 TGn/TGac  118
3.5.6 QuaDRiGa, mmMAGIC  119
3.5.7 IEEE 802.15.3c/IEEE 802.11ad/aj/ay  120
3.5.8 MiWEBA  121
3.5.9 METIS  122
3.5.10 5GCM  123
3.5.11 Comparison of Existing Models  123
3.6 Stochastic Channel Generation  126
3.6.1 Definition of Simulation Scenarios  127
3.6.2 Generation of Large Scale Parameters  129
3.6.3 Generation of Small Scale Parameters  135
3.6.4 Generation of Channel Coefficient  141
3.7 Chapter Summary  146
References  147
4 Software Simulation  157
4.1 Overview on Software Simulation  157
4.2 5G Software Simulation Requirements  161
4.2.1 Simulation Requirements Analysis  161
4.2.2 Technological Impact Analysis  164
4.3 5G Software Link Level Simulation  167
4.3.1 Link Technology Overview  167
Abbreviations
AT Announcement Time
AUT UT Austin
AWGN Additive White Gaussian Noise
AWRI Advanced Wireless Research Initiative
AXI Advanced eXtensible Interface
AXIe Advanced eXtensible Interface express
B/S Browser/Server
BBU BaseBand Unit
BDS BeiDou Navigation Satellite System
BER Bit Error Rate
BFS Breadth-First Search algorithm
BI Beacon Intervals
B-ISDN Broadband Integrated Services Digital Network
BLER BLock Error Rate
BRP Beam Refinement Phase
BS Base Station
BSC Base Station Controller
BT Beacon Time
CA Carrier Aggregation
CAPEX CAPital EXpenditure
CAPWAP Control and Provision of Wireless Access Points
CB Coherence Bandwidth
CBP Competition-Based access Period
CC Cooperative Communications
CDD Cyclic Delay Diversity
CDD-OFDM Cyclic Delay Diversity Orthogonal Frequency Division
Multiplexing
CDF Cumulative Distribution Function
CDL Cluster Delay Line
CDM Cyclic Delay Modulation
CDMA Code Division Multiple Access
CDMA2000 Code Division Multiple Access 2000
CDN Content Delivery Network
CI Close-In free space reference distance (path loss model)
CIF CI model with a Frequency-dependent path loss exponent
CIR Channel Impulse Response
CMD Correlation Matrix Distance
CoMP Coordinated Multiple Points
COST European Cooperation in Science and Technology
COTS Commercial Off The Shelf
COW Cluster of Workstation/PC
CP Cyclic Prefix
CPCI Compact PCI
CPRI Common Public Radio Interface
EM Expectation-Maximization
EOA Elevation angle Of Arrival
EOD Elevation angle Of Departure
EPC Evolved Packet Core
ER Effective Roughness
ESNR Estimated Signal-to-Noise Ratio
ESPRIT Estimation of Signal Parameters via Rotational Invariance
Techniques
EU European Union
EuQoS End-to-end Quality Of Service Support over Heterogeneous
Networks
EUT Equipment Under Test
EVA Extended Vehicular A model
EVM Error Vector Magnitude
FB Flexible Backhauling
FBMC Filter Bank based Multi-Carrier
FCC Federal Communications Commission
FDD Frequency Division Duplex
FDMA Frequency Division Multiple Access
FD-SAGE Frequency Domain SAGE
FER Frame Error Rate
FFR Fractional Frequency Reuse
FFT Fast Fourier Transform
FIFO First Input First Output
F-OFDM Filtered OFDM
FPGA Field Programmable Gate Array
FR Fraunhofer Region
FST Fast Session Transfer
FT France Telecom
GBC General purpose Baseband Computing
GBCM Geometric-Based stochastic Channel Model
GCS Global Coordinate System
GIS Geographic Information System
GO Geometrical Optics
GoF Goodness of Fit
GPIB General Purpose Interface Bus
GPP General Purpose Processor
GPS Global Positioning System
GPU Graphics Processing Unit
GSCM Geometric-based Stochastic Channel Model
GSM Global System for Mobile communication
GSM-R Global System for Mobile Communications-Railway
GTD Geometrical Theory of Diffraction
HARQ Hybrid Automatic Repeat reQuest
IT Information Technology
ITU International Telecommunication Union
JP Joint Processing
KBSM Kronecker product based CBSM
KEST Kalman Enhanced Super resolution Tracking algorithm
KF K-Factor
KPI Key Performance Indicators
K-S Kolmogorov-Smirnov
LAA Licensed-Assisted Access
LAN Local Area Network
LCR Level Crossing Rate
LCS Local Coordinate System
LDPC Low Density Parity Check
LDSMA Low Density Spreading Multiple Access
LFDMA Localized FDMA
LMDS Local Multipoint Distribution Services
LO Local Oscillator
LOS Line Of Sight
Low Power SCPHY Low Power Single Carrier PHYsical layer
LS Least Square
LSF Local Scattering Function
LSPs Large-Scale Parameters
LTE Long-Term Evolution
LTE-A LTE-Advanced
LTE-U LTE in Unlicensed spectrum
LUT Look-Up Table
LXI LAN eXtension for Instrumentation
M2M Machine to Machine
MAC Media Access Control
MAGNET My personal Adaptive Global NET
MAN Metropolitan Area Network
MCD Multipath Component Distance
MCN Multi-hop Cellular Network
MCS Modulation and Coding Scheme
MED Maximum Excess Delay
METIS Mobile and wireless communications Enablers for Twenty-
twenty (2020) Information Society
MIC Many Integrated Cores
MIMO Multiple Input and Multiple Output
MiWEBA Millimetre-Wave Evolution for Backhaul and Access
ML Maximum Likelihood
MLE Maximum Likelihood Estimation
MME Mobility Management Entity
personally experience a feast of the information age. The purpose of the 5G era is to construct a stable, convenient, and economical information ecosystem for human beings. As shown in Fig. 1.1, various features of the information age will be included in the development of 5G, and users will enjoy a more convenient and intelligent life [5]. With the popularity of wearable devices, the types and number of mobile terminals will experience explosive growth. Predictably, future demands for virtual reality and augmented reality experiences, for moving massive amounts of office data to the cloud, for wireless control of industrial manufacturing and production processes, for remote medical surgery, for automation in the smart grid, for transportation safety, and for other applications will require not only very high data rates from the 5G network but also a real-time experience with almost zero latency. In addition, cost reduction and energy saving should also be considered.
The birth of 5G largely benefited from the large-scale growth of the mobile Internet and the Internet of Things (IoT), and the applications of 5G also mainly lie in the development of these two networks [6–9]. In recent years, the mobile Internet, as the carrier of the main mobile data communications services, has greatly promoted the development of various fields of information services. Various service providers have made full use of the advantages of their resources and services and developed
Application scenarios of 5G relate to every aspect of people's daily life, work, entertainment, and transportation, and wireless communications will show different characteristics in different scenarios. For example, in areas crowded with mobile devices, such as residential areas, stadiums, and marketplaces, wireless communications will be characterized by high traffic volume density and a high number of connections, while on transport systems such as subways and high-speed railways, the high-mobility feature of wireless communications will be prominent. At present, the Fourth Generation mobile communications system (4G) is not able to satisfy the requirements of some special scenarios featuring high traffic volume density, a high number of connections, and high mobility [14].
In crowded scenarios such as stadiums, which need ultra-high traffic volume density and ultra-high connection density, the wireless transmission rate must be as high as that of optical fiber so that it can carry services such as photo transmission, video transmission, and live broadcasting. In high-speed mobility scenarios, e.g., High-Speed Rail (HSR), the traffic volume density and connection density are relatively lower than those of stadiums. Since an HSR's speed is usually above 200 km/h, it places high requirements on wireless communication systems to support high-speed mobility.
Although it is very convenient for us to access the Internet now, half of the world is still beyond Internet coverage several decades after mobile terminals came into being. With the development and evolution of the Internet, its scope has expanded accordingly, and more and more devices are connected with each other. Cisco Systems forecasts that by 2019, 11.5 billion devices worldwide will be connected [15], including some hard-to-connect devices which are under water or beyond satellite coverage, so it becomes more and more important to meet the requirements for wide coverage in the future.
It is predicted that for a long period in the future, mobile data traffic will continue to show explosive growth [4]: from 2010 to 2020, global mobile data traffic will increase by more than 200 times, and from 2010 to 2030 by more than 20,000 times. Meanwhile, the growth of mobile data traffic in China is higher than the global average: it is predicted to grow by more than 300 times from 2010 to 2020, and by more than 40,000 times from 2010 to 2030.
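As a rough sanity check on these multipliers, the implied compound annual growth rates can be computed directly. The short Python sketch below uses only the figures quoted above and is purely illustrative.

```python
# Implied compound annual growth rate (CAGR) for the traffic multipliers
# quoted above (2010 baseline). Purely illustrative arithmetic.

def cagr(multiplier: float, years: int) -> float:
    """Constant annual growth rate implied by `multiplier` over `years`."""
    return multiplier ** (1.0 / years) - 1.0

for label, multiplier, years in [
    ("Global, 2010-2020", 200, 10),
    ("Global, 2010-2030", 20_000, 20),
    ("China,  2010-2020", 300, 10),
    ("China,  2010-2030", 40_000, 20),
]:
    print(f"{label}: ~{cagr(multiplier, years) * 100:.0f}% per year")
```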
Therefore, based on the requirements above, 5G's overall goals are: much faster, more efficient, and more intelligent. Specific 5G performance requirements are shown in Table 1.1 [16].
To satisfy the user experience in multiple dimensions, a combination of different technologies is needed to reach the 5G performance requirements in the above table. For example, ultra-dense wireless communications technology can contribute to improving user-experienced data rate, connection density, and service density by increasing the base station deployment density. Massive antenna technology can effectively improve spectrum utilization efficiency by increasing the number of antennas, and it plays an important role in improving peak data rate, user-experienced data rate, connection density, and service density. Millimeter wave communications technology can increase the usable spectrum on a large scale, which helps improve peak data rate, user-experienced data rate, and service density.
Starting from 2009, Long Term Evolution (LTE) set off a boom across the globe. At the end of 2012, the European Union (EU) invested 27 million euros to launch the world's first 5G research project, Mobile and wireless communications Enablers for the Twenty-twenty (2020) Information Society (METIS).
In order to meet the 1000-fold growth in wireless service demand over the next ten years, the capacity of wireless networks can be expanded in three directions: raising spectrum utilization, enhancing spatial multiplexing, and expanding bandwidth. For example, deploying ultra-dense small base stations shortens the distance between wireless access networks and terminals and can effectively improve spectrum efficiency, throughput per unit area, and power efficiency. Expanding the use of unlicensed spectrum, high frequency bands, and the millimeter wave band can greatly increase the 5G system's available bandwidth. And through massive MIMO, the potential of the spatial dimension can be further tapped, greatly improving spectrum utilization. The characteristics and performance of the various candidate wireless technologies vary greatly, and we briefly introduce several typical 5G candidate technologies in the following.
How can we effectively increase the capacity of a wireless network? A simple and effective way is to decrease the coverage area of each cell and deploy the cellular network densely, so as to improve the spatial reuse of frequency resources and accordingly enhance the throughput per unit area. Meanwhile, because the distance between base stations and terminals is shortened, transmission loss will be reduced, greatly improving power efficiency.
Statistics show that in wireless networks, more than 70% of wireless data is generated indoors and in hotspots [19]. If we want to significantly improve network capacity by relying only on outdoor macro cells, there are two limitations. On the one hand, due to the scarcity of macro cell site resources, the deployment density cannot be increased much further. On the other hand, when macro cell base stations provide indoor coverage, wireless signals experience severe penetration loss, leading to very poor indoor coverage performance and making it very difficult to meet indoor and hotspot traffic needs. Therefore, a large number of small access points of various types will be densely deployed, forming an ultra-dense wireless network. These small access points include Home eNodeBs (HeNB), WLAN Access Points (AP), Relay Nodes (RN), Micro Cells, Pico Cells, etc. In particular, both Home eNodeBs and WLAN access points can be connected to the operator's core network through wired broadband in a plug-and-play way, and their deployment is very flexible and convenient.
Since the access points of an ultra-dense wireless network may be deployed randomly by end users according to their own needs, and since traffic demands change more frequently, traditional network planning will face huge challenges. According to the deployment scenarios and application requirements, ultra-dense wireless networks will have multiple features:

- Ultra-dense wireless network sites need ad hoc networking capabilities, such as autonomous neighbor cell discovery, physical cell identifier configuration, adaptive access point activation, and adaptive carrier selection and re-selection.
- Because it is very difficult to receive positioning satellite signals from the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), ultra-dense wireless networks need to realize network synchronization through the air interface.
- Due to the limited coverage of each access point, ultra-dense wireless networks need the capability to distinguish users' moving speeds and to adaptively connect fast-moving users and quasi-stationary users to the macro coverage and the ultra-dense wireless network, respectively.
- In order to enhance mobility robustness, the user plane and control plane can be separated, keeping the control plane in the macro cell to reduce handover latency and ensure a consistent user experience.
- With the relatively small coverage of every access point, a high line-of-sight transmission probability, and lower path loss, ultra-dense wireless networks can make full use of high-frequency and millimeter wave transmission, whose spectrum resources are still abundant.
- Since hotspot and indoor traffic requirements change frequently and deployment cost needs to be reduced, the ultra-dense wireless backhaul network can use wireless links, and the spectrum resources of the wireless backhaul and access links can be shared.
In order to better measure and evaluate the cost of ultra-dense wireless networks, J. G. Andrews et al. [3] defined the base station density gain ρ (ρ > 0), i.e., the gain in data transmission rate obtained as the network density increases. If the original data transmission rate is R_1 at a base station density of λ_1 BSs/km², and the ultra-dense network achieves a data transmission rate R_2 at a base station density of λ_2 BSs/km², then the corresponding density gain is

ρ = (R_2 / R_1) / (λ_2 / λ_1)    (1.1)

According to this formula, if the network density doubles and at the same time the data transmission rate doubles, then the density gain is 1.
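The density gain of Eq. (1.1) is straightforward to compute; the following minimal Python sketch (with illustrative rate and density values, not data from the book) reproduces the doubling example given above.

```python
# Minimal sketch of the base station density gain rho in Eq. (1.1):
# rho = (R2 / R1) / (lambda2 / lambda1), i.e. the rate gain normalized
# by the densification factor. Values below are illustrative only.

def density_gain(r1: float, lam1: float, r2: float, lam2: float) -> float:
    """Density gain: rate improvement per unit of added base station density."""
    return (r2 / r1) / (lam2 / lam1)

# Doubling both the BS density and the achieved data rate gives rho = 1,
# matching the example in the text.
print(density_gain(r1=100.0, lam1=1.0, r2=200.0, lam2=2.0))  # -> 1.0
```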
technologies of high frequency band and millimeter wave, will further improve the
performance of wireless signal coverage [21].
Since massive MIMO was first proposed, it has attracted immediate attention from academia and industry. Operators, equipment manufacturers, and research institutions have all shown great interest and made a series of achievements [21–24]. From 2010 to 2013, led by Bell Labs, Lund University and Linköping University in Sweden, Rice University in the U.S., and many other research institutions, the international academic community made extensive explorations of massive MIMO channel capacity, transmission, Channel State Information (CSI) acquisition, and testing.
Massive MIMO has certain technical advantages, for example:

- Assuming that the channels of different users are independent of each other, since the base station uses a large number of transmit and receive antennas, a transmitter or receiver based on matched filtering can effectively suppress multiuser interference. Close-to-optimal performance can therefore be achieved with linear-complexity processing, greatly improving system spectrum utilization (a simple numerical illustration follows this list).
- The narrow beams formed by massive MIMO can greatly improve energy efficiency.
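The following Python sketch is a simple numerical illustration of the matched-filtering advantage mentioned in the first item; the antenna counts, user count, SNR, and i.i.d. Rayleigh channel assumption are illustrative choices, not values from the book.

```python
# Illustrative sketch: uplink maximum ratio combining (matched filtering)
# with i.i.d. Rayleigh channels. As the number of base station antennas M
# grows, user channels become nearly orthogonal, so a simple linear
# matched filter suppresses multiuser interference.
import numpy as np

rng = np.random.default_rng(0)
K, snr_lin, trials = 8, 10.0, 200          # users, per-user SNR, Monte Carlo runs

for M in (8, 64, 512):                     # base station antennas
    sinrs = []
    for _ in range(trials):
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        x = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
        n = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2 * snr_lin)
        y = H @ x + n
        w = H[:, 0]                        # matched filter (MRC) for user 0
        desired = np.vdot(w, H[:, 0]) * x[0]
        z = np.vdot(w, y)                  # combined output
        sinrs.append(np.abs(desired) ** 2 / np.abs(z - desired) ** 2)
    print(f"M={M:4d}: average post-MRC SINR ~ {10 * np.log10(np.mean(sinrs)):5.1f} dB")
```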
Correspondingly, research on massive MIMO also faces some challenges:

- The pairing freedom between downlink multi-user MIMO and uplink virtual MIMO users has greatly increased, which requires the design of a realizable, high-performance radio resource scheduler.
- In the uplink of a large-scale antenna system, a large number of users will be multiplexed. With limited bandwidth and a limited number of orthogonal pilot sequences, the pilot channels will be contaminated, which degrades the performance of the uplink receiver.
According to the latest spectrum requirement studies [20], the world's incremental spectrum requirement in 2020 will be 1000–2000 MHz, while low-frequency resources have been largely depleted. Compared with the deployed low frequency bands, the available frequency resources in the millimeter wave band (30–300 GHz) are quite abundant, about 200 times those of the low frequency bands. Therefore, the industry has begun to explore how to use the millimeter wave band (30–300 GHz) in wireless communications. The millimeter wave band has long been considered unsuitable for wireless transmission because of its relatively large path loss, absorption by the atmosphere and rain, relatively poor diffraction, large phase noise, and the high cost of measurement equipment. However, as semiconductor technology matures, the cost of equipment has been declining rapidly. At present, in satellite communications, Local Multipoint Distribution
where a_LOS, a_out, and b_out are parameters determined from measurement data in typical network deployment scenarios. When the terminal is close to the base station, millimeter wave transmission has a close-to-zero probability of being in the outage state and mainly relies on line-of-sight transmission. When the terminal moves away from the base station, the millimeter wave link is in either the line-of-sight or the non-line-of-sight state. When the terminal is far away from the base station, millimeter wave transmission is in the non-line-of-sight transmission
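The equation referred to above is not reproduced in this excerpt. As a hedged illustration only, the sketch below uses the three-state (LOS/NLOS/outage) parameterization published by Akdeniz et al. [26] for 28/73 GHz New York City measurements; both the functional form and the numeric parameters are assumptions drawn from that paper, not necessarily the formula intended here.

```python
# Hedged sketch of a distance-dependent three-state (LOS / NLOS / outage)
# model of the kind described above. Functional form and parameters follow
# one published mmWave parameterization (Akdeniz et al. [26]) and are
# assumptions, since the actual equation is not reproduced in this excerpt.
import math

A_LOS = 1 / 67.1    # 1/m, a_LOS (assumed value)
A_OUT = 1 / 30.0    # 1/m, a_out (assumed value)
B_OUT = 5.2         # dimensionless, b_out (assumed value)

def state_probabilities(d: float):
    """Return (p_los, p_nlos, p_outage) at transmitter-receiver distance d (meters)."""
    p_out = max(0.0, 1.0 - math.exp(-A_OUT * d + B_OUT))
    p_los = (1.0 - p_out) * math.exp(-A_LOS * d)
    return p_los, 1.0 - p_out - p_los, p_out

for d in (20, 100, 300):
    p_los, p_nlos, p_out = state_probabilities(d)
    print(f"d={d:3d} m: LOS={p_los:.2f}  NLOS={p_nlos:.2f}  outage={p_out:.2f}")
```

Consistent with the text, this kind of model gives near-zero outage probability close to the base station and a growing outage probability at large distances.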
Although 4G's physical layer is built on Orthogonal Frequency Division Multiplexing (OFDM), the OFDM waveform itself has some defects, for example: (1) its slow spectral roll-off leads to high out-of-band leakage, so larger spectrum guard bands are needed; (2) to avoid inter-carrier interference, (coarse) synchronization is needed between nodes' transmissions, and in a hierarchical network, base stations with different coverage must be synchronized. The application scenarios of 5G are far more complex than those of 4G, with very strict requirements on latency and on the number of access connections. OFDM's slow roll-off and strict synchronization requirements cannot support asynchronous fast access for real-time services or efficient use of non-contiguous spectrum.
To address the disadvantages of OFDM, many alternative waveforms have been proposed, such as Filter Bank based Multi-Carrier (FBMC), Filtered OFDM (F-OFDM), and Universal Filtered Multi-Carrier (UFMC) [3]. By filtering each sub-band or sub-carrier, the spectral roll-off is sharpened and spectrum leakage is reduced, so the time-frequency synchronization requirement is relaxed and the frequency guard bands and time-domain guard intervals can be reduced or removed. These new waveform technologies can be combined well with OFDM and MIMO and increase the flexibility of the 5G air interface design, so as to match different traffic latency and data rate requirements.
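As a rough illustration of the sub-band filtering idea, the following Python sketch compares the out-of-band emission of plain CP-OFDM with a filtered version; the numerology, the windowed-sinc filter design, and the measurement band are all illustrative assumptions and do not correspond to any specific 5G proposal.

```python
# Illustrative sketch (assumed toy numerology and a simple windowed-sinc
# subband filter): compares out-of-band (OOB) emission of plain CP-OFDM
# with a filtered version, in the spirit of F-OFDM/UFMC subband filtering.
import numpy as np

rng = np.random.default_rng(1)
n_fft, n_cp, n_sym = 256, 16, 200
used = np.arange(-36, 36)                        # occupied subcarriers around DC

# --- generate a CP-OFDM baseband signal -------------------------------------
syms = []
for _ in range(n_sym):
    grid = np.zeros(n_fft, complex)
    qpsk = (rng.integers(0, 2, used.size) * 2 - 1 +
            1j * (rng.integers(0, 2, used.size) * 2 - 1)) / np.sqrt(2)
    grid[used % n_fft] = qpsk
    t = np.fft.ifft(grid) * np.sqrt(n_fft)
    syms.append(np.concatenate([t[-n_cp:], t]))  # prepend cyclic prefix
x = np.concatenate(syms)

# --- simple subband filter: windowed-sinc low-pass around the subband -------
L = 129
n = np.arange(L) - (L - 1) / 2
cutoff = (used.size / 2 + 4) / n_fft             # normalized cutoff (cycles/sample)
h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(L)
x_filt = np.convolve(x, h, mode="same")

def oob_power_db(sig):
    """Mean PSD (dB) well outside the occupied subband."""
    f = np.fft.fftshift(np.fft.fftfreq(sig.size))
    psd = np.fft.fftshift(np.abs(np.fft.fft(sig)) ** 2) / sig.size
    return 10 * np.log10(psd[np.abs(f) > 0.35].mean())

print("OOB level, plain CP-OFDM :", round(oob_power_db(x), 1), "dB")
print("OOB level, filtered OFDM :", round(oob_power_db(x_filt), 1), "dB")
```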
The mobile Internet and IoT are the driving forces of 5G development, and the requirements of their various applications vary greatly. For example, various types of real-time traffic in the 5G network impose millisecond-level end-to-end latency requirements. Undoubtedly, such strict traffic latency requirements place very high demands on the physical layer design (including symbol duration, synchronization procedures, random access, frame structure, etc.). In addition, traditional cellular communications are connection-based: synchronization is established first, network resources are scheduled, the end-to-end connection is set up, and only then is data transmitted. In IoT applications dominated by machine-type communications, wireless sensor nodes are mainly energy constrained and the transmitted data packets are usually quite small, so the requirements on traffic latency and energy efficiency are very high. If we still use the traditional connection-based cellular approach, it will introduce a lot of unnecessary network overhead and lead to overly long activation times of wireless sensor nodes, which is not conducive to reducing energy consumption. In order to support fast access and to simplify the channel access and signaling procedures, non-orthogonal multiple access has been proposed for 5G: modulation, spreading, power, and spatial dimensions are jointly mapped so that users may transmit data without being scheduled, and spectrum utilization can be effectively improved.
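One simple flavor of this idea is power-domain superposition with successive interference cancellation (SIC); the sketch below is a toy two-user example with an assumed power split and noise level, not the specific multiple access scheme discussed in the text.

```python
# Minimal sketch (illustrative parameters) of power-domain non-orthogonal
# access: two users are superposed on the same resource with different
# power levels; the near user applies successive interference cancellation.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
s_far = 2.0 * rng.integers(0, 2, n) - 1.0    # BPSK, far (weak-channel) user
s_near = 2.0 * rng.integers(0, 2, n) - 1.0   # BPSK, near (strong-channel) user

p_far, p_near = 0.8, 0.2                     # assumed power split (sums to 1)
tx = np.sqrt(p_far) * s_far + np.sqrt(p_near) * s_near

# Near user's received signal (high-SNR link, AWGN only for simplicity)
rx = tx + rng.normal(scale=0.05, size=n)

# SIC at the near user: detect the stronger far-user signal first,
# subtract it, then detect its own signal from the residual.
far_hat = np.sign(rx)
residual = rx - np.sqrt(p_far) * far_hat
near_hat = np.sign(residual)

print(f"Near-user BER after SIC: {np.mean(near_hat != s_near):.4f}")
```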
Device-to-Device (D2D) communication means that two terminal devices communicate directly without going through the base station or the core network. Its typical applications include cellular-assisted D2D communications and Vehicle-to-Vehicle (V2V) direct communications. Cellular-assisted D2D communication is a new technology that allows terminals to reuse cell resources for direct communications under the control of a cellular system.
D2D communications can increase the spectrum efficiency of the cellular system, reduce the terminal transmit power, and, to a certain extent, ease the scarcity of spectrum resources in wireless communication systems. In addition, it can bring further benefits, including reducing the load on the cellular network, reducing mobile terminal battery consumption, increasing the transmission rate, improving robustness against network infrastructure failures, and, what's more, supporting new point-to-point data services within a small area.
D2D communications technology also faces the following problems:

- Since D2D communications reuse cellular resources, mutual interference between D2D and cellular communications arises. When D2D communications reuse uplink resources, it is the base station that is interfered with by the D2D links, and the base station can control this interference by adjusting the transmit power of the D2D links and the resources they reuse. When D2D communications reuse downlink resources, it is a downlink user that is interfered with by the D2D links; this interference is not controllable and may result in link failure (the uplink/downlink asymmetry is illustrated in the sketch following this list).
- A synchronous mode of D2D communications can improve energy efficiency. When the terminal is within the coverage of a base station, it can use the base station as its time synchronization source, so as to realize synchronous D2D communications. When the D2D terminals are not within base station coverage, one terminal can be selected as the cluster head to transmit synchronization signals. When D2D communications involve multiple hops, this will lead to multi-hop transmission of
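To illustrate the uplink/downlink resource-reuse asymmetry described in the first item of the list above, the following sketch computes the victim SINR in both cases; the geometry, transmit powers, and path loss exponent are made-up illustrative values.

```python
# Illustrative sketch (assumed geometry and simple power-law path loss):
# when a D2D pair reuses UPLINK resources, its interference lands at the
# base station, which can counter it by power control; when it reuses
# DOWNLINK resources, the interference lands at a nearby downlink user.
import math

ALPHA = 3.5                              # assumed path loss exponent
def rx_power(p_tx_dbm, dist_m):
    return p_tx_dbm - 10 * ALPHA * math.log10(dist_m)

P_BS, P_UE, P_D2D = 43.0, 23.0, 10.0     # dBm transmit powers (illustrative)
NOISE = -100.0                            # dBm

def sinr(sig_dbm, itf_dbm):
    return sig_dbm - 10 * math.log10(10 ** (itf_dbm / 10) + 10 ** (NOISE / 10))

# Uplink reuse: victim is the BS, D2D transmitter 300 m away.
sinr_ul = sinr(rx_power(P_UE, 150), rx_power(P_D2D, 300))

# Downlink reuse: victim is a downlink UE with the D2D transmitter only 10 m away.
sinr_dl = sinr(rx_power(P_BS, 150), rx_power(P_D2D, 10))

print(f"Uplink-reuse SINR at BS      : {sinr_ul:6.1f} dB")
print(f"Downlink-reuse SINR at DL UE : {sinr_dl:6.1f} dB")
```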
On the one hand, building more accurate channel propagation models for the 5G candidate technologies poses challenges for the corresponding testing and verification. On the other hand, new channel models that extend traditional channel models bring more types and dimensions of parameters, which increases the computational complexity and storage requirements of the link-level and system-level simulations used to evaluate these technologies.
In terms of wireless network technology, the user plane and control plane are separated, and wireless network resources are controlled and optimized in a centralized way, which leads to a sharp increase in the feasible solution space for control and optimization. With multi-core Central Processing Unit (CPU), Graphics Processing Unit (GPU), and Field Programmable Gate Array (FPGA) computing resources, we can build a heterogeneous computing platform and allocate these computing resources to the baseband signal processing operations with higher computational complexity. We need to design scheduling algorithms for heterogeneous computing resources, accurately estimate the time consumed by heterogeneous computation and interface data transfers, and meanwhile design synchronization mechanisms for computing tasks, so as to make full use of the heterogeneous computing platform.
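As one possible illustration of such scheduling, the sketch below implements a simple greedy assignment of baseband tasks to CPU/GPU/FPGA resources based on estimated compute time plus interface transfer time; the task list, throughput, and bandwidth numbers are hypothetical, and this is not the book's algorithm.

```python
# Hedged sketch (hypothetical task and resource numbers): a greedy scheduler
# that assigns baseband processing tasks to heterogeneous resources
# (CPU / GPU / FPGA) by minimizing estimated compute plus transfer time.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    gflops: float           # effective throughput for this task class
    transfer_gbps: float    # interface bandwidth to/from host memory
    busy_until: float = 0.0

@dataclass
class Task:
    name: str
    gflop: float            # compute load
    data_gb: float          # data moved over the interface

def schedule(tasks, resources):
    plan = []
    for t in tasks:
        best, best_finish = None, float("inf")
        for r in resources:
            runtime = t.gflop / r.gflops + t.data_gb * 8 / r.transfer_gbps
            finish = r.busy_until + runtime
            if finish < best_finish:
                best, best_finish = r, finish
        best.busy_until = best_finish
        plan.append((t.name, best.name, round(best_finish, 3)))
    return plan

resources = [Resource("CPU", 200, 50), Resource("GPU", 4000, 12), Resource("FPGA", 800, 8)]
tasks = [Task("channel-est", 50, 0.2), Task("MIMO-detect", 900, 0.5),
         Task("LDPC-decode", 400, 0.1), Task("FFT-batch", 120, 0.3)]

for task, res, finish in schedule(tasks, resources):
    print(f"{task:12s} -> {res:4s} (finishes at t={finish} s)")
```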
All in all, meeting the requirements of realness, comprehensiveness, rapidity, and flexibility is an important challenge for 5G testing and evaluation.
Realness
In 5G testing and evaluation, the requirement for realness is reflected in the 5G wireless channel model, the verification methods, the user experience, and other aspects. From the viewpoint of channel modeling, describing the channel characteristics numerically with testing and measurement methods, and providing a close-to-real physical channel simulation environment in a realistic and reproducible way, are of self-evident importance for verifying 5G communication technologies. Compared with 3G/4G systems, 5G adds new application scenarios characterized by ultra-high traffic volume density, ultra-high connection density, ultra-high mobility, and so on. From the viewpoint of network topology, various link types in a 5G communication system will coexist in the same region, extending from the traditional macrocell and microcell to picocell, femtocell, and nomadic base stations, and supporting D2D, Machine-to-Machine (M2M), V2V, and fully-connected networks. In such complex and diversified network architectures, traditional channel models do not give sufficient consideration to the spatial consistency of small- and medium-scale parameters, which may lead to exaggerated simulation performance in technology evaluation (for example, for MU-MIMO) [32]. In addition, the two-way mobility of D2D/V2V introduces a Doppler model different from the traditional one; dense scattering exists at both the transmitter and the receiver and the stationarity interval is short, all of which need to be considered in the channel model. Under high-mobility conditions, fast time variation, more severe fading, and mixed propagation scenarios make it difficult for current channel models to be applied directly to 5G high-speed mobile scenarios. Therefore, the wide range of propagation scenarios and diversified network topologies pose challenges to the 5G channel model. At the technology level, the variety of networks and transmission technologies emerging in 5G research poses further challenges to the channel model: radio waves at higher frequencies and larger bandwidths have unique propagation characteristics, including non-line-of-sight path loss, high-resolution angular characteristics, and outdoor mobile channels; and with larger antenna arrays (in both the number of antenna elements and the physical dimensions), the original plane-wave propagation assumption is no longer applicable, and the non-stationarity of scattering clusters is reflected not only along the time axis but also along the array. In Chap. 3, we will describe how to build a realistic and reliable 5G channel model in terms of channel characteristics, measurement methods, parameter extraction, and data analysis.
In terms of verification methods, the traditional approach to wireless link system design in mobile communications relies mainly on software simulation to verify and evaluate algorithms. However, there is a gap between the system environment under pure software simulation and the real situation. First of all, software simulation assumes that the hardware can perfectly realize the software algorithms, so the impact of hardware constraints on the communication system cannot be captured. In reality, algorithm design is often restricted by hardware conditions, and software algorithms are often significantly degraded when implemented in hardware. For example, in an actual hardware environment, algorithms with particularly large amounts of computation and extremely high accuracy requirements are often unable to achieve their optimal performance, and precisely designed algorithms are often vulnerable to outside interference. Algorithms at the front end of the transmitter and receiver also have to consider power, the impact on other hardware components, and many other factors. If we only use software simulation for system verification and evaluation, it is easy to ignore the above problems, which leaves the designed algorithms at the theoretical stage, unable to meet the needs of real deployments. Secondly, earlier software simulation systems often introduced simplifications in network and channel modeling, traffic abstraction, design, and implementation for convenience of processing; these simplifications usually reduce the computational cost of simulation at the cost of authenticity. Thirdly, accurate mathematical modeling of complex time-varying nonlinear systems is very difficult; no matter how good a model is, it cannot replace real scenarios.
In order to more accurately reflect the impact of the actual link processing on system simulation, we can use a combination of hardware and software: using a real hardware platform, deploying the link to be simulated in hardware (such as an FPGA or a channel emulator) for real-time computation, and thereby improving the simulation accuracy through hardware. In addition, introducing hardware-in-the-loop techniques in 5G testing not only gets closer to the real environment and increases the reliability of the simulation tests, but also allows communications equipment to be tested under extreme conditions that are difficult to reproduce in the laboratory. For the corresponding content, please refer to Chaps. 5 and 6 of this book. Furthermore, in terms of environmental authenticity, during the research and development of 5G key technologies, in addition to all kinds of simulation and laboratory evaluation, we also have to verify technological feasibility through field experiments, and verify the tested technology or prototype equipment in real scenarios through the chain of algorithm realization → data acquisition → system optimization → field verification, providing a field basis for standardization. Therefore, in Chap. 8 of this book, we will analyze the planning of field test sites from the perspectives of the diversity of application scenarios, the integration of heterogeneous networks, and large numbers of users.
Comprehensiveness
The demand for comprehensiveness in testing and evaluation mainly involves two aspects: on the one hand, comprehensive support for evaluating the 5G performance indicators; on the other hand, comprehensive support for the diverse candidate technologies. The 5G evaluation indicator system will include objective indicators at the transmission and network levels as well as user experience indicators. Transmission is mainly reflected in the physical layer's transceiver performance and the link layer's performance indicators, while the network layer is reflected in the evaluation indicators for system performance. In evaluating the performance of a 5G system, we need not only to comprehensively measure all kinds of KPIs, but also to consider the relationships between the indicators and to guarantee a consistent user experience across space, time, network, service, and other dimensions.
5G candidate technologies can be divided into two categories: air interface technologies and network technologies. The air interface technologies include massive MIMO, full duplex, novel multiple access and waveforms, high-frequency communications, and flexible spectrum sharing and usage. The network technologies include Cloud Radio Access Network (C-RAN), Software Defined Networking/Network Functions Virtualization (SDN/NFV), ultra-dense networks, multi-RAT, and direct device-to-device communications, etc. For 5G software simulation, this book will, at the architecture, element, and module levels, give a detailed analysis of the various 5G candidate technologies, propose and implement the corresponding resource models and design schemes, and build a comprehensive performance evaluation system, which is presented in detail in Chap. 4.
Rapidity
The requirement for rapid testing mainly derives from the huge growth in 5G network traffic and the variety of performance indicators. In order to evaluate simulation tasks in a timely manner against the KPIs in the 5G white papers, the computing performance of the simulation system must grow by more than 1000 times. With such rapid growth in computational requirements, a systematic and completely new design and implementation is needed for the simulation system's hardware platform, software platform, and simulation applications. It can be predicted that, for some time, multi-core and Many Integrated Core (MIC) parallel computing will be the main technical means of enhancing computational efficiency, and cloud computing and supercomputers will gradually become the computing infrastructure of simulation systems. For the simulation system, the key problem is how to complete the concurrent design and coding of the software system on these new hardware platforms with powerful computational capabilities.
Flexibility
Fig. 1.2 Relationship between chapter setting and the four elements of evaluation
1.5 Summary
Starting from 5G application requirements, this chapter first introduced typical 5G applications, deployment scenarios, and key technical indicators, as well as the plans and progress of global 5G research and development. After a brief overview of the various 5G candidate technologies and their characteristics, and an analysis of the challenges these candidate technologies pose to 5G testing and verification, we discussed the challenges brought by the four element requirements of realness, comprehensiveness, rapidity, and flexibility in testing and evaluation, as well as the relationship between the chapter structure and these four elements of evaluation, thus setting the stage for the subsequent chapters.
References
1. C. Wang, F. Haider, X. Gao, et al. Cellular Architecture and Key Technologies for 5G Wireless Communication Networks. IEEE Communications Magazine, 2014, 52(2):122–130.
2. X. Chu, D. Lopez-Perez, Y. Yang, and F. Gunnarsson. Heterogeneous Cellular Networks: Theory, Simulation and Deployment. New York, USA: Cambridge University Press, 2013.
3. J. G. Andrews, S. Buzzi, W. Choi, et al. What Will 5G Be? IEEE Journal on Selected Areas in Communications, 2014, 32(6):1065–1082.
4. IMT-2020 (5G) Promotion Group. White Paper on 5G Vision and Requirements, 2014. http://www.imt-2020.cn/zh/documents/download/1.
5. ITU. Recommendation ITU-R M.2083-0: IMT Vision - Framework and overall objectives of the future development of IMT for 2020 and beyond. Technical report, ITU-R, 2015.
6. J. Yang, Y. Qiao, X. Zhang, et al. Characterizing User Behavior in Mobile Internet. IEEE Transactions on Emerging Topics in Computing, 2015, 3(1):95–106.
7. W. Huang, Z. Chen, W. Dong, et al. Mobile Internet big data platform in China Unicom. Tsinghua Science and Technology, 2014, 19(1):95–101.
8. L. Xu, W. He, S. Li. Internet of Things in Industries: A Survey. IEEE Transactions on Industrial Informatics, 2014, 10(4):2233–2243.
9. Z. Sheng, S. Yang, Y. Yu, et al. A survey on the IETF protocol suite for the internet of things: standards, challenges, and opportunities. IEEE Wireless Communications, 2013, 20(6):91–98.
10. C. Perera, A. Zaslavsky, P. Christen, D. Georgakopoulos. Context Aware Computing for The Internet of Things: A Survey. IEEE Communications Surveys & Tutorials, 2014, 16(1):414–454.
11. C. Perera, C. H. Liu, S. Jayawardena, C. Min. A Survey on Internet of Things from Industrial Market Perspective. IEEE Access, 2014, 2:1660–1679.
12. A. Zanella, N. Bui, A. Castellani, et al. Internet of Things for Smart Cities. IEEE Internet of Things Journal, 2014, 1(1):22–32.
13. C. Tsai, C. Lai, M. Chiang, L. T. Yang. Data Mining for Internet of Things: A Survey. IEEE Communications Surveys & Tutorials, 2014, 16(1):77–97.
14. IMT-2020 (5G) Promotion Group. White Paper on 5G Wireless Technology Architecture, 2015. http://www.imt-2020.cn/zh/documents/download/61.
15. Cisco. Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2014–2019. White Paper, 2015.
16. IMT-2020 (5G) Promotion Group. White Paper on 5G Concept, 2015. http://www.imt-2020.cn/zh/documents/download/23.
17. A. Osseiran, V. Braun, T. Hidekazu, et al. The foundation of the Mobile and Wireless Communications System for 2020 and beyond: Challenges, Enablers and Technology Solutions. IEEE Vehicular Technology Conference, 2013:1–5.
18. A. Osseiran, F. Boccardi, V. Braun, et al. Scenarios for 5G mobile and wireless communications: the vision of the METIS project. IEEE Communications Magazine, 2014, 52(5):26–35.
19. Qualcomm. The 1000x Mobile Data Challenge, 2013. https://www.qualcomm.com/documents/1000xmobile-data-challenge.
20. T. Wang, B. Huang, J. Pang. Current Situation and Prospect of Spectrum Requirements Forecasting of the Future IMT System. Telecom Science, 2013, 29(4):125–130.
21. G. Zhen, D. Ling, M. De, et al. MmWave Massive MIMO based Wireless Backhaul for the 5G Ultra-Dense Network. IEEE Wireless Communications, 2015, 22(5):13–21.
22. E. Larsson, O. Edfors, F. Tufvesson, and T. Marzetta. Massive MIMO for next generation wireless systems. IEEE Communications Magazine, 2014, 52(2):186–195.
23. J. Shen, J. Zhang, K. B. Letaief. Downlink User Capacity of Massive MIMO Under Pilot Contamination. IEEE Transactions on Wireless Communications, 2015, 14(6):3183–3193.
24. L. Lu, G. Y. Li, A. L. Swindlehurst, et al. An Overview of Massive MIMO: Benefits and Challenges. IEEE Journal of Selected Topics in Signal Processing, 2014, 8(5):742–758.
25. G. R. MacCartney, T. S. Rappaport. 73 GHz millimeter wave propagation measurements for outdoor urban mobile and backhaul communications in New York City. Proceedings of IEEE International Conference on Communications, 2014:4862–4867.
26. M. R. Akdeniz, Y. Liu, M. K. Samimi, et al. Millimeter wave channel modeling and cellular capacity evaluation. IEEE Journal on Selected Areas in Communications, 2014, 32(6):1164–1179.
27. A. Ghosh, T. A. Thomas, M. C. Cudak, et al. Millimeter-wave enhanced local area systems: a high-data-rate approach for future wireless networks. IEEE Journal on Selected Areas in Communications, 2014, 32(6):1152–1163.
28. Nokia. Optimizing mobile broadband performance by spectrum refarming. White Paper, 2014. http://networks.nokia.com/file/35861/optimizing-mobile-broadband-performance-by-spectrum-refarming.
29. F. M. Abinader, E. P. L. Almeida, F. S. Chaves, et al. Enabling the coexistence of LTE and Wi-Fi in unlicensed bands. IEEE Communications Magazine, 2014, 52(11):54–61.
30. B. Singh, S. Hailu, K. Koufos, et al. Coordination protocol for inter-operator spectrum sharing in co-primary 5G small cell networks. IEEE Communications Magazine, 2015, 53(7):34–40.
31. H. Zhang, X. Chu, W. Guo, et al. Coexistence of Wi-Fi and heterogeneous small cell networks sharing unlicensed spectrum. IEEE Communications Magazine, 2015, 53(3):158–164.
32. METIS. Deliverable D1.4: METIS Channel Models, 2015. https://www.metis2020.com/wp-content/uploads/METIS_D1.4_v3.pdf.
Chapter 2
Evolution of Testing Technology
Testing and evaluation are indispensable links in technology and product development. Every new technology or product must go through a strict testing and evaluation process before its final release. In the 5G era, testing tools and evaluation methods face new challenges as the new 5G technologies continue to develop. In the section on testing technology evolution in this chapter, we give a brief review of the history of testing technologies and analyze the development trends of the testing technology ecosystem based on existing technologies. In the section on testing technology challenges, based on the characteristics of wireless communications and 5G technology, we sort out and analyze the challenges facing wireless communications testing technologies in terms of overall performance, the number of RF channels, high throughput, etc.
among which measurement technology is the key and the basis [1]. Computers, communications, and instruments are three internationally recognized information technologies, and they are the industries that have developed fastest since the beginning of the twenty-first century. Technological developments in different fields usually support and promote each other. Thanks to advances in computer and communications technology, measuring instruments have undergone rapid development and change over the past 30 years, which will be described in subsequent chapters of this book. As one of the three key technologies of the information industry, testing and measurement technology has become the foundation and guarantee of development for the electronic information industry. Measuring instruments are strategic equipment for a country, and their development level has become a marker of national scientific and technological capability, comprehensive national strength, and international competitiveness. In communications, radar, navigation, electronic countermeasures, space technology, measurement and control, aerospace, and many other fields, electronic measuring instruments are indispensable technical equipment. In modern manufacturing, advanced instruments are needed to support testing in research and design, production process control, and product testing.
Instruments are the realization carriers of testing technology. Moreover, testing is an important step for checking the quality of an entire product, and also an important basis for helping researchers improve the product. Any new technology or product must be fully tested and evaluated before entering industrial production and becoming a standard technology or a qualified product in the market. Testing technology needs to run through the entire process, from the pre-design stage, through the design stage, to the production completion stage. Take the production of mobile phones as an example. Before development and production, engineers need to test all kinds of components related to the phone. During development and production, they need to perform qualitative or quantitative tests on the machined parts, the semi-finished phone, and all kinds of phone parameters. After production is completed, comprehensive tests of the phone's functionality, performance, and reliability are needed, such as radio frequency conformance tests, WLAN tests, GPS tests, and so on, in order to judge whether the phone meets the various protocol standards and regulatory requirements. Different tests are used to determine whether the phone is up to standard. After the test data are organized and analyzed, if the testing shows that the phone does not meet the standards, it goes back for performance optimization and is tested again; the test data can be used to assist researchers with performance optimization and problem solving. The product should not enter the market as a qualified mobile phone until the test results finally meet the standards. The quality and efficiency of products are constrained by the quality and efficiency of testing technology. Only an accurate and efficient testing process can guarantee the efficient production of high-performance products for today's fast-moving terminal market. Take software testing as another example: engineers run or test software manually or automatically to find bugs or defects, to help researchers optimize the software design, and to provide quality assurance for
Instrument 3.0 Era, which has introduced networked, interconnected testing and evolved from standalone testing instruments to the integration of testing equipment and services. At present, the testing industry is still in the Instrument 2.0 Era and the early stage of the Instrument 3.0 Era. Whether changes will take place in the development of instruments in the future, and whether such changes will be fast or slow, depends on the continued joint efforts of technical workers. The development trends of testing technologies will be described in the subsequent chapters. It should be noted that the division into different eras is not absolute. Instruments of an earlier era do not immediately disappear when new technology emerges; instead, they are more likely to continue to evolve with technological development, or to be replaced and taken out of production after a certain period of time. The renewal of old and new instruments is a process of coexistence and substitution.
The development of testing technologies is in effect the development of test instruments. This section leads readers through the development and evolution of testing instruments. Since the main realization carriers of testing technologies are the testing instruments, it is easier for readers to understand the development of testing technologies in the corresponding periods with an introduction to the development of the instruments.
Unlike the high-end testing instruments that test engineers use now, the initial test instruments were simple, and the measurement results were displayed by a pointer. They had single functions and low openness and were known as analog instruments, such as the analog voltmeter and ammeter. Afterwards, with the emergence of vacuum tube technology, vacuum tube instruments came into being, such as the early oscilloscopes. Subsequently, with the emergence of transistors and integrated circuits, the instrument industry produced instruments based on integrated circuit chips, known as digital instruments. The basic working principle of digital instruments is that analog signals are converted to digital signals during measurement, and the test results are finally displayed and output in digital form. Compared with analog instruments, digital instruments give more intuitive and clearer test results, respond faster, and have relatively higher accuracy, but the instruments of that time still depended on manual operation. In the 1970s, testing instruments began to use microprocessors, and simple intelligent instruments began to appear in the field. For the first time, engineers tasted the benefits of intelligence. The introduction of microprocessors greatly improved the performance and degree of automation of the instruments, enabling automatic range conversion, automatic zeroing, automatic trigger level adjustment, automatic calibration, self-diagnosis, and many other functions [5].
As time goes by, technologies in different fields are all developed. Owing to the
development and progress of the computer technology, software, data processing,
data bus, and reprogrammable chips, test instruments began to transform from the
functional fixed discrete instruments to flexible instruments based on the software
design. The instrument based on the software design allows users to redefine the
software function according to their own needs. The development of network
technology also experienced the user defined stage of Web2.0. Similarly, we call
this stage of the instruments as Instrument 2.0 Era. Therefore, the previous
traditional instrument times can be called as Instrument 1.0 Era.
In Instrument 1.0 Era, engineers totally relied on hardware to realize the testing
and measurement. It was expert and manufacturing plant who took charge of design
and manufacturing. Although users were allowed to put forward some opinions and
demands, they could not be realized immediately. Moreover, users were unlikely to
participate in the design and manufacture of products. The hardware itself and its
analysis function were defined by the instrument suppliers, and self-definition
function was not provided to users. Even if the instrument was connected to the
computer, the transmitted information waste test results defined by equipment
manufacturers. Users are unable to obtain the original measure data to do self-
defined analysis. During the development of the testing instruments, with the help
of the progress in computer technology and communications technology, the
instrument industry had seen fast development and changes from the traditional
hardware instruments. The emergence of bus technology made automated testing
possible, and the rapid growth of the bus technology led to the fast development of
the automated testing. The development of automated testing offers a guarantee for
wide applications of wireless communications products. Traditional testing
involves complicated procedures, is difficult to operate and places strict
requirements on the test engineers, whereas automated testing overcomes those
disadvantages and brings many advantages, such as standardized test procedures,
high testing speed, avoidance of mis-operations, and the ability to guarantee the
accuracy and authenticity of the test results to the greatest extent [6]. The most
representative developments are the concepts of the PCI eXtensions for
Instrumentation (PXI) bus technology and Software Defined Radio (SDR), which
moved testing from a one-way, vendor-defined model to a stage in which users can
redevelop the instruments according to their own needs, bringing us into the
Instrument 2.0 Era, namely, the software-defined radio era. In this era, engineers had more
control rights to the instruments. After getting access to the original real-time data,
engineers could use software to design their user interface and define measurement
tasks in order to obtain the desired results.
During this period, the instrument technology had experienced great improve-
ment. The major improvement and change compared with the 1.0 Era mainly
stemmed from and were reflected in the development and emergence of bus
technology, instrument module technology, SDR, hybrid systems and many other technologies, which are introduced below.
Bus Technology
Since the birth of the first-generation test bus technology, i.e., General Purpose
Interface Bus (GPIB) technology, in the 1970s, testing has gradually developed
from single manual operations to large-scale automated testing systems.
Afterwards, measuring instruments with standard GPIB interfaces sprang up, and
unified standards for interconnecting instruments gradually formed. These
standards allow test engineers to assemble a variety of powerful automatic
measurement systems very conveniently. Meanwhile, with the
passing of time and the improvement of test requirements, the bus technology is not
limited to GPIB anymore. Other bus technologies have been put forward and
improved. The representative bus technologies include VMEbus eXtensions for
Instrumentation (VXI), Peripheral Component Interconnect (PCI), Peripheral
Component Interconnect Express (PCIe), PXI, PXI Express (PXIe), LAN
eXtensions for Instrumentation (LXI) and AdvancedTCA Extensions for
Instrumentation and Test (AXIe), etc.
Through the test bus, the data communications between different units and
modules in the testing instruments are realized. Thanks to the development of bus
technology, testing is no longer limited to the manual operation of a single instru-
ment and the automated testing becomes possible. This progress plays an important
role in promoting the development of automated testing system, so the bus technol-
ogy is an important technology of the 2.0 Era and the main changes in the testing
instruments cannot be separated from the development of the bus technology.
The birth of GPIB technology enabled engineers to connect test instruments
directly to computers. Almost every device has a GPIB interface, which is firm,
reliable, universal, simple and convenient. The simplest configuration connects one
computer to one testing instrument. However, GPIB becomes limiting when a test
needs multiple instruments: its low data transmission rate, high cost, limited
bandwidth and poor reliability mean that synchronization and triggering across
multiple instruments cannot be provided, which degrades the test performance.
VXI has higher bandwidth and lower latency than GPIB, but it is difficult to
apply in other fields because of its high cost, and it is mainly used in military and
aerospace applications. In addition, VXI is based on the outdated VME bus, which
modern computers do not support, and this has limited the further development of
the bus.
Intel put forward the concept of the PCI bus in the early 1990s to connect
peripherals to the computer backplane. The PCI bus is a parallel bus operating at
33 MHz with a 32-bit data width, giving a maximum theoretical bandwidth of
132 Mbytes/s, and it employs a shared-bus topology.
After the PCI bus entered the instrumentation field, the PXI bus was obtained by
extending PCI for instrumentation use. It is a bus standard launched by the
American company National Instruments (NI) in 1997. PXI is based on the mature
PCI bus technology,
so it has a faster bus transmission rate, a smaller volume and better cost-performance
than VXI. In addition, PXI provides nanosecond-level timing and synchronization
features as well as rugged industrial characteristics. Finally, thanks to the flexibility
of the software and the continuous updating of the modular hardware, users can
upgrade the entire test system at any time with minimal investment. Such excellent
scalability and a flexible software architecture make system integration based on
the PXI modular instrumentation platform increasingly common.
In 2004, PCI Express was launched as an evolution of the PCI bus. A testing
system built on Instrument 2.0 technology can enhance the performance of data
streaming and data analysis when adopting this kind of bus. Compared with the
PCI bus, PCI Express uses a point-to-point topology, which provides a separate
data transmission path for each device. Every PCI Express slot has dedicated
bandwidth to the PC memory, so devices do not have to share bandwidth as on
traditional PCI. Data are transmitted and received in packet form through symmetric
lanes with a bandwidth of 250 Mbytes/s in each direction. Multiple lanes can be
aggregated into links up to 32 lanes wide, significantly improving the data
transmission bandwidth, minimizing the demand on memory and speeding up data
streaming.
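As a quick sanity check on the figures quoted above, the following sketch recomputes the theoretical peak bandwidths from the basic bus parameters (clock rate and data width for PCI, per-lane rate and lane count for PCI Express); it is illustrative arithmetic only and ignores protocol overhead.

# Illustrative arithmetic only: theoretical peak bandwidths, ignoring protocol overhead.

def pci_bandwidth_mbytes(clock_hz=33e6, width_bits=32):
    """Shared parallel PCI: clock rate times data width."""
    return clock_hz * width_bits / 8 / 1e6      # -> 132 MB/s

def pcie_link_bandwidth_mbytes(lanes=1, per_lane_mbytes=250):
    """First-generation PCI Express: ~250 MB/s per lane per direction; lanes aggregate."""
    return lanes * per_lane_mbytes

if __name__ == "__main__":
    print(pci_bandwidth_mbytes())               # 132.0
    print(pcie_link_bandwidth_mbytes(32))       # 8000 MB/s for a x32 link, per direction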
The PXI Systems Alliance (PXISA) officially launched the hardware and software
standards for PXI Express in the third quarter of 2005. By
using the technology of PCI Express on the backplane, PXI Express was able to
increase the bandwidth by 45 times, from the original 132 MB/s of PXI to current
6 GB/s. At the same time, software and hardware compatibility with the original
PXI modules was retained. The performance was enhanced so much that PXI
Express could enter application fields that used to be dominated by dedicated
instruments, such as Intermediate Frequency (IF) and Radio Frequency (RF) digital
instruments, communications protocol verification and so on. Thanks
to the software compatibility, PCI Express and PXI can share the same software
architecture. In order to provide hardware compatibility, the latest Compact PCI
Express standard has defined hybrid slots to support modules based on the archi-
tecture of PXI or PCI Express simultaneously [7]. PXI Express also offers a
100 MHz differential system clock, differential signalling, and differential
triggering, timing and synchronization features. Prior to the launch of this
technology, some fields could only rely on expensive dedicated devices. PXI
Express now addresses these fields with high performance, for example
high-bandwidth IF instruments for communications system testing, protocols based
on low-voltage differential signalling, interfaces for high-speed digital protocols
such as FireWire and Fibre Channel, large-scale multi-channel data systems for
structural and excitation testing, and high-speed image recognition and data stream
processing.
The AXIe bus technology, an open standard for modular test equipment, has
been proposed in recent years. It inherits the advantages of modular architecture of
Advanced TCA and refers to the existing standards of VXI, PXI, LXI and IVI. It has
larger circuit boards, a higher data transmission rate, greater power and better heat
dissipation than VXI or PXI. Moreover, it provides a flexible modular platform with
a long life cycle, high performance and strong scalability. Its goal is to create an
ecosystem composed of various components, products and systems, promote the
development of general-purpose instruments and semiconductor testing, provide
maximum scalability and meet the needs of various platforms including general
rack-and-stack systems, modular systems, semiconductor automatic test
equipment, etc. [8].
Software Defined Radio (SDR), known as the third revolution in the information
field, is the most representative technology of the Instrument 2.0 Era. The idea of
software radio is to take hardware as a generic platform for wireless communications
and to implement as many wireless and personal communications functions as
possible in software.
In the 1990s, as a new concept and system of wireless communications, software
radio began to receive attention both in China and abroad. It gives communications
systems good universality and flexibility, and makes system interconnection and
upgrading easy. The technology turned out to be a major breakthrough in the
field of radio and was primarily applied in military field. The basic idea is to let all
the tactical radios in use be based on the same hardware platform, install different
software to form different types of radio, complete functions of different natures,
and get the software programmable capability [9, 10]. At the beginning of the
twenty-first century, with the efforts of many companies, the application of soft-
ware radio was transformed from military to civilian fields, such as multi-band
multi-mode mobile terminals, multi-band multi-mode base stations, WLAN and
universal gateways, etc. The software defined radio technology, as a new wireless
communications system structure with strong flexibility and openness, has naturally
become the strategic base of global communications [11].
The increase in user demand, the shortage of spectrum resources and the huge
attraction of new business have brought a lot of pressure on equipment
manufacturers and operators and have driven the updating of wireless technology
standards. As there are significant differences between the various wireless
technology standards and systems, existing hardware-based wireless
communications systems find it difficult to adapt to this situation, which has led to
the concept of Software Defined Radio and to updated technology and equipment.
Updating technology and equipment usually wastes existing equipment and
investment, whereas software defined radio can avoid the inconvenience of
technology updates through user-defined software.
The architecture of software radio is different from that of traditional radio.
In conventional wireless communications systems, the RF part, up/down
frequency conversion, filtering and baseband processing are all in the analog form.
A certain band or type of modulation of communications system corresponds to a
certain specialized hardware structure. The low-frequency part of the digital radio
system adopts the digital circuit, but the RF and IF parts are still inseparable from the
analog circuit [10]. Compared with conventional radio systems, software radio
realizes all of the communications functions by the software. The key idea of
software radio is to construct a standardized, modular universal hardware platform,
to realize various functions by software, and to make the broadband A/D and D/A
convert to IF, near the RF side of the antenna, and strive to carry out the digital
processing from IF. In addition, different from digital radio using digital circuits,
software radio adopts high-speed DSPs/CPUs. A software radio system must run a
number of CPUs in parallel, and the digital signal processing data must be
exchanged at high speed, which requires the system bus to have a very high data
transmission rate. DSP devices replace dedicated digital circuit boards so that the
hardware structure and the system functions are relatively independent of each other.
Modular design is also employed to give the platform openness, scalability and
compatibility. Based on the relatively universal hardware platform, software radio
realizes different communications functions by loading different software. It can
rapidly change channel access methods or modulation modes and adapt to different
standards by utilizing different software, thus forming the highly-flexible multi-
mode terminals and multi-functional base stations to achieve interconnections.
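To make concrete the idea that a software radio changes its modulation mode purely by loading different software, the sketch below maps the same bit stream to either QPSK or 16-QAM symbols depending on a configuration string; it is a simplified, generic illustration of software-defined baseband processing, not the implementation of any particular radio.

import numpy as np

# Simplified software-defined mapper: the "radio personality" is chosen by a
# configuration string instead of by dedicated hardware.
def modulate(bits, scheme="QPSK"):
    bits = np.asarray(bits, dtype=int)
    if scheme == "QPSK":
        # Gray-mapped QPSK, 2 bits per symbol
        b = bits.reshape(-1, 2)
        return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
    if scheme == "16QAM":
        # 16-QAM, 4 bits per symbol, levels in {-3, -1, 1, 3} / sqrt(10)
        b = bits.reshape(-1, 4)
        i = (1 - 2 * b[:, 0]) * (3 - 2 * b[:, 1])
        q = (1 - 2 * b[:, 2]) * (3 - 2 * b[:, 3])
        return (i + 1j * q) / np.sqrt(10)
    raise ValueError("unsupported scheme")

if __name__ == "__main__":
    payload = np.random.randint(0, 2, 64)
    for cfg in ("QPSK", "16QAM"):        # same hardware path, different software
        print(cfg, modulate(payload, cfg)[:2])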
In practical applications, software radio requires a very high speed of hardware
and software processing. Due to the limitation of the technological level of hard-
ware, the concept of pure software radio has not been widely used in practical
products. The SDR technology based on the concept of software radio has attracted
more and more attention. A software defined radio is a system that must be
reprogrammable and reconfigurable, so that the same equipment can be used under
various standards and in multiple frequency bands and realize a variety of functions.
It not only uses programmable devices to implement digital baseband signal
processing, but also allows the RF and IF analog circuits to be programmed and
reconfigured, providing the abilities of reprogramming, reset, providing and
changing services, supporting multiple standards, and intelligent spectrum
utilization.
The software defined radio technology is mainly realized based on single-chip
FPGA or DSP. However, with the continuous increase in the volume of collected
data, signal processing capabilities of software radio platform also need to be
further enhanced. In order to improve modern signal processing capacity, new
technologies have also been proposed; for example, software defined radio based
on Compact PCI (CPCI) has attracted many teams and manufacturers to develop
software radio platform systems in line with the CPCI standard [12].
New testing methods and instrument concepts emerge one after another.
High-speed communications test instruments featuring modularity, software
definition and integration continue to spring up; they complement and become
closely integrated with traditional methods, continuously expanding their
applications and forming a technical highlight in communication-centered
application fields.
As mentioned in [7], VXI was the earliest bus which introduced the concept of
modular instrument. It successfully reduced the size of the traditional instrument
system and improved the level of system integration. It was mainly used to meet the
needs of high-end automated test applications, and has been successfully applied in
military and aviation testing, manufacturing testing and so on. Like the terminals
under test, test systems are developing from a hardware-centric, single-purpose,
limited-function stage to a software-centric, multi-purpose and virtually
unlimited-function stage. The modular instrument technology is the epitome of this
technological progress.
Traditional desktop instruments generally have only a single function, which is
operated manually, step by step. Modular instruments, on the other hand, are highly
flexible plug-in computer boards, tightly integrated with computers and suited to
PCI and PXI platforms. The functions of a modular instrument are similar to those
of a traditional desktop instrument, but better. The modular instrument is an important part of the SDR technol-
ogy. Being defined by software enables it to define the measurement and analysis in
real time, breaking the rule that the traditional instrument manufacturer defines the
fixed functions. The users can realize the required test tasks within a short time
through the functions of software-defined modular instruments. They can apply
customized data analysis algorithms, create a customized user interface, go beyond
merely displaying the fixed test results of traditional instruments, and incorporate
more of the test engineers' ideas and testing intentions, thus giving engineers more
initiative. The difference between modular instruments and traditional instruments
mainly comes from the progress of the bus technology, with which different
modular instruments share a power supply, a chassis and a controller. The bus
can guarantee the data transmission channel between the modules. Through the bus
control, different functions of the modular instruments can be integrated to reduce
the volume of the instruments and simplify the testing complexity.
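The sketch below shows, in a deliberately simplified and hypothetical form, how a software-defined measurement can be composed from a modular digitizer object and a user-defined analysis routine; the class and function names are invented for illustration and do not correspond to any vendor's driver API.

import numpy as np

# Hypothetical modular digitizer: in a real PXI system this would wrap a vendor driver.
class ModularDigitizer:
    def __init__(self, sample_rate_hz):
        self.sample_rate_hz = sample_rate_hz

    def acquire(self, num_samples):
        # Placeholder acquisition: a noisy 1 kHz tone instead of real hardware I/O.
        t = np.arange(num_samples) / self.sample_rate_hz
        return np.sin(2 * np.pi * 1e3 * t) + 0.01 * np.random.randn(num_samples)

# User-defined analysis: the measurement is defined in software, not by the vendor.
def rms_level_db(samples):
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

if __name__ == "__main__":
    scope = ModularDigitizer(sample_rate_hz=1e6)
    data = scope.acquire(10_000)
    print(f"user-defined RMS level: {rms_level_db(data):.2f} dB")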
With the aid of modular instruments, engineers can choose different kinds of
modular instruments according to their measurement needs and set up a test system.
Due to the adoption of the software defined modular structure, system measurement
can be realized through the corresponding software configuration operation. The
service cycle of modular instruments can be increased through software upgrading.
Different test requirements can be achieved through software programming. The
life of the instruments can be ensured through repeatable and flexible use of
modular instruments, relieving the pressure of increasingly complex equipment
and technology on the test time. Especially, after the introduction of new wireless
standards, the engineers using traditional instruments need to wait for suppliers to
develop a corresponding desktop instrument before testing the standards. However,
with the new software defined radio technology and modular instruments
technology, engineers are able to test wireless standards even during the
standard-setting process, using universal modular instruments together with
user-defined wireless protocols and algorithms. Owing to these characteristics,
which distinguish it from traditional
instruments, the modular instrument technology has become the representative
technology of the Instrument 2.0 Era.
On account of advances in computer and bus technology and their penetration into
instrumentation, hybrid systems composed of different bus technologies and
instruments are gradually appearing in the testing field. Users of such hybrid
systems can not only enjoy the high speed and flexibility of modular instruments,
but also use existing discrete instruments for some special measurements.
In a hybrid test system, different components of multiple automated test plat-
forms are integrated in a system, including PXI, PCI, GPIB, VXI, Universal Serial
Bus (USB), Local Area Network (LAN) and LXI, and other different buses. Its
emergence is not a coincidence. Although bus technology keeps advancing and PXI
has been widely recognized and used, other bus technologies have not completely
disappeared. Instruments based on other buses, such as capture cards built on the
Industry Standard Architecture (ISA) bus and GPIB control cards, are still trusted
and used by many engineers and manufacturers, and these buses cannot be
completely replaced by PXI. Moreover, from the test engineer's point of view, when
designing a test system, there are various
factors to balance. Now the products are becoming more and more complex, the
requirements for the mixed signal test are also getting higher and higher, so it is
necessary to use the advantages of different bus test platforms and build a hybrid
test system to meet the testing demands.
In the example equipment system shown in Fig. 2.1, the hardware employs GPIB,
PXI, LXI and other test buses to form a hybrid system. What the test engineers must
do is not simply connect several instruments: to make a hybrid system composed of
different products from different manufacturers over different buses work normally
and smoothly, they also need to consider the software architecture of the hybrid
system. Having a unified software
architecture can greatly simplify the complexity of system programming and avoid
the problem of compatibility between different instruments.
For example, NI offers a unified software architecture composed of a measurement
and control services layer and an application development layer. The architecture
provides the software backbone for hybrid systems: if the hardware is the skeleton
of the hybrid system, then the software is the soul that controls the whole system.
The measurement and control services layer includes flexible device drivers and is
used to connect the software and hardware and to simplify the hardware-configuration
part of the test code. The company also promoted the Virtual Instrument Software
Architecture (VISA) standard, which provides a high-performance API for
configuring and controlling instruments across different buses.
Fig. 2.1 Hardware components of a hybrid-bus test system [7]
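As a concrete illustration of how such a driver/API layer is typically used from application code, the sketch below queries an instrument's identity through PyVISA, an open-source Python binding to the VISA API; the GPIB resource string is a made-up example and would differ for a real GPIB, USB or LXI device.

# Minimal VISA usage sketch (requires the pyvisa package plus a VISA backend).
# The resource string below is a placeholder example, not a real instrument address.
import pyvisa

def identify_instrument(resource="GPIB0::14::INSTR"):
    rm = pyvisa.ResourceManager()
    inst = rm.open_resource(resource)   # the same call works for GPIB, USB or TCPIP/LXI
    inst.timeout = 2000                 # milliseconds
    return inst.query("*IDN?")          # standard SCPI identification query

if __name__ == "__main__":
    print(identify_instrument())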
Different from the long transition from Instrument 1.0 to 2.0, the 3.0 Era arrived
soon after Instrument 2.0, thanks to the rapid development of software and network
technologies.
The Instrument 3.0 Era reflects the characteristics of openness, collaboration and
service. Generally, there are two kinds of definitions for the 3.0 Era. From the
standpoint of the test and measurement industry, Instrument 3.0 is defined as
follows: in addition to the function and performance of the equipment itself,
technical services are delivered together with the instrument. From the technical
and academic point of view, the Instrument
3.0 can be defined as follows: in addition to the characteristics of Instrument 2.0,
features like network-based combination, cross-layer test and remote test have
emerged. Cloud testing and joint software/hardware multi-user system simulation
tests are examples of applications that confirm the characteristics of the Instrument
3.0 technology.
From the above description of the development, key technologies and challenges
of testing technology, it is not difficult to see that, in order to cope with the
ever-changing challenges of new standards and technologies, engineers face many
difficulties in testing wireless devices and increasingly complex testing work.
Although testing techniques have developed in software-based, modular and
intelligent directions, they still cannot fully meet the needs of rapid technical
development. In this section, we summarize the main development trends of
wireless testing technology based on the current status of testing technology and
the development needs of wireless communications.
The instrument system for test and measurement has been developing towards the
software-centered modular system. Accordingly, users can integrate test into the
design process in a faster and more flexible way, shorten the development time, and
improve the test efficiency. While software defined instruments have become
increasingly popular and widely used, the synthesis of instruments has also been
proposed. Traditional instruments, however, are also advancing along with bus
technology. Therefore, the emergence and widespread adoption of modular
instruments and software defined radio technology cannot immediately replace
traditional test instruments; renewed traditional test instruments will remain
competitive, and the two will continue to coexist.
In the testing field, many-core parallel processing and real-time FPGA technology
will become important supports for the development of testing technology and will
lead its development trend.
Over the years, the microprocessor has developed from single core to multiple
cores. Commercial equipment with dual-core or even eight-core processors is now
everywhere. Owing to software defined instruments, users can immediately enjoy
the huge performance improvement that multi-core processors bring to automated
test applications. As multi-core alone is no longer sufficient, many-core processors
have come into being. Moore's law states that the number of transistors on a chip
doubles roughly every two years, and processor manufacturers can use these
additional transistors to create more cores. Today, desktop, mobile and ultra-mobile
computers commonly use dual-core and quad-core processors, and servers usually
use ten or more cores. More cores are being packed into smaller, lower-power
packages.
The number of cores in a server processor is increasing drastically to more than ten,
and processors are developing towards many-core designs. Supercomputers give us
a glimpse of what future processors will look like: the fastest computers in the
world use millions of cores. Although these multi-megawatt supercomputers are
not suitable for test stations on the production floor, the trend of integrating more
functions into less space and lower-power supplies means that many-core
processors will eventually shrink to the size of ordinary processors. An example of
this tendency is the Intel Xeon Phi class of coprocessors, which provide up to
61 cores and 244 threads of parallel execution.
Market demand is driving improvements in graphics card performance and
increases in their core counts. Although test and measurement applications
basically do not use the graphics functions, new processors with more cores can
provide higher performance to test applications specifically designed for higher
core counts. The migration from single-core and multi-core to many-core
processors is already very common in the field of high-performance computing.
As more advantages of parallel processing are found, many-core processors will
continue to be adopted by more routine applications.
The emergence and development of multi-core/many-core processors is not only an
inevitable choice in the development of the semiconductor industry, but also a very
reasonable architecture. This architecture is well adapted to the current computing
environment and to Internet application modes. Although there are still many
problems to be solved, ranging from the internal structure, such as the network on
chip, cache organization and coherence protocols, to the external environment,
including the programming model and key system software, there is no doubt that,
as the core component of future computing platforms, the multi-core/many-core
processor is entering its best period of development [13]. The development of
multi-core/many-core technology will surely promote the development of the test
industry.
The functions of FPGA technology will become more and more prominent in the
future. In many test systems, FPGAs bring advantages at multiple levels, including
the following:
3. The FPGA's powerful functions can reduce the cost and size of RF test
equipment.
4. Compared with traditional firmware- or software-led tests, using an FPGA
significantly shortens the test time, so that large-scale industrial applications
become possible.
The advantages and application fundamentals of FPGAs ensure their bright future
in testing. The test industry has developed and applied products incorporating
FPGAs. For example, the R-series data acquisition and FlexRIO product families
provided by NI integrate a high-performance FPGA into readily available
Input/Output (I/O) boards that users can customize and reconfigure repeatedly
according to the application. At the same time, with the easy and intuitive graphical
programming of LabVIEW FPGA and without the need to write low-level VHDL
code, users can rapidly configure and program FPGA functions for test automation
and control applications [15], simplifying the application complexity of the FPGA
technology.
The challenges of testing technology point to the future of test instrumentation.
From the review of the evolution and challenges of test instruments, we can see
that, in addition to the above trends, an upstream-and-downstream test system will
form, i.e., a software-centered test ecosystem.
How to keep up with the rapid development of many communications technol-
ogies and standards, how to improve the testing budget cost performance, how to
flexibly redefine testing requirements and methods, how to effectively use the
multi-core technology, how to use real-time processing technology to improve
the test throughput: the answers to all these problems point to a software-centric
solution [15].
For decades, the electronic and communications industry has been pursuing the
ideal state of mutual improvement of design and testing. In view of the difference
between design and testing, this goal has not been reached yet. In the design phase,
the latest Electronics Design Automation (EDA) software is applied to the system
level design, while the testing area remains somewhat independent and lagging.
Therefore, for the latest software-centered electronic communications equipment,
a new test solution often needs to be found.
Adopting system-level approach, integrating the concepts of design and test,
and expanding software architecture to FPGA is one of the effective means to
balance the development of the two areas and improve the efficiency of
communications testing. The way to integrate design and test is to deploy the
designed IP cores to both the Device Under Test (DUT) and the integrated test
platform. This deployment process is called "IP to the Pin" [16], because it brings
tester-defined software IP as close as possible to the hardware I/O pins of the
integrated test platform. Such software IP can include data acquisition, signal
generation, digital protocols, mathematical operations, RF and real-time signal
processing, etc. Whether measured by the throughput or by the power consumption
of a single device's raw data processing, the FPGA outperforms digital signal
processors, traditional processors, and even graphics processors [17].
The specific implementation of the IP to the Pin approach can be expressed as a
V diagram: each phase of the design has a corresponding verification or testing
phase. By sharing IP, the design and test teams can move along the two sides of the
V diagram, from top-level modeling and design down to bottom-level
implementation, carrying out the corresponding test at each stage (Fig. 2.2).
Fig. 2.3 Heterogeneous computing architecture for complex mobile communications testing (software environment of heterogeneous computing; multi-functional integrated bus for connection and triggering; multi-core CPU for test behavior control and analysis; GPU server for online/offline algorithm analysis; cloud computing for offline data analysis and storage; channel simulator)
Such an architecture supports, for example, simultaneous spectrum sensing and
detection, and collaborative testing of the physical (PHY) layer and the Media
Access Control (MAC) layer.
Take a MIMO RF test system using the heterogeneous computing architecture as an
example: the CPU controls program execution, the FPGA performs online
demodulation, the GPU computes the multi-antenna test parameters, and finally all
processing results are stored on a remote server. Figure 2.3 shows such a
heterogeneous computing architecture applied to 5G communications testing.
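The division of labor just described can be sketched in software as a simple orchestration pattern in which the CPU dispatches work and collects results. The sketch below uses plain Python with placeholder processing functions purely to illustrate the pattern; in a real system the FPGA and GPU stages would call the corresponding driver or CUDA/OpenCL libraries rather than NumPy.

from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Placeholder stages standing in for real FPGA/GPU processing.
def fpga_online_demodulation(iq):
    return iq * np.exp(-1j * np.angle(iq[0]))         # toy phase alignment

def gpu_multiantenna_metrics(symbols):
    return {"mean_power_db": 10 * np.log10(np.mean(np.abs(symbols) ** 2))}

def store_on_remote_server(result):
    print("stored:", result)                           # stand-in for a network upload

def process_capture(iq):
    # Per-capture pipeline: demodulate, then compute the test metrics.
    return gpu_multiantenna_metrics(fpga_online_demodulation(iq))

def run_tests(captures):
    # The CPU orchestrates; independent captures are processed in parallel.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for future in [pool.submit(process_capture, c) for c in captures]:
            store_on_remote_server(future.result())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    run_tests([rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
               for _ in range(4)])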
Along with the rapid development of high bandwidth and high data rate of 5G
mobile communications, the combination of the heterogeneous computing archi-
tecture and multi-core parallel programming technology will be an indispensable
technology with which 5G testing can handle massive data processing and improve
parallel testing.
The rapid evolution of wireless standards and devices will continue to challenge
the testing industry, and will meanwhile promote changes in testing technology.
The popular WLAN (802.11) protocols also drive an increase in the number of RF
channels, supporting multiple antennas and multiple communications frequencies.
In addition to the increasing RF channels, the IQ baseband channels of these
transceivers are also increasing. All of this places higher requirements on the test
system's ability to handle mixed baseband signals. The independent development
of digital and analog technologies also drives changes in testing, which brings
challenges to the test engineers.
Due to the various applications, the current devices usually integrate some parts
of different technologies like the cellular wireless technology, the short distance
wireless technology and the GPS technology. Therefore, it is not rare to have
multiple RF channels on a single chip. The brand-new structure of new testing
instruments should be able to handle the interface test efficiently. At the same time,
the coexistence demand of LTE, Wideband Code Division Multiple Access
(WCDMA)/ High-Speed Packet Access (HSPA), Global System for Mobile com-
munication (GSM) and other networks promotes the development of multi-mode
base stations and multi-mode terminals, and the corresponding test solutions of
multi-mode measurements must be specially designed. Unlike 2G and 3G tests,
4G testing not only covers the physical layer, but also extends to MAC layer
measurements in order to obtain a more comprehensive understanding of product
performance. This presents new challenges for wireless communications testing,
and wireless testing technology needs to adapt accordingly. In addition, high-speed
data transmission systems are moving towards higher frequencies, and
high-frequency testing is also an inevitable challenge.
In order to improve its service support capability, 5G will make new breakthroughs
in wireless transmission technology and network technology. In terms of wireless
transmission, technologies that can further exploit the potential for spectrum
efficiency improvement will be introduced, such as advanced multiple access
technology, multi-antenna technology, modulation and coding technology, new
waveform design technology and so on. Among them, wireless transmission based
on large-scale MIMO (or massive MIMO) is likely to improve spectrum efficiency
and power efficiency by an order of magnitude over 4G. This section
introduces the development and application of the MIMO technology, and dis-
cusses the possible challenges of the testing technology brought by the massive
MIMO technology in 5G.
As an effective means to improve system spectrum efficiency and transmission
reliability, multi-antenna technology has been applied to a variety of wireless
communications systems, and its evaluation must cover the transmitting and
receiving performance of the wireless system equipment. All of the above require-
ments need a large number of Over The Air (OTA) tests for support. The fifth
generation mobile communications technology research in China has defined that
the number of cooperative antennas on the future 5G base station side should not
be less than 128 [18]. With massive MIMO, the number of antennas rises to the
hundreds, far more than in traditional systems. Factors such as the number of test
channels, the synchronization and isolation among multiple channels, and the
storage of multi-channel data present severe challenges to massive MIMO test
implementation.
When validating the characteristics of MIMO devices, or building non-product
MIMO systems such as radar and beamforming prototypes, a multi-channel RF
architecture is needed. From the earlier 2 × 2 MIMO systems to the present
8 × 8 MIMO systems and the future massive MIMO, scalability has become the
key requirement for the next generation of RF test systems. Testing a MIMO
wireless device does not always require a multi-channel architecture, because a full
characterization across the spectrum is not always needed; however, MIMO itself
adds test channels. In order to implement parallel testing of multi-protocol wireless
devices, engineers need to upgrade existing RF equipment to more channels at low
cost, and it should be flexible enough to measure multiple
frequency bands. According to the above requirements, it can be seen that the
new generation of RF devices should have a fully parallel architecture of hardware
and software, and have advanced synchronization. In addition, the new RF devices
must be ready to use and provide higher synchronization accuracy, not just
synchronization based on a reference clock (usually 10 MHz) and a trigger. The
reason is that, although traditional synchronization methods can ensure synchronous
acquisition of the signals, they cannot ensure signal phase synchronization. The
software part is even more important in the whole test architecture, because a great
deal of computation is required to handle the many different wireless standards.
Modern software architectures can implement parallel data streams, which allow
one or more processing units to be dedicated to each RF channel.
There are general-purpose parallel processing architectures on the current market,
including multi-core processors, multi-threading, many-core processors and
FPGAs. There are also other promising technologies, such as the Intel Turbo Boost
technology used in the Intel Nehalem generation, which automatically allows the
processor to run above its base operating frequency as long as it stays below the
rated power and the current and temperature limits. In order to fully utilize these
processor technologies, engineers need to apply parallel programming techniques
at the algorithm level and the application software level, such as task parallelism,
data parallelism and pipelining. The multi-channel test archi-
tecture reduces the total test time, increases the test throughput, and improves the
performance of the instruments. However, the flexibility of the architecture is also
very important. For example, the MIMO configuration is a typical dynamic con-
figuration, and the phase and amplitude of each transmitter can be used to optimize
the performance and direction of signals. Every addition of a MIMO transmitter
will make the software complexity grow in an exponential way.
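The data-parallel pattern mentioned above can be sketched as follows: the same analysis routine is applied to every RF channel's capture in a separate worker process, so adding channels spreads the work across processor cores. The channel data and the simple power-spectrum metric are placeholders chosen only to keep the example self-contained.

from concurrent.futures import ProcessPoolExecutor
import numpy as np

def channel_metric(iq):
    """Placeholder per-channel analysis: peak of the power spectrum in dB."""
    spectrum = np.abs(np.fft.fft(iq)) ** 2 / len(iq)
    return 10 * np.log10(spectrum.max())

def analyze_channels(captures):
    # Data parallelism: one worker per RF channel capture.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(channel_metric, captures))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    captures = [rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
                for _ in range(8)]           # e.g. an 8-channel MIMO capture set
    print(analyze_channels(captures))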
Similarly, the emerging wireless technologies, for example, the MIMO antenna
systems, have a great impact on the design of the transceiver, which will also affect
the RF devices. The future multi-channel wireless system should be based on the
low-cost architecture, in which the signals and the software bear the characteristics
of parallelism.
Therefore, the application of massive MIMO technology in 5G brings
unprecedented challenges to traditional RF test technology, in terms of both testing
methods and testing devices.
Heterogeneous computing is also one of the main research focuses. At present, the
heterogeneous computing architecture is a technology with great application
potential for 5G. Along with the rapid development of the high bandwidth and high
data rates of 5G mobile communications, the combination of the heterogeneous
computing architecture and multi-core parallel programming technology will be an
indispensable technology with which 5G testing can handle massive data
processing and improve parallel testing.
2.4 Summary
This chapter focuses on the theme of testing technology evolution to explain the
importance of the testing technology in the industry chain, and guides the readers to
review the whole development process of the testing instrument industry. Driven by
developments in computer and microprocessor technology, traditional testing
instruments have been continuously evolving in bus-based, intelligent, distributed,
modular and networked directions. The improved testing instruments are applied to
wireless communications, which facilitates testing in the communications field,
where system-level tests are common. The development of testing technology has
moved away from the stacked discrete-instrument approach previously needed for
system-level tests, reduced the test workload, and reduced human error. Although
test instruments have made great progress, we still need to consider the various
requirements of the wireless field, especially those of future wireless
communications technologies. Constantly increasing user demands pose ever
higher requirements on communications technology, which is itself developing
rapidly; all of this challenges the testing technology. This chapter has analyzed the
possible challenges as well as the future development trends of testing technology.
With constantly emerging challenges, testing technology will surely make
corresponding progress and, on the basis of existing technology, grow in the
healthy direction of a test ecosystem.
References
1. X. You, Z. Pan, X. Gao, S. Cao and H. Wu. 5G Mobile Communication Development Trend and Some Key Technologies. China Science: Information Science, 2014, 44(5): 551-563.
2. C. Wang, F. Haider, X. Gao, et al. Cellular architecture and key technologies for 5G wireless communication networks. IEEE Communications Magazine, 2014, 52(2): 122-130.
3. E. Larsson, O. Edfors, F. Tufvesson, T. Marzetta. Massive MIMO for next generation wireless systems. IEEE Communications Magazine, 2014, 52(2): 186-195.
4. J. G. Andrews, S. Buzzi, W. Choi, et al. What Will 5G Be? IEEE Journal on Selected Areas in Communications, 2014, 32(6): 1065-1082.
5. IMT-2020 (5G) Promotion Group, 5G Network Technology Architecture, 2015.
The typical 5G scenarios will touch many aspects of future life, such as residence,
work, leisure and transportation, mainly involving dense residential areas, offices,
stadiums, indoor shopping malls, open-air festivals, subways, highways and
high-speed rail. Compared with 3G/4G systems, 5G adds some new
application scenarios, which have diversified features like ultra-high traffic volume
density, ultra-high connection density and ultra-high mobility. A variety of tech-
nologies are used to serve the end users, such as massive MIMO, millimeter wave
(mmWave) communications, ultra dense networks, and D2D, etc. The application
of these new technologies and the requirements for 5G mobile communications
system present new challenges to the wireless channel models. The 5G channel
models should support a wide range of propagation scenarios, higher frequencies
and larger bandwidths, as well as larger antenna arrays, and should keep consistency
in space, time, frequency and across the antenna array, as mainly reflected in the
following aspects [1]:
1. Support a wide range of propagation scenarios and diverse network topologies
The vision of 5G is that anybody and anything can access information and share
data with each other whenever and wherever they are. The IMT-2020 (5G)
promotion group of China has interpreted this as "Information a finger away,
everything in touch" [2]. Current cellular mobile communications networks serve
static and mobile users with fixed base stations (BS), so traditional channel models
were developed to serve this type of application. Under the 5G architecture,
however, the network topology should support not only cellular networks, but also
device-to-device (D2D), machine-to-machine (M2M) and vehicle-to-vehicle (V2V)
communications, as well as the fully interconnected network. Accordingly, the 5G
channel models should support mobile-to-mobile links and networks.
In D2D/V2V communications, the user antenna height is generally as low as
1-2.5 m, which limits the coverage to less than that of an ordinary cellular system.
Besides, dense scatterers exist around both the transmitter and the receiver, and the
stationarity interval of the channel is short due to the mobility of both ends. All of
these are unique characteristics of D2D/V2V transmission. Many channel
measurement campaigns have been carried out for D2D/V2V, and several
preliminary models have been presented. For example, METIS (an EU 5G R&D
project) proposed a D2D/V2V model with several scenarios [1], 3GPP proposed a
preliminary D2D channel model [3], and MiWEBA (an EU-Japan joint project)
proposed a D2D model in the 60 GHz mmWave band [4].
2. Support higher communications frequency and larger bandwidth
5G may operate in a wide frequency range, from 350 MHz to 100 GHz. Taking
into account the available bandwidth and the communications capacity demand, the
bandwidth of a 5G system may be higher than 500 MHz, or even 1~2 GHz or more.
Thus, the 5G channel models should also meet such requirements. In the 3GPP
high-frequency channel model, if the bandwidth of a channel exceeds c/D (where
c is the speed of light and D is the antenna aperture size), then such a bandwidth is
referred to as a "big bandwidth" and special processing should be applied [6]. The
high frequency (HF) band has come into keen focus for its large available
communications bandwidth. For example, METIS defines its medium- and
high-priority frequency bands for development, including 10 GHz, 28~29 GHz,
32~33 GHz, 43 GHz, 46~50 GHz, 56~76 GHz and 81~86 GHz [5]. The HF band
(6~100 GHz) has many radio transmission characteristics that differ from the band
below 6 GHz (sub-6 GHz) [4]:
Firstly, according to the Friis formula for free-space propagation, propagation in
the HF band undergoes higher path loss (PL) due to the short wavelength. The Friis
formula also shows that, for a fixed physical aperture, the antenna gain is
proportional to the square of the frequency; given a fixed aperture size and transmit
power, higher receive power can therefore be obtained in the HF band than in a low
frequency band [7]. High-gain directional antennas or beamforming should thus be
adopted in the HF band to ensure communications over distances of several hundred
meters. Meanwhile, the antenna also needs to be able to steer its beam towards
the UTs.
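To make this frequency trade-off concrete, the short sketch below evaluates the free-space path loss from the Friis formula at an illustrative sub-6 GHz and an mmWave frequency, together with the gain available from an ideal fixed-aperture antenna; the frequencies, the 200 m distance and the 0.01 m^2 aperture are arbitrary example values.

import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def fspl_db(f_hz, d_m):
    """Free-space path loss from the Friis formula, in dB."""
    return 20 * np.log10(4 * np.pi * d_m * f_hz / C)

def fixed_aperture_gain_db(f_hz, aperture_m2):
    """Gain of an ideal aperture antenna: G = 4*pi*A / lambda^2, in dBi."""
    lam = C / f_hz
    return 10 * np.log10(4 * np.pi * aperture_m2 / lam ** 2)

if __name__ == "__main__":
    for f in (2.6e9, 28e9):                    # example carrier frequencies
        print(f"{f/1e9:4.1f} GHz: FSPL(200 m) = {fspl_db(f, 200):.1f} dB, "
              f"gain of 0.01 m^2 aperture = {fixed_aperture_gain_db(f, 0.01):.1f} dBi")

The extra free-space loss at the higher frequency is largely offset by the additional fixed-aperture gain, which is the argument for directional antennas and beamforming in the HF band.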
Secondly, diffraction in the HF band weakens due to the shorter wavelength; in
other words, the propagation has quasi-optical characteristics, which results in less
rich scattering in the HF band than in the sub-6 GHz band. Under line-of-sight
(LOS) conditions, the received signal is comprised of the LOS path and a few
low-order reflection paths, whereas under non-line-of-sight (NLOS) conditions,
signal propagation may depend mainly on reflection and diffraction. Thus the
channel shows temporal and spatial sparsity. Blockage by humans or vehicles
attenuates the signal significantly. The quasi-optical properties also make it possible
to use ray tracing techniques to assist channel modeling.
Despite the fact that the propagation characteristics of mmWave have been widely
investigated, especially at 60 GHz, many important characteristics, such as blockage
loss, penetration loss, high-resolution angle characteristics, frequency dependence
and outdoor mobile channels, still need further measurement and exploration. In
terms of channel models, several HF channel models (including 60 GHz) have been
proposed [1, 4, 6-12].
3. Support large antenna arrays or massive MIMO
The existing channel models [13, 14] assume a superposition of plane waves
bouncing from the scatterers (i.e. the scatterers are in the far field of the transmitting
and receiving arrays), and assume that the antenna array is small. Across the array
at either end, the arrival or departure direction of the radio waves leads only to small
differences in the phase of the signal while its amplitude remains constant, i.e. the
propagation characteristics across the array elements are similar. When applying
large antenna technologies, including massive MIMO and pencil beamforming, the
number of antennas can reach tens to hundreds, which brings about two impacts, as
shown in Fig. 3.1.
Firstly, the planar wavefront assumption of conventional MIMO channel models is
no longer applicable and should be replaced with a spherical wavefront assumption.
Radio waves propagating as plane waves must conform to the far-field assumption,
i.e. the distance between the transmitter/receiver and the scatterers should be larger
than the Fraunhofer distance [16], R_f = 2D^2/λ, where D is the aperture size of the
antenna array and λ is the wavelength. D, and hence the Fraunhofer distance of the
array, increases with the number of antennas, assuming the antenna element spacing
and λ remain unchanged, i.e. the far field of the antenna system expands. The
receiver may then be in the near-field zone (also known as the Fresnel zone) of the
transmitter, or the scatterers may be in the near-field zone of the transmitter and
receiver. In this case, the wavefront should be modeled as a spherical wavefront
instead of a planar wavefront.
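As an illustrative calculation (the array size and carrier frequency are example values, not taken from the text): for a uniform linear array of 128 elements with half-wavelength spacing at 3.5 GHz,

\[
\lambda = \frac{c}{f} \approx \frac{3\times10^{8}}{3.5\times10^{9}} \approx 0.086\ \text{m},\qquad
D \approx 127\cdot\frac{\lambda}{2} \approx 5.4\ \text{m},\qquad
R_f = \frac{2D^{2}}{\lambda} \approx \frac{2\,(5.4)^{2}}{0.086} \approx 6.9\times10^{2}\ \text{m},
\]

so users and scatterers within several hundred meters of such an array fall inside its near field, which is why the spherical wavefront model becomes necessary.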
Fig. 3.1 Non-stationary and near field effect in a massive MIMO system (BS antenna array, scattering clusters, and user terminals UT 1-UT 3) [15]
Secondly, researchers from Lund University in Sweden have verified through
measurements that a given scattering cluster in a massive MIMO system does not
affect all the antennas in the array; that is to say, each antenna of the array sees a
different set of scattering clusters affecting its signal propagation. In a traditional
MIMO system, the scattering clusters that affect signal propagation change only
with time, due to the relative movement among the transmitter, receiver and
scatterers, whereas in a massive MIMO system the scattering clusters also change
along the axis of the antenna array.
Additionally, as large antenna arrays have high spatial resolution and can
distinguish closely located users in the horizontal and vertical directions, the
channel model must provide fine three-dimensional (3D, including both azimuth
and elevation) angle information. The angular resolution should be at least one
degree or finer. The model should support a variety of antenna arrays, such as linear,
planar, cylindrical and spherical arrays. Therefore, for massive antenna arrays, we
need to accurately model the azimuth and elevation angles of departure and arrival
for each scattered multipath, as well as the distances or locations of the
first-bounce/last-bounce scatterers for spherical wavefront modeling. So far, several
channel models have been proposed according to the aforementioned
requirements [17, 18].
4. Support spatial consistency and dual mobility
Spatial consistency (continuity) has two meanings: one is that the channels of
closely located links are highly correlated, and the other is that the channel should
evolve smoothly, without discontinuity or interruption, when the transmitter/receiver
moves or the scenario switches. The latter is commonly called the ability of dynamic
simulation. The 5G communications network will contain a variety of link types
coexisting in the same area, from traditional macrocells, microcells, picocells and
femtocells to nomadic BSs and D2D connections between UTs in the future. All of
these require 5G channel models to support spatial consistency. At present, most of
the widely used channel models [13, 14] consider the spatial consistency of
large-scale parameters (LSPs), such as Path Loss, Shadow
Fig. 3.2 Scatterer location and AOA update in dynamic simulation (last-bounce scatterer, Tx position, original and updated Rx locations along the UT track) [19]
Fading (SF), Delay Spread (DS), Angular Spread (AS) and Ricean K-factor.
However, the spatial consistency of small-scale parameters (SSPs) is considered
inadequately. Moreover, the LSPs are generated randomly per snapshot, assuming
that the scattering environments of different snapshots are statistically independent
and that the scatterers seen by closely located mobile devices are unrelated, which
exaggerates the simulated performance of some multiple-antenna technologies such
as multi-user MIMO (MU-MIMO) [1]. In addition, most traditional channel models
consider the situation in which one end of the link is fixed and the other end can
move arbitrarily; a new Doppler model should be introduced to support the
bidirectional mobility of D2D/V2V.
With the coexistence and increasing density of links, and with the application of
D2D/V2V, supporting spatial consistency becomes especially important for
wireless channel models. A spatially consistent model can be built by defining the
geometric positions of the first-bounce scatterer (transmitter to scatterer) and the
last-bounce scatterer (scatterer to receiver) of each scattering path. As shown in
Fig. 3.2, the QuaDRiGa [19] channel model library supports dynamic evolution of
the channel caused by the movement of the user terminal (UT). The channel
coefficients are calculated based on the location of the last-bounce scatterer, the
changes in distance and relative angle of arrival (AOA) between it and the mobile
terminal, and the distance between the mobile station and the BS.
5. Support high mobility
Typical 5G high-mobility scenarios involve mobile-to-mobile communication
(e.g., V2V) and mobile-to-infrastructure communication (e.g., subway, high-speed
rail, etc.). 5G should be able to support mobility speeds above 500 km/h for
high-speed rail, an ultra-high user density of 6 people/m² on the subway, and
millisecond-level end-to-end delay on highways. Channels under high-mobility
conditions have the following characteristics.
Fast time-varying behaviour: different from low-mobility channels, high-mobility
channels are fast time-varying and show non-stationary characteristics; in other
words, the channel can only be regarded as stationary over a shorter period of time.
Large Doppler frequency shift or spectral spread: for either mobile-to-mobile
channels or mobile-to-fixed channels, at least one of the transmitter and the receiver
moves at high speed.
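For a rough sense of scale (the 3.5 GHz carrier is an example value, not specified in the text), the maximum Doppler shift at the high-speed-rail speed quoted above is

\[
f_D = \frac{v f_c}{c} = \frac{(500/3.6)\ \text{m/s}\times 3.5\times10^{9}\ \text{Hz}}{3\times10^{8}\ \text{m/s}} \approx 1.6\ \text{kHz},
\]

and in a mobile-to-mobile link with both ends moving, the contributions of the two ends add, which is one reason a new Doppler model is needed for D2D/V2V.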
Channel modeling can be broadly split into deterministic modeling, stochastic
modeling, or their combination. When detailed environment data (including
man-made objects such as houses, buildings, bridges and roads, as well as natural
objects such as foliage, rocks and the ground) are available to a sufficient degree,
wireless propagation is a deterministic process whose characteristics can be
predicted at every point in space. This is the basis of deterministic channel models,
and ray tracing is one such method. Stochastic channel modeling, in contrast,
obtains huge measurement data sets containing the underlying statistical properties
of the wireless channel by conducting channel measurements in a large variety of
locations and environments; path parameters are then extracted from the data and
used to represent the channel parameters statistically. Stochastic channel models
can be roughly divided into several categories: the Geometric-based Stochastic
Channel Model (GSCM), the Correlation-based Stochastic Channel Model
(CSCM), the extended Saleh-Valenzuela (SV) stochastic channel model, and the
Ray Tracing based Channel Model. GSCM is also known as the Geometry-Based
Stochastic Channel Model (GBCM) in some literature. There are two types of
GSCM: one is constructed based on channel measurements, and the other is based
on regular geometric shapes of scatterers. Their model structures are the same, but
the former extracts the path parameters from measured data and obtains the
statistical characteristics through data analysis, while the latter places emphasis on
deriving the autocorrelation functions, cross-correlation functions and other
statistical characteristics of the channel in the spatial, temporal and frequency
domains.
Channel parameters are usually divided into Path Loss (PL), large-scale parameters
(LSPs, such as shadowing, delay spread, angular spread, etc.) and small-scale
parameters (such as delay, angle of arrival and departure, etc.), which jointly reflect
the channel fading characteristics. Path loss is usually expressed in one or two
formulas with a set of numerical parameter values, reflecting its relationship with
the transmission environment, distance, frequency and so on. Large-scale
parameters can be regarded as statistical averages over a channel segment (or
quasi-static channel area), within which the LSPs or their probability distributions
do not change significantly. The length of a channel segment depends on the
propagation environment and is about dozens of wavelengths. Small-scale
parameters describe the statistical characteristics of the path parameters within a
wavelength range.
Path Loss and Shadowing Model
The Friis formula gives the signal propagation model in free space. For actual
scenarios, a more general path loss model is constructed by introducing a path loss
exponent that depends on the environment. For the LOS condition, a common path
loss model can be written as [12]
PLdB 20log10 4f =c 10nlog10 d=1m X 3:1
This is called Close-in Reference short (CI) model [20], where f is the carrier
frequency (Hz), c is the speed of light, d is the distance between transmitter and
receiver (m). The first item at right side of the equation is the path loss in free space
at the distance 1 m. Xs is shadowing. The relationships between path loss and
frequency is same with Friis formula in free space. For NLOS, another model is
usually been used,
PLdB 20log10 d 10log10 f X 3:2
This equation is called ABG model [12] (named by three factors Alpha, Beta and
Gamma). It is the extension of FI (Floating Intercept) model [20] to reflect fre-
quency relevance.
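For illustration, the following Python sketch evaluates the two path loss models above. It is only a minimal example: the exponent n, the ABG coefficients alpha/beta/gamma and the shadowing standard deviations used as defaults are placeholders, not values from any particular measurement campaign.

import numpy as np

def pathloss_ci(d_m, f_hz, n=2.0, sigma_db=4.0, rng=None):
    """Close-In (CI) reference-distance model of Eq. (3.1), with d0 = 1 m.
    n and sigma_db are illustrative values, not measured ones."""
    c = 3e8
    fspl_1m = 20 * np.log10(4 * np.pi * f_hz / c)          # free-space loss at 1 m
    shadowing = (rng or np.random.default_rng()).normal(0.0, sigma_db)
    return fspl_1m + 10 * n * np.log10(d_m / 1.0) + shadowing

def pathloss_abg(d_m, f_ghz, alpha=3.5, beta=24.0, gamma=1.9, sigma_db=6.0, rng=None):
    """ABG model of Eq. (3.2); alpha/beta/gamma here are placeholders."""
    shadowing = (rng or np.random.default_rng()).normal(0.0, sigma_db)
    return 10 * alpha * np.log10(d_m / 1.0) + beta + 10 * gamma * np.log10(f_ghz) + shadowing

print(pathloss_ci(100.0, 28e9), pathloss_abg(100.0, 28.0))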
The MIMO channel is a superposition of rays with different powers, delays and angular information, which can be determined by the geometry among the transmitter (Tx), the receiver (Rx) and the scatterers. Rays with similar delays and angles are grouped into one cluster [21], so that the channel can be described by a cluster-ray structure, as shown in Fig. 3.3.
Fig. 3.3 Cluster-ray structure of the MIMO channel: the n-th cluster and its rays (n, m) between the BS antenna array and the moving MS antenna array (figure)
The GSCM modeling method separates the antennas from the propagation channel: the user can combine any type and layout of antennas with the propagation channel model to obtain the transmission channel model. The whole 3D MIMO channel model from the transmitting antenna array to the receiving antenna array is composed of the sub-channels H_{u,s}(t;\tau) from transmitting antenna element s to receiving antenna element u. H_{u,s}(t;\tau) can be expressed as
" #T " #" #
X N XMn Frx, u, rx, n, m n, m n ,m Ftx, s, tx, n, m
H u, s t;
n1 m1 Frx, u, rx, n, m n
, m n, m Ftx, s, tx, n, m 3:3
ej2 rx, n, m d rx, u = ej2 tx, n, m d tx, s = ej2n, m t n, m
The clusters and rays in the channel are parameterized by the path loss, shadowing and other large-scale and small-scale parameters. In order to keep spatial consistency, the channel model considers the correlation of different LSPs at the same station and the correlation of the same LSP at different stations. Cluster parameters (including the number of clusters, arrival rate, power decay rate and angular spread) and intra-cluster ray parameters (arrival rate, average time of arrival, power decay rate, etc.) are important parameters of the model. The existing GSCMs usually assume that the number of clusters and the number of rays within one cluster are fixed, that the arrival intervals of clusters follow an exponential distribution, that cluster powers decrease exponentially with delay, and that cluster angles follow a wrapped Gaussian or Laplace distribution. The rays within one cluster have the same delay and power but different angles.
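The following Python sketch illustrates these generic GSCM assumptions by drawing cluster delays, powers and angles. All numerical values (cluster count, arrival rate, decay constant, angular spreads) are illustrative placeholders, not parameters of any standardized model.

import numpy as np

rng = np.random.default_rng(0)

N_CLUSTERS, M_RAYS = 12, 20          # fixed numbers, as typically assumed by GSCMs
CLUSTER_RATE = 1 / 30e-9             # mean cluster inter-arrival of 30 ns (illustrative)
DECAY_NS = 40.0                      # cluster power decay constant (illustrative)
ASA_DEG = 15.0                       # cluster azimuth spread at arrival (illustrative)

# Cluster delays: exponential inter-arrival intervals, first cluster at delay 0
delays = np.cumsum(rng.exponential(1 / CLUSTER_RATE, N_CLUSTERS))
delays -= delays[0]

# Cluster powers decay exponentially with delay, then are normalised
powers = np.exp(-delays / (DECAY_NS * 1e-9))
powers /= powers.sum()

# Cluster azimuth angles of arrival: wrapped Gaussian around the reference direction
aoa_clusters = np.angle(np.exp(1j * np.deg2rad(rng.normal(0.0, ASA_DEG, N_CLUSTERS))))

# Rays within a cluster share delay and power and differ only by a small angle offset
aoa_rays = aoa_clusters[:, None] + np.deg2rad(rng.normal(0.0, 3.0, (N_CLUSTERS, M_RAYS)))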
GSCM can adapt to different scenarios and different types of antennas easily through rational parameterization. For some new scenarios, such as different high-speed rail environments (plains, hills, etc.), there is no need to build a new model; one only needs to measure the channel of the specific scenario and then update the scenario-related parameters and the cluster and ray parameters in the model. The common GSCM modeling procedure can be divided into three stages, as illustrated in Fig. 3.4 and described below [13].
Stage I: preparation and measurement. Determine the generic channel model formula and the parameters to be measured. Draw up a detailed measurement plan that takes into account all aspects of the measurement: first determine the wireless channel scenarios; select the measurement environment; and determine the antenna heights, the speeds of the transmitter and receiver, the placement of the channel sounder, the measurement routes and link budget, the measurement time, and other general requirements. Then carry out the actual measurements and store the measurement data on a mass memory.
Fig. 3.4 Three stages of measurement-based GSCM modeling: preparation and measurement; data post-processing/analysis yielding parameter PDFs; and parameter generation and channel realizations, which are combined with antenna array information to produce the MIMO channel matrix for simulations (flow diagram)
The frequency response of the system RF chains and the radiation pattern of the antenna system must also be measured; they are used in system calibration, subsequent data processing and simulation.
Stage II: post-processing of the measured data. Different analysis methods are applied depending on the required parameters. Typically, a high-resolution parameter estimation algorithm, such as Expectation-Maximization (EM), Space-Alternating Generalized EM (SAGE), RIMAX or other algorithms, is used to extract path parameters from the measured data. The output of data post-processing could be, e.g., a set of impulse responses, path-loss data, or multidimensional propagation parameters such as gain or polarization gain, delay, AoA, AoD, as well as dense multipath components (DMC) and so on. Rays with similar parameters are grouped into one cluster, so that the parameters are divided into inter-cluster and intra-cluster ones. Statistical analysis is then performed on the post-processed data to obtain probability distribution functions (PDFs) and the corresponding numerical values of the statistical parameters. Goodness-of-Fit (GoF) tools may be used to select the optimal distribution function among several candidates.
Stage III: generation of the simulation model. First, cluster and ray parameters are generated according to their PDFs and statistical parameters. Then the MIMO transmission matrix is obtained by combining the generated parameters with the antenna information. Finally, the time-varying Channel Impulse Response (CIR) is generated for use in simulations.
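As a rough illustration of Stage III, the Python sketch below assembles per-cluster complex tap gains for one Tx/Rx element pair, following the structure of Eq. (3.3) but simplified to a single polarization and with rays collapsed to cluster level; the function and argument names are ours, not part of any standard.

import numpy as np

def gscm_cir(t, delays, powers, aoa, aod, doppler_hz, d_rx, d_tx, wavelength):
    """Single-polarisation sketch of Eq. (3.3): tap delays and complex gains
    of h_{u,s}(t; tau) for one element pair at time t.

    delays, powers, aoa, aod, doppler_hz are per-cluster arrays; d_rx and d_tx
    are the element positions (in metres) along a linear array axis."""
    rng = np.random.default_rng(1)
    phases = rng.uniform(0, 2 * np.pi, len(delays))            # random initial phases
    gains = (np.sqrt(powers) * np.exp(1j * phases)
             * np.exp(1j * 2 * np.pi * d_rx * np.sin(aoa) / wavelength)   # Rx steering
             * np.exp(1j * 2 * np.pi * d_tx * np.sin(aod) / wavelength)   # Tx steering
             * np.exp(1j * 2 * np.pi * doppler_hz * t))                   # Doppler term
    return delays, gains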
The measurement-based GSCM modeling method has been widely recognized and used, for example in COST 259/273/2100, SCM, SCME, WINNER I/II/+, IMT-Advanced, IEEE 802.16m, 3GPP 3D MIMO, 3GPP D2D, QuaDRiGa, mmMAGIC, 5GCM, METIS and so on. Many new 5G channel models are modifications of this basic model, e.g., by introducing a correlation between the cluster number N and the Tx/Rx (u/s) position to model the visibility of scatterers by antennas, by introducing a correlation between \varphi_{tx,n,m} (\varphi_{rx,n,m}) and the s (u) position to support spherical wavefront propagation, or by adding another Doppler shift to characterize dual mobility. The QuaDRiGa model library supports dynamic simulation (time evolution), which relies on determining the locations of the first-bounce and last-bounce scatterers. With the locations of the scatterers, it is easier to model the spherical wavefront and the spatial non-stationarity in Massive MIMO.
Scatterers are randomly distributed in the actual environment, whereas the regular-shape GSCM places scatterers with similar properties onto regular geometric shapes to simplify the analysis of the MIMO channel characteristics. The characteristics of the propagation channel are described according to the geometric relationships among the transmitter, the receiver and the scatterers. Only first-order or second-order reflections during the propagation process are taken into account, while higher-order reflections are ignored because of their higher attenuation. As shown in
Fig. 3.5, the basic 2D distributions of scatterers mainly include the one-ring, two-ring, ellipse and rectangle [22]; the basic 3D distributions of scatterers mainly include the one-sphere, two-sphere and elliptic-cylinder, etc. The geometric model used in a practical application can be a combination of these basic shapes, and it is also possible to add new geometric shapes.
The regular-shape GSCM is a parametric modeling method with high flexibility. Different channel characteristics can be described accurately by adopting combinations of several geometric shapes and adjusting the related parameters. For some typical 5G scenarios and channel characteristics, Cheng-Xiang Wang's team at Heriot-Watt University has presented a series of channel models based on this method, mainly covering Massive MIMO, M2M and High-Speed Rail (HSR) [17, 23, 24]. For massive MIMO channel modeling, S. Wu et al. built a 2D massive MIMO channel model based on non-stationary twin clusters, from which the space-time correlation function and power spectrum density were investigated [17]. A spherical wavefront was assumed and its impact on the statistical properties of the channel model was investigated, and a birth-death process was incorporated to capture the dynamic properties of clusters on both the array and time axes. In channel modeling for the V2V scenario, Yi Yuan et al. extended the geometric model represented by two circles and an ellipse in the 2D scenario into two spheres and an elliptic-cylinder in the 3D scenario.
3.2.3 CSCM
A. A. M. Saleh and R. A. Valenzuela of AT&T Bell Labs proposed the SV model by analyzing indoor channel measurement data [33]. The channel is composed of multiple clusters and rays. The number of clusters and the number of rays within a cluster obey Poisson distributions; that is, the arrivals of clusters and of intra-cluster rays are two Poisson processes with different arrival rates. The arrival intervals of clusters and of intra-cluster rays therefore obey exponential distributions with different means. The average power of a cluster or of an intra-cluster ray decays exponentially with delay, and the instantaneous power of clusters or intra-cluster rays obeys a Rayleigh or lognormal distribution. The initial SV model could only describe the delay information of the channel, and its delay resolution was limited to about 10 ns by the sounding equipment. Subsequently, researchers improved the time resolution of the model to a nanosecond or less with the help of many measurements in the ultra-wideband and mmWave bands. Moreover, angle information and polarization characteristics have also been integrated to develop the extended SV model. Hereinafter, the IEEE 802.11ad channel model is taken as an example to show the details of the extended SV model. The CIR, h, can be expressed as [9]
h(t,\varphi_{tx},\theta_{tx},\varphi_{rx},\theta_{rx}) = \sum_{i} H^{(i)}\, C^{(i)}\big(t-T^{(i)},\,\varphi_{tx}-\Phi^{(i)}_{tx},\,\theta_{tx}-\Theta^{(i)}_{tx},\,\varphi_{rx}-\Phi^{(i)}_{rx},\,\theta_{rx}-\Theta^{(i)}_{rx}\big)

C^{(i)}(t,\varphi_{tx},\theta_{tx},\varphi_{rx},\theta_{rx}) = \sum_{k} \alpha^{(i,k)}\,\delta(t-\tau^{(i,k)})\,\delta(\varphi_{tx}-\varphi^{(i,k)}_{tx})\,\delta(\theta_{tx}-\theta^{(i,k)}_{tx})\,\delta(\varphi_{rx}-\varphi^{(i,k)}_{rx})\,\delta(\theta_{rx}-\theta^{(i,k)}_{rx}) \qquad (3.4)

where t, \varphi_{tx}, \theta_{tx}, \varphi_{rx}, \theta_{rx} are the time, azimuth angle of departure, elevation angle of departure, azimuth angle of arrival and elevation angle of arrival, respectively. H^{(i)} is a 2 x 2 polarization matrix. C^{(i)} is the CIR of the i-th cluster. \delta(\cdot) denotes the Dirac impulse function. T^{(i)}, \Phi^{(i)}_{tx}, \Theta^{(i)}_{tx}, \Phi^{(i)}_{rx}, \Theta^{(i)}_{rx} are the delay and the departure and arrival angles of the i-th cluster, respectively. \alpha^{(i,k)} is the amplitude of the k-th ray in the i-th cluster. \tau^{(i,k)}, \varphi^{(i,k)}_{tx}, \theta^{(i,k)}_{tx}, \varphi^{(i,k)}_{rx}, \theta^{(i,k)}_{rx} are the delay and angle values of the k-th ray in the i-th cluster relative to the central ray within the cluster. The intra-cluster rays are divided into a forward part and a backward part, represented by subscripts f and b, respectively. The distributions of the parameters are as follows:

p\big(T^{(i)}\,\big|\,T^{(i-1)}\big) = \Lambda\,\exp\!\big[-\Lambda\,(T^{(i)}-T^{(i-1)})\big],\quad \Lambda>0

\big|\alpha^{(i,k)}_{f/b}\big|^{2} = \big|\alpha^{(0,0)}\big|^{2}\,K_{f/b}\,\exp\!\big(-T^{(i)}/\Gamma\big)\,\exp\!\big(-\tau^{(i,k)}_{f/b}/\gamma_{f/b}\big)
The arrivals of clusters and of intra-cluster rays obey Poisson distributions, where \Lambda and \lambda_{f/b} are the arrival rate of clusters and the arrival rate of intra-cluster forward/backward rays, respectively. \Gamma and \gamma_{f/b} are the power decay rates of the clusters and of the forward/backward rays, respectively. The average power of the k-th ray in the i-th cluster is denoted by |\alpha^{(i,k)}_{f/b}|^{2}, which is also the mean of the instantaneous ray power obeying a Rayleigh or lognormal distribution. K_{f/b} denotes the Ricean K-factor of the forward/backward rays. For the angles of clusters and intra-cluster rays, different models make different assumptions. IEEE 802.15.3c assumes the cluster angles are uniformly distributed between 0 and 2\pi, and the azimuth angles of the rays within a cluster follow a Gaussian or Laplacian distribution [8]. IEEE 802.11ad uses the ray tracing method to determine the azimuth and zenith angles of clusters, while the azimuth and zenith angles of the intra-cluster rays are Gaussian distributed with zero mean and a variance of 5 degrees [9]. Like the GSCM, the extended SV model is independent of the antennas, so the MIMO channel model can be obtained by applying specific antenna information.
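The following Python sketch generates SV-style cluster and intra-cluster ray delays and amplitudes according to the assumptions just described (Poisson arrivals, double-exponential power decay, Rayleigh instantaneous amplitudes). The arrival rates, decay constants and counts are illustrative values only.

import numpy as np

rng = np.random.default_rng(0)

LAMBDA_C = 1 / 60e-9                # cluster arrival rate (illustrative)
LAMBDA_R = 1 / 5e-9                 # intra-cluster ray arrival rate (illustrative)
GAMMA_C, GAMMA_R = 40e-9, 10e-9     # cluster / ray power decay constants (illustrative)
N_CLUSTERS, N_RAYS = 6, 8

# Cluster arrival times: Poisson process -> exponential inter-arrival intervals
T = np.cumsum(rng.exponential(1 / LAMBDA_C, N_CLUSTERS)); T -= T[0]

taps = []
for Ti in T:
    tau = np.cumsum(rng.exponential(1 / LAMBDA_R, N_RAYS)); tau -= tau[0]
    # mean ray power: exponential decay over cluster delay and over ray delay
    p_mean = np.exp(-Ti / GAMMA_C) * np.exp(-tau / GAMMA_R)
    # instantaneous amplitudes Rayleigh-distributed with the prescribed mean power
    amp = rng.rayleigh(np.sqrt(p_mean / 2))
    taps += list(zip(Ti + tau, amp))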
There are pros and cons to the GSCM, CSCM, extended SV and RT modeling methods. The first three can separate the antennas from the propagation channel, so the transmission channel can be obtained by applying any type of antenna to the propagation channel. GSCMs and the extended SV model use a cluster-ray structure to describe the propagation channel: each cluster is composed of multiple rays, and the rays within one cluster have similar amplitude, phase, AoA and AoD. The RT channel model is expressed directly in terms of multiple rays. In CSCM, each tap has its own antenna correlation matrix, which may suppress the angular information of the rays. Specific antenna correlation matrices have to be calculated for different antenna configurations and radiation patterns, which makes the model less flexible.
Through rational parameterization, GSCM can describe different scenarios accurately and flexibly. The GSCM does not directly output the antenna correlation matrix, but if needed it can be computed from the channel path parameters [32]. The accuracy of an RT-based model is closely related to the quality of the electronic maps built for the environments. Some notable scatterers (e.g., buildings, desks, etc.) are easy to model, but small irregular objects (such as trees, shrubs and street lamps) may be ignored or cannot be accurately modeled, which leads to improper results. The MiWEBA model needs channel measurements to obtain the statistical features of the small-scale parameters. The extended SV model is suitable for channel modeling with high bandwidth and high delay resolution. It gives an intuitive and accurate description of the densely arriving multipath components of broadband signals in indoor and outdoor environments. However, the SV model lacks support for spatial consistency.
The advantage of the CSCM is that it provides the spatial correlation matrix directly, which makes analyzing the theoretical MIMO performance easier. However, it has several disadvantages. It over-simplifies the true channel. There is a big difference between the channel correlation matrices for LOS and NLOS, which makes a smooth transition between them impossible. The correlations in the time domain and the space domain are assumed independent, which makes any existing temporal-spatial correlation hard to describe. The characteristics of LSPs such as DS and AS changing over time and location are difficult to capture in the model, because these parameters are hidden inside the channel correlation matrix. CSCM also has difficulty modeling the spatial non-stationary phenomenon and the spherical wavefront of near-field propagation.
In simulation, the computational complexity of a channel model is very important. It is often assumed that CSCM has lower complexity and higher computational efficiency than GSCM because of its relatively simple concept. The complexities of the WINNER model (a GSCM) and of a CSCM were analyzed and compared in [37]. The number of GSCM sub-paths was set to 10 or 20, and the taps with a prescribed Doppler spectrum were generated by 8th-order IIR filters in the CSCM. It was found that their complexities (measured as the number of real operations per tap) are of the same order of magnitude when the number of MIMO antennas is small. When the number of MIMO antennas is larger (>16), the complexity of the CSCM is, perhaps surprisingly, higher. RT computation is too complicated to generate the CIRs in real time for link-level and system-level simulation. Off-line calculation of the CIR at each UT location is required to establish a CIR database for simulation, which demands huge storage resources. In addition, the computational cost of the channel convolution in simulation is very large, usually several times that of the CIR generation.
In addition to the above channel modeling methods, some scholars have recently proposed propagation graph theory for channel modeling. It takes the positions of the transmitter, receiver and scatterers, together with the propagation coefficients between them, as input, establishes three transmission matrices expressing the propagation from the transmitter to the scatterers, between the scatterers, and from the scatterers to the receiver, and calculates the channel matrix analytically. The locations of the scatterers can be obtained by measurement or ray tracing, and the propagation coefficients can be obtained by theoretical calculation based on GO, GTD or UTD. The ER method can be combined with a propagation graph to model the channel and obtain a PDP closely aligned with the measurements [38]. Compared with ray tracing, the propagation graph method is computationally cheaper, so that it can generate random channel samples in real time.
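A minimal Python sketch of this idea is given below, using the closed-form reverberation expression commonly employed in propagation graph modeling, H = D + R (I - B)^{-1} T, where T, B and R are the Tx-to-scatterer, scatterer-to-scatterer and scatterer-to-Rx matrices. The scalar gain g and the random geometry are toy assumptions, not values from [38].

import numpy as np

def propagation_graph_tf(f_hz, pos_tx, pos_rx, pos_sc, g=0.6):
    """Toy propagation-graph transfer function at one frequency.
    pos_* are (N, 3) arrays of positions in metres; g is an assumed
    per-edge gain coefficient (kept small so that I - B is invertible)."""
    c = 3e8
    def edge(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return g * np.exp(-1j * 2 * np.pi * f_hz * d / c) / np.maximum(d, 1.0)
    T = edge(pos_sc, pos_tx)                          # Tx -> scatterers
    R = edge(pos_rx, pos_sc)                          # scatterers -> Rx
    B = edge(pos_sc, pos_sc); np.fill_diagonal(B, 0)  # scatterer -> scatterer
    D = edge(pos_rx, pos_tx)                          # direct Tx -> Rx term
    I = np.eye(B.shape[0])
    return D + R @ np.linalg.solve(I - B, T)          # Nrx x Ntx channel matrix

rng = np.random.default_rng(0)
H = propagation_graph_tf(3e9, rng.uniform(0, 50, (2, 3)),
                         rng.uniform(0, 50, (2, 3)), rng.uniform(0, 50, (20, 3)))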
The channel modeling methods used by a variety of standardization organizations and project teams are shown in Table 3.1. From the table, we can see that GSCM was widely used in 3G/4G channel modeling. With the progress of 5G R&D, more and more groups combine ray tracing and measurement to obtain channel models, such as the METIS, MiWEBA and 5GCM channel models and so on.
paths can be identified. The parameters of every resolvable path are output, from which a variety of important channel statistics can be obtained, such as the angular distribution, coherence time, DS, etc. Sometimes it is sufficient to use the measurement results directly for transmission performance analysis. More often, however, it is better to make a general or representative description of a certain type of propagation scenario. In this case channel models are needed, and channel measurement provides the data for channel modeling and for verifying the effectiveness of the model.
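One of the most common statistics derived from measured impulse responses is the RMS delay spread. As a small worked example (with a purely illustrative toy power delay profile), it can be computed as follows.

import numpy as np

def rms_delay_spread(delays_s, powers_lin):
    """RMS delay spread: the second central moment of the normalised PDP."""
    p = np.asarray(powers_lin) / np.sum(powers_lin)
    mean_delay = np.sum(p * delays_s)
    return np.sqrt(np.sum(p * (delays_s - mean_delay) ** 2))

# toy PDP: taps at 0/50/120 ns with powers 0 dB, -6 dB, -12 dB
tau = np.array([0e-9, 50e-9, 120e-9])
pdp = 10 ** (np.array([0.0, -6.0, -12.0]) / 10)
print(rms_delay_spread(tau, pdp))   # about 31 ns for this toy profile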
Channel measurement methods differ in many aspects, such as the form of the sounding signals, the number of RF channels, and the types and numbers of antennas. This section introduces the various measurement methods within a unified framework, together with several channel sounding systems commonly used in academia and industry, as well as various channel measurement activities.
Large bandwidth and multiple antennas are the main features of 5G, so in this subsection we take the broadband MIMO channel as the sounding target. According to the number of channels measured simultaneously, channel measurement methods can be divided into Single Input Single Output (SISO) [39-43], Multiple Input Multiple Output (MIMO), and the intermediate Single Input Multiple Output (SIMO) [44]. The MIMO method measures multiple channels simultaneously in parallel. It has the highest measurement speed and can be applied to high-speed mobility scenarios. However, multiple parallel RF units are required. Besides the complex structure and high cost, the inherent requirements of consistency and synchronization increase the difficulty of system implementation and calibration. In addition, multiple concurrent sounding signals can interfere with each other: even when orthogonal sequences are used as the excitation signals, the orthogonality between delayed versions of these sequences is hard to guarantee under multipath channel conditions. Interference is therefore still produced, so a more complex processing algorithm is needed. SISO needs only one RF chain at the transmitter and at the receiver, and adopts a Time Division Multiplexing (TDM) mode to sound the subchannels iteratively. An electronic switch box at each end synchronously switches the antenna element connected to the RF chain. This method has a relatively simple structure and low cost, and the numbers of antennas at the transmitter and receiver can be flexibly configured. Because there is only one RF chain, calibration and consistency across the multiple channels are much easier to achieve. In addition, since only one probe signal is transmitted, there is no orthogonality requirement. However, its measurement time is generally longer, which makes it unsuitable for high-mobility scenarios. The switch boxes at the transmitter and receiver act according to a designed sequence.
Fig. 3.6 Block diagram of a modular channel sounder: the transmitter contains a signal source, multi-channel parallel or multiplexing (switch) unit, up-converter and local oscillator; the receiver contains a down-converter, multi-channel parallel or multiplexing (switch) unit, low-pass or band-pass filtering, A/D conversion, storage or display, post-processing and a local oscillator (figure)
For example, one possible switching sequence is as follows. The Tx box switches to one transmitting antenna and keeps the connection while the Rx box switches through each of the receiving antennas, sounding the corresponding subchannels sequentially. When a group of subchannels has been sounded, the Rx box switches from the last antenna back to the first one, and at the same time the Tx box switches to the next antenna and the next group of subchannels is sounded. SIMO is a compromise between the SISO and MIMO methods: it uses switched signal transmission and parallel signal reception, and strikes a balance among system complexity, cost and measurement speed. Although a few systems [45, 46] have MIMO sounding ability in terms of system structure and operating principle, they are SIMO in nature. It is also possible to exploit the advantages of both MIMO and SISO and extend a MIMO sounder to support larger arrays through a switch system.
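The switching order described above is easy to express programmatically; the short Python sketch below simply enumerates the (Tx, Rx) pairs in the sequence described (the function name is ours).

def siso_tdm_schedule(n_tx, n_rx):
    """Sounding order for SISO TDM switching: the Tx switch holds one antenna
    while the Rx switch cycles through all of its antennas, then the Tx switch
    advances to the next antenna."""
    return [(tx, rx) for tx in range(n_tx) for rx in range(n_rx)]

# e.g. a 2 x 4 MIMO matrix sounded as 8 sequential sub-channels
print(siso_tdm_schedule(2, 4))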
A unified modular channel sounder, as shown in Fig. 3.6, is used as a framework
to illustrate various methods, techniques, and devices in the channel measurement.
We mainly focus on four aspects of the measurement system: antenna system
(including type and switching of antenna), RF unit (local oscillator and up/down
converter), sounding signal (baseband part), and transmitter/receiver synchroniza-
tion method.
1. Antenna and antenna array
The transmitting and receiving antenna elements used in sounding come in many types. To ensure the radiation and collection of the radio signal over all spatial angles, omnidirectional or low-gain antennas are adopted, such as the half-wave dipole antenna, patch antenna, discone antenna, biconical antenna and open-ended waveguide (OEW). To ensure sufficient signal strength, directional antennas are adopted, such as the standard-gain horn antenna, parabolic antenna and lens antenna. Directional antennas can only cover certain angles, so to achieve coverage of all spatial angles, multiple directional antennas pointing in different directions must be combined, or a single directional antenna must be rotated for scanning. When the polarization characteristics of the channel need to be measured, dual-polarized antenna elements are necessary.
Because the mechanical movement used to form a virtual antenna array is much slower than the switching speed of an electronic switch, the virtual antenna array is only suitable for static or low-speed measurements. In addition, in a measurement system using a real antenna array, when a new frequency band is to be measured, not only must a new antenna array be developed, but a full set of complex calibrations must also be performed for it. In this respect, the virtual antenna array needs only one antenna element, so it is much easier in terms of cost and calibration complexity.
2. RF unit
The important parameters of the RF unit include the number of RF channels, the RF frequency range, and the RF signal bandwidth. The parallel sounding mode requires multiple parallel RF units, whose consistency and mutual interference must be considered. At present, most channel models specify an applicable frequency range and sounding bandwidth: 3GPP SCM covers 1~3 GHz with 5 MHz bandwidth, WINNER II/+ and IMT-A cover 0.45~6 GHz with 100 MHz, and 3GPP D2D and 3D cover 1~4 GHz with 100 MHz. For sub-6 GHz bands, the existing channel models supporting 100 MHz signal bandwidth seem to be sufficient, but larger bandwidth requirements will continue to emerge; for example, the highest bandwidth in IEEE 802.11ac is 160 MHz. Therefore, the channel models will have to support higher bandwidths, such as more than 500 MHz. In the 60 GHz mmWave band, the bandwidth of the sounding system used for the development of IEEE 802.11ad was 800 MHz [48]. In future mmWave applications, the signal bandwidth may exceed 2 GHz, which requires the RF unit of a mmWave channel sounder to support a bandwidth larger than 2 GHz.
In mmWave channel sounding, because the cable loss of high-frequency signals is large, many sounding systems use an independent external mixer placed close to the antennas. A lower-frequency excitation signal is transmitted to the mixer through a cable, and the lower-frequency received signal output by the downconverter is transmitted to the signal analysis system through a cable. Sounding systems based on the Vector Network Analyzer (VNA) require special considerations: (1) the local oscillator (LO) signals for the mixers at both the transmitter and the receiver should come from the same LO source, so that the consistency of the reference frequency and phase at each measurement frequency point can be guaranteed; (2) sometimes a long cable is used to obtain a longer sounding distance, and even the low-frequency signal suffers considerable loss over such a cable, so the signal strength has to be maintained by using a power amplifier or an optical fiber link [1, 49].
3. Sounding signal and detection technology
Different sounding signals have a large impact on the performance of sounding systems and correspond to different parameter extraction algorithms. How to choose a sounding signal to achieve the optimal performance of the channel sounder is well worth exploring. The pros and cons of a sounding signal can be evaluated from several aspects, such as the signal duration, signal bandwidth, time-bandwidth product, power spectral density, peak-to-average power ratio (PAPR) and correlation properties. Below are some commonly used channel sounding signals.
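As a small illustration of how one widely used sounding signal can be checked against the metrics above, the following Python sketch generates a BPSK-mapped m-sequence and computes its PAPR and periodic autocorrelation.

import numpy as np
from scipy.signal import max_len_seq

# BPSK m-sequence as a sounding signal (length 2^10 - 1 chips)
seq = 2.0 * max_len_seq(10)[0] - 1.0

# Peak-to-average power ratio: 0 dB for a constant-envelope BPSK sequence
papr_db = 10 * np.log10(np.max(np.abs(seq) ** 2) / np.mean(np.abs(seq) ** 2))

# Periodic (circular) autocorrelation: a peak of K at zero lag, -1 elsewhere
acf = np.array([np.dot(seq, np.roll(seq, k)) for k in range(len(seq))])
print(papr_db, acf[0], acf[1:].min(), acf[1:].max())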
In addition, when SISO or SIMO is used for MIMO sounding, the synchronous
switching of the switch arrays at the receiver and the transmitter must be strictly
controlled. The clock of the switching control circuit is also derived from the same
reference clock.
5. Channel sounding system
The following is a summary of some commonly used channel sounding systems, as listed in Table 3.5. The early systems that undertook channel sounding tasks for 4G or earlier communication systems only covered frequency bands below 6 GHz. These systems include Propsound [39] (no longer in production), RUSK [40], the channel sounder of Aalto University (Aalto U., formerly the Helsinki University of Technology, HUT) [41], the channel sounder of NTT DOCOMO [42], etc. Many of these systems were applied in measurement campaigns for various standard channel models; e.g., Propsound, Medav RUSK and the Aalto University system were widely used in the development of the COST, WINNER, IMT-Advanced and 3GPP 3D channel models. In recent years, these systems have also been used in channel sounding for massive MIMO. With the increasing demand for sounding at higher frequency bands, broadband mmWave sounding systems [1, 43, 48] and VNA-based measurement systems [1, 49, 51] have been developed (customized) for specific frequencies. Of course, the previous sounding systems can also be extended in frequency to support mmWave sounding. The VNA-based measurement system is often used in the sounding of massive MIMO. To reduce the cost of hardware and calibration, most of the existing systems use a single RF unit, but there are also sounders with multiple receivers (such as the Durham University sounder [45]) and sounders with multiple receivers and multiple transmitters, such as the Tokyo Institute of Technology (TIT) sounder [45, 46].
Figure 3.7 shows MIMO channel sounding achieved with the SISO method by sequentially switching the receiving and transmitting antennas. A simple analysis of the dynamic channel sounding capability of such a system follows. Let Nt denote the number of transmitting antennas, Nr the number of receiving antennas, and \tau_max the maximum excess delay of the channel. Take a PN sequence as the sounding sequence and SISO sounding as an example. A periodic sounding sequence is composed of K chips; the chip duration is Tp, so the sequence length is Ta = K Tp. A longer sounding sequence Ta gives a greater processing gain (see the SAGE algorithm in Sect. 3.4), but it also limits the measurable Doppler shift. The time span To of one sounding should equal the sum of the sounding sequence length Ta and the maximum excess delay \tau_max, i.e., To = Ta + \tau_max. If we take into account the antenna switching time Tw and follow the assumption used in the RUSK, the measurement time and switching time are equal, i.e., Tw = To, so the measurement time of one subchannel is 2To. The time spent in one snapshot, in which all sub-channels are sounded once, is Tsnap = 2 To Nt Nr. For a time-varying channel, the snapshot repetition period is denoted as Tf. If the gap between snapshots is zero, i.e., Tf = Tsnap, the maximum Doppler shift that can be measured by the system is 1/(2 Tsnap).
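The following short Python sketch works through this timing budget for an assumed parameter set (the chip count, chip duration, excess delay and array sizes are illustrative, not taken from a specific sounder).

# Timing budget of switched (SISO) MIMO sounding, following the relations above.
K, Tp = 511, 10e-9          # chips per sequence and chip duration (illustrative)
tau_max = 2e-6              # assumed maximum excess delay of the channel
Nt, Nr = 4, 4               # antennas at transmitter and receiver

Ta = K * Tp                 # sounding-sequence length
To = Ta + tau_max           # time span of one sounding
Tsnap = 2 * To * Nt * Nr    # one full snapshot (RUSK assumption: Tw = To)
nu_max = 1 / (2 * Tsnap)    # measurable Doppler when Tf = Tsnap (Nyquist limit)
print(Ta, To, Tsnap, nu_max)   # about 5.11 us, 7.11 us, 227.5 us, 2.2 kHz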
Fig. 3.7 Timing sequence in MIMO channel sounding via switching of the transmitting and receiving antennas: each subchannel occupies 2To, each Tx antenna occupies 2NrTo, and one snapshot occupies Tsnap = 2NtNrTo, repeated with period Tf
(Table continued) Research institute | Scenario | Antenna configuration | Sounder | Carrier/bandwidth | Measurement parameters
 | Outdoors [53] | Tx: 1 omnidirectional; Rx: virtual ULA (128 omnidirectional), 7.3 m long | VNA | 2.6 GHz/50 MHz | Path gain, K-factor, PAS, eigenvalues, channel correlation between users
 | Outdoors [54, 55] | Tx: 1 omnidirectional; Rx1: virtual ULA (128); Rx2: UCA (128) | VNA, RUSK | 2.6 GHz/50 MHz | Large-scale fading, angular resolution, singular value spread [55]
 | Outdoors [56] | Tx: 8 omnidirectional; Rx: UCA (128 elements) | RUSK | 2.6 GHz/40 MHz | Singular value spread
Alcatel-Lucent Bell Labs in Germany | Outdoors [57] | Tx: UCA (112 patch); Rx: 2 omnidirectional (2 m spaced) | Customized (LTE) | 2.6 GHz/20 MHz | Correlation, inverse condition number
Aalborg University in Denmark | Indoors [58] | Tx: 8*2 patch (6 orientations and combinations); Rx: 64 omnidirectional (3 array modes) | Customized | 5.8 GHz/100 MHz | Correlation matrix, channel singular values, condition numbers, power variation over the array
BJTU | Indoors, outdoors [59] | Tx: 128 virtual ULA (~12.9 m) and virtual UCA (biconical); Rx: 1/2 omnidirectional | VSG + mixer, VSA | 1427~1518 MHz, 4400~4500 MHz | Small-scale parameters
 | Outdoors [60] | Tx: 1 biconical; Rx: 64 virtual ULA (biconical) | VSG, VSA | 3.33 GHz/100 MHz | PADS, PDP, PAS, DS, AS
antenna elements. The ULA provides better azimuth resolution than the UCA, but the latter provides resolution in both the azimuth and zenith dimensions. The analysis showed that both arrays could approach the capacity provided by an independent and identically distributed (i.i.d.) channel, even for closely located users in the LOS case. When the number of BS antennas is 10 times the number of user antennas, the array can be considered a massive MIMO array; the performance gain diminishes if the number of BS antennas is increased further.
In [56], the UCA from [52] was used as the receiving antenna at the BS. Eight users walked randomly in a circle with a radius of 5 m at about 0.5 m/s, and the user antennas were connected to the transmitter of a RUSK channel sounder, which constituted a synchronous massive MIMO measurement system for investigating the spatial separability of adjacent users. As reported in [54, 55], even users located close to each other in the LOS case can be spatially separated in a massive MIMO system.
A massive MIMO measurement was also carried out by the Alcatel-Lucent Bell Labs in Germany [57]. Seven vertically arranged antennas with half-wavelength spacing were rotated around a circle of radius 1 m in 3.5-degree steps, pointing to 16 angular directions, to form a virtual antenna array with 112 elements. An LTE-like system with a subcarrier spacing of 15 kHz, in which only 400 subcarriers were valid (every 3rd subcarrier out of 1200), was used to sound the channel and analyze its characteristics. The study mainly examined the orthogonality of the channels at different measurement positions in terms of correlation coefficients and the inverse condition number. The results confirmed the theoretical advantages of massive MIMO over conventional MIMO: using more antennas improves the orthogonality of the channels between users, but the performance no longer improves significantly once the number of antennas exceeds a certain amount.
As part of the METIS project, a massive MIMO measurement was completed by Aalborg University in Denmark using a self-developed sounding system with a 5.8 GHz carrier frequency and 100 MHz bandwidth [58]. With 16 transmitting antennas and 64 receiving antennas, a 64 x 16 channel matrix was obtained. The 64 identical receiving antenna elements (separated into 8 groups of 8 antennas each) formed three array shapes with different apertures, i.e., an ultra-large linear array (6 m long), a large linear array (2 m long) and a square array. There were 8 users, with two antennas per user. Six schemes with different user locations were designed to achieve various measurement conditions, such as LOS or NLOS cases, users being sparsely or densely distributed, and user antennas being perpendicular or parallel to the array. The user/antenna correlation matrix, the singular values and condition number of the channel, and the power variation over the antenna array were studied. The measurement results showed that a larger antenna aperture produces more spatial degrees of freedom (DOF), close to the theoretical value. The array with the largest aperture was able to approach the performance of an i.i.d. channel, since it enabled channel discrimination among users and between the antennas of the same device.
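The singular-value and condition-number metrics reported in these campaigns are straightforward to compute from a measured channel matrix; a minimal Python sketch (using an i.i.d. Rayleigh matrix purely as a synthetic stand-in for measured data) is given below.

import numpy as np

def singular_value_metrics(H):
    """Condition number and singular-value spread (in dB) of a channel matrix."""
    s = np.linalg.svd(H, compute_uv=False)       # singular values, descending
    return s[0] / s[-1], 20 * np.log10(s[0] / s[-1])

# toy 64 x 8 i.i.d. Rayleigh channel as a reference case
rng = np.random.default_rng(0)
H = (rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))) / np.sqrt(2)
print(singular_value_metrics(H))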
D2D/V2V and high-speed rail (HSR) communications are important for future 5G wireless communications. IEEE 802.11p [61] is committed to the standardization of vehicle safety communications and defines three link types, namely V2V, V2I (I refers to Infrastructure) and I2V. IEEE 802.11p adopts the 5.9 GHz frequency band for V2V communication and gives a simple SISO channel model with a tapped-delay-line (TDL) structure, in which the envelope of the channel coefficient of each tap is assumed to be Ricean or Rayleigh distributed. In the future, MIMO technology will be widely used in vehicular communication systems because it can significantly improve the system capacity and guarantee reliable data transmission. Therefore, it is essential to strengthen MIMO channel sounding for the D2D, V2V and HSR scenarios. Extensive channel measurement activities for D2D/V2V and HSR scenarios have been carried out [1, 62-82]. Some valuable measurement activities and the related details are listed in Table 3.7. The following is a brief review of these measurement activities.
In the METIS project, a channel measurement for the D2D scenario was conducted by NTT DOCOMO in Japan using a self-developed channel sounder at a 2.225 GHz carrier frequency [42]. A sleeve dipole antenna and a slotted cylinder antenna were used at the BS side to transmit vertically and horizontally polarized waves, respectively. A UCA composed of 48 dual-polarized antennas (96 ports) was used at the receiver. The antenna height at both ends was set to 1.45 m. The UT moved along five designated courses at about 1 m/s. The received power, PDP, azimuth angular spread of arrival (ASA) and zenith angular spread of arrival (ZSA) were analyzed. Measurements were carried out during the day and at night in order to observe the effect of pedestrians on the channel. The results showed that at night the received power and DS are larger, because during the daytime pedestrians block the long-delay paths, and the ZSA at night is smaller, because during the daytime pedestrians act as scatterers in the zenith direction. In contrast, the ASAs in the daytime and at night did not show a significant difference.
Table 3.7 Summary of D2D/V2V/HSR measurement activities
Research institute | Scenario | Antenna configuration | Channel sounder | Carrier/bandwidth (GHz) | Measurement parameters
NTT DOCOMO [1] | D2D urban, 1 m/s | Tx: sleeve; Rx: 96 ports, dual polarization, 1.45 m high | DOCOMO | 2.225/0.050 | Received power, PDP, ASA, ZSA
University of Oulu [1, 62] | V2V, same/opposite direction, 20 km/h | Tx: 1 omnidirectional, 1.6 m high; Rx: 56-port cylinder array (2.3 GHz), 50-port cylinder array (5.25 GHz), 2.5 m high | Propsound | 2.3/0.1, 5.25/0.2 | PL, DS, SF, K, correlation distance (DS, SF, K)
Lund University [63-66] | V2V, 2 scenarios, same/opposite direction | UCA (4 patch), 2.5 m high | RUSK | 5.2/0.24 | TDP (2D), DDP (2D), path power, antenna correlation matrix
 | V2V, 3 scenarios | ULA (4 patch), 1.73 m high | RUSK | 5.6/0.24, 0.020 analysis bandwidth | AoA, AoD, DS, Doppler, ASA
 | V2V, 10 scenarios | | | | LSF, time-frequency varying PDP, DSD, DS, Doppler spread, K
Aalto University | V2V, 4 scenarios, two vehicles, same direction, 5~15 km/h [67] | Tx/Rx: SPH_5 | Aalto sounder | 5.3/0.06 | PL and exponent, SC parameters, DMC parameters, SS fading gain
 | V2V, 5 scenarios, <40 km/h [68] | Tx: ULA 4; Rx: SPH_5, ~2 m high | | | PDP, APDP, ACF, CB, CMD, SD, SC
NEC [70] | V2V, 7 scenarios, <108 km/h | Single antenna, 1.5 m~3 m high | Self-developed | 5.9/0.020 | Large- and small-scale power
German Aerospace Center [69] | V2V, speed 30 km/h | Omnidirectional dipole | RUSK | 5.2/0.12 | DDP
University of Southern California [71] | V2V, 2 scenarios, <1 m/s | Single antenna; Tx: 1.5 m/3 m high; Rx: 1.5 m high | Tx: WARP; Rx: VSA | 5.8/0.015 | PL, SF, amplitude distribution, correlation of DS and SF
BJTU [74-77] | HSR, multiple scenarios, <350 km/h | Single antenna | Tx: GSM-R; Rx: VSA | 0.93/0.001 | PL, SF, amplitude distribution, DS, K-factor, LCR, AFD
 | HSR, cutting | Single antenna | Propsound | 2.35/0.05 |
BJTU, UPM [78-81] | Subway, <120 km/h | Single antenna | Tx: customized source; Rx: VSA | 2.4/0.002, 0.92/-, 2.4/-, 5.705/- |
WINNER+ [164] | HSR, 20/100/240 km/h | Tx: monopole, 5 m high; Rx: UCA (16 discone), 6 m high | RUSK | 5.25/- | PL, DS, K, ASA
BUPT [83] | HSR, 370 km/h | Tx: 2 vertically polarized directional, 45/135 layout, 40 m high; Rx: 2 x 3 dBi, ~4 m high | VSG + VSA | 2.6/0.02 | Correlation coefficients between subchannels, PL, SF, DS, K-factor
The statistical parameters obtained from the measurement results are listed in detail in Table A-14 of [1].
Also in the METIS project, a V2V channel measurement was carried out with a Propsound sounder in two frequency bands (100 MHz bandwidth in the 2.3 GHz band and 200 MHz bandwidth in the 5.25 GHz band) by the University of Oulu in downtown Oulu, Finland [62]. A single omni-directional antenna at a height of 1.6 m was used at the transmitter side; at the receiver side, an array with 28 dual-polarized elements (56 ports, 2.3 GHz) and an array with 25 dual-polarized elements (50 ports, 5.25 GHz) were mounted at a height of 2.5 m. Two types of V2V scenario were measured, moving in the same direction with low traffic density and moving in the opposite direction with high traffic density, with a maximum relative speed of 20 km/h. The path loss, DS, Maximum Excess Delay (MED), SF standard deviation and K-factor, as well as their respective correlation distances, were investigated. The results showed that the correlation distances are less than 11 m.
A series of 4-by-4 MIMO channel measurements was carried out by Lund University in Sweden in many scenarios using a RUSK channel sounder.
In [63], the system configuration was a 5.2 GHz carrier frequency and 240 MHz bandwidth. A circular array comprising four microstrip antennas was placed on the roof of each car; the antenna height was 2.5 m and the antenna radiation directions were 45°/135°/225°/315°, respectively. A sounding sequence of 3.2 µs was used, and the snapshot interval (traversing the 16 subchannels) was 0.3072 ms, which corresponds to a maximum Doppler shift of 1.6 kHz (equivalent to 338 km/h). The measurement scenarios included a one-way rural expressway and a one-way two-lane highway, with the vehicles moving either in the same direction or in opposite directions. First, the Time-Delay Profile (TDP) and Delay-Doppler Profile (DDP) of the V2V channel were analyzed, and the results showed that the propagation channel was composed of diffuse scattering and discrete specular scattering. The path loss model was a combination of attenuation with distance and a slowly varying stochastic process. Based on these observations, a GSCM channel model suitable for V2V transmission was proposed, and its antenna correlation matrix was verified to be consistent with the measurement results.
In [64], the system configuration was a 5.6 GHz carrier frequency and 240 MHz bandwidth. Four microstrip antennas were placed on the car roof along the vehicle axis with half-wavelength spacing; the antenna height was 1.73 m and the maximum radiation directions pointed toward the front, back, left and right, respectively. Three scenarios were designed. The first was a crossroads scenario, where the transmitter stood stationary near an intersection while the receiver moved along the perpendicular road at a speed of 30~40 km/h. The second was a two-lane expressway, where the transmitting car was blocked in one lane and the receiving car approached the transmitter in the other lane at a speed of 70 km/h. In the third scenario, two cars were driven at a speed of 110 km/h in one lane of a two-lane highway, with both cars obstructed by a tall van. The AoA, AoD, delay and Doppler shift of the V2V channel were analyzed. It was found that in the second and third scenarios the channel was mainly composed of first-order reflection paths and had a smaller AS, so that beamforming is more suitable than diversity for these scenarios. In the first scenario there were many higher-order reflections leading to a larger AS, so that diversity could be more suitable.
In [65], the same sounding system as in [64] was used to measure more scenarios, including: (1) road crossing (four sub-scenarios with respect to suburban or urban, with or without traffic, and single or multiple lanes), where both vehicles approach the crossing from perpendicular directions at 10~50 km/h; (2) general LOS obstruction on a highway, where both vehicles drive in the same direction at similar speeds of 70~110 km/h; (3) ramp merging with a partly obstructed main road in rural environments, where both cars drive in the same direction at 80~90 km/h; (4) traffic congestion, including two cases: both vehicles stuck in a traffic jam and slowly driving at 15~30 km/h, or one vehicle totally stuck in traffic while the other approaches at a relatively high speed of 60~70 km/h; (5) in-tunnel, one-way with two lanes, where both vehicles drive at 80~110 km/h; (6) on-bridge, where both vehicles, separated by 150 m, drive at about 100 km/h. There were 3~15 measurements for each scenario. The research showed that the V2V channel is non-stationary, or can at best be considered locally stationary. Therefore, the time-varying local scattering function (LSF) was calculated, from which the time-varying power delay profile (PDP) and Doppler power spectral density (DSD) were obtained. The time-varying RMS DS and Doppler spread were further derived, and these two statistics could be modeled by a bimodal Gaussian mixture distribution. Bernado et al. further analyzed the measured data and concluded that the power of the LOS path is Ricean distributed with a time-varying K-factor, while the powers of the other paths follow a Rayleigh distribution [66].
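Estimating the Ricean K-factor from measured envelope samples, as done in these studies, is often performed with a moment-based method; a minimal Python sketch (using the second- and fourth-moment estimator, with synthetic test data) is shown below.

import numpy as np

def k_factor_moment(r):
    """Moment-based Ricean K-factor estimate from envelope samples r
    (second/fourth-moment method; returns nan if the estimate is invalid)."""
    m2, m4 = np.mean(r ** 2), np.mean(r ** 4)
    d = 2 * m2 ** 2 - m4
    if d <= 0:
        return np.nan
    a2 = np.sqrt(d)                 # estimated LOS power A^2
    return a2 / (m2 - a2)           # K = A^2 / (2 sigma^2)

# check on synthetic Ricean samples with K = 4 (6 dB)
rng = np.random.default_rng(0)
K, sigma = 4.0, 1.0
a = np.sqrt(2 * K) * sigma
r = np.abs(a + sigma * (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)))
print(k_factor_moment(r))          # close to 4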
As early as 2007, V2V MIMO channel soundings for four scenarios (campus, six-lane highway, urban, suburban) were carried out by Aalto University in Finland using a self-developed channel sounder with a 5.2 GHz carrier frequency and 60 MHz bandwidth [67]. One snapshot took 8.4 ms and the snapshot repetition rate was 14.3 Hz, allowing a maximum Doppler shift of 7.15 Hz. Two vehicles were driven in the same direction at 5~15 km/h. Both the transmitter and the receiver were equipped with a hemispheric antenna array comprising 15 dual-polarized patch antennas (30 ports), so that a 30-by-30 MIMO channel response was obtained in one measurement. A simplified 2D geometry of the scattering environment, parameterized by the scatterer densities, birth/death processes and locations, was used to build a GSCM for the V2V channel, in which the time-varying fading gain of the specular components (SC) was estimated and modeled as a combination of path loss, large-scale fading described by a Gaussian random process, and small-scale fading following a Weibull distribution. Meanwhile, the DMCs were estimated and modeled as a stochastic process with the large-scale power decaying exponentially (in dB) in the delay domain and the small-scale fading also being Weibull distributed.
In another measurement, the same sounding system was used to study the quasi-stationary region of the V2V channel [68]. The same receiving antenna array as above was used, while a four-element uniform linear array was used at the transmitter side. One snapshot took 1.632 ms and the snapshot repetition rate was 66.7 Hz, which limited the maximum speed to 40 km/h for the five scenarios. The distance between the two cars was between 10 m and 500 m, depending on the traffic conditions. The PDP, averaged PDP (APDP), autocorrelation of small-scale fading, coherence bandwidth (CB), correlation matrix distance (CMD), spectral divergence (SD) and shadow-fading correlation (SC) were analyzed for the different scenarios under the LOS condition. It was found that the quasi-stationary interval is around 3~80 m and is strongly affected by the existence of the LOS component, the vehicle speed, and the antenna array size and configuration.
A V2V channel measurement was carried out by the German Aerospace Center using a RUSK channel sounder configured with a single antenna, a 5.2 GHz carrier frequency and 120 MHz bandwidth [69]. This measurement mainly aimed to verify the delay-Doppler PDF proposed for a GSCM. Measurements in the LOS and NLOS (due to vehicles or buildings/foliage) cases for 7 scenarios were carried out by the NEC laboratory in Germany using self-developed IEEE 802.11p devices with a single antenna, a 5.9 GHz carrier frequency and 20 MHz bandwidth (larger than the standardized 10 MHz of IEEE 802.11p) [70]. This measurement mainly focused on the large-scale and small-scale signal variations. Another V2V channel sounding was carried out by the University of Southern California [71]. The software-defined radio platform WARP [72] and a commercial spectrum analyzer were used as the transmitter and receiver, respectively; the carrier frequency was set to 5.805 GHz, quite close to the 5.9 GHz standardized by IEEE 802.11p, and the bandwidth was 15 MHz. The case in which the link is blocked by other trucks was considered for two scenarios, open space and high-rise building areas. The path loss, SF, amplitude distribution of the small-scale fading (consistent with the Nakagami distribution), and the correlation of DS and SF were measured and analyzed.
Professor Bo Ai of BJTU divided the HSR scenarios into 12 categories, including viaduct, cutting, tunnel and station, and further into 18 subcategories [73]. Extensive measurements were done with the GSM-R system as the transmitter and a commercial signal analyzer as the receiver, or with the Propsound channel sounder [74-77]. As the train speed reaches up to 350 km/h, the coherence distance can shrink to 10 cm [74], which presents challenges to the measurements. Some subway channel measurement activities were also carried out in Madrid, Spain, in Shanghai and Beijing, China, and in other cities [78-81]. These measurements concerned the SISO channel, and several large- and small-scale parameters such as path loss, SF, amplitude distributions, DS, K-factor, level crossing rate (LCR) and average fade duration (AFD) were analyzed. An HSR channel measurement for the viaduct scenario was conducted at 2.6 GHz along the Harbin-Dalian High-Speed Railway (HD-HSR) in China by researchers of the Beijing University of Posts and Telecommunications (BUPT) to obtain 2 x 2 MIMO channel measurement data.
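To give a feel for the orders of magnitude involved, the following short Python sketch computes the maximum Doppler shift at 350 km/h for the GSM-R and Propsound carrier frequencies mentioned above, together with a coherence-distance estimate based on the common 0.423-wavelength rule of thumb (Clarke model); these are rough illustrative figures, not measurement results.

# Rough Doppler and coherence-distance figures for the HSR case above.
c = 3e8
v = 350 / 3.6                 # 350 km/h in m/s
for f_c in (0.93e9, 2.35e9):  # carrier frequencies used in the BJTU campaigns
    f_d = v * f_c / c                       # maximum Doppler shift
    wavelength = c / f_c
    d_coh = 0.423 * wavelength              # 0.423*lambda rule of thumb
    print(f"{f_c/1e9:.2f} GHz: f_D ~ {f_d:.0f} Hz, coherence distance ~ {d_coh*100:.0f} cm")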
Measurements and research on high frequency (HF) band channels have been carried out for over 20 years. The main concerns have shifted from narrowband to wideband channel characteristics, and the small-scale characteristics of interest have moved from simple multipath characteristics in the delay domain to joint characteristics in both the delay and angular domains. In recent years, with the gradual progress of 5G R&D, several key frequency bands in the 6~100 GHz range, such as 11 GHz, 15 GHz, 28 GHz, 38 GHz, 60 GHz and the E band, have received more and more attention, and their path loss, blockage loss, penetration loss, SF, DS, AS and other channel characteristics are being extensively studied. Many communication enterprises, universities and research institutes, including Professor Rappaport's team, the National Institute of Information and Communications Technology (NICT) in Japan, the Intel laboratories (in Russia and Germany), the Fraunhofer HHI laboratory in Germany, Aalto University, the Tokyo Institute of Technology, etc., have carried out extensive HF channel measurements in multiple frequency bands. They have also formed several influential international project teams, such as METIS, MiWEBA, mmMAGIC and 5GCM, to carry out channel measurements and modeling in cooperation. Domestically, HUAWEI, Southeast University, Beijing University of Posts and Telecommunications, Tongji University, Beijing Jiaotong University, Shandong University and other research organizations and institutes have begun related research and participated in the international cooperation. Some measurements and the related details are listed in Table 3.8.
Table 3.8 Summary of high frequency band measurement activities
Research institute | Frequency/bandwidth (GHz) | Sounder | Antenna configuration | Scenario | Measurement parameters
Ericsson Ltd. [1, 11] | Multi-frequency: 2.44/0.08, 5.8/0.15, 14.8/0.2, 58.68/2 | VNA + up/down-converter | Tx: 7 dBi patch @2.44/5.8 GHz, OEW @14/58 GHz; Rx: 2 dBi omni-directional; all vertically polarized | O2I (outdoor 60 m to indoor 2~15 m); TxH/RxH: 1.5 m | Penetration loss, DS
 | | | | Indoor (<80 m), street canyon (~130 m); TxH/RxH: 1.5 m | PL, diffraction and diffuse scattering phenomena
 | Multi-frequency: 5.8/0.15, 14.8/0.2, 58.68/2 | | Tx/Rx: 2 dBi monopole, virtual cube array 25 [3] | Indoor (LOS 1.5 m, NLOS 14 m); TxH/RxH: 1.5 m | AS, DS, PDP
Aalto University | 62/4 (measurement), 0.2 (analysis) [49] | VNA + up/down-converter | Tx: VHUPA (7*7) with biconical; Rx: VVUPA (7*7) with OEW | Indoors: office; TxH: ~2.3 m, RxH: ~1 m | All large/small-scale parameters
 | 61~65 [1] | | Tx: 20 dBi horn antenna, rotating in azimuth with 3° steps; Rx: biconical | Shopping mall, cafeteria, open square; TxH/RxH: ~2 m | All large/small-scale polarized parameters, without zenith angle
 | Multi-frequency: 14~14.5, 27~27.9, 59~63 [11] | | Tx: vertically polarized biconical; Rx: 19 dBi dual-polarized horn antenna, rotating in azimuth with 5° steps | Street canyon (19~121 m); TxH/RxH: 2.57 m | Polarized PDP, PADP, frequency dependence, DS, ASA, PL
 | 27.45/0.9 [11] | | Antennas same as above, except rotating with 7.2° steps | Open square access; TxH: 1.6 m, RxH: 5 m |
In the early stage, measurement results for urban and suburban environments at 9.6/28.8/57.6 GHz [84], 55 GHz [85], 60 GHz [86], 62 GHz [87] and other frequencies were published; the main concerns were narrowband channel characteristics, including path loss, rain attenuation and oxygen absorption. At the end of the 1990s, the transmission and reflection coefficients of typical materials, including walls, floors, ceilings and windows, in the 57.5 GHz, 78.5 GHz and 95.9 GHz bands were measured by NICT [88], and the measurement results were compared with a reflection model for multi-layer materials. The early measurements were carried out in a single frequency band; only recently have measurements at multiple frequency bands in the same environment been carried out in order to establish the frequency dependence of the various parameters.
The main measurements for the IEEE 802.15.3c channel model were carried out by NICT [47], NICTA in Australia, IMST in Germany, France Telecom (FT), IBM and the University of Massachusetts (UMass) [8]. Except for IBM, which used a broadband sounder with 600 MHz bandwidth [89], the other institutes used VNA-based sounders for the channel measurements. The measurements mainly concentrated on ten indoor scenarios, including the living room, office, desktop, library, corridor and aircraft cabin, in both LOS and NLOS conditions. Angle information was obtained by rotating a standard-gain horn antenna in 5° steps in the azimuth direction. Finally, the cluster parameters (number, arrival rate, power attenuation index, AS) and intra-cluster ray parameters (arrival rate, average arrival time, power attenuation rate, etc.) were obtained by analyzing the measurement data. See Sect. 6 of [8] for details.
Prof. Rappaport's team used separate components and modules to build a broadband sounder based on the sliding correlation concept [43]. The chip rate of the PN sequence could be set to 200 MHz, 400 MHz or 750 MHz (the null-to-null bandwidth is twice the chip rate). Measurements in the 28 GHz, 38 GHz, 60 GHz and 73.5 GHz bands were successively carried out at VT, AUT and NYU [90-99], from which the path loss, delay, angle and other parameters were analyzed [20]. Each measurement activity was designed for a specific environment, but similar equipment and procedures were used. Directional antennas were adopted at the BS (the transmitter of the channel sounder) and the UT (the receiver of the channel sounder) to send and receive signals, respectively. The directional antenna was installed on a 3D rotating tripod, and for every setting of the BS antenna orientation or UT antenna height, the UT antenna was rotated in steps over the whole solid angle to scan the channel. The related work is outlined in [20] and in Sect. 2.1.2 of [4]. Based on these measurements, an outdoor mmWave channel model [100, 101] was established using a time-cluster/spatial-lobe modeling method, which includes the path loss, the power and delay parameters of the time clusters and of the rays within a cluster, and the angular parameters represented by the number and spread of the spatial lobes. This model did not support parametric modeling of the elevation angle, i.e., it was a 2D model. Later, by adopting the ray tracing technique, a 3D mmWave channel model within the 3GPP (WINNER) framework was proposed [102], in which the average power (in dB) of the clusters or intra-cluster rays is Gaussian distributed and decays exponentially with delay, as in the SV model. Each ray within a time cluster is randomly assigned to a spatial lobe, which is used to generate its angle values; the angles of the spatial lobes and rays follow Gaussian or Laplacian distributions. Three path loss models, i.e., directional, beam-combining and omni-directional, with the two model structures of close-in reference distance and floating intercept, were proposed.
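Fitting the floating-intercept (FI) structure to measured path-loss samples is a simple least-squares problem; the Python sketch below illustrates this with synthetic data (the function name and the toy data-generation parameters are ours, chosen only for illustration).

import numpy as np

def fit_floating_intercept(d_m, pl_db):
    """Least-squares fit of the Floating Intercept (FI) model
    PL(d) = alpha + 10*beta*log10(d) + X_sigma to measured path-loss samples."""
    x = 10 * np.log10(d_m)
    beta, alpha = np.polyfit(x, pl_db, 1)           # slope and intercept
    sigma = np.std(pl_db - (alpha + beta * x))      # shadow-fading std (dB)
    return alpha, beta, sigma

# toy data: CI-like path loss with exponent 3 plus 4 dB shadowing (illustration only)
rng = np.random.default_rng(0)
d = rng.uniform(10, 200, 300)
pl = 61.4 + 30 * np.log10(d) + rng.normal(0, 4, d.size)
print(fit_floating_intercept(d, pl))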
The main measurements used to produce the IEEE 802.11ad channel model [9] were carried out by the Intel Nizhny Novgorod Lab (INNL) in Russia [48], as well as by the Technical University of Braunschweig (TUB) [103], TUI [104], Fraunhofer HHI [105], MEDAV and IMST in Germany. A customized broadband sounder composed of a VSG, a VSA and two commercial 60 GHz up/down-converter modules was used by INNL; an 18 dBi standard-gain antenna that could be rotated in steps over the whole 3D space was used at each of the transmitter and the receiver. A VNA-based sounder was used by TUB to measure the human blockage loss. The channel sounder used by Fraunhofer HHI comprised an arbitrary waveform generator (AWG), an oscilloscope and up/down-converters. TUI, MEDAV and IMST cooperated to build a 1T2R (one transmitting port and two receiving ports) broadband (up to 3 GHz) sounding system composed of an ultra-broadband receiver, a frequency synthesizer and two up/down-converters. Two types of link, access and D2D, were measured in indoor environments such as living rooms, meeting rooms and offices with cubicles. The measurement results showed obvious clustering in the channel, which matched well with the results of ray tracing simulations.
Fraunhofer HHI laboratory and Intel Mobile Communications (IMC) laboratory in
Germany took part in the MiWEBA project [4]. Fraunhofer HHI used self-
developed baseband sounder with 250 MHz bandwidth, commercial 60 GHz up/
down-converter module and approximate omni-directional (2 dBi) antennas to
measure the access channel for street canyon scenario. The transmit antenna was
3.5 m high and the antenna of the mobile receiver was 1.5 m. The power and delay
of several paths were measured and analyzed. IMC used the same equipment as
INNL to measure the access channel on campus. A 19.8 dBi rectangular horn
antenna or a 34.5 dBi lens antenna was used depending on the Tx-Rx distance
less than or more than 35 m and placed at 6.2 m high. The mobile receive antenna
was a 12.3 dBi circular horn antenna placed at 1.5 m high. The measurement results
showed that two strongest paths were LOS and the ground reflection path, while the
other paths were 15~20 dB lower. The measurement results matching well with the
ray tracing calculation prompted the project team to use the Q-D modeling method
to construct the channel model, which is comprised of deterministric D rays and
random R rays. For more details please refer to the MiWEBA model [4]. The
researchers in Fraunhofer HHI measured the propagation at 10 GHz and 60 GHz
simultaneously and and compared. In the LOS case, they found the propagation loss
at 60 GHz was 15.6 dB higher than that at 10 GHz, which conformed to the Friis
free space path loss equation. These two channels showed similar characteristics,
but at 60 GHz there were less resolvable paths due to smaller dynamic range. In the
NLOS case, some paths simultaneously occured at both frequency bands, while
some paths occured at only one frequency band. In general, similar to the LOS case,
there were less resolvable paths at 60 GHz.
Some measurements at 15 GHz, 28 GHz, 60 GHz, and in the E band (81~86 GHz) were carried out by Aalto University in Finland and are outlined as follows.
1. The measurements at 60 GHz in a conference room were carried out by using a VNA-based sounder. At the transmitter side, a virtual horizontal uniform planar array (VHUPA) with 7 × 7 elements was formed by translating a vertically polarized biconical antenna. At the receiver side, a vertically polarized OEW was used to form a virtual vertical uniform planar array (VVUPA) with 7 × 7 elements. High resolution estimation algorithms were used to obtain the channel parameters, based on which an SV-like model was proposed [49].
2. In the METIS project, multiple measurements at 61~65 GHz (4 GHz bandwidth) in an indoor shopping mall, an indoor cafeteria and an outdoor square were carried out by using a VNA-based sounder [1]. A 20 dBi horn antenna was horizontally rotated with a step of 3° at the transmitter side, while a 5 dBi biconical antenna was used at the receiver side, so that only single-directional and 2D channel parameters were obtained. In the indoor cafeteria and the outdoor square, more channel parameters were obtained by adopting the point-cloud field prediction technique (a laser ranging system was used to build a 3D electronic map called a point cloud). After calibrating with the measurement results, a 3D channel model for these two scenarios was proposed.
3. In the mmMAGIC project, they completed many measurements, e.g., in the street canyon at multiple frequencies, in the open square at 27 GHz, and in the airport check-in area at 15 GHz and 28 GHz.
4. With a Sweep Frequency Generator (SFG) as the transmitter and a VNA as the receiver, the wireless channel in the 81~86 GHz E band was measured [106-108]. A 24 dBi and a 45 dBi directional antenna were used in the measurements. The scenarios included street canyon and roof-to-street, and the maximum measurement range was up to 1100 m. It was found that MPCs still existed even in the LOS condition with high-gain directional antennas, and the first MPC was 20 dB lower than the LOS component.
The researchers at Ericsson used a VNA-based sounder to complete a series of measurements while participating in the METIS, mmMAGIC, and 5GCM projects. (1) Human blockage measurements indoors in the 57.68-59.68 GHz band with two 10 dBi antennas (60° elevation beamwidth, 30° horizontal beamwidth) were carried out, and the blockage loss was found to be as high as 10-20 dB. (2) Measurements at medium indoor range and in a long corridor of an office environment were performed at multiple frequencies to find the path loss exponent; the diffraction components were found to contribute the main power for indoor mmWave transmission in the NLOS case. (3) Multi-frequency measurements at 2.44, 14.8 and 58.68 GHz were carried out in street microcell environments in the LOS and NLOS cases. The measurements found that in the NLOS case the path loss is less than expected from knife edge diffraction, and the frequency dependence is not as obvious as for diffraction, which revealed that other reflection or diffuse components dominate the propagation in the outdoor NLOS scenario. (4) The penetration loss through walls at multiple frequencies was measured. (5) Rich channel measurements were conducted in a large open office in the LOS and NLOS cases [114], in which the PADP, the composite and per-cluster DS and AS, the cluster number, and the number of rays within a cluster were analyzed. Xiongwen Zhao et al. of North China Electric Power University (NCEPU) measured the blockage attenuation at 26 and 39.5 GHz. Vogler's multiple KED (Knife Edge Diffraction) model was used to model the blockage effects, and the loss at 26 GHz was found to be smaller than that at 39.5 GHz [115].
In channel sounding, the measurement data, or the received signals, are a superposition of altered versions of the transmit signal passing through multiple paths with different delays, gains, and directions. The processing applied to the measurement data includes two aspects: path parameter extraction and channel statistical analysis. Path parameter extraction is to extract the parameters of each path from the received signals using a high resolution parameter estimation algorithm. These path parameters include the complex amplitude, delay, arrival angle, departure angle, Doppler shift and so on. Channel statistical analysis is to further analyze the obtained path parameters and find the statistical characteristics and descriptions of the sounded channel.
There are mainly two categories of commonly used channel parameter extraction algorithms, i.e., subspace-based methods and maximum likelihood methods. Within the first category are the MUltiple SIgnal Classification (MUSIC) algorithm [116], the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) algorithm [117, 118] and its variant Unitary ESPRIT [119-121], etc. In the second category, there are the Expectation Maximization (EM) algorithm [122, 123], the Space Alternating Generalized EM (SAGE) algorithm [124-129] and the frequency-domain SAGE algorithm [130], the Sparse Variational Bayesian SAGE (SVB-SAGE) algorithm [131], and Richter's Maximum Likelihood Framework for Parameter Estimation (RIMAX) algorithm [132], etc.
The ESPRIT algorithm and its variants have been developed to either jointly estimate delay and azimuth angle, or jointly estimate azimuth and elevation angles [120]. MEDAV's MIMO channel sounder RUSK took ESPRIT as its core algorithm at an early stage, and later adopted the RIMAX algorithm. The EM algorithm alternates the E (expectation) step and the M (maximization) step iteratively to reach the optimal solution, but with a relatively slow convergence speed. The SAGE algorithm is based on the EM algorithm; it separates all parameters into several subsets and updates one parameter subset at a time in the iterative process, which reduces the dimensionality of each maximization step and speeds up convergence.
3.4.1.1 EM Algorithm
Based on the signal model, one can write the probability density function (PDF) of the measurement data in terms of the signal parameters (also known as the likelihood function of the parameters). Maximum likelihood (ML) parameter estimation algorithms choose the parameter values that maximize the likelihood function. Usually, the expression of the likelihood function is too complex to directly obtain the parameters maximizing it; thus, iterative algorithms are often resorted to. The EM algorithm can effectively estimate parameters by iteratively carrying out the E step and the M step [122, 123]. The classical EM algorithm can be described as follows. Firstly, in the E step, the posterior distribution of the complete (unobservable) data is found with respect to the observable measurement data (incomplete data) and the assumed (or previously updated) parameters, and is used to calculate the expectation of the complete-data log-likelihood. Then the parameters that maximize this expectation are found in the M step.
For channel sounding, the EM algorithm splits the optimization problem of jointly estimating multiple superimposed paths into multiple separate optimization problems, each estimating a single path. Here we illustrate the EM algorithm used in a SIMO (single transmitter and M receivers) sounding system. The received signal is the superposition of the signals of L paths, which can be expressed as

$$\mathbf{y}(t) = [y_1(t), \ldots, y_M(t)]^T = \sum_{l=1}^{L} \mathbf{s}(t; \boldsymbol{\theta}_l) + \mathbf{N}(t) = \sum_{l=1}^{L} \mathbf{c}(\varphi_l)\, \alpha_l\, e^{j2\pi \nu_l t}\, u(t - \tau_l) + \mathbf{N}(t) \qquad (3.6)$$
where $u(t)$ is the time-domain sounding signal with transmit power $P_u$ and period $T_a$. $I$ snapshots are measured with time interval $T_f$. $\boldsymbol{\theta}_l = [\tau_l, \varphi_l, \nu_l, \alpha_l]$ denotes the vector containing the parameters of the l-th path, including the path delay $\tau_l$, azimuth AOA $\varphi_l$, Doppler shift $\nu_l$ and complex amplitude $\alpha_l$. $\mathbf{c}(\varphi) = [c_1(\varphi), \ldots, c_M(\varphi)]^T$ denotes the steering vector of the antenna array, reflecting two aspects of information: the antenna radiation pattern and the geometric structure of the array. $\mathbf{N}(t) = [n_1(t), \ldots, n_M(t)]^T$ denotes M-dimensional complex white Gaussian noise.
Define the complete (unobservable) data sets $x_l(t) = \mathbf{s}(t; \boldsymbol{\theta}_l) + \sqrt{N_0/2}\,\mathbf{n}_l(t)$ and the incomplete (observable) data set $\mathbf{y}(t) = \sum_{l=1}^{L} x_l(t)$. First of all, assuming that $x_l(t)$ is known, the log-likelihood function of the complete data $x_l(t)$ in terms of the parameter vector $\boldsymbol{\theta}_l$ can be expressed as follows,

$$\Lambda(x_l; \boldsymbol{\theta}_l) = \frac{2}{N_0}\left[\, \mathrm{Re}\!\int_{D'} \mathbf{s}^{H}(t'; \boldsymbol{\theta}_l)\, x_l(t')\, dt' - \frac{1}{2}\int_{D'} \big\| \mathbf{s}(t'; \boldsymbol{\theta}_l) \big\|^2\, dt' \right] \qquad (3.7)$$
The parameters that maximize this likelihood function are simply written as

$$\hat{\boldsymbol{\theta}}_l^{\,\mathrm{ML}}(x_l) = \arg\max_{\boldsymbol{\theta}_l}\, \big\{ \Lambda(x_l; \boldsymbol{\theta}_l) \big\} \qquad (3.8)$$
E Step: In fact $x_l(t)$ is unknown; thus, we need to estimate $x_l(t)$ first based on the incomplete data $\mathbf{y}(t)$. One direct way is to use the conditional expectation of $x_l(t)$ with respect to $\mathbf{y}(t)$ and the predefined (or previously updated) parameters. Specifically, we can use the successive interference cancellation method to obtain $x_l(t)$, assuming the parameters of the other paths are known,

$$\hat{x}_l(t; \hat{\boldsymbol{\theta}}') = \mathbf{y}(t) - \sum_{\substack{l'=1 \\ l' \neq l}}^{L} \mathbf{s}(t; \hat{\boldsymbol{\theta}}'_{l'}) \qquad (3.9)$$
M Step: The parameters $\boldsymbol{\theta}_l$ of the l-th path can then be re-estimated by computing their MLE based on the estimate of the l-th path signal,

$$\hat{\boldsymbol{\theta}}''_l = \hat{\boldsymbol{\theta}}_l^{\,\mathrm{ML}}\big(\hat{x}_l(t; \hat{\boldsymbol{\theta}}')\big) \qquad (3.10)$$

where the maximization is carried out through the correlation function

$$z(\tau, \varphi, \nu; x_l) = \sum_{i=1}^{I} \int_{D_i} u^{*}(t' - \tau)\, e^{-j2\pi\nu t'}\, \mathbf{c}^{H}(\varphi)\, \hat{x}_l(t')\, dt' \qquad (3.11)$$
As for the initialization of the SAGE algorithm, the initial parameters of each path can be set to all zeros or obtained using the MUSIC algorithm. In the M step, since there is no a priori knowledge about the phase of the complex amplitudes, the maximization procedures for $\tau$ and $\varphi$ are replaced by two non-coherent estimation procedures,

$$\hat{\tau}''_l = \arg\max_{\tau}\left\{ \sum_{m=1}^{M} \sum_{i=1}^{I} \left| \int_{D_i} u^{*}(t' - \tau)\, \hat{x}_{l,m}(t'; \hat{\boldsymbol{\theta}}')\, dt' \right|^2 \right\}$$
$$\hat{\varphi}''_l = \arg\max_{\varphi}\left\{ \sum_{i=1}^{I} \left| \int_{D_i} u^{*}(t' - \hat{\tau}''_l)\, \mathbf{c}^{H}(\varphi)\, \hat{x}_l(t'; \hat{\boldsymbol{\theta}}')\, dt' \right|^2 \right\} \qquad (3.13)$$
The calculation of the Doppler shift still uses the same procedure as in the above iterative process. Correspondingly, since only l-1 paths have been estimated, in the initialization process the E step for estimating the l-th path parameters is replaced by

$$\hat{x}_l(t; \hat{\boldsymbol{\theta}}') = \mathbf{y}(t) - \sum_{l'=1}^{l-1} \mathbf{s}(t; \hat{\boldsymbol{\theta}}''_{l'}) \qquad (3.14)$$
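To make the E and M steps above more concrete, the following minimal Python sketch implements a simplified SAGE loop for the SIMO model of (3.6). It ignores the Doppler dimension, assumes a periodic sounding sequence so that delays can be applied cyclically, and uses simple grid searches; the function names, the half-wavelength ULA assumption and the fixed number of paths L are illustrative choices, not the reference implementation of [124-129].

```python
import numpy as np

def steering(phi, M, d=0.5):
    """ULA steering vector; phi measured from the array axis, spacing d in wavelengths."""
    return np.exp(1j * 2 * np.pi * d * np.arange(M) * np.cos(phi))

def path_signal(u, tau, phi, alpha, M):
    """M x N contribution of one path (integer delay in samples, cyclic shift)."""
    return alpha * np.outer(steering(phi, M), np.roll(u, tau))

def sage(y, u, L, n_iter=5, phi_grid=np.linspace(0.01, np.pi - 0.01, 179)):
    """Toy SAGE: y is the M x N received data, u the known sounding sequence."""
    M, N = y.shape
    est = [dict(tau=0, phi=phi_grid[0], alpha=0.0) for _ in range(L)]
    for _ in range(n_iter):
        for l in range(L):
            # E step (3.9): cancel all other currently estimated paths
            x_l = y - sum(path_signal(u, e["tau"], e["phi"], e["alpha"], M)
                          for k, e in enumerate(est) if k != l)
            # Non-coherent delay estimate: correlate each antenna output with u
            # and sum the powers over antennas (cf. the tau update above)
            corr = np.array([np.correlate(x_l[m], u, mode="full")[N - 1:]
                             for m in range(M)])          # M x N delay bins
            tau = int(np.argmax(np.sum(np.abs(corr) ** 2, axis=0)))
            # Non-coherent AOA estimate (3.13): beamform the delay-matched output
            z = np.array([steering(p, M).conj() @ corr[:, tau] for p in phi_grid])
            phi = phi_grid[int(np.argmax(np.abs(z)))]
            # Amplitude: normalized matched-filter output at the found (tau, phi)
            alpha = (steering(phi, M).conj() @ corr[:, tau]) / (M * np.sum(np.abs(u) ** 2))
            est[l] = dict(tau=tau, phi=phi, alpha=alpha)
    return est
```

In this sketch each path is refined in turn against the residual of all other paths, which is exactly the serial interference-cancellation structure that distinguishes SAGE from a joint maximization over all paths.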
For a single path signal, the Fisher information matrix can be derived from its likelihood function, Eq. (3.5), and then performance bounds for the parameter estimates can be obtained. SIMO sounding is again taken as an example, i.e., one transmit antenna and a uniform linear array comprised of M receive antennas. It is assumed that the sounding sequence u(t) contains K chips with chip duration $T_p$ and sequence length $T_a = K T_p$. One channel sounding is carried out in each snapshot and there is a total of I snapshots with snapshot spacing $T_f$. Then the Cramér-Rao lower bounds (CRLBs) of the path parameters, AOA $\varphi$, Doppler shift $\nu$, delay $\tau$ and path gain $\alpha$, are given by
$$\mathrm{CRLB}(\varphi) = \frac{1}{\gamma_O}\cdot\frac{3}{2\,(\pi d/\lambda)^2 \sin^2(\varphi)\,(M^2-1)}, \qquad
\mathrm{CRLB}(\nu) = \frac{1}{\gamma_O}\cdot\frac{3}{2\pi^2 T_f^2\,(I^2-1)},$$
$$\mathrm{CRLB}(\tau) = \frac{1}{\gamma_O}\cdot\frac{1}{8\pi^2 B_u^2}, \qquad
\frac{\mathrm{CRLB}(|\alpha|)}{|\alpha|^2} = \frac{1}{\gamma_O} \qquad (3.15)$$
where $B_u$ is the Gabor bandwidth of the transmitted signal $u(t)$, a function of the chip duration $T_p$, the number of samples per chip $N_s$ and the sequence length $K$. $\gamma_O$ is the signal-to-noise ratio (SNR) at the output of the correlator, $\gamma_O = M I K N_s \gamma_I$, and $\gamma_I$ is the SNR at the input of each antenna port, $\gamma_I = P_u|\alpha|^2/(N_0/T_s)$. $d$ is the spacing between antenna elements. When $d$ is equal to half a wavelength, $\mathrm{CRLB}(\varphi)$ coincides with expression (27) in [126]. It can be seen from the above formulas that each bound is inversely proportional to $\gamma_O$, which means the estimation error can be reduced by increasing the antenna number M, the snapshot number I and the sequence length K. Meanwhile, increasing M further reduces the error of the AOA estimate. $\varphi$ is the angle between the arrival direction and the array axis. The $\sin^2(\varphi)$ term in the AOA bound shows that the estimation error is minimum when the wave impinges from the broadside of the array, and $\mathrm{CRLB}(\varphi)$ gradually goes to infinity as the arrival direction approaches the array axis. Since $\varphi$ is always between 0 and $\pi$, the estimation error should in fact be bounded; therefore, when $\varphi$ is close to 0 or $\pi$, estimation ambiguity exists and $\mathrm{CRLB}(\varphi)$ in the above expression is no longer applicable. Increasing the snapshot number I and the snapshot interval $T_f$ will reduce the Doppler shift estimation error. However, $T_f$ is limited by the maximum Doppler shift according to the Nyquist sampling theorem, i.e., $T_f < 1/(2\nu)$.
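As a quick numerical illustration of (3.15), the small sketch below evaluates the four bounds for a given sounder configuration. The default of $1/T_p$ used as a stand-in for the Gabor bandwidth $B_u$, as well as the function name, are assumptions for illustration only.

```python
import numpy as np

def crlb_sage_simo(M, I, K, Ns, gamma_I_dB, Tp, Tf, d_over_lambda=0.5,
                   phi=np.pi / 2, Bu=None):
    """Evaluate the CRLBs of (3.15) for a SIMO sounder.

    gamma_I_dB : per-antenna input SNR in dB
    Bu         : Gabor bandwidth of u(t); defaults to 1/Tp as a rough proxy
    phi        : AOA measured from the array axis (broadside = pi/2)
    """
    gamma_I = 10 ** (gamma_I_dB / 10)
    gamma_O = M * I * K * Ns * gamma_I            # correlator output SNR
    Bu = 1.0 / Tp if Bu is None else Bu
    crlb_phi = 3 / (gamma_O * 2 * (np.pi * d_over_lambda) ** 2
                    * np.sin(phi) ** 2 * (M ** 2 - 1))
    crlb_nu = 3 / (gamma_O * 2 * np.pi ** 2 * Tf ** 2 * (I ** 2 - 1))
    crlb_tau = 1 / (gamma_O * 8 * np.pi ** 2 * Bu ** 2)
    crlb_rel_amp = 1 / gamma_O                    # CRLB(|alpha|)/|alpha|^2
    return dict(phi=crlb_phi, nu=crlb_nu, tau=crlb_tau, rel_amp=crlb_rel_amp)

# Example: 8-element ULA, 10 snapshots, 255-chip PN sequence at 200 MHz chip rate
bounds = crlb_sage_simo(M=8, I=10, K=255, Ns=2, gamma_I_dB=0,
                        Tp=1 / 200e6, Tf=1e-3)
print({k: float(v) for k, v in bounds.items()})
```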
The resolution of several commonly used estimation techniques for delay, angle and Doppler shift, such as correlation, beamforming and Fourier methods, is limited by the intrinsic parameters of the measurement system. These resolutions are defined as the half-widths of the main lobe of the magnitude of the corresponding correlation functions. For a uniform linear array with M antenna elements, these half-widths are respectively

$$\Delta\tau_c = T_p, \qquad \Delta\varphi_c = \frac{360^{\circ}}{M}, \qquad \Delta\nu_c = \frac{1}{I\,T_f} \qquad (3.16)$$
Two paths are called separable or resolvable only when the differences of their parameters satisfy the following conditions

$$|\Delta\tau| > \Delta\tau_c, \qquad |\Delta\varphi| > \Delta\varphi_c, \qquad |\Delta\nu| > \Delta\nu_c \qquad (3.17)$$
The SAGE algorithm described above is applicable to channel sounding using a time-domain PN excitation signal. In some channel sounders, frequency-domain excitation signals are used instead, such as broadband multi-carrier excitation or the stepped single-frequency excitation of a VNA. Correspondingly, the frequency-domain SAGE (FD-SAGE) algorithm has been developed for parameter extraction [130]. Here we still use a SIMO (single transmitter and multiple receivers) system as an example, and assume that the entire measurement environment is stationary (i.e., there is no need to estimate the Doppler shift). According to this configuration, the received signal in the frequency domain is given by

$$\mathbf{Y}(f) = \sum_{l=1}^{L} \mathbf{c}(\varphi_l)\, \alpha_l\, e^{-j2\pi f \tau_l} + \mathbf{N}(f) \qquad (3.18)$$
In the E step, the data for the l-th path are obtained by successive interference cancellation,

$$\hat{X}_l(k\Delta f; \hat{\boldsymbol{\theta}}') = \mathbf{Y}(k\Delta f) - \sum_{\substack{l'=1 \\ l' \neq l}}^{L} X(k\Delta f; \hat{\boldsymbol{\theta}}'_{l'}) \qquad (3.20)$$

$\hat{X}_l(k\Delta f; \hat{\boldsymbol{\theta}}')$ is stacked into a (K+1)-by-M matrix, in which each column represents the K+1 frequency responses of one antenna element, and each row represents the responses of the M antenna elements at one frequency point. In the M step, the complex amplitude is estimated as

$$\hat{\alpha}''_l = \frac{1}{(K+1)M}\, z\big(\hat{\tau}''_l, \hat{\varphi}''_l; \hat{X}_l\big) \qquad (3.22)$$
Assuming the all-zero initialization method is used, in the M step the path delay is obtained by a non-coherent estimation procedure given by

$$\hat{\tau}_l = \arg\max_{\tau}\left\{ \sum_{m} \left| \sum_{k} e^{\,j2\pi k\Delta f\,\tau}\, \hat{X}_l(k, m) \right|^2 \right\} \qquad (3.23)$$

The calculation of the AOA is the same as in the iterative process. The term in braces in (3.23) can be regarded as the power delay profile of each antenna element summed over all antenna elements. Using the Fourier transform directly will lead to a large number of sidelobe components in the delay domain, and the superposition of multipath signals may produce multiple false peaks, which may lead to improper delay estimates. Usually frequency-domain windowing, such as a Kaiser, Hanning or Gaussian window, can be used to reduce the sidelobes in the time domain [130]. That is, $\hat{X}_l(k, m)$ is replaced by $\tilde{X}_l(k, m) = \hat{X}_l(k, m)\, W(k)$, where $\{W(k),\ k = 1, \ldots\}$ are the window coefficients.
Correspondingly, similar to (3.14), in the initialization process the E step is replaced by

$$\hat{X}_l(k\Delta f; \hat{\boldsymbol{\theta}}') = \mathbf{Y}(k\Delta f) - \sum_{l'=1}^{l-1} X(k\Delta f; \hat{\boldsymbol{\theta}}''_{l'}) \qquad (3.24)$$
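The non-coherent delay search of (3.23), including the frequency-domain windowing just described, can be sketched in a few lines of Python. The grid of candidate delays and the function name are illustrative assumptions; a practical implementation would typically use an FFT instead of the explicit matrix product.

```python
import numpy as np

def fd_delay_estimate(X_l, delta_f, tau_grid, window=None):
    """Non-coherent delay search of (3.23) on frequency-domain data.

    X_l      : (K+1) x M array of per-antenna frequency responses
    delta_f  : frequency spacing in Hz
    tau_grid : candidate delays in seconds
    window   : optional length-(K+1) taper (e.g. np.hanning(K+1)) applied in
               the frequency domain to suppress delay-domain sidelobes
    """
    if window is not None:
        X_l = X_l * window[:, None]
    k = np.arange(X_l.shape[0])
    # element (i, m) = sum_k e^{j 2 pi k df tau_i} X_l[k, m]
    inner = np.exp(1j * 2 * np.pi * np.outer(tau_grid, k * delta_f)) @ X_l
    metric = np.sum(np.abs(inner) ** 2, axis=1)   # power summed over antennas
    tau_hat = tau_grid[int(np.argmax(metric))]
    return tau_hat, metric
```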
The signal emitted from the transmit antenna and arriving at the receive antenna is composed of a line-of-sight (LOS) component, specular components (SCs), and dense multipath components (DMCs). Each SC corresponds to a discrete and strong propagation path formed by an independent scatterer reflecting electromagnetic waves; such propagation paths can be described by simple geometric relationships. However, the contents of the DMCs are far more complex, as illustrated in Fig. 3.8. It is generally understood that the DMCs consist of diffuse components (DC) reflected off rough surfaces whose roughness is comparable to the radio wavelength, diffraction, reflections from different layers of scatterers, echoes among scatterers, and so on.
The contribution of the DMC to the total received power varies with the distance between the transmitter and receiver as well as with the environment. Intuitively, the proportion of DMC is larger in indoor scenarios or in the NLOS case. Recent studies have indicated that DMCs contribute 20% to 80% of the total received power in indoor [137] and industrial [138] environments. Usually the DMC has a larger angular distribution region than the SC. Moreover, DMCs have a longer delay spread than the SCs, and the power of a DMC cluster decays approximately exponentially with delay [139]. Ignoring DMCs in channel modeling will underestimate the channel capacity. Therefore, the DMC is of great significance for channel modeling.
The earliest DMC modeling assumed that the DMCs have a white spectrum in azimuth angle while having an exponentially decaying Power Delay Profile (PDP), which can be expressed as [133]

$$E\big[|x(\tau)|^2\big] = \begin{cases} 0, & \tau < \tau_d \\ \alpha_d/2, & \tau = \tau_d \\ \alpha_d\, e^{-B_d(\tau - \tau_d)}, & \tau > \tau_d \end{cases} \qquad (3.25)$$

$$\kappa_d(f) = \frac{\alpha_d}{B_d + j2\pi f}\, e^{-j2\pi f \tau_d} \qquad (3.26)$$

where $\kappa_d(f)$ is the corresponding frequency-domain covariance function. The parameters describing the DMCs are denoted by $\boldsymbol{\theta}_d = [\alpha_d, B_d, \tau_d]$. The covariance matrix of the DMCs sampled in the frequency domain is a Toeplitz matrix composed of $\kappa_d$, i.e., $\mathbf{R}_d(\boldsymbol{\theta}_d) = \mathrm{toep}(\boldsymbol{\kappa}_d, \boldsymbol{\kappa}_d^H)$. In [133], the Gauss-Newton algorithm is used to iteratively update the DMC parameters, while the initial estimate is found by so-called global search strategies starting from the estimated PDP. A simpler least-squares based method can also be used. First, obtain the residual CIR containing the DMC by subtracting the SC component $\mathbf{H}_{sc}(\tau)$ (obtained by a high resolution estimation algorithm such as SAGE) from the total CIR $\mathbf{H}_{Rx}(\tau)$. Then, the residual CIR is averaged over the antennas to get the residual PDP

$$\psi_{res}(\tau) = \frac{1}{N_t N_r}\, \big\| \mathbf{H}_{Rx}(\tau) - \mathbf{H}_{sc}(\tau) \big\|_F^2 \qquad (3.28)$$

where $\|\cdot\|_F^2$ denotes the squared Frobenius norm of a matrix. The optimal DMC parameters are those that make the parameterized PDP approach the residual PDP,

$$\hat{\boldsymbol{\theta}}_d = \arg\min_{\boldsymbol{\theta}_d} \sum_{m=0}^{M-1} \big| \psi_{res}(\tau_m) - \psi_d(\tau_m; \boldsymbol{\theta}_d) \big|^2 \qquad (3.29)$$

where $\psi_d(\boldsymbol{\theta}_d)$ is the PDP of the DMCs expressed in the discrete time domain, which can be obtained from $\mathbf{R}_d(\boldsymbol{\theta}_d)$,

$$\boldsymbol{\psi}_d(\boldsymbol{\theta}_d) = \mathrm{diag}\big[\mathbf{F}^{-1}\, \mathbf{R}_d(\boldsymbol{\theta}_d)\, \mathbf{F}\big] \qquad (3.30)$$

where $\mathbf{F}$ denotes the Fourier transform matrix, and $\mathrm{diag}[\cdot]$ denotes the diagonal elements of a matrix.
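The least-squares fit of (3.29) can be illustrated with a short sketch that works directly on a delay grid. For simplicity it replaces the Gauss-Newton iteration of [133] by a coarse grid search over the decay constant and onset delay, with the amplitude obtained in closed form; all names and the synthetic example are illustrative assumptions.

```python
import numpy as np

def dmc_pdp(tau, alpha_d, B_d, tau_d):
    """Exponential-decay DMC PDP of (3.25), evaluated on a delay grid."""
    psi = np.where(tau > tau_d, alpha_d * np.exp(-B_d * (tau - tau_d)), 0.0)
    return np.where(np.isclose(tau, tau_d), alpha_d / 2, psi)

def fit_dmc(tau, psi_res, B_grid, tau_d_grid):
    """Least-squares fit of (3.29) by a grid search over (B_d, tau_d);
    for fixed (B_d, tau_d) the optimal alpha_d follows in closed form."""
    best = (np.inf, None)
    for B_d in B_grid:
        for tau_d in tau_d_grid:
            shape = dmc_pdp(tau, 1.0, B_d, tau_d)     # PDP shape with alpha_d = 1
            denom = np.dot(shape, shape)
            alpha_d = np.dot(shape, psi_res) / denom if denom > 0 else 0.0
            err = np.sum((psi_res - alpha_d * shape) ** 2)
            if err < best[0]:
                best = (err, (alpha_d, B_d, tau_d))
    return best[1]

# Example: synthetic residual PDP with a noise floor
tau = np.arange(0, 400e-9, 5e-9)
psi = dmc_pdp(tau, 1e-3, 2e7, 50e-9) + 1e-5 * np.random.rand(tau.size)
print(fit_dmc(tau, psi, B_grid=np.linspace(5e6, 5e7, 20), tau_d_grid=tau[::4]))
```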
More measurements show that the DMC spectrum in azimuth angle is not white but colored. In [134], the estimation and analysis of DMC directional features were presented; its basic ideas are explained as follows. In the delay domain, the power of the DMCs is still modeled as an exponential decay function. In the angular domain, the DMCs are modeled as the sum of multiple DMC clusters, and each DMC cluster corresponds to an SC cluster. The azimuth and elevation angles are described by the angular power spectrum $h(\tau, \varphi, \theta)$ given in (3.31), where $B_i(\varphi, \theta)$ is the 3D radiation pattern of the i-th antenna element and $W_i$ is the window function used to reduce sidelobes. Firstly, the DMCs are clustered in the angle domain according to the azimuth and elevation angular spreads of the SCs: $(\varphi, \theta)_{\mathrm{DMC},k} \in \big[(\varphi, \theta)_{\mathrm{SC},k} - 2\sigma_{(\varphi,\theta),\mathrm{SC},k},\ (\varphi, \theta)_{\mathrm{SC},k} + 2\sigma_{(\varphi,\theta),\mathrm{SC},k}\big]$, where $(\varphi, \theta)_{\mathrm{SC},k}$ and $\sigma_{(\varphi,\theta),\mathrm{SC},k}$ are the average angles and angular spreads of the k-th SC cluster. Each DMC cluster has an independent PDP, which is calculated as

$$\psi_{res,k}(\tau) = \int_{\theta_{\min,\mathrm{DMC},k}}^{\theta_{\max,\mathrm{DMC},k}} \int_{\varphi_{\min,\mathrm{DMC},k}}^{\varphi_{\max,\mathrm{DMC},k}} \big| h(\tau, \varphi, \theta) \big|^2 \sin\theta\; d\varphi\, d\theta \qquad (3.32)$$

Then, according to (3.29), the parameters of each DMC cluster are calculated.
It has been found that the channel MPCs tend to appear in clusters, i.e., in groups of multipath components (MPCs) with similar parameters. Meanwhile, describing the channel and carrying out subsequent processing on a per-cluster basis significantly reduces the number of parameters appearing in the channel model. This is also one reason why clusters are used as the basis for the GSCM and SV modeling methods. In this part, we describe automatic clustering algorithms. Two problems are involved: one is how to cluster all of the MPCs when the number of clusters is given, and the other is how to determine the optimal number of clusters and the optimal criteria. Clustering algorithms, such as K-Means, standard or fuzzy C-Means and other algorithms [144, 145], can be used to deal with the first problem. For the second problem, the commonly used methods at present rely on a variety of indexes to determine the number of clusters, such as the Kim-Park index [146], the Davies-Bouldin index, the Dunn index, and the Calinski-Harabasz index [147]. The following gives a simple description of automatic clustering.
1. Clustering. Assume there are L MPCs in total, each of which has four parameters: delay $\tau$, AOA $\varphi_{\mathrm{AOA}}$, AOD $\varphi_{\mathrm{AOD}}$ and power P. The parameter set of each path is denoted by $X_l = [\tau_l, \varphi_{\mathrm{AOA},l}, \varphi_{\mathrm{AOD},l}, P_l]$ and the parameter set of all L MPCs is denoted by $\{X_l,\ l = 1, \ldots, L\}$. The expected number of clusters is K and each MPC belongs to one of the K clusters. According to the K-Means algorithm, clustering is executed as follows (a small code sketch of the whole procedure is given after (3.38)). Firstly, K MPCs are randomly selected from the set as the centers of the K clusters, denoted by $c_k$. Then the Multipath Component Distance (MCD), $\mathrm{MCD}(X_l, c_k)$, between each MPC and the K cluster centers is calculated. The MCD denotes the distance between two paths. For the i-th and j-th MPCs, their MCD can be expressed as [148]

$$\mathrm{MCD}(X_i, X_j) = \sqrt{\, \|\mathrm{MCD}_{\mathrm{AOA},ij}\|^2 + \|\mathrm{MCD}_{\mathrm{AOD},ij}\|^2 + \mathrm{MCD}_{\tau,ij}^2 \,} \qquad (3.33)$$

There are three MCD components, two for angle and one for delay. The calculations of the angular and delay MCD components are different: the angular MCD component is related to the distance between the 3D direction vectors of the two paths, while the delay MCD component is the normalized delay difference,

$$\mathrm{MCD}_{\mathrm{AOA/AOD},ij} = \frac{1}{2}\left\| \begin{pmatrix} \sin\theta_i\cos\varphi_i \\ \sin\theta_i\sin\varphi_i \\ \cos\theta_i \end{pmatrix} - \begin{pmatrix} \sin\theta_j\cos\varphi_j \\ \sin\theta_j\sin\varphi_j \\ \cos\theta_j \end{pmatrix} \right\|, \qquad
\mathrm{MCD}_{\tau,ij} = \zeta\,\frac{|\tau_i - \tau_j|}{\Delta\tau_{\max}}\cdot\frac{\tau_{std}}{\Delta\tau_{\max}} \qquad (3.34)$$

where $\theta_i$ and $\varphi_i$ are the elevation and azimuth angles respectively, $\tau_{std}$ is the standard deviation of all path delays, $\Delta\tau_{\max}$ is the maximum excess delay, and $\zeta$ is a proportional factor, usually set to 1, which is used to adjust the weight of the delay in the distance calculation.
Next, the index of the cluster whose center is nearest to each path is taken as the cluster identification of that path, so that the set can be divided into K clusters. Then new cluster centers are calculated as the power-weighted averages of the delay, AOA and AOD components of all the MPCs belonging to the same cluster, i.e.,

$$c_k^{(i+1)} = \frac{\sum_{l \in c_k^{(i)}} P_l\, [\tau, \varphi_{\mathrm{AOA}}, \varphi_{\mathrm{AOD}}]_l}{\sum_{l \in c_k^{(i)}} P_l} \qquad (3.35)$$
2. Determining the optimal number of clusters. The idea of using the Kim-Park index to determine the cluster number is to minimize the sum of the normalized intra-cluster distance and the normalized inter-cluster distance [146]. The average intra-cluster distance is calculated as follows

$$u(K) = \frac{1}{K}\sum_{k=1}^{K} \frac{\sum_{l \in c_k} \mathrm{MCD}(X_l, c_k)}{|c_k|} \qquad (3.36)$$
where $|c_k|$ is the cardinality of $c_k$, namely the number of MPCs in the k-th cluster. The average intra-cluster distance reflects the compactness of a cluster. When the number of clusters is excessive, u(K) becomes very small. On the contrary, when the cluster number is too small, many MPCs that are distinct or weakly correlated are assigned to the same cluster, so u(K) becomes large. The measure of inter-cluster distance is calculated as follows

$$o(K) = \frac{K}{\min_{i \neq j} \mathrm{MCD}(c_i, c_j)} \qquad (3.37)$$
When the number of clusters is too small, the minimum inter-cluster distance $\min_{i \neq j}\mathrm{MCD}(c_i, c_j)$ is large, so o(K) is very small. On the contrary, when the number of clusters is excessive, o(K) becomes very large. Thus, there is an optimal point of u(K) + o(K) when varying the number of clusters; that is the key idea of the K-P index. Going through all possible cluster numbers, $K = 2, \ldots, K_{\max}$, the optimal number of clusters $K_{opt}$ is determined by

$$K_{opt} = \arg\min_{K}\left\{ \frac{u(K) - \min_K u(K)}{\max_K u(K) - \min_K u(K)} + \frac{o(K) - \min_K o(K)}{\max_K o(K) - \min_K o(K)} \right\} \qquad (3.38)$$
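The following minimal sketch ties (3.33)-(3.38) together: an MCD-based K-Means pass followed by a Kim-Park-style selection of the cluster number. Each MPC is assumed to be a dict with keys 'tau', 'aoa', 'aod' (azimuth and elevation in radians) and 'P'; this data layout, the naive (unwrapped) angle averaging and the simple search over K are illustrative choices, not the reference implementations of [146, 148].

```python
import numpy as np

def mcd(x, c, tau_std, dtau_max, zeta=1.0):
    """Multipath Component Distance (3.33)-(3.34) between path x and center c."""
    def unit(az, el):      # unit direction vector, el measured from the z-axis
        return np.array([np.sin(el) * np.cos(az), np.sin(el) * np.sin(az), np.cos(el)])
    d_aoa = 0.5 * np.linalg.norm(unit(*x["aoa"]) - unit(*c["aoa"]))
    d_aod = 0.5 * np.linalg.norm(unit(*x["aod"]) - unit(*c["aod"]))
    d_tau = zeta * abs(x["tau"] - c["tau"]) / dtau_max * tau_std / dtau_max
    return np.sqrt(d_aoa ** 2 + d_aod ** 2 + d_tau ** 2)

def kmeans_mcd(paths, K, tau_std, dtau_max, n_iter=20, rng=np.random.default_rng(0)):
    """Power-weighted K-Means (3.35) under the MCD metric."""
    centers = [dict(paths[i]) for i in rng.choice(len(paths), K, replace=False)]
    for _ in range(n_iter):
        labels = [int(np.argmin([mcd(p, c, tau_std, dtau_max) for c in centers]))
                  for p in paths]
        for k in range(K):
            members = [p for p, lb in zip(paths, labels) if lb == k]
            if not members:
                continue
            w = np.array([p["P"] for p in members]); w /= w.sum()
            centers[k] = {"tau": float(np.dot(w, [p["tau"] for p in members])),
                          "aoa": tuple(np.dot(w, [p["aoa"] for p in members])),
                          "aod": tuple(np.dot(w, [p["aod"] for p in members])),
                          "P": float(sum(p["P"] for p in members))}
    return labels, centers

def kim_park_optimal_K(paths, K_range, tau_std, dtau_max):
    """Pick K (K_range must start at 2) by minimizing normalized u(K)+o(K), cf. (3.38)."""
    u, o = [], []
    for K in K_range:
        labels, centers = kmeans_mcd(paths, K, tau_std, dtau_max)
        u.append(np.mean([np.mean([mcd(p, centers[k], tau_std, dtau_max)
                                   for p, lb in zip(paths, labels) if lb == k] or [0])
                          for k in range(K)]))
        o.append(K / min(mcd(centers[i], centers[j], tau_std, dtau_max)
                         for i in range(K) for j in range(i + 1, K)))
    u, o = np.array(u), np.array(o)
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
    return K_range[int(np.argmin(norm(u) + norm(o)))]
```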
In this subsection, we introduce the main channel statistical characteristics that need to be acquired and the commonly used statistical analysis methods. The main contents include the basic distribution fitting methods, the basic correlation analysis, and the quantities calculated in the delay, angular and cluster domains. For more details please refer to the WINNER II document [82].
The channel parameters, such as amplitude, delay and angle, are usually random variables (RVs) following certain distribution functions expressed as a PDF or a cumulative distribution function (CDF). According to the measured data, some criteria are used to determine the suitable distribution function and its statistical parameters so as to describe the channel parameters accurately in a probabilistic sense. These commonly used criteria include maximum likelihood (ML), least squares (LS), minimum mean squared error (MMSE) and uniformly minimum variance unbiased (UMVU) estimation. For a channel parameter, the RV X, it is assumed that the PDF of X is known but with unknown parameters, denoted by $f(x; \theta)$. Parameter estimation is to use the observed samples of X to estimate the unknown parameter $\theta$. Taking the normal distribution as an example, assuming there are n observed samples $x_i$ ($1 \le i \le n$) of the random variable X, the mean and variance of X are given by

$$\hat{\mu} = E[X] = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma}^2 = \mathrm{Var}[X] = \frac{1}{n}\sum_{i=1}^{n} (x_i - \hat{\mu})^2 \qquad (3.39)$$
In the statistical analysis of channel parameters, the commonly used PDFs related to amplitude, angle, and time of arrival are listed in Tables 3.9, 3.10 and 3.11. The estimation methods for their parameters and the methods for generating the corresponding RVs are also given in the three tables.
Table 3.9 Commonly used distribution functions for amplitude

Gaussian (Normal). PDF: $f(x;\mu,\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\big(-\frac{(x-\mu)^2}{2\sigma^2}\big)$. Parameter estimation: $\hat{\mu} = E[X]$; ML: $\hat{\sigma}^2 = \frac{1}{N}\sum_i(x_i-\hat{\mu})^2$; UMVU: $\hat{\sigma}^2 = \frac{1}{N-1}\sum_i(x_i-\hat{\mu})^2$. RV generation (Box-Muller method [149]): $X = \mu + \sigma\sqrt{-2\ln U}\cos(2\pi V)$ or $X = \mu + \sigma\sqrt{-2\ln U}\sin(2\pi V)$, where U and V are uniform random variables, U, V ~ Uniform[0,1).

Rayleigh. PDF: $f(x;\sigma) = \frac{x}{\sigma^2}\exp\!\big(-\frac{x^2}{2\sigma^2}\big)$. Parameter estimation: $\hat{\sigma} = \sqrt{2/\pi}\,E[X]$. RV generation: $X = \sigma\sqrt{-2\ln U}$, U ~ Uniform[0,1).

Ricean. PDF: $f(x;\sigma,K) = \frac{x}{\sigma^2}\exp\!\big(-\frac{x^2}{2\sigma^2} - K\big)\, I_0\!\big(\frac{\sqrt{2K}\,x}{\sigma}\big)$. Parameter estimation (MoM method [150], with $P = X^2$): $\hat{K} = \frac{\sqrt{E^2[P]-\mathrm{Var}[P]}}{E[P]-\sqrt{E^2[P]-\mathrm{Var}[P]}}$, $\hat{\sigma}^2 = \frac{E[P]}{2(\hat{K}+1)}$. RV generation: $X = \sqrt{X_1^2 + Y_1^2}$ with $X_1 \sim N(\sqrt{2K\sigma^2}\cos\theta,\ \sigma^2)$ and $Y_1 \sim N(\sqrt{2K\sigma^2}\sin\theta,\ \sigma^2)$, where $\theta$ is any real number.

Nakagami. PDF: $f(x;m,\Omega) = \frac{2m^m}{\Gamma(m)\Omega^m}\, x^{2m-1}\exp\!\big(-\frac{m}{\Omega}x^2\big)$. Parameter estimation (MoM method [151]): $\hat{m} = \frac{E^2[X^2]}{\mathrm{Var}[X^2]}$, $\hat{\Omega} = E[X^2]$. RV generation: $X = \sqrt{\Omega/(2m)}\, Y$, where Y is a Chi-distributed random variable with 2m degrees of freedom, $Y = \sqrt{\sum_{i=1}^{2m} X_i^2}$, $X_i \sim N(0,1)$.

Lognormal. PDF: $f(x;\mu,\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma x}\exp\!\big(-\frac{(\ln x-\mu)^2}{2\sigma^2}\big)$. Parameter estimation: $\hat{\mu} = E[\ln X]$, $\hat{\sigma}^2 = \mathrm{Var}[\ln X]$. RV generation: $X = \exp(X_G)$, where $X_G$ is a Gaussian random variable, $X_G \sim N(\mu, \sigma^2)$.

Weibull. PDF: $f(x;k,\lambda) = \frac{k}{\lambda}\big(\frac{x}{\lambda}\big)^{k-1}\exp\!\big(-(\frac{x}{\lambda})^k\big)$. Parameter estimation: $\hat{\lambda}^k = E[X^k]$, $\hat{k}^{-1} = \frac{E[X^k\ln X]}{E[X^k]} - E[\ln X]$. RV generation: $X = \lambda\,[-\ln U]^{1/k}$, U ~ Uniform[0,1).

Note: $I_0(x)$ is the zero-order modified Bessel function of the first kind.
Table 3.10 Commonly used distribution functions for angle, $X \in [-\pi, \pi)$

Uniform. PDF: $f(x) = \frac{1}{2\pi}$. Parameter estimation: none. RV generation: $X = 2\pi U - \pi$, U ~ Uniform[0,1).

Wrapped Normal. PDF: $f(x;\mu,\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\sum_{k=-\infty}^{\infty}\exp\!\big(-\frac{(x-\mu+2\pi k)^2}{2\sigma^2}\big)$. Parameter estimation: with $\bar{Z} = E[e^{jX}]$, $\hat{\mu} = \arg(\bar{Z})$ and $\hat{\sigma}^2 = \ln\!\big(\frac{N-1}{N|\bar{Z}|^2 - 1}\big)$. RV generation: $X = \mathrm{mod}(X_G + \pi,\ 2\pi) - \pi$, where $X_G$ is a Gaussian random variable, $X_G \sim N(\mu, \sigma^2)$.

Laplacian. PDF: $f(x;\mu,\sigma) = \frac{1}{\sqrt{2}\,\sigma}\exp\!\big(-\frac{\sqrt{2}\,|x-\mu|}{\sigma}\big)$. Parameter estimation (MLE [152]): $\hat{\mu} = E[X]$, $\hat{\sigma} = \sqrt{2}\,E[|X-\hat{\mu}|]$. RV generation: $X = \mu - \frac{\sigma}{\sqrt{2}}\,\mathrm{sgn}(U)\ln(1-|U|)$, U ~ Uniform[-1,1).

von Mises. PDF: $f(x;\mu,\kappa) = \frac{1}{2\pi I_0(\kappa)}\exp\!\big(\kappa\cos(x-\mu)\big)$. Parameter estimation (Reference [153]): with $\bar{Z} = E[e^{jX}]$, $\hat{\mu} = \arg(\bar{Z})$ and $\hat{\kappa}$ solves $\frac{I_1(\hat{\kappa})}{I_0(\hat{\kappa})} = \sqrt{\frac{N|\bar{Z}|^2 - 1}{N-1}}$. RV generation: Reference [154].

von Mises-Fisher (p = 2 is the von Mises distribution). PDF: $f_p(\mathbf{x};\boldsymbol{\mu},\kappa) = \frac{\kappa^{p/2-1}}{(2\pi)^{p/2} I_{p/2-1}(\kappa)}\exp\!\big(\kappa\,\boldsymbol{\mu}^T\mathbf{x}\big)$, where $\boldsymbol{\mu}$ is the average direction and $\kappa \ge 0$ is the concentration (focus) parameter. Parameter estimation (Reference [155]): $\hat{\boldsymbol{\mu}} = E[\mathbf{X}]/\|E[\mathbf{X}]\|$, and $\hat{\kappa}$ solves $\frac{I_{p/2}(\hat{\kappa})}{I_{p/2-1}(\hat{\kappa})} = \|E[\mathbf{X}]\|$. RV generation: References [156, 157].

Fisher-Bingham (Kent). PDF: $f(\mathbf{x};\kappa,\beta,\boldsymbol{\gamma}_1,\boldsymbol{\gamma}_2,\boldsymbol{\gamma}_3) = \frac{1}{c(\kappa,\beta)}\exp\!\big\{\kappa\,\boldsymbol{\gamma}_1^T\mathbf{x} + \beta\big[(\boldsymbol{\gamma}_2^T\mathbf{x})^2 - (\boldsymbol{\gamma}_3^T\mathbf{x})^2\big]\big\}$, with $c(\kappa,\beta) = 2\pi\sum_{j=0}^{\infty}\frac{\Gamma(j+\frac{1}{2})}{\Gamma(j+1)}\,\beta^{2j}\,\big(\frac{\kappa}{2}\big)^{-2j-\frac{1}{2}} I_{2j+\frac{1}{2}}(\kappa)$; $\mathbf{x}$ is a 3D unit vector, denoted by a point on the unit sphere. Parameter estimation: Reference [158]. RV generation: Reference [159].
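As a small illustration of the estimator and generator columns of Tables 3.9 and 3.10, the sketch below fits a lognormal amplitude (e.g., shadow fading) and a wrapped-normal angle, and regenerates samples from the fitted parameters. Function names and the round-trip example are illustrative assumptions.

```python
import numpy as np

def fit_lognormal(samples):
    """Lognormal fit (Table 3.9): estimate mu and sigma of ln X."""
    ln_x = np.log(samples)
    return ln_x.mean(), ln_x.std(ddof=0)

def fit_wrapped_normal(angles_rad):
    """Wrapped normal fit (Table 3.10) from the first circular moment."""
    N = len(angles_rad)
    Z = np.mean(np.exp(1j * angles_rad))
    mu = np.angle(Z)
    sigma2 = np.log((N - 1) / (N * np.abs(Z) ** 2 - 1))
    return mu, np.sqrt(sigma2)

def gen_lognormal(mu, sigma, size, rng):
    return np.exp(rng.normal(mu, sigma, size))              # X = exp(X_G)

def gen_wrapped_normal(mu, sigma, size, rng):
    return np.mod(rng.normal(mu, sigma, size) + np.pi, 2 * np.pi) - np.pi

# Round-trip check on synthetic data
rng = np.random.default_rng(1)
sf = gen_lognormal(0.0, 0.5, 10000, rng)
az = gen_wrapped_normal(0.3, 0.4, 10000, rng)
print(fit_lognormal(sf), fit_wrapped_normal(az))
```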
Correlation reflects the similarity between two random variables (RVs) in one or more dimensions. The correlation calculation does not need to know the distribution functions of the RVs. It is assumed that the RVs are ergodic, so the correlation can be calculated using an ensemble average. Assume there are $N_x$ and $N_y$ collected values of x and y, respectively; the correlation between x and y can be calculated as

$$R_{xy}(k) = \frac{1}{N_2 - N_1 + 1}\sum_{n=N_1}^{N_2} x(n+k)\, y(n), \qquad
N_1 = \max(-k, 0), \quad N_2 = \min(N_y - 1,\ N_x - 1 - k) \qquad (3.40)$$

This estimate is unbiased, but its variance increases with k. A biased estimate, whose variance is smaller, can be used instead,

$$R_B(k) = \frac{N_2 - N_1 + 1}{\min(N_y, N_x)}\, R_{xy}(k) \qquad (3.41)$$

The corresponding covariance and means are

$$C_{xy}(k) = \frac{1}{N_2 - N_1 + 1}\sum_{n=N_1}^{N_2} \big(x(n+k) - \bar{x}\big)\big(y(n) - \bar{y}\big), \qquad
\bar{x} = \frac{1}{N_x}\sum_{n=1}^{N_x} x(n), \quad \bar{y} = \frac{1}{N_y}\sum_{n=1}^{N_y} y(n) \qquad (3.42)$$

and the normalized correlation coefficient is

$$\rho_{xy}(k) = \frac{C_{xy}(k)}{\sqrt{C_{xx}(0)\, C_{yy}(0)}} \qquad (3.43)$$
When the spatial correlation is calculated, if the spatial distance between snapshots is not constant, e.g., the measured routes are not always along a straight line or the speed varies, then the above calculation becomes problematic: the correlation value no longer has a fixed relationship with the snapshot lag k. Therefore, the snapshot pairs must be grouped by distance $\Delta$, with the corresponding set of lags denoted by $k(\Delta)$, and the correlation becomes

$$R(\Delta) = \frac{1}{N(\Delta)}\sum_{n}\sum_{k \in k(\Delta)} x(n+k)\, y(n) \qquad (3.45)$$

where $N(\Delta)$ is the number of valid data pairs $[x(n + k(\Delta)),\ y(n)]$ at distance $\Delta$.
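The distance-binned correlation of (3.45) is easy to implement by pairing all snapshots and binning the pairs by separation, as in the sketch below. The pairing strategy, the bin edges and the synthetic shadow-fading example are illustrative choices under the assumption of a 1D measurement route.

```python
import numpy as np

def corr_coeff(x, y, k):
    """Normalized lag-k correlation coefficient in the spirit of (3.42)-(3.43)."""
    n1, n2 = max(-k, 0), min(len(y) - 1, len(x) - 1 - k)
    xs, ys = x[n1 + k:n2 + k + 1] - x.mean(), y[n1:n2 + 1] - y.mean()
    return np.mean(xs * ys) / np.sqrt(np.var(x) * np.var(y))

def corr_vs_distance(values, positions, bin_edges):
    """Distance-binned autocorrelation, cf. (3.45): pair all snapshots,
    bin the pairs by separation and average the centered products."""
    v = values - values.mean()
    i, j = np.triu_indices(len(values), k=1)
    dist = np.abs(positions[i] - positions[j])
    prod = v[i] * v[j] / values.var()
    return np.array([prod[(dist >= lo) & (dist < hi)].mean()
                     for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])

# Example: correlated shadow fading samples along an unevenly sampled route
rng = np.random.default_rng(5)
pos = np.cumsum(rng.uniform(0.5, 3.0, 400))
sf = np.convolve(rng.normal(0, 4, 449), np.ones(50) / 50, mode="valid")
print(corr_vs_distance(sf, pos, np.arange(0, 60, 10)))
```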
The items to be analyzed in this part include the CIR, the time-varying PDP, the PDP, the MED, the DS, the total power, the path loss, the SF standard deviation, and the Ricean K-factor. At this stage, all calculations are based on the CIR and no angular information is needed. The CIRs can be obtained by correlating the received samples with the PN sequence (known to the receiver) in time-domain sounding, or directly by Fourier transformation for frequency-domain excitation, including broadband OFDM signals or the stepped single carrier of a VNA-based system. Of course, we can also use a high resolution parameter estimation algorithm to extract the angular information and obtain the CIRs by accumulating over the angle domain. The former is called the direct method, and the latter the synthesis method.

The time-variant PDP is computed as

$$P(t, \tau'; p) = \frac{1}{N_s N_u}\sum_{s=1}^{N_s}\sum_{u=1}^{N_u} \big| h\big(t,\ \tau - \tau_{u,s}(t, p);\ s, u, p\big) \big|^2 \qquad (3.46)$$

where $\tau_{u,s}(t, p) = \arg\max_{\tau} |h(t, \tau; s, u, p)|^2$ indicates the delay adjustment for each CIR, and $\tau - \tau_{u,s}(t, p)$ is denoted by $\tau'$. Hereafter, $\tau$ is used for the delay instead of $\tau'$ for clarity.
3. Power Delay Profile
The time-variant PDPs of the $n_t$ snapshots within a stationarity interval are averaged to get the PDP,

$$P(\tau; p) = \frac{1}{n_t}\sum_{i=1}^{n_t} P(t_i, \tau; p) \qquad (3.47)$$

where the delay $\tau_i$ takes discrete values because the measurement system has a finite delay resolution. The delay distribution function is estimated from the PDP as

$$p_{\tau}(\tau_i; p) = \frac{P(\tau_i; p)}{\sum_{i=1}^{N_x} P(\tau_i; p)} \qquad (3.50)$$

The mean delay $\bar{\tau}$, the second-order moment of the delay $\bar{\tau^2}$ and the root mean square DS, DS(p), can be obtained from the delay distribution function by

$$\mathrm{DS}(p) = \sqrt{\bar{\tau^2}(p) - \bar{\tau}(p)^2}, \qquad
\bar{\tau}(p) = \sum_{i=1}^{N_x} \tau_i\, p_{\tau}(\tau_i; p), \qquad
\bar{\tau^2}(p) = \sum_{i=1}^{N_x} \tau_i^2\, p_{\tau}(\tau_i; p) \qquad (3.51)$$

The total received power of a snapshot is the sum of the PDP over delay,

$$P(p) = \sum_{i=1}^{N_x} P(\tau_i; p) \qquad (3.52)$$
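The PDP, mean delay, RMS delay spread and total power of (3.47)-(3.52) follow directly from a block of measured CIRs, as in the sketch below. The per-CIR delay alignment of (3.46) is omitted for brevity, and the two-tap synthetic example is only for illustration.

```python
import numpy as np

def pdp_and_ds(cirs, delay_step):
    """PDP, mean delay, RMS DS and total power from CIRs, per (3.47)-(3.52).

    cirs       : n_t x N_tau array of complex CIRs within one stationarity interval
    delay_step : delay-bin width in seconds
    """
    pdp = np.mean(np.abs(cirs) ** 2, axis=0)            # (3.47)
    tau = np.arange(pdp.size) * delay_step
    p_tau = pdp / pdp.sum()                             # (3.50)
    mean_tau = np.sum(tau * p_tau)                      # first moment
    mean_tau2 = np.sum(tau ** 2 * p_tau)                # second moment
    ds = np.sqrt(mean_tau2 - mean_tau ** 2)             # (3.51)
    total_power = pdp.sum()                             # (3.52)
    return pdp, mean_tau, ds, total_power

# Example: two-tap synthetic channel, 10 ns delay bins
cirs = np.zeros((50, 128), complex)
cirs[:, 3] = 1.0
cirs[:, 20] = 0.3 * np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, 50))
print(pdp_and_ds(cirs, 10e-9)[1:])
```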
7. Path loss
Establishing the relations between the path loss and other propagation parameters (such as the Tx-Rx distance, the carrier frequency, etc.) is a very important task in channel modeling. The following takes the dependence of the path loss on the distance as an example. The Tx-Rx distance d for every snapshot should be recorded during the measurement. Each total path power then has a corresponding distance, $P(p) \rightarrow P(d; p)$. Considering the other parameters of the measurement system, such as the transmit power $P_{tx}$, the total transmit and receive antenna gain $\sum_i G_i$, and the total cable attenuation $\sum_i A_i$, we can obtain the path loss

$$PL(d; p) = P_{tx}(p) + \sum_i G_i - \sum_i A_i - P(d; p) \qquad (3.53)$$

where all the quantities are in dB. Multiple measurements at different distances provide more observations of the PL values with respect to the distance. With these observations, linear regression analysis can be used to find the slope A(p) and the intercept B(p) of the path loss model,

$$\begin{bmatrix} PL(d_1; p) \\ \vdots \\ PL(d_n; p) \end{bmatrix} =
\begin{bmatrix} \log_{10}(d_1) & 1 \\ \vdots & \vdots \\ \log_{10}(d_n) & 1 \end{bmatrix}
\begin{bmatrix} A(p) \\ B(p) \end{bmatrix} \qquad (3.54)$$

Namely, the path loss model in terms of A(p) and B(p) is given by

$$PL(d; p) = A(p)\,\log_{10}(d) + B(p) \qquad (3.55)$$
In fact, the path loss is also affected by other factors such as the carrier frequency, the antenna heights, the environment, etc. Therefore, large numbers of measurements at various frequencies and antenna heights should be conducted to find their relationships with the PL, and multiple regression analysis may be used. Path loss modeling is always among the most important tasks in channel modeling; we do not give more details here due to space limitations.
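The regression of (3.54)-(3.55) is a plain least-squares problem, as the short sketch below shows; the synthetic 37·log10(d) + 20 dB example and the function name are illustrative, and the residual spread returned corresponds to the SF standard deviation discussed next.

```python
import numpy as np

def fit_pathloss(d, pl_db):
    """Least-squares fit of the A*log10(d) + B model of (3.54)-(3.55)."""
    H = np.column_stack([np.log10(d), np.ones_like(d)])
    (A, B), *_ = np.linalg.lstsq(H, pl_db, rcond=None)
    shadow = pl_db - (A * np.log10(d) + B)            # SF residuals, cf. (3.56)
    return A, B, shadow.std(ddof=1)

# Example: synthetic measurements around a 37*log10(d) + 20 dB law with 4 dB SF
rng = np.random.default_rng(2)
d = rng.uniform(10, 500, 300)
pl = 37.0 * np.log10(d) + 20.0 + rng.normal(0, 4.0, d.size)
print(fit_pathloss(d, pl))          # approximately (37, 20, 4)
```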
8. Shadow Fading (SF) Standard Deviation
For the N measured data points in the path loss analysis, the measured SF values can be obtained by subtracting the expected path loss $\overline{PL}(d_n; p)$ from the measured values $PL(d_n; p)$, i.e., $SF(d_n; p) = PL(d_n; p) - \overline{PL}(d_n; p)$. Collecting these values, the standard deviation of the SF can be obtained as

$$\sigma_{SF}(p) = \sqrt{\frac{1}{N-1}\sum_{n=1}^{N} \big| SF(d_n; p) \big|^2} \qquad (3.56)$$

Using the correlation analysis method, we can also calculate the correlation coefficient of the SF at two distant locations, and thus the correlation distance of the SF.
9. Ricean K-factor
The Ricean factor K is the power ratio of the LOS component to all other components. In the WINNER, IMT-A and 3GPP models, the K-factor is calculated through the method of moments (MoM) [150], which is applicable only under certain conditions. In the MoM calculation, $E\{P\}$ is the average power, $E\{P\} = \frac{1}{n_t}\sum_{i=1}^{n_t} P_i$, and $\mathrm{Var}\{P\}$ is the power variance, $\mathrm{Var}\{P\} = \frac{1}{n_t - 1}\sum_{i=1}^{n_t}\big(P_i - E\{P\}\big)^2$. The value of K can be used to estimate the conditional probability of LOS propagation. The obtained K can be further used to calculate its variance and correlation distance.
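The closed-form MoM expression for K is not reproduced above, so the sketch below uses the widely cited moment-method estimator built from the same two moments E{P} and Var{P}; treat the particular closed form, as well as the synthetic Ricean example, as assumptions rather than the text's own equation.

```python
import numpy as np

def ricean_k_mom(powers):
    """Moment-method K-factor estimate from narrowband power samples P_i.

    Built from E{P} and Var{P}; returns K in linear scale, or np.nan if the
    moment condition E{P}^2 > Var{P} is not met (Rayleigh-like or too noisy).
    """
    p = np.asarray(powers, dtype=float)
    mean_p, var_p = p.mean(), p.var(ddof=1)
    disc = mean_p ** 2 - var_p
    if disc <= 0:
        return np.nan
    root = np.sqrt(disc)
    return root / (mean_p - root)

# Example: synthetic Ricean fading with K = 5
rng = np.random.default_rng(3)
K, sigma2 = 5.0, 0.5
los = np.sqrt(2 * K * sigma2)
x = rng.normal(los, np.sqrt(sigma2), 20000) + 1j * rng.normal(0, np.sqrt(sigma2), 20000)
print(ricean_k_mom(np.abs(x) ** 2))      # close to 5
```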
For the elevation angle, the wrapping range is $[-\pi/2, \pi/2)$. The angular power distribution function (normalized PAS) is then estimated from the PAS,

$$\mathrm{pas}(\varphi'_{\mathrm{AOD},i}) = \frac{\mathrm{PAS}(\varphi'_{\mathrm{AOD},i})}{\sum_{i=1}^{N_x}\mathrm{PAS}(\varphi'_{\mathrm{AOD},i})} \qquad (3.59)$$

The first-order moment (mean) of the angle distribution is calculated and removed from the individual angles to get unbiased values, and the wrapping process is carried out again,

$$\varphi''_{\mathrm{AOD},i} = \mathrm{mod}\!\left(\varphi'_{\mathrm{AOD},i} - \sum_{i=1}^{N_x}\varphi'_{\mathrm{AOD},i}\,\mathrm{pas}(\varphi'_{\mathrm{AOD},i}) + \pi,\ 2\pi\right) - \pi \qquad (3.60)$$

The ASD is then defined as the minimum second-order moment of the angles $\varphi''_{\mathrm{AOD},i}$ that can be obtained by varying the angular shift used in the wrapping.
With the ASDs obtained at multiple locations, the mean and variance of the ASD as well as its correlation distance can be obtained. The same processing can be carried out for the other angles.
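The wrap-remove-minimize procedure of (3.59)-(3.60) is summarized in the following sketch, which searches a grid of trial shifts for the minimum power-weighted RMS angular spread; the grid resolution, function name and the synthetic wrap-straddling cluster are illustrative assumptions.

```python
import numpy as np

def circular_asd(angles, powers, n_shift=360):
    """Power-weighted circular angular spread in the spirit of (3.59)-(3.60):
    wrap to [-pi, pi), remove the weighted mean, and take the minimum RMS
    spread over a grid of trial shifts."""
    p = np.asarray(powers, float)
    p = p / p.sum()                                    # normalized PAS, (3.59)
    best = np.inf
    for delta in np.linspace(0, 2 * np.pi, n_shift, endpoint=False):
        a = np.mod(np.asarray(angles) + delta + np.pi, 2 * np.pi) - np.pi
        a = np.mod(a - np.sum(a * p) + np.pi, 2 * np.pi) - np.pi   # (3.60)
        best = min(best, np.sqrt(np.sum(a ** 2 * p)))
    return best

# Example: a 10-degree-spread cluster straddling the +/- pi wrap point
rng = np.random.default_rng(4)
ang = np.mod(np.pi + rng.normal(0, np.deg2rad(10), 500) + np.pi, 2 * np.pi) - np.pi
print(np.rad2deg(circular_asd(ang, np.ones(500))))     # about 10 degrees
```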
After parameter extraction and cluster classification, some features of the clusters can be tracked in a mobile channel, such as the cluster life cycle, the cluster generation rate, and the cluster drifting in the delay and angular domains. Cluster tracking is often ignored in existing models; however, it is particularly important for the new 5G channel models. In mobile environments the parameters of the clusters change slowly. The whole life cycle of a cluster can be recorded from its appearance to its disappearance, so that it is easy to compute the average survival time and the birth and death rates of the clusters. Within a stationarity interval, the change of the delay and angle of the clusters from one snapshot to the next is also very important, and can be used to analyze the time evolution of the channel.
3.5 Existing Channel Models
This section reviews several existing channel models and points out their main characteristics and restrictions. These channel models include those proposed by various research projects or groups, such as the WINNER family [13, 163, 164], COST 259 [165]/273 [166]/2100 [167]/IC1004 [168], METIS [1], MiWEBA [4], QuaDRiGa [19], mmMAGIC [11], and 5GCM [12], and those proposed by standardization organizations, such as the ITU-R IMT-Advanced [14] and IMT-2020 [178] channel models, 3GPP SCM [169]/3D [170]/D2D [3]/HF [6], IEEE 802.15.3c [8], and IEEE 802.11 TGn [171]/ac [172]/ad [9]/ay [10].
The Spatial Channel Model (SCM) is a GSCM developed for cellular MIMO systems by the Third Generation Partnership Project (3GPP) in 2003 [169]. The importance of the SCM lies in the fact that it is convenient to extend and parameterize, and it provides an infrastructure for subsequent channel models. The supported frequency range and system bandwidth are 1~3 GHz and 5 MHz, respectively. The SCM channel model is suitable for three scenarios, namely Suburban Macro (SMa), Urban Macro (UMa), and Urban Micro (UMi). The path loss (PL) model uses the modified COST 231 Hata urban model (SMa, UMa), the COST 231 Walfisch-Ikegami NLOS model (UMi NLOS) and the COST 231 Walfisch-Ikegami street canyon model (UMi LOS) [173]. In terms of LSPs, it takes the DS, AS and SF as lognormally or normally distributed and gives their auto-correlation and cross-correlation coefficients, the inter-station SF correlation coefficient and so on. With respect to the SSPs, a channel is composed of six clusters, each of which contains 20 rays. The model gives the characteristics of each cluster, with the angles being Gaussian distributed, the delays exponentially distributed, and the power decaying exponentially with delay.
Based on the parameters presented by WINNER II and WINNER+, 3GPP proposed two new channel models, 3D-MIMO and D2D. 3GPP 3D MIMO [170] is suitable for the UMi and UMa scenarios, both of which include the Outdoor-to-Outdoor (O2O) and Outdoor-to-Indoor (O2I) cases. Its applicable frequency range, bandwidth and maximum mobility are 1~4 GHz, 10 MHz and 3 km/h, respectively. It gives new path loss parameters (see Table 4.1 in [170]), the correlation coefficients of the intra-station LSPs, and updated small scale (SS) parameters (Tables 7.3.6, 7.3.7 and 7.3.8 in [170]), in which the azimuth angle follows a wrapped Gaussian distribution and the zenith angle follows a Laplacian distribution, with the zenith angle parameters partially referring to the WINNER+ channel model.
The 3GPP D2D model [3] supports the UMi (both O2O and O2I) and Indoor-to-Indoor (I2I) scenarios. Its main contribution is the support for dual mobility, and the core of the model is the calculation of the Doppler shifts. Two system simulation scenarios are defined: the generic scenario (2 GHz, mobility speed of 3 km/h) and the public safety scenario (700 MHz, maximum speed of 60 km/h). The bandwidth is set to 10 MHz for uplink and downlink respectively in frequency division duplexing mode, and 20 MHz in time division duplexing mode. Most of the parameters and the channel generation process refer to the existing standard models.
The 3GPP-HF channel model (6-100 GHz) is the first standardized high frequency band channel model released to the public. Besides the typical scenarios specified in the 5GCM [12], additional scenarios including backhaul, D2D/V2V, and stadium/gymnasium are also taken into consideration. The 3GPP-HF channel model is developed based on the 3GPP-3D MIMO channel model and adopts both measurement-based and RT modeling methods. Though the adopted modeling methods and the parameter values are the same as those in the 5GCM, 3GPP-HF gives a more detailed implementation. These revisions include multi-frequency correlation to support carrier aggregation (cluster delays and angles are the same for all frequency bands, while the other parameters are frequency dependent or frequency independent), finer modeling of the powers, delays, and angles of the rays to support large bandwidths, and delay drifting over the array for each ray to support massive MIMO (spherical wavefronts are still not supported). Based on the modeling method supporting spatial consistency proposed in the 5GCM, 3GPP-HF provides the correlation distances required for spatial consistency, and generates spatially continuous RVs used to express the LOS/NLOS probability, the indoor/outdoor probability, the type of building and the related SSPs. Time evolution of the channel is modeled based on the locations of clusters. The 3GPP-HF channel model adopts the blockage modeling method proposed in the 5GCM and refines its expression in polar coordinates, i.e., the blockages are divided into two categories. Furthermore, 3GPP-HF also proposes several simplified channel models, i.e., five CDL models and five TDL models for link-level simulations. Moreover, a map-based hybrid model developed by ZTE is adopted, combining the deterministic clusters obtained by ray tracing and the random clusters obtained by stochastic modeling. This model can be used when the system performance is to be evaluated or predicted with the use of a digital map, in order to investigate the impacts of environmental structures and materials.
The aim of Work Package 5 (WP5, i.e., the channel model group) of WINNER I was to present a broadband MIMO channel model in the 5 GHz frequency range [163]. At the beginning of the project, two channel models were considered, i.e., 3GPP SCM for outdoor scenarios and the IEEE 802.11n model for indoor scenarios. Because SCM only supports a bandwidth of 5 MHz, WP5 extended the SCM model to the SCME channel model, but it still could not meet the requirements of the simulations. In 2004~2005, seven partners (Elektrobit, HUT, Nokia, KTH, ETH, TUI)
the use of a unified mathematical model with different parameter sets. These parameterized models are built from a large number of measurements, which makes them widely accepted and used. The models support the validation of various techniques, such as multi-antenna, 3D MIMO, polarization, multi-user, multi-cell and multi-hop networks.
COST 259 is only a single-directional channel model. The COST 273 Working Group continued to extend this model and presented a double-directional MIMO channel model [166]. COST 273 introduces three categories of clusters, namely local clusters, single-bounce clusters and twin clusters; the latter two are refinements of the far scatterers in COST 259. The model provides a method to calculate the size and the distance of a cluster from the BS and the UT based on its delay, delay spread and angular spread. The supported frequency range is 1~5 GHz, and the bandwidth is less than 20 MHz. Parametric modeling of different environments is difficult for the COST model because of the difficulty of extracting the characteristics of scatterers from measurements, which limits the application of the COST 273 model. Although COST 273 defines 22 typical environments, only three of them are parameterized.
The COST 2100 channel model [167, 174] continues to adopt the concept of the VR proposed by COST 259 and the three cluster types proposed by COST 273, and extends them further. Its main contributions are the support of polarization characteristics, DMC, time evolution and multi-link simulation. An SC cluster is modeled as a spheroid with a certain axis length and orientation according to the delay and angular information of the cluster, while a DMC cluster is modeled as a bigger spheroid concentric with its corresponding SC. Although the modeling idea is advanced, the reference code provided for download [175] is still very simple; many advanced ideas have not been implemented, which limits its application.
Differing from the WINNER/IMT-Advanced models, the COST channel models emphasize that the scatterers exist objectively in the environment, rather than belonging to only one cluster. COST Action IC1004 focused on cooperative radio communications for green smart environments and ended in 2016. The major objective of its Working Group 1 (WG1) was to develop an integrated radio channel model (including the mmWave band, D2D/V2V and massive MIMO) and submit the models to standardization organizations. In massive MIMO applications, assigning different VRs to different antenna groups at both the BS and the UT can model the antenna non-stationarity of the channel. Because the locations of the scatterers are known, the COST model can easily be extended to support smooth time evolution and spherical wavefronts [167, 174]. The existing COST model is also designed for scenarios with one end fixed; it needs to be further extended for the D2D/V2V scenario, and the extension is relatively easy. Based on its advanced modeling ideas and methods, the COST model is likely to become a reasonable 5G channel model through appropriate extensions, so it is worth attention.
band of 2.4 GHz and 5 GHz with a maximum bandwidth of 40 MHz and at most 4 antennas. Specifically, it chooses the three models A, B, C with small delay spread among the five channel models presented by HiperLAN/2 [177] and adds three additional models for a typical small office, a large indoor space and an outdoor open space, forming a total of six models A~F. The DSs of the six models are 0 ns (narrowband model), 15 ns, 30 ns, 50 ns, 100 ns and 150 ns, respectively, and the six models are composed of 1, 2, 2, 3, 4 and 6 clusters, respectively. The paths (taps) within a cluster have different delays but the same average departure and arrival angles and AS. The AS and DS of a cluster show a strong correlation, with a correlation coefficient of 0.7. For each tap, the PAS of a cluster can be assumed to be uniformly distributed, truncated Gaussian distributed, or truncated Laplacian distributed. Based on the PASs, the correlation matrices of the transmit antenna elements and of the receive antenna elements can be calculated, denoted by RT and RR respectively, and the correlation matrix of the channel is expressed as the Kronecker product of RT and RR. The model supports UTs at a maximum speed of 1.2 km/h, which leads to a Doppler shift of 6 Hz at 5.25 GHz and 3 Hz at 2.4 GHz. The Doppler power spectrum is bell-shaped instead of the common U-shape of the classic Jakes model. A noise filtering method instead of the Sum of Sinusoids (SoS) is adopted to generate the channel samples.
In order to support higher bandwidths and transmission rates and to serve multi-user scenarios (MU-MIMO), the IEEE 802.11 TGac working group proposed the TGac channel model based on the TGn model [172], which can support up to 1.28 GHz bandwidth by interpolating the TGn channel taps. For the multi-user case, the angular parameters of each UT are independently generated, which does not coincide with the actual situation, so the performance of MU-MIMO may be overestimated. In addition, according to the actual situation, a smaller moving speed of 0.089 km/h is proposed, which corresponds to a coherence time of 800 ms or a root mean square Doppler spread of 0.414 Hz.
version of this model supports spherical wavefront modeling. Besides the six scenarios defined in the WINNER II model, namely A1, B1, B4, C1, C2, and C4, it also adds a new UMa measurement scenario in Berlin and four satellite-to-ground mobile link scenarios. At present, QuaDRiGa does not support precise modeling of massive MIMO.
The mmMAGIC project, mainly constituted by several European research institutions including Fraunhofer HHI, Ericsson, and Aalto University, aims to develop new wireless access technologies for 5G communications in the 5-100 GHz frequency band. Its WG2 focuses on channel modeling and channel measurement. mmMAGIC concentrates on scenarios such as street canyon, open square, indoor office, shopping mall, airport hall, subway station, O2I, and gymnasium. The mmMAGIC model is developed based on the 3GPP-3D model and adopts a combination of modeling methods composed of measurement, RT and point-cloud field prediction. The model is implemented based on the QuaDRiGa model library, adopts the KED blockage modeling method, and uses finer modeling of the power, delay, and angle of the rays within a cluster. Currently it only synthesizes the parameters from several existing channel models, such as WINNER, 3GPP-3D, METIS, and 5GCM, and provides some initial model parameters.
IEEE 802.15.3c is the first 60 GHz channel model in the world, proposed by the IEEE 802.15 (WPAN) working group [8]. It extends the traditional SV channel model to support arrival angles. However, it only provides azimuth information, so it is a 2D channel model. The main measurements used to produce the model were completed by several institutes including NICT in Japan and IMST in Germany. It contains ten channel models, CM1~CM10, for six scenarios including living room, office, library, conference room, desktop and corridor. The model does not specify the applicable frequency range and frequency band; according to the configuration of the measurement equipment, we can infer that its applicable frequency range is about 59~64 GHz, and the maximum bandwidth is no more than 3 GHz.
The IEEE 802.11ad channel model [9] is the channel model proposed for WLAN systems with ultra-high data rates operating in the unlicensed 60 GHz band. It extends the SV model to support the azimuth and zenith angles at both the Tx and the Rx, so it is a double-directional 3D channel model. The model supports three kinds of indoor scenarios (conference room, office, living room) and two link types, access and D2D. The modeling methods adopted include RT, measurement, empirical distributions and theoretical models. RT is used to determine the delay and the average azimuth/zenith angles of the clusters; the main clusters include the LOS, first-order and second-order reflection components. Empirical distributions are used to describe the amplitude and inter-cluster angle distributions of the reflection paths. A theoretical model is used to describe the polarization characteristics. The main parameters of the rays within a cluster are obtained by measurements. Departing from the classical SV model, in the IEEE 802.11ad model the rays within a cluster are divided into pre-cursor and post-cursor rays, and parameter fitting is done for the two parts separately. The azimuth and zenith angles of the rays within a cluster are independently normally distributed. Different from other channel models such as WINNER, in which the path loss, SF and small-scale path gain together determine the strength of the clusters and rays, the IEEE 802.11ad channel model does not distinguish path loss and small-scale fading, but independently generates a path gain for each ray. In general, the model can provide accurate characteristics of the channel in the space and time domains, supports beamforming and polarization, and considers the blockage loss caused by the human body.
In 2012, the IEEE 802.11 Working Group established the IEEE 802.11aj Task Group, which targets the next generation WLAN standards for the mmWave band of 45 GHz in China. A series of channel measurements were carried out by the Key Lab of mmWave at Southeast University; a path loss model for three indoor access scenarios was proposed and the delay spread was analyzed. In March 2015, the IEEE 802.11ay Working Group was established to develop a standard for the next generation 60 GHz transmission system, intended to extend the application scope of IEEE 802.11ad with support for backhaul and fronthaul as well as mobility, and a minimum bandwidth of 4 GHz [179]. The channel model developed has several features: extending the indoor SISO channel models of IEEE 802.11ad to MIMO channel models, and using the Quasi-Deterministic (Q-D) methodology to build channel models for new scenarios, including open area outdoor hotspot access, outdoor street canyon hotspot access, large hotel lobby, ultra short range, and D2D communications. Note that besides the D rays and R rays as in the MiWEBA model, a third type of rays (F rays), which appear for a short period of time (e.g., reflections from moving cars and other objects), may be introduced for special non-stationary environments and described in the same way as the R rays.
3.5.8 MiWEBA
The MiWEBA channel model adopts the Q-D modeling approach, in which the channel is composed of a few strong deterministic rays (called D rays) as well as several stochastic clusters (called R rays). The D rays and their parameters, such as path delay, power, angle and polarization, can be determined through ray tracing according to the propagation environment, whereas the parameters of the lower-power R rays, reflected from far walls or random objects (cars, lampposts, etc.) or via second-order reflections, can be obtained through measurement and analysis. This approach follows the IEEE 802.11ad model, but there are some differences: the reflection coefficient of a Q-D ray is calculated using the Fresnel equations, additionally taking the roughness of the reflection surface into account, and there are only post-cursor rays within a cluster.
3.5.9 METIS
3.5.10 5GCM
The 5GCM is a 5G mmWave channel model alliance initiated by the U.S. National Institute of Standards and Technology (NIST), and it includes many companies and universities such as NIST, NYU, AT&T, Qualcomm, CMCC, Huawei, and BUPT. The typical scenarios in the 5GCM are UMi (urban street canyon and open square) O2O (outdoor-to-outdoor)/O2I, UMa O2O/O2I, and InH (open or closed indoor office and shopping mall). The 5GCM is developed based on the 3GPP-3D channel model and adopts multi-frequency channel measurement and RT modeling methods. Path loss, LSPs, penetration losses, and blockage models in the LOS and NLOS cases for several scenarios have been obtained. The path loss is modeled by the CI, CIF (CI with frequency), and ABG models; dual-slope CIF and ABG models are also provided for the InH scenario in the NLOS case. For the O2I penetration loss, two models are provided, i.e., a low-loss model and a high-loss model for external walls with different types of glass. Without doubt the path loss and penetration loss depend on the frequency band; however, the LSPs do not show a clear frequency dependence in the analysis of the measurement results, and only a weak frequency dependence of the LSPs is discovered by the RT technique. The correlation distances and correlation coefficients among multiple LSPs are inherited from the 3GPP-3D model parameters. Three methods are proposed to support spatial consistency: (1) spatially, temporally and frequency consistent RVs, used to generate the LOS/NLOS state, path gain, delay, and angles of rays, etc., are obtained by interpolating RVs on regular grids around the UT location; (2) a dynamic evolution method similar to the one presented in QuaDRiGa; (3) the GGSCM modeling method proposed by METIS. For blockage modeling, the KED expressions in Cartesian and polar coordinates are provided, and the parameters of two typical blockages, i.e., the human body and the vehicle, are suggested. Since the measurement campaigns have not been finished, the current model is not the final version.
With the progress of 5G research and development, more channel models are being built by methods combining deterministic modeling based on ray tracing and stochastic modeling based on channel measurements. The existing channel models
[Figure: final steps of the channel coefficient generation procedure - generate random phases, generate the initial channel coefficients, then apply antenna mutual coupling, PL and SF to obtain the channel coefficients]
Fig. 3.11 2D and 3D distances of outdoor (left) and indoor (right) users
building, the number of floors N is uniformly distributed between 4 and 8, with a floor height of 3 m. The floor on which a UT may stay, $n_{floor}$, is uniformly distributed between 1 and N, so that the height of an indoor user can be written as $h_{UT} = 3(n_{floor} - 1) + 1.5$. The distance between an indoor user and the BS can be expressed as (Fig. 3.11)

$$d_{3D} = d_{3D\text{-}out} + d_{3D\text{-}in} = \sqrt{(d_{2D\text{-}out} + d_{2D\text{-}in})^2 + (h_{BS} - h_{UT})^2} \qquad (3.63)$$

It is necessary to ensure that the UT is within the Fraunhofer Region (FR) of the BS antenna, that is, $d_{3D}$ should be larger than the Fraunhofer distance. In the past, this was guaranteed by ensuring that the horizontal distance between the UT and the BS is more than 35 m (UMa) or 10 m (UMi). If the locations of the BS and the UT are given, the AOD ($\theta_{LOS,ZOD}$, $\varphi_{LOS,AOD}$) and AOA ($\theta_{LOS,ZOA}$, $\varphi_{LOS,AOA}$) of the LOS path can be determined. In a D2D scenario, it is necessary to generate two locations, one for each of the two UTs in a link (a small code sketch of this user-dropping step is given after the deployment steps below).
7. Generating array orientations of the BSs with respect to the GCS. The BS array
orientations is defined by three angles, i.e., bearing angle BS, downtilt angle
BS and slant angle BS. The bearing angles can be determined in the phase of
network deployment. The mechanical downtilt angles is usually set to
12 degrees, while the slant angle is usually set to 0 degrees. These three angles
constitute the 3D rotation vector (BS, BS, BS) of BS Local Coordinate System
(LCS) relative to the GCS.
8. Generating array orientations of the UTs with respect to the GCS. The array
orientation of each UT is determined by three angles: the bearing angle αUT, the
downtilt angle βUT and the slant angle γUT, which can be randomly generated or
set according to the practical situation. These three angles constitute the 3D rotation
vector (αUT, βUT, γUT) of the UT LCS relative to the GCS.
9. Determining the moving velocity vector of the UT in the GCS, including the speed
v and the motion direction (φv, θv). The motion direction is set to be random in the
horizontal plane. The typical speed can be set to 3 km/h, or set according
to the practical situation. In the D2D scenario, the velocity vectors of both ends are
generated with this method.
10. Determining the LOS/NLOS conditions of a link. For indoor and outdoor users,
the propagation conditions (LOS/NLOS) are determined according to
Table 3.13.
Going through the above steps, a complete GSCM simulation environment can
be built. All the location information and array configuration of BSs and UTs are
defined in this simulation environment, in which all transmission links are deter-
mined as well.
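As a concrete illustration of the layout steps above, the short Python sketch below drops one indoor UT and evaluates Eq. (3.63); it is only a minimal example, and the cell radius, the uniform horizontal drop, and the BS height used in the call are assumptions of the sketch rather than values prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_indoor_ut(bs_xy, bs_height, cell_radius=200.0):
    """Drop one indoor UT following the layout rules of this section.

    The uniform drop inside a disc of radius cell_radius is an assumption of
    this sketch; the floor and height rules follow the text.
    """
    # horizontal 2D position of the UT
    r = cell_radius * np.sqrt(rng.uniform())
    phi = rng.uniform(0.0, 2.0 * np.pi)
    ut_xy = bs_xy + r * np.array([np.cos(phi), np.sin(phi)])

    # building with N ~ U{4,...,8} floors, UT on floor n_floor ~ U{1,...,N}
    n_floors = rng.integers(4, 9)
    n_floor = rng.integers(1, n_floors + 1)
    h_ut = 3.0 * (n_floor - 1) + 1.5               # indoor UT antenna height [m]

    # split of the 2D distance into outdoor and indoor parts, d_2D-in ~ U(0, 25) m
    d_2d = np.linalg.norm(ut_xy - bs_xy)
    d_2d_in = min(rng.uniform(0.0, 25.0), d_2d)
    d_2d_out = d_2d - d_2d_in

    # Eq. (3.63): 3D BS-UT distance
    d_3d = np.hypot(d_2d_out + d_2d_in, bs_height - h_ut)

    # LOS departure angles seen from the BS (zenith measured from the vertical)
    theta_los_zod = np.degrees(np.arccos((h_ut - bs_height) / d_3d))
    phi_los_aod = np.degrees(np.arctan2(ut_xy[1] - bs_xy[1], ut_xy[0] - bs_xy[0]))
    return d_2d, d_3d, h_ut, theta_los_zod, phi_los_aod

print(drop_indoor_ut(np.array([0.0, 0.0]), bs_height=10.0))
```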
This subsection covers two aspects. One is to generate the path loss of all links
according to the definition of the simulation environment. The other is to generate the
LSPs of all links, such as SF, Ricean K-factor, DS, AS, etc.
The large-scale fading models differ across propagation scenarios and conditions.
Table 3.14 shows the path loss model in the 3D-UMi scenario. Unlike the IMT-A path
loss model, the 3D-UMi scenario only takes the traditional cellular deployment into
account and does not involve the Manhattan grid deployment. In addition, the SF in
the 3D-UMi scenario is lognormally distributed and its variance is also given in Table 3.14.
When the UT is located outdoors, the link can be either LOS or NLOS. In NLOS
conditions, PLLOS denotes the path loss computed under the assumption that the
propagation condition at the same UT location is LOS. d′BP is the 2D breakpoint
distance of the path loss model,
$$d'_{BP} = \frac{4\,(h_{BS} - h_E)(h_{UT} - h_E)\,f_c}{c}, \qquad (3.64)$$
where hE is the effective environment height (in meters) of the link, which is set to
1 m in the 3D-UMi scenario, hBS and hUT are the antenna heights of the BS and the
UT, respectively, fc is the center frequency and c is the speed of light. Generally,
hBS is lower than the average height of the surrounding buildings, and hUT lies in
the range of 1.5~22.5 m. When the user is located indoors, PLb is the basic path loss,
which is equal to PL3D-UMi, namely the path loss computed as if the UT were
outdoors. PLtw is the penetration loss, set to 20 dB for the cellular case. PLin is the
indoor path loss, which is related to d2D-in, i.e., the perpendicular distance between
the wall and the UT. d2D = d2D-in + d2D-out, with d2D-in uniformly distributed in
0~25 m. The antenna height can be expressed as hUT = 3(nfloor − 1) + 1.5, where
nfloor is the floor on which the UT is located, usually taking values 1~8; nfloor = 1
corresponds to the ground floor.
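For the O2I case just described, the total loss is simply the sum of the three terms; the sketch below assembles it, where the 0.5 dB/m indoor slope is an assumption borrowed from 3GPP TR 36.873 [170], since the text only states that PLin depends on d2D-in.

```python
def o2i_path_loss(pl_basic_db, d_2d_in, pl_tw_db=20.0, loss_per_metre=0.5):
    """O2I path loss as the sum PL = PL_b + PL_tw + PL_in (all in dB).

    pl_basic_db   : PL_b, the outdoor 3D-UMi/3D-UMa path loss at the UT location
    pl_tw_db      : through-wall penetration loss, 20 dB in the text
    loss_per_metre: indoor loss slope; 0.5 dB/m is an assumption taken from
                    3GPP TR 36.873 -- the text only says PL_in depends on d_2D-in
    """
    pl_in_db = loss_per_metre * d_2d_in
    return pl_basic_db + pl_tw_db + pl_in_db

# e.g. an outdoor basic loss of 100 dB, UT 10 m inside the building
print(o2i_path_loss(100.0, d_2d_in=10.0))   # -> 125.0 dB
```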
In the 3D-UMa scenario, the path loss model is shown in Table 3.15. Its SF is also
lognormally distributed, and the specific fading parameters are given in the table.
In NLOS transmission, W is the width of the street (in meters), generally set to
5~50 m, and h is the average height of the buildings (in meters), also generally set
to 5~50 m. The path loss model in the LOS condition is similar to that of the 3D-UMi
scenario except for the value of hE. When the link is in LOS transmission, the probability
that hE equals 1 m is p(hE = 1 m) = 1/(1 + C(d2D, hUT)), where the definition of
C(d2D, hUT) has been described in Table 3.13; otherwise hE takes a value drawn
uniformly from the set {12, 15, ..., (hUT − 1.5)}. In addition, when the UT is located
inside a building and the link is an O2I transmission, the path loss model and the
definitions of its parameters are the same as those of the O2I path loss model in the
3D-UMi scenario.
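The hE rule for 3D-UMa LOS links can be coded directly; in the small helper below, the function C(d2D, hUT) of Table 3.13 is passed in by the caller, and the dummy lambda in the example call is purely a placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_effective_height(d_2d, h_ut, C):
    """Draw the effective environment height hE for a 3D-UMa LOS link.

    C is the function C(d_2D, h_UT) from Table 3.13, supplied by the caller.
    """
    p_1m = 1.0 / (1.0 + C(d_2d, h_ut))            # p(hE = 1 m)
    if rng.uniform() < p_1m:
        return 1.0
    # otherwise hE is drawn uniformly from {12, 15, ..., h_UT - 1.5}
    candidates = np.arange(12.0, h_ut - 1.5 + 1e-9, 3.0)
    return rng.choice(candidates) if candidates.size else 1.0   # guard for low UTs

# example with a dummy C(.,.) just to make the sketch runnable
print(draw_effective_height(120.0, 22.5, C=lambda d, h: 0.5))
```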
Other LSPs, including SF, Ricean K-factor, DS, and the ASs (ASA, ASD, ZSA, ZSD),
remain to be generated. Different LSPs of the same UT show a certain correlation
between them, which is known as intra-station correlation. In multi-link simulation,
the same LSP on different links (generally only the UTs connected to the same BS are
considered) also shows a certain correlation, which is known as inter-station
correlation. Assume there are K links corresponding to K users located at (xk, yk)
and each link has M LSPs, so that there are in total N = M·K large-scale parameters.
Usually these parameters are lognormal random variables and they are correlated
with each other. The correlation matrix is denoted as an N-by-N matrix C. The
N correlated LSP random variables can be generated by multiplying √C with an
N-by-1 column vector formed by N independent standard normal RVs. Obviously,
the computation of the LSPs for tens or hundreds of links in system level simulation
is extremely heavy. Therefore, WINNER suggests calculating the intra-station corre-
lations and the inter-station correlations separately.
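The "multiply the square root of C by a vector of independent standard normal RVs" step looks as follows in Python; the 3×3 matrix is a toy example rather than the Table 3.16 values, and a Cholesky factor could be used instead of scipy.linalg.sqrtm.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)

# toy cross-correlation matrix of three LSPs in the log domain (e.g. DS, ASA, SF);
# the numbers are illustrative, not the Table 3.16 values
C = np.array([[ 1.0,  0.8, -0.4],
              [ 0.8,  1.0, -0.4],
              [-0.4, -0.4,  1.0]])

sqrt_C = np.real(sqrtm(C))               # matrix square root of C
xi = rng.standard_normal((3, 10000))     # i.i.d. standard normal RVs
s = sqrt_C @ xi                          # correlated normal LSPs (log domain)

print(np.round(np.corrcoef(s), 2))       # empirical correlation is close to C
```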
1. For each LSP, the inter-station correlation is calculated first, which can be
accomplished through an LSP map. The specific procedure of this method is as
follows. First, a two-dimensional mesh is generated based on the UT locations; the
region of the mesh extends the rectangle covering all UTs outward by twice the
correlation distance. To each mesh grid point, M standard normally distributed RVs
(corresponding to the M LSPs) are assigned and filtered by their corresponding
2D FIR filters separately. The impulse response of the filter for the m-th LSP can be
expressed as

$$h_m(d) = \exp\!\left(-\frac{d}{d_{m,cor}}\right), \qquad (3.65)$$

where d is the distance and dm,cor is the correlation distance of the m-th LSP, which
depends on the simulation environment and the link condition; the specific parameters
are given in Tables 3.16 and 3.17. Finally, the M filtered LSP values are obtained at
every user location (grid point). The above procedure can only ensure that the inter-
station correlation decreases exponentially with distance in the horizontal and
vertical directions. QuaDriGa proposes a more reasonable method that also
considers the correlation along the two diagonal directions. It uses two sets of
filters: one set is applied in the horizontal and vertical directions, just as in
WINNER, and the other is applied in the two diagonal directions. Assuming the grid
spacing is dpx, the filter coefficients at distance k·dpx for the two filter sets are

$$a_m(k) = \frac{1}{\sqrt{d_{m,cor}}}\exp\!\left(-\frac{k\,d_{px}}{d_{m,cor}}\right), \qquad
b_m(k) = \frac{1}{\sqrt{d_{m,cor}}}\exp\!\left(-\frac{k\,\sqrt{2}\,d_{px}}{d_{m,cor}}\right). \qquad (3.66)$$

This method allows the whole two-dimensional plane to be discretized into finer mesh
grids (less than 1 m), and the LSPs of the UTs can be obtained by interpolating the
LSPs at adjacent mesh grid points [19], whereas in WINNER the smallest location
resolution of the UT coordinates and mesh grids is one meter.
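A minimal prototype of the filtered LSP map is given below: white Gaussian values on a grid are filtered with the exponential kernel of Eq. (3.65) along the two axes, and the resulting map can then be interpolated at the UT coordinates. The grid size, the kernel truncation at four correlation distances, and the wrap-around boundary are choices of this sketch, not requirements of the model.

```python
import numpy as np
from scipy.ndimage import convolve1d

rng = np.random.default_rng(3)

def lsp_map(nx, ny, d_px, d_cor):
    """Generate one spatially correlated LSP map on an nx-by-ny grid.

    d_px is the grid spacing in metres and d_cor the correlation distance of the
    LSP (Eq. 3.65).  Filtering is applied separately along the two axes, as in
    the WINNER approach described in the text.
    """
    white = rng.standard_normal((ny, nx))
    k = np.arange(0, int(4 * d_cor / d_px) + 1)       # truncated exponential tail
    h = np.exp(-k * d_px / d_cor)
    h /= np.sqrt(np.sum(h ** 2))                      # keep unit output variance
    m = convolve1d(white, h, axis=1, mode="wrap")     # horizontal direction
    m = convolve1d(m, h, axis=0, mode="wrap")         # vertical direction
    return m

sf_map = lsp_map(nx=200, ny=200, d_px=1.0, d_cor=10.0)   # e.g. SF with d_cor = 10 m
print(round(float(sf_map.std()), 2))                     # close to 1 (standard normal)
```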
The above method is only effective on a 2D plane. When the UTs are distributed
in 3D space, and especially when both link ends move, as in D2D/V2V scenarios,
there are in total six coordinate dimensions for the two ends, and the filtering method
becomes very difficult to apply. The METIS model therefore introduces a Sum of
Sinusoids (SoS) method to describe the inter-station LSP correlations [184].
Specifically, taking SF as an example, it can be expressed as
$$\mathrm{SF} = \sqrt{\frac{2\,\sigma_{SF}^{2}}{M}}\,\sum_{m=1}^{M}\sin\!\left(\psi_{m}^{T} D + \varphi_{m}\right) \qquad (3.67)$$
Table 3.16 Large-scale parameters 3GPP-3D channel model (3GPP TR36.873 [170])
3D-UMi 3D-UMa
Scenario LOS NLOS O-to-I LOS NLOS O-to-I
DS log10([s]) DS 7.19 6.89 6.62 7.03 6.44 6.62
DS 0.40 0.54 0.32 0.66 0.39 0.32
AOD spread ASD 1.20 1.41 1.25 1.15 1.41 1.25
(ASD) log10([ ]) ASD 0.43 0.17 0.42 0.28 0.28 0.42
AOA spread ASA 1.75 1.84 1.76 1.81 1.87 1.76
(ASA) log10([ ]) ASA 0.19 0.15 0.16 0.20 0.11 0.16
ZOA spread ZSA 0.60 0.88 1.01 0.95 1.26 1.01
(ZSA) log10([ ]) ZSA 0.16 0.16 0.43 0.16 0.16 0.43
SF [dB] SF 3 4 7 4 6 7
K-factor (KF) K 9 N/A N/A 9 N/A N/A
[dB] K 5 N/A N/A 3.5 N/A N/A
Cross-correlation ASD vs DS 0.5 0 0.4 0.4 0.4 0.4
ASA vs DS 0.8 0.4 0.4 0.8 0.6 0.4
ASA vs SF 0.4 0.4 0 0.5 0 0
ASD vs SF 0.5 0 0.2 0.5 0.6 0.2
DS vs SF 0.4 0.7 0.5 0.4 0.4 0.5
ASD vs ASA 0.4 0 0 0 0.4 0
ASD vs K 0.2 N/A N/A 0 N/A N/A
ASA vs K 0.3 N/A N/A 0.2 N/A N/A
DS vs K 0.7 N/A N/A 0.4 N/A N/A
SF vs K 0.5 N/A N/A 0 N/A N/A
ZSD vs SF 0 0 0 0 0 0
ZSA vs SF 0 0 0 0.8 0.4 0
ZSD vs K 0 N/A N/A 0 N/A N/A
ZSA vs K 0 N/A N/A 0 N/A N/A
ZSD vs DS 0 0.5 0.6 0.2 0.5 0.6
ZSA vs DS 0.2 0 0.2 0 0 0.2
ZSD vs ASD 0.5 0.5 0.2 0.5 0.5 0.2
ZSA vs ASD 0.3 0.5 0 0 0.1 0
ZSD vs ASA 0 0 0 0.3 0 0
ZSA vs ASA 0 0.2 0.5 0.4 0 0.5
ZSD vs ZSA 0 0 0.5 0 0 0.5
Correlation dis- DS 7 10 10 30 40 10
tance ASD 8 10 11 18 50 11
(in horizontal ASA 8 9 17 15 50 17
plane) [m]
SF 10 13 7 37 50 7
K 15 N/A N/A 12 N/A N/A
ZSA 12 10 25 15 50 25
ZSD 12 10 25 15 50 25
Table 3.17 ZSD and ZOD offset of 3D-UMa and 3D-UMi (3GPP TR36.873 [170])
3D-UMa 3D-UMi
LOS/ O-to-I NLOS/ O-to I LOS/ O-to-I NLOS/ O-to I
Scenario LOS NLOS LOS NLOS
ZOD ZSD max[0.5, 2.1 max[0.5, 2.1 max[0.5, max[0.5, 2.1
spread (d2D/1000) (d2D/1000) 0.01 2.1(d2D/ (d2D/1000)
(ZSD) 0.01 (hUT (hUT 1.5)+0.9] 1000)+0.01| +0.01max(hUT
log10([ ]) 1.5)+0.75] hUT hBS| hBS,0) +0.9]
+0.75]
ZSD 0.40 0.49 0.4 0.6
ZOD offset, 0 10^ 0 10^
offset ZOD {0.62log10(max {0.55log10(max
(10, d2D)) (10, d2D))+1.6}
+1.930.07
(hUT1.5)}
The specific parameters of the elements in the correlation matrix CM×M(0) are also
given in Table 3.16. All seven LSPs can be represented as normally distributed RVs
in the logarithm domain, but they are treated somewhat differently: DS and the four
ASs use the log10(·) processing, i.e., log10(X) ∼ N(μlgX, σ²lgX), which is equivalent to
X being lognormally distributed.
This subsection discusses the generation of the SSPs, including the cluster parameters
(delay, power, angle, XPR) and the intra-cluster parameters (delay, power and angle
of the rays within a cluster).
1. Generating the relative delays of the multipath clusters, τ.
In the 3GPP 3D channel model, the cluster delays follow an exponential distribution
and can be generated by the method specified in Table 3.11, i.e.,

$$\tau'_n = -r_\tau\,\sigma_{DS}\,\ln X_n, \qquad (3.69)$$

where τ′n is the absolute delay of the n-th cluster, rτ is the ratio of the standard
deviation of the delay distribution, σdelays, to the delay spread σDS, i.e., rτ =
σdelays/σDS, and Xn is an RV uniformly distributed in [0, 1]. A total of N cluster delays
are generated; the value of N depends on the environment and is given in Table 3.18.
The delays are then normalized and sorted in ascending order, i.e., τ1 = 0 and
τn−1 ≤ τn for n = 2, . . ., N:

$$\tau_n = \mathrm{sort}\!\left(\tau'_n - \min_n \tau'_n\right). \qquad (3.70)$$
Table 3.18 Small-scale parameters of 3GPP-3D channel model (3GPP TR36.873 [170])
3D-UMi 3D-UMa
Scenario LOS NLOS O-to-I LOS NLOS O-to-I
Delay distribution Exp Exp Exp Exp Exp Exp
AOD and AOA distribution Wrapped Gaussian Wrapped Gaussian
ZOD and ZOA distribution Laplacian Laplacian
Delay scaling factor r 3.2 3 2.2 2.5 2.3 2.2
XPR[dB] 9 8.0 9 8 7 9
3 3 5 4 3 5
Number of clusters 12 19 12 12 20 12
Number of rays per cluster 20 20 20 20 20 20
Cluster ASD 3 10 5 5 2 5
Cluster ASA 17 22 8 11 15 8
Cluster ZSA 7 7 3 7 7 3
Per cluster shadowing std [dB] 3 3 4 3 3 4
In the LOS case, the cluster delays are additionally scaled by a K-factor-dependent constant,

$$\tau_n^{LOS} = \frac{\tau_n}{C_{DS}}, \qquad C_{DS} = 0.7705 - 0.0433\,KF + 0.0002\,KF^2 + 0.000017\,KF^3, \qquad (3.71)$$

with KF being the Ricean K-factor in dB scale generated in the phase of LSP generation.
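Equations (3.69)-(3.71) translate almost line by line into code, as sketched below; note that, as pointed out later for the power step, the unscaled delays of Eq. (3.70) should be kept alongside the LOS-scaled ones.

```python
import numpy as np

rng = np.random.default_rng(4)

def cluster_delays(n_clusters, r_tau, ds_sec, kf_db=None):
    """Generate normalized, sorted cluster delays (Eqs. 3.69-3.71).

    r_tau  : delay scaling factor from Table 3.18
    ds_sec : delay spread sigma_DS in seconds (drawn in the LSP stage)
    kf_db  : Ricean K-factor in dB; if given, the LOS delay scaling is applied
    """
    tau_prime = -r_tau * ds_sec * np.log(rng.uniform(size=n_clusters))  # Eq. 3.69
    tau = np.sort(tau_prime - tau_prime.min())                          # Eq. 3.70
    if kf_db is not None:                                               # LOS case
        c_ds = 0.7705 - 0.0433 * kf_db + 0.0002 * kf_db**2 + 0.000017 * kf_db**3
        tau = tau / c_ds                                                # Eq. 3.71
    return tau

# 3D-UMi LOS example: 12 clusters, r_tau = 3.2, mean log10(DS) = -7.19
print(cluster_delays(n_clusters=12, r_tau=3.2, ds_sec=10 ** (-7.19)))
```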
2. Generating cluster power P.
Cluster powers are calculated assuming a single slope exponential power delay
profile, which is a basic assumption for all GSCMs. The average cluster powers
decay exponentially with the increase of delay, given by
$$P'_n = \exp\!\left(-\tau_n\,\frac{r_\tau - 1}{r_\tau\,\sigma_{DS}}\right)\cdot 10^{-\mathrm{CSF}_n/10}, \qquad (3.73)$$
where CSFn ∼ N(0, σ²CSF) is the per-cluster shadow fading (CSF) in dB scale, whose
statistical parameters are also specified in Table 3.18. Note that the delays of
Eq. (3.70), and not the scaled delays of Eq. (3.71), are used to generate the cluster
powers, even in the case of LOS conditions. The sum power of all clusters is
normalized to one, i.e.
$$P_n = \frac{P'_n}{\sum_{n=1}^{N} P'_n}. \qquad (3.74)$$
In the case of LOS conditions, an additional specular component is added to the first
cluster. The cluster powers are then given by

$$P_n^{LOS} = \frac{1}{KF+1}\,P_n + \delta(n-1)\,P_{1,LOS} = \frac{1}{KF+1}\,P_n + \delta(n-1)\,\frac{KF}{KF+1}, \qquad (3.75)$$
where P1,LOS is the power of the LOS component, KF is the Ricean K-factor in linear
scale, and δ(n) denotes the Dirac delta function. Generally, except for the two
strongest clusters, the cluster power is uniformly allocated to the rays within the
cluster: assuming cluster n contains Mn rays, the power of each ray within the cluster
is Pn/Mn. Weak clusters, e.g., those whose power is more than 25 dB below the
maximum cluster power, can be removed.
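The cluster-power step can be summarized by the following sketch of Eqs. (3.73)-(3.75), including the 25 dB pruning of weak clusters; the example delays in the last two lines are arbitrary test values.

```python
import numpy as np

rng = np.random.default_rng(5)

def cluster_powers(tau, r_tau, ds_sec, csf_std_db, kf_db=None, prune_db=25.0):
    """Cluster powers from the single-slope exponential PDP (Eqs. 3.73-3.75).

    tau must contain the *unscaled* delays of Eq. (3.70), as stressed in the text.
    """
    csf = csf_std_db * rng.standard_normal(tau.size)                    # per-cluster SF [dB]
    p = np.exp(-tau * (r_tau - 1.0) / (r_tau * ds_sec)) * 10 ** (-csf / 10)   # Eq. 3.73
    p = p / p.sum()                                                     # Eq. 3.74
    if kf_db is not None:                                               # LOS, Eq. 3.75
        kf = 10 ** (kf_db / 10)                                         # linear K-factor
        p = p / (kf + 1.0)
        p[0] += kf / (kf + 1.0)                                         # specular component
    p[p < p.max() * 10 ** (-prune_db / 10)] = 0.0                       # drop weak clusters
    return p

tau = np.array([0.0, 20e-9, 55e-9, 110e-9, 300e-9])
print(np.round(cluster_powers(tau, r_tau=3.2, ds_sec=65e-9, csf_std_db=3.0), 3))
```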
3. Generating AOA and AOD
The Power Angular Spectrum (PAS) determines the spatial correlation proper-
ties of the channel. It reflects the power distribution over the different directions and
is regarded as an important parameter of a MIMO channel model. At both the
transmitter and the receiver, the 3GPP 3D channel model assumes that the composite
PAS of all clusters in azimuth follows a Wrapped Gaussian distribution, while the
composite PAS of all clusters in zenith follows a Laplacian distribution, as shown in
Fig. 3.13. It can be seen that the peak of the Laplacian distribution is sharper than
that of the Wrapped Gaussian distribution, which means that the energy is more
concentrated in the zenith domain and more dispersed in azimuth.
Fig. 3.13 Probability density functions of the Wrapped Gaussian and the Laplacian distributions versus angle (°)
The constant CAS is a scaling factor related to the number of clusters N and given
by Table 3.19.
In the case of LOS conditions, an additional scaling of the angles is required to
compensate for the impact of the LOS power on the angular spread, so that the
constant CAS is substituted by the Ricean-K-factor-dependent scaling constant CASLOS,

$$C_{AS}^{LOS} = C_{AS}\left(1.1035 - 0.028\,KF - 0.002\,KF^{2} + 0.0001\,KF^{3}\right), \qquad (3.77)$$
with KF being the Ricean K-factor in dB. The AOAs of clusters can be generated
for the case of LOS and NLOS respectively
$$\phi_{n,AOA} = \begin{cases} X_n\,\hat{\phi}_{n,AOA} + Y_n + \phi_{LOS,AOA}, & \text{NLOS} \\ \left(X_n\,\hat{\phi}_{n,AOA} + Y_n\right) - \left(X_1\,\hat{\phi}_{1,AOA} + Y_1 - \phi_{LOS,AOA}\right), & \text{LOS} \end{cases} \qquad (3.78)$$
The AOA of ray m within cluster n is then obtained by adding a scaled offset angle to
the cluster AOA, i.e., φn,m,AOA = φn,AOA + cASA·αm, where cASA is the RMS ASA of
each cluster, given in Table 3.18, and αm is the offset angle of the m-th ray within a
cluster, given in Table 3.20. With this method only a rough angular resolution can be
obtained, which does not meet the high angular resolution requirements of massive
MIMO or pencil beamforming. METIS suggests a more precise offset-angle
generation method, i.e., sampling the Gaussian function directly; for more details
please refer to [185].
Table 3.20 Ray offset angles within a cluster, given for 1° RMS angle spread

Ray path m     Ray offset angle αm [°]
1, 2           ±0.0447
3, 4           ±0.1413
5, 6           ±0.2492
7, 8           ±0.3715
9, 10          ±0.5129
11, 12         ±0.6797
13, 14         ±0.8844
15, 16         ±1.1481
17, 18         ±1.5195
19, 20         ±2.1551
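Using the offsets of Table 3.20, the per-ray azimuth angles of one cluster are obtained by scaling the ±αm values with the per-cluster RMS spread, as in the small helper below (the pairing of the signs and the example numbers are the only assumptions here).

```python
import numpy as np

# ray offset angles alpha_m for unit RMS angular spread (Table 3.20);
# each tabulated value is used with both signs, giving the 20 rays of a cluster
BASE_OFFSETS = np.array([0.0447, 0.1413, 0.2492, 0.3715, 0.5129,
                         0.6797, 0.8844, 1.1481, 1.5195, 2.1551])
ALPHA_M = np.concatenate([BASE_OFFSETS, -BASE_OFFSETS])   # 20 offsets in degrees

def ray_aoas(cluster_aoa_deg, c_asa_deg):
    """Per-ray azimuth angles of arrival for one cluster.

    cluster_aoa_deg : the cluster AOA phi_n,AOA generated from Eq. (3.78)
    c_asa_deg       : the per-cluster RMS ASA c_ASA from Table 3.18
    """
    return cluster_aoa_deg + c_asa_deg * ALPHA_M

# 3D-UMi LOS example: cluster AOA of 30 degrees, c_ASA = 17 degrees
print(np.round(ray_aoas(cluster_aoa_deg=30.0, c_asa_deg=17.0), 2))
```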
The constant CES is a scaling factor related to the number of clusters N and given by

$$C_{ES} = \begin{cases} 1.104, & N = 12 \\ 1.184, & N = 19 \\ 1.178, & N = 20 \end{cases} \qquad (3.81)$$
The ZOAs of the clusters can be generated for the NLOS and LOS cases, respectively, as

$$\theta_{n,ZOA} = \begin{cases} X_n\,\hat{\theta}_{n,ZOA} + Y_n + \bar{\theta}_{ZOA}, & \text{NLOS} \\ \left(X_n\,\hat{\theta}_{n,ZOA} + Y_n\right) - \left(X_1\,\hat{\theta}_{1,ZOA} + Y_1 - \bar{\theta}_{ZOA}\right), & \text{LOS} \end{cases} \qquad (3.83)$$
where the random variable Xn takes the value −1 or +1 with equal probability and
Yn ∼ N(0, (σZSA/7)²). The reference angle θ̄ZOA depends on the location of the UT:
if the UT is located indoors, θ̄ZOA = 90°; otherwise θ̄ZOA = θLOS,ZOA, namely the ZOA of the
LOS direction determined at the stage of network layout. Finally, the ZOA of ray
n,m is calculated by adding a scaled offset angle to the cluster ZOA,

$$\theta_{n,m,ZOA} = \theta_{n,ZOA} + c_{ZSA}\,\alpha_m,$$

where cZSA is the RMS ZSA of each cluster, also given in Table 3.18, and the values
of αm are listed in Table 3.20. It should be noted that the value of θn,m,ZOA produced
by this calculation is wrapped within [0°, 360°], while usually θn,m,ZOA ∈ [0°, 180°];
if θn,m,ZOA falls within [180°, 360°], it is set to 360° − θn,m,ZOA.
(c) Generating zenith angle of departure (ZOD)
The process of generating the cluster ZOD is similar to the process of generating
ZOA, in which only an extra offset is introduced, i.e.,
$$\theta_{n,ZOD} = \begin{cases} X_n\,\hat{\theta}_{n,ZOD} + Y_n + \mu_{offset,ZOD} + \theta_{LOS,ZOD}, & \text{NLOS} \\ \left(X_n\,\hat{\theta}_{n,ZOD} + Y_n\right) - \left(X_1\,\hat{\theta}_{1,ZOD} + Y_1 - \theta_{LOS,ZOD}\right), & \text{LOS} \end{cases} \qquad (3.85)$$
where Xn is set to −1 or +1 with equal probability and Yn ∼ N(0, (σZSD/7)²). σZSD is
the RMS zenith angle spread of departure; for its calculation please refer to the
section on LSP generation. μoffset,ZOD is a correction factor applied in the NLOS case,
which is related to the Tx-Rx distance and the antenna heights and is given in Table 3.17.
Finally, the ZOD of ray n,m is calculated by

$$\theta_{n,m,ZOD} = \theta_{n,ZOD} + \frac{3}{8}\,10^{\mu_{ZSD}}\,\alpha_m, \qquad (3.86)$$
where μZSD is the mean of the lognormally distributed ZSD, which is also given in
Table 3.17 and likewise depends on the Tx-Rx distance and the antenna heights.
(d) Coupling of rays within a cluster for both azimuth and elevation
Couple randomly φn,m,AOA to φn,m,AOD within cluster n (or within the
sub-clusters in the case of the two strongest clusters). Couple randomly θn,m,ZOA
to θn,m,ZOD using the same procedure. Then couple randomly φn,m,AOD to θn,m,ZOD.
As a result, the four angles of each ray within cluster n are determined: (φn,m,AOD,
θn,m,ZOD) and (φn,m,AOA, θn,m,ZOA).
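The random coupling of step (d) amounts to independent random permutations; a compact sketch is given below, where keeping the AOD order fixed is an implementation choice that realizes the same random pairings.

```python
import numpy as np

rng = np.random.default_rng(6)

def couple_rays(aod, zod, aoa, zoa):
    """Randomly couple the four per-ray angle sets within one cluster.

    Keeping the AOD order fixed and independently permuting the other three
    sequences realizes the three random couplings (AOD-AOA, ZOD-ZOA, AOD-ZOD)
    described in the text.
    """
    m = np.asarray(aod).size
    return (np.asarray(aod),
            np.asarray(zod)[rng.permutation(m)],
            np.asarray(aoa)[rng.permutation(m)],
            np.asarray(zoa)[rng.permutation(m)])
```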
4. Generating XPRs
Generate the cross-polarization power ratio (XPR) for each ray within each cluster.
The XPR is lognormally distributed, and the values are drawn as

$$\kappa_{n,m} = 10^{X_{n,m}/10}, \qquad (3.87)$$

where Xn,m ∼ N(μXPR, σ²XPR) in dB, with μXPR and σXPR given in Table 3.18.
So far, the LSPs and SSPs have been generated for each ray m within each cluster n.
Next, we discuss the generation of the channel coefficients.
1. The NLOS channel coefficient between receive antenna element u and transmit
antenna element s for cluster n is given by

$$H_{u,s,n}^{NLOS}(t;\tau) = \sqrt{\frac{P_n}{M}}\sum_{m=1}^{M}
\begin{bmatrix} F_{rx,u,\theta}\!\left(\theta_{n,m,ZOA},\phi_{n,m,AOA}\right) \\ F_{rx,u,\phi}\!\left(\theta_{n,m,ZOA},\phi_{n,m,AOA}\right) \end{bmatrix}^{T}
\begin{bmatrix} e^{j\Phi_{n,m}^{\theta\theta}} & \sqrt{\kappa_{n,m}^{-1}}\,e^{j\Phi_{n,m}^{\theta\phi}} \\ \sqrt{\kappa_{n,m}^{-1}}\,e^{j\Phi_{n,m}^{\phi\theta}} & e^{j\Phi_{n,m}^{\phi\phi}} \end{bmatrix}
\begin{bmatrix} F_{tx,s,\theta}\!\left(\theta_{n,m,ZOD},\phi_{n,m,AOD}\right) \\ F_{tx,s,\phi}\!\left(\theta_{n,m,ZOD},\phi_{n,m,AOD}\right) \end{bmatrix}$$
$$\cdot\, e^{\,j2\pi\lambda_0^{-1}\left[\hat{r}_{rx,n,m}^{T}\left(\bar{d}_{rx,u}+\bar{v}_{rx}t\right)+\hat{r}_{tx,n,m}^{T}\left(\bar{d}_{tx,s}+\bar{v}_{tx}t\right)\right]}\;\delta\!\left(\tau-\tau_{n,m}\right) \qquad (3.88)$$
where Ftx,s,θ and Ftx,s,φ are the vertical and horizontal polarimetric radiation
patterns of transmit element s, respectively, and Frx,u,θ and Frx,u,φ are the vertical and
horizontal polarimetric radiation patterns of receive element u, respectively.
r̂tx,n,m and r̂rx,n,m are the unit direction vectors of ray n,m at the transmitter and the
receiver, respectively. d̄tx,s and d̄rx,u are the location vectors of elements s and u,
respectively. κn,m is the cross-polarization power ratio in linear scale, and λ0 is the
wavelength of the carrier. τn,m is the delay of ray n,m. v̄tx and v̄rx are the velocity
vectors of the transmitter and the receiver relative to the first-bounce and last-bounce
scatterers, respectively, so that this model is applicable to D2D/V2V scenarios.
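To give a feeling for Eq. (3.88), the following heavily simplified sketch sums the rays of a single cluster for one antenna pair. It assumes isotropic, vertically polarized elements (Fθ = 1, Fφ = 0), so the XPR terms drop out, and the variable names, the 2 GHz carrier, and the 3 km/h example speed are assumptions of the sketch, not part of the model definition.

```python
import numpy as np

rng = np.random.default_rng(7)
C0 = 299792458.0  # speed of light [m/s]

def sph_unit(theta_deg, phi_deg):
    """Unit direction vector for zenith angle theta and azimuth phi (degrees)."""
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    return np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])

def cluster_coeff(t, p_n, zoa, aoa, zod, aod, d_rx, d_tx, v_rx, v_tx, fc):
    """Simplified per-cluster NLOS coefficient for one antenna pair (Eq. 3.88 style).

    Isotropic, vertically polarized elements are assumed, so only the
    theta-theta random phase survives.  zoa/aoa/zod/aod are length-M per-ray
    angle arrays in degrees, d_rx/d_tx the element location vectors, v_rx/v_tx
    the velocity vectors, fc the carrier frequency in Hz.
    """
    lam = C0 / fc
    m_rays = len(zoa)
    phi_tt = rng.uniform(-np.pi, np.pi, m_rays)       # random initial phases
    h = 0.0 + 0.0j
    for m in range(m_rays):
        r_rx = sph_unit(zoa[m], aoa[m])               # arrival direction of ray m
        r_tx = sph_unit(zod[m], aod[m])               # departure direction of ray m
        phase = np.exp(1j * 2 * np.pi / lam *
                       (r_rx @ (d_rx + v_rx * t) + r_tx @ (d_tx + v_tx * t)))
        h += np.exp(1j * phi_tt[m]) * phase
    return np.sqrt(p_n / m_rays) * h

# one 12-ray cluster, static BS, UT moving at 3 km/h along the x axis
zoa = 90 + 7 * rng.standard_normal(12); aoa = rng.uniform(-180, 180, 12)
zod = 90 + 3 * rng.standard_normal(12); aod = rng.uniform(-180, 180, 12)
h = cluster_coeff(0.01, 0.2, zoa, aoa, zod, aod,
                  d_rx=np.zeros(3), d_tx=np.zeros(3),
                  v_rx=np.array([3 / 3.6, 0.0, 0.0]), v_tx=np.zeros(3), fc=2e9)
print(abs(h))
```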
2. For the two strongest clusters, the twenty rays of the cluster are spread in delay into
three sub-clusters, and the relative delays of the sub-clusters are as follows,
Table 3.21 Sub-clusters power and delay information (3GPP TR36.873 [170])
Sub path Ray path Average power ratio Delay offset
1 1,2,3,4,5,6,7,8,19,20 10/20 0 ns
2 9,10,11,12,17,18 6/20 5 ns
3 13,14,15,16 4/20 10 ns
$$\tau_{n,1} = \tau_n + 0\ \mathrm{ns}, \qquad \tau_{n,2} = \tau_n + 5\ \mathrm{ns}, \qquad \tau_{n,3} = \tau_n + 10\ \mathrm{ns}, \qquad (3.89)$$
where τn is the relative delay of the cluster being split. Table 3.21 shows the power
allocation of the rays with different indices over the three sub-clusters.
Splitting the strongest clusters into three sub-clusters increases the delay
resolution of the channel model and correspondingly supports a larger bandwidth, up
to 100 MHz, compared with the 5 MHz bandwidth of the SCM model.
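The sub-cluster mapping of Table 3.21 and Eq. (3.89) is essentially a small lookup table, for example:

```python
import numpy as np

# Table 3.21: ray indices (1-based), power share and delay offset of the three
# sub-clusters into which each of the two strongest clusters is split
SUB_CLUSTERS = [
    {"rays": [1, 2, 3, 4, 5, 6, 7, 8, 19, 20], "power": 10 / 20, "offset": 0e-9},
    {"rays": [9, 10, 11, 12, 17, 18],          "power": 6 / 20,  "offset": 5e-9},
    {"rays": [13, 14, 15, 16],                 "power": 4 / 20,  "offset": 10e-9},
]

def split_cluster(tau_n, p_n):
    """Split one strong cluster into three sub-clusters (Eq. 3.89, Table 3.21)."""
    return [(tau_n + sc["offset"], p_n * sc["power"], sc["rays"])
            for sc in SUB_CLUSTERS]

for tau, power, rays in split_cluster(tau_n=55e-9, p_n=0.3):
    print(f"delay {tau * 1e9:5.1f} ns  power {power:.3f}  rays {rays}")
```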
In the case of LOS, an LOS component needs to be added and the power of each
ray is scaled down according to the Ricean K-factor. The channel coefficients are then
given by

$$H_{u,s,n}^{LOS}(t;\tau) = \sqrt{\frac{1}{KF+1}}\,H_{u,s,n}^{NLOS}(t;\tau) + \delta(n-1)\sqrt{\frac{KF}{KF+1}}
\begin{bmatrix} F_{rx,u,\theta}\!\left(\theta_{LOS,ZOA},\phi_{LOS,AOA}\right) \\ F_{rx,u,\phi}\!\left(\theta_{LOS,ZOA},\phi_{LOS,AOA}\right) \end{bmatrix}^{T}
\begin{bmatrix} e^{j\Phi_{LOS}} & 0 \\ 0 & -e^{j\Phi_{LOS}} \end{bmatrix}
\begin{bmatrix} F_{tx,s,\theta}\!\left(\theta_{LOS,ZOD},\phi_{LOS,AOD}\right) \\ F_{tx,s,\phi}\!\left(\theta_{LOS,ZOD},\phi_{LOS,AOD}\right) \end{bmatrix}$$
$$\cdot\, e^{\,j2\pi\lambda_0^{-1}\left[\hat{r}_{rx,LOS}^{T}\left(\bar{d}_{rx,u}+\bar{v}_{rx}t\right)+\hat{r}_{tx,LOS}^{T}\left(\bar{d}_{tx,s}+\bar{v}_{tx}t\right)\right]}\;\delta\!\left(\tau-\tau_{1}\right) \qquad (3.90)$$
When massive MIMO technology is applied, the mutual coupling between the antenna
elements is no longer negligible and should be considered in the modeling. According
to the method in [31], the channel coefficients are modified accordingly,
where Rt is the real part of the impedance matrix Zt of the transmitting antennas,
Rt = Re{Zt}, Rl is the real part of the load impedance Zl, Rl = Re{Zl}, Zr is the
impedance matrix of the receiving antennas, and r11 is the real part of the
self-impedance z11 of a single antenna, r11 = Re{z11}. Finally, path loss and shadow
fading are applied to the generated channel coefficients to obtain
generated channel coefficients to get
p
H u, s, n t PL SFH ~ u , s , n t 3:92
The delay values of the obtained clusters (rays) are continuous and do not necessarily
coincide with the periodic sampling instants of a digital communication system.
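The text's own treatment of this issue continues beyond this excerpt; one common and simple work-around, shown below purely as an assumption of this sketch, is to round each ray delay to the nearest sampling instant and accumulate the coefficients that land on the same tap.

```python
import numpy as np

def to_sampled_taps(delays_s, coeffs, fs_hz, n_taps):
    """Accumulate continuous-delay ray coefficients onto a sample-spaced CIR.

    delays_s : ray delays in seconds,  coeffs : complex ray coefficients
    fs_hz    : sampling rate,          n_taps : length of the discrete CIR
    """
    cir = np.zeros(n_taps, dtype=complex)
    idx = np.clip(np.round(np.asarray(delays_s) * fs_hz).astype(int), 0, n_taps - 1)
    np.add.at(cir, idx, coeffs)          # sum rays that fall on the same tap
    return cir

# 100 MHz sampling (10 ns taps): rays at 0, 12 and 53 ns map to taps 0, 1 and 5
print(np.abs(to_sampled_taps([0.0, 12e-9, 53e-9], [1, 0.5j, 0.3], fs_hz=100e6, n_taps=8)))
```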
(Figure: network layout of the simulation — cell area X [m] versus Y [m]; Links 1 and 2 are 3D-UMi O2I links.)
Figure 3.15 shows the spatial and temporal distribution of these rays. A ray is
marked with the tag 'o', with the marker size reflecting its power intensity and the
color (red, blue, black, and pink for links 1~4, respectively) indicating the link it
belongs to. There are large power differences among the rays, so in order to show
them clearly a simple scaling is applied: in the LOS case the ray powers are
normalized by scaling between 2 and 30, whereas in the NLOS case the power range
is between 2 and 10. Subfigure (a) presents the three-dimensional distribution of the
rays, subfigure (b) gives the two-dimensional ZoDs-AoDs view, subfigure (c) shows
the two-dimensional ZoDs-delays view, and subfigure (d) shows the two-dimensional
AoDs-delays view. Several observations can be made from these figures. (1) Link 3
is a LOS transmission, and its LOS component can be clearly identified in the figure.
(2) Links 1 and 2 have shorter communication distances, so they have the smallest
propagation delays; however, their DSs are not small, and link 1 has the largest DS.
Link 4 has the farthest communication distance, so its propagation delay is the largest
while its DS is the smallest. (3) All links show an obvious angular spread in the zenith
dimension; the spans of the zenith angles of the four links are approximately 70, 10,
30 and 50 degrees, respectively. The zenith AS is independent of the propagation
condition; for example, link 3 (LOS) has a medium zenith AS. (4) Links 1 and 2 have
almost the same average azimuth angle and azimuth AS because the two UTs are in
the same building. (5) All links have almost the same azimuth AS, which is greater
than the zenith AS.
Fig. 3.15 Ray parameter distributions for one simulation: angles at the BS, delays and powers.
(a) 3D view; (b) ZoDs-AoDs view; (c) ZoDs-delay view; (d) AoDs-delay view
Figure 3.16 shows the channel impulse responses of the four different links observed
from a selected antenna pair, i.e., the 2nd antenna of the BS array and the 1st antenna
of every UT. The path loss and shadow fading are also integrated into the channel
impulse response. In the simulation, it is assumed that the propagation conditions,
delays, and angles of the multipath rays are kept constant, and only the effect of the
Doppler shift caused by the movement of the UTs is investigated. The figure reveals
that: (1) all channels have obvious time-varying characteristics; because the UTs in
links 1 and 2 move at a low speed, these two channels change slowly over the
observation time span of 0~70 s, whereas the channels of links 3 and 4 change rapidly
even over the short observation time span of 0~0.7 s; (2) link 3 is in the LOS condition,
and the LOS path is dominant in its CIR; (3) link 4 has stronger NLOS multipath
components and a smaller DS, but its rays are scattered in the angle domain, which
makes the channel appear more time-selective.
Fig. 3.16 Downlink time-varying channel from the 2nd antenna at the BS to the 1st antenna at the UT
The channel model is crucial for the research and development of 5G and future mobile
communications. The 5G channel model needs to address a variety of challenges and
requirements, including massive MIMO, mmWave, high mobility and diversified
application scenarios. This chapter began with the requirements of the 5G channel
model and reviewed five modeling methods: measurement-based GSCM, regular-
shape-based GSCM, CSCM, extended SV and ray tracing. In particular, ray tracing
has attracted more and more attention, and several new channel models have been
proposed by combining ray tracing with other modeling methods. The formulation
and verification of all these channel models cannot be separated from channel
sounding, which provides real measurement data for channel modeling and for
verifying the resulting channel models. This chapter introduced the measurement
methods and several channel sounders, as well as three types of 5G channel
measurement activities (massive MIMO, mmWave and high mobility). High
resolution parameter estimation algorithms are used to extract path parameters from
the measured data, and these post-processed data are further handled by statistical
analysis to obtain probability distribution functions (PDFs) and the corresponding
numerical values of the statistical parameters. More than ten existing channel models
were introduced and compared according to the requirements of the 5G channel
model. In the last section, we took the 3GPP-3D channel model as an example to
describe the general process of stochastic channel generation, and finally gave a
simple instantiation of a channel simulation.
References
1. METIS D1.4 V1.0, METIS Channel Models, ICT-317669, METIS project, Feb. 2015.
https://www.metis2020.com/
2. 5G Vision and Requirements, IMT-2020 (5G) Promotion Group. May 2014.
3. 3GPP TR36.843, Study on LTE Device to Device Proximity Services, Radio Aspects, 3rd
Generation Partnership Project, V12.0.1, March 2014. http://www.3gpp.org
4. MiWEBA D5.1, Channel Modeling and Characterization, FP7-ICT-608637, V1.0,June
2014. http://www.miweba.eu
5. METIS D5.1, Intermediate description of the spectrum needs and usage principles, V1.0,
ICT-317669, METIS project, August 2013. https://www.metis2020.com
6. 3GPP TR 38.900, Study on channel model for frequency spectrum above 6 GHz, v14.0.0
2016
7. W. Roh, Ji-yun seol, Jeongho park, et al. Millimeter-wave beamforming as an enabling
technology for 5G cellular communications: theoretical feasibility and prototype results.
IEEE Communications Magazine, vol. 52, no. 2, pp.106-113, 2014.
8. Su-Khiong Yong, IEEE P802.15 Wireless Personal Area Networks - TG3c Channel Model-
ing Sub-committee Final Report, IEEE 15-07-0584-01-003c,March 2007
9. A. Maltsev, V. Erceg, E. Perahia, C. Hansen, R. Maslennikov, A. Lomayev, A. Sevastyanov,
A. Khoryaev, G. Morozov, M. Jacob, S. Priebe, T. Kürner, S. Kato, H. Sawada, K. Sato and
H. Harada, Channel Models for 60 GHz WLAN Systems, IEEE 802.11ad 09/0334r8, 2010.
10. A. Maltsev, A. Pudeyev, Y. Gagiev, et.al., Channel Models for IEEE 802.11ay, IEEE
802.11-15/1150r9, 2016
11. H2020-ICT-671650-mmMAGIC/D2.1. Measurement Campaigns and Initial Channel
Models for Preferred Suitable Frequency Ranges, 2016. https://bscw.5g-mmmagic.eu/
pub/bscw.cgi/d94832/ mmMAGIC_D2-1.pdf
12. 5GCM White Paper, 5G Channel Model for bands up to 100 GHz, v2.0, 2016. http://www.
5gworkshops.com/
13. WINNER II D1.1.2, Channel models, IST-4-027756 V1.2, Sept. 2007. [Online]. Available:
http://www.istwinner.org/deliverables.html
14. ITU-R M.2135-1, Guidelines for evaluation of radio interface technologies for
IMT-Advanced, International Telecommunication Union (ITU), Geneva, Switzerland,
Technical Report, December 2009.
15. K. Zheng, S. Ou, and X. Yin, Massive MIMO channel models:A survey, International
Journal of Antennas and Propagation, vol. 2014, 2014
16. C. A. Balanis, Antenna Theory: Analysis and Design, John Wiley & Sons, Hoboken, NJ, USA,
2012.
17. S. Wu, C.-X. Wang, H. Aggoune, M. M. Alwakeel, and Y. He, A non-stationary 3D
wideband twin-cluster model for 5G massive MIMO channels, IEEE J. Sel. Areas Commun.,
vol. 32, no. 6, pp. 1207-1218, June 2014.
18. X. Gao, F. Tufvesson and O. Edfors, Massive MIMO channels:Measurements and models.
in Proc. of 2013 Asilomar Conference on Signals, Systems and Computers, 2013
19. S. Jaeckel, L. Raschkowski, K. Börner and L. Thiele, QuaDRiGa: A 3-D Multi-Cell Channel
Model with Time Evolution for Enabling Virtual Field Trials, IEEE Transactions on
Antennas Propagation, 2014. http://quadriga-channel-model.de
20. T. S. Rappaport, G. R. Maccartney, M. K. Samimi, et al. Wideband Millimeter-Wave
Propagation Measurements and Channel Models for Future Wireless Communication System
Design.IEEE Transactions on Communications, vol.63, no. 9, pp.3029-3056,2015.
21. G. Calcev, D. Chizhik, B. Goeransson, S. Howard, H. Huang, A. Kogiantis, A. F. Molisch,
A. L. Moustakas, D. Reed and H. Xu, A Wideband Spatial Channel Model for System-Wide
Simulations, IEEE Trans. Vehicular Techn., March 2007.
22. Matthias Pätzold, Mobile Radio Channels, 2nd Edition, John Wiley & Sons, Chichester, UK,
2012
23. Y. Yuan, C.-X. Wang, Y. He, M. M. Alwakeel, and H. Aggoune, Novel 3D wideband
non-stationary geometry-based stochastic models for non-isotropic MIMO vehicle-to-vehicle
channels, IEEE Transactions on Wireless Communications, vol 13, no. 1, pp. 298-309, 2014.
24. A. Ghazal, C.-X. Wang, B. Ai, D. Yuan, and H. Haas, A non-stationary wideband MIMO
channel model for high-mobility intelligent transportation systems, IEEE Trans. Intell.
Transp. Syst., vol. 16, no. 2, pp. 885-897, Apr. 2015.
25. A. M. Sayeed, Deconstructing multiantenna fading channels, IEEE Trans. Signal Process.,
vol. 50, no. 10, pp. 2563-2579, Oct. 2002.
26. W. Weichselberger, M. Herdin, H. Ozcelik, and E. Bonek, A stochastic MIMO channel
model with joint correlation of both link ends, IEEE Transactions on Wireless Communi-
cations, vol. 5, no. 1, pp. 90-100, Jan. 2006.
27. J. Hoydis, S. ten Brink, and M. Debbah, Massive MIMO in the UL/DL of cellular networks:
how many antennas do we need?IEEE Journal on Selected Areas in Communications, vol.
31, no. 2, pp. 160171, 2013.
28. C. Masouros, M. Sellathurai, and T. Ratnarajah, Large-scale MIMO transmitters in fixed
physical spaces: the effect of transmit correlation and mutual coupling, IEEE Transactions on
Communications, vol. 61, no. 7, pp. 2794-2804, 2013
29. B. Clerckx, C. Craeye, D. Vanhoenacker-Janvier, and C. Oestges, Impact of antenna
coupling on 2×2 MIMO communications, IEEE Transactions on Vehicular Technology,
vol. 56, no. 3, pp. 1009-1018, 2007.
30. C. Masouros, J. Chen, K. Tong, M. Sellathurai, and T. Ratnarajah, Towards massive-MIMO
transmitters: on the effects of deploying increasing antennas in fixed physical space, in Pro-
ceedings of the Future Network and Mobile Summit, pp. 1-10, 2013.
31. Y. Fei, Y. Fan, B. K. Lau, and J. S. Thompson, Optimal single-port matching impedance for
capacity maximization in compact MIMO arrays, IEEE Trans. Antennas Propagat., vol.
56, no. 11, pp. 3566-3575, Nov. 2008.
32. R. Srinivasan, J. Zhuang, et. al., IEEE 802.16m Evaluation Methodology Document (EMD)
, IEEE 802.16m-08/004r2,July 2008.
33. A. Saleh and R. Valenzuela, A Statistical Model for Indoor Multipath Propagation,IEEE
J. Select. Areas Commun., Vol. SAC-5, No. 2, pp. 128-137, Feb. 1987.
34. Wu Zhizhong, Mobile Communications Radio Waves Propagation, Beijing: People's Posts
and Telecommunications Publishing House, 2002. (in Chinese)
35. Nan Wang, Modern Uniform Geometrical Theory of Diffraction, Xi'an: Xi'an Electronic
Science & Technology University Press, 2010. (in Chinese)
36. J. Pascual-García, M.-T. Martinez-Ingles, J. M. Molina Garcia-Pardo, J. V. Rodríguez, and
L. Juan-Llácer, Using tuned diffuse scattering parameters in ray tracing channel modeling,
in Proc. 9th European Conf. Antennas and Propagation (EuCAP 2015), Lisbon, Portugal,
pp. 1-4.
37. Pekka Kyösti and Tommi Jämsä, Complexity Comparison of MIMO Channel Modelling
Methods, ISWCS'07, Trondheim, Norway, October 2007
38. J. Chen, X. Yin, L. Tian and M. D. Kim,"Millimeter-Wave Channel Modeling Based on A
Unified Propagation Graph Theory." IEEE Communications Letters 21(2): 246-249, 2017.
39. Elektrobit Ltd., Propsound - multi-dimensional radio channel sounder, System specifica-
tions document, Concept and specifications, Technical report, 2004.
40. Channelsounder.de, MEDAV GmbH, [Online]. Available:http://www.channelsounder.de/
41. V. Kolmonen, J. Kivinen, L. Vuokko, and P. Vainikainen, 5.3-GHz MIMO radio channel
sounder, IEEE Trans. Instrum. Meas., vol. 55, no. 4, pp. 1263-1269, Aug. 2006.
42. K. Kitao, K. Saito, Y. Okano, T. Imai and J. Hagiwara, Basic study on spatio-temporal
dynamic channel properties based on channel sounder measurements,Asia Pacific Micro-
wave Conference (APMC), pp. 1064-1067, 2009.
43. W. Newhall, T. Rappaport, and D. Sweeney, A spread spectrum sliding correlator system for
propagation measurements, RF Design, pp. 4054, Apr. 1996.
44. S. Salous, R. Lewenz, I. Hawkins, N. Razavi-Ghods, and M. Abdallah,Parallel receiver
channel sounder for spatial and MIMO characterization of the mobile radio channel, in Proc.
Inst. Elect. Eng. Commun., vol. 152, no. 6, pp. 912918, Dec. 2005.
45. Y. Konishi, M. Kim, M. Ghoraishi, J. Takada, S. Suyama, and H. Suzuki, Channel sounding
technique using MIMO software radio architecture, in Proc. 5th EuCAP, Rome, Italy, Apr.
2011, pp. 25462550.
46. K. Minseok, J. Takada and Y. Konishi, Novel Scalable MIMO Channel Sounding Technique
and Measurement Accuracy Evaluation With Transceiver Impairments,IEEE Transactions
on Instrumentation and Measurement, 61(12): 3185-3197, 2012.
47. H. Sawada, Y. Shoji, H. Ogawa, NICT propagation data, IEEE 802.15-06/0012-01-003c,
Jan. 2006
48. A. Maltsev, R. Maslennikov, A. Sevastyanov, A. Khoryaev, and A. Lomayev, Experimental
investigations of 60 GHz wireless systems in office environment, IEEE J. Sel. Areas
Commun., vol. 27, no. 8, pp.1488-1499, Oct. 2009.
49. C. Gustafson, K. Haneda, S. Wyne and F. Tufvesson, On mm-wave multi-path clustering
and channelmodeling, IEEE Trans. Antennas Propag., vol. 62, no. 3, pp. 1445 -1455, 2014.
50. J. I. Tamir, T. S. Rappaport, Y. C. Eldar, and A. Aziz, Analog compressed sensing for rf
propagation channel sounding, in 2012 I.E. International Conference on Acoustics, Speech
and Signal Processing (ICASSP), pp. 5317-5320, March 2012.
51. Zhu Jin, Wang Haiming and Hong Wei, Large-Scale Fading Characteristics of Indoor
Channel at 45-GHz Band, IEEE Antennas and Wireless Propagation Letters, 14:735-738,
2015
52. X. Gao, O. Edfors, F. Rusek, and F. Tufvesson, Linear precoding performance in measured
very-large MIMO channels, in Proceedings of the 74th IEEE Vehicular Technology Con-
ference (VTC 11), pp.15, Budapest, Hungary, Sept. 2011.
53. S. Payami and F. Tufvesson, Channel measurements and analysis for very large array
systems at 2.6 GHz, in Proceedings of the 6th European Conference on Antennas and
Propagation (EuCAP 12), pp. 433437, Prague, Czech Republic, March 2012.
54. X. Gao, F. Tufvesson, O. Edfors, and F. Rusek, Measured propagation characteristics for
very-large MIMO at 2.6 GHz, in Proceedings of the 46th IEEE Asilomar Conference on
Signals, Systems and Computers (ASILOMAR 12), pp. 295299, Pacific Grove, Calif, USA,
November 2012.
55. X. Gao, O. Edfors, F. Rusek and F. Tufvesson, Massive MIMO performance evaluation
based on measured propagation data,IEEE Transactions onWireless Communications, PP
(99): 1-1, 2015.
56. J. Flordelis, X. Gao, G. Dahman, F. Rusek, O. Edfors and F. Tufvesson, Spatial Separation
of Closely-Spaced Users in Measured Massive Multi-User MIMO Channels, in Proc. IEEE
International Conference on Communications (ICC), London, June 2015.
57. J. Hoydis, C. Hoek, T. Wild and S. Ten Brink, Channel measurements for large antenna
arrays. 2012 International Symposium on Wireless Communication Systems (ISWCS) , 2012
58. A. O. Martinez, E. De Carvalho and J. O. Nielsen, Towards very large aperture massive
MIMO:A measurement based study. Globecom Workshops (GC Wkshps), 2014
59. L. Liu, C. Tao, D. Matolak, Y. Lu, B. Ai, H. Chen, Stationarity Investigation of a LOS
Massive MIMO Channel in Stadium Scenarios, in Proc. of IEEE 82th Vehicular Technology
Conference (VTC Fall), 2015
60. D. Fei, R. He, B. Ai, B. Zhang, K. Guan, and Z. Zhong,Massive MIMO Channel Measure-
ments and Analysis at 3.33 GHz, ChinaCom, 2015
61. IEEE P802.11p-2010:Part 11:Wireless LAN Medium Access Control (MAC) and Physical
Layer (PHY) Specifications:Amendment 6:Wireless Access in Vehicular Environments, Jul.
15, 2010, DOI:10.1109/IEEESTD.2010. 5514475.
62. A. Roivainen, P. Jayasinghe, J. Meinilau, V. Hovinen and M. Latva-Aho, Vehicle-to-vehicle
radio channel characterization in urban environment at 2.3 GHz and 5.25 GHz,In Proc. of
IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Com-
munication (PIMRC), 2014
63. J. Karedal, F. Tufvesson, N. Czink, A. Paier, C. Dumard, T. Zemen, C. F. Mecklenbrauker
and A. F. Molisch, A geometry-based stochastic MIMO model for vehicle-to-vehicle
communications,IEEE Transactions on Wireless Communications, 8(7): 3646-3657, 2009.
64. T. Abbas, J. Karedal, F. Tufvesson, A. Paier, L. Bernado and A. F. Molisch, Directional
Analysis of Vehicle-to-Vehicle Propagation Channels. inProc. of IEEE 73rd Vehicular
Technology Conference (VTC Spring),2011
65. L. Bernado, T. Zemen, F. Tufvesson, A. F. Molisch and C. F. Mecklenbrauker, Delay and
Doppler Spreads of Nonstationary Vehicular Channels for Safety-Relevant Scenarios,IEEE
Transactions on Vehicular Technology, 63(1): 82-93, 2014.
66. L. Bernado, T. Zemen, F. Tufvesson, A. F. Molisch and C. F. Mecklenbrauker, Time- and
Frequency-Varying K-Factor of Non-Stationary Vehicular Channels for Safety-Relevant
Scenarios, IEEE Transactions on Intelligent Transportation Systems, 16(2): 1007-1017,
2015.
67. O. Renaudin, V. M. Kolmonen, P. Vainikainen and C. Oestges, Wideband Measurement-
Based Modeling of Inter-Vehicle Channels in the 5-GHz Band,IEEE Transactions on
Vehicular Technology, 62(8): 3531-3540, 2013.
68. He Ruisi, O. Renaudin, V. M. Kolmonen, K. Haneda, Zhong Zhangdui, Ai Bo and C. Oestges,
Characterization of Quasi-Stationarity Regions for Vehicle-to-Vehicle Radio
Channels,IEEE Transactions on Antennas and Propagation, 63(5): 2237-2251, 2015.
69. M. Walter, U. C. Fiebig and A. Zajic, Experimental Verification of the Non-Stationary
Statistical Model for V2V Scatter Channels, in Proc. of IEEE 80th Vehicular Technology
Conference (VTC Fall), 2014
70. M. Boban, J. Barros and O. K. Tonguz, Geometry-Based Vehicle-to-Vehicle Channel
Modeling for Large-Scale Simulation,IEEE Transactions on Vehicular Technology, 63(9):
4146-4164, 2014.
71. He Ruisi, A. F. Molisch, F. Tufvesson, Zhong Zhangdui, Ai Bo and Zhang Tingting,
Vehicle-to-Vehicle Propagation Models With Large Vehicle Obstructions,IEEE Trans-
actions on Intelligent Transportation Systems, 15(5): 2237-2248, 2014.
72. K. Amiri,Y. Sun, P. Murphy,C.Hunter, J.R.Cavallaro, andA. Sabharwal, WARP, a unified
wireless network testbed for education and research, in Proc. IEEE MSE, 2007, pp. 5354.
73. B. Ai, X. Cheng, T. Kürner, Z. D. Zhong, K. Guan, R. S. He, L. Xiong, D. W. Matolak, D. G.
Michelson, C. Briso-Rodriguez, Challenges Toward Wireless Communications for High-
Speed Railway. IEEE Transactions on Intelligent Transportation Systems , vol. 15, no. 5:
2143-2158.2014
74. R.S. He, Z. Zhong, Bo Ai, G. Wang, J. Ding, A.F. Molisch , Measurements and Analysis of
Propagation Channels in High-Speed Railway Viaducts,IEEE Transactions on Wireless
Communications, vol.12, no.2, pp.794,805, February 2013
75. Tao Zhou; Cheng Tao; Liu Liu; Zhenhui Tan, Ricean K-Factor Measurements and Analysis
for Wideband Radio Channels in High-Speed Railway U-Shape Cutting
Scenarios,Vehicular Technology Conference (VTC Spring), 2014 I.E. 79th, pp.1,5, 18-21
May 2014
76. Guan Ke, Zhong Zhangdui, Ai Bo and T. Kurner, Propagation Measurements and Analysis
for Train Stations of High-Speed Railway at 930 MHz,IEEE Transactions on Vehicular
Technology, 63(8): 3499-3516, 2014.
77. Guan Ke, Zhong Zhangdui, Ai Bo and T. Kurner, Propagation Measurements and Modeling
of Crossing Bridges on High-Speed Railway at 930 MHz,IEEE Transactions on Vehicular
Technology, 63(2): 502-517, 2014.
78. Guan Ke, Zhong Zhangdui, J. I. Alonso and C. Briso-Rodriguez, Measurement of Distrib-
uted Antenna Systems at 2.4 GHz in a Realistic Subway Tunnel Environment,IEEE Trans-
actions on Vehicular Technology, 61(2): 834-837, 2012.
79. R. He, Z. Zhong, B. Ai, K. Guan, B. Chen, J. I. AIonso, and C. Briso,Propagation channel
measurements and analysis at 2.4 GHz in subway tunnels, IET Microwaves, Antennas &
Propagation, vol. 7, no. 11, pp. 934941, 2013
80. K. Guan, B. Ai, Z. Zhong, C.F. Lopez, L. Zhang, C. Briso-Rodriguez, A. Hrovat, B. Zhang,
R. He, T. Tang, Measurements and Analysis of Large-Scale Fading Characteristics in
Curved Subway Tunnels at 920 MHz, 2400 MHz, and 5705 MHz,Intelligent Transportation
Systems, IEEE Transactions on , vol.PP, no.99, pp.1,13, 2015
81. J. Li, Y. Zhao, J. Zhang, R. Jiang, C. Tao, Z. Tan, Radio channel measurements and analysis
at 2.4/5GHz in subway tunnels,Communications, China, vol.12, no.1, pp.36,45, Jan. 2015
82. WINNER II D1.1.2 WINNER II Channel Models Part II Radio Channel Measurement and
Analysis Results, IST-4-027756 V1.0, Sept. 2007. http://www. istwinner.org/ deliverables.
html
83. Qian Wang, Chunxiu Xu, Min Zhao and Deshui Yu, Results and analysis for a novel channel
measurement applied in LTE-R at 2.6 GHz,in Proc. of Wireless Communications and
Networking Conference (WCNC), 2014 IEEE,2014
84. E. J. Violette, R. H. Espeland, R. O. DeBolt and F. K. Schwering, Millimeterwave propa-
gation at street level in an urban environment,IEEE Transactions on Geoscience and Remote
Sensing, vol. 26, pp. 368-380, 1988.
85. H. J. Thomas, R. S. Cole and G. L. Siqueira, An experimental study of the propagation of
55 GHz millimeter waves in an urban mobile radio environment, IEEE Transactions on
Vehicular Technology, vol. 43, pp. 140-146, 1994.
86. L. M. Correia, J. J. Reis and P. O. Frances, Analysis of the average power to distance decay
rate at the 60 GHz band, in IEEE in Vehicular Technology Conference, 1997.
87. A. M. Hammoudeh, M. G. Sanchez and E. Grindrod, Experimental analysis of propagation
at 62 GHz in suburban mobile radio microcells, IEEE Transactions on Vehicular Technol-
ogy, vol. 48, pp. 576-588, 1999.
88. K. Sato, H. Kozima, H. Masuzawa, T. Manabe, T. Ihara, Y. Kasashima, K. Yamaki, Mea-
surements of reflection characteristics and refractive indices of interior construction materials
in millimeter-wave bands,in Proc. of IEEE 45th Vehicular Technology Conference(VTC
Spring 1995), vol.1, pp.449,453, 25-28 Jul 1995
89. Thomas Zwick, Troy J. Beukema, and Haewoon Nam, Wideband Channel Sounder With
Measurements and Model for the 60 GHz Indoor Radio Channel,IEEE Trans. On Vehicular
Technology, vol. 54, no. 4, pp.1266-1277, JULY 2005.
90. X. Hao, T. S. Rappaport, R. J. Boyle and J. H. Schaffner, 38-GHz wide-band point-to-
multipoint measurements under different weather conditions,IEEE Communications Letters,
vol. 4, pp. 7-8, 2000.
91. H. Xu, V. Kukshya, and T. S. Rappaport, Spatial and Temporal Characteristics of 60 GHz
Indoor Channels, IEEE J. Sel. Areas Commun., vol. 20, no. 3, pp. 620630, Apr. 2002.
92. T. S. Rappaport, E. Ben-Dor, J. N. Murdock and Q. Yijun, 38 GHz and 60 GHz angle-
dependent propagation for cellular & peer-to-peer wireless communications, in IEEE
International Conference on Communications (ICC), 2012.
93. E. Ben-Dor, T. S. Rappaport, Q. Yijun and S. J. Lauffenburger, MillimeterWave 60 GHz
Outdoor and Vehicle AOA Propagation Measurements Using a Broadband Channel
Sounder, in IEEE Global Telecommunications Conference (GLOBECOM), 2011.
111. X. Wu, C.X. Wang, J. Sun, J. Huang, R. Feng,Y. Yang, X. Ge, 60 GHz Millimeter-Wave
Indoor Channel Measurements and Modeling for 5G Systems, IEEE Trans. Antennas Propag,
2017, vol. 65, no. 4, pp. 1912-1924, 2017.
112. Mingyang Lei, Jianhua Zhang, Tian Lei, Detao Du, 28-GHz Indoor Channel Measurements
and Analysis of Propagation Characteristics,IEEE 25th International Symposium on Per-
sonal, Indoor and Mobile Radio Communications (PIMRC 2014),2014
113. Zhang Nan, Yin Xuefeng, S. X. Lu, Du Mingde and Cai Xuesong, Measurement-based
angular characterization for 72 GHz propagation channels in indoor environments,
Globecom Workshops (GC Workshps), 2014
114. YIN Xuefeng, LING Cen and KIM Myung-Don, Experimental Multipath-Cluster Charac-
teristics of 28-GHz Propagation Channel. IEEE Access, 3, 2015. 3138-3150
115. X. Zhao, Q. Wang, S. Li, S. Geng, M. Wang, S. Sun, and Z. Wen, Attenuation by human
bodies at 26 and 39.5 GHz millimeter wave bands, IEEE Antennas Wireless Propag. Lett.,
vol. 19, pp. 1229 1232, Nov. 2016.
116. R. O. Schmidt, Multiple emitter location and signal parameter estimation, IEEE Trans.
Antennas Propagat., vol. AP-34, pp. 276280, Mar.1986.
117. R. Roy and T. Kailath, ESPRIT - Estimation of signal parameters via rotational invariance
techniques, IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 984-995, July
1989.
118. A. van der Veen, M. Vanderveen, and A. Paulraj, Joint angle and delay estimation using
shift-invariance properties, IEEE Signal Processing Lett., vol. 4, pp. 142145, May 1997.
119. M. Haardt and J. Nossek, Unitary ESPRIT: How to obtain increased estimation accuracy
with a reduced computational burden, IEEE Trans. Signal Processing, vol. SP-43,
pp. 12321242, May 1995.
120. M. Zoltowski, M. Haardt, and C. Mathews, Closed-form 2-D angle estimation with rectan-
gular arrays in element space or beamspace via unitary ESPRIT, IEEE Trans. Signal
Processing, vol. SP-44, pp. 316328, Feb. 1996.
121. J. Fuhl, J.-P. Rossi, and E. Bonek, High-resolution 3-D direction-of-arrival determination for
urban mobile radio, IEEE Trans. Antennas Propagat., vol. AP-45, pp. 672-682, Apr. 1997.
122. G. McLachlan and T. Krishnan, The EM Algorithm and Extensions. Probability and Statis-
tics. New York:Wiley, 1996
123. A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood from incomplete data
via the EM algorithm, J. Royal Statist. Soc., Ser. B, vol. 39, no. 1, pp. 138, 1977.
124. J. A. Fessler and A. O. Hero, Space-alternating generalized expectation- maximization
algorithm, IEEE Transactions on Signal Processing, Vol 42, no 10, pp 2664-2677, 1994.
125. B. H. Fleury, D. Dahlhaus, R. Heddergott, and M. Tschudin, Wideband angle of arrival
estimation using the SAGE algorithm, in Proc. of the IEEE Fourth Int. Symp. on Spread
Spectrum Techniques and Applications (ISSSTA 96), Mainz, Germany, pp. 79-85, Sept.
1996.
126. B. H. Fleury, M. Tschudin, R. Heddergott, D. Dahlhaus, and K. I. Pedersen, Channel
parameter estimation in mobile radio environments using the SAGE algorithm, IEEE
J. Sel. Areas Commun., vol. 17, no. 3, pp. 434-450, Mar. 1999.
127. M. Tschudin, R. Heddergott and P. Truffer, Validation of a high resolution measurement
technique for estimating the parameters of impinging waves in indoor environments, in
Proc. of The Ninth IEEE International Symposium on Personal, Indoor and Mobile Radio
Communications(PIMRC 1998),1998
128. B. H. Fleury, Yin Xuefeng, K. G. Rohbrandt, P. Jourdan and A. Stucki, Performance of a
high-resolution scheme for joint estimation of delay and bidirection dispersion in the radio
channel,IEEE 55th Vehicular Technology Conference(VTC Spring 2002), 2002
129. Yin Xuefeng, B. H. Fleury, P. Jourdan and A. Stucki, Polarization estimation of individual
propagation paths using the SAGE algorithm,14th IEEE Proceedings on Personal, Indoor
and Mobile Radio Communications(PIMRC 2003), 2003
148. N. Czink, P. Cera, J. Salo, E. Bonek, J.-P. Nuutinen, J. Ylitalo, Automatic clustering of
MIMO channel parameters using the multi-path component distance measure, WPMC05,
Aalborg, Denmark, Septemter 2005
149. G. E. P. Box and Mervin E. Muller,A Note on the Generation of Random Normal
Deviates,The Annals of Mathematical Statistics (1958), Vol. 29, No. 2 pp. 610611
150. Greenstein et al, Moment method estimation of the Ricean K-factor, IEEE Comm. Lett. Vol
3, Issue 6, p.175, 1999.
151. R. Kolar, R. Jirik, J. Jan (2004) "Estimator Comparison of the Nakagami-m Parameter and Its
Application in Echocardiography", Radioengineering, 13 (1), 812
152. R. M. Norton, The Double Exponential Distribution: Using Calculus to Find a Maximum
Likelihood Estimator, The American Statistician (American Statistical Association) 38 (2):
135-136, May 1984. doi:10.2307/2683252.
153. Borradaile, Graham. Statistics of Earth Science Data. Springer. ISBN 978-3-540-43603-4.
Dec 2009.
154. D. J. Best, N. I. Fisher, Efficient Simulation of the von Mises Distribution,Applied
Statistics, vol. 28, No. 2, pp 152-157, 1979
155. Sra, S. A short note on parameter approximation for von Mises-Fisher distributions And a
fast implementation of Is(x), Computational Statistics 27:177190. 2011.doi:10.1007/
s00180-011-0232-x.
156. Andrew T. A. Wood, Simulation of the von Mises Fisher distribution, Communications in
Statistics - Simulation and Computation, 23(1):157-164, 1994.
157. S. Jung, Generating von Mises Fisher distribution on the unit sphere (S2) ,Oct. 2009.
[Online] Available at http://www.stat.pitt.edu/sungkyu/software/randvonMisesFisher3.pdf
158. J. T. Kent, The Fisher-Bingham distribution on the sphere, J. Royal. Stat. Soc., Sr. B,
44:71-80, 1982
159. John T. Kent, Asaad M. Ganeiber and Kanti V. Mardia, A new method to simulate the
Bingham and related distributions in directional data analysis with applications, Arxiv.
[Online] Available at http://arxiv.org/abs/1310.8110
160. L. Devroye, Non-Uniform Random Variate Generation,Springer-Verlag.1986
161. Donald E. Knuth, Art of Computer Programming, Volume 2:Seminumerical Algorithms (3rd
Edition), Addison-Wesley Professional, November 14, 1997
162. F. Babich and G. Lombardi, Statistical analysis and characterization of the indoor propaga-
tion channel, IEEE Trans. Commun., vol. 48, no.3, pp. 455464, Mar. 2000.
163. WINNER1 WP5:Final Report on Link Level and System Level Channel Models Deliver-
able D5.4, 18.11.2005
164. WINNER+ D5.3, Final channel models, V1.0, CELTIC CP5-026 WINNER+ project.http://
projects.celticinitiative.org/winner+/deliverables_winnerplus.html, 2010.
165. M. Steinbauer, A. F. Molisch, Spatial channel models, in Wireless Flexible Personalized
Communications, L. Correia (ed.), John Wiley & Sons, Chichester, 2001.
166. L. Correia, Ed., Mobile Broadband Multimedia Networks. Academic Press, 2006.
167. L. Liu, J. Poutanen, F. Quitin, K. Haneda, F. Tufvesson, P. D. Doncker, P. Vainikainen, and
C. Oestges, The COST2100 MIMO channel model, IEEE Wireless Commun. , vol. 19, no.
6, pp. 9299, Dec. 2012.
168. C. Oestges,C. Brennan, F. Fuschini, M. L. Jakobsen,S. Salous, C. Schneider and
F. Tufvesson, Radio Channel Measurement and Modelling Techniques, Chapter 9 of
Cooperative Radio Communications for Green Smart Environments, River Publishers 2016
169. 3GPP TR 25.996,Technical Specification Group Radio Access Network; Spatial channel
model for Multiple Input Multiple Output (MIMO) simulations, V11.0.0, 2012-09. http://
www.3gpp.org
170. 3GPP TR36.873, Study on 3d channel model for lte, 3rd Generation Partnership Project,
v12.1.0. 2015. http://www.3gpp.org
171. V. Erceg, et al. IEEE P802.11 Wireless LANs -TGn Channel Models, IEEE 802.11-03/
940r4, May 2004.
172. G. Breit, et al. IEEE P802.11 Wireless LANs - TGac Channel Model Addendum, IEEE
802.11-09/0308r12, March 2010.
173. Cost Final Report, http://www.lx.it.pt/cost231/
174. R. Verdone and A. Zanella, Pervasive mobile and ambient wireless communications,
COST Action 2100, Springer, 2012.
175. L. Liu, Implementation of the COST 2100 model, v2.2.5 [Online]. Available:http://code.
google.com/p/cost2100model/
176. L. Hentilä, P. Kyösti, M. Käske, M. Narandzic, and M. Alatossava. (2007, December)
MATLAB implementation of the WINNER Phase II Channel Model ver1.1 [Online]. Avail-
able: https://www.ist-winner.org/phase_2_model.html
177. J. Medbo and P. Schramm, Channel models for HIPERLAN/2, ETSI/BRAN document
no. 3ERI085B.
178. ITU-R Document 5D/TEMP/332(Rev.1): Preliminary draft new Report ITU-R M.[IMT-
2020.EVAL] Channel model (part), 27th meeting of WP 5D, Niagara Falls, Canada,
June 2017
179. Jian Luo, et. al., Channel Sounding for 802.11ay, IEEE 802.11-15/0631r0, May 2015
180. J. Järveläinen and K. Haneda, Sixty gigahertz indoor radio wave propagation prediction
method based on full scattering model, Radio Science, vol. 49, no. 4, pp. 293-305, 2014.
181. METIS D1.2, Initial channel models based on measurements, v1.0, 2014. https://www.
metis2020.com/
182. 3GPP TR 38.901. Study on channel model for frequencies from 0.5 to 100 GHz (Release 14).
3rd Generation Partnership Project (3GPP), V14.0.0, Mar. 2017.
183. P. Agrawal and N. Patwari, Correlated link shadow fading in multihop wireless networks,
IEEE Trans. Wireless Commun., vol. 8, no. 8, pp. 40244036, Aug. 2009.
184. T. Jämsä and P. Kyösti, Device-to-device extension to geometry-based stochastic channel
models, EuCAP 2015, Lisbon, Portugal, 2015.
185. W. Fan, T. Jämsä, J. O. Nielsen and G. F. Pedersen, On Angular Sampling Methods for 3-D
Spatial Channel Models, IEEE Antennas and Wireless Propagation Letters, Vol. 14, 2015.
Chapter 4
Software Simulation
Starting from the requirements and technical indicators of the 5G network system, this
chapter analyzes the technical challenges in designing and realizing a 5G software
simulation system, and introduces link level simulation and system simulation
techniques, with a focus on test and evaluation methods, key technologies and appli-
cations, thereby providing a technical reference for the design and realization of 5G
network system software testing.
(Figure excerpt: simulation design framework — Requirement Analysis (networking scenarios, network characteristics, service type, QoS, performance index, other requirements) and System Modeling (link level model, system level model, channel model, traffic model, antenna pattern, other models).)
given input parameters and models, so as to check whether the realization of the
simulation system is correct. Due to the introduction of massive MIMO and other new
technologies, a 5G simulation system needs to carry out extensive calibration
work according to the new simulation parameters and models to guarantee the
correctness of the simulation output results.
From the above analysis, it is clear that the software simulation process of
a whole wireless system is very complex, so the workload of building a simu-
lation system software is huge. To improve efficiency, simulation designers can
develop the simulation system with the help of proven commercial software
packages. Common commercial software tool packages include MATLAB,
OPNET [6], NS2/NS3 [7], QualNet [8], etc. All of these packages provide the
basic building blocks that the software framework of a communication system needs,
including a modeler, a model library, a simulation kernel and a post-processor,
although the way these parts are realized and the focus of the provided model
libraries may differ. The design and development of a 5G software simulation and
testing system inherits from and builds on 4G and earlier technologies, and early
simulation platforms are of great reference value for building 5G simulation software.
Meanwhile, since the 5G system will introduce many new functions and new
technologies, designers and developers need to deeply analyze the characteristics and
realization plans of the candidate technologies so that they can design and realize
the 5G software simulation system efficiently.
As an entirely new network, 5G is the wireless communication system that will serve
the information society of 2020. Currently, 5G has entered the critical stage of key
technology breakthroughs and standards formulation, and the corresponding simula-
tion requirements of these key technologies will push 5G simulation to a new level in
terms of system architecture and simulation efficiency. Evaluating the performance
indicators of a 5G network comprehensively, rapidly and accurately is therefore the
essential requirement for designing and realizing 5G software simulation systems.
Comprehensiveness is the functional requirement of the software simulation
system, which mainly refers to comprehensive support for 5G performance
indicators and candidate technologies. The 5G performance indicators include Key
Performance Indicators (KPIs) such as peak data rate, guaranteed minimum user
data rate, connection density, service traffic volume density, wireless delay, end-to-
end delay [9], etc. They also include secondary indicators that support these KPIs,
such as SINR, paging success rate, access success rate, handover success rate, etc.
The software simulation system must provide corresponding modeling, statistics and
evaluation for these new performance indicators. 5G candidate technologies can be
classified into two types, namely air interface technology and network technology.
Air interface technology includes EE-SE co-design, massive antenna, full duplex,
novel multiple access, new waveform, new modulation and coding, software
defined air interface, sparse mining, high frequency band communication, spectrum
sharing and flexible application, etc. Network technologies include C-RAN,
SDN/NFV, Self-Organizing Network (SON), Ultra Dense Network (UDN), multi-
network convergence (multi-RAT) and D2D [9], etc. The impact of these new technologies on existing network design and realization can be divided into three levels: the architecture level, the network element level and the module level. New technologies at the architecture level, such as SDN and SON, have the greatest impact on the design of the software simulation system, because they require modeling the new network architecture and designing new network element types, new interfaces between network elements and new protocol stacks. All these changes are fundamental and call for large-scale changes to the entire network design, which makes the design more difficult and brings more workload to software simulation. New technologies at the network element level, such as massive MIMO, involve coordinated design between multiple modules, but their influence is mainly limited to the network elements, so there is no need to make big changes to the network architecture or the interfaces between network elements. In comparison, the influence scope of new technologies at the module level, for instance new modulation and coding technologies, is much smaller: it is limited to one function module within the network element, and other related modules can support it with small modifications. The main work of the software simulation system therefore includes comprehensively analyzing every 5G candidate technology, proposing the corresponding resource models, and designing and implementing the corresponding schemes, which together guarantee the completeness of the 5G software simulation system functions.
Rapidity is the time efficiency requirement for the software simulation system. This requirement is embodied in two aspects: the system should substantially reduce the simulation time of 5G network traffic and performance evaluation, and it should quickly adapt to changes and adjustments of the network architecture. Some 5G technologies [10] involve huge numbers of simulation objects; for example, the base station antenna scale of massive MIMO can exceed 128, and the number of cells under UDN reaches hundreds or even more. Some simulation features also have very large computational overheads; for example, massive MIMO precoding involves a large number of matrix operations with dimensions of more than 128 × 30. The large scale of simulation objects and the high computational complexity require a great improvement in the computational performance of the software simulation system over existing systems. Only in this way can the requirements of simulation tasks and timely evaluation be met. This sharp growth in computational performance requirements calls for a brand-new, systematic design and implementation of the hardware platform, the software platform and the simulation program. For the software simulation system, the key problem is how to complete the concurrent software design and coding implementation on a new hardware platform with powerful computing capability, which requires the simulation program to realize the greatest concurrency on key computation paths,
such as concurrent processing at a granularity down to the subcarrier level. Rapidity is also reflected in a fast response, at relatively small cost, to changes in the 5G network architecture. Different from previous wireless communications systems, 5G has many network-level candidate technologies with great changes in network architecture and rich supporting scenarios, so the simulation system needs to be flexible enough. Its architecture design, module design and interface design should have characteristics such as decoupling, modularity, interface scalability and ease of integration, which puts forward higher requirements on network resource model design, software function modular design, and the interface design between network elements and modules. In general, the complete cycle of development and validation of a simulation task should be as short as possible, so that it can meet the rapid verification and selection requirements of 5G candidate technologies.
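To make the idea of subcarrier-level concurrency more concrete, the following Python sketch (an illustrative example of ours, not code from any specific 5G simulator; the function names and dimensions are hypothetical) expresses the per-subcarrier work as an independent function and dispatches it over a process pool, so the key computation path scales with the number of available cores.

# Hypothetical sketch: per-subcarrier processing dispatched over a process pool.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_subcarrier(args):
    """Equalize one subcarrier: y = H x + n, return the ZF estimate of x."""
    H, y = args                      # H: (rx, tx) channel, y: (rx,) received samples
    return np.linalg.pinv(H) @ y     # zero-forcing equalization on this subcarrier

def process_symbol(H_all, y_all, workers=8):
    """Process all subcarriers of one OFDM symbol concurrently."""
    tasks = list(zip(H_all, y_all))  # one (H, y) pair per subcarrier
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_subcarrier, tasks, chunksize=64))

if __name__ == "__main__":
    n_sc, n_rx, n_tx = 1200, 4, 2    # e.g. 1200 subcarriers, a 4 x 2 link per subcarrier
    rng = np.random.default_rng(0)
    H_all = rng.standard_normal((n_sc, n_rx, n_tx)) + 1j * rng.standard_normal((n_sc, n_rx, n_tx))
    x = rng.standard_normal((n_sc, n_tx)) + 1j * rng.standard_normal((n_sc, n_tx))
    y_all = np.einsum("kij,kj->ki", H_all, x)
    x_hat = process_symbol(H_all, y_all)
    print(np.allclose(x_hat, x))     # ZF recovers x exactly in the noiseless case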
Accuracy is the performance requirement for software simulation systems. One of the main factors influencing simulation accuracy is the simplification introduced in network modeling, simulation service abstraction, design and realization. Such simplifications are usually the result of a compromise between simulation authenticity and simulation calculation cost, i.e., the calculation cost is reduced at the expense of authenticity. Take the widely used EESM/MIESM link-to-system mapping method as an example. This method makes an equivalent mapping for the link model, in which the simulated link-level process is equivalently mapped to several system-level performance indicators, so as to simplify the computation of the link behavior in the system simulation; yet systematic errors are also introduced. In order to reflect the impact of the link process on the system simulation more accurately, hardware and software methods can be combined to reproduce the link process faithfully: the link to be simulated is deployed in hardware (such as an FPGA or a channel emulator) for real-time calculation, and the simulation accuracy is improved through hardware-based computation. Other simulation processes can be handled similarly. When the computational overhead allows, the simulation accuracy can be improved at the cost of additional computational resources.
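As an illustration of the EESM idea mentioned above, the following sketch (our own example, not taken from the book's platform) maps a vector of per-subcarrier SINRs to a single effective SINR, SINR_eff = -beta * ln((1/N) * sum_n exp(-SINR_n / beta)), which can then be looked up against an AWGN BLER curve; the calibration factor beta is modulation- and coding-dependent and is treated here as a given, hypothetical parameter.

import numpy as np

def eesm_effective_sinr(sinr_linear, beta):
    """Exponential Effective SINR Mapping (EESM).
    sinr_linear: per-subcarrier SINR values (linear scale, not dB).
    beta: calibration factor fitted per modulation and coding scheme."""
    sinr = np.asarray(sinr_linear, dtype=float)
    return -beta * np.log(np.mean(np.exp(-sinr / beta)))

# Example: per-subcarrier SINRs of 3, 6 and 9 dB, with a hypothetical beta value.
sinr_db = np.array([3.0, 6.0, 9.0])
sinr_eff = eesm_effective_sinr(10 ** (sinr_db / 10), beta=1.5)
print(10 * np.log10(sinr_eff), "dB effective SINR")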
On the basis of the above summary of the requirements, objectives and implementation ideas of 5G network software simulation systems, the overall technical roadmap can be obtained in detail, as shown in Fig. 4.2. All the key supporting techniques involved will be discussed in detail in subsequent chapters.
Based on the above analysis of the simulation requirements, we take the requirement for rapidity as an example to briefly introduce some common solutions. All present communications networks have complex system architectures, and it usually takes a very long time to complete the simulation calculation for a specific task: the time needed for a communications system simulation is at least a few days, and can be as long as a month or even longer. Aiming at this problem, in addition to the advanced simulation techniques shown in Fig. 4.2, and as the old saying goes, "good tools are prerequisite to the successful execution of a job", the following common measures can improve the simulation efficiency from the perspective of hardware configuration.
1. High-configuration desktop. The simplest and most direct way to improve the simulation efficiency is to buy a high-configuration computer, e.g., one with multi-core high-speed processors, large memory and a high-speed hard disk. This is the most common practice.
2. Multicore servers. This approach refers to purchasing multicore servers to improve the simulation efficiency. To achieve the best effect on a server, a parallel program also needs to be designed so that each CPU core runs its own part of the simulation.
3. High-performance supercomputer. Supercomputers offer higher computing rates and provide dedicated parallel tools and job scheduling systems, which is very convenient for users. Renting a supercomputer is not only convenient and fast but also cost controllable, since charges are based on the actual number of CPU cores used and the running time of the simulation program.
The main factors influencing the design and implementation of system simulation software can be divided into three categories: architecture, function and performance. Architecture reflects the design of the components of the system simulation software and the definition of their interfaces. Function embodies the simulation objects and the behavior scope of the system simulation software. Performance shows the computational performance that can be reached by the system simulation software and its platform. As shown in Table 4.1, the effects of the main new 5G candidate technologies on the simulation software design and implementation can also be analyzed from these three aspects.
Table 4.1 Analysis of the impact of 5G key technologies on system simulation software

Massive MIMO
  Overview: hundreds of antennas (128, 256 or more) are installed at the base station to realize massive MIMO data transmission; this involves precoding design, pilot design, feedback design, receiver algorithms, large-scale active array design, and channel measurement and modeling.
  Impact on software architecture: no impact.
  Impact on simulation function: functional enhancements of the channel model and the physical layer are added, including precoding algorithms, channel estimation and signal detection.
  Impact on computational performance: the antenna number increases greatly; interference calculation, precoding, receiver, scheduling and other simulation modules show sharply increased calculation intensity.

EE-SE joint design
  Overview: energy efficiency and spectrum efficiency are jointly designed.
  Impact on software architecture: no impact.
  Impact on simulation function: new algorithm functions are added.
  Impact on computational performance: depends on the computational complexity of the joint design algorithm.

New coding and modulation
  Overview: includes FQAM, Raptor-type codes, LDPC codes, Polar codes, Gray coding of non-uniformly distributed APSK, joint coded-modulation diversity techniques and other new coding and modulation technologies.
  Impact on software architecture: no impact.
  Impact on simulation function: the physical layer module supports new coding and modulation; the link adaptation module is enhanced.
  Impact on computational performance: no impact.

High frequency communication
  Overview: communications are realized in the short-wavelength millimeter wave bands, which can provide more than ten times the bandwidth of 4G.
  Impact on software architecture: no impact.
  Impact on simulation function: channel model, beamforming and some other functions are added.
  Impact on computational performance: calculation intensity increases with the system bandwidth.

C-RAN
  Overview: a green wireless access network architecture based on centralized processing, collaborative radio and real-time cloud computing.
  Impact on software architecture: virtualization requires the various modules of the simulation system to be fully decoupled.
  Impact on simulation function: new network types such as the virtual cell are supported; topology management, wireless resource management and resource scheduling modules are enhanced.
  Impact on computational performance: calculation intensity depends on the complexity of the centralized management algorithm.

UDN
  Overview: the deployment density of low power stations in cellular hot spots is increased to improve system capacity and network coverage and to reduce delay and energy consumption.
  Impact on software architecture: it is necessary to support ultra-dense node deployment, the virtual cell and other characteristics.
  Impact on simulation function: topology management, interference calculation and handover modules should be enhanced.
  Impact on computational performance: calculation intensity increases with the number of base stations.

SDN
  Overview: the control plane and data plane are separated; the control function is centralized; the underlying network infrastructure is abstracted; and interfaces for applications and network services are provided.
  Impact on software architecture: control plane and data plane are separated; centralized and virtualized management of control modules; a unified interface is opened.
  Impact on simulation function: a new type of network element is supported; the protocol stack, resource management and resource scheduling modules all need to be enhanced.
  Impact on computational performance: calculation intensity is decided by the network scale and the complexity of the centralized management algorithm.
4.3 5G Software Link Level Simulation
Major breakthroughs in basic theories and key technologies of the wireless trans-
mission have been leading the evolution and innovation of the wireless communi-
cations system and its standardization. A series of novel multiple access
technologies and transmission technologies (such as TDMA/FDMA/CDMA/
NOMA, OFDM/MIMO, etc.) have brought about the significant improvement of
the information transmission rate and opened up a new era of the wireless technol-
ogy revolution [11]. Facing the massive growth of the future wireless data and the
rich variety of services and experience requirements, 5G transmission technologies
will explore a series of new types of multiple access and transmission mechanisms,
to greatly improve the spectrum efficiency and the energy efficiency of the wireless
systems [9].
Academia and industry at home and abroad have put forward a number of candidate technologies for 5G wireless transmission. Massive MIMO, high-efficiency modulation and coding, non-orthogonal multiple access, full duplex and other new transmission technologies have attracted great attention in 5G research [12]. Furthermore, energy efficiency has become one of the important performance indicators of the 5G system [13]. Compared with 4G technologies, the signal processing mechanisms adopted by the 5G transmission system will be even more complicated, and the performance evaluation indicators will be more multidimensional, which will challenge 5G air interface standardization, system design and simulation verification.
As shown in Fig. 4.3, massive MIMO has become one of the important 5G key transmission technologies [14]. Massive MIMO can bring huge array gain and interference suppression gain through large-scale antenna arrays, thus greatly improving the system spectrum efficiency and the cell-edge user spectrum efficiency. Current research on massive MIMO transmission mainly focuses on system capacity, performance analysis, precoding, pilot pollution and other issues. Performance results from link level simulations of massive MIMO provide important input for massive MIMO system design.
Compared with the traditional multiple access technology, the non-orthogonal
access technology based on the advanced waveform design can achieve much
higher spectral efficiency. Non-orthogonal accesses in the frequency domain, the
time domain as well as the code domain, such as Interleave Division Multiple
Access (IDMA), Low Density Spreading Multiple Access (LDSMA), spatial cou-
pling, and Power Domain Multiple Access (PDMA), have attracted wide research
attentions [15]. The performance of multiple access technologies, as the key
components of the wireless system physical layer transmission, needs comprehen-
sive evaluation and analysis with the help of link level simulation.
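As a toy illustration of the power-domain superposition idea behind some non-orthogonal access proposals (a sketch of ours, not any specific scheme such as PDMA; all power allocations and parameters are our own assumptions), the following fragment superposes two users' QPSK symbols with unequal power and recovers them with successive interference cancellation at a noiseless receiver.

import numpy as np

rng = np.random.default_rng(0)
n = 8
qpsk = lambda bits: ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

bits_near, bits_far = rng.integers(0, 2, 2 * n), rng.integers(0, 2, 2 * n)
x_near, x_far = qpsk(bits_near), qpsk(bits_far)

# Power-domain superposition: the far (weak-channel) user gets most of the power.
p_far, p_near = 0.8, 0.2
tx = np.sqrt(p_far) * x_far + np.sqrt(p_near) * x_near

# Receiver of the near user (noiseless for clarity):
# 1) detect the far user's strong signal, 2) cancel it, 3) detect its own signal.
x_far_hat = (np.sign(tx.real) + 1j * np.sign(tx.imag)) / np.sqrt(2)
residual = tx - np.sqrt(p_far) * x_far_hat
x_near_hat = (np.sign(residual.real) + 1j * np.sign(residual.imag)) / np.sqrt(2)

print(np.allclose(x_far_hat, x_far), np.allclose(x_near_hat, x_near))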
In addition, the future 5G network deployment will have distinctive heteroge-
neous characteristics [16], which include heterogeneous connections through var-
ious air interface technologies, the heterogeneous network coordination and
interference management, etc. In massive MIMO and dense coverage scenarios, highly efficient coordinated interference signal processing methods such as interference alignment are expected to bring great performance gains in system coverage, spectrum efficiency and user service experience. Complex interference modeling and interference processing mechanisms will also play important roles in improving the precision of the simulation results, and will be covered in both the 5G link level and system level simulations.
Overall, with breakthroughs in technologies such as massive MIMO, the mobile communications industry has entered the 5G era. In terms of 5G transmission technologies, on the one hand we need to carry out theoretical research and algorithm design; on the other hand, we also need to put forward the corresponding technical evaluation mechanisms and provide detailed performance evaluation results by constructing efficient and accurate link level simulation platforms.
The link level simulation is mostly used to evaluate the physical layer transmission
performance of the wireless communications system. It usually includes the down-
link simulation and the uplink simulation under certain configurations for the simu-
lated wireless channel and the adjacent cell interference environment [11]. Through
modular simulation design, the link level simulation can realize the performance
comparison of different transmission schemes with a variety of transmitter structures
and receiver algorithms. Therefore, it is able to provide the important reference for
choosing the appropriate physical layer design or implementation schemes.
Performance metrics are important in the link level simulation; they mainly include the Bit Error Rate (BER), the BLock Error Rate (BLER) or Frame Error Rate (FER), and the spectrum efficiency (bps/Hz). With the increasing concern about power consumption, the evaluation of energy efficiency is also introduced for the 5G system. These metrics are used to evaluate the reliability (BER, BLER, FER) and effectiveness (spectrum efficiency, energy efficiency) of the system transmission. The link level simulation mainly provides results showing how the above metrics change with the SNR or the SINR. These results are used to determine whether the designed physical layer transmission structure and receiving algorithm meet the system performance requirements, and they also provide design directions for algorithm improvement.
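To make the idea of a link level performance curve concrete, the following self-contained Python sketch (an illustrative example of ours, not a fragment of any 5G simulator) estimates the uncoded BER of QPSK over an AWGN channel at several Eb/N0 points by Monte Carlo simulation; a full link level simulator would replace the trivial mapper and demapper with the complete transmitter and receiver chains.

import numpy as np

def qpsk_ber_point(ebn0_db, n_bits=200_000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of uncoded QPSK BER over AWGN at one Eb/N0 point."""
    bits = rng.integers(0, 2, n_bits)
    # Gray-mapped QPSK: one bit on I, one bit on Q, unit average symbol energy.
    symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    # Es/N0 = 2 * Eb/N0 for QPSK; each real noise dimension has variance N0/2.
    n0 = 1.0 / (2 * ebn0)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(symbols.size)
                               + 1j * rng.standard_normal(symbols.size))
    received = symbols + noise
    # Hard decisions on I and Q recover the two bits of each symbol.
    bits_hat = np.empty(n_bits, dtype=int)
    bits_hat[0::2] = (received.real > 0).astype(int)
    bits_hat[1::2] = (received.imag > 0).astype(int)
    return np.mean(bits_hat != bits)

for ebn0_db in range(0, 11, 2):
    print(f"Eb/N0 = {ebn0_db:2d} dB  BER = {qpsk_ber_point(ebn0_db):.2e}")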
Compared with 4G, 5G transmission will pose higher requirements on the computing capability and the simulation speed of the link level simulation. In addition, when transmitting wireless signals in high frequency bands, the physical environment of 5G transmission will be more complicated. In order to improve the validity and authenticity of the link level simulation, researchers will model the wireless channel and the interference signals more accurately based on actual measured data. This will no doubt increase the complexity and the processing time of the wireless link simulation. For the 5G link level simulation, the software therefore needs to adopt new simulation methods, such as high speed parallelization, to cope with the complexity of the new 5G technologies, so that a series of rapid evaluations and verifications can be performed for the candidate transmission technologies.
Concerning all the above mentioned problems brought by massive MIMO, dense
network nodes, and the more complicated wireless transmission environment, the
link level simulation needs to consider a variety of advanced simulation methods
and implementation mechanisms. Meanwhile, since there are a large number of candidate technologies for 5G transmission, it is necessary to analyze their technical features in detail to determine the corresponding application scenarios as well as the evaluation process. The factors that need the most consideration in the link level simulation of the main candidate technologies include the following aspects.
Simulations for the massive MIMO channel
The channel model plays a vitally important role in the link level simulation. For the 5G link level simulation, it is necessary to carefully consider various characteristics of the massive MIMO channel such as spatial correlation, antenna coupling and near-field effects [17]. An empirical channel model can also be constructed through analysis, comparison and fitting of measured channel data in combination with theoretical analysis.
Simulations for the neighboring interference
In 5G scenarios with ultra-dense nodes, the co-channel interference will limit the network capacity [18]. For the link level simulation, the simulation needs to accurately reflect the influence of the interference links from neighboring cells. On the one hand, the real interference signals can be used in which the real-time
The main role of the physical layer transmission is to ensure the reliability and effectiveness of the wireless transmission of information. The design of each function module of the physical layer is usually completed after extensive discussion on performance analysis, algorithm evaluation and standardization [19]. Take the downlink of a wireless transmission system as an example. The signal processing modules at the transmitter mainly include channel coding, constellation mapping, multiple antenna precoding, multiple access, reference signal generation, framing, etc. The signal processing modules at the receiver mainly include cell search and synchronization, de-framing, channel estimation, multiple antenna detection, demodulation, channel decoding, etc. [20, 21]. In the link level simulation, programming is required to realize each of the above mentioned function modules. In addition, it also needs to implement the modules for the wireless fading channel and the interference links. During the establishment of the link level simulation platform, in order to guarantee the correctness of the simulation, it is also necessary to go through multiple levels of tests, in turn, including functional module testing, subsystem testing, and integrated system testing.
The wireless link level simulation system usually includes two sets of simulation systems, for the downlink and the uplink. In order to improve efficiency and portability, the software architecture design usually needs to be completed in advance.
Figure 4.4 shows an example of the architecture design of the link level simulation. In it, the library functions provide public library functions to the whole simulation system, supporting operations such as mathematical calculation, memory allocation and test comparison. In addition, the system parameter configuration section covers the configuration parameters required by the simulations. It may include system-level parameters such as the frame structure, the system bandwidth, cell information and resource allocation methods. It can also include transmitter configuration parameters such as control channel and data channel parameters, as well as receiver configuration parameters such as the configuration of the receiving algorithms. The main function completes the whole link process in accordance with the definitions and transmission mechanisms specified in the standard protocols. Real-time simulation results are saved, and the simulation evaluation curves are drawn when necessary. It is important for the architecture design to provide flexibility, testability and portability.
The realization of each function module in the uplink and downlink and the interface definitions between modules are the most basic parts of the link level simulation; they realize the indispensable functions of the physical layer transmission. They include function modules for channel coding and decoding, modulation and demodulation, multiple antenna precoding and detection, multiple access, reference signal generation and channel estimation, cell search and synchronization, and measurement and feedback [21, 22].
With the introduction of massive MIMO, non-orthogonal multiple access and other candidate technologies into the 5G transmission system, the implementation of the multiple antenna processing module and the multiple access module in the link level simulation will undergo big changes compared with the 4G transmission system.
Figure 4.5 gives the general processing flow of the transmitter for single-user transmission.
Fig. 4.5 General transmitter processing for single-user transmission: source, CRC, channel encoding, constellation mapping, precoding and multiple access, with reference signals (RS) inserted
As shown in this figure, the resulting implementation of massive MIMO and
non-orthogonal multiple access will affect the simulation modules of the reference
signals, the multi-user resource allocation, etc.
In the link level simulation, the receiver often has higher algorithmic complexity and therefore requires higher computing capability. A receiver usually involves the RF, analog front-end, digital front-end, baseband processing and other components. The link level simulation usually pays more attention to the implementation and evaluation of the baseband; the non-idealities introduced by the RF link, the analog front-end and the digital front-end are modeled and fed into the baseband simulation. Figure 4.6 shows the main processing of a receiver. For simplicity, data extraction modules such as de-framing and resource de-mapping are omitted from the example. For the receiver, each module has multiple choices of receiving algorithms, and different algorithms involve different trade-offs between complexity and performance in the system design. Theoretical analyses are usually able to provide some conclusions on algorithm performance, but due to idealized assumptions the theoretical results often deviate considerably from the actual situation. At this point, the link level simulation plays an important role in the performance evaluation of the receiver algorithms.
In addition to the general simulation process of the transmitter and the receiver,
the physical layer transmission also needs to distinguish between the data channel
and the control channel. They have great differences in key performance indicators.
For example, the requirements for the transmission reliability of the control channel
are usually much higher than those of the data channel. Therefore, the link level
simulation also needs simulation verification for a wide variety of physical channels
respectively. Besides, the performance and complexity of the receiver algorithm
will also directly affect the design choices for each function module at the transmitter. After the transmitter side of the standard is established, how to design a cost-effective receiver is also important for chip and equipment manufacturers. Therefore, detailed and reliable link level simulation results are of great significance for evaluating physical layer transmission technologies, transmission system design, standardization and product design. The link level simulation results of each algorithm module and of the overall link under different scenarios and parameter configurations provide an important reference for scheme design and system performance optimization.
This simulation evaluates the link level performance of a massive MIMO system and an 8 × 8 MIMO system. For simplicity, the ITU multipath channel model (ITU Vehicular A) is adopted in the simulation, and the channel state information is assumed to be known. A rate-1/3 Turbo code and Quadrature Phase Shift Keying (QPSK) modulation are used, while Adaptive Modulation and Coding (AMC) and the HARQ mechanism are not activated. In the simulation, BER, FER and throughput are the performance evaluation criteria. In the massive MIMO simulation, the number of base station antennas is set to 128, and the number of single-antenna users takes the values K = 20, 30, 40 and 50; by varying the number of users, the simulation results for different user loads can be obtained. The total user throughput under the Zero Forcing (ZF) precoding transmission scheme is obtained through simulation. In the 8 × 8 MIMO simulation, the base station is equipped with 8 antennas, each terminal is configured with a single antenna, and the number of users is 8. Table 4.2 shows the main parameter configuration of the simulation.
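The ZF precoding step used in this kind of evaluation can be sketched in a few lines; the fragment below is an illustrative example of ours with hypothetical dimensions, not code from the book's simulator. It forms the zero-forcing precoder W = H^H (H H^H)^(-1) for a 128-antenna base station serving K single-antenna users and normalizes it to a total power constraint.

import numpy as np

def zf_precoder(H, total_power=1.0):
    """Zero-forcing precoder for a multi-user MISO downlink.
    H: (K, M) channel matrix, K single-antenna users, M base station antennas.
    Returns W (M, K) such that H @ W is diagonal (no inter-user interference)."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)              # right pseudo-inverse of H
    W *= np.sqrt(total_power / np.trace(W @ W.conj().T).real)   # total power normalization
    return W

rng = np.random.default_rng(1)
M, K = 128, 20                                                  # 128 BS antennas, 20 users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = zf_precoder(H)
HW = H @ W
print(np.allclose(HW, np.diag(np.diag(HW)), atol=1e-9))         # off-diagonal terms vanish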
Figures 4.7, 4.8 and 4.9 respectively show the BER, FER and throughput performance. Since the massive number of antennas brings a higher degree of freedom, the performance of massive MIMO is far superior to that of the existing small-scale 8 × 8 MIMO system.
Fig. 4.7 BER versus Eb/N0 (dB) of massive MIMO (128 × 1, K = 20, 30, 40, 50) and conventional MIMO (8 × 1, 8 users)
Table 4.3 shows the required Eb/N0 and the corresponding throughput for different numbers of users at FER = 5%. For example, when the number of single-antenna users in the massive MIMO system is 20 and FER = 5%, the required
Eb/N0 is 13.5 dB, and the corresponding throughput is 280 Mbit. For the 8 × 8 MIMO system, when FER = 5%, the required Eb/N0 is 16.5 dB and the corresponding throughput is 125 Mbit. In other words, at FER = 5% the required Eb/N0 of the massive MIMO system is reduced by 3.0 dB and its throughput improves by a factor of 280/125 = 2.24. As a result, the massive MIMO system can greatly reduce the power consumption of the base station and therefore improves both energy efficiency and spectrum efficiency.
Fig. 4.8 FER versus Eb/N0 (dB) of massive MIMO (128 × 1, K = 20, 30, 40, 50) and conventional MIMO (8 × 1, 8 users)
Fig. 4.9 Throughput (Mbps) versus Eb/N0 (dB) of massive MIMO (128 × 1, K = 20, 30, 40, 50) and conventional MIMO (8 × 1, 8 users)
Table 4.3 The required Eb/N0 and corresponding throughput for different numbers of users at FER = 5%

Single-antenna UE number    20      30      40      50      8 × 8 MIMO system
Eb/N0 (dB)                  13.5    9.6     6.8     4.2     16.5
Throughput (Mbit)           280     410     440     700     125
Table 4.4 Parallel acceleration results of the link simulation (unit: seconds)

Computational scheme                               K = 20     K = 30     K = 40     K = 50
Serial (single-core CPU)                           296,642    356,757    455,895    699,380
Parallel (40-core CPU server)                      9,215      10,697     11,992     18,137
Acceleration ratio (40-core parallel vs. serial)   32         33         38         38
In terms of simulation time, the conventional serial simulation method often requires a long time, since the link level evaluation needs to simulate the performance at multiple SNR points and each SNR point needs to simulate the fading channel fully with ergodicity. Reasonable parallel simulation can greatly reduce the simulation time. For the above simulation, a brief comparison experiment is given below.
The traditional serial simulation method obtains the data of one point per run, with the program running on a single CPU core. One point on the massive MIMO link needs 4 days of running time, and the complete massive MIMO link simulation needs as many as 100 sampling points.
To accelerate the process, parallel optimization is carried out. The link simulation consists of two layers of loops, namely the SNR loop and the frame loop. The data in the SNR loop and the frame loop are independent, so the simulation programs over the SNR and frame loops can be parallelized, and the theoretical speedup of the parallel simulation method is on the order of the number of SNR points multiplied by the number of frames. Parallelizing the SNR and frame loops is, of course, only the relatively simple and effective way to improve simulation timeliness; with additional internal parallel operations, the simulation time can be further shortened, which will not be discussed here in detail. The results after the parallel acceleration optimization are shown in Table 4.4.
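A minimal sketch of this kind of loop-level parallelization is shown below (our own illustration; the simulate_one_frame routine is a hypothetical stand-in for the real coding, fading and detection chain). The outer SNR loop and the inner frame loop are flattened into independent tasks and distributed over a process pool, mirroring the serial-versus-parallel comparison of Table 4.4.

# Hypothetical sketch: parallelizing the SNR and frame loops of a link simulation.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate_one_frame(task):
    """Stand-in for one frame of link simulation at a given SNR.
    Returns (snr_db, frame_error) for this independent channel realization."""
    snr_db, frame_idx = task
    rng = np.random.default_rng(hash((snr_db, frame_idx)) % (2 ** 32))
    # Placeholder for coding, fading, detection and decoding of one frame:
    frame_error = int(rng.random() < 10 ** (-snr_db / 10))
    return snr_db, frame_error

def parallel_fer(snr_points, n_frames, workers=40):
    tasks = list(product(snr_points, range(n_frames)))   # independent (SNR, frame) pairs
    errors = {snr: 0 for snr in snr_points}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for snr_db, err in pool.map(simulate_one_frame, tasks, chunksize=100):
            errors[snr_db] += err
    return {snr: errors[snr] / n_frames for snr in snr_points}

if __name__ == "__main__":
    print(parallel_fer(snr_points=[0, 5, 10, 15, 20], n_frames=2000))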
Energy efficiency is one of the key performance indicators of 5G systems, and the wireless heterogeneous network is an effective means to alleviate the tension between data growth and energy consumption [23]. For mobile intelligent terminals, different access methods and network planning schemes will have different influences on their energy efficiency. K. Wei et al. [24] have proposed an energy efficiency indicator that comprehensively considers both the terminal and the network. The indicator simultaneously accounts for the network-side energy consumption increase caused by deploying a new base station and the resulting energy consumption changes at the terminal side, so it can provide a quantitative reference criterion for operators to decide whether deploying a new base station is a good choice in terms of energy efficiency. Based on this indicator, the work further analyzes the changes in terminal-side energy consumption as well as the energy efficiency of the entire network after a micro base station is deployed within a macro cell, and obtains the relation between the terminal energy consumption and the configuration parameters of the low power station. Finally, it derives the optimal cell radius that yields the maximum terminal energy saving and the highest energy efficiency.
In order to verify the rationality and validity of the proposed energy efficiency indicator, the energy consumption and energy efficiency have been simulated in [24] for different configuration schemes of pico and micro base stations. In the simulation it is assumed that the coverage of the low power base station always lies within the coverage of a macro base station. Table 4.5 provides the detailed simulation parameters.
Figure 4.10 shows the relation between the terminal energy saving and the micro cell radius for different distances (d) between the macro base station and the micro base station. It can be seen that the results of the approximate derivation and the simulation results are very similar. The figure also shows that, with the increase of the distance between the two base stations, the terminal will save more
energy under the same micro base station configuration parameters. Under the given simulation parameters, the highest energy saving of the terminal is 35 W, and the energy consumption of all terminals in the entire network is 860 W. Setting up micro base stations brings about 4% energy saving for general users; for users inside the micro cell, the energy saving can be as high as 18%. If active users need to download more resources, the micro base station can bring even more terminal energy saving.
Fig. 4.10 Relation between terminal energy saving and the micro base station location as well as the coverage radius (UE energy saving (W) versus micro-cell radius (m), theoretical and simulation curves, d = 900 m to 4100 m)
Figure 4.11 shows the energy efficiency value of the network for different micro base station locations and micro cell radii on the basis of the proposed energy efficiency indicator. If the energy efficiency value is greater than 0, setting up the micro base station is considered beneficial from the perspective of energy efficiency. It can be seen from Fig. 4.11 that when the distance between the micro base station and the macro base station is less than 2100 m, users obtain higher spectrum efficiency when served by the macro station, so even at the optimal microcell radius the network energy efficiency value is still less than 0. In this case, setting up the micro base station is inappropriate in terms of energy efficiency. The above analysis shows that the proposed energy efficiency indicator provides a numerical reference criterion for operators to judge whether a new base station is appropriate in terms of energy efficiency.
Fig. 4.11 Relation of different micro base station positions and their radius to the energy efficiency indicator (energy efficiency index versus micro-cell radius (m), d = 900 m to 4100 m)
The future 5G system will not only greatly improve the traditional cellular network performance but also play an important role in technology applications in vertical industries. This trend will become more obvious with the emergence of the IoT. For example, various communications systems related to the IoT will have a much greater demand for short packets, so how to perform channel encoding and decoding with high efficiency and low complexity has become an important research direction. For short packet transmission, short codes such as tail-biting convolutional codes are usually needed for channel coding. Tail-biting convolutional codes are usually decoded with the Circular Viterbi Algorithm (CVA) [25]. However, there is an important problem in CVA-based iterative sub-optimal decoding, namely the circular trap problem: the surviving path obtained in the current iteration is the same as a surviving path obtained before, and as the iterations continue there is no new tail-biting path closer to the received sequence, while the decoder cannot terminate the iteration. The works in [26] and [27] have studied the CVA iteration process and the circular trap phenomenon in CVA, and put forward an effective circular trap detection scheme. Detection of the circular trap can be used to help control the CVA decoding process, so as to obtain a fast-convergent iterative decoding algorithm.
Table 4.6 BLER performance in the Additive White Gaussian Noise (AWGN) channel of the suboptimal decoding algorithms (WAVA and S-CTD-II) and the maximum likelihood decoding algorithms (CTD-ML decoder and two-phase ML decoder) for the (120, 40) tail-biting convolutional code with generator polynomial {133, 171, 165}

BLER                          SNR = 1.0 dB    SNR = 2.0 dB    SNR = 3.0 dB    SNR = 4.0 dB
WAVA                          8.535 × 10^-2   1.372 × 10^-2   1.353 × 10^-3   7.280 × 10^-5
S-CTD-II                      8.598 × 10^-2   1.377 × 10^-2   1.352 × 10^-3   7.280 × 10^-5
Maximum likelihood decoder    8.466 × 10^-2   1.363 × 10^-2   1.352 × 10^-3   7.280 × 10^-5
In order to verify the decoding performance and the decoding efficiency, the simulation below analyzes the performance of the Circular Trap Detection based ML decoder (CTD-ML decoder) and a suboptimal Simplified CTD (SCTD) algorithm; here SCTD corresponds to the S-CTD-II algorithm in [27]. They are also compared with the suboptimal WAVA decoder [25] and the hybrid maximum likelihood decoding algorithm based on Viterbi decoding and heuristic search [28]. The simulation adopts tail-biting convolutional codes of two different lengths. The first code uses the convolutional encoder of the LTE control channel and broadcast channel, with generator polynomial {133, 171, 165} [29] and an information sequence length of 40; this codeword is denoted by (120, 40). The other code uses the convolutional encoder with generator polynomial {345, 237} [25] and an information sequence length of L = 32; this codeword is denoted by (64, 32). In the simulation, the coded bits are QPSK-modulated and transmitted over the AWGN channel. For convenience, the hybrid maximum likelihood decoding algorithm based on Viterbi decoding and heuristic search is denoted as the two-phase ML algorithm.
Table 4.6 gives the decoding performance of the different decoding algorithms on the (120, 40) tail-biting convolutional code, from which we can see that suboptimal decoding algorithms such as the WAVA [25] and SCTD decoders achieve BLER performance close to that of the optimal decoding algorithms. The maximum likelihood decoders refer to the CTD-ML decoder and the two-phase ML decoder; since both are maximum likelihood decoders on the tail-biting trellis, their BLER performances are exactly the same.
Figures 4.12 and 4.13 respectively show how the decoding complexity of the CTD-ML decoder and the two-phase ML decoder, and their storage space requirements during decoding, vary with SNR. From Fig. 4.12 we can see that for tail-biting convolutional codes of either length, the CTD-ML decoder has higher decoding efficiency than the two-phase ML decoder. Meanwhile, Fig. 4.13 shows that the CTD-ML decoder has lower storage space requirements. Therefore, the CTD-ML decoder on the tail-biting trellis has significant advantages in both computational complexity and storage space reduction.
Fig. 4.12 Decoding complexity comparison (average number of states accessed per bit versus Eb/No (dB)) of the CTD-ML decoder and the two-phase ML decoder for the long (120, 40) and the short (64, 32) tail-biting convolutional codes
The next generation communications network will produce a large amount of data, so the highly efficient processing of data samples will be one of the important requirements. Compressive Sensing (CS) is an important data processing approach, which can recover the raw data from extremely few sampled values. The algorithmic complexity of the compression process and the recovery process in CS is extremely uneven; that is, the computational complexity of compressive sampling at the transmitter is much lower than that of recovery at the receiver. In wireless sensor network applications, for example, the sensor nodes can compress and sample data in the CS manner at only a low computational cost. Most importantly, compressive sensing is able to balance the energy consumption of the sensor nodes in the network and thereby mitigate the bottleneck/hot spot effect, so the CS technology has great application prospects in sensor network data aggregation. The CS technology [30] has aroused great interest among researchers. Zhao et al. [31] put forward a new Treelet-based Compressive Data Aggregation (T-CDA) scheme and design the corresponding data collection methods; simulations on real sensor network data show that T-CDA outperforms traditional CS data aggregation algorithms in terms of energy efficiency.
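To make the asymmetry between cheap compression and expensive recovery concrete, the following sketch (an illustrative toy example of ours, not the T-CDA algorithm of [31]) compresses a sparse signal with a random measurement matrix, which costs a single matrix-vector product at the sensor side, and recovers it at the receiver with Orthogonal Matching Pursuit, which requires an iterative least-squares search; all dimensions are our own choices.

import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: recover a `sparsity`-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    residual, support = y.copy(), []
    x_hat = np.zeros(n)
    for _ in range(sparsity):
        correlations = np.abs(Phi.T @ residual)          # correlate residual with atoms
        support.append(int(np.argmax(correlations)))     # pick the best-matching atom
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs          # update the residual
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                             # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # cheap compression at the sensor node
x_hat = omp(Phi, y, k)                           # expensive iterative recovery at the sink
print(np.allclose(x_hat, x, atol=1e-8))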
Fig. 4.13 Storage space requirement comparison (average required storage versus Eb/No (dB)) of the CTD-ML decoder and the two-phase ML decoder for the long (120, 40) and the short (64, 32) tail-biting convolutional codes
The simulation uses data collected from a static sensor network at Ecole Polytechnique Federale de Lausanne (EPFL) [32]. This sensor network recorded environmental data, such as temperature, humidity and illumination, at 97 test points on the campus every five minutes from July 2006 to May 2007. Following [33], this section uses the highly space-time correlated temperature data as the simulation data. In the simulation, the overall network transmission overhead under different recovery precisions is used as the evaluation indicator of energy consumption. The simulation compares the performance of the following methods.
Discrete Cosine Transform-CDA (DCT-CDA) represents the traditional method, which uses the DCT to sparsify the data for compressive sensing data aggregation. CS performance depends on the sparsity of the original signal, and existing studies usually use the DCT [34] to sparsify the actual signal. However, the actual signals come from nodes distributed in the three-dimensional world, and it is difficult to order them so that the data varies smoothly. Therefore, the DCT can only handle smooth signals and cannot fully exploit the sparsity of the actual sensor signals.
Principal Component Analysis-CDA (PCA-CDA) represents the data aggregation method in which the treelet algorithm in T-CDA is replaced by PCA. PCA is a typical global data analysis method and was introduced into CS data collection in [35]. PCA can obtain a sparse transformation matrix of the original data, which can be used to extract the principal components of the raw data. But it is a global operation, which obscures the local properties of the data. In addition, because of the global nature of the PCA algorithm, noise is also treated as useful data; as a result, this method is extremely unstable when the amount of observation data is small, because any change in the noise affects the global result.
T-CDA represents the proposed treelet-based compressive data aggregation method. The treelet transformation is an algorithm for data clustering and regression analysis, which can explore the local correlation characteristics in the data. In sensor networks, obtaining accurate observation data consumes a large amount of energy, so it is difficult to support global analysis algorithms like PCA, and the treelet transformation is able to overcome or relieve this situation. In particular, the treelet transformation can exploit the correlation structure between nodes, learned from sensor data or training data collected in advance, to obtain an orthogonal transformation matrix that makes the training data sufficiently sparse. Since the sensing data is also correlated in the time dimension, this orthogonal transformation matrix can then be used to sparsify the sensing data collected in several subsequent time instants after the training data.
The simulation results in Figs. 4.14 and 4.15 show the performance of the T-CDA algorithm under different system parameters. Figure 4.14 shows the impact of the training sequence length (L) on the system performance. The factor shown in the figure represents the number of rounds of T-CDA data collection carried out after each round of training data, and it increases with L. As the rough trend in Fig. 4.14 shows, the smaller L is, the better the system performance, because a longer training sequence corresponds to more rounds of data collection. This suggests that the major source of the data recovery error is the change of the sensor data over time rather than insufficient information in the training sequence. Figure 4.15 verifies this conclusion: it can be seen that the accuracy of data recovery deteriorates as the number of data collection rounds increases. The results in Figs. 4.14 and 4.15 show that L = 2 is the optimal value for this system parameter.
Fig. 4.14 The impact of the training sequence length on system performance
Fig. 4.15 The impact of the number of rounds of continuous CS data collection on system performance
The performance comparison of DCT-CDA, PCA-CDA and T-CDA is shown in Fig. 4.16. Using the actual environment monitoring data, it can be seen that the performance of DCT-CDA is the worst and that T-CDA performs better than both PCA-CDA and DCT-CDA. From this we can draw the following conclusions. First, the methods based on a training sequence are better than the traditional method. Second, T-CDA performs better than the PCA transformation in this application, which means that exploiting the local correlation of the data is superior to exploiting the global correlation.
Fig. 4.16 System performance comparison (signal reconstruction error versus transmission overhead) of DCT-CDA, PCA-CDA and T-CDA
In traditional multicast, the modulation and coding scheme has to match the receiver with the worst channel quality, so receivers with good channel quality do not make full use of the channel. Through research on the link adaptation problem in the D2D-cluster network coding multicast scenario, Zhou et al. [36] find that, unlike traditional multicast/broadcast, network coding multicast has a special property: in network coding, different information combined by the network coding operation is sent to different receivers, which brings new possibilities for the adaptive modulation and coding selection of network coding multicast.
Zhou et al. [36] put forward a user-oriented link adaptation method, in which the modulation type is chosen according to the maximum value and the coding type according to the minimum value supported by the link qualities of the two multicast channels; bit mapping is then applied at the transmitter for the terminal with poor link quality to reduce its equivalent rate. In this way, after the different terminals perform network coding and decoding on the received multicast data packets, the information they obtain has different equivalent modulation and coding efficiencies, adapting to the qualities of the two different channels. While it is guaranteed that the users with poor channel quality can accurately decode the multicast data packets, the users with good channel quality are able to obtain more useful information with a more efficient modulation and coding scheme, so that the overall spectral efficiency of the multicast communications is improved.
In order to verify the BER and spectrum efficiency of the algorithm, link simulations are carried out for different channel quality combinations of the two channels within D2D clusters [36]. The Turbo channel coding of the 3GPP LTE standard [37] is used, and the information block length is set to 512 bits. The combinations of code rates and modulation modes are shown in Table 4.7.
Figure 4.17 shows the simulated link performance in terms of the terminals' BLER and the overall throughput of the D2D cluster multicast communications. It can be seen that, with the scheme adopted in [36], the BLER of the users with poor link quality basically coincides with that of the traditional multicast MCS. This shows that the new scheme guarantees that the users with poor link quality obtain an equivalent coding efficiency consistent with the traditional approach, so that they can decode the multicast packets accurately. Since the users with good link quality obtain more efficient modulation and coding schemes, the overall throughput of the D2D cluster improves significantly, by as much as 42%.
Fig. 4.17 Link performance of the 1/2 BPSK and QPSK multicast link quality combination: (a) BLER, (b) throughput
Figure 4.18 shows the simulated link performance in terms of the overall spectrum efficiency of the D2D cluster multicast communications under the different combinations of modulation and coding schemes supported by the two channel link qualities.
Fig. 4.18 Overall spectrum efficiency versus Es/No (dB) of the conventional and the novel scheme for different modulation and coding combinations (1/2 BPSK, 1/2 QPSK, 2/3 QPSK, 3/4 QPSK, 1/3 16QAM, 1/2 16QAM, 2/3 16QAM), with gains of about 13% to 45%
The first problem of the software simulation test is how to evaluate new 5G technologies. 5G networks have greatly different features from 4G networks, so their evaluation methods should be studied from the following aspects.
A good evaluation system requires basic features such as completeness, simplicity and good usability. Compared with 4G networks, 5G networks have changed not only quantitatively, in aspects such as capacity, data rate and delay, but also in intrinsic quality, including basic network features such as virtualization and
On the wireless access network side, the main functions of the virtual management layer include the creation and updating of service-centered virtual cells, the management of virtual resources (including backhaul network resources and air interface resources), connection mobility control, and the management and use of context information. This network architecture introduces the mechanism of control and data separation. For example, based on the idea of separating the control plane and the user plane, Ishii et al. present a new network architecture built on the concept of the virtual cell [40].
Two important problems of wireless network resource virtualization are node mapping and virtual resource allocation. Node mapping is to determine which nodes are used for a certain service. In the traditional network, a serving cell is selected for the user according to signal strength indicators such as Reference Signal Received Power (RSRP) and Reference Signal Received Quality (RSRQ), and all of the user's services are provided by this serving cell. In the ultra-dense network scenario, a large number of irregularly deployed small cells will face the problems of complex interference and asymmetric load distribution, and the traditional cell selection based on signal strength indicators will no longer be applicable. In the virtual cell of the future 5G network, the cell is no longer associated with specific nodes but is instead centered on the user [41]. Reference signals used for data transmission are decoupled from the node ID, and management and control are separated from the data nodes. A user-centered virtual cell can be formed at any location in the network, realizing global virtualization. For the management nodes in the virtual management layer, virtual resource management is one of the important functions; how to avoid interference between nodes and make effective use of time domain, frequency domain and space domain resources is the key to wireless resource management.
From the above introduction to wireless network resource virtualization, we can see that the network resource model has undergone great changes. The virtual cell and the virtual resources need to be modeled; the virtual cell needs to be decoupled from the physical nodes and to provide services centered on the user. Therefore, different from traditional cells, the performance of virtual cells needs to be evaluated from the user's dimension.
This chapter describes the problems and the corresponding design ideas in the platform architecture design of 5G system simulation software. The simulation design, implementation and evaluation of the various new technologies in 5G networks should be analyzed and carried out separately according to the specific services, which is not covered in this chapter. The key technologies of 5G system simulation design are shown in Table 4.8.
5G technologies have brought more complex network scenarios and service types, and have also created all kinds of new technologies. This makes the simulation models and processes more complicated and changeable, and leads to an exponential rise in the number of simulation scenarios. This is embodied in the following aspects.
and the concept of the virtual cell is put forward in the UDN, etc. These new technologies impose design requirements such as model design, decoupling, abstraction, resource virtualization and centralized management on the simulation model design, which are the key design requirements for the entire simulation platform.
The simulation parameter library is generated according to the models and the requirements, including the system specifications and the scenario parameters, as well as the configuration parameters of the key technologies. Since 5G network models are more complex and network modes are more abundant, the network scenarios and configuration parameters involved increase dramatically. Therefore, when designing the simulation parameter library, we should take the simulation model as the center: parameterized templates of the networking scenario, the network functions and the system specifications are established based on the simulation model, and through the reasonable combination of these parameter templates the complexity of the parameter library is reduced.
The corresponding function libraries are mapped from the model. These function libraries consist of different functional components, and each component realizes the corresponding model function. Communication and interaction between components are realized through specific function interfaces. Each component can be allocated computational resources and managed as an independent computational unit. Function libraries can realize decoupling and scalability through flexible interface design. For example, massive MIMO technology will use different precoding techniques, and each precoding technique corresponds to a precoding matrix design algorithm. The input and output variables of different precoding techniques have the same form, with the channel matrix as input and the precoding matrix as output. Therefore, different precoding functions can provide a unified functional interface. Realizing the separation of interface and implementation through the concept of a virtual function facilitates the extension of new precoding algorithms and the independent allocation of computational resources. For example, the global precoding matrix algorithm involves large-dimension matrix inversion and multiplication and lies on the critical path of the simulation platform's calculation. On the basis of the decoupled precoding design, this function can be flexibly deployed on a GPU or FPGA for acceleration.
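The following minimal Python sketch illustrates the interface/implementation separation described above: every precoder exposes the same channel-matrix-in, precoding-matrix-out interface, so algorithms can be swapped or moved to an accelerator backend without touching the callers. The registry mechanism and function names are illustrative assumptions, not the platform's actual API.

```python
# sketch of a unified precoding interface: every precoder maps a channel matrix H
# to a precoding matrix W, so implementations can be swapped (or deployed on a
# GPU/FPGA backend) without changing the callers
import numpy as np

PRECODERS = {}

def register(name):
    def wrap(fn):
        PRECODERS[name] = fn
        return fn
    return wrap

@register("zf")
def zero_forcing(H):
    # H: Nt x Nr channel matrix -> W: Nt x Nr precoding matrix
    return H @ np.linalg.inv(H.conj().T @ H)

@register("mrt")
def maximum_ratio(H):
    # matched-filter style precoder with unit-norm columns
    return H / np.linalg.norm(H, axis=0, keepdims=True)

def precode(name, H):
    # callers depend only on the interface, not on the concrete algorithm
    return PRECODERS[name](H)
```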
According to the simulation requirements, the mapped function library and parameter library are organically organized into a complete simulation process. The design of the simulation process ensures the consistency of every step of the simulation as a whole. Different types of simulation processes are realized through the rational allocation of scenario parameters, features, and service model parameters. In the UDN scenario, for example, the user handover process has different implementation schemes, and different handover simulation processes can be completed by configuring different handover algorithm parameters.
By dynamically configuring the parameter library, the function library, and the simulation process, we obtain specific simulation tasks, which directly face the users and therefore need a friendly configuration management interface.
It can be seen from the above process that the key to dynamic simulation modeling lies in the design of the model, the library components, and the parameters. When stratification, encapsulation, and interface decoupling are used to resolve the coupling between the conceptual model and the implementation model, the goal of minimizing the impact of technological change on the implementation can be achieved.
Shared storage: the asynchronous parallel operation mode and shared data storage mode are adopted, with poor scalability. Typical examples include programming models like OpenMP.
Data parallel: the loosely synchronous parallel operation mode and shared data storage mode are adopted, with medium scalability. A typical example is HPF, which provides annotation-like directives that extend the variable types so that the data layout of arrays can be controlled in detail.
The parallelization of the simulation software is the key work in the multi-core parallel design of the simulation platform, which needs to consider the following design requirements.
The simulation software is decomposed in parallel in terms of functions, algorithms, and operands. Serial simulation tasks are decomposed into fully decoupled, independently processed subtasks. Since different functions, algorithms, and operands have different computational intensities and characteristics, in the parallel design we need to analyze and process them according to the actual situation.
A reasonable division of the simulation function modules can reduce the communication data between parallel subtasks. It tries to ensure that each parallel subtask has an equivalent calculation amount, which reduces the waiting time needed for synchronous processing. A minimal decomposition sketch is given after this list.
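The sketch below, assuming the per-cell work can be made fully independent, shows how a serial simulation loop is decomposed into subtasks and executed with a process pool; the per-cell function is only a placeholder for the real model code.

```python
# minimal sketch of decomposing a serial simulation loop into independent
# subtasks (one per cell here) and running them in parallel with a process pool
from multiprocessing import Pool
import numpy as np

def simulate_cell(cell_id, n_users=15, seed=0):
    # placeholder for the real per-cell model: draw per-user SINR and return a statistic
    rng = np.random.default_rng(seed + cell_id)
    sinr_db = rng.normal(10.0, 3.0, n_users)
    se = float(np.mean(np.log2(1.0 + 10 ** (sinr_db / 10.0))))
    return cell_id, se

if __name__ == "__main__":
    with Pool() as pool:
        # 57 cells processed as fully decoupled subtasks
        results = pool.starmap(simulate_cell, [(c,) for c in range(57)])
```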
As an evolution of multi-core CPU parallel processing, the heterogeneous CPU + GPU scheme has made it possible to improve the speed of the simulation calculation [1]. This is because the CPU, as a general-purpose processor with complex control logic, is good at complex logic operations, whereas the GPU is a graphics processor that often has hundreds of stream processor cores. Its design goal is to realize large-throughput data-parallel computation with a large number of threads. Its single-precision floating-point computing power can reach more than 10 times that of a CPU, which makes it suitable for parallel computing over large-scale data.
Therefore, adopting the heterogeneous parallel architecture of CPU + GPU, in which the two are coordinated, the multi-core CPU performs the complex logic calculation, and the GPU handles the data-parallel tasks, can exploit the maximum parallel processing capability of the computer. The world's fastest computer Tianhe-II currently adopts a CPU + GPU heterogeneous polymorphic architecture.
GPU programming generally adopts the CUDA architecture, under which the computational tasks are mapped to a large number of threads that can be executed in parallel. The threads are organized in three levels, Grid-Block-Thread, and are dynamically scheduled and executed by the hardware.
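The following sketch, assuming the numba package and a CUDA-capable GPU are available, shows how a simple element-wise computation maps onto the Grid-Block-Thread organization; it illustrates only the programming model and is not part of the simulation platform.

```python
# a minimal sketch of the CUDA Grid-Block-Thread model, assuming numba is installed
import numpy as np
from numba import cuda

@cuda.jit
def scale_kernel(x, y, alpha):
    # each thread handles one element; cuda.grid(1) is the global thread index
    i = cuda.grid(1)
    if i < x.size:
        y[i] = alpha * x[i]

x = np.arange(1_000_000, dtype=np.float32)
d_x = cuda.to_device(x)                  # host -> device copy over the PCIe bus
d_y = cuda.device_array_like(d_x)

threads_per_block = 256                  # "Thread" level
blocks_per_grid = (x.size + threads_per_block - 1) // threads_per_block  # "Block"/"Grid" levels
scale_kernel[blocks_per_grid, threads_per_block](d_x, d_y, np.float32(2.0))

y = d_y.copy_to_host()                   # device -> host copy of the result
```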
Figure 4.21 shows a typical heterogeneous multi-core architecture. It can be seen that the multi-core CPU uses OpenMP while the GPU uses the Compute Unified Device Architecture (CUDA) for processing, and the task division is specified at the program and operating system level. The two parts are interconnected with the PCIe bus.
CPU server and stronger task management, resource scheduling, and other logic handling abilities than a GPU server. With the rapid development of FPGA development environments and compilers in recent years, the difficulty of developing for FPGA boards has been greatly reduced. The main research points of hardware-accelerated simulation technology can be divided into the following three aspects.
1. Studies on key technologies of FPGA-based high-performance hardware acceleration
With the development of integrated circuit technology, the computing power and storage resources of FPGA chips grow rapidly. Meanwhile, the FPGA has become an ideal platform for implementing and accelerating algorithms due to its programmability, powerful parallelism, rich hardware resources, flexible algorithm adaptability, and lower power consumption. An FPGA can easily realize distributed algorithm structures, which is very useful for accelerating scientific computing. For example, most matrix decomposition algorithms in scientific computation require a large number of Multiply-and-ACcumulate (MAC) operations, and the FPGA platform can effectively realize the MAC operations through a distributed arithmetic structure. Today's FPGA products have entered the 20 nm/14 nm era, and major companies are working on developing logic products with more logic units and higher performance.
Research contents include high speed parallel processing, hardware and
software simulation task partition and mapping, high precision signal
processing, etc.
2. Interface and middleware design combining hardware acceleration and the software simulation platform
For calculations that are hard to complete on the FPGA, we can transfer them to the C language. At the end of the design, the intermediate results from the hardware are imported into C for visual analysis. By drawing waveforms, eye diagrams, etc., we can check whether each part of the system design is correct. Introducing C/C++-based simulation into the FPGA design addresses the difficulty of FPGA debugging, as well as the frequent inconsistency with the system simulation logic.
Research contents include virtual adaptation mechanism, middleware design,
and reusable calculation design.
3. The design of reconfigurable FPGA hardware accelerator boards
Reconfigurability is an important characteristic of FPGA-based hardware acceleration. An FPGA-based system can be arbitrarily altered or reconfigured in the development and application phases, which increases the overall gain.
Research contents include high-speed PCIe interface design and data interaction between the high-speed USB 3.0 interface and the host.
The implementation process of hardware acceleration simulation is as
follows.
The first is the key technology research. Starting from the simulation design
requirements, we should fully combine the characteristics of the hardware
(Figure: the system simulation platform exchanges control-plane information (parameter configuration, scheduling, command interface) and data-plane traffic (flow generation, transmission, feedback) with the hardware link; user baseband data passes through D/A conversion and RF up-conversion to the antenna and returns through down-conversion, A/D conversion, and decoding.)
simulation platform, sent to the air interface through the link transmission module, then received, demodulated, and decoded by the link receiving module, and finally sent back to the system simulation platform.
This kind of software and hardware co-simulation method can fully exploit the high-speed processing capability of the hardware and bring the simulation performance of some links close to real time. Combined with the relatively complete system functions of the system simulation platform, it can better simulate application scenarios with stringent requirements on system transmission delay. The implementation scheme of patent CN101924751B [45], for example, is an entire-network solution across the core network and the access network, ranging from the physical layer of the base station to every layer of the core network protocol stack. The handover process operates on a millisecond time scale, which is difficult to verify on the old system simulation platforms. Adopting the software and hardware co-simulation platform can well support system function simulation tests at the millisecond scale.
This section uses the massive MIMO 5G key technique as an example and shows how to apply the key technologies described above to complete the design and implementation of massive MIMO in the simulation system, so as to reduce the simulation calculation complexity and accelerate the simulation speed.
1. Simulation scenario description
Simulation parameter description
Massive MIMO is simulated using the MU-MIMO model to evaluate LTE downlink system performance. The MU-MIMO channel matrix is formed with 128 base station transmitting antennas and 1 receiving antenna per UE, with 15 users scheduled simultaneously in a single cell. Detailed simulation parameters can be seen in Table 4.9.
The statistical indicators of the simulation output mainly include the average cell spectrum efficiency, the 5% cell-edge spectrum efficiency, the UE downlink SINR, and some other indicators.
Simulation environment description
Hardware: GPU server XR-4802GK4
CPU configuration: two Intel Xeon Ivy Bridge E5 processors (3.0 GHz, 10 cores and 20 threads each)
GPU configuration: 8 Tesla K20 cards
CPU memory: 256 GB
Bus: PCIe 3.0 x16
Software:
Windows Server 2008 R2
MATLAB R2014a
2. Simulation design analysis
(a) Functional procedure
The simulation process of MU-MIMO is shown in Fig. 4.25. The main features of MU-MIMO are embodied in the following function modules.
Transmitter module: it includes CSI feedback, pilot pollution suppression, antenna resource allocation, user scheduling, and transmitter precoding.
Transmission: wireless channel modeling, including the 3D-UMa, 3D-UMi, and 3D-UMa-H channels.
Receiver processing: it includes interference calculation, SINR calculation, and the receiving detection algorithm.
(Fig. 4.25 MU-MIMO simulation flow: place terminals, calculate the channel, transmitter precoding, receiver processing, end of simulation.)
3D channel
It includes the path loss calculation for the large-scale, shadow, and small-scale fading. Because the numbers of transmitting and receiving antennas increase significantly compared with 4G, and the antenna and channel parameter models are more complex, the computation increases hugely compared with the TU channel commonly used in 4G. Consider only the number of FFT calculations from the time-domain channel to the frequency-domain channel. If the number of 3D channel FFT calculations in a cell is about M × Nt × Nr, then with a downlink antenna scale of 128 × 15 the calculation amount ratio is (128 × 15)/(2 × 1) = 960 compared with the 2 × 1 antenna scale of 4G. The calculation amount ratio increases linearly with the product of the numbers of transmitting and receiving antennas.
Transmitter precoding
According to the simulation parameter settings, the transmitter precoding scheme is the zero-forcing algorithm, and the precoding matrix is calculated as $\mathbf{W}_{\mathrm{ZF}} = \mathbf{H}(\mathbf{H}^{H}\mathbf{H})^{-1}$, $\mathbf{H} \in \mathbb{C}^{N_t \times N_r}$. The precoding computational complexity mainly lies in the matrix multiplication and inversion. Under the zero-forcing algorithm, the main calculation includes two parts: the first part is C × Nc inversions of Nr × Nr matrices, and the second part is C × Nc multiplications of an Nt × Nr matrix by an Nr × Nr matrix. The complexity of such matrix calculations is usually O(n^3). Compared with the 2 × 1 antenna scale of 4G, the calculation ratio of the matrix inversion is 15^3/1^3 = 3375, and the calculation ratio of the matrix multiplication is (128 × 15 × 15)/(2 × 1 × 1) = 14,400, which grows with the third power of the antenna number.
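For reference, the following is a minimal NumPy sketch of the zero-forcing computation above; the dimensions follow Table 4.9, and the column normalization at the end is a common convention assumed here rather than taken from the platform.

```python
# minimal sketch of the zero-forcing precoder W = H (H^H H)^{-1} for one subcarrier
import numpy as np

Nt, Nr = 128, 15
rng = np.random.default_rng(0)
H = (rng.standard_normal((Nt, Nr)) + 1j * rng.standard_normal((Nt, Nr))) / np.sqrt(2)

G = H.conj().T @ H            # Nr x Nr Gram matrix (O(Nt * Nr^2) multiplications)
W = H @ np.linalg.inv(G)      # Nr x Nr inversion (O(Nr^3)) plus an Nt x Nr multiplication

# normalize each user's beam to unit power (assumed convention)
W = W / np.linalg.norm(W, axis=0, keepdims=True)
```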
SINR calculation
For this part, the user's received signal power, the intra-cell interference power, and the inter-cell interference power are calculated based on the channel matrix and the precoding matrix, and then the user's SINR is obtained. The size of the calculation can be roughly estimated from the following MIMO signal model.
$$
\begin{aligned}
y_{jm}^{\mathrm{dl}} &= \sum_{l=1}^{L}\sum_{k=1}^{K}\sqrt{p_{l}^{\mathrm{dl}}}\,\mathbf{h}_{ljm}^{H}\mathbf{w}_{lk}\,x_{lk}^{\mathrm{dl}} + n_{jm}^{\mathrm{dl}} \\
&= \sqrt{p_{j}^{\mathrm{dl}}}\,\mathbf{h}_{jjm}^{H}\mathbf{w}_{jm}\,x_{jm}^{\mathrm{dl}}
+ \underbrace{\sqrt{p_{j}^{\mathrm{dl}}}\sum_{k=1,\,k\neq m}^{K}\mathbf{h}_{jjm}^{H}\mathbf{w}_{jk}\,x_{jk}^{\mathrm{dl}}}_{\text{intra-cell interference}}
+ \underbrace{\sum_{l=1,\,l\neq j}^{L}\sqrt{p_{l}^{\mathrm{dl}}}\sum_{k=1}^{K}\mathbf{h}_{ljm}^{H}\mathbf{w}_{lk}\,x_{lk}^{\mathrm{dl}}}_{\text{inter-cell interference}}
+ n_{jm}^{\mathrm{dl}}
\end{aligned}
$$
where $\mathbf{h}_{ljm}$ is the channel from cell $l$ to user $m$ in cell $j$, $\mathbf{w}_{lk}$ is the precoding vector of user $k$ in cell $l$, $x_{lk}^{\mathrm{dl}}$ is the transmitted symbol, $p_{l}^{\mathrm{dl}}$ is the downlink transmit power, and $n_{jm}^{\mathrm{dl}}$ is the noise.
This part of the calculation is mainly vector multiplication. Its calculation amount is much smaller than those of the channel calculation and the transmitter precoding module, so good results can be obtained with CPU-based parallel acceleration.
3. Simulation measurement results and analysis
The simulation measurement results are shown in Tables 4.10 and 4.11. According to the calculation characteristics of the different modules, the final acceleration effects also differ when different acceleration schemes are used. The SINR calculation module and the message processing module adopt the CPU parallel computing scheme, while the precoding module adopts the CPU + GPU co-acceleration scheme. The acceleration ratio of the interference module is less than that of the message processing module. This is because the interference module needs to transfer large amounts of data between the parallel computing tasks, including signal power, interference power, SINR, channel allocation, scheduling information, and other data, most of which are large volumes at subcarrier granularity. Its time overhead on data transmission is greater than that of the message processing module, so its acceleration ratio is smaller. Further optimization of the SINR calculation module includes increasing the number of parallel CPU cores, compressing the transmitted data, and increasing the transmission bandwidth (high-speed optical fiber transmission, memory reflection technology, etc.). The precoding module adopts the CPU + GPU co-acceleration scheme, whose acceleration ratio can reach 127 times. Due to the limitation of hardware resources, the acceleration effect of this part is far from its upper limit. The above acceleration effects are measured on a single GPU server. Because the parallel granularity of the various software modules is much higher than the number of processor units in the server, the parallel acceleration can still be greatly improved as the hardware capacity is increased.
Table 4.10 Computational time overheads of each MU-MIMO module in the serial scheme and the parallel scheme

Computational scheme | Total length of one TTI (s) | SINR calculation (s) | Message processing (s) | Precoding (s) | Other modules, no parallel optimization (s)
Serial   | 9338.72 | 772.64 | 124.88 | 8411.95 | 29.25
Parallel | 153.92  | 50.8   | 4.12   | 66.03   | /
With the increasing number of neighboring base stations around a small base station in the ultra-dense network scenario, the interference becomes more serious. In particular, the number of sites to be coordinated in resource allocation also increases, making resource allocation more difficult. Site deployment in hot-spot areas shows a trend of high density and no planning. With more and more deployed sites, how the system performance changes in the hot-spot area is rather important for both users and network operators. Liu et al. [46] simplify the resource allocation in the ultra-dense deployment scenario through network clustering. The underlying resource allocation optimization problem is a nonlinear mixed-integer programming problem. Based on the assumptions of network clustering and intra-cluster channel orthogonalization, the resource allocation in ultra-dense deployment scenarios is simplified, and a semi-distributed resource allocation algorithm based on clustering is proposed. The process of this algorithm is shown in Fig. 4.26 and mainly includes the following three stages:
In the first stage, a network clustering algorithm based on Breadth-First Search (BFS) is proposed: the performance loss caused by interference is taken as the similarity measure between sites, network clustering is carried out based on BFS, and the cluster heads are selected (a minimal clustering sketch is given after this list);
In the second stage, the concept of the reference cluster is introduced to approximately estimate the interference received by each cluster in the clustered network. Through information interaction between cluster heads, the cluster causing the strongest interference is selected as the reference cluster of a given cluster, and only the interference from the reference cluster is taken into consideration in the next phase of resource allocation. Introducing the reference cluster reduces the exchange of channel state information during channel power control and reduces the complexity of the resource allocation problem while still obtaining a reasonable suboptimal solution. In addition, the clustering algorithm in the first stage assigns cells with strong mutual interference to the same cluster. Combined with the assumption of orthogonal channels within a cluster, this guarantees that the reference cluster with the strongest interference is a reasonable approximation of all interference sources.
In the third stage, the resource allocation sub-problems for each cluster and its reference cluster are solved, and a two-step iterative heuristic resource allocation algorithm is applied; the iterations continue until convergence, after which the power and channel allocation is output.
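The clustering sketch referred to in the first stage is given below; it is only a hedged illustration of BFS-based clustering with interference-induced performance loss as the similarity measure, with an assumed threshold and maximum cluster size, not the exact algorithm of [46].

```python
# hedged sketch of stage 1: grow clusters with breadth-first search over a graph
# whose edge weights are the mutual interference-induced performance losses;
# the threshold and size limit are illustrative parameters
from collections import deque

def bfs_clustering(loss, threshold, max_size):
    n = len(loss)
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        # pick the most-interfered remaining site as the cluster head
        head = max(unassigned, key=lambda i: sum(loss[i]))
        cluster, queue = [head], deque([head])
        unassigned.remove(head)
        while queue and len(cluster) < max_size:
            u = queue.popleft()
            # visit the strongest interferers of u first
            for v in sorted(unassigned, key=lambda j: -loss[u][j]):
                if loss[u][v] >= threshold and len(cluster) < max_size:
                    cluster.append(v)
                    unassigned.discard(v)
                    queue.append(v)
        clusters.append(cluster)
    return clusters
```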
Fig. 4.27 System performance comparison between interference estimation through the reference cluster and including all the interference (average data rate per user versus the number of UEs per sBS, for cluster sizes 2 and 16, with the uncoordinated scheme as a baseline)
After multiple iterations, the power and channel allocation solution is obtained, and the actual average rate of each user in the system is then computed by including all the interference. For the curves that include all the interference, the power allocation in each iteration also considers the interference from all sites, so the reported per-user average rate is likewise the actual rate with all interference included. The simulation results show that the reference cluster approximation and the scheme including all the interference have very similar system performance, and the approximation becomes even closer as the cluster size increases. This is because the number of sites within the cell area is fixed, so the number of clusters decreases as the cluster size increases. In accordance with the concept of the reference cluster, the
interference from other sites is not considered in the resource allocation.
Fig. 4.28 The user performance variation when adopting different ultra-dense network resource allocation algorithms at different site densities (average data rate per user versus the number of UEs per sBS for the SDP-based algorithm, the proposed algorithm, and the uncoordinated scheme)
Besides, as the site density increases from the 100 m × 100 m area (the lines with hollow markers) to the 60 m × 60 m area (the lines with solid markers), the system performance becomes even worse. The algorithm proposed in [48] can achieve better system performance, but with high complexity. The algorithm proposed in this chapter has performance similar to that of [48], and the above theoretical analysis shows that it has lower complexity. Figure 4.28 thus verifies that the proposed algorithm can guarantee near-optimal system performance with lower complexity.
System Model
Considering the open-loop uplink power control at the terminals, the system model of the uplink interference between cells is shown in Fig. 4.29. The model is established through the location relationship between the nodes in a polar coordinate system. eNB1 is located at the origin of the coordinate system, (0, 0). The cell coverage area is a circle with radius R. UEs are distributed randomly
uniformly in the eNB1 service region, with location coordinates (r, θ). eNB0 is the interfered adjacent cell base station; the distance between eNB0 and eNB1 is √3·R, and its position in coordinates is (√3·R, π). The distance between the UE and the interfered base station eNB0 is D.
The base station locations are fixed and the users are distributed randomly and uniformly, so the joint probability density of the UE position is
$$
f(r, \theta) = \frac{r}{\pi \left(R^{2} - r_{\min}^{2}\right)}, \quad r_{\min} \le r \le R,\ 0 \le \theta < 2\pi,
$$
and the distance between the UE at (r, θ) and eNB0 at (√3·R, π) is
$$
D(r, \theta) = \sqrt{r^{2} + \left(\sqrt{3}R\right)^{2} - 2\sqrt{3}Rr\cos(\pi - \theta)} = \sqrt{r^{2} + 3R^{2} + 2\sqrt{3}Rr\cos\theta}.
$$
The uplink interference received by eNB0 from the UE can then be written as
$$
I = P_{0} + (\alpha - 1)A + \alpha B\log_{10} r - B\log_{10} D(r,\theta) + \frac{\alpha}{\sqrt{2}}\left(X_{\mathrm{UE}} + X_{\mathrm{eNB1}}\right) - \frac{1}{\sqrt{2}}\left(X_{\mathrm{UE}} + X_{\mathrm{eNB0}}\right),
$$
where A + B·log10(·) is the distance-dependent path loss, α is the path loss compensation factor, P0 is the nominal power, and X_UE, X_eNB1, X_eNB0 are the shadow-fading components associated with the UE and the two base stations. Denoting by X the combined shadow-fading term and by Y = αB·log10 r − B·log10 D(r, θ) the intermediate variable determined by the UE position, the probability density of the interference is
$$
f_{I}(I) = \int_{-\infty}^{+\infty} f_{X}\!\left(I - (\alpha - 1)A - P_{0} - y\right) f_{Y}(y)\, dy,
$$
where f_X is the probability density of the combined shadow-fading term X and f_Y is the probability density of the intermediate variable Y related to the UE position. They can be given respectively as follows:
$$
f_{X}(x) = \frac{1}{\sqrt{2\pi\sigma_{X}^{2}}}\exp\!\left(-\frac{x^{2}}{2\sigma_{X}^{2}}\right),
$$
where σ_X² = (α² − α + 1)σ² follows from combining the independent shadow-fading components, each with standard deviation σ.
f_Y(y) = dF_Y(y)/dy is a piecewise function of y defined on the two intervals Y_min ≤ y ≤ Y_0 and Y_0 < y ≤ Y_max. On each interval it is obtained by differentiating, with respect to y, the area of the region {(r, θ): r_min ≤ r ≤ R, Y(r, θ) ≤ y} and normalizing by π(R² − r_min²), the area of the UE distribution region; the explicit expressions involve the boundary radius r²(y, θ) and its derivative with respect to y.
In order to verify the above theoretical derivation, the theoretical distributions and Monte Carlo simulations of the intermediate variable Y and the interference I are shown in Figs. 4.30 and 4.31, respectively. The system parameters used in the theoretical calculation and the simulation are listed in Table 4.13. The simulation results agree closely with the theoretical results in Figs. 4.30 and 4.31, which proves the correctness of the theoretical derivation.
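A Monte Carlo check of this kind can be sketched as follows; the numeric parameter values are illustrative assumptions rather than the Table 4.13 settings, and the shadow-fading combination follows the reconstruction of the interference model above.

```python
# Monte Carlo sketch of the uplink inter-cell interference model (parameter
# values below are assumptions for illustration, not the Table 4.13 settings)
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
R, r_min = 250.0, 10.0                  # cell radius and minimum UE distance (m), assumed
A, B = 35.0, 37.6                        # assumed path loss intercept and slope (dB)
alpha, P0, sigma = 0.8, -60.0, 4.0       # compensation factor, nominal power (dBm), shadow std (dB)

# UE position: f(r, theta) = r / (pi (R^2 - r_min^2)), theta uniform in [0, 2*pi)
r = np.sqrt(rng.uniform(r_min**2, R**2, N))
theta = rng.uniform(0.0, 2.0 * np.pi, N)

# distance to the interfered base station eNB0 at polar position (sqrt(3) R, pi)
D = np.sqrt(r**2 + 3 * R**2 + 2 * np.sqrt(3) * R * r * np.cos(theta))

# shadow-fading components (dB), combined as in the reconstructed model
X_UE, X_eNB1, X_eNB0 = rng.normal(0.0, sigma, (3, N))
shadow = (alpha / np.sqrt(2)) * (X_UE + X_eNB1) - (1 / np.sqrt(2)) * (X_UE + X_eNB0)

# interference at eNB0 in dBm
I = P0 + (alpha - 1) * A + alpha * B * np.log10(r) - B * np.log10(D) + shadow

# histogram approximates the PDF of I, to be compared with the theoretical f_I
pdf, edges = np.histogram(I, bins=100, density=True)
```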
Further, based on the derived theoretical model, the impact of the power control parameter settings (path loss compensation factor α and nominal power P0) on the uplink interference distribution is explored. The corresponding results are shown in Figs. 4.32 and 4.33.
(Fig. 4.30 PDF of the intermediate variable Y (dB): simulation versus theoretical results, α = 0.5.)
(Fig. 4.31 PDF of the uplink inter-cell interference I (dBm): simulation versus theoretical results, α = 0.5, P0 = −60 dBm, σ = 4 dB.)
(Fig. 4.32 The impact of the nominal power P0 on the interference distribution: PDFs of I for P0 = −50/−60 dBm and σ = 4/8 dB, α = 0.5, simulation versus theoretical results.)
Through the comparison in Fig. 4.32, we can see that an increase in P0 directly leads to an increase in the uplink interference power: the uplink interference PDF curve shifts toward the region of larger interference by the same amount as the increase in P0, while the shape of the distribution curve does not change with P0. For different shadow fading scenarios, the impact of changing P0 on the interference distribution is consistent.
Fig. 4.33 The impact of the path loss compensation factor on the interference distribution (PDFs of I for α = 0.5, P0 = −60 dBm and α = 1, P0 = −100 dBm, with σ = 4 dB, simulation versus theoretical results)
Intuitively, for the same nominal power setting, the larger the path loss compensation factor is, the greater the user transmitting power is, and therefore the greater the corresponding uplink interference to the adjacent cell. It is not hard to find from Fig. 4.33 that, in the scenario with shadow fading standard deviation σ = 4 dB, although P0 is lower by 40 dB when α = 1 than when α = 0.5, the uplink interference when α = 1 is still far greater than that when α = 0.5. On the other hand, from the perspective of the interference distribution, when α increases the distribution becomes more dispersed and its variance larger, whereas for smaller α the uplink interference distribution is more concentrated. For the scenario with shadow fading standard deviation σ = 8 dB, the change of α also impacts the distribution of the uplink interference. From the analysis of Fig. 4.32, the change of P0 does not affect the shape of the uplink interference PDF but only translates the distribution curve along the horizontal (interference) axis. Therefore, an increase in the path loss compensation factor will not only introduce more uplink interference, but also lead to greater volatility in the uplink interference.
Computation offloading makes use of network computational resources and extends the computing power of resource-limited mobile terminals, enabling new computation-intensive applications. Hu et al. [50]
research computation offloading in the Heterogeneous Cloud Radio Access Network (H-CRAN) scenario and put forward a User-Centric Local Mobile Cloud (UC-LMC) based on D2D communications. The basic idea of this framework is as follows: suppose the users reach an agreement with the network such that, through some mechanism, users can obtain traffic/service priority in exchange for providing free computational resources to help the network serve other users. In the H-CRAN framework, the base station function modules in the baseband resource pool collect the appropriate computational resources, and build and maintain a local mobile cloud for a particular requesting user. Suppose a mobile application is divided into a series of subtasks: the UC-LMC executes the subtasks in serial order and, at the same time, sends the calculation results of each auxiliary user back to the requesting user equipment over the short-distance communication link. For battery-powered mobile equipment, energy consumption is an important factor to be considered in the offloading process. The article studies the subtask allocation problem that minimizes energy consumption. Given that different services impose different requirements on the fronthaul network, a fronthaul link load constraint is added to the subtask allocation algorithm.
In order to simulate and evaluate the subtask allocation algorithm put forward in [50], this section performs simulations in MATLAB. The simulation uses a dense small cell deployment model, and the simulation parameters are listed in Table 4.14. In the simulation, 25 small cells are set up, each serving 5 user equipments. Each small cell is assumed to be a 10 m × 10 m square, with the RRH located at the center of the cell and the mobile user equipment uniformly distributed within the cell.
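A minimal sketch of generating this deployment is given below; the 5 × 5 grid layout of the 25 cells is an assumption for illustration.

```python
# sketch of the offloading simulation deployment: 25 square 10 m x 10 m small
# cells in an assumed 5 x 5 grid, an RRH at each cell center, and 5 UEs
# uniformly dropped per cell
import numpy as np

rng = np.random.default_rng(2)
cell_side, grid = 10.0, 5
rrh_xy = np.array([(i * cell_side + cell_side / 2, j * cell_side + cell_side / 2)
                   for i in range(grid) for j in range(grid)])   # 25 RRH positions

ues_per_cell = 5
ue_xy = np.concatenate([
    c + rng.uniform(-cell_side / 2, cell_side / 2, size=(ues_per_cell, 2))
    for c in rrh_xy
])                                                               # 125 UE positions
```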
As shown in Fig. 4.34, this research simulates the influence of different data
input rates on the average energy consumption. The indoor path loss model used in the simulation (from Table 4.14) is PL = 37 + 30·log10(d) + k·PL_wall (dB), where k is the number of penetrated walls and PL_wall is the per-wall penetration loss. Figure 4.34 plots the energy consumption per kbit (in lg mJ) against the total amount of subtask data (kbits) for offloading to the UC-LMC, to the macro-cell BS/BBU pool, to the micro-cell BS, and to the UC-LMC with fronthaul link load balancing. The figure compares four kinds of
computation offloading strategies: (i) offloading to the macro station (or through the macro station to a network-side server); (ii) offloading to a small cell station (or via a small cell station/RRH connected to a network-side server); (iii) offloading to the local mobile cloud without considering the effect of the fronthaul network; (iv) offloading to the local mobile cloud while considering load balancing of the fronthaul network. It can be seen from the simulation results that the energy consumption of offloading the computational tasks to the macro station is the highest. This is because the requesting user equipment is usually far from the macro station, so its transmission energy attenuation over the air interface is larger and its energy efficiency is the lowest. For the same reason, the energy consumption of offloading the computational tasks to the small cell RRH is the second highest. In the strategy proposed in [50], computational tasks are offloaded to a local mobile cloud composed of nearby auxiliary user equipment, which consumes the least energy. When load balancing in the fronthaul network is considered, since its main purpose is to spread the data, the performance of this strategy when the data amount is small is no better than offloading the computational task to the small base station or the local mobile cloud. When the amount of data is large, channel differences between auxiliary user equipment no longer play a key role, so the fronthaul network load balancing algorithm can also achieve good performance.
The simulation results in Fig. 4.35 show the impact of different settings of the fronthaul load balancing factor w on the subtask allocation across the mobile devices. Four values are selected: w = 0.2, 0.5, 0.8, and 1. With the auxiliary user equipment ordered by decreasing channel quality, it can be seen that when the fronthaul load balancing factor is larger, the computational data allocated to the auxiliary user equipment is mainly decided by the channel quality, and devices with better channels are allocated more data. When w = 1, i.e., when the fronthaul network capacity is not considered, the algorithm tries to allocate the computational tasks to the auxiliary user equipment with the best channel quality until its computational power limit is reached, which leads to a large load variance, as shown in Table 4.15. When the fronthaul load balancing factor is
small, the impact of channel quality is weakened and the fronthaul capacity restriction is highlighted. A user with relatively good channel quality may be allocated less computational data than an auxiliary user with poorer channel quality, because the fronthaul link capacity of its associated Remote Radio Head (RRH) is limited, while more data is assigned to auxiliary user equipment whose associated RRH has abundant fronthaul link capacity.
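The following hedged sketch captures this trade-off with a simple mixing rule between channel quality and fronthaul headroom controlled by the factor w; it illustrates the behaviour described above and is not the exact subtask allocation algorithm of [50].

```python
# illustrative allocation rule (not the algorithm of [50]): the allocation score
# mixes channel quality (weight w) with fronthaul capacity (weight 1 - w), and
# each device is additionally capped by its computational capacity
import numpy as np

def allocate(total_kbits, channel_quality, compute_cap, fronthaul_cap, w):
    cq = channel_quality / channel_quality.sum()      # normalized channel quality
    fh = fronthaul_cap / fronthaul_cap.sum()          # normalized fronthaul headroom
    score = w * cq + (1.0 - w) * fh                   # w = 1: purely channel-driven
    alloc = np.minimum(total_kbits * score / score.sum(), compute_cap)
    return alloc

# example: w close to 1 concentrates data on the best channels,
# small w spreads data toward devices with spare fronthaul capacity
alloc = allocate(30.0,
                 channel_quality=np.array([5.0, 4.0, 2.0, 1.0]),
                 compute_cap=np.array([15.0, 15.0, 15.0, 15.0]),
                 fronthaul_cap=np.array([2.0, 6.0, 8.0, 8.0]),
                 w=0.5)
```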
The ITU-R WP5D spectrum requirements report points out that the global spectrum resource shortfall will reach thousands of MHz by 2020. To meet the requirements for high traffic and high data rates brought by the rapid development of intelligent terminals, it is necessary not only to develop spectrum usable by LTE (such as high frequency bands and unlicensed bands), but also to continuously explore more efficient use of the spectrum. The network service requirements of different operators are unevenly distributed in time, space, and frequency. The QoS requirements are diversified with great differences, and the licensed spectrum of different operators has different center frequencies, bandwidths,
Fig. 4.36 The performance in dense deployment scenarios (small cell activation rate is 1): CDF of spectrum efficiency (bps/Hz) for the proposed scheme, Baseline 1, Baseline 2, and the fixed high-, middle-, and low-power schemes
spectrum sharing scheme without power control in this proposal. The line with short line markers represents the scheme without spectrum sharing between operators. It can be seen from the figure that, in the small cell dense deployment scenario, the scheme with spectrum sharing between operators has a significant performance gain compared with the scheme without sharing. Meanwhile, with the use of power
control, the interference between operators is reduced, which makes the performance of the scheme with power control better than that of the schemes without power control (i.e., using the same fixed power, such as high, middle, or low power). The simulation shows that in the small cell dense deployment scenario, the inter-operator spectrum sharing mechanism can bring a certain performance gain to the network, and the inter-operator interference can be mitigated with only coarse information exchanged between operators.
Figure 4.37 shows the simulation results for the small cell sparse deployment scenario. The activation rate of the small cells is 0.5, i.e., only half of each operator's small cells in the rooms are active. In the sparse deployment scenario, because each station requires little spectrum in the simulation, all the spectrum access requirements of the small cells can still be met even without the cross-operator sharing scheme. As a result, the performance curves of the three schemes basically overlap. Meanwhile, the lines with hollow circle markers and hollow rectangle markers adopt lower power, which directly reduces the network spectrum efficiency. The simulation results show that in the sparse deployment scenario there is no need for inter-operator spectrum sharing. Of course, this is due to the relatively low service traffic in the simulation; an intra-operator shortage of spectrum resources may still appear if the service traffic of each small cell is large, in which case inter-operator spectrum sharing is needed.
Here we introduce a new 5G simulation platform based on universal software and hardware. The platform adopts a distributed master-slave parallel processing architecture, including the master node of the simulation platform, the slave nodes of the simulation platform, the client end, and the communications interface. As shown in Fig. 4.38, the functions of each part are as follows.
The master computational node. It is the management center of the simulation platform: it manages the simulation task requests from the clients, decomposes the simulation tasks, schedules them onto the multi-core computational resources of the master node or slave nodes for parallel processing, collects the simulation results of each slave node in real time, and then presents the summarized results to the clients. The master node is also a computational node that undertakes specific simulation tasks, and it handles master-slave synchronization management to ensure that the master and slave nodes execute the simulation tasks synchronously. The multi-core parallel processing capabilities of the master and slave nodes greatly increase the simulation calculation performance. The master node and slave nodes support both multi-core servers and hardware boards/FPGAs.
The slave nodes. The slave nodes are managed by the master node, take on computational tasks from the master node, and report the simulation results. A master
node can manage multiple slave nodes, so increasing the number of slave nodes
can improve the computing capability of the simulation system.
The client end. It provides the human-machine interface, supports starting and issuing simulation tasks, and displays the simulation results on the interface in real time.
The communications interface. Considering the high-speed and real-time simulation service requirements, it requires a high-speed bus and a communication protocol with strong real-time properties.
Figure 4.39 gives a deployment configuration of the master-slave nodes and the client end. Each software module of the simulation can be flexibly deployed on the master-slave nodes according to the requirements and does not have to follow a particular form.
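A minimal sketch of the master-slave dispatch pattern is given below, using a local process pool to stand in for the slave computational nodes; in the real platform the transport would be the high-speed communications interface described above.

```python
# sketch of master-slave dispatch: the master decomposes the simulation task,
# submits subtasks to worker processes ("slave nodes"), collects results as
# they complete, and aggregates the summary for the client
from concurrent.futures import ProcessPoolExecutor, as_completed
import random

def run_subtask(drop_id, seed=0):
    # stand-in for one decomposed simulation subtask executed on a slave node
    random.seed(seed + drop_id)
    return drop_id, random.gauss(2.5, 0.3)          # dummy cell throughput statistic

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:     # 4 "slave" workers
        futures = [pool.submit(run_subtask, drop) for drop in range(16)]
        results = {}
        for fut in as_completed(futures):                # master collects results in real time
            drop, throughput = fut.result()
            results[drop] = throughput
    summary = sum(results.values()) / len(results)       # master aggregates and reports
```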
The software functions of the simulation system are organized as shown in Fig. 4.40.
The demonstration shows the comparison between the video display effect of the massive MIMO cell and the common cell, as shown in Fig. 4.41. The video on the left side is the video display effect for the users in the massive MIMO cell, where the base station is configured with 32 antennas. The right side is the video display effect of the common cell, where the base station is configured with 8 antennas. The trend graph of the cell throughput in the lower left shows the real-time changes in the cell throughput indicator of the massive MIMO cell and the common cell. It can be seen from the throughput and the video that the massive MIMO performance is superior to that of the common cell. By switching the number of massive MIMO antennas to 64, the cell beamforming ability is improved, which further enhances the performance, as shown in Fig. 4.42.
Fig. 4.42 The coverage effect of the massive MIMO cell with 64 base station transmitting
antennas
As shown in the figure below, the map shows the coverage effect of the dense network with 121 cells after the user opens and executes the UDN project. As shown in Fig. 4.43, each circle represents a cell. A closed circle with no fill color represents a cell that is closed, with no service available, while a colored circle represents a cell that is open and can provide services; the deeper the color, the greater the service volume that can be provided. The change in the number of users in the whole simulation and the cell throughput trend graph are given below the main interface. The simulation task simulates the whole process of a football match held in the Bird's Nest Stadium: the UDN cells go from the closed state before the game to the open state during the game, the throughput is improved through the interference coordination algorithm, and the UDN cells gradually return to the closed state after the match. It intuitively presents the simulation effect of the UDN network in this scenario.
Through analyzing the characteristics of the 5G network and the key technologies of system simulation software, we can see the following development trends of software simulation testing.
1. Rapidity. It can be predicted that, for some time to come, parallel calculation based on multi-core and many-core processors will remain the major technical means to improve computing efficiency, and cloud computing and supercomputers will gradually become the mainstream calculation approaches for software system simulation.
2. Flexibility. Influenced by factors such as the flexible network architecture, network resource virtualization management, and parallel computation requirements, each module of the system simulation software needs to be fully decoupled and flexibly extensible, so the requirements on the flexibility of the software design become higher.
3. Comprehensiveness. In pursuit of higher QoS, simulation evaluation indicators have expanded from cell performance indicators to user-level QoS indicators. This requires the simulation to be more comprehensive: to establish service models closer to real scenarios, to design more refined and comprehensive statistical methods and indicators, and to implement algorithms with more comprehensive performance.
References
15. D. Truhachev. Universal multiple access via spatially coupling data transmission. In Proc. of IEEE International Symposium on Information Theory (ISIT), 2013:1884–1888.
16. X. Chu, D. Lopez-Perez, Y. Yang and F. Gunnarsson. Heterogeneous Cellular Networks: Theory, Simulation and Deployment. ISBN: 9781107023093, Cambridge University Press, 2013.
17. S. Wu, C.-X. Wang, H. Haas, H. Aggoune, M. Alwakeel and B. Ai. A Non-stationary Wideband Channel Model for Massive MIMO Communication Systems. IEEE Transactions on Wireless Communications, 2015, 14(3):1434–1446.
18. R. Zhang, X. Cheng, Q. Yao, C.-X. Wang, Y. Yang, and B. Jiao. Interference Graph Based Resource Sharing Schemes for Vehicular Networks. IEEE Transactions on Vehicular Technology, 2013, 62(8):4028–4039.
19. S. Sesia, I. Toufik and M. Baker. LTE - The UMTS Long Term Evolution: From Theory to Practice. ISBN: 9780470697160, John Wiley & Sons, 2011.
20. TR 36.211. Evolved Universal Terrestrial Radio Access (E-UTRA); Physical channels and modulation, version 12.5.0, Mar. 2015.
21. TR 36.212. Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and channel coding, version 12.4.0, Mar. 2015.
22. J. C. Ikuno, M. Wrulich and M. Rupp. System level simulation of LTE networks. IEEE Vehicular Technology Conference (VTC 2010-Spring), 2010:1–5.
23. R. Q. Hu, Y. Qian, S. Kota, G. Giambene. HetNets - a new paradigm for increasing cellular capacity and coverage. IEEE Wireless Communications, 2011, 18(3):8–9.
24. K. Wei, W. Zhang and Y. Yang. Optimal Microcell Deployment for Effective Mobile Device Power Saving in Heterogeneous Networks. Proceedings of IEEE International Conference on Communications (ICC 2014), 2014:4048–4053.
25. R. Shao, S. Lin, and M. Fossorier. Two Decoding Algorithms for Tailbiting Codes. IEEE Transactions on Communications, 2003, 51(10):1658–1665.
26. X. Wang, H. Qian, W. Xiang, J. Xu and H. Huang. An Efficient ML Decoder for Tail-biting Codes Based on Circular Trap Detection. IEEE Transactions on Communications, 2013, 61(4):1212–1221.
27. X. Wang, H. Qian, J. Xu, Y. Yang. Trap detection based tail-biting convolutional code decoding algorithm. Journal of Electronics and Information Technology, 2011, 33(10):2300–2305.
28. H. Pai, Y. Han, T. Wu, P. Chen and S. Shieh. Low-complexity ML Decoding for Convolutional Tail-biting Codes. IEEE Communications Letters, 2008, 12(12):883–885.
29. 3GPP TS 36.212; 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and Channel Coding (Release 8), Sep. 2007.
30. E. J. Candes, J. Romberg and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 2006, 52(2):489–509.
31. C. Zhao, W. Zhang, Y. Yang and S. Yao. Treelet-Based Clustered Compressive Data Aggregation for Wireless Sensor Networks. IEEE Transactions on Vehicular Technology, 2015, 64(9):4257–4267.
32. EPFL LUCE SensorScope WSN. http://lcav.epfl.ch/cms/lang/en/pid/86035.
33. G. Quer, R. Masiero, G. Pillonetto et al. Sensing, Compression and Recovery for Wireless Sensor Networks: Sparse Signal Modelling and Monitoring Framework Design. IEEE Transactions on Wireless Communications, 2012, 11(10):3447–3461.
34. N. Ahmed, T. Natarajan and K. R. Rao. Discrete cosine transform. IEEE Transactions on Computers, 1974, C-23(1):90–93.
35. G. Quer, R. Masiero, G. Pillonetto et al. Sensing, Compression and Recovery for Wireless Sensor Networks: Sparse Signal Modelling and Monitoring Framework Design. IEEE Transactions on Wireless Communications, 2012, 11(10):3447–3461.
36. T. Zhou, B. Xu, T. Xu, H. Hu and L. Xiong. User-specific link adaptation scheme for device-to-device network coding multicast. IET Communications, 2015, 9(3):367–374.
Software simulation has high scalability and flexibility; however, its authenticity and efficiency fall short of hardware simulation to a certain extent. This chapter introduces the technique of software and hardware co-simulation evaluation, so as to make up for the weakness of software simulation in authenticity and efficiency. As a systematic treatment, we first introduce the requirements, forms, and applications of the co-simulation evaluation test with software and hardware. Hardware-In-the-Loop (HIL) simulation evaluation is an important and effective method for the test and evaluation of new wireless communication technologies. Next, we elaborate the software and hardware co-simulation evaluation tests for link-level and system-level simulations, and present real implementations of system test cases. Finally, we summarize the co-simulation test evaluation of software and hardware.
The concept of software and hardware co-simulation was proposed as early as the establishment of the Hardware Description Language (HDL). It offered a reliable technical basis for software and hardware co-simulation through the interaction between drivers realized in the C language and various hardware devices. Valderrama et al. [1] proposed a unified model of software and hardware co-simulation, and the authors in [2-4] presented different hardware acceleration designs. The main idea of the software and hardware co-simulation evaluation test is [5] to divide a system into two main parts: one is realized by software modules, and the other is realized by hardware such as actual equipment, devices, or actual channels. Through the integration of the two parts, a minimum system on which a technical evaluation can be performed is obtained. In the past, it was impossible to fully evaluate and test software algorithms or measured objects in such a system.
As a technological means, the software and hardware co-simulation evaluation test reflects the ideas of authenticity and rapidness. It is an effective testing and evaluation method for dealing with the rapid development of 5G communications, distributing error detection across the whole design process and reducing the error risks in the final product.
2. Rapidness
In general, a software simulation system is a complex computing system built on operating systems, which needs a non-strictly-determined period to complete each operation, and the entire simulation is composed of tens of millions or even billions of tiny calculation units. Therefore, a complex simulation not only takes much time but also has no definite period. For example, in LTE system simulation, simulating one Transmission Time Interval (TTI) involves 798,474,240 Fast Fourier Transform (FFT) calculations, 817,152 matrix inversions, and 13,074,432 matrix multiplications. If software and hardware co-simulation is used and part of the software calculation is replaced by the real channel environment and hardware emulation, the efficiency of simulation, testing, and evaluation can be greatly improved.
The development of modern technology and market demand has continuously shortened product development cycles, while the needs for product testing and the integrity requirements are constantly growing. This requires effective evaluation at the algorithm or code stage, which advances the evaluation originally performed on the prototype to the technology research and development stage; the software and hardware co-simulation evaluation test can effectively evaluate the result of a technology implementation in advance, at the design stage.
In systems for mobile communication verification and evaluation, it is significant to use software and hardware co-simulation test and evaluation to test the key technologies under a real environment. Users can quickly realize the designed algorithms on the wireless link verification test platform based on the hardware system, and then test and evaluate them in a real wireless link environment, which has many advantages. It can provide sufficient and effective support for universities and research institutes to develop new communication algorithms. It can also shorten the technology development cycle to a large extent and reduce the R&D costs and risks of investing new technologies, algorithms, and standards into products. In addition, it can effectively promote the development of mobile communication technology and overcome the shortcomings of excessive code and simulation time brought by pure software simulation.
The forms of software and hardware co-simulation in this chapter are described based on bidirectional uplink and downlink communication applications; one-way communication can be treated as a special case.
Figure 5.1 shows one form of software and hardware co-simulation with the base station side as the test and evaluation object. It can be divided into two conduction test evaluations: direct connection and connection through the channel. In this application, the sample to be tested is the base station physical hardware, and the test and evaluation device at the terminal side is a joint platform of hardware and software.
Figure 5.2 shows another form of software and hardware co-simulation with the terminal side as the test and evaluation object. It can also be divided into two conduction test evaluations: direct connection and connection through the channel. In this application, the test sample is the terminal physical hardware, and the test and evaluation device on the base station side is a joint platform of hardware and software.
Figure 5.3 does not specify the terminal side or the base station side as the test and evaluation object. Instead, it adopts joint software and hardware platforms on both sides, which extends the scope of the test and evaluation to both the base station and terminal sides. In this application, the conduction test and evaluation can also be divided into two parts: through and not through the channel. This application form combines the base station, the uplink and downlink terminals, and the channels into a closed loop by using the software and hardware combination. Thus, this kind of software and hardware co-simulation testing and evaluation is called HIL simulation testing and evaluation.
Fig. 5.1 Software and hardware co-simulation with base station side as the test and evaluation
object
Fig. 5.2 Software and hardware co-simulation evaluation with terminal side as the test and
evaluation object
Fig. 5.3 No specification of terminal side or base station side as test and evaluation object
Superiority of HIL
HIL simulation means that, during the system test, the tested system uses a real control system while the remaining parts use actual products where possible; where they cannot, a real-time digital model is used to simulate the external environment of the controller so as to test the entire system. In HIL simulation, the actual controller and the simulation models that replace the real environment or equipment together form a closed-loop test system. Components for which it is difficult to establish a mathematical simulation model can be retained in the closed loop, so that the test and initial matching work can be completed in the lab environment, which greatly reduces development costs and shortens the development cycle [6].
The HIL simulation test evaluation is a semi-physical simulation developed on the basis of physical simulation and digital simulation [7]. It is a typical semi-physical simulation method based on the DUT and its environment [8, 9], which realizes the function of a particular device or of the external environment. In the HIL simulation test, the simulation model replaces the actual equipment or environment, and the model and the real controllers constitute a closed-loop test system through the interface. Components for which it is difficult to establish a mathematical model (such as an inverter system) can stay in the closed-loop system, so the test of the controller can be completed in the lab environment. Limit testing, fault testing, and high-cost tests or tests that are impossible in the real environment can also be carried out. The HIL simulation technology makes full use of the convenience and simplicity of computer modeling and reduces the costs. It is easy to make fast and flexible changes to the system input, by which the changes of system performance can be observed while changing the parameters. For complex links that are not essential to the investigation, the hardware can be connected directly to the simulation system, and there is no need to build mathematical models for all details of the system [10, 11].
As a real-time simulation, the HIL simulation technology can incorporate some physical objects into the simulation loop, and it mainly has the following advantages. Real-time simulation takes the same time as the natural operation of the system. It increases the reliability of the hardware: by using the HIL technology, the system function test can be carried out at the beginning of the design, which can effectively reduce errors and defects possibly existing in the development and design process. It reduces test and evaluation risks: when the simulation system environment is used to simulate the actual testing, high-risk control functions, such as the security operations, alarms, and emergency treatment of verification systems, can be exercised safely, effectively reducing the testing risks. It reduces testing costs: by using the simulation system environment to simulate the actual testing process, the procurement and configuration of various ancillary equipment in the early system design can be avoided, reducing the system testing time and costs. It meets the testing requirements of different application environments: using flexible software configurations, different system environments can be simulated to meet specific test requirements.
Currently, a relatively mature and rapid prototyping method to realize the HIL function is to design and develop the system architecture based on current Commercial Off-The-Shelf (COTS) modules, FPGAs, and advanced multi-core microprocessors. By making use of their functions and the prototype development speed, we can accelerate the design and verification test process and can verify and test new technology algorithms in an analog environment. Because the COTS system reduces the requirements for specifically customized hardware, we can get rid of the difficulties of customized hardware in design, maintenance, and expansion.
As shown in Fig. 5.4, the HIL verification system for a communication system is similar to a real communication system. It can provide a full-duplex communication loop and use the software simulation code platform to interact with the real-time hardware to simulate, verify, and test the rest of the system.
Fig. 5.4 Link simulation system configuration diagram of a communication system hardware-in-the-loop
The HIL simulation combines physical simulation with mathematical simulation. During the simulation, the computer is connected with one part of the real system, and mathematical models are built in computation to simulate the parts that are not easy to test or that do not yet exist. This simulation model utilizes the computer's modeling capability, featuring simple modeling, low cost, easy parameter modification, and flexible use. For the parts of the system for which it is difficult to establish a mathematical model, the actual system or a physical model is connected. Thus, the operation of the entire system can be ensured, achieving the simulation of the overall system. The HIL simulation has higher authenticity, so it is generally used to verify the validity and feasibility of the system scheme, to simulate a product's failure modes, and to perform dynamic simulation in the closed-loop test of communication systems. The HIL simulation makes the simulation conditions closer to the real situation. In pre-research, commissioning, and testing of a product, it can reflect the performance of the product more accurately and objectively.
The HIL test system uses real-time processors to run simulation models that emulate the operating status of the evaluated objects. It conducts an all-round, systematic test of the tested system by connecting to it via I/O interfaces. Although HIL tests in different fields have different testing emphases, the overall simulation architecture is similar. Generally speaking, a HIL test system can be divided into three main parts, namely, the real-time processor, the I/O interfaces and the operation interface.
(1) The real-time processor unit is the core of the HIL test system. It is used to run the real-time model of the tested and evaluated objects, the board drivers and the information exchange between the upper and lower computers, such as hardware I/O communications, data recording, stimulus generation and model execution. The real-time system is essential for representing the non-existing physical parts in an accurate simulation test system.
(2) The I/O interface provides the analog, digital and bus signals that interact with the measured components and connect the HIL-based communications. Signal
stimuli are produced through the I/O interface, data are obtained for recording and analysis, and the sensor/actuator interaction is provided between the tested Electronic Control Unit (ECU) and the virtual environment of the model simulation. The operation interface is provided for users to set parameters and control the test platform. The HIL simulation is a powerful test method, supporting the direct conversion from algorithms to RF signals (Fig. 5.5).
(3) The operation interface, which works together with the real-time processor, provides test commands and visualization. It provides configuration management, test automation, analysis and reporting tasks. The HIL test system can simulate the real system: through the hardware interface it generates physical signals for the tested object, while collecting the control signals output by the tested object and converting them into digital values for computation; the HIL system and the tested object thereby form a closed-loop test system. Figure 5.6 gives an HIL case architecture in the communication domain. It mainly includes the signal transmitter, the signal receiver, the channel simulator between transmitter and receiver, and the software module. By incorporating real hardware, it can avoid the defects of pure software simulation and accelerate the simulation speed.
(1) Signal transmitter and receiver: the modules through which the testing system sends and receives signals. The transmitting module includes the embedded controller used for remote access and control of data generation as well as data transmission between modules, the arbitrary waveform generator used for generating the modulated baseband data in the test, and the frequency converter used to up-convert the IF signal into an RF signal. The receiving module includes the embedded controller used for remote access and control of data reception and transmission between modules, the IF digital processor used to receive, synchronize and demodulate the down-converted digital signals, and the frequency converter used to down-convert the RF signal into an IF or baseband signal.
(2) Channel simulator: emulates the real channel transmission environment in order to simulate different scenarios.
(3) HIL verification software module: includes the HIL verification software, the test result verification module and other system software.
Users can use the operation interface in the HIL verification software to control the testing platform, perform algorithm-replacement test verification and process data. Adopting the HIL mode can avoid the artificiality and limitations of pure software simulation verification and increase the accuracy and validity of testing and verification. This new testing and verification method can meet the requirements of RF in-the-loop verification of complex algorithms in 5G wireless communications.
[Figure: the simulation software is interfaced to an FPGA-based hardware accelerator card through a PCI controller and a daughter board]
Closed loop methods based on the HIL link simulation can be divided into the
following categories:
1. Signal real-time feedback based on the TCP/IP technology.
2. Signal real-time feedback based on Datasocket.
3. Signal real-time feedback based on the shared variable engine.
4. Signal real-time feedback adopting other high-speed bus (Fig. 5.9).
In a HIL verification system for communications based on the software defined radio platform, the design of the feedback signal implementation should consider the following factors: the software and hardware architecture of the transceiver; the reduction of heterogeneous components in the system software and hardware; and using the most reasonable, practical and efficient way to achieve the feedback signal while meeting the minimum system requirements. Considering all the factors above, signal real-time feedback based on the TCP/IP technology and the shared variable engine may be the best scheme.
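As a concrete illustration of the TCP/IP-based feedback option, the minimal Python sketch below streams demodulated receiver metrics back to the simulation host over a plain TCP socket. The host address, port and message format are hypothetical and are not taken from the platform described here.

```python
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007          # hypothetical simulation-host address

def feedback_server():
    """Simulation-host side: receive feedback records from the hardware receiver."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:          # one JSON record per line
                print("feedback received:", json.loads(line))

def send_feedback(sock, tti, crc_ok, cqi):
    """Hardware-receiver side: push one demodulation result back to the host."""
    record = {"tti": tti, "crc_ok": crc_ok, "cqi": cqi}
    sock.sendall((json.dumps(record) + "\n").encode())

if __name__ == "__main__":
    # In a real HIL setup the two ends run on different machines or processes.
    threading.Thread(target=feedback_server, daemon=True).start()
    time.sleep(0.2)                      # give the server time to start listening
    with socket.create_connection((HOST, PORT)) as sock:
        for tti in range(3):
            send_feedback(sock, tti, crc_ok=True, cqi=9.5)
    time.sleep(0.2)                      # let the server print before exiting
```

The same loop structure applies to the shared-variable option; only the transport layer changes.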
Fig. 5.9 Feedback signal implementation methods in the HIL verification system for communication systems: (1) TCP/IP, (2) Datasocket, (3) shared variable, (4) optical fiber, (5) other high-speed bus
[Block diagram: input bit stream, S/P, FEC modules, quadrature amplitude modulation, cyclic delay modulation vector mapping, IFFT, per-antenna cyclic shift (D_1 ... D_NT) and cyclic prefix on each of the N_T transmit branches; receiver: OFDM demodulation, cyclic autocorrelation function estimation, decision unit, cyclic delay modulation vector de-mapping, decoder, P/S, output bit stream]
Fig. 5.13 System block diagram of semi-physical simulation testing of CDD-OFDM system
Fig. 5.14 Schematic diagram of LTE downlink OFDM signal generation based on SystemVue
Fig. 5.18 Time domain graph of OFDM signal without CDD modulation
Fig. 5.19 Time domain graph of OFDM signal after CDD modulation
Fig. 5.20 Constellation graph of received signal from CDD-OFDM system simulation test
(16QAM)
I. Overview
The LTE uplink multiple access mode is SC-FDMA, whereas the downlink mode is OFDMA. Both have high spectral efficiency and use low-complexity frequency-domain equalization to suppress multipath fading. However, SC-FDMA is a single-carrier technology with a lower peak-to-average power ratio, so it is used as the uplink multiple access technology to reduce the cost of the mobile station. SC-FDMA has become the uplink transmission technology of the LTE physical layer. The technology has achieved great success in commercial and standardization activities because the SC-FDMA system has the following advantages:
$$\Gamma_i = \{\, (i-1)P + r \mid r = 0, 1, \ldots, P-1 \,\} \qquad (5.1)$$
Assume that the length of the cyclic prefix N_g is larger than the maximum channel delay spread and the largest timing offset among users.
$$r_n = \sum_{i=1}^{M} y_n^{(i)}\, e^{\,j 2\pi \varepsilon_i n / N} + z_n, \qquad -N_g \le n \le N-1 \qquad (5.3)$$

$$\mathbf{R} = \frac{1}{\sqrt{N}} \sum_{i=1}^{M} \mathbf{Y}_i \circledast \mathbf{C}_i + \mathbf{Z} \qquad (5.4)$$

where $\mathbf{R} = [R_0, R_1, \ldots, R_{N-1}]^T$, $\mathbf{Y}_i = [Y_0^{(i)}, Y_1^{(i)}, \ldots, Y_{N-1}^{(i)}]^T$, $\mathbf{C}_i = [C_0^{(i)}, C_1^{(i)}, \ldots, C_{N-1}^{(i)}]^T$, $\mathbf{Z} = [Z_0, Z_1, \ldots, Z_{N-1}]^T$, $[\cdot]^T$ denotes the matrix transpose, $\circledast$ denotes circular convolution, and $R_k$, $Y_k^{(i)}$, $C_k^{(i)}$ denote the FFT transforms of $r_n$, $y_n^{(i)}$ and $e^{\,j 2\pi \varepsilon_i n / N}$, respectively.
The receiver detection algorithm is shown as below in Fig. 5.23.
To compensate for the carrier frequency offset, before the FFT processing the received sequence $r_n$ is multiplied by the time-domain sequence $e^{-j 2\pi \varepsilon_o n / N}$:

$$\hat{y}_n = r_n\, e^{-j 2\pi \varepsilon_o n / N}, \qquad 0 \le n \le N-1$$

$$\hat{\mathbf{Y}} = \frac{1}{\sqrt{N}}\, \mathbf{R} \circledast \mathbf{C}_o'
= \frac{1}{N} \sum_{i=1}^{M} \mathbf{Y}_i \circledast \mathbf{C}_i \circledast \mathbf{C}_o' + \frac{1}{\sqrt{N}}\, \mathbf{Z} \circledast \mathbf{C}_o'
= \frac{1}{\sqrt{N}} \sum_{i=1}^{M} \mathbf{Y}_i \circledast \mathbf{D}_{o,i}' + \mathbf{Z}'
= \mathbf{H}_m \mathbf{X}_m \circledast \mathbf{D}_{o,m}' + \sum_{\substack{i=1 \\ i \ne m}}^{M} \mathbf{H}_i \mathbf{X}_i \circledast \mathbf{D}_{o,i}' + \mathbf{Z}'$$

where $\mathbf{D}_{o,i}' = [D_{o,0}^{(i)\prime}, D_{o,1}^{(i)\prime}, \ldots, D_{o,N-1}^{(i)\prime}]^T$, $\mathbf{X}_i = [X_0^{(i)}, X_1^{(i)}, \ldots, X_{N-1}^{(i)}]^T$, $\mathbf{H}_i = \mathrm{diag}\,(H_0^{(i)}, H_1^{(i)}, \ldots, H_{N-1}^{(i)})$, $\hat{\mathbf{Y}} = [\hat{Y}_0, \hat{Y}_1, \ldots, \hat{Y}_{N-1}]^T$, and $D_{o,k}^{(i)\prime}$, $X_k^{(i)}$, $H_k^{(i)}$, $C_{o,k}'$, $\hat{Y}_k$ are the FFT transforms of $e^{\,j 2\pi (\varepsilon_i - \varepsilon_o) n / N}$, $x_n^{(i)}$, $h_n^{(i)}$, $e^{-j 2\pi \varepsilon_o n / N}$ and $\hat{y}_n$, respectively.
In the expression, the first term includes the desired signal on the k-th sub-carrier and the Self Interference (SI); the second term is the Multi-User Interference (MUI); the third term is the additive noise. The received data on the k-th subcarrier is given by the following expression.
$$\hat{Y}_k = \frac{1}{\sqrt{N}}\, X_k^{(m)} H_k^{(m)}\, e^{\,j\pi\frac{N-1}{N}(\varepsilon_m-\varepsilon_o)}\, \frac{\sin\!\big(\pi(\varepsilon_m-\varepsilon_o)\big)}{N \sin\!\big(\frac{\pi(\varepsilon_m-\varepsilon_o)}{N}\big)}
+ \frac{1}{\sqrt{N}} \sum_{\substack{q_m \in \Gamma_m \\ q_m \ne k}} X_{q_m}^{(m)} H_{q_m}^{(m)}\, e^{\,j\pi\frac{N-1}{N}(\varepsilon_m-\varepsilon_o+q_m-k)}\, \frac{\sin\!\big(\pi(\varepsilon_m-\varepsilon_o+q_m-k)\big)}{N \sin\!\big(\frac{\pi(\varepsilon_m-\varepsilon_o+q_m-k)}{N}\big)}
+ \frac{1}{\sqrt{N}} \sum_{\substack{i=1 \\ i \ne m}}^{M} \sum_{q_i \in \Gamma_i} X_{q_i}^{(i)} H_{q_i}^{(i)}\, e^{\,j\pi\frac{N-1}{N}(\varepsilon_i-\varepsilon_o+q_i-k)}\, \frac{\sin\!\big(\pi(\varepsilon_i-\varepsilon_o+q_i-k)\big)}{N \sin\!\big(\frac{\pi(\varepsilon_i-\varepsilon_o+q_i-k)}{N}\big)} + \text{Noise}$$

Assume that $X_{q_i}^{(i)}$, $X_{q_m}^{(m)}$, $X_{q_m'}^{(m)}$ are uncorrelated and $E\big[X_{q_i}^{(i)}\big] = E\big[X_{q_m}^{(m)}\big] = 0$, where $q_i \in \Gamma_i$, $q_m, q_m' \in \Gamma_m$, $q_m \ne q_m'$, and $E[\cdot]$ denotes the expectation operator. Set $E\big[|X_{q_i}^{(i)}|^2\big] = \sigma_X^2$ and $E\big[|H_{q_i}^{(i)}|^2\big] = \sigma_i^2$. Then the powers of the received signal of user m on the k-th sub-carrier, of the SI and of the MUI are as follows:

$$\sigma_{\mathrm{Signal}}^2 = \frac{\sigma_m^2 \sigma_X^2}{N}\, \frac{\sin^2\!\big(\pi(\varepsilon_m-\varepsilon_o)\big)}{\sin^2\!\big(\frac{\pi(\varepsilon_m-\varepsilon_o)}{N}\big)}$$

$$\sigma_{\mathrm{SI},k}^2 = \frac{\sigma_m^2 \sigma_X^2}{N} \sum_{\substack{q_m \in \Gamma_m \\ q_m \ne k}} \frac{\sin^2\!\big(\pi(\varepsilon_m-\varepsilon_o+q_m-k)\big)}{\sin^2\!\big(\frac{\pi(\varepsilon_m-\varepsilon_o+q_m-k)}{N}\big)}$$

$$\sigma_{\mathrm{MUI},k}^2 = \frac{\sigma_X^2}{N} \sum_{\substack{i=1 \\ i \ne m}}^{M} \sigma_i^2 \sum_{q_i \in \Gamma_i} \frac{\sin^2\!\big(\pi(\varepsilon_i-\varepsilon_o+q_i-k)\big)}{\sin^2\!\big(\frac{\pi(\varepsilon_i-\varepsilon_o+q_i-k)}{N}\big)}$$

where $\mathrm{SIR}_m(\varepsilon_o) = \frac{1}{P} \sum_{k \in \Gamma_m} \mathrm{SIR}_k^{(m)}(\varepsilon_o)$ and $P = N/M$.
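To make the effect of residual frequency offsets concrete, the short Python sketch below numerically evaluates the signal, SI and MUI power expressions reconstructed above, assuming the block-wise subcarrier mapping of (5.1); the parameter values (N, M, the offsets and flat unit-power channels) are illustrative assumptions, not taken from the test case.

```python
import numpy as np

def power_terms(N, M, eps, eps_o, k, m, sigma_x2=1.0, sigma_h2=None):
    """Signal, self-interference and multi-user interference powers on
    subcarrier k of user m, following the sin^2-ratio expressions above
    (block mapping Gamma_i = {(i-1)P + r, r = 0..P-1} assumed)."""
    P = N // M
    sigma_h2 = sigma_h2 if sigma_h2 is not None else np.ones(M)
    gamma = lambda i: np.arange((i - 1) * P, i * P)          # subcarriers of user i

    def ratio(delta):
        # sin^2(pi*delta) / sin^2(pi*delta/N), with the delta -> 0 limit N^2
        num, den = np.sin(np.pi * delta) ** 2, np.sin(np.pi * delta / N) ** 2
        return np.where(den < 1e-15, float(N) ** 2, num / np.maximum(den, 1e-15))

    d_m = eps[m - 1] - eps_o
    p_sig = sigma_h2[m - 1] * sigma_x2 / N * ratio(d_m)
    p_si = sigma_h2[m - 1] * sigma_x2 / N * sum(
        ratio(d_m + q - k) for q in gamma(m) if q != k)
    p_mui = sigma_x2 / N * sum(
        sigma_h2[i - 1] * ratio(eps[i - 1] - eps_o + q - k)
        for i in range(1, M + 1) if i != m for q in gamma(i))
    return p_sig, p_si, p_mui

# Example: 4 users on 256 subcarriers, small residual CFOs (in subcarrier units)
N, M = 256, 4
eps = np.array([0.05, -0.10, 0.08, 0.02])
p_sig, p_si, p_mui = power_terms(N, M, eps, eps_o=0.05, k=10, m=1)
print("SIR on subcarrier 10 of user 1: %.1f dB"
      % (10 * np.log10(p_sig / (p_si + p_mui))))
```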
2. Test environment and setup
(1) Characteristics of input data
The input signal is the SC-FDMA signal, with the following characteristics:
(1) Total number of sub-carriers: 256
(2) Cyclic prefix length: 40
(3) Modulation mode: QPSK
(4) Subcarrier spacing: 15 kHz
(5) Moving speed: 39 km/h
(6) Carrier frequency: 400 MHz
(2) Characteristics of radio channel
The wireless channel is assumed to be a multipath Rayleigh fading
channel.
3. Test indicators
BER: defined as the ratio of the number of erroneous received bits to the total number of transmitted bits.
4. Expression forms and accuracy
For the tested data, the following forms of expression can be used:
(1) With other parameters unchanged, the frequency offset is fixed, and the BER performance is compared with that of traditional methods.
(2) With other parameters unchanged, the frequency offset is randomly and uniformly distributed. The range of the random frequency offset is controlled, and the BER performance is compared with the values obtained by traditional methods.
II. Scheme design
According to the test case requirements, the design uses the NI PXIe-5673E, NI PXIe-5663E and C8 to complete the testing tasks (Table 5.2).
The hardware connections in the testing scheme are shown below. The device connection is shown in Fig. 5.24. According to the test algorithm, the 5673E device generates the transmission signal waveform, which passes through the C8 channel emulator; the received signal then enters the 5663E. The stored data is processed by a PC and the BER is analyzed.
III. Test results
Because of equipment limitations, only single-user situations are tested; with the user number M = 1, the SC-FDMA system degenerates into a Single Carrier Frequency Domain Equalization (SC-FDE) system. The test channels are the AWGN and Extended Vehicular A model (EVA) channels. The BER performance of the measurements and the numerical simulation is shown in Fig. 5.25.
In the AWGN channel, the measured and numerically simulated BER performance is basically consistent. For the EVA channel, the measured BER performance is worse than the numerical simulation performance: at BER = 0.1, there is a loss of about 8 dB. The reasons for the poorer measured BER performance in the EVA channel are as follows:
[Fig. 5.25: BER versus SNR (dB) curves for the AWGN PXI measurement, EVA PXI measurement, EVA simulation and AWGN simulation]
without increasing the channel bandwidth and the total transmitting power. Theoretically speaking, the channel capacity will increase linearly with the growth of the number of antennas. It has become a key technology and a hot research topic of IMT-A mobile communications systems. Therefore, the test for multi-channel RF and wireless channels is a necessary and important part of the key IMT-A technology testing and verification platform.
The previous test solutions in the industry mostly concentrate on 2 × 2 channels, and their application scope is limited. Therefore, we can say that by far there is no really mature commercial solution suitable for multi-channel (4 × 4, 6 × 6, or even more channels) tests.
The German company MEDAV and the Finnish company Elektrobit both offer channel sounder products that support multi-channel channel testing. However, in their implementations, a single signal source transmits in different time slices from the same source through a high-speed RF switch. Therefore, they cannot complete the test of diversity technology and differ from the proposed implementation method of the multi-channel adaptive and self-updating test platform. Besides, the price of each company's single product, not including software, is more than 5 million.
1.1 Difficulties and challenges in the system development
The multichannel adaptive and self-updating test platform will
directly face the challenge of multi-channel testing. The typical chal-
lenges it brings to traditional test equipment are as follows:
1. It needs multiple signal sources to precisely synchronize generating
source signals and self-correct.
2. It needs multiple analyzers to precisely and synchronically analyze
the signals and self-correct.
3. Data sample transmission rate and memory depth of multi-channel
testing are far beyond the support ability of the traditional test
equipment.
4. It needs to develop complex channel matrix algorithms and multi-
path test algorithms.
5. It needs to support a channel model as the foundation of performance evaluation and comparison.
Therefore, the challenges and complexity faced by the development are enormous, and they increase dramatically with the number of channels. In this case, the design objective is a 6 × 6 channel RF and channel testing system.
2. Interfaces and mechanisms shared with the software simulation platform
In this case, the integrated verification platform fully reuses the research results of the subtasks in the software simulation platform case, truly fulfilling the reusability of project results. This is consistent with the initial target of
The large-scale database is used to store the test instrumentation, the test task information, the test log data, etc. It also provides data services for the manager and the adaptor, and implements all kinds of configurator interfaces in message mode, realizing the complete process from remote user input and configuration through testing to data playback.
II. Architecture design and implementation
The overall architecture design is shown below; the system is divided into seven parts.
1. A 4-transmitting and 6-receiving wireless link platform based on modular instruments, which achieves synchronization among the multiple transmitters, among the multiple receivers, and between the transmitters and receivers.
2. A common interface and mechanism shared with the software simulation platform, through which algorithms can be replaced according to the interface specification.
3. Automated testing and verification support for the instruments of all five manufacturers and seven instrument types.
4. Test server technology, and remote test configuration and implementation technology based on the B/S architecture.
5. Test configurators supporting remote operation, multi-task implementation managers and task executors.
6. A large-scale database that includes all kinds of test configurator interfaces and supports fast indexing.
7. Websites that users can access via the network, and a general control interface supporting both local and networked use (Fig. 5.26).
The verification platform of key IMT-Advanced technologies has implemented substitutable HIL key technology algorithms, a 4-transmitting and 4-receiving configuration, and a verification environment for real-time channel simulation. As an open and sharable platform, it allows different users to make remote verification configurations via network and browser and to use it both online and offline. Therefore, the verification platform has developed and accomplished the following function components:
1. Testing and verification configurator
It is designed to input and proofread multi-user verification parameters, check their validity and store the data in the database.
2. Multi-task manager
It is designed to manage and schedule the testing and verification tasks
requested by multiple users and to interact with the database.
3. Multi-task actuator
According to the manager's scheduling, it executes orthogonal multi-tasks and manages their execution state. Composed of several standardized test modules, it can obtain the test results directly and store them in the database.
4. Test database
It is used to record the relevant user information and the test data.
5. Database and configurator interface
It is used to convert the XML-based user requests into stored data.
6. General interface
It enables users to log in to the verification platform either locally or remotely.
7. SWAN website
It is designed for remote users, supporting online and offline use via
browser.
III. Applications of integrated test
1. Examples of standardized test scheme
(1) General parameters measurement module
The function of this module is to control the spectrum analyzer E4440A, which is used to test indicators of wireless devices, mainly including channel power, power spectrum, etc. The E4440A configuration interface is shown in Fig. 5.27.
PXI 5601 are used for parameter configurations. The data of the transmitter and receiver can be viewed and used to verify the transmitting and receiving of algorithms with a single antenna, as shown in Figs. 5.31, 5.32, 5.33, and 5.34.
In this hardware and software system-level co-simulation, the physical layer of the simulation platform is partially substituted by the real physical layer and the transmission network, which increases the realism and real-time behavior of the system.
The design and implementation of the simulation software platform follow Chapter 4, and the design and implementation of the real physical layer adopt the above-mentioned HIL software and hardware link system. The mapping from the system-level simulation software to the real physical layer should be customized in accordance with the specific requirements of the simulation evaluation. It will be introduced in detail in the real case in Sect. 5.3.3 of this chapter.
[Block diagram: the system simulation platform connects through an interface layer (control panel: parameter configuration, scheduling, command interface; data panel: service flow sending, feedback receiving) to the real PHY hardware platform, with baseband data passing through D/A conversion, up-conversion, real-time channel emulation or a real channel, down-conversion, A/D conversion and decoding at the UE]
Fig. 5.36 A block diagram of system implementation of software and hardware co-simulation
the delay of the whole process in which the data is sent from the simulation platform to the air interface via the link transmitter module, and then returns to the platform after being received, demodulated and decoded by the link receiving module.
This kind of software and hardware co-simulation method can make full use of the high-speed processing capabilities of the hardware, and enables the system-level simulation performance of some links to approach real time. Combined with the relatively complete system functions of the system simulation platform, it can better simulate system application scenarios with high requirements on system transmission delay.
Fig. 5.38 NTB does not affect the resource allocation algorithm of either system
Fig. 5.39 Communication mechanism between two PCI domains using NTB
domain of System A will serve as a window into the physical address space in the PCI domain of System B, while the corresponding address space of System B will be a window into the physical address space within the PCI domain of System A.
After the resource allocation of Systems A and B is finished, the NTB transfers data between the two systems through memory mechanisms. These mechanisms include scratchpad registers for exchanging small amounts of data, doorbell registers for interrupt requests, and address translation windows through which large regions of one address space are mapped into the other by the NTB.
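For readers unfamiliar with these register-based mechanisms, the sketch below illustrates the idea in Python by memory-mapping a PCI BAR resource file on Linux and writing to hypothetical scratchpad and doorbell offsets. The device path and register offsets are invented for illustration only and do not correspond to any particular NTB device or to the PXImc implementation.

```python
import mmap
import struct

# Hypothetical PCI device and register layout -- purely illustrative values.
BAR0_PATH = "/sys/bus/pci/devices/0000:03:00.0/resource0"
SCRATCHPAD0_OFFSET = 0x40   # mailbox-style exchange of a small data word
DOORBELL_OFFSET = 0x60      # writing a bit raises an interrupt on the peer

def write_reg(bar, offset, value):
    """Write a 32-bit little-endian register inside the mapped BAR."""
    bar[offset:offset + 4] = struct.pack("<I", value)

def read_reg(bar, offset):
    """Read a 32-bit little-endian register from the mapped BAR."""
    return struct.unpack("<I", bar[offset:offset + 4])[0]

def notify_peer(message_word):
    """Pass one word through a scratchpad register, then ring the doorbell."""
    with open(BAR0_PATH, "r+b") as f:
        bar = mmap.mmap(f.fileno(), 4096)     # map the first page of BAR0
        write_reg(bar, SCRATCHPAD0_OFFSET, message_word)
        write_reg(bar, DOORBELL_OFFSET, 0x1)  # interrupt the remote PCI domain
        bar.close()

if __name__ == "__main__":
    notify_peer(0x1234ABCD)   # would require root and a real device to run
    print("scratchpad written and doorbell rung (illustrative only)")
```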
Figure 5.39 illustrates the communication mechanism between two PCI domains using NTB. The PXImc specification developed by PXISA defined specific
Fig. 5.40 An example of a distributed computing system that uses PXI Express system and
PXImc interface board
To simulate the interference from multiple users in adjacent cells to a target user, the PXI bus architecture is used as the core, and USRP-RIO is used to emulate the real physical channel, so as to complete the system configuration of upper-layer parameters and the downlink simulation of the LTE physical layer.
I. System architecture
The system simulation platform architecture based on USRP-RIO physical layer acceleration is shown in Fig. 5.42. By integrating the link-level test into a complete system simulation platform, multi-cell HIL testing and simulation is realized.
While the traditional test scheme only considers a single link or a limited number of interferers, this HIL scheme adds HIL performance verification of the transceiver under test in a scenario of real multi-cell, multi-user scheduling. This scheme aims especially to verify the performance of algorithms that involve multiple cells, for example CoMP, synchronization for carrier aggregation, etc.
Fig. 5.41 Composition of system simulation platform with physical layer acceleration based on USRP-RIO
Fig. 5.42 System simulation platform architecture based on USRP-RIO physical layer
acceleration
Compared with the traditional system, this scheme improves the hardware acceleration and the authenticity of the system simulation platform. In the implementation of a traditional system simulation platform, mapping an equivalent SNR into a BLER results in a loss of accuracy, especially for complex transmission (equalization) schemes in which the instantaneous SNR varies greatly across time-frequency locations. Besides, it is difficult to evaluate
[Figure: the system simulation platform connects through MXI (PXIe-8381) interfaces to USRP-RIO (Tx) and USRP-RIO (Rx)]
A user performs a real-time test over the actual physical channel. During the process, the system simulation platform transfers the related user configuration parameters to the transmitter, the receiver and the channel emulation terminal via a high-bandwidth data board. Physical signals are then sent by the USRP-RIO (Tx) through the air interface or the channel simulator. After they are received and demodulated, the feedback information returns to the system simulation platform via the high-bandwidth data transmission board.
Table 5.5 PHY -> MAC (transmitting) feedback parameters

Parameter name    Range/value/remarks    Data type/size
Device No.                               INT, 1
TTI No.                                  INT, 1
UE ID                                    INT, 1
Cell ID                                  INT, 1
BTS ID                                   INT, 1
Feedback CRC                             INT, 1
Feedback CQI                             FLOAT, 1
Feedback ESNR                            FLOAT, 1
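As an illustration of how such a PHY-to-MAC feedback record might be serialized for transfer over the high-bandwidth data board, the sketch below packs the fields listed in Table 5.5 into a fixed binary layout; the byte ordering and the use of 32-bit integers and floats are assumptions for illustration, not the platform's actual wire format.

```python
import struct
from dataclasses import dataclass

# Six 32-bit ints followed by two 32-bit floats, little-endian (assumed layout).
FEEDBACK_FORMAT = "<6i2f"

@dataclass
class PhyToMacFeedback:
    device_no: int
    tti_no: int
    ue_id: int
    cell_id: int
    bts_id: int
    crc: int          # 1 = CRC pass, 0 = CRC fail
    cqi: float
    esnr: float

    def pack(self) -> bytes:
        return struct.pack(FEEDBACK_FORMAT, self.device_no, self.tti_no,
                           self.ue_id, self.cell_id, self.bts_id, self.crc,
                           self.cqi, self.esnr)

    @classmethod
    def unpack(cls, payload: bytes) -> "PhyToMacFeedback":
        return cls(*struct.unpack(FEEDBACK_FORMAT, payload))

if __name__ == "__main__":
    msg = PhyToMacFeedback(1, 42, 3, 7, 1, crc=1, cqi=11.0, esnr=14.5)
    raw = msg.pack()
    assert PhyToMacFeedback.unpack(raw) == msg
    print(len(raw), "bytes per feedback record")
```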
Fig. 5.47 The system block diagram of the implementation of transmitters and receivers of
downlink in physical layer
Fig. 5.51 The original noisy channel estimate obtained from the LS algorithm
Y = S * H + N
H_est = Y / S = H + N/S
H_filt ≈ H (after low-pass filtering)
CQI = S * H / N = H / (N/S) ≈ H_filt / (N/S)
whereas
N/S = H_est − H_filt = (H + N/S) − H = N/S
so
SINR = H_filt / (H_est − H_filt)
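The following Python sketch reproduces this estimation chain numerically: a least-squares channel estimate over known pilot symbols, a simple moving-average low-pass filter standing in for the platform's filter module, and the SINR computed as H_filt / (H_est − H_filt). The pilot layout, filter length and channel model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_sinr(num_pilots=48, snr_db=15, filter_len=5):
    """LS channel estimation on pilots, low-pass filtering, and SINR per pilot."""
    s = np.exp(1j * np.pi / 4) * np.ones(num_pilots)           # known pilot symbols
    h = np.convolve(rng.standard_normal(num_pilots) +          # smooth fading profile
                    1j * rng.standard_normal(num_pilots),
                    np.ones(8) / 8, mode="same")
    noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
    n = noise_std * (rng.standard_normal(num_pilots) +
                     1j * rng.standard_normal(num_pilots))

    y = s * h + n                                              # Y = S*H + N
    h_est = y / s                                              # H_est = H + N/S
    kernel = np.ones(filter_len) / filter_len
    h_filt = np.convolve(h_est, kernel, mode="same")           # denoised channel
    noise_est = h_est - h_filt                                 # ~ N/S
    sinr = np.abs(h_filt) ** 2 / np.maximum(np.abs(noise_est) ** 2, 1e-12)
    return 10 * np.log10(sinr)

sinr_db = estimate_sinr()
print("mean estimated SINR: %.1f dB" % sinr_db.mean())
```

Because the filtering suppresses only part of the noise, the resulting estimate is biased, which is why the calibration step described below is needed.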
Fig. 5.53 The estimated channel values of each sub-band are extracted and then sent to filter
module
From the data of a subframe, in the order of symbol index first and then sub-band index, the estimated channel values of each sub-band are extracted and sent to the filter module (in the two steps below), as shown in Fig. 5.53.
In the upper computer, after passing through a low-pass filter, an approximate value of the denoised channel is obtained.
The difference between H_est obtained through the LS algorithm and H_filt obtained after filtering serves as the initial estimate of the noise.
The above two steps are shown below:
LTE Host DL.gvi -> LTE Read Channel Estimates.gvi -> LTE Calculate CQI.gvi -> LTE Calculate SubframeCQI.gvi -> LTE Channel Estimation subband.gvi (Fig. 5.54)
N/S and H obtained in the previous stage are used to calculate CQI and ESNR:
LTE Host DL.gvi -> LTE Read Channel Estimates.gvi -> LTE Calculate CQI.gvi -> LTE Calculate SubframeCQI.gvi -> LTE Calculate SNR for CQI
For CQI, as shown below, in Part I the average noise of an RB is calculated; in Part II, the CQI value at each CRS position is calculated; finally, the average is taken over the sub-frame (Fig. 5.55).
For ESNR, each bit of the RB allocation corresponds to 4 RBs. Each RB includes 8 CRS, so every 32 CRS form a group (Fig. 5.56).
Since there is a disparity between the estimated original channel power and the original noise power, there is likely to be a disparity between the obtained CQI and ESNR. Hence, calibration is needed. The calibration equation is shown below:
CQI/dB = 1.8 × CQI_raw/dB + 10.2,  −6 ≤ CQI_raw/dB < 6
5.4 Summary
This chapter describes the classification, methods and applications of software and
hardware co-simulation test and evaluation. It puts emphasis on the HIL link
simulation technology and its applications. With the HIL technology, the software
and hardware co-simulation platform can be developed on the basis of existing
commercial equipment. The technology evaluation and test at the algorithm stage
can give us earlier test and evaluation results without increasing the product
development cycle. Then this chapter further introduces the methods and applica-
tions of system-level software and hardware co-simulation evaluation and test. In
the cases of link-level software and hardware co-simulation test and evaluation, we
not only introduce the cases where the hardware platform and the software platform
are in the same computing environment, but also introduce the cases of cross-
network remote tests.
References
1. C. A. Valderrama, A. Changuel, et al. A unified model for co-simulation and co-synthesis of mixed hardware/software systems. European Conference on Design & Test, 1995: 180–184.
2. R. Ruelland, G. Gateau, T. A. Meynard, J. C. Hapiot. Design of FPGA-based emulator for series multicell converters using co-simulation tools. IEEE Transactions on Power Electronics, 2003, 18(1): 455–463.
3. J. Ou, V. K. Prasanna. MATLAB/Simulink based hardware/software co-simulation for designing using FPGA configured soft processors. Proceedings of the International Parallel and Distributed Processing Symposium, 2005: 148b.
4. A. Hoffman, T. Kogel, H. Meyr. A framework for fast hardware-software co-simulation. Proceedings of the Conference on Design, Automation and Test in Europe, 2001.
5. K. Wei. Software and hardware co-simulation platform. E-World, 2012: 128–131.
6. W. Zheng. Research and development on new generation of hardware in the loop simulation platform. Tsinghua University, 2009.
7. B. Heiming, H. Haupt. Hardware-in-the-Loop Testing of Solutions for Networked Electronics at Ford. SAE Paper, 2005.
two aspects of work: (1) studying evaluation and testing methods for the 5G mobile communication network and wireless transmission technologies; (2) establishing the corresponding 5G technology simulation, testing and evaluation platform, and completing the evaluation and testing of the 5G mobile communication network and wireless transmission technologies. In 2014, the National Science and Technology Major Project in China, the wireless innovative technology test platform, created a public trial, verification and testing platform for wireless innovation technologies of 3GPP R12 and its follow-up standards. The project proposed to further establish a multi-scenario real channel base with the characteristics of typical Chinese environments to realize field trials in the lab, and to develop a real-channel-scenario, system-level, multi-cell multi-user hardware and software co-simulation system to support the research and innovation of new wireless technologies. It also proposed to develop and construct an open, shared and flexibly configured system-level software and hardware co-simulation, experiment and test platform, oriented to new technology R&D, standardization and application, and supporting multiple cells and multiple users. In 2015, the National Science and Technology Major Project in China further constructed the international standard evaluation environment of IMT-2020 for 5G candidate technologies and international standardization. It includes a simulation evaluation platform as well as a test and verification platform for IMT-2020 candidate technologies. The two platforms can complete feasible and practical performance evaluation of potential network architectures, key technologies, algorithms and protocols of IMT-2020, and support IMT-2020 candidate technology research, international standard-setting and follow-up product R&D. Based on the above platforms and facing the requirements of IMT-2020 technology, researchers can complete the evaluation and verification of massive MIMO arrays, UDN, HFB communications, M2M enhancement, D2D, C-RAN, SDN, NFV, Content Delivery Network (CDN) and other IMT-2020 candidate technologies.
Overall, current test platforms show the technology trends of generalization, platform orientation and software orientation. This chapter will introduce typical hardware test platforms used for 5G evaluation, including the industry's first parallel channel sounder platform for channel measurement and modeling, a MIMO OTA platform based on a specified channel model, and the first hardware and software platform supporting an open-source software community for 5G terminals and base stations.
Many universities and research institutes have conducted research on channel measurement and modeling for some scenarios, but have not covered the new scenarios defined by IMT-2020. Moreover, the majority of previous channel model studies focused on the derivation of 2D channel models rather than 3D spatial features. It is therefore very necessary to systematically carry out channel measurement and modeling for the new scenarios. At the same time, the channel measurement equipment used in previous research mainly adopted serial channel test equipment. For 5G-specific scenarios, this equipment cannot satisfy the 5G MIMO test and measurement requirements.
In general, a classic serial channel sounder consists of a pair of transceivers, multiple antenna arrays and high-speed RF switches configured at both the transmission and reception sides, as shown in Fig. 6.2. In a measurement, the transmitter sends a predefined waveform, such as a time-domain sequence. The transmitter and receiver periodically and synchronously switch antenna channels to capture air interface signals. The receiver correlates the received signal with the local sequence and obtains the CIR of each transmit-receive antenna pair. The channel parameters can then be extracted from the CIR data using the parameter extraction processing algorithm. The processes are shown in Fig. 6.3.
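The core correlation step can be illustrated with the short Python sketch below, which estimates the CIR of one antenna pair by cross-correlating the captured block with the locally stored PN sequence; the sequence length, channel taps and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_cir(rx, pn):
    """Cross-correlate the received block with the local PN sequence to get the CIR."""
    n = len(pn)
    # Circular cross-correlation via FFT: corr[d] ~ channel gain at delay d
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(pn)))
    return corr / n

# Illustrative setup: length-4096 bipolar PN sequence through a 3-tap channel
pn = rng.choice([-1.0, 1.0], size=4096)
cir_true = np.zeros(4096, dtype=complex)
cir_true[[0, 7, 19]] = [1.0, 0.5 * np.exp(1j * 0.3), 0.2 * np.exp(-1j * 1.1)]

rx = np.fft.ifft(np.fft.fft(pn) * np.fft.fft(cir_true))        # circular convolution
rx += 0.01 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))

cir_est = estimate_cir(rx, pn)
print("strongest taps at delays:", np.argsort(np.abs(cir_est))[-3:][::-1])
```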
Fig. 6.2 Schematic diagram of serial channel sounder (single-channel antenna array)
[Fig. 6.3: measurement flow of the serial channel sounder: predefined transmission sequence, transmission of the predefined waveform, capture of the received signal, channel parameter extraction; the accompanying timing diagram shows the Tx and Rx antennas being switched in turn over time]
As shown in Fig. 6.4, the channel sounder of this architecture has the advantages that the channels are kept orthogonal in time, the calibration is convenient and the post-processing is relatively simple. This scheme, however, requires the period of sweeping through all antenna-pair channels to be shorter than the channel coherence time. The scheme can therefore only measure channel characteristics in static or quasi-static scenarios where the channel is unchanged or slowly changing, which means it is not suitable for channel tests in high-mobility scenarios. In addition, the traditional serial channel sounder is limited by early equipment development conditions: it preserves CIR files rather than raw channel test data, limiting the accuracy of channel Doppler measurement.
In other words, a channel sounder with this architecture cannot measure time-varying channels when there are a large number of transmitting and receiving
antennas. In the millimeter wave band, the coherence time of the channel decreases as the carrier frequency increases, and when massive MIMO scenarios are considered, the traditional serial channel sounder fails to meet the test requirements.
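To see why the serial architecture breaks down, the back-of-the-envelope sketch below compares the time needed to sweep all antenna pairs against a common coherence-time approximation (T_c ≈ 0.423 / f_D, as used for Clarke's model); the per-snapshot duration, array sizes, carrier frequencies and speeds are illustrative assumptions.

```python
def coherence_time(speed_mps, carrier_hz):
    """Approximate channel coherence time T_c ~= 0.423 / f_D (Clarke's model)."""
    doppler = speed_mps * carrier_hz / 3e8
    return 0.423 / doppler

def serial_sweep_time(n_tx, n_rx, snapshot_s):
    """Time for a switched (serial) sounder to visit every Tx/Rx antenna pair once."""
    return n_tx * n_rx * snapshot_s

# Illustrative numbers: 8x8 array, 50 us per antenna-pair snapshot
t_sweep = serial_sweep_time(n_tx=8, n_rx=8, snapshot_s=50e-6)

for carrier, speed_kmh in [(3.5e9, 30), (28e9, 30), (28e9, 120)]:
    t_c = coherence_time(speed_kmh / 3.6, carrier)
    verdict = "OK" if t_sweep < t_c else "too slow"
    print("fc=%.1f GHz, v=%3d km/h: T_c=%.2f ms, sweep=%.2f ms -> %s"
          % (carrier / 1e9, speed_kmh, t_c * 1e3, t_sweep * 1e3, verdict))
```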
A parallel channel sounder is therefore urgently needed for 5G channel testing and measurement. However, the development of a parallel channel sounder has to face the following difficulties and challenges.
(1) Synchronization across multiple channels
For the parallel channel sounder, picosecond-level synchronization accuracy among the MIMO channels on both the Tx and Rx sides must be achieved in order to guarantee high spatial and temporal resolution for channel parameter estimation. This goal is very challenging because of the differences between the clock Phase-Locked Loop (PLL) circuits across the multiple RF channels.
(2) Real-time storage of massive raw measurement data
For the parallel channel sounder, raw data from multiple channels, each with a high sampling rate, must be stored simultaneously. Data streams of dozens of gigabits per second require a data transmission interface/bus with very high throughput and a very efficient storage mechanism design (a rough throughput estimate is sketched after this list).
(3) Parallel channel calibration
Compared with a TDM-based channel sounder, which has only one Tx and Rx pair, the multiple RF channels in a parallel channel sounder can be considered a multi-channel time-varying complex system. The non-ideal and differing responses among the Tx/Rx pairs are rooted in the multiple clock Phase-Locked Loop (PLL) circuits and RF devices. A new, sophisticated parallel calibration algorithm must be designed to carefully compensate these non-ideal and differing channel responses so as to guarantee the accuracy of channel estimation.
(4) High-speed continuous storage of raw data
The serial channel sounder first correlates the received signal with the local code to obtain the CIR, and then stores the CIR for each channel. However, this correlation yields the average impulse response over a long period, and accurate channel Doppler characteristics cannot be obtained from this impulse response when the channel varies quickly. Therefore, the most effective way is to collect and store the original data. However, in the case of simultaneous storage of multiple channels with a high sampling rate per channel, the archiving system must be able to achieve high-speed transmission throughput with an efficient storage mechanism.
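As announced above, the following small calculation sketches the raw-data throughput such an archiving system would have to sustain; the channel count, sampling rate and sample width are illustrative assumptions (8 parallel channels, 100 MS/s complex samples, 16 bits per I and Q component).

```python
def raw_data_rate_gbps(n_channels=8, sample_rate_sps=100e6, bits_per_iq_sample=32):
    """Aggregate raw I/Q streaming rate for a parallel sounder, in Gbit/s."""
    return n_channels * sample_rate_sps * bits_per_iq_sample / 1e9

rate = raw_data_rate_gbps()
print("aggregate stream: %.1f Gbit/s (%.1f GB/s)" % (rate, rate / 8))
# 8 x 100e6 x 32 bit = 25.6 Gbit/s, i.e. about 3.2 GB/s of sustained disk throughput
```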
6.2 Test Platform of MIMO Parallel Channel 307
Faced with these channel modeling requirements and technical challenges, Shanghai Research Center for Wireless Communications has developed a sophisticated parallel MIMO channel sounder. Compared with a serial channel sounder, the biggest difference is the use of multiple parallel RF channels based on code division: multiple parallel transmitters simultaneously transmit time-domain sequences, and multiple parallel receivers receive the air interface signal at the same time. Such an architecture fundamentally solves the problem of measuring fast time-varying channels in multiple-antenna scenarios. Figure 6.5 is a picture of the channel sounder.
The system architecture of the parallel channel sounder is shown in Fig. 6.6, mainly including the following subsystems:
RF/baseband subsystem
Clock/synchronization trigger subsystem
Calibration subsystem
Antenna array
High-speed stream disk subsystem
Parameter extraction
1. RF/baseband subsystem
The parallel RF/baseband subsystem is the core of the entire parallel channel sounder; its performance and stability directly determine the accuracy of the measurement. The measurement system is realized by a software defined radio platform based on the PXI bus.
At the transmitter, the parallel channel sounder generates a set of PN sequences with good cross-correlation characteristics as the transmission sequence of each channel. Then, through up-sampling and a shaping filter, the signal spectrum shape is improved. After digital-to-analog conversion, the signal is converted to a high-frequency analog signal by the up-conversion modules and then radiated through an antenna. At the receiver, the signal is down-converted and, after A/D sampling, sent into the FPGA, where the data can be pre-processed accordingly. Then, through a Peer to Peer (P2P) FIFO, it is transferred to the disk array for high-speed storage (Fig. 6.7).
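A minimal Python sketch of this transmit baseband chain is given below: PN sequence generation, up-sampling and root-raised-cosine (RRC) pulse shaping. The LFSR polynomial, oversampling factor and RRC roll-off are illustrative assumptions rather than the sounder's actual parameters.

```python
import numpy as np

def pn_sequence(n_bits, taps=(16, 14, 13, 11), seed=0xACE1):
    """Bipolar PN sequence from a 16-bit LFSR (taps are an illustrative choice)."""
    state, out = seed, []
    for _ in range(n_bits):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & 0xFFFF
        out.append(1.0 if (state & 1) else -1.0)
    return np.array(out)

def rrc_filter(beta=0.25, span=8, sps=4):
    """Root-raised-cosine impulse response (beta: roll-off, span in symbols)."""
    t = np.arange(-span * sps, span * sps + 1) / sps
    h = np.zeros_like(t)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1 - beta + 4 * beta / np.pi
        elif np.isclose(abs(ti), 1 / (4 * beta)):
            h[i] = (beta / np.sqrt(2)) * ((1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                                          + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            h[i] = (np.sin(np.pi * ti * (1 - beta)) +
                    4 * beta * ti * np.cos(np.pi * ti * (1 + beta))) / \
                   (np.pi * ti * (1 - (4 * beta * ti) ** 2))
    return h / np.sqrt(np.sum(h ** 2))

def tx_baseband(n_chips=4096, sps=4):
    """PN generation -> up-sampling -> RRC shaping, as in the transmit chain."""
    chips = pn_sequence(n_chips)
    upsampled = np.zeros(n_chips * sps)
    upsampled[::sps] = chips                    # zero-stuffing up-sampling
    return np.convolve(upsampled, rrc_filter(sps=sps), mode="same")

waveform = tx_baseband()
print("shaped waveform length:", len(waveform))
```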
(1) Test signal screening
A major difficulty of the MIMO parallel measurement scheme is that multiple concurrent sounding signals interfere with each other. Even with orthogonal sequences, the orthogonality cannot be guaranteed for arbitrarily delayed sequences, which still produces interference. To this end, the requirements of the parallel channels must be fully considered when screening a set of eight signal sequences. The formulas are shown in Table 6.1. These signal
[Block diagram of the RF/baseband subsystem: the RF subsystem contains the transmit RF path (D/A conversion, up-conversion, amplifier) and the receive RF path (low-noise amplification, down-conversion, A/D conversion) with calibration paths; the baseband subsystem contains transmitting-end baseband processing (PN sequence generation, RRC shaping filter) and receiving-end baseband processing (digital filtering, digital stream disk), driven by trigger signals and a common reference clock]
[Correlation plots of the screened signal sequences: the main correlation peaks reach 4096 (the sequence length), while the residual correlation sidelobes remain below about 280]
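The screening criterion itself can be sketched as follows: candidate PN sequences are kept only if the worst-case cyclic cross-correlation with every already selected sequence (over all delays) stays below a threshold. The candidate generator and the threshold below are illustrative assumptions, not the actual screening formulas of Table 6.1.

```python
import numpy as np

rng = np.random.default_rng(7)

def max_cyclic_crosscorr(a, b):
    """Largest magnitude of the cyclic cross-correlation of a and b over all lags."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))
    return np.max(np.abs(corr))

def screen_sequences(n_seq=8, length=4096, threshold=300.0, max_trials=10000):
    """Greedily pick sequences whose mutual cross-correlation peaks stay low."""
    selected = []
    for _ in range(max_trials):
        cand = rng.choice([-1.0, 1.0], size=length)   # random bipolar candidate
        if all(max_cyclic_crosscorr(cand, s) < threshold for s in selected):
            selected.append(cand)
            if len(selected) == n_seq:
                break
    return selected

seqs = screen_sequences()
print("selected %d sequences of length %d" % (len(seqs), len(seqs[0])))
```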
4. Antenna array
As an indispensable part of channel measurement, MIMO antennas play an
important role in MIMO channel measurement. Shanghai Research Center for
Wireless Communications has developed an 8-antenna omni-directional antenna
array as the measurement antenna.
The basic requirements for measuring antenna are:
(1) Transmitting and receiving antennas support 8-transmission and 8-reception
respectively.
(2) Standing Wave Ratio (SWR) is not greater than 1.5.
(3) Antenna gain is greater than 6 dBi.
(4) Antenna bandwidth is greater than 200 MHz.
To meet the technical requirements, the antenna design uses a quarter-wavelength monopole. For the center frequency f_c = 3.5 GHz, the wavelength is λ = c/f_c = 3 × 10^8 / (3.5 × 10^9) ≈ 85.71 mm. The antenna monopole length is 20.2 mm and its diameter is 0.4 mm, with one monopole in the center and seven monopoles evenly distributed at the periphery. The distance between the central monopole and any edge monopole is 40.3 mm. The antenna design is shown in Fig. 6.10. The antenna array using this scheme can effectively estimate 3D channel characteristics.
5. High-speed stream disk subsystem
The stream disk scheme of the measurement system is based on a combination of zero-copy technology and asynchronous storage technology, and uses Technical Data Management Streaming (TDMS) to achieve an ultra-high-speed, low-latency data storage scheme, greatly reducing CPU consumption.
In this measurement system, the PXI bus bandwidth provided by the chassis backplane is 24 GB/s. For this high-speed bandwidth, data transmission is achieved through the establishment of a Direct Memory Access FIFO (DMA-FIFO). Another problem that affects the transmission delay is the additional data copies and state transitions caused by the operating system kernel. Using
Fig. 6.11 Traditional I/O operation of a file
the zero-copy technique (Fig. 6.12) can effectively reduce the additional copies and state switches caused by the operating system kernel.
Figure 6.11 shows the traditional file I/O operation.
It can be seen that, to complete the process from reading data from the hardware to writing it to disk, four copy operations are needed. First, in the read system call, the hardware FPGA transmits data to the kernel buffer through DMA. Then, the CPU performs a copy operation to copy the data into the user buffer. In the write system call, the CPU copies the data from the user buffer to the socket buffer, and finally the data is transferred to disk through the DMA-FIFO.
Figure 6.12 shows the I/O operation based on the zero-copy technology. After the zero-copy technology is used, only the DMA transfer from the hardware FPGA to the kernel buffer and the DMA transfer from the socket buffer to disk are retained, eliminating the two CPU copies between kernel space and user space, reducing delay and freeing the CPU. At this point, the CPU-triggered disk write is an asynchronous operation; that is, the CPU can execute other processes in parallel without waiting for the completion of the DMA transfer.
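On a general-purpose operating system, the same zero-copy idea is exposed through calls such as sendfile; the Python sketch below contrasts a conventional read/write copy loop with os.sendfile, which moves data between file descriptors inside the kernel. This is only a generic illustration of the zero-copy principle, not the LabVIEW/DMA-FIFO mechanism used by the measurement system.

```python
import os

def copy_with_user_buffer(src_path, dst_path, chunk=1 << 20):
    """Conventional copy: data is read into a user-space buffer, then written out."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            buf = src.read(chunk)        # kernel -> user copy
            if not buf:
                break
            dst.write(buf)               # user -> kernel copy

def copy_zero_copy(src_path, dst_path):
    """Zero-copy style transfer: os.sendfile keeps the data in kernel space.
    (File-to-file sendfile requires Linux.)"""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        size = os.fstat(src.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(dst.fileno(), src.fileno(), offset, size - offset)
            if sent == 0:
                break
            offset += sent

if __name__ == "__main__":
    with open("raw_iq.bin", "wb") as f:          # small dummy capture file
        f.write(os.urandom(4 << 20))
    copy_with_user_buffer("raw_iq.bin", "copy_buffered.bin")
    copy_zero_copy("raw_iq.bin", "copy_sendfile.bin")
    print("both copies written")
```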
As shown in Fig. 6.13, the TDMS file storage format is a binary storage format, which has the advantage of small space usage. In addition, it can classify stored data, organizing it into three levels: the top-level file, channel groups, and channels. A file may contain several channel groups, and a channel group can contain many channels.
Fig. 6.12 I/O operation based on zero copy technology
[Fig. 6.13: TDMS hierarchy: a file carries properties (e.g. UUT, Procedure, Test Fixture) and contains channel groups; each channel group contains channels, and each channel carries its own properties such as name, comment, unit and sensor information]
The actual stream disk program stores GPS location information, IQ data and other relevant information, and these data need to be precisely synchronized. Using TDMS, these data can easily be combined and packaged in one file as different data sets or channels, which greatly reduces the difficulty of synchronizing different data in the post-processing.
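As a rough illustration of this packaging, the sketch below writes GPS and IQ data into one TDMS file using the third-party npTDMS Python package; the group and channel names are invented, and the snippet assumes npTDMS's TdmsWriter/ChannelObject interface is available (pip install npTDMS).

```python
import numpy as np
from nptdms import ChannelObject, TdmsWriter   # third-party package: npTDMS

def write_snapshot(path, iq, gps_fix):
    """Store one measurement snapshot: raw I/Q plus its GPS fix, in a single file."""
    channels = [
        ChannelObject("raw", "iq_real", iq.real.astype(np.float32)),
        ChannelObject("raw", "iq_imag", iq.imag.astype(np.float32)),
        ChannelObject("meta", "gps", np.array(gps_fix, dtype=np.float64)),
    ]
    with TdmsWriter(path) as writer:
        writer.write_segment(channels)

if __name__ == "__main__":
    iq = (np.random.randn(4096) + 1j * np.random.randn(4096)).astype(np.complex64)
    write_snapshot("snapshot_0001.tdms", iq, gps_fix=(31.2304, 121.4737, 12.0))
    print("snapshot written to snapshot_0001.tdms")
```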
6. Parameter extraction
The SAGE algorithm is an iterative maximum likelihood estimation method. It realizes the optimization of multidimensional parameters by allocating the observed data to multiple subspaces and then estimating the multipath parameters in each subspace.
In order to obtain the maximum likelihood estimate, the SAGE algorithm iteratively performs interference cancellation on the observed values in each subspace, ensuring that the overall likelihood of the parameter estimate rises monotonically with increasing iterations. The optimal estimate of the output parameter set is reached when the convergence requirements are satisfied.
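The subspace-wise interference cancellation idea can be conveyed with a deliberately simplified sketch: the toy loop below estimates multipath delays and gains one path at a time, cancelling each estimated path from the observation before re-estimating the others. It is a CLEAN-style successive cancellation toy, not the P-SAGE algorithm itself, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_paths(y, pn, n_paths, n_iter=3):
    """Iteratively estimate (delay, gain) of n_paths by per-path cancellation."""
    n = len(pn)
    pn_fft = np.fft.fft(pn)
    paths = [(0, 0.0 + 0.0j)] * n_paths
    for _ in range(n_iter):
        for p in range(n_paths):
            # Remove the contribution of all other currently estimated paths
            residual = y.copy()
            for q, (d, g) in enumerate(paths):
                if q != p:
                    residual -= g * np.roll(pn, d)
            # Re-estimate path p from its "own" residual subspace
            corr = np.fft.ifft(np.fft.fft(residual) * np.conj(pn_fft)) / n
            d = int(np.argmax(np.abs(corr)))
            paths[p] = (d, corr[d])
    return paths

# Synthetic observation: three delayed/scaled copies of a PN sequence plus noise
pn = rng.choice([-1.0, 1.0], size=2048)
true = [(5, 1.0), (40, 0.6 * np.exp(1j * 0.8)), (90, 0.3 * np.exp(-1j * 0.4))]
y = sum(g * np.roll(pn, d) for d, g in true).astype(complex)
y += 0.05 * (rng.standard_normal(2048) + 1j * rng.standard_normal(2048))

for d, g in estimate_paths(y, pn, n_paths=3):
    print("delay %3d, |gain| %.2f" % (d, abs(g)))
```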
In developing the feature extraction software, the measurement platform for parallel transmitting and receiving channels optimizes the multi-dimensional parameter estimation SAGE algorithm, and the parallel sounding SAGE (P-SAGE) algorithm is proposed. This algorithm has the following functions and features:
Baseband received-signal data analysis based on a parallel M × N MIMO system;
Good orthogonality based on the baseband spread spectrum code;
The non-uniformity among RF channels is fully considered;
The calibration is done separately at Tx and Rx;
The complexity of algorithm is well reduced to achieve fast estimation;
The algorithm is optimized for the presence of phase noise and can be applied
to scenarios where phase noise exists.
Figure 6.14 shows a block diagram of the parameter estimation algorithm
with multi-dimensional channel characteristic based on the PS-SAGE algorithm.
The innovations of PS-SAGE algorithm are:
(1) Suppression of multi-channel phase noise. The parameter extraction and data post-processing algorithm uses multi-channel phase noise sample data to construct a maximum entropy statistical model. The maximum likelihood
The test results of the parallel channel sounder in an anechoic chamber are shown in Fig. 6.15.
For the static scenario in the chamber, we choose a PN sequence of length 4096 and an I/Q chip rate of 100 M/s. Each snapshot collects 100 cycles, so the total length of the measurement data in a snapshot is 10 ns × 4096 × 100 × 2 = 8.192 ms. We first use the 8-input 8-output connector to make the calibration and measurements, and then test via the air interface. The test results are shown below.
Fig. 6.17 PS-SAGE algorithm estimates the spatial parameters and reconstructs the signal spatial
spectrum
Figure 6.16 shows the pair-wise multipath delay spectra of the 64 transmit-receive channel pairs. From left to right, they are respectively the air interface collection, the calibration data, and the results after compensation using the calibration data. It can be seen from the figure that, after the 64 calibration files are used to calibrate the original signals, the response difference of each transmitting and receiving channel can be well compensated, so that the consistent delays shown in the right-hand figure are obtained.
Next are the test results of channel parameter estimation. In Fig. 6.17, the elements from left to right are, respectively, the air interface data, the reconstructed signal based on the parameter estimates, and the power spectrum of the Azimuth of Departure (AOD)/EOD of the signal after cancellation. The horizontal and vertical coordinates are AOD and EOD respectively, and the color represents the intensity of the spectral component. It can be seen from the figure that the reconstructed signal has a strong similarity with the original signal spectrum; after cancellation of the reconstructed signal, the residual energy is reduced by 37 dB compared to the energy of the original signal. The chamber verification shows that the spatial parameters of the wireless channels obtained from the post-processing estimation algorithm are very close to the actual ones.
[Fig. 6.18 plot: power in dB versus delay samples, comparing the original, estimated and residual signals]
Fig. 6.18 PS-SAGE algorithm estimates the time domain parameters and reconstructs the signal
time-domain spectrum
Fig. 6.19 Main scenarios of the channel tests conducted with the parallel channel sounder
Based on the parallel MIMO channel sounder and the definition of 5G application scenarios in the IMT-2020 (5G) Promotion Group white paper, Shanghai Research Center for Wireless Communications began in 2014 to construct a shared channel model database based on practically measured data. The designed channel model library will basically cover the following two main application scenarios.
(1) 5G mobile Internet application scenarios
The future 5G mobile communication system will meet diverse service requirements in different areas, such as residences, workplaces, leisure venues, and transportation venues. It could provide ultra-high definition video, virtual reality, augmented reality, cloud desktop, online gaming and other excellent service experiences for customers in different scenarios. The scenarios include dense residential areas, offices, stadiums, open-air gatherings, subways, highways, high-speed rail, wide-area coverage, and other application scenarios with ultra-high mobile data traffic density, ultra-high mobile connection density and ultra-high mobility.
All the above application scenarios can be summarized into two types: the continuous wide-area coverage scenario and the hot-spot high-capacity scenario.
(2) 5G IoT application scenarios
The future 5G mobile communications system will penetrate into the IoT and various vertical industry fields, deeply integrating with industrial facilities, medical equipment, vehicles, etc., so as to effectively meet the service
follow up the applications and extensions of SISO testing technology in MIMO OTA tests. Many standardization organizations and agencies, including 3GPP, COST and CTIA, have named the test method for evaluating the performance of multiple antennas MIMO OTA.
MIMO OTA test schemes are essentially different ways of emulating the multipath spatial propagation environment, producing a wireless communication environment close to reality, in order to deal with the key challenge of massive-antenna MIMO OTA testing: how to generate an RF channel in the anechoic chamber that is closest to the real-world spatial, angular and polarization behavior. Therefore, the main requirements can be divided into three types.
1. Emulator of the real wireless propagation environment.
This requirement is currently addressed by a channel simulator, which sim-
ulates the channel characteristics of test scenarios through a real-time channel
model. These features are also reflected in time domain, frequency domain and
spatial domain. Only the exact channel model can approximate the test
scenarios.
2. The close-to-reality wireless propagation playback environment.
The channel environment is played back by using different configurations of
chambers and antenna arrangements to combine with the channel emulator. In
particular, the spatial characteristics of signal should be considered.
3. Calibration of the OTA test system.
In MIMO systems, spatial correlation is a very important parameter, which combines the characteristics of the antennas and of the transmission channel. With a known channel model but unknown antenna characteristics, the correlation cannot be obtained; likewise, with known antenna characteristics but an unknown channel model, the correlation cannot be obtained. Therefore, the characteristics of both the antennas and the transmission channel must be considered when testing multiple-antenna terminals.
The test schemes based on anechoic chambers connect an RF channel simulator to a probe array inside the anechoic chamber. The probe array surrounds the DUT to reproducibly simulate a wireless environment that generates complex multipath fading at the location of the DUT.
(1) The multi-probe OTA test block diagram based on an anechoic chamber is shown in Fig. 6.20; it mainly includes the anechoic chamber, the system simulator, the MIMO channel simulator and the test probes. The multi-probe OTA test based on an anechoic chamber is the most intuitive test method. The test system simulates the downlink signal of a base station with the channel simulator, launches it through many test probes in the anechoic chamber, reproduces the channel model with specific AOAs at the chamber center, and then places the terminal under test at the chamber center and tests its MIMO antenna performance in the simulated channel scenarios.
Tests for multiple antenna wireless devices are typically conducted in a
conduction mode, and proper fading is typically simulated using a channel
simulator. The channel model used in the current test includes the signal AOD
and AOA information, as well as antenna patterns on both sides of the trans-
mitter and receiver of channels. These spatial channel models can be better
adapted to environmental simulation requirements of OTA test by modifying
the channel model in MIMO channel simulator.
[Fig. 6.20: anechoic chamber lined with wave-absorbing materials, with probe antennas surrounding the tested object, fed by a base station simulator and a channel simulator]
In such a scheme, the number and positions of the probes can be freely combined to create different test scenarios for the DUT. For example, probes can be placed in the same plane to simulate a 2D scenario, or in different planes to simulate a 3D scenario. The system configuration can be adjusted to obtain a signal similar to that received by the device in the real world, thereby making it easier to evaluate the position, orientation and impact of the MIMO antennas on the system. The advantages of the multi-probe method are that the concept is simple, the configuration is flexible, and the testing accuracy is good. However, the disadvantage is that the multi-probe setup may be costly. Another challenge of the multi-probe approach is that it can only simulate parts of the 3D channel, making it difficult to simulate a complete 3D channel.
(2) The ring-shaped probe test method based on an anechoic chamber distributes the probe antennas symmetrically and equidistantly around the DUT, and places the measured object in the center of the chamber, as shown in Fig. 6.21. The system consists of the anechoic chamber, a multidimensional fading emulator, a communication tester/BS emulator, the OTA chamber antennas, etc.
Similar to the multi-probe approach, in the ring probe test method each probe antenna transmits a signal with specific time-domain characteristics after processing by the channel simulator. Unlike the multi-probe scheme, there is no fixed correspondence between the signal AOD and the position of the probe antenna. Based on this, it is possible to simulate any 2D spatial channel model without readjusting the positions of the probe antennas.
(3) The test method based on the Spatial Fading Emulator (SFE) in an anechoic chamber was initially proposed by Panasonic in Japan. The main feature of the SFE is that the spatial fading characteristics are created by the antenna probes and associated RF devices surrounding the DUT. The amplitude of the signal emitted by each probe is directly determined by sampling the target PAS, and the Doppler frequency shift depends on the angle between the probe position and the direction of the virtual motion of the DUT. Since there is no pre-fading, the
Fig. 6.21 Schematic diagram of a ring probe test method based on anechoic chamber
number of probes determines how well the SFE can reconstruct the synthesized channel model. A theoretical study by NTT DoCoMo on the number of full-ring antenna probes has shown that, for a single-cluster channel, the chamber-based SFE test method generally requires 10 probes to support a test zone of 1.5 wavelengths. For multi-cluster channels, at least 11-15 antenna probes are needed to meet the requirements of a 1.5-wavelength test zone.
The SFE test method based on the anechoic chamber can essentially be regarded as a variation of the ring probe test method: it uses programmable attenuators and RF phase shifters to adjust the amplitude and phase of each antenna, replacing the channel simulator of the ring probe method. The block diagram of the system is shown in Fig. 6.22. The RF signal from a base station simulator is fed into a power divider, which provides the same RF signal at each output; the number of outputs of the power divider equals the number of probe antennas. Each output is connected to an RF phase shifter, which adjusts the phase offset according to a control signal received from a digital-to-analog converter. Through an attenuator, the signal passing through the RF phase shifter is output to a horizontally or vertically polarized probe antenna. The DUT measures the signals from each antenna probe and outputs the data to a computer control terminal for further
[Fig. 6.22: SFE system block diagram: a transmitter feeds a power divider whose outputs pass through phase shifters (controlled by a computer via a D/A converter) to the ring of probe antennas surrounding the receiver (DUT)]
processing. At the same time, the computer control side also provides the configuration of and interaction with the simulation parameters.
The SFE test method based on the anechoic chamber can generate multipath fading with a particular distribution, such as a Rayleigh distribution, by controlling and adjusting the amplitude and phase of the RF signals in real time. Because the scheme uses attenuators to control the amplitude and RF phase shifters to adjust the phase, it is less expensive than a commercially available channel simulator, but its flexibility is also greatly limited. In addition, the method cannot simulate the characteristics of the transmitter.
(4) The dual-channel measurement method based on an anechoic chamber is a direct
and effective method for testing the OTA performance of MIMO equipment. Its
principle is shown in Fig. 6.23. Two polarized test antennas with rotatable
incident angles are placed at the same distance from the UE, each transmitting a
different MIMO downlink signal. The overall characteristics of the UE antenna are
obtained by combining measurements over various azimuth angles and polarizations.
As shown in the figure, the dual-channel chamber contains four angle positioners.
The two test antennas A1 and A2 are placed with a 10° or 90° angular separation
(simulating a rural environment), together with a communications antenna ANTUL
(simulating an urban environment). An optional angle positioner on the turntable
controls the tilt angle of the antenna under test. These angle controllers can be
used to realize any combination of angles for MIMO testing. External devices
include a base station simulator and a switching matrix. The dual-channel approach
can be viewed as a special case of the multi-probe method that uses only two probe
antennas and no channel simulator.
The advantage of the dual-channel test method is that it can easily be built on
top of an existing SISO test system: only a second angular positioner control
system and a second test antenna need to be added, which greatly reduces the cost.
At the same time, it can also be used to verify the pattern, loading and impedance
of smart antennas that adapt to a changing environment.
The basic principle of the reverberation chamber OTA test scheme is to exploit the
rich reflections inside the reverberation chamber and the specific channel
environment created by the mode stirrers for DUT testing. The purpose of the
reverberation chamber is to produce a statistically uniform power distribution
around the DUT, while an antenna and a channel simulator can be used to generate
the desired delay characteristics.
Depending on whether the reverberation chamber is connected to a channel
simulator, OTA test schemes based on reverberation chambers can be divided into
two types. The first uses a separate reverberation chamber or cascaded
reverberation chambers, as shown in Figs. 6.24 and 6.25. In the second, the
reverberation chamber is connected to a channel simulator; the channel simulator
then realizes time diversity by inputting fading signals at different time steps,
overcoming the limitations of the first type.
In general, the reverberation chamber can provide a subclass of multipath
environments in which frequency-selective fading and time diversity can be
simulated. Because the reverberation chamber can emulate only a limited set of
fading environments, it can provide only a limited performance evaluation of
terminals.
Limited by the statistical isotropy of the channel environment at the receiver in
the spatial domain, i.e., its uniformly distributed AOA, spatial diversity cannot
be simulated. In addition, the statistical isotropy of the reverberation chamber
also implies that the vertical and horizontal polarization components of the
channel model can only be equal, so for a DUT that uses polarization diversity the
reverberation chamber method cannot differentiate its performance effectively.
Figs. 6.24 and 6.25 Reverberation chamber OTA test setups ((e)NodeB simulator,
channel simulator, stirrers, wall antennas, turntable and tested object)
The reverberation chamber scheme does not depend on specific equipment; the same
result can be obtained with any suitable device. The size of the test zone is not
strictly constrained, which is very convenient for practical test operation.
The two-stage test method based on the anechoic chamber is shown in Fig. 6.26. The
scheme consists of two test stages. The first stage uses a conventional anechoic
chamber as the basic test system, together with an integrated tester, to measure
the complex active antenna array in an isotropic environment. The second stage
combines the antenna array information with the channel model in one of two ways:
a conducted test using a channel simulator, or a theoretical calculation of the
channel capacity performance from the measured antenna array information. At this
point, therefore, the two-stage test method can obtain only limited data, and
further research is needed to obtain accurate performance indicators.
In the two-stage OTA test method, the absolute accuracy of the DUT power
measurement has little effect on the accuracy of the final results, since a
correction is made in the second stage. The accuracy of the relative phase
measurement, in contrast, has a large influence on the result; but existing DUTs,
usually mobile terminals, have strong phase measurement capability, so this has
only a limited influence on the measurement result. Because the 3D antenna pattern
can be measured relatively easily, the two-stage OTA test method can simulate any
3D channel transmission condition.
Fig. 6.26 Two-stage OTA test method based on anechoic chamber (stage 1: antenna
pattern measurement of the MIMO tested object in a test box lined with
wave-absorbing materials; stage 2: wired conducted test yielding BER/FER and the
channel matrix and correlation H, R)
The quality factors
that the two-stage OTA test method can measure include TRP, TRS, throughput, block
error rate, MIMO channel capacity, antenna correlation coefficient, etc. Another
characteristic of the two-stage approach, however, is that it cannot capture the
effects of self-interference.
Compared with the conventional multi-probe test scheme, the two-stage OTA test
simplifies only the receive-diversity measurement rather than the channel-related
characteristics, making it a fast, accurate, economical and efficient MIMO OTA test
method. In addition, the two-stage method can reuse the measured antenna pattern to
simulate a 2D or 3D channel model without repeating the anechoic chamber test,
which improves its flexibility and takes full advantage of the test platform
resources built in the LTE phase to rapidly extend MIMO OTA testing, making it a
fast and economical test solution.
In summary, the existing schemes focus on two requirements: building and playing
back wireless propagation environments that are close to reality, and calibrating
the OTA test system. Since the realistic propagation environments are all obtained
from ready-made commercial channel simulators, the pertinence and applicability of
the channel environment are a common weakness.
Compared with traditional single-antenna OTA testing, MIMO OTA testing adds
broadband multi-antenna measurements. The tests must evaluate multi-dimensional RF
channel parameters such as fading, delay, Doppler, AOA and polarization.
The key challenge in developing a MIMO OTA test platform is how to generate,
inside the anechoic chamber, an RF channel model that comes closest to the
real-world spatial, angular and polarization behavior. Therefore, further study is
needed on tests that combine the channel simulator and the chamber, and antenna
characteristics must be taken into account.
The same challenge applies to massive MIMO OTA testing, and the resulting
complexity requires a large amount of space and equipment investment for the R&D
of a MIMO OTA test platform; the cost is too high for the majority of terminal
equipment manufacturers. The key technologies and difficulties of OTA testing are
discussed below.
1. Spatial fading simulation technology
Since wireless channels play a key role in MIMO performance, the wireless channel
simulator is an important part of a MIMO OTA air interface test system. A test
signal generated by a transmitter or a base station simulator passes through the
wireless channel simulator, which emulates a wireless channel according to a
predefined channel model. The signal is then split in the simulator and distributed
to the probes in the chamber, each of which radiates independently into the
chamber. As a result, the radiated signals are superimposed in the central space of
the chamber and produce the desired wireless channel environment around the DUT.
Advanced SFE technology must be adopted to reproduce a realistic propagation
environment in the chamber. The most typical emulated parameters include path
loss, multipath fading, delay spread, Doppler spread and polarization, as well as
spatial parameters such as AOA and AS.
In order to obtain valuable results from a MIMO OTA test, the wireless channel
simulator must have excellent RF performance: its Error Vector Magnitude (EVM) and
internal noise level must be very low in order to minimize errors that affect the
measurement results.
Moreover, in order to obtain consistent results across multiple measurements, the
fading process must be repeatable. This is very important when benchmarking
different DUTs.
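As an illustration of the repeatability requirement above, the following C sketch
generates Rayleigh fading with a simple sum-of-sinusoids model and a fixed random
seed, so that exactly the same fading realization can be replayed for every DUT;
the Doppler and sampling values are assumptions and the model is intentionally
simplified.

/* Sketch: repeatable Rayleigh fading via a sum-of-sinusoids model with a
 * fixed seed, illustrating the repeatability requirement for benchmarking.   */
#include <complex.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N_PATHS 32

int main(void)
{
    double fd = 100.0;                       /* maximum Doppler (Hz), assumed */
    double fs = 1.0e4;                       /* channel sample rate (Hz)      */
    double theta[N_PATHS], phi[N_PATHS];

    srand(12345);                            /* fixed seed -> repeatable run  */
    for (int n = 0; n < N_PATHS; n++) {
        theta[n] = 2.0 * M_PI * rand() / RAND_MAX;   /* arrival angles        */
        phi[n]   = 2.0 * M_PI * rand() / RAND_MAX;   /* initial phases        */
    }
    for (int k = 0; k < 8; k++) {                    /* first few samples     */
        double t = k / fs;
        double complex h = 0.0;
        for (int n = 0; n < N_PATHS; n++)
            h += cexp(I * (2.0 * M_PI * fd * cos(theta[n]) * t + phi[n]));
        h /= sqrt((double)N_PATHS);
        printf("t = %.4f s   |h| = %.3f\n", t, cabs(h));
    }
    return 0;
}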
2. Stochastic channel model based on geometry
The channel model for MIMO OTA testing is a GSCM, in which the wireless
channel is defined by the following parameters:
• the location and array configuration of the transmitting antennas;
• the propagation characteristics (delay, Doppler, AOD, AOA, angle spread of the
transmitted signal, angle spread of the received signal and polarization
information);
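As a purely illustrative aid, the GSCM parameters listed above might be organized
in code roughly as follows; the type and field names are hypothetical and are not
taken from any specific channel model implementation. Further parameters lost to
the page break (for example, the receive-array configuration) would be added in
the same way.

/* Sketch: one possible organisation of GSCM parameters; names are illustrative. */
typedef struct {
    double delay_s;                 /* path delay (s)                         */
    double power_db;                /* path power (dB)                        */
    double aod_deg, aoa_deg;        /* departure / arrival angles (deg)       */
    double asd_deg, asa_deg;        /* angle spread at Tx / Rx (deg)          */
    double doppler_hz;              /* Doppler shift (Hz)                     */
    double xpr_db;                  /* cross-polarization ratio (dB)          */
} gscm_cluster_t;

typedef struct {
    double tx_pos[3];               /* transmit-array location (m)            */
    int    n_tx, n_rx;              /* antenna counts of the Tx / Rx arrays   */
    int    n_clusters;
    gscm_cluster_t *clusters;       /* per-cluster propagation characteristics*/
} gscm_channel_t;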
As shown in Fig. 6.27, the MIMO OTA platform developed by Shanghai Research
Center for Wireless Communications is, as a whole, a scheme based on the anechoic
chamber. The principle of the scheme has been described in Sect. 6.3.2.1, so only
the distinguishing features of the platform are highlighted here. The biggest
differences lie in the following two aspects.
I. An on-line channel model library is added to the test platform to support
channel models. The on-line library not only contains the standard channel
models of a traditional channel simulator, but also typical channel models of
China, which increases the applicability of the channel models in China.
II. In addition to using a traditional channel simulator driven by channel models,
the platform can work with the parallel channel sounder to replay in real time
the signals captured at appointed locations (areas). This improves the
reliability of evaluating whether a DUT is suited to the appointed channel
environments.
The channel model library has been described in Sect. 6.2.6 in detail; the
channel emulation/playback device is highlighted below.
As shown in Fig. 6.24, the channel emulation/playback device of the OTA test
platform developed by Shanghai Research Center for Wireless Communications takes
two forms: a commercial channel simulator and a self-developed channel simulator.
Because the DSPs of different manufacturers, or even of the same manufacturer, are
inconsistent in backward compatibility and support different real-time operating
systems, the industry currently lacks a unified platform and standard. In addition,
the BaseBand Unit (BBU) of mobile communication base stations developed on DSP
platforms is generally a non-open proprietary platform, which still has many
deficiencies with respect to smooth upgrading and network virtualization. With the
rapid development of related technologies, GPPs can gradually meet the requirements
of high-load operations such as digital signal processing and provide a new option
for realizing digital signal processing in software. These new GPP technologies
include multi-core architectures, Single Instruction Multiple Data (SIMD)
supporting fixed- and floating-point arithmetic, high-capacity on-chip caches and
low-latency off-chip memory. With them, a GPP can perform the digital signal
processing previously done by a DSP, especially the baseband processing functions
in base station devices. The use of GPP for baseband signal processing has the
following advantages:
(1) Simplified design process and shortened development cycle.
In the traditional scheme, the communication system design process is driven by
the architecture characteristics of the target chip: a simulation platform is used
to design the new algorithm, and the code is then optimized according to the
characteristics of the programming model. After repeated optimization and
correction to ensure the performance of the entire communication system, the code
is reconstructed as fixed-point code while maintaining system performance, and
finally ported to the target platform for programming, optimization and testing.
This R&D process is long and inefficient, and the invested labor cost keeps
rising. The migration between platforms and the fixed-point redesign of the code
put the smooth progress of a project at considerable risk. On a GPP platform, the
algorithm can be optimized directly, effectively improving programming efficiency
and greatly shortening the development cycle.
(2) Easy realization of multi-mode base stations and resource sharing.
A multi-mode base station based on a unified platform uses a universal platform
composed of modularized and standardized hardware units to realize part of the
communication functions of the wireless equipment. It has good scalability,
which can effectively extend the life cycle of base stations, save costs and
seamlessly converge different communication modes. Network upgrades and smooth
evolution are realized through software configuration without changing the
hardware. In addition, a multi-mode base station allows a single set of base
station equipment to provide multi-mode network coverage, saving space,
improving power efficiency and reducing power consumption. In short, fully
software-based baseband processing in the base station can support multiple
standards on a single system, which is conducive to resource multiplexing and
improves resource utilization.
For 5G testing and GPP-based universal platform R&D, Shanghai Research Center for
Wireless Communications has built an open LTE network platform based on a universal
server, which supports commercial terminals accessing the Internet. The platform
uses a software defined radio design: the processor is an Intel GPP, the operating
system is open-source Ubuntu Linux, and the development language is C. The entire
platform (including the baseband) is implemented completely in software. The
software architecture of the platform is shown in Fig. 6.28. The high-performance
GPP is connected via the PCIe bus to optical fibers and then to the Remote Radio
Unit (RRU) to realize the complete functionality of the LTE system, including the
base station, the core network and the terminal. The core network comprises
software realizations of the MME, SGW, PGW, HSS and other network elements, and the
base station comprises software implementations of the physical layer, the MAC
layer and the RRC layer. The core network may run together with a software-defined,
full-function soft base station on the same server, or may be connected to multiple
third-party base stations to achieve multi-user access.
The hardware architecture of the Open 5G platform currently uses a universal
commercial server. The key hardware involved in wireless signal processing,
transmission and reception includes the CPU, Synchronous Dynamic Random Access
Memory (SDRAM), an accelerator card and an interface control board. The remote end
also includes an Analog-to-Digital/Digital-to-Analog converter (AD/DA), RF
circuitry, and FPGA chips for front-end digital signal processing and interface
control.
Figure (hardware architecture of the Open 5G platform): CPUs with SDRAM, PCIe
slots, FPGA-based calculation accelerator card and interface control card, and
CPRI links over optical fiber to the RF front end (AD/DA, RF)
The multi-core CPU allocates the signals to be processed to the cores used for
baseband digital signal processing, while some of the heavier computations, such as
turbo decoding, are assigned by the task scheduler module to the calculation
acceleration unit, reducing the baseband processing load of the CPU.
The interface unit is an extremely critical unit whose role is to provide a
high-speed, low-latency data channel between the BBU and the RRU. One end of the
interface unit is connected to the universal server memory through a standard PCIe
interface and reads and writes digital baseband signals in the server memory using
DMA; the other end is connected to the remote RRU by optical fibers. This interface
uses the Common Public Radio Interface (CPRI) protocol and carries Path I and
Path Q data as well as system operation, maintenance and control signals.
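The following sketch illustrates, under stated assumptions, how interleaved Path I
/ Path Q samples might be packed into a fixed-size buffer before being handed to
the interface card; dma_submit() is a hypothetical placeholder rather than a real
driver API, and the chunk size is arbitrary.

/* Sketch: packing floating-point baseband samples into interleaved 16-bit
 * I/Q words, as they might be laid out in a buffer destined for a DMA
 * transfer to the interface card.  dma_submit() is hypothetical.             */
#include <stdint.h>
#include <stdio.h>

#define SAMPLES_PER_CHUNK 1024

typedef struct {
    int16_t iq[2 * SAMPLES_PER_CHUNK];         /* interleaved I0 Q0 I1 Q1 ... */
} iq_chunk_t;

static void pack_iq(const float *i, const float *q, iq_chunk_t *out)
{
    for (int n = 0; n < SAMPLES_PER_CHUNK; n++) {
        out->iq[2 * n]     = (int16_t)(i[n] * 32767.0f);   /* Path I          */
        out->iq[2 * n + 1] = (int16_t)(q[n] * 32767.0f);   /* Path Q          */
    }
}

int main(void)
{
    static float i[SAMPLES_PER_CHUNK], q[SAMPLES_PER_CHUNK];
    static iq_chunk_t chunk;

    for (int n = 0; n < SAMPLES_PER_CHUNK; n++) { i[n] = 0.5f; q[n] = -0.5f; }
    pack_iq(i, q, &chunk);
    printf("first sample: I=%d Q=%d\n", chunk.iq[0], chunk.iq[1]);
    /* dma_submit(&chunk, sizeof(chunk));  -- hypothetical driver call        */
    return 0;
}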
Real-time processing of high-speed baseband signals on a universal server requires
solving two technical challenges: the real-time signal processing capability of the
universal server, and the data interaction bandwidth of its internal interfaces.
With the development of multi-core GPPs, new architectures keep emerging, so that
the baseband signal processing capability has reached a high level and basically
meets the requirements of high-speed baseband signal processing. Moreover, by
adding a calculation accelerator card, the CPU and FPGA/DSP form a heterogeneous
computing platform, which can greatly improve the overall processing capability.
Fig. 6.31 Software-defined mobile network architecture with the key functions in core network
and base stations
(1) The functions of the LTE core network and the base station are implemented on
a universal server, giving the core network equipment versatility, openness and
function customization capability. Based on the OAI open-source EPC and by
adding functions such as mobility, multi-user multi-base-station support and
stability, an open network based on a universal-platform EPC is constructed. As
shown in Fig. 6.31, the LTE core network is deployed on a universal Intel
server running Linux. The CPU is a Core i7-5557U (3.1 GHz, dual-core, four
threads) with a 5.0 GT/s bus and 4 MB L3 cache, and 16 GB of memory is
supported.
(2) A joint network of a GPP-based core network and soft base station is
supported. The functions of the core network (MME, SGW, PGW, HSS, etc.) and of
the base station (RRC, Packet Data Convergence Protocol (PDCP)/RLC/MAC, PHY,
etc.) are realized. Data transmission and reception are performed through a
USRP B210, and self-made SIM cards allow commercial terminals to access the
network, as shown in Fig. 6.32, in both Frequency Division Duplex (FDD) and
TDD modes with 5/10/20 MHz bandwidth. Both the core network (MME, SGW, PGW,
HSS, etc.) and the base station (encoder/decoder and other physical layer
functions, scheduling, power control, link adaptation, etc.) can be enhanced
and updated.
Although some progress has been made with the GPP-based Open 5G platform, the
existing technical solutions still have shortcomings and are far from practical
application. Considering the various technical challenges, core technologies such
as real-time baseband signal processing, resource and network function
virtualization, and resource scheduling and task matching still need to be
developed and verified.
Fig. 6.32 Simultaneous access of multiple terminals and smooth switches among base stations
At present, the industry has no recognized implementation standard for GPP-based
wireless baseband processing, and various architectures and algorithms are still
under investigation. When dealing with network-intensive and computationally
demanding baseband algorithms, the processing capability of a GPP is not as good as
that of a dedicated chip; performance bottlenecks may occur, and there is still
some distance to large-scale practical application. First, because a GPP usually
runs many tasks, operating systems (such as Linux) are mainly designed for
time-division operation, whereas baseband signal processing is computing-intensive
with strict real-time requirements. Interrupt responses and blocking waits should
therefore be avoided in the design of the signal processing algorithms, the
complexity of the algorithms should be reduced, and efficient algorithms should be
used in subsequent application development as much as possible. Second, GPPs have
shortcomings in high-speed data access; however, the cache mechanism can be used to
improve storage efficiency, for example by specifying control logic in advance to
determine which data and instructions are kept in the on-chip cache or in memory.
Finally, a GPP needs additional CPU resources to manage multiple threads and
processes; nevertheless, using multi-threading/multi-processing to handle and
optimize large, repetitive computations can increase the data rate substantially.
In addition, the instruction set can be used to optimize CPU operations and thereby
accelerate baseband signal processing.
Instruction set-based software acceleration methods include the following types:
(1) Look-Up Tables (LUT): the LUT [7] is a compromise between computation
complexity and space complexity. Replacing conventional bit operations with
table look-ups can greatly reduce the on-line processing delay (a combined
sketch is given after this list).
(2) SIMD: Intel [8] CPUs provide dedicated multi-data instructions, and the SIMD
instruction set can accelerate symbol-level signal processing. SIMD repeatedly
performs the same operation on symbol-level data; a single SIMD instruction can
handle several operations at low computational cost and make full use of the
register bit width, significantly increasing CPU efficiency.
(3) Sample-level acceleration with Intel Integrated Performance Primitives (IPP):
IPP [9], developed by Intel, is a cross-platform, cross-operating-system
software library for signal processing, image processing, multimedia, vector
manipulation and other operations. In [7], IPP is used to implement the Fast
Fourier Transform (FFT)/Inverse Fast Fourier Transform (IFFT); the test results
show that accelerating the FFT/IFFT with IPP improves performance
significantly.
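The short C sketch below combines the first two ideas: a precomputed 16-QAM
look-up table replaces per-bit arithmetic, and SSE intrinsics apply a gain to the
resulting I/Q samples four floats at a time. The constellation mapping is
illustrative rather than taken from the 3GPP specification, and the program is a
minimal demonstration, not part of the OAI platform.

/* Sketch: LUT-based 16-QAM mapping plus SIMD (SSE) gain scaling.             */
#include <immintrin.h>
#include <math.h>
#include <stdio.h>

static float qam16_i[16], qam16_q[16];          /* LUT: 4 bits -> I/Q pair   */

static void build_lut(void)
{
    static const float lvl[4] = { -3.0f, -1.0f, +3.0f, +1.0f }; /* Gray-like */
    for (int b = 0; b < 16; b++) {
        qam16_i[b] = lvl[(b >> 2) & 3] / sqrtf(10.0f);
        qam16_q[b] = lvl[b & 3]        / sqrtf(10.0f);
    }
}

static void scale_simd(float *buf, int n, float gain)   /* SIMD gain stage   */
{
    __m128 g = _mm_set1_ps(gain);
    int i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(buf + i, _mm_mul_ps(_mm_loadu_ps(buf + i), g));
    for (; i < n; i++) buf[i] *= gain;                  /* scalar tail       */
}

int main(void)
{
    unsigned char bits[4] = { 0x3, 0x7, 0xB, 0xF };     /* 4 coded nibbles   */
    float iq[8];

    build_lut();
    for (int k = 0; k < 4; k++) {                       /* one LUT read each */
        iq[2 * k]     = qam16_i[bits[k]];
        iq[2 * k + 1] = qam16_q[bits[k]];
    }
    scale_simd(iq, 8, 0.5f);
    for (int k = 0; k < 8; k++) printf("%+.3f ", iq[k]);
    printf("\n");
    return 0;
}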
Based on the OAI open-source platform, the processing delay of each physical-layer
module is measured for a single-base-station, single-user system in SISO mode with
a 5 MHz FDD uplink/downlink bandwidth. Each physical-layer module on this platform
processes floating-point data and uses SIMD instructions to accelerate the data
processing.
In the system performance test environment of the OAI open-source platform, the
EPC, the base station and the user equipment run on the same host, and the base
station and user equipment are connected directly through UDP. The program runs on
an IBM System x3400 M3 server with a 2.13 GHz four-core Intel Xeon E5606 processor,
4 GB of memory and a 256 GB hard drive. The operating system is 64-bit Ubuntu 14.04
Desktop (Linux) with a 64-bit low-latency kernel installed, and the CPU runs at its
highest operating frequency.
Fig. 6.34 Uplink transmitting flow of the OAI platform (CRC, segmentation, turbo
coding, rate matching, interleaving, CQI/RI/acknowledgement insertion,
multi-channel interleaving, modulation, PUCCH generation, IFFT and CP insertion)
Fig. 6.35 Uplink receiving flow of the OAI platform (FFT, PUCCH receiving, data
extraction, channel estimation and compensation, IFFT, log-likelihood estimation,
rate de-matching, de-interleaving, turbo decoding and uplink control information
receiving)
The uplink transmitting procedure of the OAI platform is shown in Fig. 6.34.
First, a 24-bit CRC is attached to the uplink shared channel information, which is
then segmented. Second, turbo channel coding is performed, followed by bit
interleaving and rate matching. CQI, Rank Indication (RI) and acknowledgement
information are added, and multi-channel interleaving and modulation are carried
out. The Scheduling Request (SR), acknowledgement information and other Physical
Uplink Control CHannel (PUCCH) content are then combined, the IFFT is performed,
the CP is added and the frame is finally formed. The uplink receiving flow is shown
in Fig. 6.35. After the FFT, the scheduling request and acknowledgement indication
can be extracted from the PUCCH, while the Physical Uplink Shared CHannel (PUSCH)
information is obtained through data extraction, channel estimation, channel
compensation, IFFT, log-likelihood estimation, rate de-matching (at which point
uplink control information such as CQI, RI and acknowledgement information can be
recovered), de-interleaving, turbo decoding and other procedures.
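As a small, self-contained illustration of the first step of this chain, the
following C sketch computes a 24-bit CRC bit by bit. The generator polynomial used
(0x864CFB) is the LTE CRC24A polynomial as commonly documented; it should be
checked against the specification, and the function is not taken from the OAI code
base.

/* Sketch: 24-bit CRC attachment for the uplink shared channel, bit by bit.   */
#include <stdint.h>
#include <stdio.h>

static uint32_t crc24(const uint8_t *data, int n_bytes)
{
    const uint32_t poly = 0x864CFB;          /* assumed CRC24A polynomial     */
    uint32_t crc = 0;
    for (int i = 0; i < n_bytes; i++) {
        crc ^= (uint32_t)data[i] << 16;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x800000) ? (crc << 1) ^ poly : crc << 1;
        crc &= 0xFFFFFF;
    }
    return crc;
}

int main(void)
{
    uint8_t transport_block[8] = { 0xDE, 0xAD, 0xBE, 0xEF, 0, 1, 2, 3 };
    printf("CRC24 = 0x%06X\n", crc24(transport_block, 8));
    return 0;
}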
Figures 6.36 and 6.37 describe the transmitting and receiving flows of downlink
data, respectively. For the former, the downlink shared channel first completes CRC
attachment, segmentation and turbo coding, followed by interleaving, rate matching,
scrambling and modulation. The broadcast channel information and the downlink
control information each complete convolutional coding, interleaving, rate
matching, scrambling and modulation. The Physical HARQ Indicator CHannel (PHICH)
generates the HARQ indication information and the Physical Control Format Indicator
CHannel (PCFICH) generates the control format indication information. These
channels are then multiplexed with the Primary Synchronization Signal (PSS), the
Secondary Synchronization Signal (SSS) and the Reference Signal (RS). After that,
the IFFT is performed and the CP is added, and the physical baseband signal is
finally generated. For the latter, the HARQ indication and control format
indication information are output after FFT, channel estimation, frequency
compensation and physical layer measurement, while the downlink shared channel
information, the broadcast channel information and the Downlink Control Information
(DCI) are recovered through the corresponding demodulation and decoding procedures.
Fig. 6.36 Downlink transmitting flow of the OAI platform (shared, broadcast and
control channels; PCFICH, PHICH, PSS, SSS and RS generation)
Fig. 6.37 Downlink receiving flow of the OAI platform (PCFICH and PHICH receiving,
control format and HARQ indicator extraction)
Table 6.2 Turbo decoding test results of the user-side downlink shared channel
Data rate                     2.152 Mbps  4.392 Mbps  8.76 Mbps  11.832 Mbps  13.536 Mbps  17.56 Mbps
Number of code blocks         1           1           2          2            3            3
Average number of iterations  2           2           4          4.08         6.25         6.54
Average value (μs)            113.44      234.68      465.01     680.27       734.86       1047.61
Variance                      28.52       74.27       494.92     1796.38      3765.47      17459.83
Standard deviation            5.34        8.62        22.25      42.06        61.36        132.14
Maximum value (μs)            127         253         533        850          952          1514
Minimum value (μs)            106         221         429        644          667          923
Table 6.4 Turbo decoding test results of the uplink shared channel at the base station side
Data rate                     2.216 Mbps  4.392 Mbps  8.76 Mbps  11.064 Mbps  14.688 Mbps
Number of code blocks         1           1           2          2            3
Average number of iterations  2.21        2.19        5.62       5.38         7.51
Average value (μs)            151.99      307.47      642.04     822.57       975.09
Variance                      213.85      753.29      3122.41    4504.32      3462.69
Standard deviation            14.62       27.45       55.79      67.11        58.84
Maximum value (μs)            211         459         841        1145         1256
Minimum value (μs)            107         212         448        583          735
(1) Complex network functions and poor flexibility. The mobile communication
network is formed by a large number of single-function network devices.
Hardware resources are specialized and fragmented, and the equipment comes in
many types and large quantities. Once the network is built, it is difficult to
change, expand and update, and resources cannot be rapidly allocated on demand
across the whole network.
(2) The hardware and software of network element equipment are vertically
integrated in a closed architecture. In order to provide different services and
multiple access modes, and to meet various QoS and security requirements, the
communication network introduces a large number of control protocols and is
bound to specific forwarding protocols. The code is directly or indirectly
implemented in hardware, constituting a closed architecture with integrated
control and forwarding. In the long run, the equipment becomes more and more
bloated: the room for performance improvement is small, technical innovation
and upgrading are difficult, scalability is limited, and the service
development cycle is long.
(3) The network and services form isolated "chimney" silos. The provision of new
services often leads to a proliferation of new equipment types and quantities,
and the division between departments easily creates a large number of
independent, closed network and service silos. In addition, the cost of
infrastructure construction is high, resources cannot be shared, and the
network and services can neither be coordinated and converged nor adapt quickly
to new services and new models.
(4) CAPEX and OPerating EXpense (OPEX) remain high. Because the special nature of
network devices leads to the coexistence of equipment of different
manufacturers, ages and standards, a great deal of manpower and resources is
required for the procurement, integration, testing, deployment and maintenance
of equipment. At the same time, insufficient competition among equipment
manufacturers keeps the costs of equipment, operation and maintenance
management, and upgrades relatively high. According to the 5G White Paper [16],
the 5G network architecture poses higher requirements in terms of access speed,
energy saving, cost efficiency, etc. Facing 2020 and beyond, the popularity of
ultra-high-definition, 3D and immersive video will drive a substantial increase
in data rates, and augmented reality, cloud desktop, online gaming and other
services will not only challenge the uplink and downlink data transmission
rates, but also impose stringent requirements of imperceptible delay. In the
future, a large amount of personal and office data will be stored in the cloud;
massive real-time data exchange, comparable to the transmission rate of optical
fibers, will put traffic pressure on mobile communication networks in hot-spot
areas. Therefore, a new network architecture is imperative. The new networking
approach must address the various problems of current networks so as to meet
the changing requirements of users.
First, as people demand higher bandwidth from wireless networks, the bulk of the
increase in mobile operators' expenditure comes from base station construction,
operation management and network infrastructure upgrades. Besides, enterprises face
an increasingly intense competitive environment, yet revenue may not grow at the
same rate. The traffic of mobile Internet services keeps rising, but the average
revenue per user has grown slowly and has sometimes declined rapidly, which has
seriously weakened the profit of mobile operators. In order to remain profitable,
mobile operators must look for ways to increase network capacity at low cost, thus
providing better wireless service for users.
Traditional wireless base stations have the following characteristics. First, it
is difficult for independently operating base stations to increase spectral
efficiency, because interference limits the system capacity. Second, each base
station covers only a small area, connects a fixed number of sector antennas and
can only process the signal reception and transmission of its own cell. Third, base
stations are usually vertical solutions developed on proprietary platforms. In
contrast to traditional base stations, which operators deploy and connect to the
core network through dedicated lines or optical fibers, the small base stations of
the future 5G access network will be casually deployed by third parties or users
according to their needs (for example in commercial and office areas or in users'
homes). This brings a huge challenge for operators. A large number of small base
stations means high site acquisition and leasing, construction investment and
maintenance costs, and the greater number of small base stations means that
operators will have to pay more capital and operating expenses. The average network
load of small base stations is generally much lower than the busy-hour load, so the
actual utilization efficiency is very low. Meanwhile, it is difficult for different
small base stations to share processing power, making spectral efficiency hard to
improve. Finally, the proprietary platforms used by small base stations mean that
the operator needs to maintain multiple incompatible platforms at the same time and
will incur higher costs when expanding or upgrading.
At present, with the rapid development of virtualization technology and given its
characteristics such as lightweight management and resource optimization, building
a virtual management platform for densely deployed small base stations is becoming
increasingly feasible. Virtualization is a technology that abstracts and emulates
computing and communication resources. On the basis of the existing computer and
communication hardware, virtualization simulates all or part of the hardware
resources, such as baseband, CPU, memory, and input and output devices. These
virtual hardware resources can share the same platform with the local physical
hardware resources; the result is called a virtual machine. In general, from the
software point of view, a virtual machine is indistinguishable from a real machine;
that is, the realization and operation of the virtual machine are transparent to
software programs.
Considering the evolution of mobile communication networks, IMT-2020 has put
forward new requirements in order to evolve the old network architecture and adapt
to 5G: NFV, Cooperative Communications (CC), Automated Network Organization (ANO),
Flexible Backhauling (FB) and advanced traffic management and offloading, among
other key technologies [11]. Among them, NFV is the next-generation network
building scheme proposed and led by operators, whose purpose is to carry more and
more mobile network functions in software on x86 and other universal hardware using
virtualization technology, thereby reducing the high cost of network equipment. At
the same time, with hardware/software decoupling and functional abstraction,
network equipment functions no longer depend on dedicated hardware, and resources
can be shared flexibly, enabling rapid development and deployment of new services.
Automatic deployment, elastic scaling, fault isolation and self-healing are
performed according to actual business requirements. The architecture is shown in
Fig. 6.38.
The nature of virtualization technology is the division and abstraction of
computing resources. Its advantages, such as isolation, consolidation and
migration, make it possible to safely and securely integrate applications from
different platforms onto the same server, and to quickly migrate an application
from one server to another, thereby improving server utilization, reducing hardware
procurement and operating costs, and simplifying system management and maintenance.
Virtualization technology has a history of nearly 50 years [12, 13]. The first
virtual machine was the System/360 Model 40 VM developed by IBM in 1965. In recent
years, with the rapid development of computer hardware and the continuous
innovation of computer architecture, and especially since the late 1990s, desktop
computer performance has become sufficient to run multiple systems simultaneously.
At this point, the contradiction between the increasingly powerful computing power
of computer systems and the relative backwardness of the computing model has become
increasingly prominent. The characteristics of virtualization technology strike a
balance between rapidly developing hardware systems and application requirements
with complex changes, which provides many advantages for enterprise-class
applications, such as improving resource utilization, reducing management costs,
increasing flexibility of use, improving system security, and enhancing the
availability, scalability and manageability of the system. Therefore,
virtualization technology has attracted the attention of academia and industry both
at home and abroad and has become one of the hotspots of current research [14].
From the 1990s to today, virtualization technology has made considerable progress,
and a variety of technologies have matured. In addition to VMware, Denali and Xen,
there are many other virtualization products, such as KVM, VirtualBox, Microsoft's
virtualization products (Virtual PC, Hyper-V), Parallels' virtualization products
(Virtuozzo, Parallels Desktop for Mac), Citrix's XenServer, Sun's xVM, Oracle VM and
VirtualIron. The main applications of virtualization today involve the following
domains: (1) servers, including data centers with high requirements on hardware
platforms, cloud computing, distributed computing, virtual servers, etc.;
(2) enterprise management software, such as virtual-machine-based trusted desktops
that easily and effectively manage and support employees' desktop computers;
(3) individual users, including virtual-machine-based antivirus technologies,
program development and debugging, operating system kernel learning, server
consolidation, cloud computing, etc.
The segmentation and parallelization of tasks has two main advantages. First, it
is conducive to increasing the data rate: GPPs have advantages in multi-core
processing, large memory and fast caches, and using task segmentation to process
multiple threads in parallel can increase the processing rate and reduce the time
consumed. Second, it improves the program structure, making the program easier to
understand and maintain [19]. In the case of multiple users and multiple base
stations, the processing of some very complex communication procedures can be
divided into several threads, and the individual user processes can run
independently in separate threads, which reduces the interference among multiple
users.
Parallelization of the communication procedures proceeds as follows. First, the
task is segmented, dividing large, coarse-grained computational processes into a
number of fine-grained subtasks. Then parallelism is identified according to the
logic of the subtasks: if there is a data dependency among subtasks, they are
merged vertically; otherwise, they are merged horizontally [20]. Common task
segmentation methods in communication systems are as follows.
(1) Data segmentation. The data are divided into several independent data blocks,
and several threads are created to process them respectively, so as to complete
the processing of all the data.
(2) Process segmentation. A complex process can be divided into a number of simple
sub-processes with a certain degree of independence, and multiple threads are
used to implement the sub-processes to speed up processing.
(3) Problem segmentation. A complex problem is divided into several independent
sub-problems; the sub-problems are solved by multiple threads, and finally the
solution of the original problem is obtained.
Task parallel processing methods of communication systems, according to the amount
of computation, the mutual independence, the real-time requirements and other
factors, can be divided into two types.
(1) Distributed parallel approach. A large task is divided into several small tasks
that are executed in parallel. As shown in Fig. 6.39 [20], the large task
A-B-C-D is divided equally among four threads, threads 1 to 4, each processing
1/4 of the task. This parallel approach balances the load and is suitable for
large tasks that are easy to divide into a number of independent, highly
parallel subtasks [21].
Taking turbo decoding as an example (as shown in Fig. 6.40), the turbo decoder
repeats the same operation for each code block. The code blocks are strongly
independent of each other, which makes them suitable for parallel processing
and can effectively reduce the processing time.
However, the creation and scheduling of threads incur a certain extra overhead
(see the sketch after Fig. 6.42 for a minimal threaded example).
Fig. 6.39 Schematic diagram of distributed parallel processing (task A-B-C-D split
across four threads)
Fig. 6.40 Schematic diagram of parallel turbo decoding (the code blocks of a
resource block dispatched to several decoder threads)
Fig. 6.41 Schematic diagram of pipeline parallel processing (each thread handles
one stage of successive data blocks)
Fig. 6.42 Schematic diagram of pipeline parallel processing of uplink receiving flow [23]
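A minimal sketch of the distributed parallel approach of Fig. 6.40, assuming POSIX
threads: each code block of a transport block is handed to its own worker thread
and the results are joined afterwards. turbo_decode_block() is a hypothetical
placeholder for the actual per-block decoder.

/* Sketch: one worker thread per independent turbo code block.                */
#include <pthread.h>
#include <stdio.h>

#define N_BLOCKS 3

typedef struct { int idx; /* a real job would carry soft bits and outputs */ } cb_job_t;

static void *decode_worker(void *arg)
{
    cb_job_t *job = (cb_job_t *)arg;
    /* turbo_decode_block(job);  -- hypothetical per-block decoder call       */
    printf("decoded code block %d\n", job->idx);
    return NULL;
}

int main(void)
{
    pthread_t tid[N_BLOCKS];
    cb_job_t  job[N_BLOCKS];

    for (int i = 0; i < N_BLOCKS; i++) {         /* one thread per code block */
        job[i].idx = i;
        pthread_create(&tid[i], NULL, decode_worker, &job[i]);
    }
    for (int i = 0; i < N_BLOCKS; i++)           /* wait for all code blocks  */
        pthread_join(tid[i], NULL);
    return 0;
}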
Resource scheduling methods for threads are of two types: dynamic and static. In
the former, the scheduler dynamically maps the processing modules to be scheduled
onto the currently available CPU cores for task processing. Despite the flexibility
of dynamic scheduling, it is not suitable for physical-layer signal processing
modules with high data dependency. The latter divides the data flow into several
sub-streams and then statically assigns the different sub-streams, together with
their related task processing modules, to different CPU cores. Static scheduling is
suitable for physical-layer data flows with a substantially constant structure; it
can improve the utilization of the high-speed cache and reduce the overhead of data
synchronization, thereby improving the overall performance of the communication
system [23].
The existing scheduling strategies based on multi-threaded programming support
both real-time and non-real-time processes, and the priority of a real-time process
is higher than that of any non-real-time process. The scheduling policy for
real-time processes is divided into a first-come-first-served strategy and a
time-slice rotation strategy. In the former, processes are queued according to
priority and the system schedules the process group with the highest priority;
within the same priority level, the process that arrived first is scheduled first,
and the currently executing process continues to consume system resources until a
higher-priority process arrives or it exits. The latter is similar: the process
group with the highest priority is scheduled first, and processes with the same
priority are scheduled in turn by time slice; after the time slice of the currently
executing process is exhausted, it is placed at the end of the queue.
Non-real-time processes use the time-division scheduling strategy. The system
schedules each thread in a time-slicing fashion. Each process defines its priority
at creation time; the higher the priority, the longer the time slice. After the
time slice allocated to the currently executing process is used up, the process is
placed at the end of the queue and waits for the next schedule. This is essentially
proportional-share scheduling.
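On Linux, the two real-time policies described above correspond to SCHED_FIFO
(first come first served) and SCHED_RR (time-slice rotation). The following sketch
creates a worker thread under SCHED_FIFO; the priority value is an arbitrary
example and real-time privileges are required.

/* Sketch: a baseband worker thread under the real-time FCFS policy.          */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *rt_task(void *arg)
{
    int policy;
    struct sched_param sp;
    (void)arg;
    pthread_getschedparam(pthread_self(), &policy, &sp);
    printf("policy=%s priority=%d\n",
           policy == SCHED_FIFO ? "FIFO" : policy == SCHED_RR ? "RR" : "OTHER",
           sp.sched_priority);
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 80 };   /* 1..99 on Linux    */
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);     /* real-time FCFS    */
    pthread_attr_setschedparam(&attr, &sp);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    if (pthread_create(&tid, &attr, rt_task, NULL) != 0)
        perror("pthread_create (needs real-time privileges)");
    else
        pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}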
which can then be retrieved directly from the cache, and the data interaction
between two processes can also take place directly through the shared cache. This
access is much faster than going to main memory.
(2) The scheduler should ensure load-balanced scheduling. On the one hand, this
prevents overload and shortens the average response time of tasks; on the other
hand, full use of the resources of the entire system improves resource
utilization efficiency. The current methods for static load balancing are
mainly heuristic algorithms and graph-theoretic methods, while methods for
dynamic load balancing include scheduling algorithms based on the gradient
model and probabilistic scheduling algorithms based on random selection [24],
etc.
(3) The scheduler should minimize the cost of migration. To achieve load balance,
schedulers often have to migrate task processes between cores. When a task is
scheduled, system performance is greatly influenced by the scheduling mode.
Because the costs of migrating different processes between processors differ,
the scheduler must take full account of the memory footprint of the process,
its CPU occupancy and its restriction type (CPU-bound or input/output-bound),
and weigh the load balance of the system against the cost of achieving it.
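One common way to avoid migration cost in a static scheduling scheme is to pin
each sub-stream's thread to a fixed core. The Linux-specific sketch below does
this with pthread_attr_setaffinity_np(); the chosen core number is an arbitrary
example.

/* Sketch: pinning a physical-layer worker thread to one core so its working
 * set stays in that core's cache and no migration occurs.                    */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *phy_worker(void *arg)
{
    (void)arg;
    printf("PHY worker running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(2, &set);                      /* assign this sub-stream to core 2 */
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

    pthread_create(&tid, &attr, phy_worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}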
Based on software defined radio, the Open 5G platform can realize a full-featured
communication system on GPPs. Because of the large data dependencies among the
processing modules of the communication system, task segmentation and parallel
processing must be used together. By allocating the corresponding hardware
resources to the task threads with a scientific scheduling strategy, the platform
can meet the delay, timing and isolation requirements of the wireless communication
protocol. Moreover, by effectively scheduling the computing resources of the
universal platform and designing a reasonable real-time scheduling algorithm, the
utilization efficiency of the computing resources can be improved and real-time
processing of baseband signals can be realized.
6.5 Summary
This chapter mainly describes the MIMO parallel channel test platform, the channel
models built on the parallel channel test platform, the MIMO OTA platform and the
5G open source community platform developed by Shanghai Research Center for
Wireless Communications. These three sets of platforms are currently the first of
their kind in China and are being put into the 5G R&D process. They will continue
to evolve, for example by adding millimeter-wave parallel channel test functions,
sharing millimeter-wave channel test data and providing OTA test prototypes for
millimeter-wave equipment.
References
1. https://www.fcc.gov/document/fcc-promotes-higher-frequency-spectrum-future-wireless-tech
nology-0
2. NI TDMS format http://www.ni.com/white-paper/3727/zhs/
3. USRP[EB/OL]. http://www.ettus.com/.
4. GNU Radio[EB/OL]. http://gnuradio.org/.
5. K. Tan, H. Liu, J. Zhang, J. Fang, et al. Sora: High performance software radio using
general purpose multi-core processors. Communications of the ACM, 2011, 54(1):99-107.
6. China Mobile Communications Research Institute. White Paper of C-RAN Access Network
Green Evolution. Version 2.5. October 2011. (Chinese)
7. K. Niu, J. Sun, K. Chen, and K. Chai. TD-LTE eNodeB prototype using general purpose
processor. Equine Veterinary Journalii, 2004, 36(3):248254.
8. Intel. Intel 64 and IA-32 Architectures Optimization Reference Manual. May, 2007.
9. Integrated Performance Primitives for Intel Architecture Reference Manual.
10. PlanetLab: An open platform for developing, deploying, and accessing planetary-scale ser-
vices. http://www.planet-lab.org.
11. GENI: Global Environment for Network Innovations. http://www.geni.net/.
12. L. Peterson, T. Anderson, D. Blumenthal, et al. GENI design principles. IEEE Computer,
2006, 39(9):102-105.
13. VINI: A virtual network infrastructure. http://www.vini-veritas.net/.
14. N. Feamster, L. Gao and J. Rexford. How to lease the Internet in your spare time. ACM
SIGCOMM Computer Communication Review, 2007, 37(1):61-64.
15. M. Peng, Y. Li, Z. Zhao, et al. System architecture and key technologies for 5G hetero-
geneous cloud radio access networks. IEEE Network, 2015, 29(2):6-14.
16. IMT-2020 (5G) Promotion Group. White Paper on 5G Concept. Beijing: Release Conference
of White Paper on 5G Concept, 2015.
17. P. Agyapong, M. Iwamura, D. Staehle, et al. Design considerations for a 5G network
architecture. IEEE Communications Magazine, 2014, 52(11):65-75.
18. Y. Li, P. Hao, X. Feng, et al. Cell and user virtualization for ultra dense network. 2015 IEEE
26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communica-
tions (PIMRC), 2015:2359-2363.
19. T. Wang. The implementation of sending and receiving LTE PUGG and SRS based on GPP.
Beijing University of Posts and Telecommunications, 2013.
20. Q. Zhang. Research on the key technology of multi-core processor. Fudan University, 2014.
21. H. He. The research and realization of LTE real-time communication link on GPP-based SDR
platform. Beijing University of Posts and Telecommunications, 2013.
22. W. Shi. Universal multi-thread parallel processing technology of wireless signal on GPP
platform. Beijing University of Posts and Telecommunications, 2014.
23. X. Zhang. Study on key algorithms and optimization of uplink receiver for LTE-A based on
GPP platform. Beijing University of Posts and Telecommunications, 2014.
24. Z. Tan. Thread schedule based on multi-core system. University of Electronic Science and
Technology of China, 2009.
Chapter 7
Field Trial Network
communications needs, and in 2020 and the years to come, the mobile Internet and
the IoT will become the main driving forces of the development of mobile
communications. However, given the diverse scenarios and the extreme differences in
performance requirements, it is unlikely that the 5G system can address all
scenarios with a single-technology solution as before. Therefore, at the beginning
of 5G technology research, we must define the classification of future application
scenarios and the corresponding technical challenges.
Back in November 2012, the EU launched the 5G METIS project, which summarized the
following five characteristics of 5G application scenarios [1].
1. Fast, i.e., amazingly fast: 5G will guarantee a higher data rate for future
mobile broadband users.
2. Dense, i.e., great in a crowd: 5G will guarantee that densely populated areas
get high-quality mobile broadband access.
3. Complete, i.e., ubiquitous things communicating: 5G is committed to the
efficient handling of various types of terminal equipment.
4. Best, i.e., the best experience follows you: 5G is dedicated to providing
mobile users with a better user experience.
5. Real, i.e., super real-time and reliable connections: 5G will support new
applications with more stringent requirements on latency and reliability.
In the White Paper on 5G Concept (2015-02) [2] by China's 5G Promotion Group, the
5G technology scenarios are described. From the main application scenarios, service
requirements and challenges of the mobile Internet and the IoT, four main
technology scenarios can be summarized, namely the seamless wide-area coverage
scenario, the high-capacity hot-spot scenario, the low-power massive-connection
scenario, and the low-latency high-reliability scenario.
Seamless wide-area coverage and high-capacity hot-spot scenarios mainly target the
demand for mobile Internet in 2020 and the years to come, which is also the main
technical scenario of traditional 4G.
1. Seamless wide-area coverage is the most basic coverage mode of mobile
communications; its goal is to provide users with a seamless high-data-rate
service experience while ensuring user mobility and service continuity. The
main challenge in this scenario is to provide a user experienced data rate of
more than 100 Mbps with guaranteed service continuity anytime and anywhere
(including harsh environments such as the coverage edge and high-speed
movement).
2. High-capacity hot-spot scenarios mainly target local hot-spot areas where
ultra-high data rates must be provided to users and ultra-high traffic volume
density must be handled. The main challenges include a 1 Gbps user experienced
data rate, peak data rates of tens of Gbps, and a traffic volume density of
tens of Tbps/km².
Low-power massive-connection and low-latency high-reliability scenarios mainly
target IoT services; these are newly expanded 5G scenarios focusing on the IoT and
vertical industry applications that traditional mobile communications cannot
support well.
latency. The key performance indicators for 5G include user experienced data rate,
connection density, end-to-end latency, traffic volume density, mobility and user
peak data rate.
From the main application scenarios of the mobile Internet and the IoT, we can
conclude that 5G will meet people's various service requirements in residence,
work, leisure and transportation, even in dense residential areas, office towers,
stadiums, open-air gatherings, subways, highways, high-speed railways and wide-area
coverage scenarios, which are characterized by ultra-high traffic volume density,
ultra-high connection density or ultra-high mobility. Extreme service experiences
such as ultra-high-definition video, virtual reality, augmented reality, cloud
desktop and online gaming can be provided to users. Meanwhile, 5G will also
penetrate into and deeply converge with the IoT and a variety of industries such as
industrial facilities, medical equipment and transportation tools, effectively
meeting the needs of diversified services in vertical industries like industry,
medical service and transportation, so as to realize a truly interconnected world
of everything.
5G must respond to the challenges brought by the diversified performance indicators
of the various application scenarios. Different application scenarios face
different performance challenges in user experienced data rate, traffic volume
density, latency, energy efficiency and connection density, and any of these may be
the critical challenge in a given scenario.
The METIS project converts the five characteristics of the 5G application
scenarios into a series of highly challenging numerical requirements.
• In densely populated urban environments, 5G will provide data rates of
10-1000 Gbps, 10-100 times the current level.
• The mobile data volume per unit area or per user will increase by 1000 times,
exceeding 100 Gbps/km² or 500 GB/user/month.
• The number of interconnected terminal devices will increase by 10-100 times.
• The battery life of large numbers of low-power communications devices will be
extended by 10 times, and that of terminals such as sensors or pagers will reach
10 years.
• Ultra-fast-response applications such as the tactile Internet will be supported;
an end-to-end latency within 5 ms and very high reliability will be realized.
In May 2014, the Japanese operator NTT DoCoMo announced the start of its 5G
network experiments. It was reported that the companies participating in NTT
DoCoMo's above-6 GHz spectrum 5G tests were Alcatel-Lucent, Ericsson, Fujitsu, NEC,
Nokia and Samsung. NTT DoCoMo announced that commercial 5G network service was
targeted for around 2020.
Overall, countries around the world have begun to prepare and plan 5G technology
test networks for testing and validation. However, so far these test networks are
mainly aimed at verifying cellular mobile technologies.
Different from the traditional approach, 5G will no longer feature a single
multiple access technology. Instead, 5G should include cellular mobile
communications technology, new-generation WLAN technology and networking
technology. Since simple cell splitting is unlikely to keep improving data capacity
and cell-edge spectral efficiency, the Heterogeneous Network (HetNet) has gradually
attracted attention [4]. Different communications technologies, such as 2G/3G/4G
cellular communications, IEEE 802.16/20 broadband wireless access and
short-distance communications (WLAN, Bluetooth and UWB), offer a variety of
services to users. Using a heterogeneous structure to build the mobile network is
an effective way to ease hot-spot data traffic, which makes HetNet a long-term
trend of the mobile network.
HetNet, in a broad sense, refers to the integration of a variety of wireless
access technologies, networking architectures, transmission modes and base station
types with different transmission powers, such as adding WLAN hot-spots to the
mobile network. HetNet, in a narrow sense, means adding low-power nodes of the same
mode under the coverage of a Macro eNodeB, such as micro cells, RRHs, pico cells,
HeNBs, relay nodes, etc. [5].
Coordination and convergence between HetNets have become the focus of industry attention. Through HetNet convergence, we can fully utilize the advantages of different types of network technologies and obtain various benefits: greatly enhancing the performance of a single network, supporting traditional services while creating conditions for introducing new ones; expanding overall network coverage and improving scalability; balancing the network service load and increasing system capacity; making full use of existing network resources to reduce the costs of network operators and service providers and thus make them more competitive; providing all kinds of needed services to different users, so as to better meet diverse customer needs and improve customer satisfaction; and improving the usability, reliability, and survivability of the network.
The concept of network convergence can be traced back to the 1970s, when the communications community proposed the concepts of network and service convergence, such as the famous Integrated Services Digital Network (ISDN) and Broadband Integrated Services Digital Network (B-ISDN) [6]. Limited by service and technology development at the time, however, ISDN failed. In the 1990s, with the development of mobile communications technology, it was proposed to develop IMT-2000 as a globally unified mobile communications standard, which also did not succeed. Meanwhile, with the rapid development of the Internet at the end of the last century, the
industry put forward the concept of the Next Generation Network (NGN), and the research focus shifted from network integration to network convergence, which for the first time presented the prospect of converging the information and communications networks on the basis of unified IP technology. The NGN research results on network convergence are embodied in the IP Multimedia Subsystem (IMS) technology proposed by 3GPP, which integrates the telecom network, Internet technology, the fixed network, and mobile network technology. Network convergence integrates Internet IP technology, soft-switch technology, and cellular core network technology. IMS technology itself has some limitations, but as a core-network convergence technology it is widely recognized by the industry [7].
In fact, since the 1990s, the convergence of the telecom network, the radio and television network, and the Internet has been on the agenda, which is often called triple play. The goal of triple play is to achieve interconnection of the three major networks, resource sharing, and service convergence, providing users with a variety of services including voice, data, radio and television, and multimedia services. Triple play is the product of constant information technology innovation, and also an inevitable requirement of information technology development. However, triple play in its real sense has not yet been completed. Since triple play has entered a substantive implementation stage, this book focuses on the convergence and interconnection between heterogeneous communications networks (especially heterogeneous wireless networks).
At present, there are two main directions for the development of multi-network convergence: one is based on the IP backbone network, and the other is based on Ad hoc networking. A multi-network convergence system based on Ad hoc networks can extend the coverage of wireless communications, improve resource utilization, increase system throughput, balance traffic, and reduce the power consumption of mobile terminals, so it has become a research hot spot in recent years. In particular, a series of research results have been achieved on combining wireless ad hoc networks with cellular mobile communications systems, and many practical network models have been proposed, such as Ad hoc assisted GSM (A2GSM), Multi-hop Cellular Network (MCN), Self-Organizing Packet Radio Ad hoc Networks with Overlay (SOPRANO), integrated Cellular and Ad hoc Relaying system (iCAR), and Unified Cellular and Ad hoc Network architecture (UCAN) [8-12]. A2GSM enables mobile terminals in a traditional cellular network to perform multi-hop relaying in order to increase system capacity, enhance cellular network coverage, and solve the long-standing problem of coverage blind areas. MCN is an integrated communications network model that allows packets to travel via multi-hop transmission from mobile nodes to the base station of the cellular communications system. SOPRANO combines the cellular network with the wireless ad hoc network, introducing dynamically distributed wireless routers with routing and relay functions into the cellular network, so as to provide wireless Internet and multimedia services. The basic idea of iCAR is to set up a certain number of Ad hoc Relay Stations (ARS), so as to achieve traffic load balancing between cells, control network congestion, relieve hot spots, and avoid call dropping.
Globally, the EU is leading the study of HetNet convergence and has carried out a series of research projects. The BRAIN project proposes an open system in which the WLAN and the Universal Mobile Telecommunications System (UMTS) are converged. The DRIVE project studies the convergence of the cellular network with TV and radio broadcasting networks. The MOBilitY and DIfferentiated serviCes in a future IP networK (MOBYDICK) project discusses the convergence of the mobile network and WLAN in an IPv6 network system [13]. The My personal Adaptive Global NET (MAGNET) project provides mobile users with ubiquitous and secure personal services via the design, R&D, and realization of Personal Networks (PN) in a HetNet environment. End-to-end Quality Of Service support over Heterogeneous Networks (EuQoS) focuses on end-to-end QoS technology for HetNets [14]. The WINNER project [15] aims to use a ubiquitous wireless communications system to replace the current coexistence pattern of many systems (cellular, WLAN, short-distance wireless access, etc.) and to improve the flexibility and scalability of the system, so as to adaptively provide various services in a variety of wireless environments. These projects, covering access, network, and service aspects that both compete and cooperate with each other, have produced meaningful research on HetNet convergence from a number of perspectives. Although they put forward different ideas and methods for converging different networks, there is still some distance to go before true convergence of HetNets. Recently, the concept of network convergence based on cognitive networks and wireless ad hoc networks has been proposed in the Ambient Networks work, which may provide a more effective way to realize HetNet convergence.
The R&D of the 5G test field fully considers the HetNet convergence requirements of 3GPP and WLAN, and a unified control platform for HetNet convergence was designed from the very beginning. Through the unified control platform and the information interaction between the interfaces of the future 3GPP core network and the WLAN controller, HetNet convergence can be realized on the network side. The basic architecture of the HetNet convergence network is shown in Fig. 7.1.
Similar to the 5G test field under construction in the U.K., our 5G test field is also located in a campus environment, namely the campus of ShanghaiTech University. ShanghaiTech University is located in the Pudong Science and Technology
Park (in central Shanghai, within the Zhangjiang High-Tech Industrial Development Zone; its specific location is south of the Chuanyang River, north of Middle Huaxia Road along the Middle Ring, east of Luoshan Road, and west of Jinke Road), which is jointly constructed by the Chinese Academy of Sciences and the Shanghai Municipal Government. The specific location is shown in Fig. 7.2.
The campus covers an area of about 900 mu (about 148.26 acres). The construction scale is 701,500 square meters of total construction area, including 587,800 square meters for the Phase I new campus (about 280,000 square meters of teaching and experiment buildings, sports facilities, and auxiliary buildings; about 160,000 square meters of student dormitories and teachers' apartments; and about 150,000 square meters of underground construction). The University's industry-university-research base (the Phase II Science and Technology Park) has a total construction area of 113,700 square meters.
A campus aerial view is shown in Fig. 7.3.
According to the above introduction, the 5G mobile communications test field offers the following typical scenarios.
(1) High mobility scenario
The field is near the Shanghai maglev line, the world's only maglev line in commercial operation, with a maximum speed of 430 km/h, making it suitable for high-mobility scenario tests. Meanwhile, it is close to the Middle Ring elevated road, suitable for mid-to-high-speed mobility tests at around 120 km/h, while the campus peripheral roads are suitable for low-mobility tests below 60 km/h.
(2) A variety of typical indoor and outdoor scenarios
Compared with a single-function park, this test field has abundant typical indoor and outdoor scenarios. The field is divided by function into teaching and living areas. The teaching area is mainly composed of teaching buildings, research buildings, an activity center, a library, a gymnasium, and other large-volume buildings, most of which are 20-30 meters high. The living area mainly consists of high-rises, including an apartment layout (three apartments per floor) and a dormitory layout (rooms on both sides of a corridor), as well as a hotel. The outdoor scenarios include the paths between buildings, open areas such as squares and the stadium, and landscaped areas with a river, lawns, trees, etc.
(3) Large enclosed underground indoor scenario
This field has a 150,000-square-meter underground scenario, which is not just a single underground garage but a planned underground space with living functions, making it very suitable for tests of indoor coverage, D2D communications, IoT, and other related technologies.
In order to realize the interconnection between LTE and non-3GPP access networks, the evolution of the LTE system network architecture is also studied. In December 2004, at its 26th plenary session, 3GPP officially approved a study item on the feasibility of UTRA & UTRAN Long Term Evolution. High data rate, low latency, IP packet-based services, and remaining competitive over the next 10 years set the direction of the 3GPP system evolution research. In order to achieve this goal, in addition to the evolution of the wireless access system, the System Architecture Evolution (SAE) also needs to be studied in order to support the new LTE access network. SAE's work objectives and research direction are to achieve a full-IP network architecture, provide real-time services, and realize interconnection between the evolved network and the existing 3GPP network or non-3GPP access networks (such as Worldwide Interoperability for Microwave Access (WiMAX) and WLAN). Supporting multiple access systems is one of the basic principles of SAE and is very important for its competitiveness.
The main features of the SAE network include:
(1) Guaranteed support for end-to-end QoS.
(2) All-packet operation, providing pure packet access in the true sense and no longer providing circuit-domain services.
(3) Support for multiple access technologies, including interoperability with the existing 3GPP systems and access from non-3GPP networks (e.g., WLAN, WiMAX), as well as roaming and handover between 3GPP and non-3GPP networks.
(4) Added support for real-time services, simplifying the network architecture and the user service connection signaling process and reducing the service connection latency; the time required for connection establishment is less than 200 ms.
(5) A flat network hierarchy, with the user-plane nodes compressed as much as possible: the RNC is removed from the access network, and the user-plane nodes of the core network are merged into one.
SAE's objectives are consistent with those of LTE. The first is to improve performance: reduce latency, provide higher user data rates, improve system capacity and coverage, and reduce operating cost. The second is to achieve flexible mobility configuration and implementation over an IP-network-based existing or new access technology. The third is to optimize the IP transport network. However, different from LTE, SAE considers the trends and characteristics of future mobile communications from the perspective of the whole system and determines the future direction of mobile communication from the angle of the network architecture. As wireless interface technologies diversify, a network architecture that meets future trends will make operators more competitive, and users' changing service needs will also be well met.
In the SAE system, a non-3GPP access network is defined as an IP access network whose access technology is outside 3GPP, such as WLAN, WiMAX, or CDMA2000. According to the trust relationship between the EPC and the non-3GPP access network, access networks are divided into trusted non-3GPP access networks and non-trusted non-3GPP access networks. The criterion of trust is not decided by the characteristics of the access network, but by the operator's policies. A trusted non-3GPP access network indicates that the communications between the UE and the EPC are secure, and all communications between the access network and the EPC are transmitted through a pre-established secure link. A non-trusted non-3GPP access network indicates that the communications between the UE and the EPC are not secure, and when a UE joins a non-trusted non-3GPP network, an Internet Protocol Security (IPSec) tunnel between the access network and the EPC must be established.
According to the different trust relations between the non-3GPP access network and the EPC, and the different protocols used to connect them, six communications scenarios are defined.
(1) Realize the exchange of billing information between the WLAN and the mobile cellular network.
(2) Realize user WLAN access authentication and billing functions via the mobile network.
(3) Support users in accessing mobile network packet-domain services via WLAN.
(4) Support users in switching between WLAN and mobile networks with service continuity, though brief interruptions may occur during the switch.
(5) Support users in seamlessly switching between WLAN and mobile networks, while ensuring service continuity without interruption.
(6) Support users in accessing the mobile network circuit domain via WLAN.
These six scenarios, in order, provide users with increasingly complete convergence functions. At present, for WLAN and 2G/3G networks, the first three access scenarios have been realized; the remaining scenarios are expected to follow as the technologies develop.
The core network is the core of the entire network's control and service functions. The driving forces of its development and evolution mainly come from two aspects: service requirements and the evolution requirements of the wireless access network, which are the external causes of core network evolution. Major changes in the wireless network have brought changes in the supporting systems. For example, the change from voice services to data services led the core network to evolve into circuit and packet domains. With the demand for high-speed data services, the 2G network evolved into the 3G network, and the core network entered the soft-switch era. With the demand for differentiated data services, the core network further evolved into the SAE network: the network becomes flat and Policy Control and Charging (PCC) is introduced for differentiated services. On the other hand, the internal requirements of the core network mainly mean enhancing network security, improving resource utilization, and optimizing the network structure. For example, the separation of call control and bearer has improved networking flexibility and equipment utilization, and the all-IP flat structure introduced by SAE expanded the transmission bandwidth and reduced the end-to-end latency.
EPC core network planning is the upgrading and evolution of soft-switch network planning. In principle, EPC and soft-switch networks are similar in system planning; both need analysis of present network data, user and traffic prediction, preliminary exploration, network topology design, system design (including service tests), and other steps. However, since SAE is based on a flat, full-IP system, in network planning we must consider its unique system characteristics, so as to give full play to SAE's technical advantages in convergence, full-IP bearing, and high transmission efficiency. In SAE network planning and construction, we also need to consider the actual deployment of 2G, 3G, fixed-line, and WLAN networks. Not only should we learn from the planning schemes of the existing systems and adapt to local conditions, but we should also take into account system coexistence and balanced operation, striving to achieve an ideal combination of excellent performance and low cost at the design stage.
Network planning is the key work before network construction. According to factors like service demand, social and economic development, regional conditions, and supporting facilities, the capacity, QoS, and network structure layout are configured, and the construction principles of overall planning and step-by-step implementation are adopted. Overall planning means, in accordance with the principles of covering the whole process and the whole network, smooth expansion, and overall stability of the network structure, developing a plan for the SAE network (with short-to-middle-term rolling planning for small-range amendments). Step-by-step implementation means ensuring the synchronization of network construction and market demand, avoiding resource waste, matching each project's investment plan, and ensuring the independence of planning results as well as the sound construction and development of the network.
The EPC core network includes the S-GW, MME, P-GW, HSS, PCC, DNS, CG, and BG, and is constructed as a full-IP network.
The MME is the only pure control-plane equipment of the core network. Its main functions are access control, mobility management, session management, network element selection, and storage of user bearer information.
The S-GW is located on the user plane. Each UE attached to the EPS is served by a single S-GW at any point in time.
The P-GW is located on the user plane; it is the gateway toward the Packet Data Network (PDN) and terminates the SGi interface.
The HSS is a database used to store user subscription information. The home network can contain one or more HSSs. The HSS can also generate user security information for authentication, integrity protection, and encryption, and it supports the call control and session management entities in different domains and subsystems.
The Policy and Charging Rules Function (PCRF) is the policy and charging control unit.
For the capacity allocation of the base station, we should mainly consider two aspects: the first is system bandwidth selection, and the second is base station carrier configuration.
From the perspective of improving spectrum efficiency, we should use a large bandwidth, such as 20 MHz or 10 MHz. However, from the perspective of improving coverage, we should choose the lower frequency bands as much as possible. Most of the frequency bands below 2 GHz among the present 3GPP-defined LTE bands are currently occupied by other systems. These bands may be released gradually in the future, but they are relatively dispersed, and the variable-bandwidth characteristic of LTE makes it possible to utilize such dispersed bands.
When making network plans in practice, we should consider the frequency band allocation of the operators and choose the largest possible bandwidth configuration.
In LTE capacity estimation, we need to consider two aspects: the control channel and the service channel. Downlink capacity should be estimated from the maximum number of simultaneously scheduled users supported by the downlink control channel, the Physical Downlink Control CHannel (PDCCH), and from the average throughput of a single cell on the Physical Downlink Shared CHannel (PDSCH). Uplink capacity is estimated mainly from the number of users that can be carried on one Physical Resource Block (PRB) of the uplink control channel (PUCCH) and from the average throughput of a single cell on the PUSCH.
The carrier configuration of the base station should be considered from two aspects: the number of users that can be scheduled by the control channel and the traffic that can be carried by the service channel. The overall estimation formula is shown in (Eq. 7.1).
\text{Number of carriers} = \max\left( \frac{\text{Requirement of data traffic}}{\text{Throughput per carrier}},\ \frac{\text{Number of users}}{\text{Scheduled users per carrier}} \right) \quad (7.1)
When the unit is converted to GByte, the data traffic that a single cell can process per hour is:
\frac{35\ \text{Mbit/s}}{8192} \times 3600\ \text{s} \approx 15.38\ \text{GB/h} \quad (7.3)
The busy-time processable data traffic when the cell average load is above 50% is therefore about 15.38 GB/h x 50% = 7.69 GB/h.
Suppose the traffic per user per month is 5 GB; the average busy-time data traffic per user then follows accordingly.
[Method Two]
Single-cell data traffic capacity: 35 Mbit/s
Busy-time cell data traffic capacity when the cell average load is above 50%: 17.5 Mbit/s
Required data rate per activated user: 1 Mbit/s
Busy-time user activation ratio: 1/20
Average busy-time throughput per user: 50 kbit/s
Number of users that a three-sector base station can accommodate: approximately 3 x 17.5 Mbit/s / 50 kbit/s = 1050 users
Through the above calculation, it can be seen that an LTE base station can accommodate a considerable number of broadband data users.
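As a quick illustration of the dimensioning logic above, the following Python sketch reproduces the carrier-count rule of (Eq. 7.1) and the [Method Two] user estimate. The function names are ours, and the numerical values (35 Mbit/s cell capacity, 50% busy-time load, 1/20 activation ratio, three sectors) are simply the example figures quoted in the text.

import math

def carriers_needed(traffic_demand_mbps, users, throughput_per_carrier_mbps,
                    scheduled_users_per_carrier):
    # Eq. (7.1): take the larger of the traffic-driven and user-driven estimates
    by_traffic = traffic_demand_mbps / throughput_per_carrier_mbps
    by_users = users / scheduled_users_per_carrier
    return math.ceil(max(by_traffic, by_users))

# [Method Two] example figures from the text
cell_capacity_mbps = 35.0                           # single-cell data traffic capacity
busy_time_capacity_mbps = cell_capacity_mbps * 0.5  # 50% average load -> 17.5 Mbit/s
per_user_busy_kbps = 1000.0 / 20.0                  # 1 Mbit/s per active user, 1/20 active
users_per_cell = busy_time_capacity_mbps * 1000.0 / per_user_busy_kbps  # 350 users
users_per_site = 3 * users_per_cell                 # three sectors -> 1050 users
print(users_per_site)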
The transmission requirements of an LTE base station mainly concern the transmission bandwidth of the S1 and X2 interfaces. At the early stage of network construction, when both the overall network traffic level and the average busy-time throughput requirement are low, the busy-time traffic requirements and the minimum peak rate requirements should be considered in the interface transmission bandwidth configuration in order to give full play to the advantages of LTE technology.
The single-station transmission bandwidth requirement of LTE can be estimated from the service throughput and the transmission overhead of each sector.
According to the test results of the FDD test network, the S1 interface user-plane transmission overhead is between 2% and 10%. At the early stage of network construction, if we aim to meet the single-sector (20 MHz bandwidth, 2x2 MIMO) peak data rate requirement, the S1 interface bandwidth should be configured accordingly.
According to the test results of the TDD test network, the S1 interface user-plane transmission overhead is generally less than 10%. At the early stage of network construction, if we aim to meet the single-sector (20 MHz bandwidth) peak data rate requirement, the S1 interface bandwidth should be configured accordingly.
It should be noted that the peak data rate used here is the downlink peak data rate with a 3:1 time slot ratio. If the time slot ratio is 2:2, the downlink peak data rate should be reduced by 1/3.
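The estimation rule described above can be written as a small helper. This is only a sketch of the text's reasoning: the peak rate is left as an input because the concrete figures are not reproduced here, and the 10% overhead default and the three-sector site are assumptions taken from the surrounding discussion.

def s1_bandwidth_mbps(sector_peak_rate_mbps, overhead_ratio=0.10, sectors=3):
    # Per-sector S1 user-plane bandwidth = peak rate plus transport overhead;
    # a site-level figure simply sums the sectors. The 10% default reflects the
    # upper end of the overhead range quoted in the text.
    per_sector = sector_peak_rate_mbps * (1.0 + overhead_ratio)
    return per_sector * sectors

def tdd_peak_rate_2_2(peak_rate_3_1_mbps):
    # A 2:2 slot ratio gives roughly 1/3 less downlink peak rate than 3:1
    return peak_rate_3_1_mbps * (2.0 / 3.0)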
The wireless propagation model plays a key role in the link budget, and the coverage radius of the base station is determined by the maximum path loss allowed in the link budget. Wireless propagation models fall into two types, indoor and outdoor, which differ in their parameters. In an outdoor propagation model, the landforms and buildings along the propagation path are influencing factors that must be considered, because signal fading differs in different environments. Wireless propagation in free space has the minimum signal fading; the fading in open/suburban areas is greater than in free space; the fading in general urban areas is greater than in open/suburban areas; and the fading in dense urban areas is greater than in general urban areas. The features of indoor propagation models are low transmit power, small coverage, and a complex surrounding environment. In the following we introduce several common propagation models.
1. Free space model
Free space represents an ideal space, or a space composed of isotropic media. When an electromagnetic wave propagates in this space, there is no reflection, diffraction, or absorption; the propagation loss is caused only by the spreading of the electromagnetic wave. Satellite communications and LOS microwave links are typical examples of free-space propagation. Under certain conditions, the base station and terminal antennas can be installed at heights such that the communication between them is LOS. If a clear LOS path exists between the transmitting and receiving antennas, the path loss follows the free-space model, whose standard form is recalled below. In it, d is the distance between the terminal and the base station in km and f is the frequency in MHz. This model is applicable when the base station antenna and the terminal are installed at a certain height and there is LOS between them.
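For reference, the standard free-space loss expression in these units, to which the omitted formula presumably corresponds, is:

L_{\mathrm{fs}} = 32.44 + 20 \lg d + 20 \lg f \quad [\mathrm{dB}]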
2. Okumura-Hata model
As an empirical formulation derived from the Okumura model, the Hata model is mainly used in urban areas; its application range is shown in Table 7.2. The Okumura-Hata model can be expressed as (Eq. 7.12), where L is the path loss in urban areas in dB, f is the system operating frequency in MHz, hb is the height of the base station antenna in m, hre is the height of the terminal antenna in m, a(hre) is the antenna height correction factor, and d is the distance between the terminal and the base station in km. The standard urban form of the model is recalled below.
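For reference, the widely used urban form of the Okumura-Hata model, to which (Eq. 7.12) presumably corresponds, is:

L = 69.55 + 26.16 \lg f - 13.82 \lg h_b - a(h_{re}) + (44.9 - 6.55 \lg h_b) \lg d \quad [\mathrm{dB}]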
3. Cost231-Hata model
The Cost231-Hata model can be used as the propagation model for macro cell base stations; its application range is shown in Table 7.3. The Cost231-Hata model can be expressed as (Eq. 7.16), and a commonly cited form of it is recalled at the end of this item. In it, f is the system operating frequency in MHz, hb is the height of the base station antenna in m, hm is the height of the terminal antenna in m, d is the distance between the terminal and the base station in km, and a(hm) is the terminal antenna height correction factor, which is related to the antenna height, operating frequency, and environment.
The value of Cm is determined by the landscape type. The values of Cm in the standard Cost231-Hata model are as follows:
Big cities: Cm = 3
Medium and small cities: Cm = 0
Suburbs: Cm = -2[lg(f/28)]^2 - 5.4
Rural open areas: Cm = -4.78(lg f)^2 + 18.33 lg f - 40.98
Since the frequencies of some mobile communications networks exceed 2 GHz, such as the 2.3 GHz, 2.6 GHz, and 3.5 GHz LTE bands, they exceed the application range of the standard Cost231-Hata model. Therefore, in actual network planning and design, the Cost231-Hata model must be corrected based on the results of CW (continuous wave) tests.
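For reference, the commonly cited form of the Cost231-Hata model, to which (Eq. 7.16) presumably corresponds, is:

L = 46.3 + 33.9 \lg f - 13.82 \lg h_b - a(h_m) + (44.9 - 6.55 \lg h_b) \lg d + C_m \quad [\mathrm{dB}]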
4. Standard Propagation Model (SPM)
The SPM is particularly suitable for predicting path loss in the 150-3500 MHz frequency band over long distances (1 km < d < 20 km), and it is applicable to a variety of cellular mobile communications technologies. The model is based on the terrain profile and the diffraction mechanism, and it takes clutter and effective antenna heights into account when calculating the path loss. This model can be used with any technology and is calculated according to (Eq. 7.18).
L_{SPM} = K_1 + K_2 \lg d + K_3 \lg H_{Txeff} + K_4 \cdot \mathrm{DiffractionLoss} + K_5 \lg d \cdot \lg H_{Txeff} + K_6 H_{Rxeff} + K_{clutter} f_{clutter} \quad (7.18)
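A minimal sketch of how (Eq. 7.18) could be evaluated in a planning script is shown below. The function name and argument layout are ours, and the K coefficients are left as inputs because they are calibration constants that must be fitted to CW drive-test measurements rather than values given in this book.

import math

def spm_path_loss(d, h_tx_eff, h_rx_eff, k1, k2, k3, k4, k5, k6,
                  diffraction_loss=0.0, k_clutter=0.0, f_clutter=0.0):
    # Eq. (7.18): d is the Tx-Rx distance, h_tx_eff / h_rx_eff are the effective
    # antenna heights; all K coefficients are supplied by the caller.
    lg = math.log10
    return (k1 + k2 * lg(d) + k3 * lg(h_tx_eff) + k4 * diffraction_loss
            + k5 * lg(d) * lg(h_tx_eff) + k6 * h_rx_eff + k_clutter * f_clutter)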
Coverage Design
Capacity Design
The TD-LTE system capacity is determined by many factors. First are the fixed configuration and algorithm performance, including the single-sector frequency bandwidth, time slot configuration mode, antenna technology, frequency usage, inter-cell interference cancellation technology, resource scheduling algorithm, etc. Second, the actual network's overall channel environment and link quality affect the TD-LTE network's resource allocation and the selection of modulation and coding modes, so the network structure also has a crucial impact on TD-LTE capacity.
(1) Single-sector frequency bandwidth. TD-LTE supports flexible bandwidth configurations of 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz, and 20 MHz. Obviously, with a greater bandwidth, more network resources are available and the system capacity is greater.
(2) Time slot configuration mode. TD-LTE uses TDD, which can flexibly configure the uplink-downlink time slot ratio according to the different uplink-downlink traffic proportions in an area. The present protocol defines seven uplink-downlink time slot configurations, and the special subframe has nine configurations to choose from; different configurations yield distinctly different uplink and downlink throughput.
(3) Antenna technology. TD-LTE uses multi-antenna technology, so that the network can, based on actual network requirements and antenna resources, realize single-stream diversity, multi-stream multiplexing, adaptive switching between multiplexing and diversity, single-stream beamforming, multi-stream beamforming, etc. These have different usage scenarios, but all affect the user capacity to a certain extent.
(4) Frequency usage. Current analysis shows that a TD-LTE network can be deployed as a co-frequency network, but the single-cell capacity of a co-frequency network with a given bandwidth will be poorer than that of a network using different frequencies. So in actual operation we should comprehensively consider frequency resources, capacity requirements, and other factors to determine the frequency usage.
(5) Inter-cell interference cancellation technology. For the TD-LTE system, due to the characteristics of OFDMA, intra-system interference mainly comes from other cells using the same frequency. Co-channel interference reduces the user's SNR and thus affects the user capacity, so the effectiveness of interference cancellation technology affects the overall system capacity and the cell-edge user data rate.
(6) Resource scheduling algorithm. TD-LTE adopts adaptive modulation and coding, so that the network can track channel-quality feedback in real time and dynamically adjust each user's coding and modulation modes and occupied resources to achieve optimal performance. Therefore, the overall TD-LTE capacity performance is closely related to the resource scheduling algorithm; a good scheduling algorithm can significantly improve system capacity and user data rates.
(7) Network structure. TD-LTE user throughput depends on the quality of the user's wireless channel environment, and cell throughput is determined by the overall cell channel environment. The most critical factors affecting the overall cell channel environment are the network structure and the cell coverage radius. In TD-LTE planning, we should pay more attention to the network structure than in 2G/3G systems, select sites strictly in accordance with the inter-site distance principle, and avoid overly high sites and sites deviating greatly from the regular cellular structure.
TD-LTE capacity evaluation indicators.
The TD-LTE system capacity analysis can be separated into the control plane and the user plane. Control-plane capacity indicators include the number of simultaneously scheduled users and the number of simultaneously online users. The number of simultaneously scheduled users is the basic indicator for system evaluation, specifically including uplink and downlink control channel capacity, which is limited by air interface resources and channel configuration. The number of simultaneously online users is not only limited by the channel resources of the air interface control plane, but is also closely related to the hardware processing capacity of the equipment. User-plane capacity indicators can be divided by service type into Voice over Internet Protocol (VoIP) services and non-VoIP data services. Non-VoIP data service indicators include cell peak throughput, cell average throughput, cell-edge data rate, etc.; the VoIP service indicator is the number of VoIP users. The basic indicators of the user plane are the cell average throughput and the number of VoIP users. These indicators are briefly discussed below.
(1) Number of simultaneously scheduled users: the number of users that the system can schedule in every TTI.
(2) Number of simultaneously online (activated) users: the number of users that maintain a connected state with the system.
(3) Cell average throughput: when users are distributed according to certain rules, the average cell throughput equals the sum of the throughput of all cells divided by the number of cells.
(4) Cell edge throughput: the throughput of users distributed at the cell edge. In the simulation system, an edge user is defined as the user whose throughput sits at the 5th percentile when all users in the network are ranked by throughput (a short numerical illustration follows this list).
(5) Number of VoIP users: the total number of VoIP users in the cell. The number of VoIP users is related to the bandwidth configuration, control channel resources, and the VoIP scheduling algorithm.
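As a small illustration of indicator (4), the cell-edge throughput can be computed as the 5th percentile of the per-user throughput samples produced by a simulation. The helper and the sample values below are ours, not figures from the book.

import numpy as np

def cell_edge_throughput(user_throughputs_mbps):
    # Indicator (4): the 5th percentile of the per-user throughput distribution,
    # i.e., 95% of users achieve a higher throughput than this value.
    return float(np.percentile(np.asarray(user_throughputs_mbps), 5))

# Hypothetical per-user throughputs in Mbit/s
print(cell_edge_throughput([0.4, 1.2, 2.5, 3.1, 5.0, 7.8, 9.6, 12.0]))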
downlink frequency near the base station, so as to ensure that there is no strong interference source. The isolation between the LTE site and other wireless equipment operating on similar frequencies should be fully considered, and sites where interference cannot be resolved, such as those near paging, microwave, or other equipment on similar frequencies, should be avoided.
(5) The site should be selected in locations with convenient transportation, good power supply, and a safe environment. A base station is equipment that needs long-term stable operation and regular maintenance; any failure is likely to reduce users' trust in the network operator and their support for the network, causing loss of users and income and even operational failure. Transportation and mains power supply must allow the base station to operate at high power for long periods and to be maintained promptly by maintenance staff. Sites should not be selected near flammable or explosive buildings and stockpiles, or near industrial enterprises that emit harmful gases, heavy smoke, dust, or hazardous substances in their production processes.
(6) Investment restrictions. In site selection, sites with lower cost should be preferred; for high-cost sites, lower-cost alternatives should be considered. In addition, where the overall layout is not affected, existing equipment rooms, power supplies, and other facilities should be utilized as much as possible. At the early stage of network construction, when funds are insufficient, coverage of important users and high-density user areas should be guaranteed as much as possible.
Indoor distribution is a successful solution for improving the mobile communications environment for indoor user groups inside buildings. In recent years, it has been widely adopted by mobile operators across the country.
An indoor distribution system provides a good solution for indoor signal coverage. Its principle is to use an indoor antenna distribution system to evenly distribute the signals of the mobile base station to every corner of a building, so as to ensure ideal signal coverage in the indoor area.
Construction of an indoor distribution system can comprehensively improve call quality within buildings, increase the mobile phone connection rate, and open up high-quality indoor mobile communications areas. Meanwhile, micro-cellular systems can offload traffic from the outdoor macro cells and expand network capacity, so as to improve the overall service level of the mobile network.
An indoor distribution system introduces the base station signal into the building, distributes it appropriately via power-splitting devices, and then transmits the signals via indoor antennas.
With the increasing number of high-rise buildings in cities, user expectations are also rising. These buildings are large and solidly constructed, and they strongly shield mobile telephone signals. Especially in the lower floors of large buildings, underground shopping malls, underground parking lots, and similar environments, mobile communications signals are very weak and mobile phones cannot be used normally, forming blind and shadow areas of mobile communications. On the middle floors, due to interference from surrounding base stations, a ping-pong effect occurs in which mobile phones frequently hand over or drop calls, seriously affecting normal use. On the higher floors, due to the height limitation of base station antennas, normal coverage cannot be achieved, which also creates blind areas. Besides, in some office buildings, although phone calls can be made normally, data access is difficult because of high user density and base station channel congestion. In a fiercely competitive environment, indoor mobile network coverage, capacity, and quality are key factors for operators to gain a competitive advantage; indoor coverage fundamentally reflects the service level of the mobile network and has remained a priority of mobile operators in recent years.
WLAN, featuring high throughput and low cost, can be well combined with Internet services to provide users with convenient high-speed wireless Internet access. With the rapid development of the Internet, WLAN equipment has quickly become popular all over the world, offering great convenience for people's work and life. According to Wi-Fi Alliance statistics, there are currently over 1 billion WLAN users globally. Telecom operators also attach great importance to WLAN and see it as an important complement to and extension of fixed and cellular networks; they deploy WLAN hot spots on a large scale to provide services to the public, which further promotes WLAN development.
In order to meet growing market demand, WLAN technologies and standards are constantly developing and improving, and data transmission capabilities continue to increase. After more than 20 years of development, the maximum information transmission rate of current IEEE 802.11n WLAN devices reaches up to 600 Mbit/s. In addition, 802.11n devices also use two channel access mechanisms, static 40 MHz and dynamic 20/40 MHz, significantly improving system throughput. IEEE has launched the next-generation IEEE 802.11ac/ad WLAN technical standards, whose data throughput will reach up to 7 Gbit/s, better meeting the market demand for high-throughput wireless data services such as wireless High Definition (HD) video transmission. The IEEE 802.11ac project began as early as the first half of 2008, when it was known as Very High Throughput, with the goal of reaching 1 Gbit/s. Faced with the requirements of multiple HD video and lossless audio streams for over 1 Gbit/s of code rate, IEEE 802.11ac was insufficient, especially in indoor high-speed data transmission environments. Thus IEEE 802.11ad was proposed, which is intended for the transmission of domestic wireless HD audio and video signals, bringing more complete HD video solutions for home multimedia applications.
In order to achieve higher wireless transmission rates, IEEE 802.11ad abandons the crowded 2.4 GHz and 5 GHz frequency bands and instead uses a high-frequency carrier in the 60 GHz spectrum, in the unallocated 57-66 GHz band. With such a wide bandwidth, the data transmission rate can be greatly improved. Since the 60 GHz spectrum in most countries (including the United States) offers a large amount of available frequency, IEEE 802.11ad can realize simultaneous multi-channel transmission with the support of MIMO technology, with each channel's rate exceeding 1 Gbit/s. On the basis of integrating IEEE 802.11s and IEEE 802.11z, it can be used to realize file transmission and data synchronization between devices at speeds more than 1000 times faster than second-generation Bluetooth. Of course, its main purpose is to achieve HD signal transmission.
IEEE 802.11ac
802.11ac is designed specifically for the 5 GHz band. The characteristics of the new radio frequency design improve the performance of existing wireless LANs to a level comparable with wired Gigabit networks. 802.11ac is a new IEEE wireless technology standard; it draws on the advantages of 802.11n and optimizes them further. In addition to its most obvious characteristic of high throughput, it brings improvements in many other aspects.
Improvements in the 802.11ac standard physical layer
The 802.11 standards include the physical layer and the MAC protocol. Since the first release, the physical layer has undergone a number of important additions and amendments, while most of the basic MAC features remain unchanged. Here we focus on the changes in the physical layer of 802.11ac.
(1) Wider channel bandwidth
802.11ac supports 80 MHz channel bandwidth, with optional use of a continuous 160 MHz band or a discontinuous 80+80 MHz band. Its channel distribution is shown in Fig. 7.4.
IEEE 802.11ad
Faced with the requirements of multiple HD video and lossless audio streams for over 1 Gbit/s of code rate, 802.11ac was insufficient. Therefore, 802.11ad came into being; it achieves an ultra-high data rate of up to 7 Gbit/s and is mainly used for domestic wireless HD audio and video signal transmission, bringing more complete HD video solutions for home multimedia applications.
After the 60 GHz VHT project was approved in December 2009, the 802.11 VHT working group passed Draft 2.0 of 802.11ad in March 2011, and the official standard was expected to be published by the end of 2012.
In order to achieve higher wireless transmission rates, 802.11ad abandons the crowded 2.4 GHz and 5 GHz bands; instead it uses a high-frequency carrier in the 60 GHz spectrum (57-66 GHz). Since the 60 GHz spectrum in most countries offers a large amount of available frequency, the bandwidth of each 802.11ad channel can reach 2.16 GHz, about 50 times that of an 802.11n channel. In addition, 802.11ad also uses adaptive beamforming, a variety of physical layer types, the PBSS network architecture, mmWave channel access, fast session transfer, and other enhancement technologies to improve system throughput and coverage. However, 802.11ad also faces technical limitations: for example, a 60 GHz carrier has very poor penetration ability and suffers serious attenuation in the air, greatly limiting its transmission distance and signal coverage, so a valid connection is possible only over a small range.
and expands the signal coverage. Besides, if there is an obstacle in the line of sight of the transceiver, the transceiver can quickly steer around the obstacle and rebuild a new link for communication. Beamforming can be realized by different techniques such as beam switching, phase-weighted antenna arrays, multiple antenna arrays, etc.
The beamforming protocol consists of three stages: Sector Level Sweep (SLS), Beam Refinement Phase (BRP), and the Tracking phase. SLS is divided into transmit SLS and receive SLS; the former is used to determine the optimal transmitting direction of the transmitting device and the latter to determine the optimal receiving direction of the receiving device. BRP further optimizes the beam direction through joint adjustment of the transmit and receive beams. The Tracking stage is used to dynamically adjust the beam according to channel changes during data transmission.
(4) New network architecture: PBSS
802.11ad defines a new network architecture, the Personal Basic Service Set (PBSS), which allows direct communications between two devices. The PBSS architecture falls into the ad hoc network category, similar to the Independent Basic Service Set (IBSS), but the difference between them is that in the PBSS architecture one party's STAtion (STA) takes on the PBSS Control Point (PCP) function, and only the PCP can transmit beacon frames.
(5) mmWave channel access mechanism
The medium time in a PBSS system is divided into a Beacon Interval (BI) structure, and the sub-intervals within a BI are the access times. Different channel access times follow different access rules. As shown in Fig. 7.5, one BI contains the following four types of access time. First, Beacon Time (BT): the PCP sends beacon frames in different directions and discovers new STAs. Second, Association BeamForming Training time (A-BFT): beamforming training is performed between the PCP and STAs to establish beamformed links. Third, Announcement Time (AT): used to transfer control frames between the PCP and STAs. Fourth, Data Transmission Time (DTT): used to accomplish data frame exchange between STAs, consisting of scheduling-based Service Periods (SP) and Contention-Based access Periods (CBP), or a combination of any number of the two. During an SP, only the scheduled STA can access the channel; during a CBP, all STAs can access the channel based on the 802.11 DCF and HCF mechanisms. DTT scheduling information is conveyed between the PCP and STAs through beacon frames and announcement frames.
WLAN has seen large-scale deployment and independent networking, and has even replaced the wired network in some places. The traditional WLAN architecture has been unable to meet the requirements of large-scale networks. Therefore, the Internet Engineering Task Force (IETF) established the Control And Provisioning of Wireless Access Points (CAPWAP) working group to study large-scale WLAN solutions. After examining the current mainstream WLAN solutions, the CAPWAP working group divided WLAN systems into three types: autonomous, centralized, and distributed network architectures.
1. Autonomous architecture
The early WLAN structure was an autonomous architecture. In this architecture, all 802.11 functions, namely the 802.11 PHY layer and MAC functions, are performed by the AP. In addition, some complex functions, such as the 802.11i-defined security functions, the 802.11-defined QoS functions, and even RADIUS client functions, are implemented by the AP. The autonomous WLAN network architecture is shown in Fig. 7.7.
In it, PL(d, f) is the indoor path loss and PLFS(d, f) is the free-space loss, which is shown in expression (Eq. 7.20). In the additional term, d is the propagation distance and n is the attenuation factor. In different wireless environments, the attenuation factor n takes different values. In free space, the path attenuation is proportional to the square of the distance, i.e., the attenuation factor is 2. Within a building, the effect of distance on path loss is greater than in free space. Generally speaking, the value of n is 2.0-2.5 in an open environment, 2.5-3.0 in a semi-open environment, and 3.0-3.5 in a closed environment. The theoretically calculated values of typical path propagation loss are shown in Table 7.5.
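Although the exact expression of the attenuation-factor model is not reproduced above, the description corresponds to the usual log-distance form, which, under that assumption, can be sketched as

PL(d, f) = PL_{FS}(d_0, f) + 10\, n \, \lg\!\left(\frac{d}{d_0}\right)

where d_0 is a reference distance and n is the attenuation factor listed above.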
3. AP signal link loss calculation
According to the model, the indoor path loss equals the free-space loss plus the additional loss factor, and it increases with distance according to the attenuation factor. The receiving level estimation formula is shown in (Eq. 7.22) and recalled below; in it, Pr[dB] is the minimum receiving level, i.e., the receiving sensitivity of the AP at different transmission rates; Pt[dB] is the maximum transmit power; Gt[dB] is the transmitting antenna gain; Gr[dB] is the receiving antenna gain; and PL[dB] is the path loss. From this we can theoretically calculate the limiting propagation distance of the AP signal.
Assuming that the antenna transmitting and receiving gains are zero, when the AP transmit power is 16.2 dBm, the maximum theoretical indoor propagation distance is shown in Table 7.6.
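With the quantities defined above, (Eq. 7.22) is presumably the usual link-budget relation, restated here for convenience:

P_r = P_t + G_t + G_r - PL

so the maximum tolerable path loss, and hence the limiting distance, follows from PL_{max} = P_t + G_t + G_r - P_r.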
4. AP signal penetration loss
The empirical penetration losses of 2.4 GHz electromagnetic waves for a variety of building materials are as follows:
(1) Blocking by partition walls (brick wall thickness 100-300 mm): 20-40 dB
(2) Blocking by floors: above 30 dB
(3) Blocking by wooden furniture, doors, and other wooden partitions: 2-15 dB
(4) Thick glass (12 mm): 10 dB
In addition, when measuring the penetration loss of AP signals, we need to consider the incident angle of the AP signal. For a 0.5 m thick wall, when the line between the AP and the coverage area forms a 45-degree incident angle, the wall is equivalent to a wall about 1 m thick; at a 2-degree angle, it is equivalent to a wall more than 14 m thick. Therefore, in order to get a better reception effect, we should try to make the AP signal pass through walls or ceilings vertically (at a 90-degree angle).
When planning a WLAN network, we should first consider the interaction between the AP and the wireless network adapter signals, as well as the user's effective access to the network. Therefore, how to ensure wireless signal coverage is a factor that must be considered in AP selection. Since WLAN operates in a high frequency band, has low sensitivity (compared with mobile base stations/mobile phones), and experiences strong signal reflection and diffraction, there are several different indoor coverage solutions that planning personnel can choose from according to the actual situation on site.
An on-site investigation must be carried out before design to find out the following points:
(1) The coverage area and the signal coverage quality requirements, since different locations have different coverage requirements.
(2) The existing signal distribution of the coverage area, to grasp the blind spots, hot spots, and signal collision areas.
(3) The composition of the buildings in the coverage area and the signal blocking.
(4) The signal access position and mode.
(5) The locations where equipment can be installed.
After the on-site investigation, we can choose among three different plans according to the actual situation: the co-cellular-network indoor distribution system coverage scheme, the independent AP distribution coverage scheme, and the cross-patch coverage scheme. Brief introductions to the three schemes follow.
1. Co-cellular-network indoor distribution system coverage scheme
(1) Application scope and usage requirements
Application scope: mid-to-large-scale indoor coverage. The system structure is complex. It is mainly used for medium blind-area coverage or important public places, meeting the coverage requirements of places such as hotels, airports, and conference centers, but it is not suitable for networks with high capacity requirements.
Usage requirements: the system is an indoor coverage system, and the equipment is required to be installed indoors.
(2) Engineering design experience
Generally, if the antenna radiated power is 10 dBm and a 2 dBi small omnidirectional indoor antenna is used, then within 30 meters of the antenna, in a spacious conference center without brick walls, the WLAN coverage level can reach -75 dBm.
For hotel rooms with dense wall structures, if the antenna port radiated power is 10 dBm and a 2 dBi small omnidirectional indoor antenna is used, then within 8 to 10 meters of the antenna the in-room signal can reach about -70 to -85 dBm. In general, the distance from the antenna to the room door should be controlled within 4 m.
In the power distribution, signals should be distributed as evenly as possible. In the design, the appearance of more than two power dividers (or couplers) in the path from the base station to any antenna should be avoided, so as to ensure effective access of uplink signals.
This scheme is suitable for indoor coverage of large hotel lobbies, airports, and conference centers with lower capacity requirements.
2. Independent AP distribution coverage scheme
(1) Application scope and usage requirements
Application scope: suitable for small and medium-scale indoor coverage where there is no indoor distribution system, for small-area coverage or important public places, such as hotels and conference centers. The system has a simple structure and is covered by independent APs.
Usage requirements: indoor coverage, applicable to the design and installation of indoor equipment.
(2) Engineering design experience
If only one AP is installed in a hall, the AP is best placed in a central position, preferably on the ceiling. If two APs are installed within the same space, they can be placed on the two diagonals.
The number of walls and ceilings the signal must penetrate should be kept minimal. A 2.4 GHz signal can penetrate walls and ceilings, but each wall or ceiling reduces the AP signal coverage by 1-30 meters. The AP and the terminals should be placed so that the blocking path through walls and ceilings is kept as short as possible and the loss is minimized.
The straight-line path between the AP and the coverage area should be considered; the AP location should be chosen so that the signal can pass through walls or ceilings vertically (at a 90-degree angle).
Different building materials have different transmission effects. Buildings with metal frames or metal doors will shorten the WLAN signal transmission distance. Therefore, the AP should be placed where signals pass through drywall or open doorways, rather than where signals must penetrate metal materials.
The AP antenna direction is adjustable, so the AP should be installed where the antenna main beam directly faces the target coverage area to ensure a good coverage effect.
The AP should be kept away from electronic equipment (1-2 meters), such as microwave ovens, monitors, motors, etc.
3. Cross-patch coverage scheme
(1) Application scope and usage requirements
Application scope: for large-scale indoor coverage; the system structural design is relatively complex but the renovation workload is smaller. A WLAN signal source is connected to the indoor distribution system, complemented by independently laid-out APs covering the few remaining blind spots, so that seamless coverage with a good effect can ultimately be achieved. It can meet the coverage needs of public places and open areas such as halls and airports, and can also meet the medium-capacity needs of hotels and conference centers.
Usage requirements: indoor coverage, applicable to the design and installation of indoor equipment.
(2) Engineering design experience
AP signal source access is similar to the co-cellular-network distributed antenna system coverage scheme in terms of the distribution system, design requirements, and hardware modification requirements. It should be noted that when the antenna locations of the original indoor system do not meet the WLAN coverage requirements, instead of moving antennas or adding terminal antennas by changing the power dividers, we can use the independent AP planning method to cover a small number of hot spots as a coverage complement.
In the cross-patch coverage planning scheme, when using the original indoor distribution system for WLAN coverage, for problems such as a massive renovation workload, many antenna position changes, and difficult power divider load matching, we can moderately use independent APs to cover blind spots and hot spots as a supplement to the original indoor distribution system, realizing a good overlapping coverage effect. This scheme is characterized by flexible planning. Before design, we should pre-evaluate the renovation workload and the expected coverage effect, and then, after comprehensive consideration, develop an ideal project plan.
In WLAN network planning, the impact of the overall network capacity on system performance is greater than that of AP coverage. To avoid a single user's access slowing down the whole cell's transmission rate, an AP transmitting power threshold should be set. By adjusting the AP transmission power to alter the cell size, we can make cells smaller in the design. In this way, since there are not many users in a small cell, each user's high transmission rate can be guaranteed.
Capacity in communications can be studied from two aspects: theory and practice. The theoretical concept of capacity is based on the amount of information per unit bandwidth (time or area): for voice services, Erl/unit bandwidth/unit area; for data services, bit/unit bandwidth (time or area). These theoretical concepts of capacity are difficult to apply in the actual measurement and comparison of information. The practical concept of capacity is based on the amount of carried traffic (or users) per unit bandwidth (time or area): for voice services, Erl/unit bandwidth or Erl/unit bandwidth/unit area; for data services, channels/unit bandwidth. This practical capacity concept is applicable to the engineering measurement of communications capacity.
Network capacity has no standard definition in the industry; it can be characterized by different parameters from different angles, and the results also differ. From the user's point of view, using basic mathematical theories to study network capacity, we can obtain the following definitions.
Definition 1: Network capacity C is defined as the sum of the communication probabilities of all users, as expressed in (Eq. 7.23):

    C = \sum_{i} Q_i                                                        (7.23)

where Q_i is the communication probability of user i, so that C represents the average number of users that the network can support communicating simultaneously at any given moment.
Definition 2: The communication probability Q_i is the probability that user i can transmit data normally, i.e., that it transmits while its interfering users do not. Assume that user i's communication status Z_i follows a two-point distribution, Z_i \in \{0, 1\}, where Z_i = 0 means the user is not communicating and Z_i = 1 means the user is communicating. Assume the number of users interfering with user i is N_i; then the distribution of Z_i is

    P(Z_i) = \begin{cases} \dfrac{N_i - 1}{N_i}, & Z_i = 0 \\[4pt] \dfrac{1}{N_i}, & Z_i = 1 \end{cases}                  (7.24)
    C = \sum_{i=1}^{M} \frac{1}{N_i}                                        (7.25)

where

    g_{ij} = \begin{cases} 1, & A_i \in A,\; B_j \in D_i \\ 0, & \text{else} \end{cases}                                (7.27)
Based on the coverage matrix G, the network capacity is calculated as follows.
(1) Initialize parameters: let i = 0, j = 0, and set all elements of the array flag[n] to 0.
(2) If g_ij = 1, then set flag[i] = 1.
(3) If i < n, then let i = i + 1 and return to Step (2); otherwise enter the next step.
(4) Let k = 1 and N_j = 0.
(5) If flag[k] = 1, then N_j = N_j + \sum_{l=1}^{m} g_{kl} - 1.
(6) If k < n, then let k = k + 1 and repeat Step (5); otherwise enter the next step.
(7) N_j = N_j + 1; Q_j = 1/N_j.
(8) If j < m, then let j = j + 1 and return to Step (2); otherwise terminate.
After the above process, we obtain each user's communication probability vector Q = {Q_i | i = 1, . . . , m}, and then obtain the capacity value from the network capacity definition (Eq. 7.23).
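A compact Python sketch of this procedure is given below, following our reading of Steps (1)-(8) (the coverage matrix is iterated once per user and the flag array is rebuilt for each user); it is an illustration of the calculation, not code from the trial system.

```python
import numpy as np

def network_capacity(g):
    """Network capacity C = sum_j 1/N_j computed from a coverage matrix g,
    where g[i, j] = 1 if AP i covers user j (our reading of Steps (1)-(8);
    the per-user flag reset is interpretation, not verbatim)."""
    n_ap, n_user = g.shape
    capacity = 0.0
    for j in range(n_user):                       # loop over users
        flag = g[:, j] == 1                       # APs covering user j
        # Interference count: other users sharing at least one covering AP.
        interferers = 0
        for k in range(n_ap):
            if flag[k]:
                interferers += int(g[k, :].sum()) - 1   # exclude user j itself
        n_j = interferers + 1                     # include user j (Step (7))
        capacity += 1.0 / n_j                     # Q_j = 1/N_j
    return capacity

# Example: 2 APs, 3 users; AP0 covers users 0 and 1, AP1 covers user 2.
g = np.array([[1, 1, 0],
              [0, 0, 1]])
print(network_capacity(g))   # 1/2 + 1/2 + 1 = 2.0
```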
According to the definition of network capacity in (Eq. 7.23), the conditions for the optimal network capacity are

    \max C = \max\left(\sum_{i} Q_i\right) = \sum_{i} \max Q_i               (7.28)

    \sum_{i} \max Q_i = \max \sum_{i} \frac{1}{N_i} \;\Longleftrightarrow\; \min N_i        (7.29)

    \max C = \sum_{i=1}^{n} \max Q_i = n                                     (7.30)
(Eq. 7.30) shows that when the network has n users, if n APs are deployed and each AP covers one and only one user, then with all users covered the network has the largest capacity, and the maximum capacity value is n.
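As a quick numeric check of (Eq. 7.30), in the one-AP-per-user deployment described above every user has

    N_i = 1, \quad Q_i = \frac{1}{N_i} = 1 \quad (i = 1, \ldots, n), \qquad C = \sum_{i=1}^{n} Q_i = n,

so each user communicates with probability 1 and the capacity equals the number of users.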
The goal of actual network construction is, on the premise of ensuring the users' network throughput rate, to obtain as much user coverage as possible with as few APs as possible, while pursuing a well-balanced coverage result. The mathematical significance of AP is
capacity. That is to say, the network deployment plan with the maximum capacity calculated in (Eq. 7.30) has no practical application value. Therefore, it is necessary to comprehensively consider different factors and find a relatively optimal network deployment scheme with good overall cost and effectiveness.
A large-scale user application test is the last step in carrying out the network performance evaluation of a communications system. The traditional method is to carry out the relevant work by issuing test user numbers in the pre-commercial stage. For 5G systems, since technologies of various modes will be interconnected, it is absolutely necessary to carry out the relevant tests and verification at the R&D stage, which requires the test environment to provide the appropriate test conditions.
ShanghaiTech University has 6000 students. Together with faculty and staff, the overall size is more than 10 thousand people, which is very suitable for a dense-crowd, massively connected scenario test. In particular, the characteristics of students' needs for new services make this test field very suitable for new service application tests.
The preliminary experiment makes it clear that students' network usage characteristics make them very suitable for large-scale user application tests. Different from traditional voice services, students' curiosity about new types of services makes them more willing to cooperate with the application tests of new services. In addition, the tidal effect of the student population movement is also very suitable for carrying out load balancing technology tests.
Wireless communication field trials involve all aspects of the wireless communication system, covering everything from radio wave propagation characteristics research, to the testing of air interface key technologies, to network performance evaluation and validation. Every link needs a field trial to confirm whether the technology meets the design requirements.
A typical field trial procedure is shown in Fig. 7.10.
This chapter gives typical test cases from three aspects, namely wireless channel measurement and modeling, wireless communications key technology tests, and wireless network performance tests.
Fig. 7.10 A typical field trial procedure: requirement analysis → field suitability check (reconstruct the field if requirements are not met) → parameter configuration → data collection → data analysis → result conformance check (modify and retest if results do not conform) → end
Test Purpose
Test Environment
Test Results
Through field data collection in three test scenarios and the analysis of the statistical fractal characteristics of the stochastic process with three methods, we find that although there are slight differences among the three typical methods in Hurst parameter estimation, the final estimates for the wireless cellular coverage boundaries are basically the same. That is, the estimated average Hurst parameters are approximately 0.9, which shows that the real wireless cellular coverage boundary exhibits a statistical fractal feature.
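The chapter does not name the three estimation methods used; as one commonly used choice, a rescaled-range (R/S) Hurst estimator can be sketched in Python as follows (the window sizes and test signal are illustrative only).

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst parameter of a 1-D series with the classical
    rescaled-range (R/S) method. Illustrative only: the three estimators
    used in the test above are not named in the text."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    sizes = np.unique(np.floor(
        np.logspace(np.log10(min_chunk), np.log10(n // 2), 10)).astype(int))
    log_size, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            chunk = x[start:start + s]
            dev = np.cumsum(chunk - chunk.mean())
            rng = dev.max() - dev.min()
            sd = chunk.std()
            if sd > 0:
                rs_vals.append(rng / sd)
        if rs_vals:
            log_size.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    # Slope of log(R/S) versus log(window size) is the Hurst estimate.
    return np.polyfit(log_size, log_rs, 1)[0]

# White noise gives roughly H = 0.5; a fractal boundary series would give ~0.9.
print(hurst_rs(np.random.randn(4096)))
```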
Test Purpose
managed by the nation, and its management mode is the fixed spectrum management mode. The World Radio Conferences in 1992, 2000, and 2007 allocated many frequency bands in succession for wireless mobile communications systems, including 450 MHz, 700 MHz, 900 MHz, 1800 MHz, 2 GHz, 2.3 GHz, 2.6 GHz, 3.5 GHz, etc., which are the main frequency bands that support the current 2G, 3G, and 4G mobile communications systems such as GSM, TD-SCDMA, CDMA2000, WCDMA, and LTE.
Wireless channel measurement, based on the feature analysis of radio wave propagation, takes the large-scale characteristics of radio wave propagation, that is, the path loss, as its basic content.
Most current wireless network designs are multi-mode, with shared networks and shared antennas. Usually, existing network data are available for a certain network frequency band. As for how to infer, from the coverage characteristics of one network, the propagation coverage in another network's frequency band, we lack objective test data and experience, so we cannot guide front-line engineers to reuse existing network data in other networks that share the site. This requires multi-band radio wave propagation characteristic analysis. Through actual tests and test data analysis, we can find the rules governing the differences in propagation characteristics between different frequency bands, so as to maximize the utilization of existing network data, support the use of coverage data from one network in the coverage prediction of other networks, and improve the coverage prediction accuracy.
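As an illustration of the kind of large-scale analysis involved, the sketch below fits a log-distance path loss model to (distance, path loss) samples with a least-squares fit, returning the path loss exponent and the shadow-fading standard deviation; it is a generic example, not the exact processing chain used in the trial.

```python
import numpy as np

def fit_path_loss(dist_m, pl_db, d0=1.0):
    """Fit the log-distance model PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma
    to measured samples; returns (PL(d0), path loss exponent n, shadow std sigma).
    A generic least-squares fit, not the trial's actual processing chain."""
    d = np.asarray(dist_m, dtype=float)
    pl = np.asarray(pl_db, dtype=float)
    x = 10.0 * np.log10(d / d0)
    n, pl_d0 = np.polyfit(x, pl, 1)          # slope = exponent, intercept = PL(d0)
    sigma = np.std(pl - (pl_d0 + n * x))     # residuals = shadow fading
    return pl_d0, n, sigma

# Example with synthetic samples: exponent 3.5 and 6 dB shadowing.
rng = np.random.default_rng(0)
d = rng.uniform(10, 500, 1000)
pl = 40 + 35 * np.log10(d) + rng.normal(0, 6, d.size)
print(fit_path_loss(d, pl))   # recovers roughly (40, 3.5, 6)
```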
Test Environment
Test Results
For this test, 11 typical frequency bands that are commonly used in wireless cellular networks within the 700 MHz~3500 MHz range were selected for all the test scenarios. Through data collection and processing, we obtained comparison results for the path loss exponent and shadow fading factor between different frequencies in the same scenario and between different scenarios at the same frequency. In the data processing, we studied the geographical averaging of the test sample points in the mobility test and compared the impact of the different processing methods on the results. The following are part of the test conclusions.
(1) Comparing the test results of the suburban, dense urban, and ordinary urban environments, we find that the dense urban environment, with the most obstacles, has a greater path loss attenuation value than the other two communications environments.
(2) Comparing the standard deviations of the test results with the theoretical propagation model, we can see that the standard deviation decreases as the frequency point increases, and the standard deviation is less than 9 dB, which meets the network test requirement on the standard deviation.
(3) Comparing the three data processing methods (10 meters, 10 meters grid, and 17 meters grid), we find that the 17 meters grid can simultaneously meet the data processing requirements from the low frequency band to the high frequency band, and a smaller standard deviation can be obtained.
Test Purpose
Fig. 7.14 Schematic diagram of the CoMP key technologies verification test environment (inter-site CoMP and intra-site CoMP)
Test Environment
For the CoMP key technologies test environment, we built a multi-layer LTE wireless coverage network from outdoor to indoor according to the test requirements, as shown in Fig. 7.14.
As shown in the figure, macro-0 and macro-1 are two sectors (cells) of the same LTE outdoor macro base station (eNodeB). IA-2 is a sector of the LTE outdoor micro base station. In the teaching building we also constructed more than 10 LTE indoor coverage stations, thus forming a macro cell / micro cell / indoor multi-layer wireless network test environment.
In the test configuration, we can choose appropriate positions to achieve different CoMP configurations:
(1) the outdoor inter-site CoMP test environment composed of macro-0 and macro-1;
(2) the intra-site CoMP test environment composed of IA-2 and the indoor sites;
(3) the intra-site CoMP test environment composed of multiple indoor sites.
Test Results
This test verified the effect of CoMP on system capacity performance improvement in a multi-layer network environment. More detailed test results are not presented here.
Test Purpose
The AAS solution integrates the base station RF into the antenna and uses the multi-channel RF and antenna array in coordination, so as to realize spatial beamforming and complete RF signal transceiving. The active antenna is a new form of base station architecture in which the BBU still delivers baseband signals to the active antenna unit. Different from the BBU+RRU architecture, the active antenna divides the transceiving channels down to the sub-antenna element level with finer granularity. Through different configurations of the active antenna elements, we can achieve functions such as flexible beam control and MIMO, and more flexible and dynamic resource allocation and sharing, so as to achieve optimal performance and lower cost for the whole network.
AAS technology uses an adjustable-angle design and thus improves network performance. It mainly includes the following key technologies.
(1) Vertical sector splitting
The AAS system can form multiple beams in the horizontal and vertical dimensions, achieving reuse of the same time-frequency resources. For the AAS vertical sector splitting performance test, we need to evaluate the performance of the split sectors on the same horizontal plane (a beam-steering sketch follows this list).
(2) 3D beamforming
Through 3D beamforming, AAS can achieve very good spatial resolution, giving MU-MIMO and spatial interference suppression a certain performance improvement.
(3) Proactive cell shaping
By flexibly adjusting the downtilt angle and the shape of the beam, cell shaping can be carried out proactively, in order to promote performance balance between the macro cell and the low power nodes, especially in non-equilibrium network configuration scenarios.
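The sketch below illustrates the beam-steering idea behind vertical sector splitting with a simple uniform linear array model in Python; the element count, element spacing, and downtilt angles are assumptions for illustration, not parameters of the AAS hardware under test.

```python
import numpy as np

def vertical_beam_weights(n_elem, tilt_deg, spacing_wavelengths=0.5):
    """Steering weights of an n-element vertical uniform linear array
    pointed at a given downtilt angle (illustrative array model only)."""
    k = np.arange(n_elem)
    phase = -2j * np.pi * spacing_wavelengths * k * np.sin(np.deg2rad(tilt_deg))
    return np.exp(phase) / np.sqrt(n_elem)

def array_gain_db(weights, angle_deg, spacing_wavelengths=0.5):
    """Array gain toward a given angle for the supplied weights."""
    k = np.arange(len(weights))
    steer = np.exp(2j * np.pi * spacing_wavelengths * k * np.sin(np.deg2rad(angle_deg)))
    return 20 * np.log10(abs(weights @ steer))

# Vertical sector splitting: an inner beam (12 deg downtilt) and an outer beam (4 deg).
w_inner, w_outer = vertical_beam_weights(8, 12), vertical_beam_weights(8, 4)
for angle in (4, 8, 12):
    print(angle, round(array_gain_db(w_inner, angle), 1), round(array_gain_db(w_outer, angle), 1))
```

The two weight vectors form an inner (larger downtilt) and an outer (smaller downtilt) beam on the same time-frequency resources, which is the reuse effect that the vertical sector splitting test evaluates.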
Test Environment
The test field for AAS key technologies verification is composed of four stations (one center station and three interference stations), among which the center station has three sectors. This network topology consists of six sectors (1 × 3 + 3 × 1). The topology is shown in Fig. 7.15.
To facilitate the test network configuration, the core network adopts a simulated EPC, with the core network elements (MME, SGW, and PGW) built on servers.
Fig. 7.15 Topology of the AAS key technologies verification test field: a simulated EPC connected through an optical switch and ODF to the BBUs at four sites (BBU1 with three cells at Site 1; BBU2, BBU3, and BBU4 with one cell each at Sites 2-4), each driving AAS units
Test Results
For the AAS key technologies verification field trial, we compare the network performance in three modes, namely conventional RRU, vertical sector splitting, and virtual sector. The test results are summarized as follows.
(1) In conventional RRU mode, under the same comparison test conditions, the results change with the inclination angle, and RSRP, SINR, DownLink (DL), UpLink (UL), and other data change regularly. In the planned road test area, there is an optimal inclination angle.
(2) For vertical sector splitting mode, under the same comparison test conditions and with combinations of different inclination angles, representative areas (far end and near end) are selected in the planned road test area. RSRP, SINR, DL, UL, and other data change regularly, and there is an optimal inclination angle combination.
At the near end, the downlink test indicators are worse than in the other modes, mainly because of co-channel interference from the side lobe of the outer cell. At the far end, the downlink test indicators show performance similar to the other modes.
(3) For virtual sector mode, under the same comparison test conditions and with combinations of different inclination angles, representative areas (far end and near end) are selected in the planned road test area. RSRP, SINR, DL, UL, and other data change regularly, and there is an optimal inclination angle combination.
At the near end, the downlink test indicator is slightly worse than in the conventional RRU mode but significantly better than with vertical sector splitting. The main reason is that avoidance scheduling is done on the baseband side according to downlink interference detection, which significantly lowers the near-end interference. At the far end, the downlink test indicators show performance similar to the other modes.
(4) As for the uplink tests of the three modes, the conventional RRU mode is better than the other two modes, while the virtual sector and vertical sector splitting modes are similar. The main reason is that the co-channel interference between UEs' uplinks is at a very low level, so the impact on performance is very small.
(5) In terms of network coverage evenness, vertical multi-sector and virtual sector modes are better than the conventional RRU mode, with improved coverage performance.
power and time). The main technologies include Fractional Frequency Reuse (FFR), multi-station MIMO, power control, etc.
(3) Interference cancellation technology
Interference cancellation decodes and reconstructs the signals from the interfering cells, and then subtracts the interference signals from the received signal of the cell. The advantage of interference cancellation is that there is no limit on the use of cell frequency resources. Its limitation is that the target cell must also know the pilot structure of the interfering cell in order to make channel estimates of the interference sources. As a result, the signaling cost and implementation complexity of interference cancellation are relatively high.
Test Environment
The test field for the ICIC key technology test is composed of two stations (one cell per station, with overlapping areas between the two cells). The topology is shown in Fig. 7.16.
The test takes parameters such as the target cell RSRP, adjacent cell RSRP, Cell Reference Signal (CRS) SINR, and CQI, obtained at the locations of the test terminals, as the input variables of the cell interference coordination algorithm. It then calculates the optimized power parameters, configures the base station transmitting parameters accordingly, and retests the UE network performance to verify the validity of the algorithm.
Fig. 7.16 Schematic diagram of the ICIC key technologies validation test environment (Cell_1, Cell_2, and Cell_3 with UEs UE11/UE12, UE21/UE22, and UE31/UE32 on resource blocks RB1 and RB2)
The test selects two scenarios for two tests: in one, the terminal is located near the cell center; in the other, the terminal is located in the overlapping area of the cells.
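To illustrate what the interference coordination step computes, the following is a minimal sketch that grid-searches two cells' transmit powers to maximize the proportional fairness value (the sum of log throughputs) under a simple two-cell Shannon model; the link gains, noise level, and candidate power set are assumptions, and this is not the coordination algorithm actually deployed in the trial.

```python
import numpy as np

def pf_power_search(g_serv, g_intf, noise_w=1e-13, bw_hz=20e6,
                    powers_w=(4.38, 6.0, 17.31)):
    """Grid search over per-cell transmit powers maximizing the proportional
    fairness value sum(ln(throughput)) in a two-cell Shannon model.
    All gains, the noise level, and the candidate powers are illustrative."""
    best = None
    for p1 in powers_w:
        for p2 in powers_w:
            sinr1 = p1 * g_serv[0] / (p2 * g_intf[0] + noise_w)
            sinr2 = p2 * g_serv[1] / (p1 * g_intf[1] + noise_w)
            thr = [bw_hz * np.log2(1 + s) / 1e6 for s in (sinr1, sinr2)]  # Mbps
            pf = sum(np.log(t) for t in thr)                              # PF value
            if best is None or pf > best[0]:
                best = (pf, (p1, p2), thr)
    return best

# Example with assumed serving/interfering link gains for the two test UEs.
print(pf_power_search(g_serv=(1e-10, 8e-11), g_intf=(3e-11, 2e-11)))
```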
Test Results
Table 7.7 The first test result when users are near the base station
Power configuration (W)  | Cell 1 throughput (Mbps) | Cell 2 throughput (Mbps) | Total throughput (Mbps) | Proportional fairness value | Power efficiency (Mbps/W)
P1 = 17.31, P2 = 17.31   | 20.06                    | 27.34                    | 47.4                    | 6.307                       | EP1 = 1.159, EP2 = 1.579
P1 = 6.76,  P2 = 6.00    | 19.33                    | 24.84                    | 44.17                   | 6.174                       | EP1 = 3.222, EP2 = 3.675

Table 7.8 The second test result when users are near the base station
Power configuration (W)  | Cell 1 throughput (Mbps) | Cell 2 throughput (Mbps) | Total throughput (Mbps) | Proportional fairness value | Power efficiency (Mbps/W)
P1 = 17.31, P2 = 17.31   | 27.986                   | 12.293                   | 40.279                  | 5.841                       | EP1 = 1.617, EP2 = 0.71
P1 = 4.38,  P2 = 5.51    | 15.916                   | 17.232                   | 33.148                  | 5.614                       | EP1 = 3.634, EP2 = 3.127

Table 7.9 The first test result when users are far away from the base station
Power configuration (W)  | Cell 1 throughput (Mbps) | Cell 2 throughput (Mbps) | Total throughput (Mbps) | Proportional fairness value | Power efficiency (Mbps/W)
P1 = 17.31, P2 = 17.31   | 9.882                    | 6.668                    | 16.55                   | 4.188                       | EP1 = 0.571, EP2 = 0.385
P1 = 4.38,  P2 = 5.51    | 10.509                   | 7.556                    | 18.065                  | 4.375                       | EP1 = 2.399, EP2 = 1.371

Table 7.10 The second test result when users are far away from the base station
Power configuration (W)  | Cell 1 throughput (Mbps) | Cell 2 throughput (Mbps) | Total throughput (Mbps) | Proportional fairness value | Power efficiency (Mbps/W)
P1 = 17.31, P2 = 17.31   | 11.606                   | 4.708                    | 16.314                  | 4.000                       | EP1 = 0.670, EP2 = 0.272
P1 = 4.38,  P2 = 5.51    | 11.685                   | 7.309                    | 18.994                  | 4.447                       | EP1 = 2.668, EP2 = 1.326
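The metrics reported in Tables 7.7-7.10 appear to be computed as the sum of the natural logarithms of the two cell throughputs (proportional fairness value) and the ratio of each cell's throughput to its configured power (EP_i). For example, with the first row of Table 7.8:

    \ln(27.986) + \ln(12.293) \approx 3.332 + 2.509 = 5.841, \qquad EP_1 = \frac{27.986}{17.31} \approx 1.617, \quad EP_2 = \frac{12.293}{17.31} \approx 0.710.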
Test Purpose
Because the WLAN system uses the unlicensed Industrial Scientific Medical (ISM) band and most terminals support WLAN, WLAN networks are widely used and bring users a good wireless connection experience. However, due to the public's insufficient awareness of the provisions for using the unlicensed ISM band, and irrational overall network planning and channel use, massive co-channel interference and adjacent-channel interference occur, so that the actual user experience cannot reach the expected performance. To evaluate WLAN network coverage performance, a joint test of network transmission performance and air interface signal quality is required, to avoid the incomplete evaluation of network performance that a single test would give.
Test Environment
This test chose heavy-traffic public places in Shanghai, including the railway station and airport areas. Figure 7.17 shows the test roadmap schematic diagram of the Hongqiao Railway Station area. Figure 7.18 shows the test roadmap schematic diagram of the Pudong Airport area.
Fig. 7.17 Schematic diagram of the WLAN test environment of Hongqiao Railway Station
Test Results
For the WLAN network, the KPI indicator system includes the following four categories.
(1) Signal optimization indicators
(2) AP performance optimization indicators
support system software is limited by verification conditions and other factors, its technology accumulation and product maturity generally come later than the development of the hardware devices. At this stage, in order to respond to external policies and the market situation, immature 3G, 4G, and other new mobile communications equipment has entered the large-scale network construction phase, but the supporting software technologies have failed to develop synchronously.
In fact, this imbalance has led to the low efficiency of current network construction and operation. For example, the mobile phone user capacity and mobile Internet access rate are much lower than the network design capabilities, and improvement of signal blind areas is inefficient. The imbalanced development also suggests an important future direction for the communications technology market.
In recent years, parties both inside and outside the telecommunications industry have been actively exploring the issue of mobile phone users independently, or even dynamically, choosing their wireless network. If this service model develops well, it will have a very significant impact on the competition mode of the existing telecommunications industry. Telecommunication operators will no longer be able to lock in users by controlling phone numbers or phone types, and users will be able to choose the right wireless network according to network quality. This new feature will force operators to pay more attention to network quality assurance as the means to consolidate and develop their user base.
Key Technologies
When mobile communications networks fully enter the 3G and 4G era, network characteristics undergo great changes. Wireless performance optimization must also transform from the 2G era's troubleshooting mode into an equilibrium mode, and the analytical methods for complex giant systems in system theory must be used to reconstruct the network optimization technology system.
The new generation of wireless network intelligent analysis needs to provide the following key technological means:
(1) The technological means to make an in-depth, automatic, comprehensive analysis of the wireless system data
(2) The technological means to translate the wireless system data into the management information needed for operational decision-making
(3) The intelligent correlation analysis method that can meet the requirements of system equilibrium optimization
(4) The method to carry out overall monitoring and integrated optimization of service performance and network indicators across all network protocol layers
(5) The dynamic planning and configuration management method that can adapt to the frequent adjustments characteristic of actual operating networks
(6) The dynamic network management method that can do real-time network performance monitoring and system adjustment
(7) The collaborative performance analysis method covering both the network side and the terminal side
(8) The application and verification method for mid-to-long cycle Radio Resource Management (RRM) algorithms in the system implementation
(9) The network performance monitoring method that meets the characteristics of massive data and fast evaluation in a balanced network
(10) The systematic analysis and management method that fully covers all kinds of service performance and network indicators
(11) The data sharing and analysis method that links configuration, monitoring, planning, drive test, and customer service
Meanwhile, the specific requirements of network operation and maintenance need to be met in both breadth and depth. The wireless analysis capabilities must have the "six alls" features:
(1) All targets
Be able to meet both the professional and the management targets of wireless network operation and maintenance.
(2) All networks
Be able to analyze the whole network, including special network element equipment such as repeaters and home base stations.
(3) All indicators
Be able to comprehensively analyze various types of service quality and network performance indicators, and even equipment reliability.
(4) All links
Be able to make balanced, coordinated analyses of the uplink and downlink wireless links.
(5) All time
Be able to make network performance analyses at various temporal granularities, including second, minute, day, week, month, etc.
(6) All segments
Be able to make unified analyses of the data collected in configuration, monitoring, planning, drive test, customer service, etc.
Technical Roadmap
Guided by the methods of system theory and combined with advanced Business Intelligence (BI) software technologies, the roadmap is to realize in-depth wireless professional analysis and verification means and provide fast performance guarantee capability for the mobile communications network. Its important key technical points are as follows.
(1) Wireless network information quantitative analysis and evaluation model
Various kinds of parameters, key events, and analytical algorithms are summarized into a tree structure. The information analysis model, independent of the software platform, can be dynamically upgraded and loaded. In the model, all kinds of data from the network side and the terminal side have unified definitions, quantitative data association attributes are provided, and a professional wireless performance analysis framework is established. It is compatible with various types of wireless networks.
(2) Multi-dimensional cube database
A multi-dimensional cube data model is designed to facilitate correlation search and association analysis, which provides a logical basis for information analysis in the dimensions of physics, space, object, time, etc. (see the sketch after this list).
(3) OnLine Analytical Processing (OLAP)
Intelligent analysis of network performance is realized based on correlation search technology. Advanced parallel processing technology is used to realize ultra-high-speed, real-time data correlation analysis.
(4) OnLine Transaction Processing (OLTP)
It solves the problem of data acquisition and processing efficiency by trading space for time.
(5) Real-time display technology (including dynamic display based on GIS and line graphs)
It completely solves the function and performance problems of information release and ensures high concurrency in the system application, which benefits collaborative work.
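As a toy illustration of the multi-dimensional cube and correlation analysis ideas in points (2) and (3) above, the following Python/pandas sketch rolls a few drive-test KPI records up along the time and cell dimensions and checks a simple cross-dimension correlation; the column names, values, and data model are assumptions, not those of the system described here.

```python
import pandas as pd

# Toy drive-test KPI records; the columns and values are illustrative only.
records = pd.DataFrame({
    "time":     ["08:00", "08:00", "09:00", "09:00"],
    "cell":     ["Cell_1", "Cell_2", "Cell_1", "Cell_2"],
    "area":     ["campus", "campus", "station", "station"],
    "rsrp_dbm": [-95, -102, -88, -110],
    "dl_mbps":  [22.4, 9.1, 31.0, 4.2],
})

# One "cube" slice: average KPIs along the time and cell dimensions,
# the kind of roll-up an OLAP front end would drill into.
cube = records.pivot_table(index="time", columns="cell",
                           values=["rsrp_dbm", "dl_mbps"], aggfunc="mean")
print(cube)

# A simple cross-dimension correlation check between coverage and throughput.
print(records["rsrp_dbm"].corr(records["dl_mbps"]))
```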
Uniqueness
Requirements to Be Achieved
(1) A unified framework of complex network analysis algorithms: providing the core theory for abstracting and integrating different types of analysis algorithms.
(2) A quantitative evaluation model of wireless network performance: providing the theoretical basis for the analysis and optimization of the wireless network.
(3) A multi-dimensional database for correlation analysis: providing a data model that meets the requirements of logical association mining.
(4) A correlation and association analysis engine: providing the software means for adaptive automatic analysis.
(5) Complex network visualization technology: providing the data presentation and interaction method for complex relationship information.
By loading an expert experience algorithm library, the embedded intelligent analysis engine can perform automated correlation analysis and processing of the system operation information and test data of various kinds of communications networks. It can provide detailed diagnostic and statistical summary information, in multiple forms, for various types of network anomalies, faults, and causes, and provide specific rectification measures and suggestions for operation and maintenance personnel to choose from.
It features three aspects: efficiency, expertise, and ease of use.
Efficiency is reflected in the support for correlation processing of information from multiple data sources. With massive data processing, effective information can be fully mined and utilized. Network anomaly analysis and processing can be done in a fully automated way, greatly reducing the time spent on manual work.
Expertise is embodied in the way this technology fully integrates and values the expertise of communications experts. The algorithm library is flexible, the analysis process is close to the requirements of the network optimization site, and all the algorithms and solutions are derived from the practice of network optimization projects, so the analysis results are reliable and usable. Meanwhile, it outputs the software analysis conclusions in cutting-edge graphics, intuitively reveals the generation mechanism of complex network anomalies, makes hard things simple, supports the operator's intuitive, image-based thinking, and reduces the difficulty of understanding, making it easy for operators to quickly capture and memorize key information and solve complex technical problems.
Ease of use means the application and operation process is clear, convenient, instructive, and intuitive. The interface is kept reasonably simple and the interaction is comfortable, ensuring the operator's work efficiency in massive data analysis. All kinds of information in the interface are presented clearly with refined visual effects, making the work as pleasant as possible so that the operator does not tire easily. Meanwhile, it provides a variety of automatic online map update functions, with no need for manual installation of map data, making it easy for the operator to carry out network analysis activities at any time and in any place.
7.4.2 Status
At present, there are many kinds of wireless mobile communications network data analysis systems in the industry. Products can be effectively categorized according to their analysis angle (terminal side or network side) and function (technical or management).
Different products have different functional focuses, roughly as shown in Fig. 7.19. As the core of network performance analysis, network optimization software and other analysis tools require very strong expertise and comprehensiveness, and need to achieve the above-mentioned "six alls" analysis ability and other professional requirements. However, seen from the development course of communications systems, restricted by the lack of an actual network verification environment, the development of network analysis technology often lags behind the development of hardware and equipment, and thus the development of related products lags behind the development of network technology.
The development course of such systems is shown in Fig. 7.20.
The first generation analysis system provides basic data display functions, while the extraction, sorting, and statistical handling of the drive test data require manual work, and the interpretation of the data is completely based on individual experience. The second generation analysis system provides rich data display and statistical functions and can partially replace manual work for drive test data sorting and statistics, but the data analysis process still needs manual interpretation.
Fig. 7.19 Data analysis systems for wireless mobile communication networks (planning tools, equipment OMC, and related product categories)
Fig. 7.20 The development course of wireless mobile communication network optimization tools
Fig. 7.21 Data sources handled by the analysis system: drive test (DT) tools (testing phone, GPS receiver, scanner, DT logs, data and signaling collection, playback, and post analysis) and OMC data (performance and configuration reports, network management configuration, warnings, network performance and equipment logs, signaling analysis, statistical forms, equipment and user analysis)
The third generation analysis system introduces thematic analysis functions, which, while replacing humans for data sorting and statistics, also provide some guidance for the thematic analysis process, but manual interpretation remains the core of each analysis sub-process. The industry currently lacks a fourth generation analysis system which, with wide coverage, is able to maximize the integration of expert analysis experience and provide whole-process automated association analysis, with humans only reviewing and making decisions on the analysis results, realizing an intelligent software analysis system in the real sense. Wide coverage requires the system to be compatible with a variety of data sources and data formats. Integrating expert experience requires converting complex thinking processes into machine language according to certain models. The automated process requires every link and module of the system to work together and execute in the established sequence in an orderly way. The association function requires mining the existing data to the greatest possible extent, draining the data to the last drop. In this way, we can maximize the role of machines and minimize manual work while ensuring a certain degree of accuracy, and real intelligence can be achieved.
As the core means of back-office analysis for the wireless network optimization department, it deals with all kinds of data in the wireless network, including test data from mobile phone tests and automatic road tests, scanner test data, base station parameter data from planning software, network management system report data, signaling data, and other internal data (Fig. 7.21).
The specific application scenarios of the intelligent analysis system are shown in Fig. 7.22.
Fig. 7.22 System operating environment of the intelligent analysis system: DT logs, configuration parameters, and the algorithm library feed the intelligent analysis system, which outputs model correction and coverage simulation results
Application process: start field operation test → acquire network test data → import into the intelligent analysis system → load auxiliary analysis data → automatically detect anomalies → automatically analyze network faults → automatically derive fault correction measures → output adjustment measures, model correction results, and coverage simulation results → implement field network adjustment → restart field service test → re-acquire test data → verify network problem correction.
Innovation Points
dimension and time dimension. The relationship of the views is shown in Fig. 7.24.
Fig. 7.24 The brand new "three levels and five dimensions" information interaction mode: overall, local (defect management), and control management views across the management, feature, NE, geographical, and time dimensions, supporting analysis conclusion navigation, problem tracking, optimization supervision and decision, conclusion drill-down, cross-dimension inspection, anomaly analysis guidance, trajectory diagrams, process analysis, signaling views, trend charts, and parameter views
Fig. 7.25 The three-level network optimization management and control system that covers both points and surfaces
maintenance tasks with different urgency and importance levels, achieving seamless docking of active optimization, ensuring consistent optimization goals between upper- and lower-level personnel, and completing the network optimization in a coordinated way.
Fig. 7.26 Closed-loop management of the problem points in the daily optimization stage (problem points are collected by origin network element from alarms, abnormal points, and defective items)
[Chart: alarm number statistics by date, 2013-3-22 to 2013-10-10]
The verification and optimization of single stations is the basis of the whole wireless optimization of a mobile network and the unavoidable initial stage of network optimization. Solving single-station problems helps eliminate hidden troubles from the project construction stage and effectively reduces the pressure of the follow-up optimization work, laying a solid foundation for the higher-level optimization of the whole network. Network construction and maintenance departments are generally in urgent need of professional and highly efficient technical means to replace manual analysis, so that they can automatically, accurately, and quickly perform optimization analysis and acceptance review of new base station inspection test data, and quickly master each base station's running status, performance, and fault causes. In this way, they can carry out effective, coordinated rectification of the existing problems so that new stations can quickly meet the acceptance criteria and enter formal operation as soon as possible.
The intelligent single-station batch acceptance function can make fast, batch-based, rolling judgments on whether each base station fully meets the preset quality standards, and provide detailed fault positioning information, so that the various functional departments can accurately, rapidly, and collaboratively carry out rectification and eliminate the hidden troubles of network construction to the greatest extent. Coordinated rectification can only be implemented once the causes of the network problems have been located (Fig. 7.28).
Fig. 7.28 Single-station batch acceptance workflow: for each inspection mission, the test log of every base station batch goes through anomaly detection and fault analysis, per-batch results are recorded as OK, NOK, or NULL (no data) across successive inspections, and an inspection report is submitted
It supports the checking and analysis of multi-batch data, making it easy to track failed base stations, and the inspection results are summarized in a rolling way. Based on the batch acceptance function, the analysis of inspection road test data is carried out automatically, and the progress and problems of the base station optimization project can be tracked synchronously (Fig. 7.29).
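A minimal sketch of the rolling, per-batch acceptance judgment is given below, assuming hypothetical KPI names and thresholds (the product's actual acceptance criteria are not specified in the text).

```python
# Hypothetical acceptance thresholds; the real criteria are not given in the text.
THRESHOLDS = {"rsrp_dbm": -105, "sinr_db": 3, "dl_mbps": 10}

def judge_station(kpis):
    """Return 'OK' if every KPI meets its threshold, 'NOK' if any fails,
    'NULL' if the station has no test log in this batch."""
    if kpis is None:
        return "NULL"
    ok = (kpis["rsrp_dbm"] >= THRESHOLDS["rsrp_dbm"]
          and kpis["sinr_db"] >= THRESHOLDS["sinr_db"]
          and kpis["dl_mbps"] >= THRESHOLDS["dl_mbps"])
    return "OK" if ok else "NOK"

def rolling_summary(batches):
    """batches: list of {station: kpis-or-None}; later batches override earlier
    results, so failed stations can be retested and tracked to closure."""
    latest = {}
    for batch in batches:
        for station, kpis in batch.items():
            latest[station] = judge_station(kpis)
    return latest

batch1 = {"BS01": {"rsrp_dbm": -98, "sinr_db": 7, "dl_mbps": 25}, "BS02": None,
          "BS03": {"rsrp_dbm": -110, "sinr_db": 1, "dl_mbps": 4}}
batch2 = {"BS03": {"rsrp_dbm": -101, "sinr_db": 5, "dl_mbps": 14}}
print(rolling_summary([batch1, batch2]))  # BS01 OK, BS02 NULL, BS03 OK after retest
```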
7.5 Summary
As stated at the beginning of this chapter, the ultimate goal of the R&D and evaluation of mobile communications technology is application. The field trial is the last and also the most important part of the process from technology R&D to application. In particular, the accelerating pace of new technology updates makes it more important to carry out field trials and verification at the R&D stage. Based on an analysis of the requirements and application scenarios of 5G mobile communications technology, this chapter gives field trial evaluation cases from the LTE-A phase, from wireless channel measurement to key technology validation and then to network performance validation, providing references for later 5G technology research. In addition, with the development of big data technology, intelligent wireless network data analysis tools will greatly enhance the self-optimization ability of network performance. At the end of this chapter, we presented the architecture, still at the planning stage, of a multi-scenario heterogeneous test network that supports large-scale test users, providing an ideal test site for field trials and evaluation at the R&D stage for relevant research institutions.