X-Vision: An augmented vision tool with real-time
sensing ability in tagged environments
Yongbin Sun*, Sai Nithin R. Kantareddy*, Rahul Bhattacharyya, and Sanjay E. Sarma

Auto-ID Labs, Department of Mechanical Engineering


Massachusetts Institute of Technology
Cambridge, USA
{yb_sun, nithin, rahul_b, sesarma} at mit.edu

Abstract—We present the concept of X-Vision, an enhanced Augmented Reality (AR)-based visualization tool with real-time sensing capability in a tagged environment. We envision that this type of tool will enhance the user-environment interaction and improve productivity in factories, smart spaces, home & office environments, maintenance/facility rooms, operation theatres, etc. In this paper, we describe the design of this visualization system, built by combining the object's pose information estimated by a depth camera with the object's ID & physical attributes captured by RFID tags. We built a physical prototype of the system demonstrating the projection of 3D holograms of objects encoded with sensed information, such as the water level and temperature of common office/household objects. The paper also discusses the quality metrics used to compare the pose estimation algorithms for robust reconstruction of the object's 3D data.

Fig. 1. Left: A user wearing the system sees a cup with overlaid temperature information. Right: System components: an Intel RealSense D415 RGB-D camera is attached on a HoloLens via a custom mount.

I. INTRODUCTION

Superimposing information onto the real world, the concept commonly known as Augmented Reality (AR), has been rapidly evolving over the past few years due to advances in computer vision, connectivity and mobile computing. In recent years, multiple AR-based applications have touched the everyday lives of many of us: a few such examples are Google Translate's augmented display [12] to improve productivity, AR GPS navigation apps for travel [22], the CityViewAR tool for tourism [17], etc.

All these applications require a method to implement a link between the physical and digital worlds. Often this link is either the ID of the object or information about the physical space, for instance, an image in the Google Translate app or a GPS location in an AR navigation tool. This link can be easily established in informationally structured environments using visual markers, 2D barcodes and RFID tags. Among the three, RFID tags have a unique advantage: the ability to communicate wirelessly within a couple of meters of distance without requiring line-of-sight access. In addition, RFID tags can be easily attached to inventory and consumer products in large numbers at extremely low per-unit cost. Passive RFID, in particular, has many applications in object tracking [14], automatic inventory management [15], pervasive sensing [5], etc. In a tagged environment, with RFID infrastructure installed, information on a tagged object's ID and physical attributes can be wirelessly retrieved and mapped to a digital avatar.

In this paper, we have designed a visualization framework called X-Vision, aiming to equip users with the ability to directly see the physical attributes of surrounding objects (Figure 1). One of the goals of this framework is to demonstrate the advantages of tagged environments in enhancing the user-environment interaction with real-time sensing at low cost, for potential use cases in smart spaces, home & office environments, maintenance/facility rooms, operation theatres, etc. The rest of the paper is structured as follows: Section II discusses relevant work on the fusion of other technologies with RFID and AR; Section III provides the details of the proposed framework and the visualization system; Section IV discusses the evaluations of the test experiments, followed by the conclusions in Section V.

* Equal Contribution

II. RELATED WORK

A. AR-Based Smart Environment

AR brings digital components into a person's perception of the real world. Today, advanced AR technologies facilitate interactive bidirectional communication and control between a user and objects in the environment. Two main branches exist for AR-associated research. In one branch, researchers attempt to design algorithms to achieve accurate object recognition and 3D pose estimation for comprehensive environment understanding. Related work in this direction can be found in [16], [3] for object recognition, and in [28], [26], [23] for 3D data processing. Research in this direction provides theoretical support for industry products. In the other branch, efforts have been devoted to applying existing computer vision techniques to enhance the user-environment interaction experience for different purposes. Research work on this track benefits areas such as education [10], tourism [6] and navigation [25] by improving the user experience. Our work follows this trend by fusing object recognition and 3D pose estimation techniques with RFID sensing capabilities, aiming to create a smart environment.

B. Emerging RFID Applications

RFID is massively used as an identification technology to support tracking in the supply chain, and has so far been successfully deployed in various industries. Recently, the industry focus seems to be shifting towards generating higher value from existing RFID setups by tagging more and more items and by developing new applications using tags that allow for sensing, actuation & control [8] and even gaming [1]. Another such exciting application with industrial benefit is fusion with emerging computer vision and AR technologies. The fusion of RFID and AR is an emerging field and there are recent studies combining these technologies for gaming and education, yet we see a lot of space to explore further, especially going beyond the ID in RFID. One of the earlier papers [20] studied the use of RFID to interact with physical objects in playing a smartphone-based game, which enhanced the gaming experience. Another study [27] used a combination of smart bookshelves equipped with RFID tags and mixed-reality interfaces for information display in libraries. Another study [2] explores the use of AR with tags to teach geometry to students. These studies show a good interest in the community in exploring mixed reality applications using tags for object IDs. In this paper, we use RFID not just for ID but also to wirelessly sense the environment and the object's attributes, to create a more intimate and comprehensive interaction between humans and surrounding objects.

Fig. 2. The pipeline of the proposed framework.

III. SYSTEM

Our system (hardware and visualization shown in Fig. 1) contains two parallel branches (shown in Figure 2) to concurrently detect and sense objects with attached RFID tag-sensors. On one side, the system captures color and depth images using the depth camera for in-view target object identification and pose estimation. On the other side, the system collects tag data reflecting the target object's physical properties, such as temperature, using an RFID interrogator/reader. Information collected from both sources is uploaded to a shared central server, where the heterogeneous information is unified and delivered to the HoloLens for augmented visualization. Details are given in the following subsections.

A. Object Identification and Pose Estimation

Our system uses an Intel RealSense D415 depth camera to capture color and depth information. It is attached to a HoloLens via a custom mount provided by [9], and faces in the same direction as the HoloLens (Figure 1). The captured images are used to identify the in-view target object and estimate its pose.

Object Identification: Object recognition is a well-studied problem, and we adopt the local feature based method [3] in our system, since it is suitable for a small-scale database. Generally, to identify an in-view object from a given database, the local feature based method first extracts representative local visual features for both the scene image and the template object images, and then matches scene features with those
of each template object. The target object in the view is
identified as the template object with the highest number of
matched local features. If the number of matched features
of all template objects is not sufficiently large (below a
predetermined threshold), then the captured view is deemed to
not contain a target. Our system follows this scheme, and uses
the SURF algorithm [3] to compute local features, since compared
to other local feature algorithms, such as SIFT [18], SURF is
fast and good at handling images with blurring and rotation.
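As an illustration of the identification scheme just described, the following minimal sketch (not the authors' code) uses OpenCV's SURF implementation from the contrib xfeatures2d module; the matching threshold and the ratio-test value are assumptions, since the paper does not report the exact settings:

import cv2

def identify_object(scene_gray, template_images, min_matches=30):
    """Return the name of the best-matching template, or None if no target is in view.
    template_images: dict mapping object name -> grayscale template image."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    _, scene_desc = surf.detectAndCompute(scene_gray, None)
    if scene_desc is None:
        return None
    best_name, best_count = None, 0
    for name, template in template_images.items():
        _, tmpl_desc = surf.detectAndCompute(template, None)
        if tmpl_desc is None:
            continue
        # Lowe-style ratio test keeps only distinctive matches (0.75 is an assumed value).
        pairs = matcher.knnMatch(tmpl_desc, scene_desc, k=2)
        good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
        if len(good) > best_count:
            best_name, best_count = name, len(good)
    # Below the predetermined threshold the view is deemed to contain no target.
    return best_name if best_count >= min_matches else None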
Pose Estimation: After identifying the in-view object, our
system estimates its position and rotation, namely 3D pose,
in the space, thus augmented information can be rendered
properly. We achieve this by constructing a point cloud of the scene, and aligning the identified object's template point cloud with it. Many algorithms exist for point cloud alignment, and we adopt the widely-used Iterative Closest Point (ICP) algorithm [4] in our system, since it usually finds a good alignment quickly. To obtain better pose estimation results,
especially for non-symmetric objects (e.g. a mug), a template object usually contains point clouds from multiple viewpoints.

Yet, the performance of ICP relies on the quality of the initialization. Our system finds a good initial pose by moving a template object's point cloud to the 3D position that is back-projected from the centroid of the matched local feature coordinates in the scene image. The coordinates of correctly matched local features are the 2D projections of target object surface points, thus back-projecting their centroid should return a 3D point close to the target object's surface points. After initializing the pose of the template point cloud, our system refines the pose using ICP. Finally, the estimated pose can be represented as a 4 × 4 matrix, $M_{pose} = M_{ini} M_{icp}$, where $M_{ini}$ is the transformation matrix for pose initialization, and $M_{icp}$ is the transformation matrix for pose refinement using ICP. All the transformation matrices are of the form

$$M = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$$

where $R$ is a 3×3 matrix representing rotation, and $t$ is a 3×1 vector representing translation. Related details are illustrated in [11].
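The initialization-plus-refinement step can be sketched as follows; this is a minimal illustration using the Open3D library as a stand-in (the paper does not name the point cloud library used), and the correspondence distance is an assumed value:

import numpy as np
import open3d as o3d

def estimate_pose(template_pcd, scene_pcd, init_point, max_corr_dist=0.02):
    """template_pcd, scene_pcd: open3d.geometry.PointCloud; init_point: 3D point
    back-projected from the centroid of the matched local feature coordinates."""
    # M_ini: translate the template so its centroid sits at the back-projected point.
    M_ini = np.eye(4)
    M_ini[:3, 3] = np.asarray(init_point) - template_pcd.get_center()
    # Refine the initialized pose with point-to-point ICP.
    result = o3d.pipelines.registration.registration_icp(
        template_pcd, scene_pcd, max_corr_dist, M_ini,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # The returned transformation is the full pose (initialization composed with refinement).
    return result.transformation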
B. RFID Sensing

An office space already equipped with RFID infrastructure is used as the tagged environment for the experiments in this study. The space is set up using an Impinj Speedway Revolution RFID reader, connected to multiple circularly polarized Laird antennas with a gain of 8.5 dB. The reader system broadcasts at the FCC maximum of 36 dBm EIRP.

For the tag-sensors, we make use of Smartrac's paper RFID tags with the Monza 5 IC as the backscattered-signal based water level sensors, and custom designed tags with the EM 4325 IC as the temperature sensors. We use the Low Level Reader Protocol (LLRP), implemented over Sllurp (a Python library), to interface with the RFID readers and collect the tag data.

Purely-passive or semi-passive tags can be designed to sense multiple physical attributes and environmental conditions. One approach is based on the tag antenna's response to a changed environment as a result of a sensing event. A change in signal power or response frequency of the RFID tag due to this antenna impedance shift can be attributed to sensing events like a temperature rise [24], the presence of a gas concentration [19], soil moisture [13], etc. Another approach is to use the IC's on-board sensors or external sensors interfaced with its GPIOs [7]. In this study, we use both the antenna impedance shift approach, to detect the water level, and the IC's on-board temperature sensor, to detect the real-time temperature in a coffee cup.

Fig. 3. Tag-sensors for sensing object properties: Top row: Water level sensing using paper tags; Bottom row: Temperature sensing using custom tag with EM4325.

Water Level Sensing: The water-level sensor works by relating the detuning of the tag's antenna to the presence of water in the neighborhood of the tag. In this study, we used tags as water-level sensors on common household/office objects such as a paper coffee cup, a ceramic mug and a plastic bottle. In the empty state, the background dielectric for the tags is air, therefore the backscattered signal strength from the tags is at its maximum. In the state where the mug contains water, the antenna is significantly detuned due to the change in background dielectric, and as a result the tag becomes unresponsive. However, when the mug is emptied, the tag can be read again, indicating an empty cup. We build on this concept to detect discrete levels of water in the container by using three tags (A, B, C) to define the states empty, low, mid, and high (illustrated in Table I). Fig. 3(a) shows the level sensor labels implemented on a standard ceramic coffee mug.

TABLE I
WATER LEVEL INDICATION

  Status   A   B   C
  Empty    ✗   ✓   ✓
  Middle   ✗   ✗   ✓
  Full     ✗   ✗   ✗
Protocol (LLRP) implemented over Sllurp (Python library) to
interface with RFID readers and collect the tag-data. Temperature Sensing: Temperature sensor is implemented by
Purely-passive or semi-passive tags can be designed to sense using EM Microelectronics’s EM 4325 with on-board tem-
multiple physical attributes and environmental conditions. One perature as the RFID IC. Fig. 3(b) shows a T-match antenna
with EM 4325 IC and a button-cell battery implemented as
a temperature sensor on a standard coffee cup. Temperature
measurements from this IC can be made in both passive as
well as semi-passive mode. In the passive mode, the tag has
to be in range of a reader antenna. In the semi-passive mode,
the battery or external energy source keeps the IC on. The IC’s
temperature measurement is triggered by writing any random
information into the specific word of the user memory bank
(Memory bank:3 and Wordptr:256). The IC updates this word
with the measured temperature from the on-board temperature
sensor. By reading the word again current temperature can
be known. We have implemented this code using the Sllrup
library. Real-time temperature sensing is possible using this
IC within −64o to 64o Celsius.
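The write-then-read trigger sequence can be sketched as below. This is a hypothetical illustration, not the authors' code: write_user_memory and read_user_memory stand for the Gen2 access commands (BlockWrite/Read on the user bank) issued through the reader, and the decoding of the returned word is an assumption; the exact bit layout must be taken from the EM 4325 datasheet.

USER_BANK = 3        # EPC Gen2 user memory bank
TEMP_WORD_PTR = 256  # trigger word address given above

def read_temperature_celsius(tag_epc, write_user_memory, read_user_memory):
    # Writing any value to the trigger word starts a temperature measurement.
    write_user_memory(tag_epc, bank=USER_BANK, word_ptr=TEMP_WORD_PTR, data=0x0000)
    # Reading the same word back returns the measured temperature.
    raw = read_user_memory(tag_epc, bank=USER_BANK, word_ptr=TEMP_WORD_PTR)
    # Assumed decoding: a signed 9-bit value in steps of 0.25 degC, which matches the
    # stated -64 to +64 degC range; consult the datasheet for the real format.
    value = raw & 0x1FF
    if value >= 0x100:
        value -= 0x200
    return value * 0.25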
C. Augmented Visualization
After obtaining the target object’s identity, pose and physical
properties, the system superimposes augmented information
(i.e. CAD model) onto the object. Since the object’s 3D pose
is estimated in the depth camera coordinate system, a series
of transformations are required to obtain the 3D pose in the
world coordinate system, which is required in the HoloLens
rendering system. Our system computes the transformation
using:
$$M^{world}_{pose} = T^{world}_{HoloLens}\, T^{HoloLens}_{dep\_cam}\, M^{dep\_cam}_{pose}$$

where $M^{dep\_cam}_{pose}$ and $M^{world}_{pose}$ are the 3D poses in the depth camera and world coordinate systems, respectively; $T^{HoloLens}_{dep\_cam}$ maps the pose from the depth camera coordinate system to the HoloLens coordinate system, and $T^{world}_{HoloLens}$ maps the pose from the HoloLens coordinate system to the world coordinate system. All the transformation matrices are in the same format as those described for pose estimation.
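As a small sketch (not the authors' code) of chaining the 4 × 4 homogeneous transforms above with NumPy; T_world_holo and T_holo_cam are assumed to come from HoloLens tracking and the camera-mount calibration, respectively:

import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and a 3x1 translation t."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = np.asarray(t).ravel()
    return M

def pose_in_world(M_pose_cam, T_holo_cam, T_world_holo):
    """Map an object pose from depth-camera coordinates into world coordinates."""
    return T_world_holo @ T_holo_cam @ M_pose_cam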
IV. EVALUATIONS
A. Sensing Results Visualization
We first test our system’s performance on both water level
sensing and temperature sensing. A user’s augmented views
are recorded and shown to demonstrate the effectiveness of
the proposed system.
We present water level sensing results for both the single-object and multiple-object cases (Figure 4). The system projects 3D CAD models of the identified objects into the scene according to their estimated poses. The color of a projected 3D model is changed at different heights to reflect the detected water level. As can be observed, our system properly aligns the 3D models to the corresponding target objects, even for non-symmetric shapes (e.g. the mug). The detected discrete water levels (empty, low, mid, high) also match the actual water levels in our experiments.

Fig. 4. Water level sensing results: (a) shows the HoloLens rendered results before and after water is added into a tagged mug; (b) shows multiple objects with different detected water levels.

Temperature sensing results are shown in Figure 5. These results are selected from a recorded video clip containing the whole temperature changing process after hot water is poured into a tagged cup. In the beginning, the tag-sensor reports room temperature for the empty cup. After hot water is added, the tag-sensor reports the water temperature (and intermediate temperatures). Temperatures are rendered using the color code shown on the right of Figure 5. Our system shows visually appealing results.

Fig. 5. A sequence of temperature sensing results after hot water is added. The temperature change after adding hot water results from the temperature sensor latency.
B. Pose Estimation Evaluation

We evaluate the pose estimation pipeline used in the system by considering recognition accuracy, pose estimation quality and running time. The Fast Point Feature Histograms (FPFH) algorithm [21], with and without using local features, is implemented as the competing method. The original FPFH algorithm was designed for point cloud alignment. As a baseline method in this experiment, it identifies the in-view object and estimates its pose by aligning each template point cloud to the scene point cloud, and the object with the best alignment is selected. "Local Feature + FPFH" works in the same way as our pipeline, by first identifying the target object using local features and then estimating its pose using FPFH.

Three different objects (a water bottle, a coffee cup and a mug) are tested. For each object, five images from different angles at a range between 0.3-0.5 meter are collected for evaluation (as shown in Figure 6). Within this range, texture details of the objects can be captured and their point clouds can be constructed with less noise.

Fig. 6. Test objects.

First, the recognition accuracy of each method is evaluated and reported in Table II. As can be noted, compared to the FPFH baseline method, local visual features enhance the recognition accuracy for all cases. This performance boost results from the rotational invariance of the detected SURF features. Identifying in-view objects correctly is important, so that the correct template point clouds can be used for pose estimation in the following step.

TABLE II
RECOGNITION ACCURACY

                          Bottle   Cup    Mug    Avg.
  FPFH                     1/5     3/5    3/5    7/15
  Local Feature + ICP      5/5     5/5    5/5    15/15
  Local Feature + FPFH     5/5     5/5    5/5    15/15

Second, the pose estimation quality is evaluated using the point-to-point residual error, which is defined as

$$E = \frac{1}{n} \sum_{i=1}^{n} \| t_i - p_i \|_2$$

where $t_i$ is the $i$-th point in the target object point cloud (green points in Figure 7), and $p_i$ is the closest point to $t_i$ in the transformed template object point cloud $\{p\}$ (red points in Figure 7), such that $p_i = \arg\min_p \| t_i - p \|_2$. Results are reported in Table III, where the residual error is averaged across all correctly identified target objects. Point clouds of target objects in the scene are manually labeled (i.e. the green points in Figure 7). Due to the good pose initialization from local visual feature matching, the two-phase pipelines achieve a lower point-to-point residual error. "Local Feature + FPFH" performs slightly better than "Local Feature + ICP", since ICP is prone to getting trapped in local minima. Examples of successfully aligned point clouds are shown in Figure 7.

Fig. 7. Point cloud pose estimation examples of different methods. Scene points are marked as blue, target object points are marked as green, and transformed template object points are marked as red.

TABLE III
POINT-TO-POINT RESIDUAL ERROR

                          Bottle   Cup      Mug      Avg.
  FPFH                    0.0069   0.0059   0.0094   0.0075
  Local Feature + ICP     0.0088   0.0074   0.0070   0.0077
  Local Feature + FPFH    0.0057   0.0055   0.0087   0.0066
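A direct NumPy sketch of the residual error E defined above (a brute-force nearest-neighbour version, written for clarity rather than speed):

import numpy as np

def residual_error(target_pts, transformed_template_pts):
    """target_pts: (n, 3) labeled scene points of the target object;
    transformed_template_pts: (m, 3) template points after applying the estimated pose."""
    # Pairwise distances between every target point and every transformed template point.
    dists = np.linalg.norm(
        target_pts[:, None, :] - transformed_template_pts[None, :, :], axis=2)
    # ||t_i - p_i||_2 with p_i the closest transformed template point to t_i.
    nearest = dists.min(axis=1)
    return nearest.mean()  # E = (1/n) * sum_i ||t_i - p_i||_2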
Third, we compare the running time of the different methods, and report the average time for each testing object in Table IV. Although "Local Feature + ICP" shows a slightly higher point-to-point residual error than "Local Feature + FPFH", it runs significantly faster and is suitable for real-time performance.

TABLE IV
POSE ESTIMATION TIME (sec.)

                          Bottle   Cup     Mug     Avg.
  FPFH                    4.603    4.502   4.590   4.565
  Local Feature + ICP     0.055    0.015   0.074   0.048
  Local Feature + FPFH    2.719    0.493   1.329   1.514

C. Working Range Testing

The object recognition accuracy of our system is affected by the camera-object separation. The minimum distance recommended by the depth camera manufacturer is 30 cm. As the separation increases, the quality of the depth data deteriorates, and beyond 1 m the texture details of target objects are hard to capture. Similarly, RFID tag-reader communication is affected by the tag-reader separation. If the separation is too large, the power reaching the tag is too low to power the IC and backscatter the signal to the reader. We define a score called normalized RSSI for generalized comparison between different material-range-signal strength experiments. A score of 1 denotes a good backscattered signal strength of -20 dBm at the reader, and a score of 0 means the signal strength is below the sensitivity of the reader (-80 dBm).
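Assuming a simple linear mapping between the reader sensitivity (-80 dBm) and the -20 dBm reference (the paper does not state the exact normalization used), the score can be sketched as:

def normalized_rssi(rssi_dbm, floor_dbm=-80.0, ceiling_dbm=-20.0):
    """Map a backscattered signal strength in dBm to a score in [0, 1]."""
    score = (rssi_dbm - floor_dbm) / (ceiling_dbm - floor_dbm)
    return max(0.0, min(1.0, score))  # clamp to the [0, 1] range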
Recognition accuracy and normalized RSSI scores are obtained for different objects in this study by varying the camera-object and reader-object separation distances (see Fig. 8). From our observations, to achieve reliable sensing and good quality visualization, we set an acceptable score of 0.5-1 for both metrics. We propose a 40-75 cm working range between the camera & target object, and a working range of less than 100-150 cm between the tagged objects & readers, for good quality and reliable visualization. One of our ultimate goals is to package the camera and reader onto the head mount so that the need for a separate RFID infrastructure is eliminated. Therefore, this data shows that the RFID range is suitable for this type of application and that the human-object distance is limited by the camera.
Fig. 8. Plot showing normalized detection parameters (normalized RSSI in the case of RFID and projection accuracy in the case of augmented vision) from 1: good to 0: poor; 0.5 is chosen as the safe limit for good quality detection and rendering. Shaded regions show safe distances between object & RFID reader and between object & depth camera to achieve good quality results.

V. CONCLUSION

We present the working of an enhanced augmented-vision system named X-Vision, which superimposes physical objects with 3D holograms encoded with sensing information captured from the tag-sensors attached to everyday objects. Two testing cases, water level and temperature sensing, are demonstrated in this paper. Further experiments are also performed to evaluate the pose estimation pipeline and the working range of the system.

REFERENCES

[1] Ankur Agrawal, Glen J Anderson, Meng Shi, and Rebecca Chierichetti. Tangible play surface using passive rfid sensor array. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, page D101. ACM, 2018.
[2] Andrés Ayala, Graciela Guerrero, Juan Mateu, Laura Casades, and Xavier Alamán. Virtual touch flystick and primbox: two case studies of mixed reality for teaching geometry. In International Conference on Ubiquitous Computing and Ambient Intelligence, pages 309–320. Springer, 2015.
[3] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. Surf: Speeded up robust features. In European Conference on Computer Vision, pages 404–417. Springer, 2006.
[4] Paul J Besl and Neil D McKay. Method for registration of 3-d shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, volume 1611, pages 586–607. International Society for Optics and Photonics, 1992.
[5] Rahul Bhattacharyya, Christian Floerkemeier, and Sanjay Sarma. Low-cost, ubiquitous rfid-tag-antenna-based sensing. Proceedings of the IEEE, 98(9):1593–1600, 2010.
[6] Namho Chung, Heejeong Han, and Youhee Joun. Tourists' intention to visit a destination: The role of augmented reality (ar) application for a heritage site. Computers in Human Behavior, 50:588–599, 2015.
[7] Danilo De Donno, Luca Catarinucci, and Luciano Tarricone. A battery-assisted sensor-enhanced rfid tag enabling heterogeneous wireless sensor networks. IEEE Sensors Journal, 14(4):1048–1055, 2014.
[8] Manuel Ferdik, Georg Saxl, and Thomas Ussmueller. Battery-less uhf rfid controlled transistor switch for internet of things applications: a feasibility study. In Wireless Sensors and Sensor Networks (WiSNet), 2018 IEEE Topical Conference on, pages 96–98. IEEE, 2018.
[9] Mathieu Garon, Pierre-Olivier Boulet, Jean-Philippe Doironz, Luc Beaulieu, and Jean-François Lalonde. Real-time high resolution 3d data on the hololens. In Mixed and Augmented Reality (ISMAR-Adjunct), 2016 IEEE International Symposium on, pages 189–191. IEEE, 2016.
[10] Juan Garzón, Juan Pavón, and Silvia Baldiris. Augmented reality applications for education: Five directions for future research. In International Conference on Augmented Reality, Virtual Reality and Computer Graphics, pages 402–414. Springer, 2017.
[11] James E Gentle. Matrix transformations and factorizations. Springer, 2007.
[12] Otavio Good. Augmented reality language translation system and method, April 21 2011. US Patent App. 12/907,672.
[13] Azhar Hasan, Rahul Bhattacharyya, and Sanjay Sarma. Towards pervasive soil moisture sensing using rfid tag antenna-based sensors. In RFID Technology and Applications (RFID-TA), 2015 IEEE International Conference on, pages 165–170. IEEE, 2015.
[14] SNR Kantareddy, R Bhattacharyya, and S Sarma. Towards low-cost object tracking: Embedded rfid in golf balls using 3d printed masks. In RFID (RFID), 2017 IEEE International Conference on, pages 137–143. IEEE, 2017.
[15] SNR Kantareddy, R Bhattacharyya, and SE Sarma. Low-cost, automated inventory control of sharps in operating theaters using passive rfid tag-sensors. In RFID Technology & Application (RFID-TA), 2017 IEEE International Conference on, pages 16–21. IEEE, 2017.
[16] Leonid Karlinsky, Joseph Shtok, Yochay Tzur, and Asaf Tzadok. Fine-grained recognition of thousands of object categories with single-example training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4113–4122, 2017.
[17] Gun A Lee, Andreas Dünser, Seungwon Kim, and Mark Billinghurst. Cityviewar: A mobile outdoor ar application for city visualization. In Mixed and Augmented Reality (ISMAR-AMH), 2012 IEEE International Symposium on, pages 57–64. IEEE, 2012.
[18] David G Lowe. Object recognition from local scale-invariant features. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pages 1150–1157. IEEE, 1999.
[19] Cecilia Occhiuzzi, Amin Rida, Gaetano Marrocco, and Manos Tentzeris. Rfid passive gas sensor integrating carbon nanotubes. IEEE Transactions on Microwave Theory and Techniques, 59(10):2674–2684, 2011.
[20] Omer Rashid, Will Bamford, Paul Coulton, Reuben Edwards, and Jurgen Scheible. Pac-lan: mixed-reality gaming with rfid-enabled mobile phones. Computers in Entertainment (CIE), 4(4):4, 2006.
[21] Radu Bogdan Rusu, Nico Blodow, and Michael Beetz. Fast point feature histograms (fpfh) for 3d registration. In Robotics and Automation, 2009. ICRA'09. IEEE International Conference on, pages 3212–3217. IEEE, 2009.
[22] Mona Singh and Munindar P Singh. Augmented reality interfaces. IEEE Internet Computing, 17(6):66–70, 2013.
[23] Jonathan Dyssel Stets, Yongbin Sun, Wiley Corning, and Scott W Greenwald. Visualization and labeling of point clouds in virtual reality. In SIGGRAPH Asia 2017 Posters, page 31. ACM, 2017.
[24] Alexander Vaz, Aritz Ubarretxena, Ibon Zalbide, Daniel Pardo, Héctor Solar, Andrés Garcia-Alonso, and Roc Berenguer. Full passive uhf tag with a temperature sensor suitable for human body temperature monitoring. IEEE Transactions on Circuits and Systems II: Express Briefs, 57(2):95–99, 2010.
[25] Junchen Wang, Hideyuki Suenaga, Kazuto Hoshi, Liangjing Yang, Etsuko Kobayashi, Ichiro Sakuma, and Hongen Liao. Augmented reality navigation with automatic marker-free image registration using 3-d image overlay for dental surgery. IEEE Transactions on Biomedical Engineering, 61(4):1295–1304, 2014.
[26] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. arXiv preprint arXiv:1801.07829, 2018.
[27] Ko-Chiu Wu, Chun-Ching Chen, Tzu-Heng Chiu, and I-Jen Chiang. Transform children's library into a mixed-reality learning environment: Using smartwatch navigation and information visualization interfaces. In Pacific Neighborhood Consortium Annual Conference and Joint Meetings (PNC), 2017, pages 1–8. IEEE, 2017.
[28] Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017.
