
Food Calorie Measurement Using Deep Learning Neural Network

Parisa Pouladzadeh1, Pallavi Kuhad1, Sri Vijay Bharat Peddi1, Abdulsalam Yassine1, Shervin Shirmohammadi1,2
1 Distributed and Collaborative Virtual Environments Research (DISCOVER) Lab, University of Ottawa, Ottawa, Canada
2 Colleges of Engineering and Natural Sciences, Istanbul Şehir University, Turkey
Email: {ppouladzadeh | ayassine | shervin}@discover.uottawa.ca, {vpedd013 | pkuha009}@uottawa.ca

Abstract—Accurate methods to measure food and energy intake are crucial in the battle against obesity. Providing users/patients with convenient and intelligent solutions that help them measure their food intake and collect dietary information offers valuable support for long-term prevention and successful treatment programs. In this paper, we propose an assistive calorie measurement system to help patients and doctors succeed in their fight against diet-related health conditions. Our proposed system runs on smartphones, which allows the user to take a picture of the food and measure the amount of calorie intake automatically. In order to identify the food accurately in the system, we use deep convolutional neural networks to classify 10000 high-resolution food images for system training. Our results show that the accuracy of our method for food recognition of single food portions is 99%. The analysis and implementation of the proposed system are also described in this paper.

Keywords—calorie measurement, food recognition, segmentation, graph cut, deep neural network.

I. INTRODUCTION

Obesity in adults and children is considered a global epidemic [1]. The main cause of obesity is a combination of excessive food consumption and a lack of physical activity [2]. Therefore, the need to accurately measure diet becomes important. Preliminary studies among adolescents suggest that innovative use of technology may improve the accuracy of dietary information collected from young people [3]. Also, as people become used to a sedentary lifestyle, they are involuntarily carried away from being aware of their food energy intake. There is overwhelming evidence that metabolic complications caused by obesity increase the risks of developing adverse health consequences such as diabetes, high blood pressure, dyslipidaemia and hypertension [1]. People in general understand the links between diet and health. In fact, a wide spread of nutritional information and guidelines is available to users at their fingertips. However, such information alone has not prevented diet-related illnesses or helped patients to eat healthily. In most cases, people find it difficult to examine all of the information about nutrition and dietary choices. Furthermore, people are oblivious to measuring or controlling their daily calorie intake due to a lack of nutritional knowledge, irregular eating patterns or lack of self-control. Empowering patients with an effective long-term solution requires novel mechanisms that help them make permanent changes to their dietary quality and calorie intake.

Results in [4] show that a mobile phone food record is important among both adolescents and adults. More accurate methods of dietary assessment will also strengthen the ability of researchers to identify diet–disease and diet–gene relationships. For the above reasons, many researchers have proposed assistive calorie measurement systems that run on smartphones and allow the user to take a picture of the food and measure the calorie intake automatically. In our previous work [5]-[8], we introduced a semi-automatic system that assists dieticians in measuring calories and daily nutrient intake for the treatment of obese and overweight patients. The system enables the user/patient to obtain the measurement results of the food intake from the application, which simulates the calculation procedure performed by the dietician.

In this paper, we study the application of deep learning to food classification and recognition. Deep learning is an emerging approach in machine learning, proposed in recent years to move machine learning systems towards the discovery of multiple levels of representation. We show that deep learning can be a powerful method to significantly increase the accuracy of food classification and recognition. Our proposed system makes two main contributions to the state of the art, as follows:

- We propose the use of deep learning neural networks as a means of improving the accuracy of food classification and calorie measurement systems. Experimental results for the combination of different segmentation methods, such as color, texture and graph-cut segmentation, with a deep learning neural network show a 99% accuracy of food recognition for single food portions.

- We propose a model which integrates our mobile calorie measurement application with the deep neural network. Our proposed Convolutional Neural Network (CNN) serves as a backbone of the application and handles the training and testing requests at the top layers, without affecting the central layers. We customize the top layer of the deep neural network presented in [12]. This allows us to embed the functionality of the application easily and retain the top levels of the network to spot the relevant food images even on low-powered mobile devices.

The rest of the paper is organized as follows: In section II we present the related work. Section III gives our proposed system. In section IV, we describe the experimental results. Finally, in section V, we conclude and provide directions for future work.
II. RELATED WORK

In this section, we present some of the most common food intake measuring methods that have been developed in the last few years. The objective here is to describe the main advantages and drawbacks of these methods, in order to demonstrate the novelty and contribution of our proposed system. One of the first clinical works in this area is the 24-Hour Dietary Recall (24HR) [13]-[14]. This procedure lists the daily food intake using a special format for a period of 24 hours. The patient is expected to recall all the foods and beverages consumed during the 24 hours prior to the interview. In this method, estimation of food portion size is made using standardized cups and spoons. The recorded food amounts are converted into nutrient intake amounts using food composition tables. Another method is the Food Frequency Questionnaire (FFQ), which uses an external verification based on doubly labeled water and urinary nitrogen [15]. FFQ focuses on describing dietary patterns or food habits, but not calorie intake. The main disadvantages of the 24HR and FFQ are: the delay in reporting the eaten food, the underreporting of the size of food portions, the reliance on memory, the need for skilled interviewers who can estimate how many calories and nutrients the person has taken, the failure to quantify usual dietary intake, and the complex calculations needed to estimate frequencies. In other methods, such as [16][17], the food is weighed before and after eating, and a modified set of kitchen appliances containing an internal scale evaluates the plate and the portions before and after the food intake. But those kinds of approaches are inconvenient to users, increasing the underreporting caused by the proneness of the user to forget or the unwillingness of the patient to use these kinds of procedures.

To address the aforementioned issues, researchers have been looking into easier and as automated as possible ways to analyze food content, see, e.g., [18]-[25]. In [18], a web-based application is proposed which detects whether the user has habits considered as risk factors for obesity. The application acquires and registers data about diet, exercise, sleep, and fat mass, by using a web application and health information sensors. The major drawback of such systems is their inconvenience and the difficult learning process for the user. In [19], the authors propose a system that utilizes food images captured and stored by multiple users in a public Web service called FoodLog. A dictionary dataset of 6512 images is then formed, including calorie estimation. The images in the dictionary are used for dietary assessment, but with only 6512 images, the accuracy of such an approach is low. In [20], a new 3D/2D model-to-image registration framework is presented for estimating food volume from a single-view 2D image containing a reference object. In this system, the food is segmented from the background image using morphological operations, while the size of the food is estimated based on a user-selected 3D shape model. In [23], a set of pictures is taken before and after food consumption in order to recognize and classify the food and determine its size. In such a method, a premeasured and predefined measurement pattern placed inside the images is used to translate the size of each portion in pixels. All these conditions can generate difficulties, which have been addressed by [27], which proposes a system that captures the images and sends them to a research facility where the analysis and the extraction are performed. The major disadvantage of such a system is that it does not provide information to the users in real time; there is a considerable delay in providing the information due to the offline processing of images. In [24], the authors propose a method to automatically identify and locate food in a variety of images. Two concepts are combined in their algorithm. First, a set of segmented objects is partitioned into similar object classes based on their features; to achieve this, they applied different segmentation methods. Second, the automatically segmented regions are classified using a multichannel feature classification system, with SVM as the classifier. The final decision is obtained by combining class decisions from individual features. In [26], the authors use computer vision methods for volume estimation, computing the volume from 3D models obtained using multi-view geometry. In our system, we also use food images taken with the built-in camera of a smartphone. But we go one step further by implementing graph cut segmentation and deep learning algorithms as a means of improving the accuracy of our food classification and recognition system. Furthermore, the processing of the images and the calorie measurement results are provided to the user instantly. This means that our system is convenient to use and well suited to be a long-term solution for users. Moreover, our system uses cloud-based virtualization, where an emulation of the smartphone environment allows the application to run virtually on a remote server in the cloud. This is rather significant for overcoming the limited capability of smartphones to run intensive machine learning algorithms similar to those presented in this paper. Next, we present the details of the proposed solution.

III. PROPOSED SYSTEM

In this section, we discuss our proposed system in more detail. In subsection A, we introduce the reader to our previously proposed graph cut segmentation method (see [7]), followed by an analysis of the deep learning neural network mechanism in subsection B; details about calorie and time processing measurement are introduced in subsection C.

A. Graph Cut Segmentation

Before performing the segmentation on the image, the user captures the picture of the food with her thumb in a suitable position on the dish, so that the picture contains not only the food item but also the user's thumb, which is used for size calibration. To use graph cut segmentation, it is important to determine the features of a good graph that will be extracted from an image. The following three properties capture the most important features of our graph cut based food image segmentation method. First, it should be robust; i.e., if the image is somewhat distorted, the graph should not be deeply changed. In graph cut, each pixel of the image is mapped onto a vertex in a graph. Neighboring pixels are connected by weighted edges, where the weight is determined based on a predefined energy function. In the normalized cut approach, the cut cost is determined by the fraction of the total edge connections to all the vertices in the graph. Second, it should also have good algorithmic properties.
This means that the graph, when drawn, is actually a symbolic representation of the image; for instance, the boundaries between the regions should match the edges of the graph. Third, we would like to be able to rebuild an image from the graph, and for this new image to be a good compression of the initial image; in other terms, the loss due to the extraction process should be minimal. We provide the details of our graph cut food image segmentation method in [7],[9].
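To make the pixel-to-vertex mapping concrete, the snippet below is a minimal sketch of how a 4-connected pixel graph with intensity-based edge weights might be built. It is only an illustration: the exponential weight term and the sigma parameter are assumptions made for this example, not the actual energy function of the method described in [7],[9].

```python
import numpy as np

def build_pixel_graph(gray, sigma=10.0):
    """Map each pixel to a vertex and connect 4-neighbors with weighted edges.

    The weight is large for similar intensities and small across strong
    boundaries, so a minimum cut tends to follow object edges.
    (Illustrative energy term only; see [7],[9] for the actual method.)
    """
    h, w = gray.shape
    edges = []  # (vertex_i, vertex_j, weight)
    for y in range(h):
        for x in range(w):
            v = y * w + x
            for dy, dx in ((0, 1), (1, 0)):          # right and down neighbors
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    u = ny * w + nx
                    diff = float(gray[y, x]) - float(gray[ny, nx])
                    weight = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))
                    edges.append((v, u, weight))
    return edges

# Example: a tiny synthetic image with a bright "food" region on a dark plate.
img = np.zeros((4, 4))
img[1:3, 1:3] = 200.0
print(len(build_pixel_graph(img)), "edges")          # 24 edges for a 4x4 grid
```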
B. Deep Learning Neural Network

In this section, we provide the details of the deep neural network method used in our system. The first step in our approach is to generate a pre-trained model file with the help of the CNN network. This is performed by initially capturing a set of images of one particular class (e.g., 50 images of the apple class) and then labeling them with an object name-set (the object being apple). These images are considered the set of relevant (positive) images and are used to train the system. In the second step of the training, we re-train the system with a set of negative images (images that do not contain the relevant object). In our case, we trained the system with background images, so that it does not categorize them as part of the apple class. Once the model file is generated from the training, we load it into the application and test it against the images captured and submitted by the user. The system then performs the image recognition process and generates a list of probabilities against the label names. The label with the highest probability is prompted to the user in a dialog box, to confirm the object name. Once the object name is confirmed, the system performs the calorie computation by calculating the size of the food item with respect to the finger in the frame. It finally prints the output, with the corresponding calories, to the user. Figure 1 illustrates the above-mentioned process. We trained the system using the deep neural network model of [12], with various classes of food samples and useful hints from [9]-[11].

[Figure 1 flowchart: photos captured and submitted by our application, running on a virtual Android x86 emulator, go through feature extraction (shape, size, color and texture) and segmentation; the system is trained with a deep belief network on labelled images (e.g., Apple, Spaghetti) and on negative images (not Apple or Spaghetti), tested with food samples for accuracy (> 85%), and the resulting model file is used to recognize captured images and to perform the calorie calculation.]

FIGURE 1 IMPLEMENTATION OF DEEP BELIEF NETWORK IN THE ANDROID APPLICATION
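The train-then-test workflow of Figure 1 can be summarized with a short sketch. The toy classifier below (a nearest-centroid model with a trivial feature extractor) is only a stand-in for the actual deep network and model file of [12]; the class names, image data and probability scoring are invented purely to illustrate the sequence of steps (train on positives, re-train on negatives, then return label probabilities for a submitted photo).

```python
import numpy as np

# --- Hypothetical stand-ins for the real feature extractor and deep model ---
def extract_features(image):
    # Placeholder: shape, size, color and texture features would be computed here.
    return np.asarray(image, dtype=float).ravel()

class SimpleModel:
    """Toy classifier standing in for the pre-trained deep model file."""
    def __init__(self):
        self.centroids = {}                          # label -> mean feature vector
    def train(self, images, label):                  # positive examples of one class
        feats = np.stack([extract_features(im) for im in images])
        self.centroids[label] = feats.mean(axis=0)
    def predict_probabilities(self, image):
        f = extract_features(image)
        scores = {lbl: np.exp(-np.linalg.norm(f - c))
                  for lbl, c in self.centroids.items()}
        total = sum(scores.values())
        return {lbl: s / total for lbl, s in scores.items()}

# Step 1: train with relevant (positive) images, e.g. 50 apple photos,
# then re-train with negative (background) images so they are rejected.
apples      = [np.full((8, 8), 0.9) + 0.05 * np.random.rand(8, 8) for _ in range(50)]
backgrounds = [0.1 * np.random.rand(8, 8) for _ in range(50)]
model = SimpleModel()
model.train(apples, "apple")
model.train(backgrounds, "background")

# Step 2: test a user-submitted photo and prompt the most probable label.
photo = np.full((8, 8), 0.85)
probs = model.predict_probabilities(photo)
best = max(probs, key=probs.get)
print("Suggested label:", best, "with probability", round(probs[best], 3))
```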

From a technical point of view, the neural network that we apply computes a differentiable function of its input. For example, our application computes the probability of the match between the input image and the corresponding label set,

p(label | input image)                                   (1)

The standard way to model a neuron's output f as an activation function of its input x is either the hyperbolic tangent or the sigmoid function:

tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))                (2)
sigmoid(x) = 1 / (1 + e^(−x))

In our network, however, we use the rectified linear unit (ReLU), i.e., a unit whose activation function is max(0, x).
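For reference, the activation functions in (2) and the ReLU can be transcribed directly into code; this is a straightforward restatement of the formulas above and contains nothing specific to our system.

```python
import numpy as np

def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(tanh(x))     # approx. [-0.964  0.     0.964]
print(sigmoid(x))  # approx. [ 0.119  0.5    0.881]
print(relu(x))     # [ 0.  0.  2.]
```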

We want to find a set of weights and biases which make the cost as small as possible. To achieve that, we use an algorithm known as stochastic gradient descent. By using a smooth cost function like the quadratic cost, it turns out to be easy to figure out how to make small changes in the weights and biases so as to obtain an improvement in the cost [28][29]. Hence we are able to manipulate the weights and biases to bring the output closer to the desired outcome during the learning phase. Our goal is to train the neural network to find the weights and biases which minimize the quadratic cost function C(w, b). The idea is to use gradient descent to find the weights w_k and biases b_l which minimize the cost C. The gradient vector ∇C has the corresponding components ∂C/∂w_k and ∂C/∂b_l. Stochastic gradient descent can be used to speed up learning by estimating the gradient ∇C through ∇C_x computed for a small sample of randomly chosen training inputs. By averaging over this small sample, it turns out that we can quickly get a good estimate of the true gradient ∇C, and this helps speed up gradient descent, and thus learning.

For example, this algorithm helps us to tweak the weights (w) and biases (b) during the learning phase in such a way that we can finally determine the output as one of two classes (Apple or Cherry) without affecting the rest of the food classes. Small changes in either the weights or the bias will change the result from one food class to another. As shown in Figure 2, considering that we have taken the color feature into account, any change in the weight w1 or the bias b would alter the final result, which in this case means deciding between apple and cherry: if the probability of the image is p > 0.5 towards Apple, it is classified as Apple, and the same is the case for any other food type.

FIGURE 2 AN EXAMPLE SHOWING IMPLEMENTATION OF STOCHASTIC GRADIENT DESCENT
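A compact sketch of this idea is given below: a single sigmoid neuron separates "apple" from "cherry" using one color feature, and the gradient of the quadratic cost is estimated on a small random sample of training inputs at every step. The feature values, labels and learning rate are invented for illustration only; this is not the network used in our system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical one-dimensional color feature: apples around 0.8, cherries around 0.2.
x = np.concatenate([rng.normal(0.8, 0.05, 200), rng.normal(0.2, 0.05, 200)])
y = np.concatenate([np.ones(200), np.zeros(200)])        # 1 = apple, 0 = cherry

w, b, eta = 0.0, 0.0, 1.0
for step in range(2000):
    batch = rng.integers(0, len(x), size=10)              # small random sample
    xb, yb = x[batch], y[batch]
    a = sigmoid(w * xb + b)                                # network output
    # Gradient of the quadratic cost C = 0.5 * mean((a - y)^2), estimated on the batch.
    delta = (a - yb) * a * (1 - a)
    w -= eta * np.mean(delta * xb)
    b -= eta * np.mean(delta)

p = sigmoid(w * 0.75 + b)
print("p(apple) for a reddish sample:", round(p, 3),
      "->", "Apple" if p > 0.5 else "Cherry")
```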
Algorithm implementing stochastic gradient descent [29]:

• Input a set of training examples. For each training example x, set the corresponding input activation a^1 and perform the following steps:
• Feedforward: for each l = 2, 3, …, L compute z^l = w^l a^(l−1) + b^l and a^l = σ(z^l).
• Output error δ^L: compute the vector δ^L = ∇_a C ⊙ σ′(z^L).
• Backpropagate the error: for each l = L − 1, L − 2, …, 2 compute δ^l = ((w^(l+1))^T δ^(l+1)) ⊙ σ′(z^l).
• Gradient descent: for each l = L, L − 1, …, 2, update the weights and biases according to [29].

* Here L denotes the number of layers, δ^l the error at layer l, b^l the bias, w^l the weight, and C the cost.
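The listed steps translate almost line by line into code. The sketch below is a minimal, self-contained NumPy version of the feedforward, backpropagation and update equations above for a small fully connected network with sigmoid activations and quadratic cost; the layer sizes, learning rate and training data are arbitrary choices made for the example and have no relation to our deployed model.

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [4, 8, 3]                                    # layer sizes (L = 3 layers)
W = [rng.normal(0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
B = [np.zeros((m, 1)) for m in sizes[1:]]

sigma  = lambda z: 1.0 / (1.0 + np.exp(-z))
dsigma = lambda z: sigma(z) * (1.0 - sigma(z))

def sgd_step(x, y, eta=0.5):
    # Feedforward: z^l = w^l a^(l-1) + b^l, a^l = sigma(z^l)
    a, activations, zs = x, [x], []
    for w, b in zip(W, B):
        z = w @ a + b
        zs.append(z)
        a = sigma(z)
        activations.append(a)
    # Output error: delta^L = (a^L - y) ⊙ sigma'(z^L) for the quadratic cost
    delta = (activations[-1] - y) * dsigma(zs[-1])
    deltas = [delta]
    # Backpropagate: delta^l = ((w^(l+1))^T delta^(l+1)) ⊙ sigma'(z^l)
    for l in range(len(W) - 2, -1, -1):
        delta = (W[l + 1].T @ deltas[0]) * dsigma(zs[l])
        deltas.insert(0, delta)
    # Gradient descent: update weights and biases layer by layer
    for l in range(len(W)):
        W[l] -= eta * deltas[l] @ activations[l].T
        B[l] -= eta * deltas[l]
    return 0.5 * float(np.sum((activations[-1] - y) ** 2))

x = rng.random((4, 1))
y = np.array([[1.0], [0.0], [0.0]])
for epoch in range(200):
    cost = sgd_step(x, y)
print("final quadratic cost:", round(cost, 6))
```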
C. Calorie and Time Processing Measurement

• Calorie Measurement:

The calorie measurement procedure in our approach is as follows. After classifying the food object, the system calculates the total amount of the food portion on the plate in order to estimate the calorie value. The constraint here is that the calorie estimation is calculated based on the food image captured by the user, which is the only known variable. The challenge is in determining the dimensions of the food portion from the captured image, since the food size in the image is relative to the distance from which the photo was captured: food images captured from a closer range have larger food dimensions and vice versa, which leads to an inaccurate calorie estimate. To address this issue it is necessary to determine a reference object in the image, which also depends on the food category. We proposed two such approaches, which are able to estimate the calories of the recognized food portion on the plate:

1) Finger Based Calorie Measurement.
2) Calorie Measurement using Distance Estimation.

Both calorie measurement methods make use of a reference object: the finger is used as the reference object in the first method, whereas the calculated distance is used as the reference in the latter.

[Figure 3: two example cases, "Finger Based Calorie Measurement" [5] and "Calorie Measurement using Distance Estimation".]

FIGURE 3 CALORIE MEASUREMENT METHOD

1) Finger Based Calorie Measurement: As shown in Figure 3 (Finger Based Calorie Measurement), a food image is captured for bread, where the user's finger is placed next to the food object on the plate. Since the dimension of the user's finger is known, the system can compute the corresponding dimensions of the food object (bread) when captured from the top view and the side view. These dimensions are then used in calculating the volume of the food object, which is further mapped to the corresponding calorie value of the food object.
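A simplified version of this finger-reference scaling is sketched below: a centimeters-per-pixel scale is derived from the finger's known width, the food's real dimensions are estimated from the top and side views, and an approximate volume is mapped to calories through a per-cm³ calorie density. All numeric values (finger width, pixel measurements, density table) are invented for illustration and are not the calibration data of the actual system.

```python
# Hypothetical measurements extracted from the segmented top and side views (pixels).
FINGER_WIDTH_CM = 1.6          # assumed known dimension of the user's finger
finger_width_px = 40.0         # finger width measured in the photo
food_top_px  = (220.0, 160.0)  # width x depth of the food region, top view
food_side_px = 55.0            # height of the food region, side view

# Illustrative calorie densities (kcal per cm^3); real values would come from
# a food composition table once the food type has been recognized.
CALORIE_DENSITY = {"bread": 0.80, "apple": 0.52}

def finger_based_calories(food_type):
    cm_per_px = FINGER_WIDTH_CM / finger_width_px     # scale from the reference object
    width  = food_top_px[0] * cm_per_px
    depth  = food_top_px[1] * cm_per_px
    height = food_side_px   * cm_per_px
    volume = width * depth * height                    # coarse box approximation
    return CALORIE_DENSITY[food_type] * volume

print("Estimated calories for bread:", round(finger_based_calories("bread"), 1))
```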
2) Calorie Measurement using Distance Estimation: On the other hand, as shown in Figure 3 (Calorie Measurement using Distance Estimation), the distance between the food and the mobile device is calculated using the mobile sensors, namely the accelerometer and the magnetic field sensor. The values computed from these sensors enable the system to compute the angle at which the person is holding the device; this angle, together with the person's height, is used in computing the distance between the food object and the mobile device. The captured image, along with the computed distance, is then processed by the system in order to determine the block dimensions, which are further used to compute the area of the food portion.
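The distance-based approach can be sketched in the same spirit. The code below assumes the device tilt is obtained from the sensors (an accelerometer-only pitch is used as a simple stand-in for the full accelerometer-plus-magnetometer orientation) and that the camera height is derived from the person's height; the trigonometric relation and all constants are illustrative assumptions, not the exact formulation used in the deployed system.

```python
import math

def device_pitch_deg(ax, ay, az):
    """Approximate pitch angle (degrees) from accelerometer readings (m/s^2).

    The real application also uses the magnetic field sensor to obtain a full
    orientation; this accelerometer-only version is a simplification.
    """
    return math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))

def camera_to_food_distance(person_height_m, pitch_deg):
    """Distance from the camera to the food on the table (illustrative geometry).

    Assumes the camera is held near eye level and the food sits at roughly
    table height; the 0.4 m drop per metre of height is invented here purely
    to make the example concrete.
    """
    camera_drop = 0.4 * person_height_m
    return camera_drop / math.tan(math.radians(pitch_deg))

pitch = device_pitch_deg(ax=0.0, ay=5.0, az=8.5)       # phone tilted toward the plate
dist = camera_to_food_distance(person_height_m=1.75, pitch_deg=pitch)
print("pitch: %.1f deg, estimated distance: %.2f m" % (pitch, dist))
```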
Although the two methods use different methodologies, they are fundamentally based on the concept of a reference object. Table 1 shows the uncertainty measurement calculations for each of these methods.

TABLE 1 UNCERTAINTY MEASUREMENT FOR CALORIE MEASUREMENT USING DISTANCE ESTIMATION

Food Item | Finger Based Calorie Measurement | Calorie Measurement using Distance Estimation | Calorie difference between the two methods
Bread     | 139.018 | 138.276 | 0.742
Apple     | 95      | 93.4    | 1.6
Banana    | 105.21  | 99.3    | 5.91

• Time Processing Measurement:

The overall time taken for processing the food image, recognizing the food object and computing the calories has improved with every iteration. We applied the different methods shown in Figure 4. MapReduce SVM for food recognition and calorie computation took 163.5 seconds overall; the cause of this significantly long time was the allocation and deallocation of cloud resources while running the SVM at runtime. With the use of the deep learning algorithm we were able to improve the overall time consumption to 26.96 seconds. This was still a considerably long wait for a user whose food image request is being processed. Using cloud virtualization along with deep learning we were able to reduce the processing time to 14.64 seconds, which we were able to improve further by using a decision mechanism.

FIGURE 4 TIME PROCESSING WITH DIFFERENT ALGORITHMS
IV. EXPERIMENTAL RESULTS

This section presents the experimental results of our system. In this work, we have combined graph cut segmentation and a deep neural network. The combination of these two methods allows us to improve the accuracy of our food classification and recognition significantly compared to our previous work in [12]. By recognizing the food portions with these two models, and by having the size and shape of the food portions from the graph cut algorithm, we are able to calculate the calories of the whole food portion.

Before the implementation of the image recognition algorithm in the Android application, the first step in our approach is to generate a pre-trained model file with the help of the CNN network. We performed this step by initially capturing a set of images of one particular class (e.g., 50 images of the apple class) and then labeling them with an object name-set (the object being apple). These images are considered the set of relevant (positive) images. After the image-sets are captured, the system is trained with these images. Then, the system is re-trained with the set of negative images (images that do not contain the relevant objects). In our case, we trained the system with background images, so that it does not recognize them or categorize them as part of the image class. Once the model file is generated from the training, we load it into the Android application and test it against the images captured and submitted by the user.

The label with the highest probability is prompted to the user in a dialog box to confirm the food type. If the food type suggested by the application is correct, the user clicks "Yes"; if not, the user clicks the "No" button. If the user clicks "Yes", the application displays the estimated calorie value of the food type. If the user clicks "No", the application prompts the user to enter the correct food type and then displays the estimated calorie value based on the user's entered information.

In this paper, our data set comprises 30 different categories of food and fruits, and each category contains more than 100 images. These food and fruit images are divided into training and testing sets, where around 50% of the images from each group are used to train the system and the remaining images serve as the testing set.
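The per-category split used for evaluation can be sketched as follows; the category names and file lists are hypothetical placeholders, while the 50/50 ratio follows the description above and the shuffling is an assumption of this example.

```python
import random

def split_dataset(images_by_category, train_fraction=0.5, seed=42):
    """Split each category's image list into training and testing sets."""
    rng = random.Random(seed)
    train, test = {}, {}
    for category, images in images_by_category.items():
        shuffled = images[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_fraction)
        train[category], test[category] = shuffled[:cut], shuffled[cut:]
    return train, test

# Hypothetical file lists; the real data set has 30 categories with 100+ images each.
dataset = {"red_apple": ["red_apple_%d.jpg" % i for i in range(100)],
           "orange":    ["orange_%d.jpg" % i for i in range(100)]}
train_set, test_set = split_dataset(dataset)
print(len(train_set["red_apple"]), "training /", len(test_set["red_apple"]), "testing images")
```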
As Figure 5 and Table 3 show, by applying graph cut segmentation and the Deep Neural Network algorithm we obtain better recognition. Our system can recognize the food portions very accurately within 3 seconds. The results in Table 3 show that we reach 99% accuracy for single food portions.

Furthermore, we include a repeated-measurement analysis in our work. One way to increase the confidence in experimental data is to repeat the same measurement many times and to better estimate uncertainties by checking how reproducible the measurements are. When dealing with repeated measurements, three statistical quantities are important: the average or mean (an estimate of the "true" value of the measurement), the standard deviation (a measure of the "spread" in the data), and the standard error (an estimate of the uncertainty in the average of the measurements). The average and standard error for our measurements are summarized in Table 2.
TABLE 2 REPEATED UNCERTAINTY OF MEASUREMENT

Food items   | Real Calories | Average Calories | Standard Error
Red Apple    | 80  | 80  | 0
Orange       | 71  | 70  | 0
Tomato       | 30  | 30  | 0.01
Carrot       | 30  | 28  | 0.1
Bread        | 68  | 68  | 0.5
Pasta        | 280 | 276 | 0.3
Egg          | 17  | 17  | 0
Banana       | 10  | 10  | 0
Cucumber     | 30  | 30  | 0.25
Green Pepper | 16  | 16  | 0.04
Strawberry   | 53  | 52  | 0.5

The results show that the average calories are very close to the real values, and the small standard errors also reflect the accuracy of the system. The overall accuracy of the system with both methods is shown in Table 3.

TABLE 3 FOOD RECOGNITION ACCURACY FOR SINGLE FOOD
Recognition rate (%):

N  | Food items       | Using color-texture segmentation | Using graph-cut, color-texture segmentation | Using Deep Neural Network Method
1  | Red Apple        | 97.64 | 100  | 100
2  | Orange           | 95.59 | 97.5 | 99
3  | Corn             | 94.85 | 96   | 99.5
4  | Tomato           | 89.56 | 95   | 100
5  | Carrot           | 99.79 | 100  | 100
6  | Bread            | 98.39 | 99   | 99
7  | Pasta            | 94.75 | 98   | 100
8  | Sauce            | 88.78 | 92   | 98
9  | Chicken          | 86.55 | 89   | 100
10 | Egg              | 77.53 | 83   | 100
11 | Cheese           | 97.47 | 97   | 100
12 | Meat             | 95.73 | 96   | 100
13 | Onion            | 89.99 | 93   | 99.4
14 | Beans            | 98.68 | 98   | 100
15 | Fish             | 77.7  | 85   | 100
16 | Banana           | 97.65 | 97   | 100
17 | Green Apple      | 97.99 | 97   | 99
18 | Cucumber         | 97.65 | 98   | 100
19 | Lettuce          | 77.55 | 85   | 100
20 | Grapes           | 95.7  | 95   | 98
21 | Potato           | 88.56 | 89   | 100
22 | Tangerine        | 97.59 | 99   | 100
23 | Chocolate Cake   | 88.19 | 85   | 100
24 | Caramel Cake     | 85.29 | 85   | 100
25 | Rice             | 94.85 | 94   | 100
26 | Green Pepper     | 97.99 | 98   | 100
27 | Strawberry       | 83.47 | 98   | 99
28 | Cooked Vegetable | 92.62 | 96   | 100
29 | Cabbage          | 77.55 | 100  | 100
30 | Blueberry        | 83.47 | 95   | 100
   | Total average    | 92.21 | 95   | 99

V. CONCLUSION

Our aim in this paper is to empower the user with a convenient, intelligent and accurate system that helps them become sensible about their calorie intake. We employed a
rather unique combination of graph cut segmentation and deep learning neural networks as a means of accurately classifying and recognizing food items. We showed that the combination of those two methods provides a powerful instrument for attaining up to 100% accuracy of food recognition in our system. We presented the implementation of the virtualization approach of the application, which allows us to benefit from cloud-based resources. Our plan for future work is to increase our database of images and to use the approach presented in this paper to test mixed food portions.

FIGURE 5 RESULT OF FOOD RECOGNITION
REFERENCES

[1] World Health Organization, "Obesity Study," October 2011. [Online]. Available: http://www.who.int/mediacentre/factsheets/fs311/en/index.html
[2] World Health Organization, "World Health Statistics 2012," 2012. [Online]. Available: http://www.who.int/gho/publications/world_health_statistics/2012/en/index.html
[3] F. Zhu, A. Mariappan, C. J. Boushey, D. Kerr, K. D. Lutes, D. S. Ebert, and E. J. Delp, "Technology-assisted dietary assessment," International Society for Optics and Photonics, pp. 681411-681420, 2008.
[4] B. L. Daugherty, T. E. Schap, R. Ettienne-Gittens, F. M. Zhu, M. Bosch, E. J. Delp, D. S. Ebert, D. A. Kerr, and C. J. Boushey, "Novel Technologies for Assessing Dietary Intake: Evaluating the Usability of a Mobile Telephone Food Record Among Adults and Adolescents," published online 2012.
[5] P. Pouladzadeh, S. Shirmohammadi, and R. Almaghrabi, "Measuring Calorie and Nutrition from Food Image," IEEE Transactions on Instrumentation & Measurement, vol. 63, no. 8, pp. 1947-1956, August 2014.
[6] P. Pouladzadeh, S. Shirmohammadi, A. Bakirov, A. Bulut, and A. Yassine, "Cloud-Based SVM for Food Categorization," Multimedia Tools and Applications, Springer, vol. 74, issue 14, pp. 5243-5260.
[7] P. Pouladzadeh, S. Shirmohammadi, and A. Yassine, "Using Graph Cut Segmentation for Food Calorie Measurement," IEEE International Symposium on Medical Measurements and Applications, pp. 1-6, Lisbon, June 2014.
[8] P. Pouladzadeh, P. Kuhad, S. V. B. Peddi, A. Yassine, and S. Shirmohammadi, "Mobile Cloud Based Food Calorie Measurement," 4th International IEEE Workshop on Multimedia Services and Technologies for E-Health (MUST-EH), ICME, China, July 2014.
[9] Y. B. Yuri and G. F. Lea, "Graph Cuts and Efficient N-D Image Segmentation," International Journal of Computer Vision, vol. 70, no. 2, pp. 109-131, 2006.
[10] Y. Boykov and V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Transactions on PAMI, vol. 26, no. 9, pp. 1124-1137, 2004.
[11] http://neuralnetworksanddeeplearning.com/chap2.html
[12] https://www.jetpac.com/
[13] M. Livingstone, P. Robson, and J. Wallace, "Issues in dietary intake assessment of children and adolescents," British Journal of Nutrition, vol. 92, pp. 213-222, 2004.
[14] L. Bandini, A. Must, H. Cyr, S. Anderson, J. Spadano, and W. Dietz, "Longitudinal changes in the accuracy of reported energy intake in girls 10-15 y of age," The American Journal of Clinical Nutrition, vol. 78, pp. 480-484, 2003.
[15] W. Luo, H. Morrison, M. de Groh, C. Waters, M. DesMeules, E. Jones-McLean, A.-M. Ugnat, S. Desjardins, M. L., and Y. Ma, "The burden of adult obesity in Canada," Chronic Diseases in Canada, vol. 27, no. 4, pp. 135-144, 2007.
[16] P.-Y. Chi, J.-H. Chen, H.-H. Chu, and J.-L. Lo, "Enabling Calorie-Aware Cooking in a Smart Kitchen," Springer-Verlag Berlin Heidelberg, vol. 5033, pp. 116-127, 2008.
[17] M. S. Westerterp-Plantenga, "Eating behavior in humans, characterized by cumulative food intake curves: a review," Neuroscience and Biobehavioral Reviews, vol. 24, pp. 239-248, 2000.
[18] Y. Kato, T. Suzuki, K. Kobayashi, and Y. Nakauchi, "A web application for an obesity prevention system based on individual lifestyle analysis," IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1718-1723, Oct. 2012.
[19] T. Miyazaki, G. C. De Silva, and K. Aizawa, "Image-based Calorie Content Estimation for Dietary Assessment," IEEE International Symposium on Multimedia (ISM), pp. 363-368, 5-7 Dec. 2011.
[20] H. C. Chen, W. Jia, Z. Li, Y. Sun, and M. Sun, "3D/2D model-to-image registration for quantitative dietary assessment," 38th Annual Northeast Bioengineering Conference (NEBEC), pp. 95-96, March 2012.
[21] M. Sun, Q. Liu, K. Schmidt, J. Yang, N. Yao, J. D. Fernstrom, M. H. Fernstrom, J. P. DeLany, and R. J. Sclabassi, "Determination of Food Portion Size by Image Processing," International IEEE EMBS Conference, pp. 871-874, 2008.
[22] Z. Al-Battashi, J. Bronlund, and G. Sen Gupta, "Investigations Into Force Sensor Characteristics for Food Texture Measurements," IEEE International Conference on Instrumentation and Measurement Technology (I2MTC), pp. 2089-2094, 2015.
[23] Y. Saeki and F. Takeda, "Proposal of Food Intake Measuring System in Medical Use and Its Discussion of Practical Capability," Springer-Verlag Berlin Heidelberg, vol. 3683, pp. 1266-1273, 2005.
[24] F. Zhu, M. Bosch, N. Khanna, and C. J. Boushey, "Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 1, pp. 377-389, January 2015.
[25] C. Xi-Ai, Z. Guang-Xin, H. Ping-Jie, H. Di-Bo, K. Xu-Sheng, and Z. Ze-Kui, "Classification of the green tea varieties based on support vector machines using terahertz spectroscopy," IEEE Instrumentation and Measurement Technology Conference (I2MTC), pp. 1-5, May 2011.
[26] J. Dehais, S. Shevchik, P. Diem, and S. G. Mougiakakou, "Food Volume Computation for Self Dietary Assessment Applications," IEEE 13th International Conference on Bioinformatics and Bioengineering (BIBE), pp. 1-4, 2013.
[27] C. K. Martin, S. Kaya, and B. K. Gunturk, "Quantification of food intake using food image analysis," IEEE International Conference of Engineering in Medicine and Biology Society, vol. 2009, pp. 6869-6872, 2009.
[28] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012.
[29] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[30] P. Kuhad, A. Yassine, and S. Shirmohammadi, "Using Distance Estimation and Deep Learning to Simplify Calibration in Food Calorie Measurement," in Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA) 2015, Shenzhen, China.
