
Anatomical features represented by visual words and their quantification
Sung-Wook Hwang, Kayoko Kobayashi, Junji Sugiyama

Building: IPB International Convention Centre (IPB ICC)
Room: Meeting Room A
Date: 2019-08-28 02:10 PM – 02:20 PM
Last modified: 2019-07-02

Abstract


To quantify anatomical features, we created visual words by extracting local features from a Lauraceae image dataset with the scale-invariant feature transform (SIFT) algorithm and clustering them. The dataset consists of 1019 cross-sectional optical micrographs covering 9 species across 6 genera. Our previous study confirmed that clusters of local features effectively detect major anatomical features. In cross-validation, the minimum error was obtained when the number of visual words was 1000, so further analysis was performed with this vocabulary size. By analyzing the visual words, we can classify and quantify the aggregation of different combinations of cell elements. In addition, analysis of term frequency-inverse document frequency (tf-idf) weights allowed us to predict which anatomical features are species-specific. Although expert wood anatomists may have an empirical understanding of the cell composition of the species they frequently deal with, it is difficult to present that knowledge as quantified data. The proposed method enables quantification of wood cells and can therefore serve as a tool to support established wood anatomy.
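To illustrate the kind of pipeline the abstract describes, the sketch below builds a bag-of-visual-words representation with tf-idf weighting. It is not the authors' code: it assumes OpenCV's SIFT and scikit-learn's KMeans and TfidfTransformer, uses a hypothetical list of micrograph file paths, and fixes the vocabulary size at 1000 only because the abstract reports that value as optimal.

```python
# Minimal bag-of-visual-words sketch (illustrative, not the authors' implementation).
# Assumptions: OpenCV with SIFT, scikit-learn, and grayscale micrograph files
# whose paths are placeholders below.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer

N_VISUAL_WORDS = 1000  # vocabulary size reported as optimal in the abstract


def extract_sift_descriptors(image_paths):
    """Collect 128-dim SIFT descriptors from every image."""
    sift = cv2.SIFT_create()
    per_image_desc = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is None:
            desc = np.empty((0, 128), dtype=np.float32)
        per_image_desc.append(desc)
    return np.vstack(per_image_desc), per_image_desc


def build_histograms(per_image_desc, kmeans):
    """Quantize each image's descriptors to visual words and count occurrences."""
    hists = np.zeros((len(per_image_desc), N_VISUAL_WORDS))
    for i, desc in enumerate(per_image_desc):
        if len(desc) == 0:
            continue
        words = kmeans.predict(desc.astype(np.float64))
        for w in words:
            hists[i, w] += 1
    return hists


# Placeholder paths; a real run needs the actual micrograph dataset.
image_paths = ["micrograph_001.png", "micrograph_002.png"]

# 1. Extract local features, 2. cluster them into a visual-word vocabulary.
all_desc, per_image_desc = extract_sift_descriptors(image_paths)
kmeans = KMeans(n_clusters=N_VISUAL_WORDS, n_init=1, random_state=0)
kmeans.fit(all_desc.astype(np.float64))

# 3. Represent each image as a visual-word histogram, then apply tf-idf,
# which up-weights visual words that occur in only a few images and thus
# hints at species-specific anatomical patterns.
histograms = build_histograms(per_image_desc, kmeans)
tfidf = TfidfTransformer().fit_transform(histograms)
print(tfidf.shape)  # (n_images, 1000)
```

In such a setup, the vocabulary size would typically be chosen by cross-validated classification error over several candidate sizes, which is how the abstract motivates the choice of 1000 visual words.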

Keywords


computer vision; visual words; wood anatomy
