Patrisius Bagus Alvito Baylon
GPS signals cannot penetrate buildings, making it difficult for drones to determine their position indoors. Using external visual sensors with object detection models offers an alternative approach. However, these sensors have drawbacks, including the risk of false detections caused by prioritizing computation speed over detection accuracy. Moreover, existing research focuses more on position determination than on developing autonomous guidance capabilities that utilize external visual sensor data. To address these challenges, this study determines the drone's position in real time using a fast computer vision algorithm. A Kalman Filter is then used to enhance the resilience of the external visual sensors to false detections. The study integrates the Kalman Filter's position estimates into a waypoint-following algorithm for autonomous indoor navigation.
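The abstract does not spell out the guidance law, so the following is only a minimal sketch of a waypoint follower driven by filtered position estimates; the acceptance radius, gain, and the `get_position`/`send_velocity` interface are all illustrative assumptions, not the thesis's implementation.

```python
import math

# Hypothetical waypoint-following loop fed by filtered position
# estimates; constants and callbacks are placeholders for illustration.
WAYPOINT_RADIUS = 0.15  # m, acceptance radius for switching waypoints
KP = 0.8                # proportional gain on position error

def follow_waypoints(get_position, send_velocity, waypoints):
    """Step through waypoints using filtered position estimates.

    get_position() -> (x, y): filtered local position in metres.
    send_velocity(vx, vy):    forwards a velocity setpoint to the drone.
    """
    for wx, wy in waypoints:
        while True:
            x, y = get_position()
            ex, ey = wx - x, wy - y
            if math.hypot(ex, ey) < WAYPOINT_RADIUS:
                break  # waypoint reached; advance to the next one
            # Simple proportional velocity command toward the waypoint
            send_velocity(KP * ex, KP * ey)
```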
The research comprises collecting camera parameters and training datasets for the computer vision model, followed by flight testing, analysis, and conclusions. The external visual sensor uses curve fitting and camera-model equations to convert pixel coordinates into local coordinates, achieving its best performance with errors of 3.48% and 2.29% on the X and Y axes, respectively.
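The abstract reports the error figures but not the exact camera-model or curve-fit equations. A common way to realize this step is pinhole back-projection with a fitted depth; the sketch below assumes calibrated intrinsics and a placeholder curve fit mapping detected bounding-box height to depth, neither of which is taken from the thesis.

```python
import numpy as np

# Illustrative pinhole back-projection; the intrinsics and the depth
# curve fit are assumed placeholder values, not the thesis's calibration.
FX, FY = 920.0, 918.0   # focal lengths in pixels (assumed)
CX, CY = 640.0, 360.0   # principal point in pixels (assumed)

def depth_from_bbox(bbox_height_px, a=550.0, b=0.05):
    """Assumed curve fit: depth ~ a / h + b, fitted offline from data."""
    return a / bbox_height_px + b

def pixel_to_local(u, v, bbox_height_px):
    """Convert a detection's pixel centre to local coordinates (metres)."""
    z = depth_from_bbox(bbox_height_px)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])
```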
Position determination is further enhanced with a sensor fusion algorithm that combines data from the external visual sensors, accelerometers, and optical flow sensors via an Adaptive Kalman Filter. The Adaptive Kalman Filter dynamically adjusts the measurement and process noise covariances using a forgetting factor and compares incoming estimates against previous ones to increase robustness during prolonged false detections.
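The abstract names the mechanisms (forgetting factor, comparison against previous estimates) without giving equations. One common realization of that style is Sage-Husa adaptive filtering with an innovation gate; the one-dimensional sketch below uses that approach with assumed tuning values, so the model, gate threshold, and forgetting factor are illustrative rather than the thesis's actual design.

```python
import numpy as np

# Minimal 1D adaptive Kalman filter sketch in the Sage-Husa style:
# a forgetting factor weights recent innovations more heavily when
# re-estimating the noise covariances, and an innovation gate compares
# each measurement against the previous estimate to reject outliers.
# All tuning values here are assumptions, not the thesis's parameters.
class AdaptiveKF1D:
    def __init__(self, q=1e-3, r=1e-2, b=0.97, gate=3.0):
        self.x = 0.0      # state estimate (one position axis)
        self.p = 1.0      # estimate covariance
        self.q = q        # process noise covariance (adapted online)
        self.r = r        # measurement noise covariance (adapted online)
        self.b = b        # forgetting factor in (0, 1)
        self.gate = gate  # reject innovations beyond gate * sigma
        self.k = 0        # step counter

    def update(self, z):
        self.k += 1
        p_prev = self.p
        # Predict (random-walk model for brevity)
        x_pred, p_pred = self.x, self.p + self.q
        # Innovation gate: a measurement far from the previous estimate
        # is treated as a false detection; keep the prediction only
        e = z - x_pred
        s = p_pred + self.r
        if abs(e) > self.gate * np.sqrt(s):
            self.x, self.p = x_pred, p_pred
            return self.x
        # Forgetting-factor weight: decays toward (1 - b) over time
        d = (1 - self.b) / (1 - self.b ** (self.k + 1))
        k_gain = p_pred / s
        self.x = x_pred + k_gain * e
        self.p = (1 - k_gain) * p_pred
        # Adapt the noise covariances from the gated innovation
        self.r = max((1 - d) * self.r + d * (e * e - p_pred), 1e-6)
        self.q = max((1 - d) * self.q
                     + d * (k_gain**2 * e * e + self.p - p_prev), 1e-9)
        return self.x
```

During a run of rejected measurements, only the prediction step executes, which is consistent with the abstract's observation that prediction quality then hinges on the onboard sensors.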
Subsequent flight tests demonstrate the integrated system's ability to navigate to waypoints while effectively reducing the impact of false detections, although the accuracy of position prediction during false detections depends on the quality of the drone's onboard sensors.