
Trans. JSASS Aerospace Tech. Japan Vol. 12, No. APISAT-2013, pp. a59-a67, 2014

Development of Multiple AR.Drone Control System for Indoor Aerial Choreography*

By SungTae MOON, DongHyun CHO, Sanghyuck HAN, DongYoung REW, and Eun-Sup SIM

Aerospace Convergence Technology Team, Korea Aerospace Research Institute, Daejeon, Korea

(Received January 24th, 2014)

In the past decade, small quadrotors have been widely used in various areas ranging from military to entertainment applications. Recently, quadrotors have become popular for demonstrating choreographed aerial maneuvers such as dancing and playing musical instruments under collaborative control of multiple vehicles. Collaborative control requires several techniques. First, localization recognition is required to avoid collisions. Second, reliable communication is needed in order to receive information from multiple vehicles without disconnection. Finally, a ground control station is needed which can control multiple vehicles and play a mission scenario for the maneuvers. In order to accurately control multiple quadrotors indoors, a motion capture based system was developed. However, this system is expensive and sensitive to the surrounding environment. In this paper, we propose a multiple AR.Drone control system based on marker images as well as motion capture for indoor environments in which GPS information cannot be received. We demonstrate aerial choreography with multiple quadrotors in accordance with a mission scenario.

Key Words: AR.Drone, Quadrotors, Indoor Navigation, Aerial Choreography, Control System

© 2014 The Japan Society for Aeronautical and Space Sciences
* Presented at the 2013 Asia-Pacific International Symposium on Aerospace Technology (APISAT-2013), Nov. 2013, Takamatsu, Japan

1. Introduction

Quadrotors have been widely used in various areas ranging from military to entertainment applications because of their reliable control and rapid movement. Recently, quadrotors have become popular for demonstrating aerial choreography such as dancing and playing musical instruments under collaborative control of multiple vehicles. Collaborative control requires several techniques. First, localization recognition is needed to avoid collisions. Second, reliable communication is needed in order to receive information from multiple vehicles without disconnection. Finally, a ground control station is needed which can control multiple vehicles and play a mission scenario.

The AR.Drone made by Parrot, shown in Fig. 1, is a popular quadrotor for leisure. The AR.Drone can perform automated actions such as takeoff, landing, and hovering in the same position while flying. It may be used indoors or outdoors, with or without the protective structure. In addition, the AR.Drone is an affordable commercial quadrotor platform offering an open API and a freely downloadable SDK for developers. Besides the bulk of information located on the developer's website, many useful pieces of information can also be found in numerous papers. The navigation and control system of the first version of the AR.Drone is described in the paper of Bristeau,2) with a focus on the fusion of measured data. In addition, it is possible to modify the configuration of the drone because it uses an embedded Linux operating system that can be accessed via the telnet protocol. However, the AR.Drone is not suitable for collaborative control of multiple vehicles. Thus, in order to apply it to multiple formation flight, the internal system has to be modified.
Fig. 1. AR.Drone.1)

Localization recognition is a key factor indoors, where no GPS information is available. In order to avoid collisions among multiple quadrotors, each quadrotor has to recognize its own position in real time. For indoor positioning, motion capture and marker-based systems are generally used. A motion capture system can track objects with an accuracy of less than 1 mm. However, it is very expensive to deploy and is sensitive to aspects of the surrounding environment such as sunlight. In addition, frequent recalibration is required because of the changing environment. A marker-based system uses only markers, without equipment such as motion capture cameras, and is robust to the surrounding environment. However, it is less accurate than the motion capture system.

In this paper, we propose a multiple quadrotor control system for indoor environments in which GPS information cannot be received.

We demonstrate aerial choreography with multiple AR.Drones in accordance with a mission scenario. In order to recognize their current positions, both motion capture and marker image based systems are implemented.

The remainder of this paper is organized as follows. Sections 2 and 3 introduce the motion capture and marker-based systems for indoor aerial choreography, respectively. Section 4 explains how the current position is recognized. Section 5 describes how the AR.Drone is modified for multiple-vehicle control. Section 6 explains the autopilot flight control system. Section 7 explains the implemented ground control station. The experimental results are presented in Section 8, followed by the conclusion in Section 9.

2. Motion Capture System

Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications because of its accurate and fast measurement. At the Flying Machine Arena of ETH, a motion capture system of this kind is employed to demonstrate autonomous flight of indoor flying vehicles.3)

Fig. 2. Motion capture based system.

Our 14-camera motion capture system provides complete pose data for all appropriately marked vehicles within a space of 10 m × 10 m × 6.5 m. The system offers 10 megapixels of resolution and full-frame capture speeds of up to 250 fps with a latency of approximately 10 ms. In order to localize the vehicle and measure its orientation, the vision system requires that reflecting infrared markers be mounted on the AR.Drone, as shown in Fig. 3. In order to distinguish the AR.Drones by motion capture, at least seven unique marker patterns are used for every vehicle.

Fig. 3. Markers mounted on the AR.Drone.

As a result, each AR.Drone is localized in all 6 degrees of freedom every 10 ms. The vehicle pose data is distributed to the ground control station, called the Qt-based Multiple AR.Drone Controller (QMAC), as shown in Fig. 2. The controller transmits the calculated command data through a wireless link (WiFi, IEEE 802.11g). In practice, however, the marker configuration may be slightly altered by the changing environment, which can cause offsets in the measurement of the true attitude of the quadrotor after a few flights. Therefore, a recalibration procedure is required frequently. In addition, the system is very expensive to deploy. For these reasons, another image-based system, called the marker-based system, was developed; it needs neither expensive motion capture equipment nor recalibration.

3. Marker Based System

The marker-based system for aerial choreography is shown in Fig. 4. The system consists of the quadrotors, the bottom camera images, and the ground control station. When the AR.Drones take off, they transmit the bottom camera image and navigation data to the ground control station via wireless communication (IEEE 802.11g). The ground control station extracts the position information of the nearest marker from the bottom camera image.

Fig. 4. The marker based system.

Each marker consists of direction, unique ID, and parity parts, as shown in Fig. 5. For the direction, there are fixed sub-blocks (a, b, c, and d): block a always has to be black and the others white. For the unique ID, 11 blocks are used, numbered from 0 to 10. By combining these 11 blocks, 2,048 markers can be created. Finally, a parity block is used for accuracy. When a marker is extracted from the image of the bottom camera, we can decode the position and orientation information encoded in the marker.

Fig. 5. Marker.

On the other hand, when a user loads into the ground control station a scenario which specifies a target position and time for each quadrotor, the station calculates control commands such as roll, pitch, yaw, and thrust at every time step and transmits them to the AR.Drones. The scenario is written in XML format.

4. Localization Recognition

In order to control multiple AR.Drones in the same space, each quadrotor has to recognize its current position in real time and move to its destination without any collision. The motion capture system measures the current position directly with its cameras. The marker-based system, in contrast, has to use a marker pattern for real-time position recognition; the pattern consists of many markers that carry different position information. This approach uses the bottom camera of the drone to recognize special markers on the floor. The OpenCV library is used to detect the markers.4) The marker detection algorithm is based on contours, as presented in Table 1. Once an image is received from the bottom camera of the AR.Drone, the RGB image is converted to a binary image in order to detect the contours easily. An adaptive threshold is used at this stage to cope with changing illumination. The contours are then extracted from the binary image. To reduce unnecessary computation and remove small curves, contour approximation is applied. Among the detected contours, we look for contours of roughly rectangular shape, which can have 4 or more corners because of image distortion. After a second contour approximation, the candidates are accepted as markers based on the rectangle size and the number of corners.

Table 1. Pseudo code for the marker detection algorithm.

while camera_on do
    raw_image = get_image_from_camera()
    gray_image = convert_to_gray(raw_image)
    binary_image = convert_to_binary(gray_image)
    contours = find_contours(binary_image)
    approx_contours_list = approximate_poly(contours)
    for each contour in approx_contours_list do
        if size(contour) >= 4 then
            size_rect = bounding_rect(contour)
            marker_contour = approx_poly(contour)
            if size_rect > 12 and size(marker_contour) == 4 then
                recognize_marker(marker_contour)
            end if
        end if
    end for
end while
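To make the decoding step concrete, the following minimal Python sketch reads an ID out of a sampled marker cell grid. The 4 × 4 cell layout (4 direction cells, 11 ID cells, 1 parity cell), the even-parity convention, and all names are illustrative assumptions; the actual cell arrangement is the one defined in Fig. 5.

def decode_marker(cells):
    # cells: 16 booleans sampled from the marker grid (True = black),
    # ordered as direction blocks a, b, c, d, then ID bits 0..10,
    # then the parity bit (hypothetical layout for illustration).
    a, b, c, d = cells[0:4]
    if not (a and not b and not c and not d):
        return None  # wrong orientation for the direction blocks
    id_bits = cells[4:15]
    parity_bit = cells[15]
    # Even parity over the ID bits plus the parity bit (assumed convention).
    if (sum(id_bits) + parity_bit) % 2 != 0:
        return None  # parity check failed: reject the candidate
    marker_id = 0
    for bit in id_bits:
        marker_id = (marker_id << 1) | int(bit)
    return marker_id  # one of 2**11 = 2,048 possible IDs

One way to handle a rejected orientation is to rotate the sampled grid by 90 degrees and retry until block a lands in the black position.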
For a precise measurement of the current position, we use the current camera angle, the altitude of the drone, and the position of the marker in the video image to calculate the physical distance from the drone to the marker in ground-fixed coordinates. The bottom camera of the AR.Drone 2.0 has a diagonal aperture of 64 degrees. In order to determine the distance from the quadrotor to the mark, the horizontal and vertical angles of the camera to the mark, α and β, should be measured. If px is the x value of the center of the mark in screen coordinates, we can measure α, the angle to the mark in the horizontal view, as in Eq. (1). In order to calculate α, we should know the horizontal aperture of the camera and the image resolution.5)

α = arctan((px − HALF_WIDTH) / HALF_WIDTH × tan 32°)    (1)

where HALF_WIDTH is the number of pixels in half the horizontal width and 32° is half the 64° aperture. However, the pose of the quadrotor has to be considered as well, because roll and pitch about the X and Y axes of the AR.Drone coordinates also incline the camera. This effect is shown on the right in Fig. 6.

Fig. 6. Measurement of the mark angle (left) and the mark angle considering roll (right).

In order to determine the distance from the drone to the mark, the horizontal and vertical distances are computed as in Eqs. (2) and (3).

d_x = h · tan α′    (2)
d_y = h · tan β′    (3)

Here, d_x and d_y indicate the horizontal and vertical distances from the drone to the mark, respectively, and h is the altitude of the drone. Additionally, α′ and β′ indicate the horizontal and vertical angles to the center of the mark after the roll and pitch of the quadrotor are taken into account (cf. Fig. 6).

Up to this point it has been assumed that the drone coordinate system is the same as the marker pattern coordinate system. If the two systems are not aligned, we need to consider the rotation as shown in Fig. 7.

Fig. 7. Comparison between AR.Drone and marker pattern coordinates when the AR.Drone is rotated.

If the quadrotor has a rotation μ about the Z axis of the AR.Drone coordinates, we calculate the angle that takes the yaw rotation into account with Eq. (4):

γ = arctan(d_y / d_x) + μ    (4)

The rotated distances d_x′ and d_y′ are then calculated using Eqs. (5) and (6):

d_x′ = √(d_x² + d_y²) · cos γ    (5)
d_y′ = √(d_x² + d_y²) · sin γ    (6)

Thus, the position of the quadrotor in ground-fixed coordinates is calculated from Eqs. (7) and (8):

x = m_x − d_x′    (7)
y = m_y − d_y′    (8)

Here, m_x and m_y indicate the x and y positions of the mark in ground-fixed coordinates, respectively. The logic to determine the current position was verified as shown in Fig. 8.

Fig. 8. Localization recognition experiments.

In order to measure the accuracy of the algorithm, we compare the position recognition of the proposed algorithm against motion capture, as shown in Fig. 9. Most errors happen when the target marks change, because of real position error and low camera image resolution.

Fig. 9. Comparison between image and motion capture based localization.

As shown in Fig. 10, the maximum error is 0.6 m when the altitude is 1.8 m. This is nevertheless sufficient to move multiple AR.Drones without collision, provided each AR.Drone keeps a distance of more than 0.6 m from the others.

Fig. 10. Image-based localization recognition error (maximum 0.6 m).
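Putting Eqs. (1) to (8) together, the position recovery amounts to a few lines of Python, sketched below. Treating the vertical aperture like the horizontal one, the signs of the roll/pitch compensation, and all function and parameter names are our illustrative assumptions rather than the exact implementation.

import math

TAN_HALF_FOV = math.tan(math.radians(32.0))  # half of the 64-degree aperture

def pixel_to_ground(px, py, half_w, half_h, h, roll, pitch, yaw, m_x, m_y):
    # Eq. (1): angles to the mark center (px, py) in screen coordinates.
    alpha = math.atan((px - half_w) / half_w * TAN_HALF_FOV)
    beta = math.atan((py - half_h) / half_h * TAN_HALF_FOV)
    # Eqs. (2)-(3): distances in the drone frame at altitude h,
    # compensating the camera inclination due to roll and pitch.
    d_x = h * math.tan(alpha + roll)
    d_y = h * math.tan(beta + pitch)
    # Eq. (4): bearing to the mark including the yaw rotation mu
    # (atan2 is used instead of arctan to handle all quadrants).
    gamma = math.atan2(d_y, d_x) + yaw
    # Eqs. (5)-(6): rotate the offset into the marker pattern frame.
    dist = math.sqrt(d_x * d_x + d_y * d_y)
    d_xr = dist * math.cos(gamma)
    d_yr = dist * math.sin(gamma)
    # Eqs. (7)-(8): drone position relative to the known mark position.
    return m_x - d_xr, m_y - d_yr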

5. Modification for Multiple AR.Drone

The AR.Drone is designed to be controlled directly by a smartphone or tablet. There are two main circuit boards. The motherboard holds an ARM9-core, 32-bit, 468 MHz processor which runs with 128 MB of DDR RAM at 200 MHz under a real-time Linux operating system. The second board is a 16-bit microcontroller navigation board which interfaces with the sensors at a frequency of 40 Hz. Communication between the quadrotor and the command station is done via a Wi-Fi connection with a 50 m range over three separate communication channels (command, navdata, and video stream). Controlling and configuring the drone is done by sending AT commands over UDP. These commands have to be sent on a regular basis, usually 30 times per second. By default, the AR.Drone is the server and the ground station is the client.

The stock AR.Drone is not suitable for collaborative control of multiple vehicles. Thus, in order to apply it to multiple formation flight, the internal system, which is not open to the public, has to be modified.

5.1. Communication setting for multiple AR.Drones

For the formation flight of multiple AR.Drone quadrotors, a communication interface allowing multipoint communication had to be developed. To this end, the network configuration is modified so that there are two modes: AP mode and Node mode. AP mode is the default mode provided by Parrot for a single flight. Node mode is for multiple flights and can be switched on with the reset button of the AR.Drone. For Node mode, the network configuration of the AR.Drone is modified as shown in Table 2. In order to apply it automatically at boot time, the configuration is inserted into /bin/wifi_setup.sh.

Table 2. Configuration for multiple AR.Drone communication.

iwconfig ath0 mode managed essid ardrone_ap
ifconfig ath0 192.168.1.xx netmask 255.255.255.0 up
route add default gw 192.168.1.1

For the experiments, 30 AR.Drones were used, as shown in Fig. 11. As a result, we can control 30 AR.Drones from one computer.

Fig. 11. Experiment with 30 AR.Drones.
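As a concrete illustration of the command channel, the Python sketch below drives one drone with AT commands over UDP port 5556 in the format documented in the Developer Guide.1) The Node-mode IP address is a placeholder, and the REF values are the standard takeoff and land words from the guide; a real controller would send such commands to every drone in the formation.

import socket
import time

DRONE_IP = "192.168.1.10"  # placeholder Node-mode address of one drone
AT_PORT = 5556             # UDP port for AT commands

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0

def send_at(command, args):
    # The sequence number must increase monotonically, otherwise
    # the drone silently drops the command.
    global seq
    seq += 1
    msg = "AT*%s=%d,%s\r" % (command, seq, args)
    sock.sendto(msg.encode("ascii"), (DRONE_IP, AT_PORT))

send_at("REF", "290718208")   # takeoff (REF word with the takeoff bit set)
for _ in range(60):
    # With the progressive flag cleared, PCMD arguments are ignored and
    # the drone hovers; commands are refreshed about 30 times per second
    # so the watchdog does not assume the link is lost.
    send_at("PCMD", "0,0,0,0,0")
    time.sleep(1.0 / 30)
send_at("REF", "290717696")   # land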

5.2. Ultrasound collision

The AR.Drone uses ultrasound to measure its altitude. However, when multiple quadrotors fly in the same area, interference occurs because the identical ultrasound frequencies collide, as shown in Fig. 12.

Fig. 12. Ultrasound frequency collision.

In order to avoid this frequency collision, a sniffing-like method is proposed in this paper. The original mechanism of the AR.Drone is shown on the left in Fig. 13. The raw data is taken over serial communication (/dev/ttyO1). The proposed mechanism renames the serial device from /dev/ttyO1 to /dev/ttyF1 as a trick, as shown on the right in Fig. 13. A serial sniffer program is then inserted between /dev/ttyO1 and /dev/ttyF1. The program replaces the altitude value from the ultrasound with a value calculated from the image.6) For a more accurate altitude, an adaptive combination of the ultrasound and image data is used. The modified altitude value is injected into /dev/ttyF1, from which the AR.Drone core program (program.elf) receives it.

Fig. 13. Proposed frequency collision avoidance mechanism.

For this, we should know the protocol of the raw data in which the ultrasound data are included. We had to analyze the protocol ourselves, because it is not open to the public. The header and trailer of each packet carry its size and a checksum, respectively. A packet is 60 bytes long and contains a sequence number. In order to obtain the height from the ultrasound, the ultrasound, sum_echo, and flag fields of the protocol are needed, as shown in Fig. 14.

Fig. 14. Packet structure of AR.Drone 2.0.

Once the protocol is analyzed, the height h is calculated from Eq. (9):

h = 880 · (us − o) / s    (9)

where us is the ultrasound value received from the AR.Drone as shown in Fig. 14, and o and s are offset and scaling factors, respectively. These values were derived from experimental measurements because no public information is available on the ultrasound device of the AR.Drone.

6. Autopilot Flight Control System

For autonomous flight control, we use a state machine together with monitoring. The flight state machine consists of 5 states (TAKEOFF, LANDING, CONTROL, HOVERING, and OUTOFCONTROL). When the system accepts a mission, the state is changed to CONTROL while the mission scenario is monitored. After completing the mission, the state is changed to HOVERING, in which new missions can be received. Thanks to this state machine, flights can be controlled without missions overlapping.

The attitude controller is already implemented inside the AR.Drone's flight computer, and it is not possible to access it. Thus, we designed only the outer loop to control the position, and a simple PID controller was adopted as the main position controller of the AR.Drones. The dynamics of the AR.Drone are required to design the PID controller, and the equations of motion can be obtained by following the simple guidelines in the papers of Krajník7) and van der Spek.8) However, these suggested equations of motion are based on first-order linear models and do not describe the real motion of the AR.Drone, as shown in Fig. 15. In this figure, there are estimation errors between the real response and the estimated models due to the internal attitude controller. For this reason, it is difficult to design the PID controller based on a quantitative analysis. In the test environment of this paper, however, the motion capture system serves as the position sensor and provides very accurate position information at a very high rate. Therefore, it is possible to tune the PID controller qualitatively by using the motion capture system.

Fig. 15. The result of parameter estimation based on the 1st and 2nd order linear models in the pitch axis of the AR.Drone.

The simple block diagram is shown in Fig. 16. In this system, we did not consider the flight path. Thus, the desired position and heading angle are the control inputs of the PID controller, and the target velocity is always zero. The feedback states are provided by the motion capture system in the reference frame, so a rotation matrix is required to convert the position error from the reference frame to the body frame; this rotation matrix can be easily obtained from the motion capture data. Using the position and velocity errors in the body frame together with the heading angle and heading error, the control command is calculated by the PID controller.

Fig. 16. The block diagram of the motion controller for the AR.Drone.

The control results are shown in Figs. 17 to 19. In these figures, the AR.Drone follows the target positions and the final steady-state errors are very small. This means that the designed PID controller provides suitable accuracy for multi-flight. In Fig. 19, it seems that the controller cannot maintain the state error at zero. However, these control errors appear at the start and stop times of the AR.Drone's motion in the other axes. There is also some offset in the coordinate system, introduced when a user sets the body coordinate system in the motion capture system.

Fig. 17. The result of the PID controller in the X axis.

Fig. 18. The result of the PID controller in the Y axis.

Fig. 19. The result of the PID controller in the Z axis.
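To make the outer loop concrete, the minimal sketch below rotates the reference-frame position error into the body frame and feeds independent PID loops, with the target velocity fixed at zero as described above. The planar simplification, the heading loop, and all gains and names are illustrative assumptions, not the values flown.

import math

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0

    def step(self, error, error_rate, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral + self.kd * error_rate

def position_command(pos_ref, pos, vel, yaw, yaw_ref, pid_x, pid_y, dt):
    # pos and vel are measured in the reference frame (e.g., by motion
    # capture); the commands are produced in the body frame.
    ex = pos_ref[0] - pos[0]
    ey = pos_ref[1] - pos[1]
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate position and velocity errors from reference to body frame.
    ex_b, ey_b = c * ex + s * ey, -s * ex + c * ey
    vx_b, vy_b = c * vel[0] + s * vel[1], -s * vel[0] + c * vel[1]
    # Target velocity is zero, so -v is the velocity error (D-term input).
    pitch_cmd = pid_x.step(ex_b, -vx_b, dt)  # forward error drives pitch
    roll_cmd = pid_y.step(ey_b, -vy_b, dt)   # lateral error drives roll
    yaw_rate_cmd = 0.5 * (yaw_ref - yaw)     # simple proportional heading loop
    return roll_cmd, pitch_cmd, yaw_rate_cmd

# Example instantiation with arbitrary gains (not the flown values):
# pid_x, pid_y = PID(0.5, 0.0, 0.3), PID(0.5, 0.0, 0.3)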
For multi-flight of AR.Drones, a collision avoidance algorithm was also designed and tested in this system. The collision avoidance maneuver is designed to bypass the other UAVs during a position change, again by using the PID controller. The maneuver increases the closest distance of approach, which can be calculated from the relative position and velocity vectors. The collision avoidance algorithm is demonstrated in Fig. 20. In this figure, two UAVs move to their respective target positions, and the expected trajectories are straight lines from the current position to the target position under the PID controller. On these expected trajectories, the relative distance would drop below the safety distance of 50 cm, and the UAVs would collide with each other because of their size. With the collision avoidance algorithm applied, the real flight trajectories displayed in Fig. 20 were obtained, and there were no collisions during the four position changing maneuvers.

Fig. 20. Collision avoidance maneuver trajectory.
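The closest distance mentioned above has a closed form in the relative position and velocity, as in the brief sketch below (a standard closest-point-of-approach calculation under a constant-velocity assumption; the 0.5 m threshold follows the safety distance above, while the function names are ours).

import math

def closest_approach(rel_pos, rel_vel):
    # rel_pos, rel_vel: (x, y, z) of one UAV relative to the other.
    px, py, pz = rel_pos
    vx, vy, vz = rel_vel
    v2 = vx * vx + vy * vy + vz * vz
    if v2 == 0.0:
        t = 0.0  # identical velocities: the distance never changes
    else:
        # Time minimizing |rel_pos + t * rel_vel|, clamped to the future.
        t = max(0.0, -(px * vx + py * vy + pz * vz) / v2)
    dx, dy, dz = px + t * vx, py + t * vy, pz + t * vz
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def needs_avoidance(rel_pos, rel_vel, safety=0.5):
    # Trigger the bypass maneuver when the predicted minimum
    # distance falls below the 0.5 m safety distance.
    return closest_approach(rel_pos, rel_vel) < safety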

7. Ground Control Station

The ground control station is developed for the control of any set of multiple UAVs. To this end, the system is designed as a layered, OS (operating system) independent, and flexible framework, as shown in Fig. 21.9) We create three software layers: the User Interface layer, the Flight Management layer, and the Flight Communication layer. The User Interface layer provides the user with global control and information about the system. The Flight Management layer manages and controls all UAV agents according to the defined mission scenario. The Flight Communication layer manages flight agents, which are specific to the vehicle. In this paper, we create only a control command sender and NAV (navigation information) and image data receivers for AR.Drone communication.

Fig. 21. Ground control station structure.

The ground control station can show the current position and target destination in real time, as shown in Fig. 22. In addition, it can display AR.Drone sensor data graphically or in a tree structure.

Fig. 22. Ground control station UI (User Interface).

Scenarios are written in XML format, as shown in Fig. 23. The action types are takeoff, move, animation, and landing. All actions have a quadrotor ID and a time. A move action needs 4 further arguments: x, y, z, and heading.

Fig. 23. Scenario example.
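Along the lines of Fig. 23, a minimal scenario might look as follows. The tag and attribute names are illustrative guesses; only the action types and arguments listed above are taken from the actual format.

<scenario>
  <action type="takeoff" id="1" time="0"/>
  <action type="move" id="1" time="3" x="1.0" y="2.0" z="1.5" heading="90"/>
  <action type="animation" id="1" time="8"/>
  <action type="landing" id="1" time="12"/>
</scenario>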

8. Indoor Experiments

For aerial choreography, various experiments were performed. In order to check the communication, 30 vehicles were used, as shown in Fig. 11. For image-based localization recognition, 400 pattern marks were laid out in a 10 m × 10 m area, as shown in Fig. 24. A video of the image-based localization recognition test is available online. This aerial choreography was shown at the Daejeon science festival in Korea.

Fig. 24. Experiment environment.

In addition, indoor aerial choreography with the motion capture based system was demonstrated at ADEX 2013, as shown in Fig. 25. Four or five AR.Drones performed under collaborative control of the multiple vehicle system. A video of this aerial choreography is available online.

Fig. 25. Demonstration of UAV group dancing at ADEX 2013.

9. Conclusions

We have described multiple AR.Drone control systems for indoor aerial choreography based on motion capture and marker based systems. The motion capture system is accurate to within less than 1 mm, but it is expensive to deploy and sensitive to the surrounding environment. The marker based system is robust to the environment and easily deployed, but recognizing the current position requires intensive, low-error image processing. If the quadrotors keep a minimum separation, the aerial choreography can be performed according to a given scenario. The experimental results showed that flight is possible if the distance between quadrotors is kept above 0.6 m at a height of 1.8 m. Meanwhile, the AR.Drone itself must be modified in order to control more than one vehicle.

In the future, we will improve the accuracy by adding filtering and refining the mark recognition mechanism. In addition, for a distributed system, the mark recognition module will be moved onto the quadrotor.

References

1) Stephane, P., Nicolas, B., Pierre, E. and Frederic, D.: AR.Drone Developer Guide SDK 2.0, Parrot, 2012.
2) Bristeau, P. and Callou, F.: The Navigation and Control Technology Inside the AR.Drone Micro UAV, IFAC, 2011.
3) Ducard, G. and D'Andrea, R.: Autonomous Quadrotor Flight Using a Vision System and Accommodating Frames Misalignment, IEEE International Symposium on Industrial Embedded Systems, 2009.
4) Gary, B. and Adrian, K.: Learning OpenCV, O'Reilly, 2008.
5) Graduacao, T.: A Java Autopilot for Parrot A.R. Drone Designed with DiaSpec.
6) Kato, H. and Billinghurst, M.: Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System, IWAR '99, 1999.
7) Krajník, T., Vonásek, V., Fišer, D. and Faigl, J.: AR-Drone as a Platform for Robotic Research and Education, EUROBOT, 2011.
8) van der Spek, J. and Voorsluys, M.: AR.Drone Autonomous Control and Position Determination, Bachelor Thesis, Delft University of Technology.
9) SungTae, M., DongHyun, C. and SangHyuk, H.: Indoor Swarming Ground Control Station Using AR.Drone, KCC, 2013.