Design and implementation of a search and find application on a heterogeneous robotic platform


Ahmed Barnawi, Abdullah Al-Barakati
Faculty of Computing and IT, King Abdulaziz University, Jeddah, Saudi Arabia

Abstract. Recent and rapid advances in robotic agents have made them an essential part of our lives. To benefit from interconnection and to enable robotic units to perform critical tasks beyond their individual capabilities, there is a persistent need to design autonomous and heterogeneous robotic systems. This is a non-trivial task given the questions of programmability and interoperability. The aim of this project is to develop a heterogeneous testbed for building a novel multi-agent robotic system with communication and task-distribution attributes under real conditions. The foreseen verification scenario targets a setup of multiple Unmanned Aerial Vehicles (UAVs) and ground robots performing a coordinated search and find task for objects of interest over a given area. The applications developed on top of the testbed give developers a better understanding of system performance prior to deployment. In this paper, we introduce our system and present initial results for the work in progress.

Keywords: UAV, Robotic Testbed, Software Architecture.

1. Introduction

Interest in Aquatic Wireless Sensor Networks (AWSNs) has rapidly increased in the recent past among academia, industry and research institutions, owing to the possible use of these technologies in a wide range of potential applications and scenarios in the maritime domain [1]. Aquatic communication technologies and the ability to deploy assets at sea have grown fast in the last decade. In addition, Unmanned Aerial Vehicle (UAV) technology has significantly matured in the past five years, and systems comprising teams of various unmanned vehicles have started to be deployed. Device cooperation and interoperability of heterogeneous assets are therefore becoming key issues.
However, although the status of the technology and user maturity have rapidly improved, the use of swarms of vehicles that are able to communicate and cooperate to accomplish more challenging tasks has so far been explored only in research. At the moment there are no systems of unmanned mobile vehicles (UMVs) that are able to work autonomously in a cooperative way in an operational or commercial field setting [2]. With the latest advances in wireless communications and digital electronics, the design and development of low-cost, low-power, multifunctional sensor nodes as well as autonomous vehicles (AUVs, UAVs, ships, octocopters) have become possible. Nowadays, they are small and smart, and they can communicate, wired or wirelessly, over short distances with other sensors while needing little energy [3]. The capabilities of these sensor devices, which include sensing, data processing and communication, make it possible to build sensor networks based on the collaborative effort of a large number of nodes. Since their processing capacity has increased over the years, several types of low-cost multifunction sensors now exist. In some cases, many sensors are needed to sense the environment or take measurements of the surroundings of a place. A fully adaptive and reconfigurable network of independent agents can thus be created, which would include highly heterogeneous agents and other devices for mobility.

We intend to lay the groundwork for the outlined system by building a real-world testbed with different agents equipped with communication, sensing and tracking apparatus to develop and test search and find scenarios. These autonomous robots will be able to form coalitions to perform basic tasks such as the one depicted in Fig. 1.

Fig. 1. Basic experimental setup for locating objects using a coalition of robots.

In order to achieve the objectives set, our experimental infrastructure consists of multiple remotely controlled and autonomous robotic agents with basic onboard equipment and core functionalities, such as basic mission controllers, onboard sensors and communication systems, along with software components distributed among the agents and the control base station. This paper is organized as follows: in Section 2, we lay out the literature review and the motivation of this work. In Section 3, the testbed hardware and software components and interfaces are introduced. In Section 4, the developed search and find system is explained, and finally, in Section 5, conclusions and future work are discussed.

2. Motivations and Related Work

Robotics as a domain is well established. It has contributed extensively to science since the 1970s as a key driver of large-scale manufacturing industries, underpinning the new generation of automation technologies. Highly cooperative multi-robot technologies will be among the key enablers of the next generation of automation systems in domains such as agriculture, service robotics, customer-based manufacturing, double-purpose systems, and others that, according to market forecasts [4], will experience the highest growth in the near future. To address the design challenges, the adoption of multi-agent-based design methods is a natural choice.
There are various general-purpose methodologies developed during the last decade, for example Prometheus, Gaia, MaSE, Ingenias, etc. [5]. A multi-robot system adds interactions between software agents and autonomous robots. In particular, agents should be built to represent physical robots in a management system [6].

Despite the availability of design methods, cognition is still under extensive study. The development of multi-robot systems with emergent cognitive abilities is largely studied in the swarm robotics domain. In [7] and [14] the issue is tackled explicitly in the context of swarm robotics, advocating the need for a "swarm engineering". As a consequence, various design methodologies have been proposed, but they are often somewhat limited in their scope [8], [9], [10], [11], [12], [15], [16] and [17]. Many systems have emerged over the past decade with customized APIs provided by the manufacturers of drone systems. For instance, Angonese and Rosa [13] developed a Ground Control Station (GCS) for controlling and navigating multiple UAVs. The system has two core modules: one is responsible for handling the UAVs' mission planning, while the other is used for controlling the flight. However, UAV path planning and formation setting are depicted directly on 3D maps, and this is only possible because the core modules are built on the NASA World Wind API.

The main novelty of this project lies in the cooperative integration of a possibly large number of entities in tight cooperation in one single networked system with distributed control. The system strategically controls this network [1] for dynamically planning search and find missions with heterogeneous teams in uncertain environments. Such planning and coalition activities will be investigated further in the current project, since the link between the two is apparent.
Our strategic decision in the design and implementation of this testbed is to build an open standard API interface from scratch to allow designers to examine performance throughout the standard networking layers between the agents and/or the BS. This architecture will ease the programmability and testability of developed systems.

3. Heterogeneous Robotic Testbed

In this section we describe the architecture, hardware and software components of the system and define the logical and physical interfaces. We start with a general description of the experimental area, then we give an overview of the system architecture and hardware components, with some details about the agent state diagram and basic functionalities.

3.1 Testbed Components and Interfaces

Fig. 2 shows the basic testbed components and the interfaces among them. The agents are deployed over the open-air experimentation area. The UAV agents are physically connected to the Base Station (BS) via a Wi-Fi router in a star topology; nonetheless, the UAV agents are logically interconnected to simulate UAV-to-UAV links. Note that different agents are introduced, including ground/aquatic-level robotic vehicles. The operator interacts with the system via a Graphical User Interface (GUI). Here, we identify the following interfaces:

- Interface X (UAV to UAV): logical interface over which information is exchanged between UAVs during flight, enabling the agents to perform programmed maneuvers.
- Interface Y (UAV to BS): logical interface through which all UAVs are connected to the BS.

- Interface Z (Operator to BS): GUI interface where system I/O interaction takes place.

Fig. 2. Basic testbed components and interfaces.

3.2 Testbed Software Architecture

The software architecture of the system consists of several modules; some of them run on the drones while others run on the Base Station, see Fig. 3. The drone control software is the main part of the code running on a drone. It communicates with the drone hardware in order to provide functionalities for drone control, drone localization, and localization of objects of interest (targets). More specifically, the drone control functionalities realize:

- Automated take-off to hovering mode at the minimal flight altitude (5 m).
- Automated landing procedure, landing onto given GPS coordinates. Due to the assumed accuracy of the GPS, the landing area for each UAV should be circular with a radius of at least 3 m.
- Automated emergency homing to given GPS coordinates, activated if communication with the Base Station fails for a given time interval.
- Autonomous flight from the current position to a given position (ground coordinates and flight altitude).
- Collision avoidance (mutual between UAVs, plus with respect to known solid obstacles). These functionalities are based on GPS coordinates of the obstacles and other UAVs known in advance; the UAV in question stops in the case of a presumable collision. On the other hand, no obstacle avoidance (i.e., planning to fly around obstacles) will be provided.
- Detection of targets, based on the recognition of uniquely labeled visual markers, each up to around 0.5 m in size. This functionality provides estimates of the target position relative to the UAV. Errors in the position estimates caused by tilting of the UAV are not compensated.

The drone control software of each drone communicates with the Communication server on the Base Station through the low-level interface.
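The stop-on-proximity rule in the collision-avoidance functionality above can be sketched as a simple distance check against the GPS coordinates known in advance. This is an illustrative sketch, not the testbed's actual code: the function names and the 3 m safety radius are assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def must_stop(own, known_positions, safety_radius_m=3.0):
    """True if any known obstacle/UAV position lies inside the safety radius."""
    return any(
        haversine_m(own[0], own[1], lat, lon) < safety_radius_m
        for lat, lon in known_positions
    )

# Hypothetical coordinates for illustration only.
obstacles = [(21.4960, 39.2450)]
print(must_stop((21.4960, 39.2450), obstacles))  # True: on top of the obstacle
print(must_stop((21.5000, 39.2500), obstacles))  # False: hundreds of metres away
```

A full implementation would run this check in the control loop against the live positions reported over interface X, but the stop-rather-than-avoid behaviour is exactly the one described above.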
The Communication server collects received data, provides it to modules on the Base Station, and allows the drones to be controlled from the Base Station. Specifically, the server provides the current position, altitude and heading of each UAV, the detected ground targets and their positions, as well as telemetry information such as flight mode, current command execution, and onboard resources (battery, Wi-Fi signal level, etc.). Similarly to the software on the drones, a dedicated API in C++ built over a socket communication interface will be realized to provide these functionalities.
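Because the UAVs are physically linked only to the BS (star topology), the logical X interface can be realized by having the Communication server relay UAV-to-UAV messages. The following minimal sketch shows that relay idea in Python; all class and method names are illustrative assumptions, not part of the testbed's C++ API.

```python
class BaseStation:
    """Hub of the star topology; relays logical X-interface messages."""

    def __init__(self):
        self.agents = {}  # agent_id -> inbox (list of pending messages)

    def register(self, agent_id):
        self.agents[agent_id] = []

    def send_x(self, src, dst, payload):
        """Logical UAV-to-UAV link, physically routed src -> BS -> dst."""
        if dst not in self.agents:
            raise KeyError(f"unknown agent {dst}")
        self.agents[dst].append({"from": src, "payload": payload})

    def inbox(self, agent_id):
        """Drain and return the pending messages for one agent."""
        msgs, self.agents[agent_id] = self.agents[agent_id], []
        return msgs

bs = BaseStation()
for uav in ("uav1", "uav2"):
    bs.register(uav)
bs.send_x("uav1", "uav2", {"pos": (24.5, 39.2, 10.0)})
print(bs.inbox("uav2"))  # one relayed message from uav1
```

In the real system the inboxes would sit behind the socket interface mentioned above, but the routing logic is the same: every X-interface exchange passes through the server.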

Fig. 3. System software architecture.

3.3 UAV Operational Modes and Functions

A UAV may be in one of the following operation modes:

- Disarmed (on the ground, taking off is denied)
- Landed (on the ground and ready to take off)
- Holding position (in the air, not moving)
- Flying to position
- Blocked while flying (due to collision avoidance)
- Taking off
- Landing
- Malfunction (not responding or reporting a hardware problem)

A UAV changes its operation mode either by user command or by completing the current operation (e.g., reaching the destination). Fig. 4 shows the state diagram of the UAV modes and the possible transitions between them, whether caused by users (in red) or by processing or external events (in green).
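The mode logic above can be captured as a small state machine. The sketch below is an assumption-laden illustration: the transition table is inferred from the mode list, not taken from the actual Fig. 4 diagram.

```python
# Inferred transition table; the exact arrows of Fig. 4 may differ.
ALLOWED = {
    "disarmed": {"landed"},
    "landed": {"taking_off", "disarmed"},
    "taking_off": {"holding_position"},
    "holding_position": {"flying_to_position", "landing"},
    "flying_to_position": {"holding_position", "blocked_while_flying", "landing"},
    "blocked_while_flying": {"flying_to_position", "holding_position"},
    "landing": {"landed"},
}
# Malfunction is assumed reachable from every mode.
for mode in list(ALLOWED):
    ALLOWED[mode].add("malfunction")

class UAVModes:
    """Rejects transitions not present in the table above."""

    def __init__(self):
        self.mode = "disarmed"

    def transition(self, new_mode):
        if new_mode not in ALLOWED.get(self.mode, set()):
            raise ValueError(f"illegal transition {self.mode} -> {new_mode}")
        self.mode = new_mode

uav = UAVModes()
for m in ("landed", "taking_off", "holding_position", "flying_to_position"):
    uav.transition(m)
print(uav.mode)  # flying_to_position
```

Encoding the diagram as data makes user-command transitions and event-driven transitions share one validation path, which is convenient when commands arrive over interface Y.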

Fig. 4. UAV operation modes state diagram.

4. Search and Find Application

The development of a search and find application on top of the testbed is proposed as a proof of concept through which the testbed functionalities can be tested. The developed system ensures agent heterogeneity through open interfaces in order to control the agents' behavior in a coordinated search and find mission. Here we summarize the main features of our system.

4.1 The Default Scenario

This scenario is based on the flying drones only; ground robotic agents are not incorporated. Nonetheless, the system design ensures that ground agent integration via the system interfaces doesn't impact the testbed's core hardware and software components. Fig. 5 shows a sketch of the default scenario. In this scenario all drones communicate with one another via interface X (logical) while they communicate with the BS via interface Y (physical). The experimental area is an open-air space with good coverage by the Global Navigation Satellite System (GNSS) signal. The area has a circular shape with a diameter of 300 meters, or a compact subspace of it. The whole area is covered by a wireless radio communication signal from a single Wi-Fi access point (AP) placed at the center of the area. The default scenario's sequence of events can be summarized as follows:

1. The operator feeds the system the mission parameters via interface Z (operator GUI).
2. The system (BS) calculates a search plan and assigns search areas to UAVs 1, 2, 3 and 4.
3. UAVs 1, 2 and 3 carry on searching their assigned areas until either they find the object or the search is concluded.
4. UAV 4 provides a video feed and may be controlled by the operator to check specific sectors. Based on the video feed, the operator may instruct the system to recalculate the search plan, and the system then instructs the UAVs to follow through.

5. If one UAV, for one reason or another, has to leave the scene, the BS recalculates the search plan and instructs the UAVs to follow through.
6. The system terminates the operation either when the objects are found or when the search is completed.

Fig. 5. The default scenario.

4.2 Search Planning Algorithm

The search planner is a program that calculates (and recalculates) the agents' search task plans during the cooperative search campaign. Each agent's search task is in turn sent back to that agent. In the default scenario, task planning takes place at the BS. The input of the search planner comprises the agents' status, their locations, information on the searched area, information on the object(s), and environmental parameters such as wind speed, etc. The output of the search planner is a data file containing the task data for each agent participating in the campaign; Fig. 6 illustrates the I/O of the search planner. The search planner assigns each agent an area to search, Fig. 7a. In the default scenario, we assume two traversal algorithms, namely spiral and zig-zag traversal, as shown in Figs. 7b and 7c. Both kinds of traversal patterns will execute the corners in the best possible way, but with some errors (either small overshooting, or turns prior to reaching the corner points). The precision of the trajectory execution depends primarily on the GPS precision.
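The planner's two steps, partitioning the area among agents (Fig. 7a) and sweeping each partition in a zig-zag pattern (Fig. 7c), can be sketched as follows. This is a simplified illustration under stated assumptions: a rectangular area in local metric x/y coordinates instead of GPS, vertical strips as partitions, and made-up function names.

```python
def partition(width, height, n_agents):
    """Split a width x height rectangle into equal vertical strips,
    returned as (x0, y0, x1, y1) tuples, one per agent."""
    strip = width / n_agents
    return [(i * strip, 0.0, (i + 1) * strip, height) for i in range(n_agents)]

def zigzag(x0, y0, x1, y1, lane_spacing):
    """Boustrophedon (zig-zag) waypoints covering one strip."""
    waypoints, y, left_to_right = [], y0, True
    while y <= y1:
        xs = (x0, x1) if left_to_right else (x1, x0)
        waypoints += [(xs[0], y), (xs[1], y)]
        left_to_right = not left_to_right
        y += lane_spacing
    return waypoints

# Three searching UAVs over a 300 m x 300 m area, 50 m between lanes.
areas = partition(300.0, 300.0, 3)
plan = {f"uav{i + 1}": zigzag(*a, lane_spacing=50.0) for i, a in enumerate(areas)}
print(areas[0])          # (0.0, 0.0, 100.0, 300.0)
print(plan["uav1"][:2])  # [(0.0, 0.0), (100.0, 0.0)]
```

Replanning after a UAV leaves the scene (event 5 above) then amounts to calling `partition` again with the remaining agents and redistributing the waypoint lists.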

Fig. 6. Search planner I/O.

Fig. 7. Searched area and traversal path.

4.3 Trajectory Execution

The basic mode of the UAV from the user's point of view is flying along a trajectory (the path segment that the UAV has to execute at a time) specified by GPS coordinates. The trajectory is specified by a list of waypoints, and each waypoint is defined by its geographical latitude, longitude and altitude (above sea level). The navigation algorithm (through the autopilot) controls the UAV to fly the shortest possible way to the first waypoint of the trajectory, which is ideally a straight line. The flight velocity is limited by a pre-set maximal velocity constraint. The ideal flight velocity is expected to be constant during the flight to the actual trajectory point, except for an initial acceleration and a final deceleration that prevents overshooting the final position. In real conditions, the planned trajectory cannot be followed exactly, mainly due to the limited precision of conventional GPS localization and external disturbances (e.g., wind). Considering the aforementioned minimal error and the estimated position variance of the UAV, the autopilot cannot aim to reach an exact waypoint position (which would not be precise anyway); instead, the navigation to that point is interrupted whenever the UAV reaches a predefined region around the actual waypoint. In accordance with common GPS variance, this region should be sized about m in radius.

Figures 8 and 9 and the underlying simulation provide some very basic insight into how the real system may behave. These graphs are the result of a basic simulation that neglects any disturbances and focuses exclusively on the approach to the target waypoint. Moreover, the simulation ignores the error of distance from the requested trajectory, and the UAV azimuth orientation control is not considered, for simplicity. Herein, the simulated UAV with initial position (0, 0) flies through 3 waypoints (marked in red) in Fig. 8. The yellow circles denote the 0.5 m vicinity area of the waypoints; once these are reached, the autopilot algorithm switches to the next waypoint. The cyan line marks the ideal planned path, while the blue line is the result of the simulation. The graphs in Fig. 9 show the UAV's distance to the next waypoint (in gray; the value jumps when the waypoint is switched to the next one), absolute velocity and absolute acceleration. The purpose of this simulation is to give an idea of how the UAV behaves along the given trajectory. The model and the autopilot parameters were selected based on multirotors of this size, weight and performance. The simulation is executed with the maximum allowed acceleration and maximal velocity.

Fig. 8. Trajectory follow simulation: x/y position graph.

Fig. 9. Trajectory follow simulation: time graph.
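A toy version of the simulation described above can be written in a few lines: a point-mass UAV accelerates toward the current waypoint, brakes as it approaches, and switches to the next waypoint once inside an acceptance radius. This is not the authors' simulator; the parameter values (0.5 m radius, velocity/acceleration limits, time step) are assumptions chosen only to mirror the figures' setup.

```python
import math

def fly(waypoints, accept_r=0.5, v_max=5.0, a_max=2.0, dt=0.05):
    """Point-mass waypoint following with an acceptance radius."""
    x, y, vx, vy = 0.0, 0.0, 0.0, 0.0
    path, idx = [(x, y)], 0
    for _ in range(20000):
        if idx >= len(waypoints):
            break
        tx, ty = waypoints[idx]
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist < accept_r:          # inside the vicinity circle: switch target
            idx += 1
            continue
        # Braking rule: desired speed lets the UAV stop at the waypoint.
        v_des = min(v_max, math.sqrt(2 * a_max * dist))
        dvx = v_des * dx / dist - vx
        dvy = v_des * dy / dist - vy
        dv = math.hypot(dvx, dvy)
        scale = min(1.0, a_max * dt / dv) if dv > 0 else 0.0
        vx += dvx * scale            # velocity change capped by a_max
        vy += dvy * scale
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

path = fly([(20.0, 0.0), (20.0, 20.0), (0.0, 20.0)])
end = path[-1]
print(round(end[0], 1), round(end[1], 1))  # lands near (0.0, 20.0)
```

Even this crude model reproduces the qualitative behaviour in Figs. 8 and 9: near-constant cruise speed between waypoints, deceleration near each one, and a trajectory that cuts corners because navigation is interrupted at the acceptance radius rather than at the exact waypoint.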

More importantly, most parameters (except the multirotor's inner dynamics) may be adjusted in order to achieve the requested flight performance. These parameters include the maximal allowed velocity, the maximal acceleration, and the gains in the position/velocity regulator. Generally speaking, when the trajectory should be followed more precisely, larger velocity changes are needed to control the UAV. In other words, the flight velocity cannot be constant during the flight; this involves mainly braking and speeding up near trajectory waypoints. If lower precision of trajectory following is allowed, the regulator may work with smaller velocity corrections and the final trajectory will be smoother. Also, increasing the maximal allowed flight velocity generally decreases the feasible precision of trajectory following, or increases the necessary variation of the flight velocity.

5. Conclusions and Future Work

In this paper we have introduced a testbed for heterogeneous robotic systems. The testbed was designed to accommodate different robotic agents performing specific functions. We laid out the design and discussed important aspects related to the agents' trajectories and behavior. This will help us devise optimized path planning tasks. Future work will include the implementation and testing of the testbed APIs and functionalities, combined with rigorous experimentation in real-world scenarios.

Acknowledgement. This paper contains the results and findings of a research project funded by King Abdulaziz City for Science and Technology (KACST), Grant No. ARP.

References

[1] Feo Flushing, E., Kudelski, M., Gambardella, L., and Di Caro, G. A. Connectivity-aware planning of search and rescue missions. In Proceedings of the 11th IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Linköping, Sweden (2013).
[2] Martin, A. Y. Unmanned maritime vehicles: Technology evolution and implications. Marine Technology Society Journal, 47(5).
[3] Sendra, S., Lloret, J., Garcia, M., and Toledo, J. F. Power saving and energy optimization techniques for wireless sensor networks. Journal of Communications, 6(6) (2011).
[4] SPARC's Strategic Research Agenda.
[5] Shehory, O., and Sturm, A. Agent-Oriented Software Engineering: Reflections on Architectures, Methodologies, Languages, and Frameworks. Springer (2014), 331 p.
[6] Lavendelis, E., et al. Multi-Agent Robotic System Architecture for Effective Task Allocation and Management. Recent Researches in Communications, Electronics, Signal Processing & Automation (2012).
[7] Brambilla, M., et al. Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence, pp. 1-41.
[8] Schwager, M., et al. Decentralized, adaptive coverage control for networked robots. The International Journal of Robotics Research.
[9] Parker, C.A.C., and Zhang, H. Cooperative Decision-Making in Decentralized Multiple-Robot Systems: The Best-of-N Problem. IEEE/ASME Transactions on Mechatronics.
[10] Berman, S., et al. Design of control policies for spatially inhomogeneous robot swarms with application to commercial pollination. In Proceedings of the International Conference on Robotics and Automation.
[11] Sartoretti, G., et al. Decentralized self-selection of swarm trajectories: from dynamical systems theory to robotic implementation. Swarm Intelligence.
[12] Vigelius, M., et al. Multiscale Modelling and Analysis of Collective Decision Making in Swarm Robotics. PLoS ONE, pp. 9-19.
[13] Angonese, A. T., and Rosa, P. F. F. Ground Control Station for Multiple UAVs Flight Simulation. 2013 IEEE Latin American Robotics Symposium.
[14] Brambilla, M., Ferrante, E., Birattari, M., and Dorigo, M. Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence (2013).
[15] Parker, L. Distributed algorithms for multi-robot observation of multiple moving targets. Autonomous Robots, 12(3) (2002).
[16] Zhou, K., and Roumeliotis, S.I. Multirobot active target tracking with combinations of relative observations. IEEE Transactions on Robotics, 27(4) (2011).
[17] Arkin, R.C. Behavior-based Robotics. Intelligent Robots and Autonomous Agents, MIT Press, Cambridge, Mass.