Artificial Intelligence In Commodity Hardware Aerial Drones


Dept. of CIS - Senior Design

Nina Charness, chanina@seas.upenn.edu, Univ. of Pennsylvania, Philadelphia, PA
Charles L. Kong, ckong@seas.upenn.edu, Univ. of Pennsylvania, Philadelphia, PA
Matthew Rosenberg, rmatt@seas.upenn.edu, Univ. of Pennsylvania, Philadelphia, PA
Sam Shelley, sshel@seas.upenn.edu, Univ. of Pennsylvania, Philadelphia, PA
Camillo Jose Taylor, cjtaylor@cis.upenn.edu, Univ. of Pennsylvania, Philadelphia, PA

ABSTRACT
Recent computer science research has been particularly interested in autonomous or semi-autonomous unmanned aerial vehicles (UAVs) because of their application in situations too dangerous for human involvement. Modern artificial intelligence advancements have allowed for the implementation of highly complex spatial awareness algorithms, which enable autonomous flight on a variety of different flying devices. However, these approaches usually involve aerial vehicles with expensive sensors or a large number of surrounding cameras capable of extracting an entire 3D reconstruction from multiple 2D images. Unfortunately, there are many situations in which financially, or locationally, expensive sensors are not a viable option. This paper defines a set of improvements that can be made to the AR Drone 2.0 quadrocopter, a simple flying vehicle with limited hardware capabilities. These improvements enable it to simulate some of the tasks which more expensive implementations have already achieved. Software alternatives for visual guidance and obstacle avoidance are achieved by modeling techniques used by insects, such as the bumblebee, including object recognition, optical flow, and odometry. These improvements could be widely applicable and provide greater access to UAVs for situations where recovery is unlikely, or for other financially constrained applications.

1. INTRODUCTION
Quadrocopters (also known as quadrotors or quadcopters) are flying vehicles propelled by four equally sized rotors. The quadrocopter's multiple rotors provide control redundancy and enable more precise maneuverability, allowing it to perform aerial maneuvers impossible for a helicopter to achieve. Although they introduce additional control over flight, the additional rotors dramatically increase flight complexity and consequently require a flight computer and onboard sensors in order to maintain in-air stabilization. The introduction of smaller and lighter electronics has made quadrocopters popular choices for UAVs used for military reconnaissance because of their application in situations too dangerous for human involvement. Recently, component costs have dropped even further, such that hobby companies have created more accessible quadrocopters such as the AR Drone 2.0 by Parrot, which retails for $300 [1] (Figure 1). The AR Drone 2.0 includes its own sensors and stabilizing algorithms and provides hobbyists the ability to fly their own quadrocopter using a computer, Android, or iOS device. The drone has two video sources: one camera that faces forward and one that faces downward. It also uses sonar to help with altitude stabilization. While the size and weight of smaller vehicles like the AR Drone 2.0 are an advantage in maneuvering through confined, cluttered environments, these vehicles do not include the sophisticated sensors and GPS chips used by larger UAVs.
The subsequent sections outline a series of improvements that can be made to the drone so that, despite its comparatively rudimentary sensors, it can mirror some of the more complicated functionality of more costly quadrocopters and autonomous systems. What follows is a framework for more advanced investigation into artificial intelligence for quadrocopters and other vehicle systems with cheaper, more rudimentary sensors. These implementations and similar attempts will hopefully lead to more accessible quadrocopters which, if implemented using cheaper components, could be used more broadly. Potential practical uses include search and discovery missions for local forces, which are currently unable to afford the extremely expensive drones the military uses, and eventually even delivery services. In order to accomplish these goals, the most basic problems associated with semi-autonomous quadrocopter flight need to be solved, namely: odometry (determining one's location and orientation), path re-tracing, obstacle avoidance, and starting point recognition. To achieve these goals, technical research concerning the techniques used by bumblebees and other insects for movement must be synthesized. There is substantial related work in the fields of computer vision and object detection. The challenge is that many of these algorithms assume sophisticated hardware which flies with precision and reports accurate sensor data. Although these algorithms have been developed and refined in the mathematical world, they require modifications in order to work with a real-world device, due to computational speed and inevitable inaccuracies in the data returned by the device's sensors.

This research presents proof-of-concept solutions to many of the aforementioned problems, laying the groundwork for future exploration.

Figure 1: The AR Drone 2.0 [1]

2. RELATED WORK
Due to the high costs of complicated visual sensors, many researchers have used insects and animals as a guide for developing more efficient robotic behavioral patterns. There is evidence that insects, such as the bumblebee, use shortcuts to perform functions such as measuring distance travelled and avoiding obstacles, and some of these shortcuts have been implemented in autonomous vehicles with varying degrees of success.

2.1 Odometry
In order to survive, insects make many foraging excursions, in which they locate food or supplies and then return home. Many scientists believe that they use an internal odometer to estimate the distance they have travelled, allowing them to make round trips between home and a food source [15]. Odometry is a navigational technique in which a moving body estimates its change in position over time based on information gathered from its sensors. Bees use optical flow techniques to infer distance by measuring image movement through their vision [13]. They use visual flow, the perceived movement across visual fields, for tasks ranging from maintaining a straight path to regulating flight speed. In robotics, odometry uses cameras, accelerometers and other synthetic sensors to simulate the biological ones found in bees and other animals. In many past implementations of robotic odometry, position tracking relied on GPS readings. The AR Drone, however, does not possess a GPS chip, so position tracking must be accomplished by integrating the velocities provided by the SDK after multiplying them by a rotation matrix. The rotation matrix accounts for the drone's heading so that all position tracking is done with respect to the same frame of reference throughout the flight. Recently, with the increase in research into autonomous vehicles and robots, visual odometry has gained popularity. Visual odometry is the use of a camera to estimate the change in distance. This technique is known as visual SLAM (Simultaneous Localization and Mapping) and is normally accomplished by calculating the change in position of pixels between two consecutive images from the camera. The difficulty with this process is estimating the scale by which the distance between pixels should be multiplied in order to obtain distance in the real world [7].

2.2 Object Detection
In real-life environments, effective methods of object detection are crucial for survival and navigation. These concepts are equally important for flying autonomous robots. Current technology can develop three-dimensional maps, heat maps and x-ray images to detect obstacles by using multiple sophisticated sensors. However, processing the multitude of data produced by a large number of sensors is computationally expensive and time consuming. Other related work exists which uses simpler algorithms targeted at monocular vision [18]. In order to move towards a target object, the target must first be detected. Many methods for object detection involve segmentation, which partitions the image into different regions based on some heuristic [12]. The goal of segmentation is to separate an image such that each pixel within a region is similar in color, intensity or texture. The regions can then be classified as either object or background.
Choosing an appropriate algorithm is key to performing successful segmentation. One simple approach is color thresholding, which classifies pixels as part of the object if their value falls within a threshold range [14]. If the color of a pixel is outside of the threshold, it is classified as background. The result is a binary image of object and not-object, where a pixel is set to 1 (or "on") if it is part of the object and 0 otherwise. Although its relative simplicity makes it easy to implement, this technique performs poorly for multicolored images and objects. A more sophisticated approach focuses on finding object edges. Edge detection is based on the observation that there is often a sharp adjustment in intensity at region boundaries [2]. The Canny edge detector is one such detector, popular because it is easy to integrate with a large number of object recognition algorithms used in computer vision and other image processing applications [5]. One problem with edge detection is the presence of noise. Noisy images can cause the positions of detected edges to be shifted from their true locations. Many edge detection techniques, including the Canny algorithm, are implemented in OpenCV, the Open Source Computer Vision Library [4]. Another method uses Haar features, image sections which describe an object in a digital image, such as edges and lines, in order to detect objects. This method, first described by Viola and Jones [16], creates many simple classifiers based on Haar features from a set of training data. These classifiers are then applied sequentially to repeatedly disqualify regions of an image from being an example of the target object. A large number of Haar-like features (derived from a large set of training data) are necessary to detect objects sufficiently accurately. The aforementioned techniques for detecting specific objects work best when qualities specific to the target object are known in advance. In order to detect objects without knowing their qualities in advance, different algorithms are required. These algorithms focus on information gathered from the pixels in an image, or the change in pixels from image to image, and use this information to make inferences about object positions. A simple algorithm for ground vehicles with monocular vision segments an image into ground or obstacle by classifying every pixel in the image as one or the other and directing the robot to take the path of least hindrance [2].

However, this algorithm assumes that the ground is always of constant intensity and that obstacles have sharp, distinguishable features which allow the robot to easily discern between obstacles and non-obstacles. Such clear and consistent world definitions would likely be more difficult to develop for aerial vehicles. A more efficient option is a histogram-based method, which computes a histogram from all of the pixels in the image [12]. The peaks and valleys created can be used to locate objects in the image based on color or intensity. This method can also be applied recursively to objects in order to subdivide them. This option is more efficient because it requires only one pass. Unfortunately, it may be difficult to identify significant peaks and valleys in relatively homogeneous images (or especially heterogeneous ones). Another method uses the concept of an appearance variation cue in order to identify variations between textures in a single image [8]. This can be used to detect impending collisions, using the assumption that as an object moves closer there is generally less variation in the image recorded from a viewpoint. However, this does not work in all cases, such as when an object in an image has a detailed texture. This technique may need to be combined with another object avoidance algorithm in order to produce a more reliable decision making process.

2.3 Moving Towards Targets and Avoiding Collisions
In order to move towards a target object or avoid an obstacle, a system must not only detect objects, but also resolve its position relative to the object in order to determine how to move towards or away from it. Visual looming is a phenomenon that has been used to determine if and when an obstacle collision could occur. Looming describes a situation in which the projected size of an object increases on a viewer's retina as the relative distance between the viewer and the object decreases. Previous works have identified this phenomenon in insects, including bumblebees and flying locusts [10]. Because visual looming signifies an impending threat of collision, it can be used for autonomous obstacle avoidance in robots [11]. Researchers have developed theoretical models which try to measure this effect quantitatively from a sequence of 2D images by studying the relative rate of change of image irradiance (the light an object reflects when viewed from different positions). However, this property may be difficult to measure on vehicles with limited image processing capabilities. A more generic method proposed to detect impending collisions is to calculate the optic flow and infer whether an object is close based on the calculated values [10] [17]. Optical flow is the apparent motion of an object, surface or edge caused by the relative motion between the observer and its environment. Insects not only use optic flow to maintain an odometer, as discussed above, but also for obstacle avoidance. Optic flow is a function of the insect's forward velocity, angular velocity, distance from an object, and the angle between its direction of travel and the object. Objects that are closer have higher optical flow values, so by turning away from regions of high optic flow, an insect can avoid imminent collisions. Optic flow research focuses on examining the optical flow in images and estimating the depth, velocity and motion of an object from the flow information.
In research involving a robot with two vertical cameras, the two frontal images provided orientation information that allowed the robot to act based on its confirmation of the nature of an oncoming obstacle [17]. There are two approaches to calculating the optical flow between two images. Sparse techniques calculate the optical flow of specific, pre-calculated image features, such as corners or edges. While these techniques are faster, denser calculations of optical flow look at every pixel in the images, providing more accuracy [6]. In a paper by Gunnar Farneback [9], dense and sparse optical flow techniques are compared by their density: a sparse, quicker algorithm developed by Lucas and Kanade has 35% density, whereas Farneback's has 100% density. Farneback's dense optical flow algorithm estimates the displacement of pixels between two images [9]. Optical flow is therefore the magnitude of the displacement of one pixel from one image to the next. First, the algorithm approximates each neighborhood of both images (or frames) by quadratic polynomials. By using the polynomial expansion transform, this calculation can be done efficiently [9]. The polynomial expansion of both images gives expansion coefficients for the first and second image, which are then used to estimate the displacement field of the image [9]. Final pixel displacement is estimated by integrating over a neighborhood of each pixel and smoothed using Gaussian techniques. This is based on the assumption that pixels in proximity move in similar directions. The algorithm assumes that the displacement field is slowly varying, which can lead to complications in a real-time environment. In order to overcome this, segmentation techniques like those described earlier can be used in conjunction. The algorithm produces a displacement field from two successive video frames, which represents the optical flow at each pixel between the two images. This displacement field can subsequently be used to determine whether an approaching obstacle needs to be avoided.

3. SYSTEM MODEL

Figure 2: The system model. The drone sends velocity, video, and time data over a WiFi connection to a remote processing device (a laptop), which runs the algorithms that process the input data from the drone and convert it into directional output (commands, direction) sent back to the drone.

A basic model which outlines the sequence of communication with the drone is depicted in Figure 2. The drone receives input data 15 times per second from its onboard sensors and passes the information through TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) WiFi connections to a remote device (for example, a laptop). The remote device analyzes the information provided by the drone and determines new instructions for the drone. Finally, it sends a command back to the drone, and the cycle repeats. Applying the previously described related work, combined with various smoothing techniques during the analysis stage, can produce a model which enables the drone to autonomously navigate in its environment.
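The communication cycle can also be summarized in code. The following is a minimal sketch of the loop in Figure 2; the structures and the helper names receive_navdata(), compute_command() and send_command() are illustrative stand-ins for the SDK's TCP/UDP plumbing, not actual SDK calls.

```c
/* Minimal sketch of the Figure 2 control cycle. Sensor data arrives
 * roughly 15 times per second; each iteration reads the latest state,
 * decides on a command, and sends it back to the drone. */
#include <stdio.h>

typedef struct { float vx, vy, vz, yaw; double t; } navdata_t;  /* from drone */
typedef struct { float roll, pitch, gaz, yaw; } command_t;      /* to drone   */

static navdata_t receive_navdata(void) {          /* stub: read over WiFi  */
    navdata_t nav = {0};
    return nav;
}

static void send_command(command_t cmd) {         /* stub: write over WiFi */
    printf("cmd: roll=%.2f pitch=%.2f gaz=%.2f yaw=%.2f\n",
           cmd.roll, cmd.pitch, cmd.gaz, cmd.yaw);
}

static command_t compute_command(navdata_t nav) {
    command_t cmd = {0};      /* placeholder policy: hover in place;        */
    (void)nav;                /* odometry, target detection and obstacle    */
    return cmd;               /* avoidance would run here                   */
}

int main(void) {
    for (int i = 0; i < 15; ++i) {                /* ~one second of control */
        navdata_t nav = receive_navdata();        /* velocity, video, time  */
        command_t cmd = compute_command(nav);     /* analyze on the laptop  */
        send_command(cmd);                        /* direction back to drone */
    }
    return 0;
}
```

In the actual implementation the read and write halves run in separate threads (Section 4.2), but the logical cycle is the same.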

3.1 Environment and Testing Model
Drone testing occurred exclusively indoors and was confined to a 5 meter by 5 meter area. For simplicity, only two-dimensional navigation was tested. The drone was given an arbitrary set of commands to fly out (e.g., move forward 4 seconds, turn right for 3 seconds, move forward for 3 seconds) that simulated a human instructing the drone using a joystick or other manual control device. Once the drone finished executing these commands, the added autonomous flight algorithms were invoked. The drone then navigated back to its starting location using the recorded odometry information. Along the way, if an object was detected using optical flow, the drone exercised a predefined avoidance routine. The size of the objects being avoided was bounded by the field of view of the drone's camera (i.e., any object can be used as long as it fits within the drone's vision), and the objects were held stationary during flight. A predetermined object was placed at the drone's starting location. When the target object was detected (in this case a brightly-colored sticky note, chosen to improve the effectiveness of color thresholding), the drone switched its control method and used its camera to move close to the sticky note before landing. As this task of autonomously returning to a target object while avoiding any obstacles along the way involves many components, it is practical to divide the problem into three smaller subgoals.

3.2 Odometry
In order for the quadrocopter to return to its starting position, it needs positional awareness; that is, after traveling in a certain direction for a certain period of time, it must know its current location (x, y coordinates) relative to its starting point. This can be accomplished by taking velocity measurements, either through accelerometer readings or by measuring image translation through the downward facing camera. Velocity measurements can be integrated over time to find direction and distance. For example, if a 2D world is assumed for simplicity's sake, then given v_x and v_y (the x and y velocities) as well as the elapsed time t, the current position can be derived with the formulas x = x_0 + v_x * t and y = y_0 + v_y * t, where (x_0, y_0) is the previous position. With these measurements, a velocity and travel time can be calculated which would return the quadrocopter to its starting point. In order to tackle this problem, the drone must be able to model a very basic version of the shortest path algorithm. For example, given that the drone is 3 meters north and 4 meters east of the origin, it should be able to rotate to point directly toward the origin and then travel 5 meters back to its starting position.
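As a concrete illustration of this dead-reckoning step, the sketch below integrates velocity samples in a 2D world and then computes the distance and heading back to the origin for the 3-4-5 example above. It assumes the velocities are already expressed in the world frame; the rotation-matrix correction needed on the real drone is covered in Section 4.2.

```c
/* Minimal 2D dead-reckoning sketch for Section 3.2 (illustrative only). */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } position_t;

/* Integrate one velocity sample (vx, vy in m/s) over dt seconds. */
void integrate(position_t *p, double vx, double vy, double dt) {
    p->x += vx * dt;   /* x = x0 + vx * t */
    p->y += vy * dt;   /* y = y0 + vy * t */
}

int main(void) {
    position_t p = {0.0, 0.0};
    /* e.g. 3 m north (y) then 4 m east (x), sampled once per second */
    for (int i = 0; i < 3; ++i) integrate(&p, 0.0, 1.0, 1.0);
    for (int i = 0; i < 4; ++i) integrate(&p, 1.0, 0.0, 1.0);

    double distance_home = hypot(p.x, p.y);    /* 5 m straight-line return  */
    double heading_home  = atan2(-p.y, -p.x);  /* direction back, radians   */
    printf("return distance %.1f m, heading %.2f rad\n",
           distance_home, heading_home);
    return 0;
}
```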
3.3 Locating an Object at the Origin
Although the drone should be able to guide itself most of the way back to its starting point using odometry, it is likely that these readings will not be sufficiently accurate to retrace its steps exactly. Therefore, in addition to odometry readings, the drone relies on a brightly-colored sticky note placed at its start point to signify the target location. Once the drone is within range of the origin, it can use its forward facing camera to detect the sticky note and compute the direction it needs to turn or move in order to reach the exact starting point. To accomplish this goal, the drone uses color thresholding for object detection and a simplistic form of visual looming for relative positioning. Once the drone is capable of determining its physical distance from the object on the screen, it can make algorithmic decisions regarding the direction and speed it should maintain in order to move away from or towards its target.

Color Thresholding
As previously described, color thresholding algorithms detect objects by segmenting an image into two color spaces: pixels that fall into a predefined color range, and those that do not. The algorithm's output is thus an image that contains only two types of pixels, white and black. Thresholding the image makes it easy to find the object's contours, which in turn allows the object to be described as a solid region with a height, width, and an x, y position. One algorithm for contour detection (and the one that is implemented in OpenCV) is described in a paper by Yokoi, Toriwaki, and Fukumura [19]. Their algorithm, titled border following, chooses positive points in a binary image and examines each point's neighbors for other positive points. It takes these positive points and recursively examines neighbors until it has visited all positive points in the image. Within any discovered cycle of positive points in the image, all pixels adjacent to a pixel with a value of 0 (a background pixel) can be considered part of the border line of the object, i.e., part of its contour [19].

Moving Towards the Target Object
Instead of trying to determine the object's position in the real world, a computationally simpler technique is to record an initial x, y, height, and width of the target object and then calculate the difference between that position and the object's current position in the video stream. The drone can then be instructed to move in the direction which would move the target object back towards its initial position. Determining whether the drone is too close or too far from the target can be accomplished by subtracting the current width from the initial width. Determining its left/right orientation, however, is more difficult. The two-dimensionality of the image means that there is no depth component. Left/right and up/down displacement must therefore be calculated by taking into account the change in the size of the object due to its distance change in the third dimension. This can be accomplished by subtracting the current x or y position from the initial x or y position and then adding back half the change in size, since if the sticky note has changed its size it is because it has moved closer or farther away. These calculations are illustrated in Figure 3. This technique enables a simplistic form of visual looming, whereby the drone can move and repeatedly recalibrate its position relative to the target based on the aforementioned calculations. When the drone is within range of the target, it can cease movement and land.
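The following sketch makes the Figure 3 arithmetic concrete by comparing the target's current bounding box against the box recorded when it was first detected. The structure and field names are illustrative, not taken from the SDK or OpenCV.

```c
/* Sketch of the relative-positioning arithmetic from Figure 3. */
#include <stdio.h>

typedef struct { int x, y, width, height; } box_t;   /* x, y = top-left corner */

/* Positive result: target drifted right in the image, so turn right;
 * negative: turn left. Adding half the change in width compensates for
 * the apparent shift of the corner caused purely by the change in size. */
int horizontal_offset(box_t initial, box_t current) {
    return (current.x - initial.x) + (current.width - initial.width) / 2;
}

/* Positive result: target appears larger than at first detection,
 * i.e. the drone is closer than it was; negative: farther away. */
int depth_change(box_t initial, box_t current) {
    return current.width - initial.width;
}

int main(void) {
    box_t first = {120, 80, 40, 40};   /* box when the target was first seen */
    box_t now   = {150, 82, 60, 60};   /* box in the latest frame            */
    printf("turn offset %d px, depth change %d px\n",
           horizontal_offset(first, now), depth_change(first, now));
    return 0;
}
```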

3.4 Odometry and Obstacle Avoidance
While the drone is flying autonomously back to its origin point, it should also be able to account for obstacles introduced since it passed on its way to its initial destination.

Figure 3: Because the target object exists in three dimensions, the amount the drone should turn left or right is a function of both the change in relative size and the object's x, y offset, where offset = (current location - original location) + (currentWidth - initialWidth) / 2

Figure 4: Optical flow using the Lucas-Kanade sparse optical flow algorithm on corner features

This model treats obstacle avoidance and odometry as separate problems, using odometry to move towards the origin and switching to an obstacle avoidance routine when an obstacle is detected. Since the drone can, in theory, track its position at all times and find the origin, navigation can be temporarily overridden when an object is detected using optical flow in order to avoid it. In this scenario the drone follows a predefined avoidance path to move around the obstacle, continuing to record position changes, and then resumes its path to the origin once the routine is completed. Obstacle detection is naturally a prerequisite for being able to avoid obstacles. As previously mentioned, specific object detection techniques will not work in this scenario, as there are no predefined characteristics known about potential obstacles. For this reason, an implementation of optical flow is an effective solution.

Optical Flow
The OpenCV library provides a variety of functions for calculating the optical flow between two images [4]. While the sparser algorithms look at the entire image and focus on the optical flow of features (see Figure 4), the denser algorithms compute the optical flow for each pixel (see Figure 5). Though it is more computationally expensive to use a dense optical flow algorithm, it provides a more accurate evaluation of an impending collision [6]. Given that the larger the optical flow, the closer the pixel, the magnitude of optical flow can be used to detect a potential collision [17]. A simplistic model treats obstacle detection as a boolean question. When examining different areas of the pixel map (or image), the algorithm calculates the sum of the optical flow in a region and determines whether it exceeds a pre-calculated threshold. If it does, the algorithm reports that an object has been detected and engages the appropriate commands to avoid the object before resuming movement towards the origin.

4. SYSTEM IMPLEMENTATION
The three problems discussed above assumed a perfect world in order to give a general overview of the algorithms used to solve the outlined tasks. In this section, a more detailed description and implementation is presented, using the hardware and SDK of the Parrot AR Drone 2.0.

Figure 5: Optical flow using Gunnar Farneback's dense optical flow algorithm

4.1 AR Drone Move Command
Though it may sound trivial, commanding the drone to move in a particular direction is not straightforward, and it is a strict prerequisite for implementing any of the presented components. As the SDK Development Guide states, there is only one function call used to direct the drone to move: ardrone_at_set_progress_cmd() [4]. This function takes as input four floating point numbers, all between -1 and 1: a left/right bending angle, a front/back bending angle, gaz (vertical speed), and yaw (rotational or angular speed). It is important to note that it does not take time or duration as a parameter.
For example, a sample call to the function may look like ardrone_at_set_progress_cmd(.5, 0, 0, 0), which tells the drone to bend rightward. Because all the values are between -1 and 1, they represent percentages of the maximum corresponding values set in the drone's parameters, so a value of .5 means the drone should tilt and move to the right at 50% of the maximum rightward bending angle. In order to instruct the drone to move a set distance, the function required calibration through a series of trial tests to determine how many meters to the right the drone would travel after the above sample command. Additionally, because these commands are instantaneous, the program must sleep for a few milliseconds after issuing a command so that the drone has sufficient time to execute the instruction and realize its effects.
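A small wrapper illustrates this pattern. The four-float form of the SDK call follows the description above; move_for_ms() and the example calibration numbers are illustrative additions, not part of the SDK.

```c
/* Sketch of issuing a calibrated, timed movement with the single SDK
 * progress command described above. */
#include <unistd.h>   /* usleep */

/* Provided by the AR Drone SDK (four-float form as described in the text). */
void ardrone_at_set_progress_cmd(float roll, float pitch, float gaz, float yaw);

/* Tilt/throttle at the given fractions of the configured maxima for
 * duration_ms, then level out. The command itself carries no duration,
 * so the program sleeps while the drone executes it. */
static void move_for_ms(float roll, float pitch, float gaz, float yaw,
                        int duration_ms) {
    ardrone_at_set_progress_cmd(roll, pitch, gaz, yaw);
    usleep((useconds_t)duration_ms * 1000);
    ardrone_at_set_progress_cmd(0.0f, 0.0f, 0.0f, 0.0f);   /* stop / hover */
}

/* Example: suppose trial calibration found that a 50% rightward tilt held
 * for 800 ms moves the drone roughly one meter to the right (illustrative
 * numbers, not measured values from this work). */
void move_right_one_meter(void) {
    move_for_ms(0.5f, 0.0f, 0.0f, 0.0f, 800);
}
```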

This movement technique was used throughout each of the task implementations. Because of the lack of granularity in the movement commands, inaccuracies related to movement needed to be taken into account at each of these steps.

4.2 Odometry
In Section 3.2, an algorithm involving the basic integration of velocity over time was presented to achieve location tracking at any given point in time. However, there are many more factors that need to be taken into account when actually implementing this algorithm on the AR Drone 2.0. The first thing that must be considered is the frame of reference in which the different velocities are measured. The AR Drone 2.0 takes all of its measurements with respect to its own frame of reference, so it is critical to make the necessary adjustments to transform these measurements into the frame of reference of the world. For example, if the drone travels at 5 mm/s in its own y direction, makes a 90° turn to the left, and continues to travel at 5 mm/s in its own y direction, then in the world frame it is actually moving at 5 mm/s in the y direction before the turn and at 5 mm/s in the x direction after the turn. To adjust for this, a rotation matrix is used to transform velocities with respect to the drone into velocities with respect to the world. In 2D, for a heading angle θ, this can be expressed as:

R = [ cos θ   -sin θ ]
    [ sin θ    cos θ ]

Once this rotational transformation has been applied to the velocity vectors, they can be integrated with respect to time to compute relative position. However, there are several technical issues related to the hardware that require additional smoothing algorithms in order to make the measurements as precise as possible. One issue is that the drone produces an inaccurate velocity reading from time to time as it tries to stabilize itself in flight. To remedy this, rather than using every velocity reading to calculate position, a running median over every ten readings is taken to reduce noise. Another issue involves the drone's magnetometer, which it uses to determine magnetic north and thus to calculate its angular orientation. Due to magnetic interference indoors, the drone occasionally produces inaccurate angular readings. To remedy this, a threshold of 30° per second was used: if the drone produced two readings more than 30° apart within a second, the latter reading was ignored. This threshold was developed during pilot testing of the drone, as it was discovered that the drone could not physically turn that fast. The overall process of determining the drone's current location is depicted in Figure 6. In the actual implementation, two threads are used: a read thread, in which the input data (velocity, time) is received, stored and analyzed, and a write thread, in which output data (direction and magnitude) is passed to the drone so it knows where to move next.

Figure 6: AR Drone odometry process. Input: a specified path (e.g., go forward 2 meters). The read thread receives velocity and time data from the drone, adjusts for the frame of reference and integrates velocity over time; the write thread outputs the current position of the drone relative to its start.
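A minimal sketch of this position update is shown below: body-frame velocities are rotated into the world frame by the current heading and integrated over the elapsed time, and a 10-sample running median stands in for the noise filtering described above. The names and sample values are illustrative.

```c
/* Sketch of the Section 4.2 position update. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y, heading; } state_t;   /* heading in radians */

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Median of the last n velocity readings (n = 10 in the implementation). */
double running_median(const double *samples, int n) {
    double sorted[32];
    for (int i = 0; i < n && i < 32; ++i) sorted[i] = samples[i];
    qsort(sorted, n, sizeof(double), cmp_double);
    return (n % 2) ? sorted[n / 2] : 0.5 * (sorted[n / 2 - 1] + sorted[n / 2]);
}

/* Apply R = [cos h, -sin h; sin h, cos h] to the body-frame velocity
 * (vx, vy) and integrate over dt seconds. */
void update_position(state_t *s, double vx_body, double vy_body, double dt) {
    double vx_world = cos(s->heading) * vx_body - sin(s->heading) * vy_body;
    double vy_world = sin(s->heading) * vx_body + cos(s->heading) * vy_body;
    s->x += vx_world * dt;
    s->y += vy_world * dt;
}

int main(void) {
    state_t s = {0.0, 0.0, 0.0};
    update_position(&s, 0.0, 5.0, 1.0);   /* 5 mm/s along the body y axis   */
    s.heading = M_PI / 2.0;               /* after a 90-degree turn          */
    update_position(&s, 0.0, 5.0, 1.0);   /* same body velocity now maps     */
                                          /* onto the world x axis           */
    printf("world position: (%.1f, %.1f) mm\n", s.x, s.y);
    return 0;
}
```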
4.3 Locating an Object at the Origin
Once odometry has been implemented with reasonable accuracy, the second improvement requires the implementation of video processing algorithms. As previously mentioned, OpenCV provides utilities and implementations of many of the aforementioned algorithms. Its API was therefore used to complete many of the tasks required for object detection and, eventually, obstacle avoidance [4].

Video Processing with the AR Drone SDK
Once distance and time instructions can be converted into AR Drone movement commands, video processing can be used to augment path accuracy. Although OpenCV has been largely rewritten in C++ over the past few years, it maintains a complete interface to the library through C methods. As the AR Drone SDK is written entirely in C, it makes the most sense to use the C interface when implementing the following techniques [4]. The AR Drone video SDK provides access to the video buffer through pre-decoding and post-decoding stages. These stages have access to the raw buffer of image frame data and can modify the data for subsequent stages in the pipeline. The AR Drone outputs video streams in RGB (RGB888) or RGB565 format. RGB565 is a 16-bit format, whereas standard RGB is a 24-bit format: RGB565's 16 bits permit only 65,536 unique colors, whereas the much larger 24-bit space permits 16,777,216 colors. Standard RGB is required for integration with most libraries, including OpenCV, and is consequently used in this implementation. Once the data is accessible through a buffer, OpenCV provides a framework which can display an image buffer in a GUI window. The OpenCV GUI library expects data in BGR order (the reverse of RGB), which requires that the data be converted prior to display [4].
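The sketch below shows one way to hand a decoded RGB888 frame to OpenCV's C interface: the raw buffer is wrapped in an IplImage header without copying, converted to the BGR ordering that HighGUI expects, and displayed. The buffer pointer and dimensions are assumed to come from the SDK's video pipeline stages.

```c
/* Sketch: display one RGB888 frame from the drone using the OpenCV C API. */
#include <opencv/cv.h>
#include <opencv/highgui.h>

void show_frame(unsigned char *rgb_buffer, int width, int height) {
    /* Wrap the existing pixel data without copying it. */
    IplImage *rgb = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 3);
    cvSetData(rgb, rgb_buffer, width * 3);

    /* HighGUI expects BGR, so swap the channel order before display. */
    IplImage *bgr = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
    cvCvtColor(rgb, bgr, CV_RGB2BGR);

    cvNamedWindow("drone", CV_WINDOW_AUTOSIZE);
    cvShowImage("drone", bgr);
    cvWaitKey(1);                  /* let the GUI event loop run briefly */

    cvReleaseImage(&bgr);
    cvReleaseImageHeader(&rgb);    /* releases the header, not rgb_buffer */
}
```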

Color Thresholding
The accuracy of color thresholding is highly dependent on the range of colors provided to identify the object to be detected. The color must be both well defined and unique in the general environment being recorded. For this reason a bright pink sticky note is used to maximize accuracy (Figure 7). Although the RGB image format provides a good abstraction of color for humans (because this is how we perceive color), its ability to represent different shades of the same color is rather poor: there is no clear axis along which different shades and hues can be described. Consequently, another color space, HSV (Hue, Saturation, Value), was developed, which enables manipulation of color data based on saturation and value, which relate to the lightness and darkness of the color hue [14]. OpenCV provides an API for converting from RGB to HSV and another for returning a modified one-channel image that contains only white or black pixels, depending on whether or not the original pixel fell within the predefined color range (Figure 7).

Figure 7: Color-thresholded AR Drone video stream, with thresholding applied to a pink sticky note

Figure 8: AR Drone video stream with green contours drawn around the detected color-thresholded area

Once a binary image representing the object is available, the edges of the object can be detected with the help of OpenCV APIs, specifically cvFindContours. This method seeks out contours (edges) in the binary image using the aforementioned border-following algorithm [19]. The returned contours can then be drawn on the image either by using the provided line drawing methods or with an additional contour drawing method, cvDrawContours [3] (Figure 8). As color thresholding is by definition very sensitive to environmental changes, the resulting borders can be very noisy and can generate false positives. This can be rectified by setting a minimum area for the object. Although this reduces the distance from which an object can be detected, it dramatically reduces the detection of stray areas of color within the defined threshold. All of the aforementioned procedures were implemented in a separate video thread so as not to interfere with the running time of the other procedures.
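Putting these pieces together, the sketch below converts a frame to HSV, thresholds it against a pre-chosen pink range, finds contours in the resulting binary image with cvFindContours, and rejects blobs below a minimum area. The HSV bounds and the minimum area are illustrative placeholders rather than the calibrated values used in testing.

```c
/* Sketch of the color-thresholding stage using the OpenCV C interface. */
#include <math.h>
#include <opencv/cv.h>

/* Returns 1 and fills *box if a sufficiently large pink region is found. */
int detect_target(IplImage *bgr_frame, CvRect *box) {
    CvSize size = cvGetSize(bgr_frame);
    IplImage *hsv  = cvCreateImage(size, IPL_DEPTH_8U, 3);
    IplImage *mask = cvCreateImage(size, IPL_DEPTH_8U, 1);
    cvCvtColor(bgr_frame, hsv, CV_BGR2HSV);

    /* Pixels inside the range become 255 (object); everything else is 0. */
    cvInRangeS(hsv, cvScalar(150, 80, 80, 0), cvScalar(175, 255, 255, 0), mask);

    CvMemStorage *storage = cvCreateMemStorage(0);
    CvSeq *contour = NULL;
    cvFindContours(mask, storage, &contour, sizeof(CvContour),
                   CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

    int found = 0;
    for (; contour != NULL; contour = contour->h_next) {
        double area = fabs(cvContourArea(contour, CV_WHOLE_SEQ, 0));
        if (area > 200.0) {                    /* reject small noise blobs */
            *box = cvBoundingRect(contour, 0);
            found = 1;
            break;
        }
    }

    cvReleaseMemStorage(&storage);
    cvReleaseImage(&mask);
    cvReleaseImage(&hsv);
    return found;
}
```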
Moving Towards the Target Object
Color thresholding can be combined with odometry to enable movement towards a target object once it has been detected. In theory, this requires only the implementation of Section 3.3.4; in practice, the implementation required additional considerations. First, a clear definition of detection is needed. When the object is at the edge of the range for accurate object detection, there are situations in which the drone briefly detects the object but then fails to detect it in subsequent frames. Moreover, occasional visual artifacts introduce situations in which the aforementioned algorithms generate false positives on objects that are not the target destination. To rectify this issue, an additional threshold was added requiring that the target appear in multiple sequential frames before the system switches to the targeting portion of its logic. Second, problems with the consistency of the video stream caused similar detection problems and introduced situations in which blank frames would cause the object to incorrectly disappear. This made it impossible to distinguish between situations in which the object had actually moved out of the drone's view (a situation that would warrant an emergency landing) and situations in which a frame had been temporarily dropped. Similarly to the previous solution, thresholds were added to ensure that the sticky note was gone for multiple frames prior to engaging in an emergency landing. Once these problems had been rectified, the drone could be instructed to move and then repeatedly re-examine its position every 10 milliseconds to determine whether it had reached its target. Due to the inaccuracies in the drone's movement commands, a range was established to ensure that the drone would not repeatedly hover in place, unable to move accurately enough to reach its exact target location.

4.4 Odometry and Obstacle Avoidance
Once odometry and target detection were implemented, obstacle avoidance could be combined with them to produce an autonomous drone. A general flow chart describing the combined task of odometry, obstacle avoidance and object detection for origin targeting is presented in Figure 9. It depicts how the drone uses its read thread to retrieve data, compute the next direction of travel, and detect both the target and potential obstacles, while its write thread tells the drone to fly to the destination and back based on the computed next direction of travel. If at any point an obstacle is detected, the write thread instructs the drone to navigate around the object before continuing. Finally, once the drone detects the sticky note at the origin, odometry stops and the drone continuously moves towards the detected object before landing.

Figure 9: Odometry with target detection and obstacle avoidance. Input: a destination (e.g., 4 meters, 30 degrees NW from the origin). The read thread computes the current position and next direction of travel, receives real-time image data, computes optical flow and attempts to detect the sticky note; the write thread commands the drone to fly in the computed direction, runs the avoidance routine when an obstacle is detected, switches to the sticky-note homing algorithm once the target is detected or the drone is close enough to the origin, and finally commands the drone to turn and fly towards the origin and land.

4.4.1 Optical Flow
As video is simply a buffer of images, the AR Drone SDK provides access to the image buffer of the video being recorded by the drone. Gunnar Farneback's dense optical flow algorithm, discussed previously, was used to calculate the optical flow between the previous image and the current image for each pixel [9].

This was achieved by calling the OpenCV method cvCalcOpticalFlowFarneback, which takes two images, the previous image and the current image, and calculates the optical flow between the two into a two-channel, 32-bit floating point image. The function can also be given various parameter values to apply different degrees and techniques of smoothing. The previous image frame was therefore stored in a buffer during each processing step and then accessed when the next frame arrived in order to compute optical flow. The resulting optical flow data was summed over a region of the image, and if the sum exceeded a threshold, a bit was flipped indicating the presence of an obstacle that the drone needed to avoid.

Unfortunately, there were many difficulties associated with the use of a dense optical flow algorithm. First, examining every pixel in each image and calculating its displacement is computationally very expensive. In order to effectively avoid obstacles, object detection needs to operate fast enough for the drone to respond in real time. This potential delay between detection and the commands to avoid the object is one of the key technical issues in obstacle detection and avoidance. For example, one can imagine the drone approaching an obstacle which is 1 meter away: the drone provides the necessary input to the program and, after all its calculations, the program determines that the drone is indeed 1 meter away from an obstacle. However, by the time the calculations are complete, the drone has moved another 0.5 meters forward, so the obstacle is actually half a meter away when the drone receives the next instructions, which are now invalid.

A few techniques were implemented to improve the speed of the optical flow algorithm. The resolution of the image was reduced so that there were fewer pixels for which to compute displacement. Another change that increased processing speed was to further crop the video stream, assuming that there existed a region of collision in the center of the drone's viewport. Once this portion of the image was determined (see the green box in Figure 5), the rest of the image was cropped, so that only the reduced area was used with Farneback's optical flow algorithm. A final optimization was made by reducing the number of frames for which flow was calculated, computing optical flow only every other video frame. By calculating optical flow less frequently, the drone no longer entered situations in which it fell further and further behind the live image stream. These techniques, however, only partially solved the problem of delay.
Figure 10: Simple portrayal of the hovering effect (drone x position in meters versus time in seconds). Assume a 1 second delay for the time it takes to send data from the drone to the program and for the program to calculate the current position, that the obstacle is 4 meters away, and that the obstacle can be detected within 2 meters. Red represents the actual path of the drone; blue represents the path of the drone as perceived by the program after the delay. At t = 2 the drone is at x = 2 meters and therefore sees the obstacle at x = 4 meters; however, the program only realizes this at t = 3, at which point it tells the drone to hover, so the drone remains at x = 3 while the program catches up. This problem was repeatedly faced while calculating optical flow.

The most effective solution involved introducing an intermediary step between input and output, which will be called hovering or syncing. In this step, the drone hovered in place without moving in any direction, which enabled the program to sync up to the current position of the drone. For instance, in the above example, once the drone first detected the existence of an obstacle in front of it, it would send the corresponding information to the program. The drone would still have moved half a meter by the time the program calculated that there was an obstacle 1 meter in front of it; however, at this point the program would tell the drone to hover in place so that it could sync up, collect the most recent data, and derive that the obstacle was actually only half a meter away (Figure 10).

The second difficulty is that using a dense optical flow algorithm and examining all pixels, instead of just features such as corners, causes the calculated optical flow values to be extremely sensitive to the environment. Consequently, the optical flow for the same scenario (drone, obstacle, path) varied greatly depending on whether the tests were conducted in a dark or a well-lit room. Thus, the optical flow threshold at which the algorithm decided an obstacle was present was pre-calibrated for each testing environment. This effect could be somewhat reduced by cropping the image to reduce the number of pixels; using this technique, the optical flow values remained more consistent unless the light source was directly in front of and facing the drone.

Finally, in order to reduce variance further and achieve more consistency, Farneback's optical flow algorithm was combined with a simple segmentation technique: greyscaling the image. In the implementation, the images passed to cvCalcOpticalFlowFarneback are first converted to grayscale to reduce random pixel variance. More sophisticated segmentation techniques, such as the Canny edge detection discussed above, could potentially have produced even more consistent results, but carry an additional computational cost. The aforementioned modifications mitigated some of the effects of the two problems described here, enabling optical flow to be successfully integrated into the system.
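The per-frame obstacle check described in this subsection can be sketched as follows: both grayscale frames are passed to cvCalcOpticalFlowFarneback, the flow magnitude is summed over the central region of interest, and the sum is compared against the environment-specific threshold. The region handling and parameter values are illustrative, not the calibrated ones used in testing.

```c
/* Sketch of the per-frame obstacle check based on dense optical flow.
 * prev_gray and curr_gray are single-channel 8-bit grayscale frames of the
 * same size, and roi is assumed to lie entirely inside the image. */
#include <math.h>
#include <opencv/cv.h>

int obstacle_ahead(IplImage *prev_gray, IplImage *curr_gray,
                   CvRect roi, double threshold) {
    CvSize size = cvGetSize(curr_gray);
    IplImage *flow = cvCreateImage(size, IPL_DEPTH_32F, 2);   /* (dx, dy) */

    /* Parameters follow commonly used Farneback defaults: pyramid scale,
     * levels, window size, iterations, poly_n, poly_sigma, flags. */
    cvCalcOpticalFlowFarneback(prev_gray, curr_gray, flow,
                               0.5, 3, 15, 3, 5, 1.2, 0);

    double total = 0.0;
    for (int y = roi.y; y < roi.y + roi.height; ++y) {
        const float *row = (const float *)(flow->imageData + y * flow->widthStep);
        for (int x = roi.x; x < roi.x + roi.width; ++x) {
            float dx = row[2 * x], dy = row[2 * x + 1];
            total += sqrt(dx * dx + dy * dy);   /* per-pixel flow magnitude */
        }
    }

    cvReleaseImage(&flow);
    return total > threshold;   /* 1 = obstacle detected, trigger avoidance */
}
```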
5. SYSTEM PERFORMANCE
As the system for autonomous flight was implemented as standalone modules, the modules were also evaluated individually, in addition to being evaluated as an integrated whole.

5.1 Odometry
There were two essential tasks that the drone was programmed to perform. The first task was for the drone to know its current location at all times. To test this, the drone was given commands to fly to an arbitrary point, and the difference between the drone's actual position and the position it predicted was measured. Success was evaluated in terms of percent error of the total distance travelled, as shown in Table 1. For example, in the first trial the drone believed that it was 3.84 meters in front of and 0.63 meters to the left of its starting location, when it was actually 3.86 meters in front and 0.7 meters to the left. The total distance travelled (3.923 meters) is simply the Euclidean distance from the actual location to the origin (0, 0). The difference of 0.082 is the Euclidean distance between the predicted location and the actual location. Finally, the error of 0.021 is the difference divided by the total distance travelled (0.082 / 3.923). A total of ten trials were performed to measure how accurately the drone predicted its current location. In the end, the drone had on average a 5.1% error in measurement, or 94.9% accuracy in predicting its location at any time.

Table 1: Accuracy of Knowing Current Location
Pred. X | Pred. Y | Actual X | Actual Y | Distance | Diff. | Err.

The second task was for the drone to be able to return to its starting location at any time. The drone was first given commands to fly to an arbitrary point. Then it was commanded to return to the origin: to accomplish this, it first calculated the correct orientation to turn to, and after orienting itself correctly, it attempted to fly straight back to its starting location. Because the drone naturally drifts left or right when it turns or moves, it was programmed to adjust its course every second to account for these unpredictable movements. To evaluate the success of this task, percent error of the total distance travelled was again used, as shown in Table 2. For example, in the first trial the drone travelled a distance of 7.96 meters (3.98 meters each way). It landed 0.25 meters away from its starting location and, as a result, had an error of 0.031 (0.25 / 7.96), or 3.1%. A total of ten trials were performed to measure how accurately the drone returned to its starting location. In the end, the drone was on average 4.1% off from the starting location. Note that these trials were performed in situations in which the magnetometer was able to correctly calculate magnetic north. When it was unable to do so, it was impossible to calculate the heading angle, and consequently impossible to determine the correct facing (despite knowing the correct x, y displacement from the origin).

Table 2: Accuracy of Returning to Origin
Total Distance | Distance Away From Origin | Error

5.2 Locating an Object at the Origin

Once the aforementioned smoothing techniques were implemented, successfully moving to the sticky note at the origin depended on how close the drone was to the target when the odometry step terminated. The accuracy of the odometry aided this process greatly, but tests showed that the sticky note could be detected from a substantial distance, providing a permissible margin of error if odometry were to malfunction and produce a lesser degree of accuracy. Table 3 shows the distance at which the sticky note was first detected as a function of the size of the sticky note. Once the sticky note was successfully detected, moving towards the target succeeded 100% of the time over the course of more than 10 trials, provided that the sticky note did not move out of the viewport and no additional objects falling within the defined color threshold entered the viewport during the movement process. Although the drone was able to navigate to the target every time, the path it took varied depending on when the drone first engaged with the sticky note. This affected whether or not the drone would accidentally overshoot the target during movement commands and have to readjust in the opposite direction. These readjustments slowed down the process overall, but did not affect whether the drone was ultimately successful in reaching the target and landing.

Table 3: Maximum Distance for Target Detection
Size of Target | Distance at Which Target is Detected
7.4 cm | 186 cm
10.1 cm |
14.5 cm |

5.3 Obstacle Detection and Avoidance
The success of obstacle detection and avoidance depended greatly on the speed and the correct calibration (given the environment) of the optical flow algorithm. As discussed in Section 4.4.1, detecting an obstacle required an optical flow threshold (to indicate an impending obstacle) that depended greatly on the environment in which the drone was flying. If there was a lot of light in the room, the calculated optical flow threshold needed to be much higher than in a darker room. Depending on the environment in which the drone was being flown, the optical flow algorithm therefore had to be recalibrated by changing the threshold value. While the ability to detect an object was determined by the calibration of the optical flow algorithm, the ability to avoid an object once detected depended on the response time of the drone. The response time was limited by the computational speed of the optical flow algorithm and of the computer on which it was running. If the delay between obstacle detection and the avoidance command was too large, the drone would not respond in time to avoid a collision. In tests, the AR Drone was given 3.5 seconds to fly straight with an obstacle in its path. It was instructed to land when it had detected an object and determined that they would collide. Table 4 contains data on the sequential improvements made to the computation time of optical flow, discussed in Section 4.4.1. The first column indicates the speed-up technique and the last column indicates whether the drone landed before colliding with the obstacle when using this technique together with all of the previous ones.

Table 4: Response Time Based on Algorithm Improvements
Improvement | Delay (seconds) | Respond Before Collision
Reduce image resolution | >3.5 | No
Crop image to determined size (see Figure 5) | 1.0 - 3.5 | Yes with a far obstacle, no otherwise
Calculate optical flow from a reduced number of frames | <1.0 | Yes

The delay column indicates the delay between the optical flow crossing the threshold (when an object is detected) and the response of the drone.
In order to avoid obstacles, the drone needs to respond to an object detection with enough time to maneuver around it. If the delay was larger than 3.5 seconds, the drone did not respond before colliding; this was observed when the drone would print that it had detected an object only after the 3.5 seconds it had been flying. When the delay was between 1.0 and 3.5 seconds, the drone would respond in time if the object was sufficiently far away. It was observed that the delay should be less than about 1.0 second to respond to obstacles at any distance. When the improved optical flow algorithm was calibrated correctly for the environment in which the drone was flying and the listed improvements were implemented, the drone always detected the obstacle and executed the avoidance routine prior to colliding with it.

6. ETHICS
There are a number of dilemmas that accompany the act of making drones with artificial intelligence more readily available. First, the availability of inexpensive drones could aid malicious individuals seeking to conduct illegal surveillance that harms civilians or governments. By reducing the barriers to entry for covert surveillance, this inexpensive technology would allow anyone previously incapable of acquiring this type of technology to violate privacy, with potential implications for individual security. Second, although these artificial intelligence algorithms are primarily for navigation purposes, the physical drone is still a dangerous device which can cause harm to civilians in certain situations. One such situation would be a drone navigating home which failed to correctly engage in obstacle avoidance and unintentionally collided with a human. The sharp rotors could potentially harm the person, raising questions of liability regarding the operator, the manufacturer, and the engineer. This issue, and many others related to potential malfunction, would need to be thoroughly examined before using these drones in commercial or residential settings. Finally, the cheap components of the AR Drone enable it to be produced in bulk and used with less of a focus on accidental or intentional damage done to any one drone. Although this redundancy is a powerful concept for system design, it also introduces environmental concerns related to


More information

THE SMARTEST EYES IN THE SKY

THE SMARTEST EYES IN THE SKY THE SMARTEST EYES IN THE SKY ROBOTIC AERIAL SECURITY - US PATENT 9,864,372 Nightingale Security provides Robotic Aerial Security for corporations. Our comprehensive service consists of drones, base stations

More information

THE SMARTEST EYES IN THE SKY

THE SMARTEST EYES IN THE SKY THE SMARTEST EYES IN THE SKY ROBOTIC AERIAL SECURITY - US PATENT 9,864,372 Nightingale Security provides Robotic Aerial Security for corporations. Our comprehensive service consists of drones, base stations

More information

U g CS for DJI Phantom 2 Vision+

U g CS for DJI Phantom 2 Vision+ U g CS for DJI Phantom 2 Vision+ Mobile companion application Copyright 2016, Smart Projects Holdings Ltd Contents Preface... 2 Drone connection and first run... 2 Before you begin... 2 First run... 2

More information

Team MacroHard: The Perfect Selfie Shreesha Suresha Mary Anne Noskowski Simranjit Singh Sekhon Bragatheesh Sureshkumar Beau Rampey

Team MacroHard: The Perfect Selfie Shreesha Suresha Mary Anne Noskowski Simranjit Singh Sekhon Bragatheesh Sureshkumar Beau Rampey Team MacroHard: The Perfect Selfie Shreesha Suresha Mary Anne Noskowski Simranjit Singh Sekhon Bragatheesh Sureshkumar Beau Rampey Intro: The project is an integration of a drone, a video recording device,

More information

CHAPTER-6 HISTOGRAM AND MORPHOLOGY BASED PAP SMEAR IMAGE SEGMENTATION

CHAPTER-6 HISTOGRAM AND MORPHOLOGY BASED PAP SMEAR IMAGE SEGMENTATION CHAPTER-6 HISTOGRAM AND MORPHOLOGY BASED PAP SMEAR IMAGE SEGMENTATION 6.1 Introduction to automated cell image segmentation The automated detection and segmentation of cell nuclei in Pap smear images is

More information

Modeling and Control of Small and Mini Rotorcraft UAVs

Modeling and Control of Small and Mini Rotorcraft UAVs Contents 1 Introduction... 1 1.1 What are Unmanned Aerial Vehicles (UAVs) and Micro Aerial Vehicles (MAVs)?... 2 1.2 Unmanned Aerial Vehicles and Micro Aerial Vehicles: Definitions, History,Classification,

More information

Introduction to Artificial Intelligence. Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST

Introduction to Artificial Intelligence. Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST Introduction to Artificial Intelligence Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST Chapter 9 Evolutionary Computation Introduction Intelligence can be defined as the capability of a system to

More information

Collaboration Between Unmanned Aerial and Ground Vehicles. Dr. Daisy Tang

Collaboration Between Unmanned Aerial and Ground Vehicles. Dr. Daisy Tang Collaboration Between Unmanned Aerial and Ground Vehicles Dr. Daisy Tang Key Components Autonomous control of individual agent Collaborative system Mission planning Task allocation Communication Data fusion

More information

U g CS for DJI. Mobile companion application. Copyright 2016, Smart Projects Holdings Ltd

U g CS for DJI. Mobile companion application. Copyright 2016, Smart Projects Holdings Ltd U g CS for DJI Mobile companion application Copyright 2016, Smart Projects Holdings Ltd Contents Preface... 3 Drone connection and first run... 3 Before you begin... 3 First run... 3 Connecting smartphone

More information

U g CS for DJI Phantom 2 Vision+, Phantom 3 and Inspire 1 Mobile companion application

U g CS for DJI Phantom 2 Vision+, Phantom 3 and Inspire 1 Mobile companion application U g CS for DJI Phantom 2 Vision+, Phantom 3 and Inspire 1 Mobile companion application Copyright 2015, Smart Projects Holdings Ltd Contents Preface... 2 Drone connection and first run... 2 Before you begin...

More information

Dynamic Target Tracking and Obstacle Avoidance using a Drone

Dynamic Target Tracking and Obstacle Avoidance using a Drone The th International Symposium on Visual Computing (ISVC), December 4-6, 205 Las Vegas, Nevada, USA. Dynamic Target Tracking and Obstacle Avoidance using a Drone Alexander C. Woods and Hung M. La (B) Advanced

More information

QUADCOPTER. Huginn. Group 1. Floris Driessen Martin van Leussen Art Senders Tijl van Vliet Steven van der Vlugt

QUADCOPTER. Huginn. Group 1. Floris Driessen Martin van Leussen Art Senders Tijl van Vliet Steven van der Vlugt QUADCOPTER Huginn Group 1 Floris Driessen Martin van Leussen Art Senders Tijl van Vliet Steven van der Vlugt Presentation outlines Assignments and goals A) Build the quadcopter and get it flying B) PID

More information

FORMATION FLIGHT OF FIXED-WING UAVS USING ARTIFICIAL POTENTIAL FIELD

FORMATION FLIGHT OF FIXED-WING UAVS USING ARTIFICIAL POTENTIAL FIELD FORMATION FLIGHT OF FIXED-WING UAVS USING ARTIFICIAL POTENTIAL FIELD Yoshitsugu Nagao*and Kenji Uchiyama* *Department of Aerospace Engineering, Nihon University csyo1217@g.nihon-u.ac.jp, uchiyama@aero.cst.nihon-u.ac.jp

More information

Requirements Specification

Requirements Specification Autonomous Minesweeper 206-2-2 Editor: Erik Bodin Version.3 Status Reviewed Erik Bodin 206-0-07 Approved Martin Lindfors 206-09-23 TSRT0 Page tsrt0-balrog@googlegroups.com Autonomous Minesweeper 206-2-2

More information

Multi Drone Task Allocation (Target Search)

Multi Drone Task Allocation (Target Search) UNIVERSITY of the WESTERN CAPE Multi Drone Task Allocation (Target Search) Author: Freedwell Shingange Supervisor: Prof Antoine Bagula Co-Supervisor: Mr Mehrdad Ghaziasgar March 27, 2015 Abstract The goal

More information

Implementing Consensus based tasks with autonomous agents cooperating in dynamic missions using Subsumption Architecture

Implementing Consensus based tasks with autonomous agents cooperating in dynamic missions using Subsumption Architecture Implementing Consensus based tasks with autonomous agents cooperating in dynamic missions using Subsumption Architecture Prasanna Kolar PhD Candidate, Autonomous Control Engineering Labs University of

More information

Multiple UAV Coordination

Multiple UAV Coordination Multiple UAV Coordination By: Ethan Hoerr, Dakota Mahan, Alexander Vallejo Team Advisor: Dr. Driscoll Department: Bradley ECE Thursday, April 9, 2015 2 Problem Statement Using multiple UAV coordination,

More information

Autonomous Quadcopter UAS P15230

Autonomous Quadcopter UAS P15230 Autonomous Quadcopter UAS P15230 Agenda Project Description / High Level Customer Needs / Eng Specs Concept Summary System Architecture Design Summary System Testing Results Objective Project Evaluation:

More information

Studies of AR Drone on Gesture Control MA Lu, CHENG Lee Lung

Studies of AR Drone on Gesture Control MA Lu, CHENG Lee Lung 3rd International Conference on Materials Engineering, Manufacturing Technology and Control (ICMEMTC 2016) Studies of AR Drone on Gesture Control MA Lu, CHENG Lee Lung Department of Electronic Engineering

More information

First Session Visual Navigation for Flying Robots Workshop

First Session Visual Navigation for Flying Robots Workshop First Session Visual Navigation for Flying Robots Workshop Jürgen Sturm, Jakob Engel, Daniel Cremers Computer Vision Group, Technical University of Munich Bergendal Meetings 17.06.2013 Introduction We

More information

Kinect Hand Gesture Drone Flight Controller

Kinect Hand Gesture Drone Flight Controller Kinect Hand Gesture Drone Flight Controller Darren Jody van Roodt Thesis presented in fulfilment of the requirements for the degree of Bachelor of Science (Honours) at the University of the Western Cape

More information

Assessment of accuracy of basic maneuvers performed by an unmanned aerial vehicle during autonomous flight

Assessment of accuracy of basic maneuvers performed by an unmanned aerial vehicle during autonomous flight Assessment of accuracy of basic maneuvers performed by an unmanned aerial vehicle during autonomous flight Paweł ĆWIĄKAŁA AGH University of Science and Technology, Faculty of Mining Surveying and Environmental

More information

Control of Flight Operation of a Quad rotor AR. Drone Using Depth Map from Microsoft Kinect Sensor K. Boudjit, C. Larbes, M.

Control of Flight Operation of a Quad rotor AR. Drone Using Depth Map from Microsoft Kinect Sensor K. Boudjit, C. Larbes, M. Control of Flight Operation of a Quad rotor AR. Drone Using Depth Map from Microsoft Kinect Sensor K. Boudjit, C. Larbes, M. Alouache Abstract: In recent years, many user-interface devices appear for managing

More information

Abhyast Phase V Semi-Autonomous Aerial Vehicle Design

Abhyast Phase V Semi-Autonomous Aerial Vehicle Design 25-03-15 Abhyast Phase V Semi-Autonomous Aerial Vehicle Design Boeing-IITK Joint Venture Project Team Members Elle Atma Vidhya Prakash (eprakash@iitk.ac.in) Rahul Gujar (rahugur@iitk.ac.in) Preksha Gupta

More information

Autonomous Aerial Mapping

Autonomous Aerial Mapping Team Name: Game of Drones Autonomous Aerial Mapping Authors: Trenton Cisewski trentoncisewski@gmail.com Sam Goyal - s4mgoyal@gmail.com Chet Koziol - chet.koziol@gmail.com Mario Infante - marioinfantejr@gmail.com

More information

Follow this and additional works at:

Follow this and additional works at: University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange University of Tennessee Honors Thesis Projects University of Tennessee Honors Program 8-2017 Min Kao Drone Tour Ethan

More information

Keywords Barcode, Labview, Real time barcode detection, 1D barcode, Barcode Recognition.

Keywords Barcode, Labview, Real time barcode detection, 1D barcode, Barcode Recognition. Volume 7, Issue 4, April 2017 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Real Time Barcode

More information

CONCEPTUAL DESIGN OF AN AUTOMATED REAL-TIME DATA COLLECTION SYSTEM FOR LABOR-INTENSIVE CONSTRUCTION ACTIVITIES

CONCEPTUAL DESIGN OF AN AUTOMATED REAL-TIME DATA COLLECTION SYSTEM FOR LABOR-INTENSIVE CONSTRUCTION ACTIVITIES CONCEPTUAL DESIGN OF AN AUTOMATED REAL-TIME DATA COLLECTION SYSTEM FOR LABOR-INTENSIVE CONSTRUCTION ACTIVITIES H. Randolph Thomas The Pennsylvania State University Research Building B University Park,

More information

Comparison of PID and Fuzzy Controller for Position Control of AR.Drone

Comparison of PID and Fuzzy Controller for Position Control of AR.Drone IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Comparison of PID and Fuzzy Controller for Position Control of AR.Drone To cite this article: A Prayitno et al 7 IOP Conf. Ser.:

More information

DRONIUM ZERO DRONE WITH LIVE STREAMING CAMERA

DRONIUM ZERO DRONE WITH LIVE STREAMING CAMERA DRONIUM ZERO DRONE WITH LIVE STREAMING CAMERA THANK YOU. Thank you for your purchase of Protocol s Dronium Zero With Live Streaming Camera. You are about to experience the best of what remote control

More information

Drone Adjustment Play Block Coding System for Early Childhood

Drone Adjustment Play Block Coding System for Early Childhood , pp.122-126 http://dx.doi.org/10.14257/astl.2017.145.24 Drone Adjustment Play Block Coding System for Early Childhood Yeon-Jae Oh 1, Young-Sang Suh 2 and Eung-Kon Kim 1 1 Department of Computer Engineering,

More information

Rendezvous of Multiple UAVs with Collision Avoidance using Consensus

Rendezvous of Multiple UAVs with Collision Avoidance using Consensus Rendezvous of Multiple UAVs with Collision Avoidance using Consensus J G Manathara & D Ghose Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India 560012. A R T I C L E I N

More information

Parking Lot Checker. Wilson Kwong Nate Mayotte Jeff Wanger

Parking Lot Checker. Wilson Kwong Nate Mayotte Jeff Wanger Parking Lot Checker Wilson Kwong Nate Mayotte Jeff Wanger Introduction The purpose of the parking lot checker is to provide a visual representation of where available parking spaces are in a parking lot.

More information

Index Terms Quadcopter, Unmanned Aerial Vehicle, Radial Basis Function Neural Network. Displacement of x axis ey. Displacement of y axis ez

Index Terms Quadcopter, Unmanned Aerial Vehicle, Radial Basis Function Neural Network. Displacement of x axis ey. Displacement of y axis ez Modeling and Control Design of Quad copter Failsafe System Chun-Yi Lin and Wu-Sung Yao Department of Mechanical and Automation Engineering, National Kaohsiung First University of Science and Technology,

More information

Deposited on: 13 July 2009

Deposited on: 13 July 2009 Kim, J. and Kim, Y. (2009) Optimal circular flight of multiple UAVs for target tracking in urban areas. In: Lazinica, A. (ed.) Intelligent Aerial Vehicles. IN-TECH, Vienna, Austia. ISBN 9789537619411 http://eprints.gla.ac.uk/6253/

More information

Artificial Intelligence applied for electrical grid inspection using drones

Artificial Intelligence applied for electrical grid inspection using drones Artificial Intelligence applied for electrical grid inspection using drones 11/08/2018-10.22 am Asset management Grid reliability & efficiency Network management Software Drones are being used for overhead

More information

Searcue Quadcopters. Gurjeet Matharu [Type the company name] Fall 2014

Searcue Quadcopters. Gurjeet Matharu [Type the company name] Fall 2014 Fall 2014 Searcue Quadcopters Gurjeet Matharu [Type the company name] Fall 2014 Contents Introduction... 2 Overview of the System - Quadcopter... 3 Risks:... 4 Benefits:... 5 Cost... 5 Project Organization...

More information

Advanced Tactics Announces the Release of the AT Panther Drone First Aerial Package Delivery Test with a Safe Drive-up-to-your-doorstep Video

Advanced Tactics Announces the Release of the AT Panther Drone First Aerial Package Delivery Test with a Safe Drive-up-to-your-doorstep Video UPDATED 03APRIL2017 MEDIA CONTACT: press@advancedtacticsinc.com (310) 325-0742 Advanced Tactics Announces the Release of the AT Panther Drone First Aerial Package Delivery Test with a Safe Drive-up-to-your-doorstep

More information

Enhancement of Quadrotor Positioning Using EKF-SLAM

Enhancement of Quadrotor Positioning Using EKF-SLAM , pp.56-60 http://dx.doi.org/10.14257/astl.2015.106.13 Enhancement of Quadrotor Positioning Using EKF-SLAM Jae-young Hwang 1,1 and Young-wan Cho 1 1 Dept. of Computer Engineering, Seokyeong University

More information

Development of Multiple AR.Drone Control System for Indoor Aerial Choreography *

Development of Multiple AR.Drone Control System for Indoor Aerial Choreography * Trans. JSASS Aerospace Tech. Japan Vol. 12, No. APISAT-2013, pp. a59-a67, 2014 Development of Multiple AR.Drone Control System for Indoor Aerial Choreography * By SungTae MOON, DongHyun CHO, Sanghyuck

More information

Basic Multicopter Control with Inertial Sensors

Basic Multicopter Control with Inertial Sensors International Journal of Computational Engineering & Management, Vol. 17 Issue 3, May 2014 www..org 53 Basic Multicopter Control with Inertial Sensors Saw Kyw Wai Hin Ko 1, Kyaw Soe Lwin 2 1,2 Department

More information

White Paper. Drone Design Guide. Page 1 of 14

White Paper. Drone Design Guide. Page 1 of 14 White Paper Drone Design Guide Page 1 of 14 Table of Contents Section Topic Page I Abstract 3 II Commercial Drone Market Overview 3 III Drone Basics 3 IV Quadcopter Power Plant 5 V Flight Time 8 VI Flight

More information

History. Also known as quadrotors First flying quadrotor: 11 November 1922 Etienne Oehmichen. Image: blogger.com

History. Also known as quadrotors First flying quadrotor: 11 November 1922 Etienne Oehmichen. Image: blogger.com Quadcopters History Also known as quadrotors First flying quadrotor: 11 November 1922 Etienne Oehmichen Image: blogger.com History Quadrotors overtaken by helicopters due to workload of the pilot. Some

More information

Swarming UAVs Demand the Smallest Mil/Aero Connectors by Contributed Article on August 21, 2018

Swarming UAVs Demand the Smallest Mil/Aero Connectors by Contributed Article on August 21, 2018 24/09/2018 Swarming UAVs Demand the Smallest Mil/Aero Connectors by Contributed Article on August 21, 2018 Most of the connectors used in the small, hand-launched military UAVs typical of swarming drones

More information

Introduction to Drones

Introduction to Drones Introduction to Drones Introduction You can go backwards, you can hover, and you can go straight up or straight down. What is it? It s a bird, it s a plane, no it s a drone! If you are familiar with the

More information

TEAM E: CRITICAL DESIGN REVIEW

TEAM E: CRITICAL DESIGN REVIEW TEAM E: CRITICAL DESIGN REVIEW DOCK- IN- PIECE Rushat Gupta Chadha Keerthana Manivannan Bishwamoy Sinha Roy Aishanou Osha Rait Paul M. Calhoun OVERVIEW Project Description Use Case System-level Requirements

More information

Alpha CAM. Quick Start Guide V1.0

Alpha CAM. Quick Start Guide V1.0 Alpha CAM Quick Start Guide V1.0 Learn about Your Alpha CAM The Alpha CAM is SUNLY TECH s portable smart mini drone that has been specially designed for selfie-lovers. It is equipped with a high-definition

More information

Research on Obstacle Avoidance System of UAV Based on Multi-sensor Fusion Technology Deng Ke, Hou Xiaosong, Wan Wenjie, Liu Shiyi

Research on Obstacle Avoidance System of UAV Based on Multi-sensor Fusion Technology Deng Ke, Hou Xiaosong, Wan Wenjie, Liu Shiyi 4th International Conference on Electrical & Electronics Engineering and Computer Science (ICEEECS 2016) Research on Obstacle Avoidance System of UAV Based on Multi-sensor Fusion Technology Deng Ke, Hou

More information

International Journal of Engineering Trends and Technology (IJETT) Volume 23 Number 7- May 2015

International Journal of Engineering Trends and Technology (IJETT) Volume 23 Number 7- May 2015 Effect of Path Planning on Flying Measured Characteristics for Quadcopter Using APM2.6 Controller Wael R. Abdulmajeed #1, Omar A. Athab #2, Ihab A. Sattam #3 1 Lect.Dr, Department of Mechatronics, Al-Khwarizmi

More information

Automation of the Maritime UAV Command, Control, Navigation Operations, Simulated in Real-Time Using Kinect Sensor: A Feasibility Study

Automation of the Maritime UAV Command, Control, Navigation Operations, Simulated in Real-Time Using Kinect Sensor: A Feasibility Study Automation of the Maritime UAV Command, Control, Navigation Operations, Simulated in Real-Time Using Kinect Sensor: A Feasibility Study Regius Asiimwe, and Amir Anvar Abstract This paper describes the

More information

Instruction Manual ODY-1012

Instruction Manual ODY-1012 Ages 8+ Instruction Manual ODY-1012 INCLUDED CONTENTS: 1 Fuselage Cover 2 Main Frame / Cage 3 Replacement Blades (x 4) 4 3.7 Rechargeable Lithium Battery 5 USB Charging Cable 6 Radio Transmitter 1 RADIO

More information

Decentralized Control Architecture for UAV-UGV Cooperation

Decentralized Control Architecture for UAV-UGV Cooperation Decentralized Control Architecture for UAV- Cooperation El Houssein Chouaib Harik, François Guérin, Frédéric Guinand, Jean-François Brethé, Hervé Pelvillain To cite this version: El Houssein Chouaib Harik,

More information

Distant Mission UAV capability with on-path charging to Increase Endurance, On-board Obstacle Avoidance and Route Re-planning facility

Distant Mission UAV capability with on-path charging to Increase Endurance, On-board Obstacle Avoidance and Route Re-planning facility [ InnoSpace-2017:Special Edition ] Volume 4.Issue 1,January 2017, pp. 10-14 ISSN (O): 2349-7084 International Journal of Computer Engineering In Research Trends Available online at: www.ijcert.org Distant

More information

2017 SUNY TYESA Mini UAV Competition Friday, May 5, 2017 Monroe Community College, Rochester NY

2017 SUNY TYESA Mini UAV Competition Friday, May 5, 2017 Monroe Community College, Rochester NY V 2017 SUNY TYESA Mini UAV Competition Friday, May 5, 2017 Monroe Community College, Rochester NY Project Teams of sophomore and freshman students will design, build, and pilot a mini Unmanned Aerial Vehicle

More information

204 Part 3.3 SUMMARY INTRODUCTION

204 Part 3.3 SUMMARY INTRODUCTION 204 Part 3.3 Chapter # METHODOLOGY FOR BUILDING OF COMPLEX WORKFLOWS WITH PROSTAK PACKAGE AND ISIMBIOS Matveeva A. *, Kozlov K., Samsonova M. Department of Computational Biology, Center for Advanced Studies,

More information

Advanced Mechatronics: AR Parrot Drone Control Charging Platform

Advanced Mechatronics: AR Parrot Drone Control Charging Platform Advanced Mechatronics: AR Parrot Drone Control Charging Platform Engineering Team Members: Ashwin Raj Kumar Feng Wu Henry M. Clever Advanced Mechatronics: Project Plan Phase 1: Design testing platform

More information

Path-finding in Multi-Agent, unexplored And Dynamic Military Environment Using Genetic Algorithm

Path-finding in Multi-Agent, unexplored And Dynamic Military Environment Using Genetic Algorithm International Journal of Computer Networks and Communications Security VOL. 2, NO. 9, SEPTEMBER 2014, 285 291 Available online at: www.ijcncs.org ISSN 2308-9830 C N C S Path-finding in Multi-Agent, unexplored

More information

USING RECREATIONAL UAVS (DRONES) FOR STEM ACTIVITIES AND SCIENCE FAIR PROJECTS

USING RECREATIONAL UAVS (DRONES) FOR STEM ACTIVITIES AND SCIENCE FAIR PROJECTS USING RECREATIONAL UAVS (DRONES) FOR STEM ACTIVITIES AND SCIENCE FAIR PROJECTS Education Committee Federation of Earth Science Information Partners Presenter: Shelley Olds, UNAVCO ABOUT ESIP - THE FEDERATION

More information

Metaheuristics and Cognitive Models for Autonomous Robot Navigation

Metaheuristics and Cognitive Models for Autonomous Robot Navigation Metaheuristics and Cognitive Models for Autonomous Robot Navigation Raj Korpan Department of Computer Science The Graduate Center, CUNY Second Exam Presentation April 25, 2017 1 / 31 Autonomous robot navigation

More information

11. Precision Agriculture

11. Precision Agriculture Precision agriculture for crop production can be defined as a management system that: is information- and technology-based is site-specific uses one or more of the following sources of data for optimum

More information

IPL New Project Proposal Form 2015

IPL New Project Proposal Form 2015 Date of Submission 28/02/2015 IPL New Project Proposal Form 2015 1. Project Title: Development and applications of a multi-sensors drone for geohazards monitoring and mapping 2. Main Project Fields (1)

More information

DRONE-BASED TRAFFIC FLOW ESTIMATION AND TRACKING USING COMPUTER VISION

DRONE-BASED TRAFFIC FLOW ESTIMATION AND TRACKING USING COMPUTER VISION DRONE-BASED TRAFFIC FLOW ESTIMATION AND TRACKING USING COMPUTER VISION A. de Bruin and M.J. Booysen Department of E&E Engineering, Stellenbosch University, Private Bag X1, Matieland, 7602 Tel: 021 808-4013;

More information

IMAGE PROCESSING TECHNIQUE TO COUNT THE NUMBER OF LOGS IN A TIMBER TRUCK

IMAGE PROCESSING TECHNIQUE TO COUNT THE NUMBER OF LOGS IN A TIMBER TRUCK IMAGE PROCESSING TECHNIQUE TO COUNT THE NUMBER OF LOGS IN A TIMBER TRUCK Asif ur Rahman Shaik PhD Student Dalarna University, Borlänge,Sweden Telephone number +4623778592 aus@du.se ABSTRACT This paper

More information

Advanced Sonar Sensing for Robot Mapping and Localisation

Advanced Sonar Sensing for Robot Mapping and Localisation Advanced Sonar Sensing for Robot Mapping and Localisation by Lindsay Kleeman Associate Professor Intelligent Robotics Research Centre Monash University, Australia Research Funded by the Australian Research

More information

Evaluation of tracking algorithms to autonomously following dynamic targets at low altitude

Evaluation of tracking algorithms to autonomously following dynamic targets at low altitude to autonomously following dynamic targets at low altitude Hendrikus R. Oosterhuis 1, and Arnoud Visser 1 Intelligent Systems Laboratory Amsterdam Universiteit van Amsterdam Abstract One of the main requirements

More information

3D Model Generation using an Airborne Swarm

3D Model Generation using an Airborne Swarm 3D Model Generation using an Airborne Swarm R. A. Clark a, G. Punzo a, G. Dobie b, C. N. MacLeod b, R. Summan b, G. Pierce b, M. Macdonald a and G. Bolton c a Department of Mechanical and Aerospace Engineering,

More information

Instruction Manual ODY-1012

Instruction Manual ODY-1012 Ages 8+ Instruction Manual ODY-1012 INCLUDED CONTENTS: 1 Fuselage Cover 2 Main Frame / Cage 3 Main Blades (x 4) 4 3.7 Rechargeable Lithium Battery 5 USB Charging Cable 6 Radio Transmitter Thank you for

More information

Small UAV Noise Analysis Design of Experiment

Small UAV Noise Analysis Design of Experiment Small UAV Noise Analysis Design of Experiment Prepared by : Raya Islam Sam Kelly Duke University Humans and Autonomy Laboratory 1 Introduction This design of experiment presents a procedure to analyze

More information

Transient and Succession-of-Steady-States Pipeline Flow Models

Transient and Succession-of-Steady-States Pipeline Flow Models Transient and Succession-of-Steady-States Pipeline Flow Models Jerry L. Modisette, PhD, Consultant Jason P. Modisette, PhD, Energy Solutions International This paper is copyrighted to the Pipeline Simulation

More information

Dept. of Electrical Engineering. UAV Sensing and Control. Lang Hong, Ph.D. Wright State University

Dept. of Electrical Engineering. UAV Sensing and Control. Lang Hong, Ph.D. Wright State University Senior Design Projects: UAV Sensing and Control Lang Hong, Ph.D. Dept. of Electrical Engineering Wright State University Topic List Light-Weight/Accurate Altimeter for a Small UAV Gyro-Stabilized Servo-Driven

More information

THE SMARTEST EYES IN THE SKY

THE SMARTEST EYES IN THE SKY THE SMARTEST EYES IN THE SKY INTRODUCTION Nightingale Security provides Robotic Aerial Security TM for corporations. Our comprehensive service consists of drones, base stations and powerful mission control

More information

Algorithms and Methods for Influencing a Flock

Algorithms and Methods for Influencing a Flock Fly with Me: Algorithms and Methods for Influencing a Flock The University of Texas at Austin katie@cs.utexas.edu June 22, 2017 1 Bird Strikes in Aviation $3 billion per year (PreciseFlight) 2 Common Bird

More information

Just a T.A.D. (Traffic Analysis Drone)

Just a T.A.D. (Traffic Analysis Drone) Just a T.A.D. (Traffic Analysis Drone) Senior Design Project 2017: Preliminary Design Review 1 Meet the Team Cyril Caparanga (CSE) Alex Dunyak (CSE) Christopher Barbeau (CSE) Matthew Shin (CSE) 2 Problem

More information

Realistic Models for Characterizing the Performance of Unmanned Aerial Vehicles

Realistic Models for Characterizing the Performance of Unmanned Aerial Vehicles Realistic Models for Characterizing the Performance of Unmanned Aerial Vehicles Ken Goss, Riccardo Musmeci, Simone Silvestri National Science Foundation NSF Funded Missouri Transect Science for Peace and

More information

Will the organizers provide a 3D mock-up of the arena? Draft indicative 2D layout of the Arena is given in Appendix 1. This will be finalized soon.

Will the organizers provide a 3D mock-up of the arena? Draft indicative 2D layout of the Arena is given in Appendix 1. This will be finalized soon. GENERAL QUESTIONS When will the organizers publish the details of the communications network to be used during the competition? Details of the communication network can be find at http://www.mbzirc.com/challenge

More information

FIGHTER QUADCOPTER CONTROL USING COMPUTER SYSTEM

FIGHTER QUADCOPTER CONTROL USING COMPUTER SYSTEM FIGHTER QUADCOPTER CONTROL USING COMPUTER SYSTEM Prof.S.V.Phulari, Kumar Kawale, Supriya Kolhe, Gouri Katale, Dhanashri Hemane, P.D.E.A s College of Engineering, Manjari(Bk), Pune sv_phulari@yahoo.com,

More information

On Implementing a Low Cost Quadcopter Drone with Smartphone Control

On Implementing a Low Cost Quadcopter Drone with Smartphone Control On Implementing a Low Cost Quadcopter Drone with Smartphone Control Mohammad Masoud, Yousef Jaradat, Mohammad Farhoud, Ammar Almdallaleh Computer and Communication Engineering Department Al-Zaytoonah University

More information

Use of UAVs for ecosystem monitoring. Genova, July 20 th 2016

Use of UAVs for ecosystem monitoring. Genova, July 20 th 2016 Use of UAVs for ecosystem monitoring Genova, July 20 th 2016 Roberto Colella colella@ba.issia.cnr.it Unmanned Aerial Vehicle An Unmanned Aerial Vehicle (UAV) is an aircraft without a human pilot onboard.

More information

Application of Advanced Multi-Core Processor Technologies to Oceanographic Research

Application of Advanced Multi-Core Processor Technologies to Oceanographic Research DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Application of Advanced Multi-Core Processor Technologies to Oceanographic Research Mark R. Abbott 104 CEOAS Administration

More information

Cooperative Control of Heterogeneous Robotic Systems

Cooperative Control of Heterogeneous Robotic Systems Cooperative Control of Heterogeneous Robotic Systems N. Mišković, S. Bogdan, I. Petrović and Z. Vukić Department of Control And Computer Engineering Faculty of Electrical Engineering and Computing University

More information