
Published: 2021-09-12 13:35:10

Category: Computer Science


The recent developments in autonomous aerial and autonomous ground robotic configurations have drawn many researchers to use them as a team for a variety of applications [1]. Some example applications include area exploration, area surveillance [2], and formation control [3], amongst which combined ground vehicle and Micro Aerial Vehicle (MAV) systems have been utilized effectively while exploiting the capabilities of each type of robotic vehicle. The major drawback is that the available MAVs have limited payload and therefore limited computational and sensory capability to support resource-demanding localization schemes, which are essential for GPS-denied navigation [4], [5].
Simultaneous Localization and Mapping (SLAM) methods enable GPS-denied navigation of robotic platforms. Laser scan matching based SLAM approaches [4] and vision-based parallel tracking and mapping approaches [5], [6] are some of the dominant tools employed on MAVs. However, in a team setting this requirement for SLAM can be relaxed by allowing the more powerful agents (e.g. powerful ground robots) to perform map-based localization, while the less capable agents (e.g. MAVs) perform relative localization using the Inter Robot Relative Measurement (IRRM) capability within the network. IRRM-based relative localization allows robots to know their relative formation irrespective of the availability of GPS or a previously known map.
IRRM determines the range and bearing values of a robot with respect to a given robot or agent. An illustration of this is given in Fig. 1, where relative range and bearing IRRMs are acquired in a robot network for localization purposes. Out of the main available IRRM solutions for spatial relative location measurement, vision-based approaches exhibit the best bearing measurement accuracies. However, they are limited in their depth perception and data correspondence capabilities [10]–[12]. On the other hand, acoustic-based systems exhibit the best range measurement accuracies along with better data correspondence capability [13]–[15]. However, acoustic solutions are not preferred for bearing estimation, and their update rates are limited. Therefore, this work proposes a solution based on both vision and acoustic sensors. An acoustic sensor is used to measure the range, and an infrared (IR) vision sensor is integrated to measure the bearing of IR active markers on the robots. This yields higher accuracies while enabling time-domain multiple-access measurement correspondence. Additionally, the proposed design is scalable to various indoor robots and is able to coexist with a robot's onboard acoustic-based proximity sensors.
The main contributions of this work are as follows:
(1) a novel inter-robot relative measurement sensor capable of accurate spatial relative sensing;
(2) a low-cost, low-power and low-payload module that is attachable to many robot platforms, including MAVs;
(3) scalable measurement protocols with state estimator designs to handle input unavailability of the attaching platforms.
In general, relative measurements among platforms are established using transmitter and receiver pairs or arrays, which measure signal parameters related to Time of Arrival (TOA), Received Signal Strength (RSS), or frequency [16]. Depending on the type of signal employed, the available relative measurement methods which are applicable to indoor mobile robots can be classified into four different groups:
(a) RF transmitters and receivers,
(b) image sensors and target features,
(c) IR emitters and receivers, and
(d) ultrasonic transmitters and receivers.
The main reported works utilizing RF transmitters and receivers perform TOA-based range measurement [16], [18] or RSS-based range measurement [19], [20] for localization. These methods are capable of using modulated signals and separate channels for correspondence purposes [18], which allows easy scalability for multi-robot applications. The reported RF-based methods measure only range information. As a result, the methods require multiple and distinct measurements for lateration-based localization.
Image sensors are used to measure the pixel locations of target features in a recorded image. Motion capture systems can be used only in a limited space as fixed indoor positioning systems for robot localization. Depth perception on mobile robot platforms is commonly performed using stereo vision [11], [12] or monocular vision-based measurements of known structures [3], [22], [23]. IR active markers with mono-vision tracking achieve a 14 cm maximum error at a 1 m range [3], [22]. The depth perception accuracy of vision-based approaches generally degrades with increasing range. The methods shown in [11] and [23] demand a significant amount of computational processing for feature identification and correspondence operations.
Ultrasonic relative measurement methods achieve sub-centimetre range measurement accuracies but only 1-2° bearing measurement accuracies. The reported measurement errors for ground robotic systems are 6 cm in range and 17° in bearing [30]. Accuracies of 2 cm in range and 22° in azimuth bearing are reported for 3D robotic systems [31]. Due to the poor performance in bearing measurement, the range-based localization method presented in [31] requires lateration techniques for accurate localization.
IR emitters and receivers employ modulated IR signals for RSS-based range and bearing measurement. The 2D design presented in [32] achieves accuracies of 35 cm for range and 15° for bearing over a 4 m range. The extension to 3D is reported in [17], where 14 cm range and 3° bearing accuracies are achieved at a 6 m range. This increased 3D detection field was realized by using spherical array designs and cascaded filtering [17]. The method facilitates fast refresh rates and robust measurement correspondence over acoustic and vision-based approaches. The main drawbacks of the approach are its high power consumption, the construction difficulty of spherical arrays, and poor accuracies compared to vision and ultrasonic systems.
Table I summarizes the main IRRM approaches reported in the literature for localization of ground and aerial systems. Accordingly, the proposed approach in this paper introduces significant improvements in combined range and bearing measurement accuracies. The signal filter designs and signal modulation in the proposed approach effectively address the common 40 kHz ultrasonic disturbance sources evident in robot networks. This was a necessary improvement over the reported ultrasonic-only IRRM methods [13], [30], [31] for practical implementation. The correspondence problem faced by vision-only approaches [11], [23] was addressed effectively in this design via synchronized illumination of IR visual targets. Compared to the state-of-the-art system in [17], the proposed approach focuses on a multi-platform attachable, low-power and low-payload design with higher accuracy. It should be noted that the higher accuracies come with lower update rates when compared to [17]. However, the sensor networking protocols are designed so that multiple robots can simultaneously localize at a modest frequency of 10 Hz, which is sufficient for relative localization purposes.
The traditional design had limited intelligence and was mainly operated by a human operator. Its drawbacks include:
(a) the angle of the direction must be calculated manually,
(b) slow response, and
(c) more time consumed.
The proposed system provides:
(a) automatic assignment of each robot to a particular direction and place,
(b) unmanned surveillance,
(c) use in military services,
(d) use mainly for industrial security, and
(e) fast response.
A) Measurement principle:
The ultrasonic range measurement module measures the TOA and the Angle of Arrival (AOA) of an ultrasonic signal emitted from a transmitting node. An RF module is integrated for clock synchronization between the two nodes. Fig. 2 illustrates the proposed combined ultrasonic and vision-based relative measurement methods. By knowing the signal transmission time from the transmitting node, the receiver can measure the TOA and the corresponding propagation distance of the signal. The difference in TOA among an array of receivers provides an estimate of the direction of arrival. A separate vision-based bearing measurement module measures the azimuth and the elevation of an IR active marker located at the transmitting node.
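The TOA range principle described above reduces to a single multiplication once the clocks are synchronized. The sketch below illustrates this with an assumed speed of sound (~343 m/s at room temperature) and illustrative timestamps, not values from the paper.

```python
# Sketch of the TOA range principle; speed of sound and timestamps
# are illustrative values, not the paper's measured data.
SPEED_OF_SOUND_MM_PER_US = 0.343  # ~343 m/s at 20 °C, expressed in mm/µs

def toa_range_mm(t_transmit_us: float, t_arrival_us: float) -> float:
    """Range from a clock-synchronized time-of-arrival measurement."""
    return (t_arrival_us - t_transmit_us) * SPEED_OF_SOUND_MM_PER_US

# A signal transmitted at t = 0 and received 2915 µs later
# traveled roughly one metre.
print(toa_range_mm(0.0, 2915.0))  # ≈ 999.8 mm
```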
B) Sensor node design:
The range measurement sensor design uses an array of directional ultrasonic transmitters (Prowave 250ST160) and an array of receivers (Prowave 250SR160). The transmitter bursts an ultrasonic tone of 25 kHz with 20 Vpp amplitude. The received signal undergoes an amplification, bandpass filtering, and envelope edge detection process to generate a digital pulse corresponding to the received 25 kHz ultrasonic tone. Fig. 3 illustrates a functional block diagram of the different signal processing operations performed in a transmitting and a receiving node. An RF transceiver pair ensures a clock synchronization of ±5 µs, which contributes only about 2 mm to the range measurement error. A micro-controller measures the time between the RF synchronization pulse and the received processed ultrasonic pulse for TOA measurement. It was necessary to introduce bandpass filtering stages to filter out the strong 40 kHz signals emitting from other sensor devices attached to the operating robots.
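The ~2 mm figure quoted for the synchronization error can be checked directly: a timing uncertainty of ±5 µs scaled by the speed of sound gives the worst-case range error. A quick arithmetic sketch (speed of sound assumed, as above):

```python
# Quick check of the claim that ±5 µs clock synchronization error
# contributes only about 2 mm of range error (sync value from the text).
sync_error_us = 5.0     # RF clock synchronization uncertainty, µs
speed_of_sound = 0.343  # mm/µs, approximate value at room temperature

range_error_mm = sync_error_us * speed_of_sound
print(range_error_mm)  # ≈ 1.7 mm, i.e. "about 2 mm" as stated
```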
The vision sensor used in the study is a PixArt computer vision IR target tracking sensor, which is commonly used for bearing estimation purposes on MAVs [3], [22]. The embedded processor of the sensor has the ability to perform image analysis and target detection tasks in order to provide the pixel positions of perceived IR sources. The image sensor has a limitation of recognizing only four targets. The target limitation is effectively overcome by using synchronized illumination of the IR markers on the robots, which in turn solves the measurement correspondence problem. An omnidirectional IR source was employed using a circular array of pulsed IR LEDs as the tracked target. A panning motor assembly performs IR Search and Track (IRST) tasks to extend the field of view of the sensor. The IR target tracking sensor has low computational overhead as compared to a camera, because it does not require additional feature detection tasks for bearing measurement.
The developed sensors possess a dedicated Zigbee network for measurement communication. The development test bed consists of a centralized system running the Robot Operating System (ROS), which opens up client programs for communicating with each robot and each sensor node server through the available Wifi or Zigbee network. This allows measurement calibration, analysis, and filtering operations to be performed at a centralized location for experimental purposes.
C) Sensor calibration:
1) Ultrasonic range Measurement Calibration: The model selected for range measurement is given by
r = Cair tr + br + εr    (1)
where r is the range between the two nodes, Cair is the sound propagation speed in air, tr is the TOA of the ultrasonic signal, br is a measurement bias term, and εr denotes the measurement noise term. For the calibration data set, a mobile robot with a receiver node fixed at a known height was maneuvered relative to a robot with a transmitter node. The measurements tr were recorded from the receiver node along with a known set of range values r, which were derived from laser measurements. In order to identify the model parameters, a nonlinear least-squares optimization process was performed using the Matlab Optimization Toolbox. For this purpose, a cost function was defined as the sum of errors between the known range r and the measured range given by (1). The same process of optimization was used for all parameter estimation tasks presented in this paper.
Model parameters of the range measurement model (1):
Cair = 0.34 mm/µs (± 4.7e-05)
br = −384.23 mm (± 0.77)
E(εr) = −0.098736 mm
σr = 9.6978 mm
Optimization data size: 18000
Outliers: 0.70 %
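The calibration fit can be sketched on synthetic data. Model (1) is linear in (Cair, br), so an ordinary least-squares solve recovers both parameters; the paper used MATLAB's nonlinear solver, which reduces to the same estimate for this model. The data below is simulated around the reported parameter values, not the paper's actual calibration set.

```python
import numpy as np

# Synthetic data standing in for the calibration set; "true" parameters
# chosen at the reported values (Cair ≈ 0.34 mm/µs, br ≈ -384.23 mm).
rng = np.random.default_rng(0)
t_meas = rng.uniform(1000, 15000, size=500)       # TOA samples in µs
r_true = 0.34 * t_meas - 384.23                   # noiseless model (1)
r_meas = r_true + rng.normal(0.0, 9.7, size=500)  # add sensor noise ~ σr

# Least-squares solve of r ≈ Cair*t + br for the two parameters.
A = np.column_stack([t_meas, np.ones_like(t_meas)])
(c_air, b_r), *_ = np.linalg.lstsq(A, r_meas, rcond=None)
print(c_air, b_r)  # close to 0.34 and -384.23
```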
2) Ultrasonic Angle of Arrival Measurement Calibration: The AOA estimation is performed using the TOA measurements of the different receivers of the array. Due to the use of an array with four receivers, the receiver with the minimum TOA provides a coarse estimate of the signal's direction of arrival with ±45° accuracy. This estimate is then refined using the differences in TOA between the receivers.
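The coarse step of the AOA estimate can be sketched as below. The square array geometry (four receivers at 90° intervals) follows the text; the receiver angles and TOA values are illustrative assumptions, and the TDOA refinement stage is omitted.

```python
# Coarse AOA from a four-receiver array: the receiver that hears the
# signal first points toward the source to within ±45°.
# Receiver headings are an assumed geometry for illustration.
RECEIVER_ANGLES_DEG = [0, 90, 180, 270]

def coarse_aoa_deg(toas_us):
    """Return the heading of the receiver with the minimum TOA."""
    first = min(range(len(toas_us)), key=lambda i: toas_us[i])
    return RECEIVER_ANGLES_DEG[first]

# A source near 10° reaches the 0° receiver first.
print(coarse_aoa_deg([2900.0, 2910.0, 2950.0, 2940.0]))  # -> 0
```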
3) Infrared bearing measurement calibration: The image sensor panning motor was set to incremental rotation during the calibration process to capture errors generated by the sensor movement. For optimization, the parameters were initialized using nominal values and measurements.
D) Measurement Configurations:
The proposed sensor node is designed with the capability to attach to both ground and aerial platforms. The nodes can be configured either to be a transmitter or a receiver dynamically using a network coordinator. A full sensor module performs the ultrasonic processing, IR processing, motor control, communication, and computation tasks in one measurement cycle. In order to accurately initialize each cycle, a network coordinator transmits RF timing synchronization pulses along with the information describing the role each node should assume. Upon receiving this information, the receiving sensor node executes the scheduled tasks corresponding to the role it is assigned for that particular cycle.
This study discusses two measurement configurations of the sensor network. The simplest is a static measurement configuration, with a single transmitting node and multiple receiving nodes performing simultaneous range and bearing measurement of the transmitter. This approach is termed the Star measurement configuration (Fig. 11a). The time taken to complete one set of measurements in this configuration is denoted by T (= 100 ms). Methods that allow several nodes to transmit simultaneously have limited applicability for the proposed design due to the limited bandwidth of the receivers used. Therefore, scaling beyond one transmitting node in the proposed design is only realizable through Time Division Multiple Access (TDMA) methods, where different time slots are used by each transmitting node in the system.
The Mesh measurement configuration (Fig. 11b) uses TDMA to cycle the transmitting role throughout the network while limiting signal transmission to only one node during one cycle. The protocol takes nT total time to complete a full set of measurements in a network of n nodes.
Therefore, a Star configuration is necessary for fastest update speeds, while a full Mesh network provides the maximum information for filtering purposes.
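The Mesh configuration's TDMA schedule can be sketched as follows: in each of the n time slots exactly one node transmits while every other node receives, so a full measurement round takes nT. The scheduling function is a hypothetical illustration, not the paper's protocol implementation.

```python
# Sketch of the Mesh configuration's TDMA round: one transmitter per
# slot, all other nodes receiving; a full round takes n*T.
T_MS = 100  # duration of one measurement cycle, as given in the text

def mesh_schedule(n_nodes):
    """Yield (slot, transmitter, receivers) for one full TDMA round."""
    for slot in range(n_nodes):
        receivers = [i for i in range(n_nodes) if i != slot]
        yield slot, slot, receivers

for slot, tx, rx in mesh_schedule(4):
    print(f"slot {slot} ({slot * T_MS}-{(slot + 1) * T_MS} ms): "
          f"node {tx} transmits, nodes {rx} receive")
```

For n = 4 nodes the round completes in 4T = 400 ms, matching the nT cost stated above.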
E) Relative Localisation:
The sensor node performing the estimation process is identified as the "Local node", while the sensor node being estimated is identified as the "Target node". Relative localization filters attempt to find the relative pose between a pair of sensor nodes by using the relative measurements between them. This project uses a PIC controller, ZigBee, an ultrasonic sensor, an LCD, and a robot mechanism.
The embedded system market is one of the highest growth areas, as these systems are used in every market segment: consumer electronics, office automation, industrial automation, biomedical engineering, wireless communication, data communication, telecommunications, transportation, the military, and so on.
The power supply section provides +5V for the components to work. An LM7805 IC is used to provide a constant +5V supply.
The AC voltage, typically 220V, is connected to a transformer, which steps that AC voltage down to the level of the desired DC output. A diode rectifier then provides a full-wave rectified voltage that is initially filtered by a simple capacitor filter to produce a DC voltage. This resulting DC voltage usually has some ripple or AC voltage variation.
A regulator circuit removes the ripple and also maintains the same DC output even if the input DC voltage varies or the load connected to the output changes. This voltage regulation is usually obtained using one of the popular voltage regulator IC units.
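The size of the ripple the regulator must remove can be estimated with the standard textbook approximation for a full-wave rectified capacitor filter, V_ripple ≈ I / (2·f·C). The component values below are assumed for illustration and do not come from the paper.

```python
# Textbook ripple approximation for a full-wave rectified capacitor
# filter: V_ripple ≈ I / (2*f*C). All component values are assumed.
load_current_a = 0.1    # 100 mA load, illustrative
mains_freq_hz = 50      # full-wave rectification doubles this to 100 Hz
filter_cap_f = 1000e-6  # 1000 µF smoothing capacitor

v_ripple = load_current_a / (2 * mains_freq_hz * filter_cap_f)
print(v_ripple)  # 1.0 V peak-to-peak, removed by the regulator stage
```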
2.3.1 Transformer:
Transformers convert AC electricity from one voltage to another with little loss of power. Transformers work only with AC and this is one of the reasons why mains electricity is AC.
Step-up transformers increase voltage, step-down transformers reduce voltage. Most power supplies use a step-down transformer to reduce the dangerously high mains voltage (230V in India) to a safer low voltage.
2.3.2 Rectifier:
There are several ways of connecting diodes to make a rectifier to convert AC to DC. The bridge rectifier is the most important, and it produces full-wave varying DC. A full-wave rectifier can also be made from just two diodes if a centre-tap transformer is used, but this method is rarely used now that diodes are cheaper. A single diode can be used as a rectifier, but it only uses the positive (+) half of the AC wave to produce half-wave varying DC.
2.3.3 Voltage Regulators:
Voltage regulators comprise a class of widely used ICs. A fixed three-terminal voltage regulator has an unregulated dc input voltage, Vi, applied to one input terminal, a regulated dc output voltage, Vo, from a second terminal, with the third terminal connected to ground.
The series 78 regulators provide fixed positive regulated voltages from 5 to 24 volts. Similarly, the series 79 regulators provide fixed negative regulated voltages from 5 to 24 volts. Voltage regulator ICs are available with fixed (typically 5, 12 and 15V) or variable output voltages. They are also rated by the maximum current they can pass. Negative voltage regulators are available, mainly for use in dual supplies. Most regulators include some automatic protection from excessive current (‘overload protection’) and overheating (‘thermal protection’).
A Microcontroller (or MCU) is a computer-on-a-chip used to control electronic devices. It is a type of microprocessor emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor (the kind used in a PC). A typical microcontroller contains all the memory and interfaces needed for a simple application, whereas a general-purpose microprocessor requires additional chips to provide these functions.
Microcontrollers are inside many kinds of electronic equipment (see embedded system). They account for the vast majority of all processor chips sold. Over 50% are "simple" controllers, and another 20%
