DSpace Collection: http://hdl.handle.net/2381/386
Harvested: 2016-12-10

Title: The Effect of Thermal Tempering Processes on the Sharpness and Injury Potential of Pint Glasses
URI: http://hdl.handle.net/2381/38826
Date: 2016-12-05
Authors: Earp, Richard Wayne
Abstract: Glass drinkware is widely used in the UK. However, glasses are sometimes used as impulsive weapons in incidents related to alcohol consumption, particularly imperial pint glasses (~568 ml). In order to reduce the potential for injury with such glasses, glass manufacturers adapted thermal tempering processes to produce tempered (also referred to as toughened) pint glasses. Tempered glass is known for its dense fracture properties and is considered a safer alternative to non-tempered (annealed) glass. Tempered pint glasses are now widely used throughout the UK. However, there is no standard regulating the quality of tempered drinking glasses, and this lack of standardisation has been identified as a cause of the varying effectiveness of tempered drinking glasses in reducing injury potential.
This thesis aimed to examine the injury potential of pint glasses and to provide a foundation for a future standard for tempered drinking glasses. The work included: examination of the fracture properties of annealed and tempered drinking glasses; replication and analysis of physical attacks with pint glasses; and assessment of the sharpness of various glass fragments as an indicator of injury potential.
The base region of tempered pint glasses was found to fracture extensively, limiting certain methods of glass attack. Fragments from the near-rim region were found to vary significantly in size between glasses due to lower wall thickness and residual stress. Replications of glassing attacks indicated that high forces are involved in such attacks, although the damage severity is lessened with tempered glasses. Sharpness assessments revealed little significant change in fragment sharpness due to tempering. This suggests that changes in injury potential are more likely due to practical considerations, such as reduced fragment size, than to a change in inherent sharpness properties.

Title: Selective Recording and Stimulation of Neurons in the Mouse Hippocampus and Cortex: Two-Photon Imaging, Uncaging, and Behaviour
URI: http://hdl.handle.net/2381/38817
Date: 2016-12-05
Authors: Campi, Julieta Ernestina
Abstract: Two-photon imaging is becoming one of the most widely used techniques in neuroscience. Combined with selective neuronal stimulation (or inhibition) techniques, it offers a wide variety of experiments aimed at better understanding the brain at different scales. However, the optimal conditions for applying each of these techniques are not usually the same, and although there are ways of overcoming this limitation, these solutions require expensive resources.
This PhD thesis aimed to optimise the combination of two-photon imaging and two-photon uncaging by finding an excitation wavelength suitable for both processes; the mismatch between their optimal wavelengths is the main limitation when the two methods are used together.
This method was designed to be suitable for both in vitro and in vivo experiments. First, in vitro experiments were performed in order to maximise the quality of brain slices. Secondly, the physicochemical characteristics of the protein used for calcium imaging, GCaMP6s, were studied with the objective of finding a range of excitation wavelengths suitable for imaging but, at the same time, closer to the optimal uncaging wavelength. Two-photon uncaging of glutamate was achieved using light at 850 nm, a wavelength that also permitted monitoring the neuronal response to the stimulation. It is expected that this technique will soon be applied in vivo with the objective of developing an animal model for concept representation in the hippocampus.
This project consists of training mice in a virtual-reality set-up that allows them to be tested in a two-alternative forced-choice paradigm. If neurons are found that fire selectively to one of these objects, independently of low-level features, the uncaging technique will be useful for manipulating the normal activity of those neurons and studying how this manipulation affects the animal's behaviour.

Title: HF-MIMO Antenna Array Design and Optimization
URI: http://hdl.handle.net/2381/38167
Date: 2016-10-10
Authors: Liu, Jiatong
Abstract: MIMO, as a relatively new antenna array communication technology, has been widely applied in modern communications, especially within the UHF/VHF bands. After several experimental campaigns, E.M. Warrington and S.D. Gunashekar showed that MIMO techniques could also be applicable within the HF band. Further experiments showed that traditional widely spaced homogeneous antenna arrays could be replaced by co-located heterogeneous antenna arrays without significant reductions in data transmission rate. In other words, radiation pattern diversity can be used as a new kind of MIMO diversity in place of spatial diversity.
In order to gain a better understanding of this phenomenon, antenna modelling using the Numerical Electromagnetics Code (NEC) has been carried out over the last three years. The study showed that the phase difference difference (PDD) between co-located antenna array elements could be the key factor in radiation pattern diversity: the correlation between array elements can be reduced by increasing the PDD between those elements. Several transmitting antenna arrays were developed according to this study and tested over a 202 km radio link between Leicester and Lancaster. The experimental campaigns showed that the newly designed antenna arrays achieve significantly greater de-correlation between array elements than traditional antenna arrays. High performance computing (HPC) was used in the antenna modelling in order to investigate the relationship between antenna array geometry and PDD, and several optimised antenna arrays with large PDD were recommended from tens of millions of different antenna geometries.
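The link between PDD and de-correlation can be made concrete with a toy calculation (an illustration under simplified assumptions, not the thesis's NEC modelling): treat two co-located elements as complex radiation patterns sampled over azimuth and compute the magnitude of their normalised complex correlation; `pdd_rad`, the peak phase spread between the elements, is a hypothetical parameter introduced for this sketch.

```python
import numpy as np

def pattern_correlation(p1, p2):
    """Magnitude of the normalised complex correlation between two
    sampled radiation patterns p1, p2 (complex field vs. azimuth)."""
    num = np.abs(np.sum(p1 * np.conj(p2)))
    den = np.sqrt(np.sum(np.abs(p1) ** 2) * np.sum(np.abs(p2) ** 2))
    return num / den

az = np.linspace(0, 2 * np.pi, 360, endpoint=False)

# Element 1: omnidirectional pattern with uniform phase.
elem1 = np.ones_like(az, dtype=complex)

# Element 2: same amplitude, but with an azimuth-dependent phase offset
# whose peak spread (a stand-in for "phase difference difference") we scale.
def elem2(pdd_rad):
    return np.exp(1j * pdd_rad * np.sin(az))

# Correlation falls as the phase spread between the elements grows.
rho_small = pattern_correlation(elem1, elem2(0.1))  # nearly identical patterns
rho_large = pattern_correlation(elem1, elem2(3.0))  # large phase spread
print(f"|rho| small PDD: {rho_small:.3f}, large PDD: {rho_large:.3f}")
```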
This research points to a new direction for HF-MIMO antenna array design. On the theoretical side, it specifies what kind of radiation pattern should be targeted in order to obtain decorrelation through pattern diversity. On the modelling side, HPC was applied to HF-MIMO antenna array design for the first time, providing a new high-performance computing platform for future modelling work.

Title: Mapping and Investigation of Atrial Electrogram Fractionation in Patients with Persistent Atrial Fibrillation
URI: http://hdl.handle.net/2381/38117
Date: 2016-10-04
Authors: Paggi de Almeida, Tiago
Abstract: Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia found in clinical practice, and it is a leading cause of stroke. It has been shown that triggers in the pulmonary veins (PVs) are important in the initiation and perpetuation of paroxysmal AF. PV isolation (PVI) by radiofrequency catheter ablation has proved effective in treating patients with paroxysmal AF. However, the identification of critical areas for successful ablation in patients with persistent AF (persAF) remains a challenge, due to an incomplete understanding of the mechanistic interaction between the relevant atrial substrate and the initiation and maintenance of AF. Complex fractionated atrial electrograms (CFAEs) are believed to represent remodelled atrial substrate and, therefore, potential targets for persAF ablation. Since their introduction in 2004, CFAEs have been accepted and incorporated by many laboratories as an additional therapy to PVI in treating patients with persAF. Inconsistent CFAE-guided ablation outcomes have, however, cast doubt on the efficacy of this approach. The majority of electrophysiological studies rely on automated CFAE detection algorithms embedded in electro-anatomical mapping (EAM) systems to identify CFAEs during persAF ablation.
Different companies have developed algorithms based on different aspects of the atrial electrogram (AEG). Differences in these algorithms could lead to discordant CFAE classifications by the available EAM systems, giving rise to potential disparities in CFAE-guided ablation. Additionally, previous studies support the existence of both fractionated AEGs not related to AF perpetuation and fractionated AEGs that represent sources responsible for AF maintenance. Those investigations relied on only a few AEG descriptors, which can be a limiting factor when describing a phenomenon as complex as AF. Discerning the different types of CFAE is crucial for AF ablation therapy. Finally, the spatio-temporal behaviour of AEGs collected during persAF remains poorly explored.
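As an illustration of the interval-based fractionation metrics of the kind embedded in EAM systems (the amplitude and interval thresholds below are arbitrary assumptions for this sketch, not any vendor's settings), a minimal detector might classify an AEG as fractionated when the mean interval between deflections falls below a cut-off:

```python
import numpy as np

def mean_deflection_interval(aeg, fs, amp_thresh):
    """Return the mean interval (ms) between deflections, where a
    deflection is a local |amplitude| peak exceeding amp_thresh."""
    x = np.abs(aeg)
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] >= amp_thresh and x[i] > x[i - 1] and x[i] >= x[i + 1]]
    if len(peaks) < 2:
        return np.inf
    return float(np.mean(np.diff(peaks))) * 1000.0 / fs

def is_cfae(aeg, fs, amp_thresh=0.5, interval_thresh_ms=120.0):
    """Classify as fractionated if deflections arrive, on average,
    faster than the interval threshold."""
    return mean_deflection_interval(aeg, fs, amp_thresh) < interval_thresh_ms

fs = 1000  # Hz
t = np.arange(0, 2.0, 1 / fs)

# Organised AEG: one sharp deflection every 600 ms.
organised = np.zeros_like(t)
organised[::600] = 1.0

# Fractionated AEG: deflections every 70 ms.
fractionated = np.zeros_like(t)
fractionated[::70] = 1.0

print(is_cfae(organised, fs), is_cfae(fractionated, fs))
```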
This study presents contributions towards minimising discordance in the automated classification of CFAEs, characterising AEGs before and after PVI, and investigating the temporal behaviour of consecutive AEGs and the consistency of CFAEs using different AEG segment lengths.

Title: Effect of physical traffic & T-junction layout on radio signal characteristics & network performance at 5.9 GHz
URI: http://hdl.handle.net/2381/38040
Date: 2016-09-09
Authors: Clayton, Crishantha Jerome
Abstract: IEEE 802.11p, which operates at 5.9 GHz, has been the widely adopted communications standard for vehicular communications and this has prompted studies at different physical locations on the network performance, pathloss, Doppler and delay spreads in the 5.9 GHz radio channel. This thesis presents novel measurements of network performance, signal strength and Doppler spread under NLOS conditions at three T-junctions with different street widths and building layouts. The study found that there was less received power and poorer network performance in intersections with single/dual lanes and fewer buildings on either side of the roads – the maximum range for reliable operation (>90%) of the network is reduced to approximately 10 m from the intersection centre. Higher signal strength in the presence of buildings is consistent with multipath propagation contributing positively towards the signal strength as shown by a site specific ray tracing model developed as part of this project. Signal strength measurements were compared with predictions from the model virtualsource11p and a median error less than 5 dB was found for measurements in urban environments and closer to the intersection centre. The median error was greater than 10 dB and increased with the distance from the intersection centre in junctions with wider roads and fewer buildings either side of the road. The relationship between a vehicle’s size and the Doppler spread it causes is another unique observation of this study and has been investigated by developing a simple model. Doppler spreads become larger as the reflecting vehicle moves closer to the transmitter and receiver and when the size of this vehicle is larger. 
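The vehicle-as-reflector observation rests on the basic Doppler relation: the shift contributed by a reflected path is bounded by the rate at which the path length can change, which for a vehicle moving at speed v is at most 2v. A back-of-the-envelope sketch of that bound at the 802.11p carrier frequency (not the thesis's model):

```python
# Maximum Doppler shift contributed by a reflecting vehicle at 5.9 GHz.
C = 3.0e8   # speed of light, m/s
FC = 5.9e9  # IEEE 802.11p carrier frequency, Hz

def max_reflector_doppler(v_mps):
    """Upper bound on the Doppler shift of a path reflected off a vehicle
    moving at v_mps: the path length can shrink at up to 2*v, giving
    f_d = 2 * v * fc / c."""
    return 2.0 * v_mps * FC / C

for v_kmh in (30, 50, 100):
    v = v_kmh / 3.6  # km/h -> m/s
    print(f"{v_kmh} km/h -> up to {max_reflector_doppler(v):.0f} Hz")
```

A 50 km/h vehicle can thus contribute shifts of roughly half a kilohertz, which is why nearby large reflectors broaden the Doppler spectrum noticeably.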
A directional antenna was used to determine the azimuth of arrival of the strongest multipath components, with the observations demonstrating the importance of including transient features in maps when ray tracing.

Title: Tribological Studies of Artificial Sports Pitches
URI: http://hdl.handle.net/2381/37988
Date: 2016-08-17
Authors: Devenport, Timothy
Abstract: The aim of this project was to investigate the wear of artificial sports pitch materials and characterise the wear mechanisms. The project was in collaboration with Notts Sports Ltd. who wished to compare the performance of existing pitches with potential new, improved artificial sports turfs.
The various materials (from different manufacturers, of differing pile yarn weight, and both worn in situ and new) were imaged by optical microscopy and scanning electron microscopy to assess the tribological behaviour of the artificial sports turfs. X-ray computed tomography (X-ray CT) and magnetic resonance imaging (MRI) were used to assess the structure of the turfs. Pin-on-disc wear testing, together with tensile and fatigue testing, was conducted on the turfs to assess the wear life and mechanisms and how the structure performs under loading.
Optical and scanning electron microscopy revealed that the wear mechanisms can be identified from the damage incurred as adhesive, abrasive, or both. Additionally, a directionality in the mechanical properties of the materials was observed. Visual and structural investigation by X-ray CT also revealed a directionality in the materials, and showed that inconsistencies in the structure negatively affect the performance of the turfs, as was seen in some tensile and fatigue experiments. MRI was explored but was not found useful for assessing the structure of the turfs.
Wear testing showed different behaviour for the different artificial sports turfs, whilst the tensile and fatigue testing showed that the orientation of the samples affects these properties: the horizontal orientation was stronger than the vertical orientation.
Tensile testing revealed the artificial sports turfs behave similarly to randomly orientated short fibre composites.
Rawsons artificial sports turfs have a more even fibre distribution in the horizontal and vertical directions than the Leigh Spinners artificial sports turfs, which has led to more uniform mechanical and tribological properties.

Title: Design techniques for “soft” single-core embedded processors with predictable behaviour and high levels of performance
URI: http://hdl.handle.net/2381/37952
Date: 2016-08-15
Authors: Rizvi, Syed Aley Imran
Abstract: This thesis is concerned with the design and implementation of single-processor embedded systems which have strict timing constraints. The focus of the work is on the factors that make real-time systems unpredictable. Among these factors, the problem of shared-resource access was the main focus of this research.
Previous research has demonstrated that time-triggered co-operative schedulers are more reliable in terms of predictability in real-time applications, but there are many applications where some level of pre-emption is inevitable. The inclusion of pre-emption in systems with shared resources introduces the possibility of deadlock and data corruption, which can, in turn, lead to critical errors or complete system failure. Various methods and protocols have been suggested, but these solutions can themselves lead to other issues (such as task "priority inversion").
The target systems are FPGA-based; customised techniques were proposed and implemented to avoid the problems of priority inversion in these systems.
This research presents an adaptation of the hardware TRAP technique and a novel hardware technique, TRACE. These techniques were used in a soft-core processor to manage shared resources, decreasing jitter and increasing the performance of these systems.

Title: Solid Mechanics of Degrading Bioresorbable Polymers
URI: http://hdl.handle.net/2381/37787
Date: 2016-06-17
Authors: Samami, Hassan
Abstract: Bioresorbable polymers have been used successfully in clinical applications for many decades. These polymers degrade within the human body and often do not need to be removed, because their degradation products enter the general metabolic pathways. However, there has been an increasing demand for better reliability and degradation control of bioresorbable polymeric devices, prompting researchers to move from trial-and-error approaches to model-based methods. Mathematical and computer-based techniques for modelling the mechanical properties are currently in their infancy or non-existent. This study aims to build a model to express the change in mechanical properties and to detect the degradation distribution within degrading bioresorbable polymers. It consists of three main parts. The first part reviews the literature on the most commonly used bioresorbable polymers and their applications, as well as the existing mathematical models for biodegradation. Experimental data from six PLLA films are also reviewed to provide insight into the changes in mechanical properties of degrading bioresorbable polymers during hydrolytic degradation; the review shows that the mechanical properties are strongly affected by changes in molecular weight and crystallisation. The second part presents a constitutive law for predicting the elastic moduli, tensile strength and Poisson's ratio of amorphous and semi-crystalline bioresorbable polymers, based on the novel idea of the formation of cavities and crystal inclusions within degrading polymers. The constitutive law is shown to fit the experimental data fairly well. The third part presents a vibration-based study showing that curvature mode shapes can successfully reveal the degradation distribution within, for instance, a simple cantilever beam or a coronary stent.
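As background to such model-based methods, a minimal sketch of the first-order chain-scission kinetics often used as a starting point in the degradation-modelling literature (the rate constant and initial molecular weight below are purely illustrative assumptions; this is not the thesis's constitutive law):

```python
import math

def mn_hydrolysis(mn0, k, t):
    """Number-average molecular weight under first-order hydrolytic
    chain scission: Mn(t) = Mn0 * exp(-k * t)."""
    return mn0 * math.exp(-k * t)

mn0 = 100e3  # g/mol, illustrative initial Mn
k = 0.05     # 1/day, illustrative scission rate constant

half_life = math.log(2) / k  # time for Mn to halve
print(f"Mn halves after {half_life:.1f} days")
print(f"Mn at 30 days: {mn_hydrolysis(mn0, k, 30):.0f} g/mol")
```

A mechanical-property model of the kind described above would then map the predicted Mn (together with crystallinity) onto modulus and strength.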
This study also presents a chapter on computer modelling of the degradation behaviour of polyester-based tissue scaffolds, using a degradation model developed at the University of Leicester.

Title: Fault Detection, Isolation and Recovery Schemes for Spaceborne Reconfigurable FPGA-Based Systems
URI: http://hdl.handle.net/2381/37521
Date: 2016-05-12
Authors: Siegle, Felix
Abstract: This research contributes to a better understanding of how reconfigurable Field Programmable Gate Array (FPGA) devices can safely be used as part of satellite payload data processing systems that are exposed to the harsh radiation environment in space. Despite a growing number of publications about low-level mitigation techniques, only a few studies are concerned with high-level Fault Detection, Isolation and Recovery (FDIR) methods, which are applied to FPGAs in a similar way as they are applied to other systems on board spacecraft.

This PhD thesis contains several original contributions to knowledge in this field. First, a novel Distributed Failure Detection method is proposed, which applies FDIR techniques to multi-FPGA systems by shifting failure detection mechanisms to a higher intercommunication network level. The proposed approach therefore scales better than other approaches to larger and more complex systems, since the data processing hardware blocks to which FDIR is applied can easily be distributed over the intercommunication network. Secondly, an innovative Availability Analysis method is proposed that allows these FDIR techniques to be compared in terms of their reliability performance; it can also be used to predict the reliability of a specific hardware block in a particular radiation environment. Finally, the proposed methods were implemented as part of a proof-of-concept system. On the one hand, this system enabled a fair comparison of different FDIR configurations in terms of power, area and performance overhead. On the other hand, the proposed methods were all successfully validated in an accelerated proton irradiation test campaign, in which parts of the system were exposed to the proton beam while the proof-of-concept application was actively running.

Title: Application of Anti-Windup (AW) techniques to the control of Wave Energy Converters (WEC)
URI: http://hdl.handle.net/2381/37510
Date: 2016-05-11
Authors: Lekka, Angeliki
Abstract: This thesis considers control system enhancement for Wave Energy Converters (WECs), of the point-absorber type, used for water desalination. The thesis makes several contributions.

Firstly, it is shown that a type of nonlinear control system previously used in the literature provides global stability guarantees for this type of WEC in the absence of input constraints.

Following this, several anti-windup techniques for a certain class of nonlinear systems with input constraints are developed: a nonlinear Internal Model Control (IMC) compensator, a linear reduced-order compensator and a linear sub-optimal performance compensator. It is shown how these anti-windup strategies are natural generalisations of those found elsewhere in the literature, and how all of these compensators can be designed such that global exponential stability of the class of systems considered is guaranteed.
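The effect of anti-windup compensation can be illustrated with a far simpler scheme than those developed in the thesis: a generic back-calculation loop around a PI controller for a linear first-order plant. All gains, the plant and the reference below are illustrative assumptions, not the WEC model.

```python
def simulate(anti_windup, steps=3000, dt=0.01):
    """PI control of dx/dt = -x + 2*u with |u| <= 1 and step reference 1.5.
    With anti_windup=True, back-calculation bleeds the integrator by the
    saturation excess (gain kt). Returns the peak output (overshoot)."""
    kp, ki, kt = 2.0, 5.0, 10.0  # PI gains and back-calculation gain
    u_max, ref = 1.0, 1.5        # tight saturation, large step reference
    x = integ = 0.0
    x_max = 0.0
    for _ in range(steps):
        e = ref - x
        u = kp * e + ki * integ
        u_sat = max(-u_max, min(u_max, u))
        # Back-calculation: feed the saturation excess back into the integrator.
        integ += dt * (e + (kt * (u_sat - u) if anti_windup else 0.0))
        x += dt * (-x + 2.0 * u_sat)  # first-order plant
        x_max = max(x_max, x)
    return x_max

peak_aw = simulate(True)
peak_no = simulate(False)
print(f"peak output with AW: {peak_aw:.2f}, without: {peak_no:.2f}")
```

Without compensation, the integrator winds up during the saturated transient and the output overshoots heavily; the back-calculation term keeps the integrator consistent with the saturated input, which is the basic mechanism the thesis's more sophisticated compensators generalise.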
Finally, the thesis describes the application of these anti-windup techniques to a nonlinear simulation model of a WEC system, where their benefits are clearly demonstrated. It is shown that these compensators improve the performance of the WEC during periods of saturation and, moreover, that the sub-optimal compensator can achieve desirable tracking without causing any damage to the desalination equipment. These results demonstrate the benefit of anti-windup for WEC control and imply potential savings in operation and maintenance costs, thereby contributing to the potential commercialisation of such devices.

Title: Karhunen-Loève Transform based Lossless Hyperspectral Image Compression for Space Applications
URI: http://hdl.handle.net/2381/36399
Date: 2016-01-26
Authors: Mat Noor, Nor Rizuan bin
Abstract: The research presented in this thesis is concerned with lossless hyperspectral image compression of satellite imagery using the Integer Karhunen-Loève Transform (KLT). The Integer KLT is addressed because it shows superior performance in decorrelating the spectral components of hyperspectral images compared with other algorithms, such as the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT), as well as the Lossless Multispectral and Hyperspectral Image Compression algorithm proposed by the Consultative Committee for Space Data Systems (CCSDS-MHC). The aim of the research is to develop a reliable, low-complexity implementation of the computationally intensive Integer KLT which is suitable for use on board remote sensing satellites.
The performance of the algorithm in terms of compression ratio (CR) and execution time was investigated for different levels of clustering and tiling of hyperspectral images, using airborne and spaceborne test datasets. It was established that the clustering technique could improve the CR, which is a completely new finding. To speed up the algorithm, the Integer KLT was parallelised based on the clustering concept and implemented in a multi-processor environment. The core part of the Integer KLT algorithm, the PLUS factorisation, proved to be the part most vulnerable to single-bit errors, which could cause a large loss in the encoded image. An error detection algorithm was therefore proposed and incorporated into the Integer KLT. Extensive testing showed that it is capable of detecting errors with a sufficiently low error tolerance threshold of 1e-11, at a low execution time that depends on the extent of clustering and tiling. A new fixed sampling method for the covariance matrix calculation was proposed, which avoids variation in the data volume of the encoded image and would be beneficial for remote debugging. Analysis of the overhead information generated by the Integer KLT was carried out for the first time, and a compaction method, which is crucial to clustering and tiling, was also suggested.
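The decorrelation step at the heart of the scheme can be sketched in floating point on a toy data cube (the thesis uses an integer, lifting-factorised KLT for lossless coding; this simplified version only shows why the transform removes inter-band correlation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hyperspectral" data: 4 strongly correlated spectral bands.
n_pixels, n_bands = 500, 4
base = rng.normal(size=n_pixels)
cube = np.stack([base + 0.1 * rng.normal(size=n_pixels)
                 for _ in range(n_bands)], axis=1)

# KLT: project onto the eigenvectors of the band covariance matrix.
mean = cube.mean(axis=0)
cov = np.cov(cube - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
klt = (cube - mean) @ eigvecs

# After the transform the band covariance is (numerically) diagonal,
# so each transformed band can be entropy-coded independently.
cov_after = np.cov(klt, rowvar=False)
off_diag = np.abs(cov_after - np.diag(np.diag(cov_after))).max()
print(f"max |off-diagonal| covariance after KLT: {off_diag:.2e}")
```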
The full range of the proposed enhanced Integer KLT schemes was implemented and evaluated, in terms of execution time and average power consumption, on a desktop computer and two DSP platforms, the OMAP-L137 EVM and the TMDSEVM6678L EVM. A new method for estimating the best clustering level, at which the compression ratio is maximised for each tiling level, was also proposed. The estimation method achieved 87.1% accuracy in determining the best clustering level over a test set of 62 different hyperspectral images. The best average compression ratios recorded for Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion (spaceborne) images were 3.31 and 2.39, respectively. The fully optimised KLT system, achieving a maximum CR, could compress an AVIRIS image in 3.6 to 8.5 seconds, depending on the tiling level, and a Hyperion image in less than 1 second on a desktop computer. On the multi-core DSP, an AVIRIS image could be compressed in 18.7 seconds to 1.3 minutes, depending on the tiling level, and a Hyperion image in around 3.4 seconds. On the low-power DSP platform OMAP-L137, the compression of an AVIRIS image takes 5.4 minutes and of a Hyperion image 44 seconds to 2.1 minutes, depending on the tiling level.

Title: Task Oriented Fault-Tolerant Distributed Computing for Use on Board Spacecraft
URI: http://hdl.handle.net/2381/36268
Date: 2016-01-12
Authors: Fayyaz, Muhammad
Abstract: Current and future space missions demand highly reliable, High Performance Embedded Computing (HPEC). The review of the literature has shown that no single existing solution efficiently addresses both HPEC and reliability. Furthermore, there is no suitable method of assessing the performance of such a scheme.
In this thesis a novel cooperative task-oriented fault-tolerant distributed computing (FTDC) architecture is proposed, which caters for high performance and reliability in systems on board spacecraft. In a nutshell, the architecture comprises two types of node, a computing node and an input-output node, interfaced through a high-speed network with a bus topology. To detect faults in the nodes, a fault management scheme specifically designed to support the cooperative task-oriented distributed computing concept, referred to as Adaptive Middleware for Fault-Tolerance (AMFT), is proposed and employed. AMFT is implemented as a separate hardware block and operates in parallel with the processing unit within the computing node. A set of metrics is designed and mathematical models of availability and reliability are developed, which are used to evaluate the proposed distributed computing architecture and fault management scheme.
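The availability and reliability models themselves are not reproduced here, but the flavour of such metrics can be sketched with textbook formulas: steady-state node availability from mean time to failure and repair, and a k-out-of-n combination for a redundant set of nodes (all figures below are illustrative assumptions, not the thesis's AMFT-specific models):

```python
import math

def availability(mttf, mttr):
    """Steady-state availability of a repairable node."""
    return mttf / (mttf + mttr)

def k_out_of_n(n, k, a):
    """Probability that at least k of n identical, independent nodes
    are available, each with availability a."""
    return sum(math.comb(n, i) * a**i * (1 - a)**(n - i)
               for i in range(k, n + 1))

a_node = availability(mttf=10_000.0, mttr=2.0)  # hours, illustrative
a_sys = k_out_of_n(n=4, k=3, a=a_node)          # system tolerates one node down
print(f"node availability {a_node:.6f}, system (3-of-4) {a_sys:.9f}")
```

The point of such a combination is that a distributed architecture which can migrate tasks away from a failed node achieves a system availability higher than that of any individual node.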
As a new development, extending the current state of the art, the proposed fault-tolerant distributed architecture has been subjected to a rigorous assessment through hardware implementation. Implementation approaches at two levels were adopted to provide a proof of concept: a board level and a Multiprocessor System-on-Chip (MPSoC) level. Both distributed computing system implementations were evaluated for functional validity and performance.
To examine the performance of the FTDC architecture under a realistic space-related distributed computing scenario, a case-study application representing a satellite Attitude and Orbit Control System (AOCS) was developed. The AOCS application was selected because it features time-critical task execution, in which system failure and reconfiguration time must be kept minimal. Based on the case-study application, it was demonstrated that the FTDC architecture is capable of fully meeting the desired requirements by migrating tasks to functional nodes in a timely manner and keeping rollback of task states minimal, which proves the advantages of the adopted cooperative distributed approach for use on board spacecraft.

Title: Application of PEA technique to space charge measurement in cylindrical geometry HV cable systems
URI: http://hdl.handle.net/2381/36168
Date: 2016-01-05
Authors: Zheng, Hualong
Abstract: Space charge, as one of the major concerns for the reliability of polymeric High Voltage Direct Current (HVDC) cables, has drawn wide attention in both academia and industry. Accordingly, measurement techniques along with accurate data interpretation are required to study space charge behaviour in insulation materials and to provide a solid basis for simulation activities. In this work, a high-temperature space charge measurement system for mini-cables has been developed based on the Pulsed Electro-Acoustic (PEA) method. In parallel, simulation tools covering space charge accumulation, based on non-linear unipolar charge transport models, together with the acoustic signal formation, the transmission of acoustic waves and their detection, have been developed for PEA measurement on mini-cables. These provide an alternative way of interpreting the raw experimental data, rather than the traditional approach of reconstructing space charge information by signal processing and calibration. The simulation uses 2-D tools and includes the clamping unit of the PEA cell to provide, for the first time, a detailed comparison of the two commonly used shapes of the base electrode, flat and curved. Benefiting from the ability to apply isothermal experimental conditions of 20–70 °C, the transient of ‘intrinsic’ space charge accumulation due to the field- and temperature-dependent conductivity has been studied by means of a novel experimental data analysis method proposed in this work. In addition, the analysis provides a way to assess conductivity models by matching simulation results against the experimental space charge results. By applying the simulation tools, the effect of possible cable defects, namely non-concentricity and a mismatch between the insulation and semicon layers, could be assessed. Furthermore, the origin of the bulk space charge signal experimentally observed in a mini-cable was found to be consistent with a radius-dependent conductivity, which may be a consequence of incomplete degassing.

Title: Design and evaluation of flexible time-triggered task schedulers for dynamic control applications
URI: http://hdl.handle.net/2381/36166
Date: 2016-01-05
Authors: Hanif, Musharraf Ahmed
Abstract: A statically-scheduled time-triggered (TT) software architecture demonstrates very predictable patterns of temporal behaviour and is therefore widely considered to be an appropriate platform for many high-integrity and safety-critical embedded applications. However, there remains an important class of highly dynamic control systems for which TT architectures are considered a poor match and for which "event-triggered" (ET) designs are usually preferred. These applications include the control systems for internal combustion engines, brushless DC motors and synchronous AC motors. The aim of the research project presented in this thesis was to explore ways in which a static TT architecture could be adapted to better meet the requirements of such highly dynamic control systems.
The project had three main outcomes.
The first project outcome was that a novel "flexible TT" architecture was developed. This architecture differs significantly from conventional TT designs in that, during system operation, only the timing of the next system interrupt is known in advance (that is, the timing of subsequent interrupts is unknown). This allows for considerable flexibility in the task scheduling while retaining most of the features that make static TT approaches attractive.
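The scheduling idea described above can be sketched in a few lines of Python: the scheduler commits only to the time of the next interrupt and re-plans later releases at run time. The class, task names and timings below are illustrative assumptions, not the implementation developed in the thesis.

```python
import heapq

class FlexibleTTScheduler:
    """Sketch of a time-triggered scheduler that, unlike a static cyclic
    executive, only ever commits to the timing of the *next* timer
    interrupt; later releases may be re-planned at run time."""

    def __init__(self):
        self._queue = []  # heap of (release_time, seq, task, period)
        self._seq = 0     # tie-breaker so tasks are never compared
        self.now = 0

    def add_task(self, task, first_release, period=None):
        heapq.heappush(self._queue, (first_release, self._seq, task, period))
        self._seq += 1

    def next_interrupt(self):
        """Only the head of the queue is 'known in advance'."""
        return self._queue[0][0] if self._queue else None

    def tick(self):
        """Simulate one timer interrupt: advance to the next release, run
        the due task, then (for periodic tasks) re-plan its next release,
        which may change what the following interrupt will be."""
        release, _, task, period = heapq.heappop(self._queue)
        self.now = max(self.now, release)
        task(self.now)
        if period is not None:
            self.add_task(task, release + period, period)

# Two hypothetical periodic tasks with different periods.
log = []
sched = FlexibleTTScheduler()
sched.add_task(lambda t: log.append(("fast", t)), first_release=0, period=5)
sched.add_task(lambda t: log.append(("slow", t)), first_release=3, period=10)
for _ in range(5):
    sched.tick()
```

A static TT design would instead fix the whole interrupt pattern in a pre-computed table; here the table never exists, only the head of the queue does.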
The second project outcome was that two novel schedulers were designed and implemented, in order to demonstrate (by means of an "existence proof") that it was possible to construct a practical implementation of the flexible TT architecture.
The third outcome from this project was that a comprehensive evaluation of the flexible TT architecture and the associated scheduler implementations was carried out by means of two representative case studies. The case studies involved engine synchronisation and control of a brushless DC motor (BLDCM). In the engine synchronisation case study, the flexible TT architecture was shown to be a viable alternative to ET in conditions where a static TT design was unable to cope with the system demands. In the BLDCM case study, while both static TT and flexible TT were viable alternatives, the flexible TT scheduler was able to provide similar levels of performance to the static TT solution at a fraction of the resource usage.

http://hdl.handle.net/2381/34828
2015-11-19T08:59:40Z
Title: New environments for neurophysiological investigations.
Authors: Sehmi, Arvindra Singh.
Abstract: The main topics of research are in the sub-areas of neurophysiology that are concerned with measurement of the electrical activity arising from contracting muscle (EMG) and from the surface of the scalp (EEG). Investigations are restricted to the surface-recorded interference pattern EMG, and to the EEG waveform recorded in response to sensory stimulation, known as the evoked potential (EP). The EMG and EP are representative of two important classes of signal commonly encountered in engineering, namely random noise-like and deterministic non-stationary. The thesis describes work on the development of a variety of new techniques and methods of analysis for application in neurophysiology and electrodiagnosis. A general purpose signal processing computer has been built which incorporates a high level of user-machine ergonomics. Turning Points Spectral estimation of the interference pattern EMG is simulated on this computer to demonstrate its flexibility for constructing analysis and control applications. Some emphasis is placed on methods of improving the quality of acquired EMG data for use in the analysis of the dynamics of the neuromuscular system. In this respect, the author describes the design of a fully controllable muscle loading system which uses dc electromagnetic suspension technology. The above computer can be used to control this muscle load for accurate loading protocols in EMG-Force modelling experiments. Techniques involved in the design and construction of the computer lead to higher-level program and data analysis specifications which employ Artificial Intelligence (AI) computing methods. These AI methods, in conjunction with some of those techniques which were used for EMG analysis, are applied to the investigation of single-trial EPs. 
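The automatic selective single-trial averaging referred to in this abstract can be sketched as a filter-then-average step. The acceptance criterion below (peak latency within a sample window) is a hypothetical stand-in for the decision made by the fuzzy expert system; the trial data are made-up numbers.

```python
def selective_average(trials, accept):
    """Average only the single-trial recordings that pass an acceptance
    criterion, so rejected trials do not dilute the average."""
    kept = [t for t in trials if accept(t)]
    if not kept:
        return None  # nothing met the criterion
    return [sum(t[i] for t in kept) / len(kept) for i in range(len(kept[0]))]

# Hypothetical criterion: the component peak must fall in samples 2-4.
accept = lambda trial: 2 <= trial.index(max(trial)) <= 4

trials = [
    [0, 0, 1, 5, 1, 0],  # peak at sample 3: accepted
    [0, 5, 1, 0, 0, 0],  # peak at sample 1: rejected (latency outlier)
    [0, 0, 1, 3, 1, 0],  # peak at sample 3: accepted
]
average = selective_average(trials, accept)
```

Averaging only the accepted trials is what lets the underlying activity emerge despite latency variability from trial to trial.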
A suite of adaptive EP analysis procedures, which includes a prototype fuzzy expert system, facilitates the extraction of EP component latency variability estimates and also provides automatic selective single-trial averaging. The latter selective averaging facility can be used to enhance underlying activity and to examine the relationships that might exist between different components in the EP.

http://hdl.handle.net/2381/34827
2015-11-19T08:59:39Z
Title: The creep and failure of engineering ceramics under multiaxial states of stress.
Authors: Searle, Andrew Arthur.
Abstract: The effort dedicated to developing the material properties of engineering ceramics has not been accompanied by a similar effort in developing design methods that would allow engineers to make full use of these materials. In particular the high temperature creep behaviour of engineering ceramics has received little attention. In this thesis two parallel approaches, one theoretical and one practical, have been taken towards the final aim of constructing design codes for the creep of ceramic materials. In the theoretical work the principles developed for modelling creep and failure in metals were employed and adapted where necessary to provide new models that describe the behaviour of ceramics under multiaxial stresses. Important changes were made to account for differences in microstructure between these two classes of materials. In the practical work equipment was developed to provide suitable multiaxial creep test data with which to verify and further construct models. This involved the construction of a tension/torsion creep testing machine featuring a radio-frequency heating furnace, cooled grip heads, extensometry equipment, biaxial loading system and a temperature measurement and control system. The machine was capable of operating for at least 300 hours at a temperature of at least 1400 °C. Nine creep tests were conducted on reaction bonded silicon nitride specimens including two unique tests under pure torsion and combined tension/torsion. Four tests were conducted on aluminium oxide specimens including a unique test under combined tension/torsion. Tensile test results showed good agreement with previously published data for both materials confirming the equipment accuracy. Results from the multiaxial tests indicated that reaction bonded silicon nitride fails in response to the value of the effective stress. 
In addition, reasonable agreement was obtained between the test data and predictions from the new models.

http://hdl.handle.net/2381/34825
2015-11-19T08:59:38Z
Title: Automated synthesis of lumped linear three-terminal networks.
Authors: Savage, W. H.
Abstract: The classical techniques of network synthesis are restricted to designs in idealized elements with series-parallel configurations. This research is an investigation into the possibility of unrestricted synthesis employing alternative techniques which involve optimization by computer. In this method the values of the elements are modified such that an error function is reduced. If the current network is unable to satisfy the required network response then the components have to be modified. A method of coefficient matching was investigated with lumped, linear, passive three-terminal networks having a maximum of ten nodes. The research utilized a design package developed by Drs. O.P.D. Cutteridge and A.J. Krzeczkowski. This formulated the problem for solution by an RC network with a fixed number of nodes. An effective analysis routine calculated the values of the coefficients and their first derivatives, for optimization by the conjugate gradient and Gauss-Newton algorithms. The rudiments of a method for the addition and removal of a single element had been developed. Research was undertaken into three areas. Firstly, the efficiency and dependability of the optimization was improved. This involved research into the individual error functions, variation of common factors and the efficient utilization of the optimization algorithms. Secondly, modifications to the network topology were considered. The criteria to determine the need for a modification were improved and checks to ensure the continued efficiency implemented. An improved method of element addition (capable of multiple additions) was devised. Thirdly, the addition of groups of elements was investigated (i.e. node addition) and a successful method developed. With these modifications implemented, the package was able to achieve more complex realizations than had previously been obtained. 
For example, some seven-node RC realizations with fifteen elements were automatically evolved from initial structures having five nodes and eight elements, a process which sometimes required a total of twenty-five topological modifications. Several theoretically interesting networks which were evolved automatically by the package are included.

http://hdl.handle.net/2381/34826
2015-11-19T08:59:38Z
Title: A review and the development of bounding methods in continuum mechanics.
Authors: Scaife, J. A.
Abstract: Energy theorems and kindred inequalities have long been a basis for the analysis of redundant structures and the material continuum. In the first section of this thesis we trace the development of the principal results of elasticity, time-independent inelasticity and creep, from the principle of virtual work and the well-known theorems of linear elasticity to recent results which describe the deformation of general inelastic materials under time-varying loads. In certain instances where incompleteness is apparent in the theory an attempt is made to remedy this; in particular we present a new view of the upper bound shakedown theorem - an area which remains relatively unexplored in comparison with the lower bound theorem and the limit theorems. A discussion of the fundamental material requirements which permit the establishment of many of the inequalities is included. In the following section we obtain new bounding results for a class of constitutive relations using a thermodynamic formalism as the basis of the discussion. The bounds turn out to be both simple in form and insensitive to the detailed aspects of the material behaviour. Cyclic work bounds are derived in which the cyclic stress history known as the "rapid cycle solution" gives a simple physical meaning to the bounding results. Examples are given for linear viscoelastic models, the non-linear viscous model and the Bailey-Orowan recovery model. A displacement bound is derived which is expressed in terms of two plasticity solutions and the result of a simple creep test. Examples are given and the results we obtain for the Bree problem are compared with O'Donnell's solutions which are in use in current design. In the third section, new results are obtained for the behaviour of a general viscoelastic material subjected to cyclic loading. The existence and uniqueness of a stationary cyclic state of stress is proved and a lower work bound for the general non-linear material is derived. 
An upper work bound is obtained for the general linear material in terms of the rapid cycle solution and we describe a simple method for obtaining this solution without the need for a full analysis. The role of the constitutive equation in the bounding theory is investigated when the method based on a state variable description is compared with the results obtained from the use of a history-dependent constitutive relation. We go on to show how a knowledge of the response of a viscoelastic body to constant loading is sufficient to determine its general long-term cyclic strain behaviour. In the final section we bring together the existing theorems concerning small deformations of time-dependent materials and large deformations of time-independent materials. The problem posed has dual complexity as a result of the dependence of the deformation on the stress history and the dependence of the stress on the changing geometry. We obtain a general displacement bound in terms of suitably defined conjugate variables referred to the undeformed configuration. In an example which follows it is shown that the employment of such variables may in some cases reduce the difficulty of bounding non-linear deformations to a level that is comparable with the linear case.

http://hdl.handle.net/2381/34824
2015-11-19T08:59:37Z
Title: The computation of waveguide vector fields and the generation of field patterns using computer graphics.
Authors: Santos, M. L. X. dos.
Abstract: Abstract not available.

http://hdl.handle.net/2381/34823
2015-11-19T08:59:37Z
Title: Simultaneous stabilization of multivariable linear systems.
Authors: Saif, Abdul-Wahid Abdul-Aziz.
Abstract: The simultaneous stabilization of a collection of systems has received considerable attention over a number of years. The practical motivation for a solution to the simultaneous stabilization problem (SSP) stems from the stability requirements of multimode systems in practical engineering. For example, a real plant may be subjected to several modes due to the failure of sensors and nonlinear systems are often represented by a set of linear models for design purposes. To examine these problems, it is necessary to establish a simultaneous stabilization theory. This dissertation considers the problem of simultaneously stabilizing a set of linear multivariable time-invariant systems. Three methodologies are presented. The first method is based on finding new approaches to solving the strong stabilization problem (i.e. stabilization by a stable controller) which can then be used in the SSP of two plants. New sufficient conditions and algorithms are derived for the solution to this problem. The second method utilizes robust stability theory applied to a "central" plant obtained from a given set of plants. A generalized two-block L-optimization problem is formulated and solved to find the central plant. The third method utilizes the parametrization of all stabilizing controllers. Sufficient conditions for the existence of a solution are derived and in the case of two plants a formula is derived for finding a simultaneously stabilizing controller. The work advances the theory of the SSP (and the Strong Stabilization Problem) by introducing and investigating several new approaches, and deriving new sufficient conditions. The work is less successful in deriving practical algorithms for the SSP except in the second method where a reliable algorithm is given for finding a central plant on which existing robust stabilization methods can be applied. 
This method is illustrated by its application to helicopter control.

http://hdl.handle.net/2381/34822
2015-11-19T08:59:37Z
Title: The mathematical modelling of ball-joints with friction.
Authors: Sage, R. M.
Abstract: At present the effects of friction are not included in three-dimensional mechanism simulation packages because of the difficulty of determining a friction model for joints such as the spherical joint, where the frictional resistance to motion depends not only upon the coefficient of friction and the magnitude of the loading on the joint but also on the pressure distribution within the joint resulting from that loading. Thus the basis of this thesis has been the development of a mathematical model of the effects of friction in a spherical joint which could then be incorporated into a mechanism simulation program. The model developed has shown that the main factors determining the magnitudes and directions of the frictional effects produced in a spherical joint, apart from the coefficient of friction and the magnitude of the loading, are the extent of the contact area between the ball and the socket and the magnitude of the angle between the axis of rotation of the joint and the direction of the applied load. Experimental results were obtained using apparatus that enabled the frictional moment produced on the socket of a joint to be measured while allowing the angle between the axis of rotation of the ball and the direction of the applied load to be varied between measurements. These results, obtained for a range of values of the coefficient of friction, confirm that this angle is a significant factor in the model and that the model usefully determines the frictional effects produced in a spherical joint.

http://hdl.handle.net/2381/34819
2015-11-19T08:59:36Z
Title: Photon correlation velocimetry for the fluid flow through turbomachinery.
Authors: Ross, Michael McLean.
Abstract: This thesis is concerned with the application of photon correlation velocimetry to the design of products which employ rotating components in fluids. Two examples are considered, viz. the development of turbines and compressors for power generation, and the development of propulsor designs for use on underwater powered vehicles. The former required the measurement of high speed gas flows (up to Mach 1.8) both within cascades and in a model turbine. The latter entailed tests on models in both a water tunnel and a wind tunnel with flow velocities of up to 15 meters per second and 50 meters per second respectively. In each case a 50 nanosecond digital correlator was used and the optical systems were designed to operate within constraints set by this, the nature of the expected flows, the optical access available and the information sought. In all three applications, a backscatter geometry had to be used. Laser Doppler velocimetry was employed in the propulsor design. However, since the upper Doppler frequency limit of the correlator was 10 MHz, the high speeds encountered in the turbine and compressor models necessitated the use of laser transit velocimetry. Details of the systems design, the optics and data reduction software are given. Some experimental results of measurements made within cascades and rotating components are presented and their significance concerning the velocimeters used is discussed. The chief conclusions which are drawn from the work are: In many flow configurations of practical interest in gas and steam turbines, transit velocimetry with photon correlation can be used to measure mean velocity to within 1% and turbulence intensity to within 1%. However, in some regions, particularly where the turbulence intensity exceeds approximately 15%, the results are not easy to interpret.
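Laser transit velocimetry of the kind described here infers velocity from the time a particle takes to cross two focused laser spots. A toy Python illustration of the correlation step follows; the photon-count sequences, spot spacing and function name are made-up assumptions, not data from the thesis's 50 nanosecond correlator.

```python
def transit_velocity(a, b, dt, spot_separation):
    """Cross-correlate photon-count signals from two laser spots and
    take the lag of the correlation peak as the mean transit time;
    velocity is spot separation divided by that transit time."""
    n = len(a)
    best_lag, best_sum = 1, float("-inf")
    for lag in range(1, n):
        s = sum(a[i] * b[i + lag] for i in range(n - lag))
        if s > best_sum:
            best_sum, best_lag = s, lag
    return spot_separation / (best_lag * dt)

# Upstream spot signal, and the same photon bursts arriving at the
# downstream spot 3 samples later (illustrative counts).
a = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 0, 0, 0, 1]
v = transit_velocity(a, b, dt=1.0, spot_separation=6.0)
```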
Despite the low upper limit to Doppler frequency that can be managed by the 50 nanosecond correlator, its power in processing low-light-level and noisy signals enabled it to be used effectively with a Doppler velocimeter for the measurement of flows within propulsor blading both in a water tunnel and in a wind tunnel. When used with Doppler velocimetry, the inherent averaging mode of operation of the correlator permitted the measurement of mean velocity to within 1%. It also provided a measure of turbulence intensity, which was self consistent to within 2%, although the relationship between this and the standard deviation of velocity was ambiguous. Analysis of the properties of photon correlation in laser velocimetry indicated scope for future work in two directions. Firstly, photon correlation responds to uncertainties arising from particle and velocity biasing in a different way from other signal processors such as burst counters. By carrying out measurements using both types of processor it may be possible to reduce these uncertainties. Secondly, the power of photon correlation in processing low-light-level signals should permit the use of a convenient backscatter arrangement of a reference beam laser Doppler velocimeter to measure the line-of-sight velocity component.

http://hdl.handle.net/2381/34820
2015-11-19T08:59:36Z
Title: Metallurgical phase transformations in the rubbing of steels.
Authors: Rowntree, R. A.
Abstract: The formation of phase transformed material in the unlubricated rubbing of plain carbon steels using a crossed cylinders machine, in single and multiple rubs, has been investigated. Previous studies of the dry wear of steels had suggested that if the frictional flash temperatures reached values of approximately 750 °C, then a phase transformed material, akin to the martensitic structure of conventional ferrous metallurgy, appeared upon the rubbing surfaces. Exploratory experiments indicated that phase transformed material appears as a result of a single rub of the surface only under conditions so severe that the calculated flash temperatures are of the order of 1100 °C or more. The fundamental metallurgical considerations applicable to small volumes of material subject to a combination of high temperatures and pressures of short duration, which are the special characteristics of flash temperatures, have therefore been re-examined. The formation of homogeneous austenite, which is the first stage in the production of transformed material, has been considered to occur in two temperature ranges, 723 – 910 °C and above 910 °C. Only above 910 °C can the transformation be completed in a typical flash temperature duration. Diffusion of carbon in austenite has been proposed as the rate determining process. Theoretical temperatures of approximately 1250 °C are calculated for the formation of homogeneous austenite from lamellar pearlite. It has been found that the pioneering results of Welsh (1965), in particular his T2 and T3 transitions, can be given satisfactory interpretations in terms of the new metallurgical theory. An important element in these interpretations is the number of rubs to which any element of the surface is subjected. This factor has been deduced from wear theory. Further detailed experiments have confirmed the magnitude of flash temperatures required for the production of phase transformed material. The influence of the maximum temperature has been discussed.
Metallurgical and physical analysis indicates the phase transformed material to be a fine-grained martensitic structure.

http://hdl.handle.net/2381/34821
2015-11-19T08:59:36Z
Title: Temperature measurements in an argon plasma jet.
Authors: Ruddy, M. J.
Abstract: The aim of this study was to establish reliable temperature distributions within an argon plasma jet, both in the case of an unconstrained jet discharging to the atmosphere and of a jet impinging on a cooled metal surface under various input conditions. An optical technique, the Two Line Relative Intensity Method, was employed with both Ionic/Atomic and Atomic/Atomic line combinations. Results are presented in Chapter 5, are compared with published data in Chapter 6, and have been published.* Noise measurements have been made and represent an initial investigation into turbulent fluctuations within the jet. A novel technique of collimation and scanning of the plasma jet is described in Appendix 3. *"Temperature and Noise Profiles in an Argon Plasma Jet". R.W. Maxwell and M.J. Ruddy, I.E.E. Conference Publication, number 118, 1974.

http://hdl.handle.net/2381/34818
2015-11-19T08:59:35Z
Title: Time dependent transport mechanisms in freshwater lakes.
Authors: Henderson-Sellers, Brian.
Abstract: The development and implementation of a totally predictive model for the annual thermal structure of a freshwater lake or reservoir is described in detail. Water quality depends to a great degree on dissolved oxygen profiles and nutrient concentrations and is thus governed by the temperature profile in the lake. The numerical model described here discusses fully this latter problem and also indicates strongly the direction for further research into the introduction of comprehensive biochemical cycles to improve this calculation. All possible variations in external and internal parameters are included. These values can be observed for a specific lake and future behaviour predicted on the basis of them. For an unbuilt reservoir, mean values can be taken for the parameters that cannot otherwise be determined from climatological, meteorological or geophysical sources. The transport mechanisms responsible for the development of the thermal profile during the year are determined completely by those parameters. The numerical representation is based on the assumption of horizontal homogeneity so that the one-dimensional heat transfer equation can be solved using a finite difference grid and a forward time step of one day. Decreasing the time step and modifying daily mean values to allow for diurnal variation in solar elevation permits the model to be used over a shorter time scale. The surface energy budget and wind speed (which it is found must be modified for lakes of small surface area) are the main forcing functions for the vertical mixing. It is shown that the average annual temperature structure of the lake is stable over a period of many years irrespective of the initial conditions imposed on the temperature profile. 
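The one-dimensional heat transfer calculation described above can be sketched as a forward-time, centred-space finite-difference step. A constant diffusivity K and fixed end values are assumed here for illustration, whereas the thesis model derives the mixing from the surface energy budget and wind speed.

```python
def step_temperature(T, dz, dt, K):
    """One explicit finite-difference step of dT/dt = K * d2T/dz2 on a
    vertical grid; the end values stand in for the surface-flux and
    bottom boundary conditions.  Stable only while K*dt/dz**2 <= 0.5."""
    T_new = list(T)
    r = K * dt / dz ** 2
    for i in range(1, len(T) - 1):
        T_new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
    return T_new

# A warm anomaly in mid water column diffuses outwards over one step
# (grid spacing, time step and temperatures are illustrative numbers).
profile = [10.0, 10.0, 20.0, 10.0, 10.0]
after = step_temperature(profile, dz=1.0, dt=0.1, K=1.0)
```

Shortening dt, as the abstract notes, is all that is needed to resolve diurnal rather than daily forcing.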
The problems of validating this model 'climate' against the observations of a single year (termed the lake 'weather') are evident.

http://hdl.handle.net/2381/34817
2015-11-19T08:59:32Z
Title: A high performance transistorised power source for MIG welding.
Authors: Rodrigues, Alcide Conceicao Do Rosario.
Abstract: This research concerns an investigation into the application of Power Electronics to high performance power sources for precise and efficient control of the pulsed-current metal-inert gas (PCM) welding processes. The physical processes of the welding arc are reviewed and the characteristics of a number of power sources are considered prior to preparing the operational specification for the PCM power source. From a number of possibilities the high frequency switching regulator operating in the secondary side of the power transformer was selected for detailed study. The power source was based on the use of state-of-the-art power transistors operating in a switching mode to minimise losses and to give a fast response for good welding performance. The basic operating frequency was chosen to be at the very limit of the audio range. The dynamic behaviour of the transistors and associated protection networks is critical and failure to meet all the operating limits of the transistor can be costly. To assist with the thorough understanding of the circuit behaviour and to predict the transistor switching waveforms a digital computer model was developed. This gave good correlation with experimental results observed with the completed power source. Tests were carried out with the welding power source and showed that there was no discernible difference in the weld quality when compared with those produced by the more expensive series linear regulator power source. As a direct result of the study a new range of power sources meeting exacting standards have been made available to the welding industry.

http://hdl.handle.net/2381/34816
2015-11-19T08:59:31Z
Title: Wheel loadings on web panels of overhead crane box-girders.
Authors: Robertson, Adam Patrick.
Abstract: In torsion-box design of twin girder overhead cranes, the bridge rail on which the lifting unit runs is positioned eccentrically on the girder, directly above one of the web plates. This web is subjected to in-plane patch loading produced by the spread of a wheel load through the overlying rail and flange. This study concerns the load carrying capacity of plate box-girder web panels subjected to a wheel load at the midspan of the panel. Distribution of a wheel load through a rail and flange is investigated from recordings made of in-plane vertical stress distribution profiles along the upper edge of a web panel of a short-span model box-girder. The girder was loaded through various interfaces above the web by a wheel load. A simple method is proposed for relating a distributed wheel load to an equivalent uniform patch load. Methods for estimating distributed wheel loading lengths are investigated. It is shown that crane web panels are generally subjected to patch loads of short length, occupying less than one-quarter of the panel length. A computer analysis is presented to determine elastic buckling coefficients for flat rectangular plates subjected to a uniform in-plane patch load centrally disposed on one edge and supported by shear stresses on the adjacent edges. Patch loads of various lengths are considered over a range of plate aspect ratios for plates with various combinations of simply supported and clamped edges. Also considered are some non-uniformly distributed patch loads modelling approximately a distributed wheel load. For the large majority of geometries considered, it is the support condition along the loaded edge which has greatest influence on the buckling load.
Correlation with buckling loads estimated from experimental measurements on a model crane girder web panel indicated that an assumption of simply supported panel edges is over-conservative and that it is probably more representative to consider the edges attached to the flanges as clamped. Ultimate load carrying capacity is considered. A plastic mechanism analysis originally presented by Roberts and Rockey is studied and a modified form derived which reveals the transition region from collapse initiated by direct web yielding for girders with stocky webs to failure by a mechanism of out-of-plane web deformation for girders with slender webs. Certain approximations in the original analysis are shown to involve the omission of terms which can contribute significantly to the plastic work expression. Inclusion of these terms, however, whilst offering potential refinement, increases considerably the complexity of the analysis. Results are presented of a series of collapse tests conducted on short-span model box-girders subjected to a wheel load above one of the webs. The effect on the failure load of rail size, web thickness, panel aspect ratio, and longitudinal web stiffening is investigated. Snap buckling was exhibited by several of the test web panels. From the results, a simple expression is developed for predicting collapse loads of plate girders subjected to narrow patch loads. The main findings of the work are used as a basis for a series of recommendations to aid the structural designer in taking account of patch loading on slender web panels.

http://hdl.handle.net/2381/34814
2015-11-19T08:59:30Z
Title: Shape as a structural design parameter.
Authors: Porter Goff, R. F. D.
Abstract: The objective of this thesis is to present evidence to demonstrate the importance of layout in economical structural design, to review techniques by which structural shape may be handled as a design parameter and to determine the implications of varying layout in certain design situations. The thesis begins with a study of the theory of Michell structures. The difficulty of applying this theory directly to practical design problems then leads to a review of the linear approximation techniques for satisfying the Michell conditions for maximum material economy. The Michell theory however is restricted in its relevance to engineering practice. Consideration is therefore given to other optimisation techniques of mathematical programming. These methods permit quite general forms of merit criterion to be specified and allow a wide range of constraints to be imposed on the design. The results however can usually be justified only on pragmatic grounds rather than judged against an absolute determinable limit of merit. Dynamic programming is investigated as a means of optimising the layout of simple structures with a specified topology. Limitations in this application of the technique become evident but it is possible to use it to obtain results from which the value of varying layout may be deduced in the particular circumstances of discrete section design and of stability limitations.

http://hdl.handle.net/2381/34813
2015-11-19T08:59:30Z
Title: Unsteady fluid flow around certain bluff bodies.
Authors: Polpitiye, Sisira J.
Abstract: It is shown in this thesis that fluid dynamic forces on unsteadily moving bluff bodies depend on the history of motion as much as on the velocity and acceleration of motion. An empirical relationship between the motion of the body and the resulting force is obtained by analysing the effect of the history of motion on the fluid dynamic force at any instant. The fluid dynamic force, velocity and acceleration are obtained as functions of time, by oscillating test models in water while they are being towed at constant speed. The test models used are: 1. a two-dimensional circular cylinder, 2. a rectangular block with square frontal area and fineness ratio of 3:1, 3. a cruciform parachute canopy with arm ratio of 4:1, and 4. a ring-slot parachute canopy. The functions by which the history of flow affects the future forces are evaluated by using the Convolution Integral. The results show that the effects due to history of both velocity and acceleration are by no means negligible; that is, the velocity and the acceleration at a specific time prior to any instant are so dominant that the fluid dynamic force can approximately be expressed as being delayed by this period of time. This 'time delay', or time lag (as opposed to phase lag), on the part of the measured force is found to be independent of the frequency of excitation. In the light of this evidence, a prediction model is suggested for estimating unsteady fluid forces. The data required for the application of this prediction model are obtained experimentally. Chapter One of this thesis gives a brief explanation of the historical background of unsteady fluid dynamics. The effects of acceleration on the fluid dynamic force, in both ideal and real fluids, are discussed in Chapter Two. Explained in Chapter Three are the techniques used for building the force prediction model, and data acquisition. The experimental procedure is explained in Chapter Four.
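The convolution-integral formulation described above can be sketched in a few lines of discrete arithmetic. The kernel and signals below are illustrative only, not the thesis data; a kernel concentrated at a single lag reproduces the 'time delay' behaviour the abstract describes.

```python
import numpy as np

# Discrete form of the convolution integral:
#   F(t) = integral of h(tau) * v(t - tau) d(tau)  ~  sum_k h[k] v[n-k] dt
# An impulse kernel at lag tau_d makes the force track the velocity a
# fixed interval earlier, i.e. a pure time delay (not a phase lag).

dt = 0.01                        # time step, s (illustrative)
t = np.arange(0.0, 2.0, dt)
v = np.sin(2 * np.pi * 1.0 * t)  # imposed oscillatory velocity

tau_d = 0.1                      # assumed time delay, s
k_d = int(round(tau_d / dt))
h = np.zeros(50)
h[k_d] = 1.0 / dt                # impulse kernel at lag tau_d, unit gain

# Convolve and truncate to the signal length
F = np.convolve(v, h)[:len(t)] * dt

# After the initial transient, F(t) equals v(t - tau_d) exactly here
n0 = 2 * k_d
assert np.allclose(F[n0:], v[n0 - k_d:len(t) - k_d])
```

With measured kernels in place of the impulse, the same discrete sum gives the prediction model's force estimate.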
Chapter Five gives the empirical form of the prediction model, and some data that are used in association with this model.
2015-11-19T08:59:30Z

http://hdl.handle.net/2381/34815 2015-11-20T03:27:31Z 2015-11-19T08:59:30Z
Title: Hamiltonian circuits in trivalent planar graphs.
Authors: Price, W. L.
Abstract: The author has investigated the properties of Hamiltonian circuits in a class of trivalent planar graphs and he has attempted, with partial success, to establish conditions for the existence of Hamiltonian circuits in such graphs. Because the Hamiltonian circuits of a trivalent planar graph are related to the four-colourings of the graph, some aspects of the four-colour problem are discussed. The author describes a colouring algorithm which extends the early work of Kempe, together with an algorithm based on the Heawood congruences which enables the parity of the number of four-colourings to be determined without necessarily generating all of the four-colourings. It is shown that the number of Hamiltonian circuits has the same parity as the number of four-colourings and that the number of Hamiltonian circuits which pass through any edge of a trivalent planar graph is either even or zero. A proof is given that the latter number is non-zero, for every edge of the graph, whenever the family of four-colourings has either of two stated properties. The author describes two original algorithms, independent of four-colourings, which generate a family of Hamiltonian circuits in a trivalent planar graph. One algorithm embodies a transformation procedure which enables a family of Hamiltonian circuits to be generated from a given Hamiltonian circuit, while the other generates directly all Hamiltonian circuits which include a chosen edge of the graph. In a new theorem the author proves the existence of Hamiltonian circuits in any trivalent planar graph whose property is that one or more members of a family of related graphs has odd parity.
2015-11-19T08:59:30Z

http://hdl.handle.net/2381/34812 2015-11-20T03:27:26Z 2015-11-19T08:59:29Z
Title: Wear resistance of pearlitic rail steels.
Authors: Perez-Unzueta, Alberto Javier.
Abstract: Modern railway transportation has imposed severe work conditions on the track. Wear of rails has become an important and costly phenomenon. Recent developments in the manufacture of rail steels have refined the interlamellar spacing to produce harder and more wear resistant pearlitic steels. Despite better nominal properties shown by bainitic and martensitic steels, pearlitic steels have shown lower wear rates. The aim of this study is to explain the mechanisms for the wear performance by observing how the lamellar pearlitic microstructure adapts to the wear loading. Four pearlitic rail steels, with similar chemical composition but with different hardnesses and interlamellar spacings, have been examined. Wear tests have been performed under both pure sliding and rolling-sliding conditions, the latter designed to simulate track conditions. The worn surfaces and the plastically deformed subsurface regions have been examined by optical metallography and scanning electron microscopy. It was observed that the plastic deformation produced considerable fracturing and realignment of the hard cementite lamellae. The effect of these realignments on the surface was to present an increased area fraction of hard cementite lamellae to the surface. Thinner cementite lamellae, associated with low interlamellar spacings, were easier to bend before fracturing. A relationship between the bulk hardness (HV) and the reciprocal square root of the mean true interlamellar spacing has been proposed for fully pearlitic steels. Wear rates were found to be a function of the original bulk hardness, rather than the increased hardness of the plastically deformed layers. Also, wear rates were reduced as hardness was increased by reducing the interlamellar spacing. Pure sliding and rolling-sliding wear tests ranked the four steels correctly.
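The proposed hardness relation is a Hall-Petch-like law, HV = a + b/sqrt(S), with S the mean true interlamellar spacing. The spacings and coefficients below are illustrative values, not the thesis measurements; the sketch shows the fit on 1/sqrt(S).

```python
import numpy as np

# Proposed relation for fully pearlitic steels: HV = a + b / sqrt(S),
# with S the mean true interlamellar spacing. Values are illustrative.

S = np.array([0.30, 0.22, 0.16, 0.10])   # spacing, micrometres (assumed)
a, b = 150.0, 60.0                       # assumed coefficients
HV = a + b / np.sqrt(S)

# Finer spacing -> harder steel, hence (per the abstract) lower wear rate
assert np.all(np.diff(HV) > 0)

# The coefficients are recovered by linear regression on 1/sqrt(S)
x = 1.0 / np.sqrt(S)
b_fit, a_fit = np.polyfit(x, HV, 1)
```

On noise-free data the regression returns the assumed coefficients exactly; with measured hardness values it would give the least-squares estimates.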
Furthermore, qualitative comparisons between experimental wear rates and those obtained from in-track trials show the same scale of reduction in wear with increased hardness.
2015-11-19T08:59:29Z

http://hdl.handle.net/2381/34811 2015-11-20T03:27:24Z 2015-11-19T08:59:29Z
Title: Vibrations of preloaded cylindrical shells.
Authors: Palacios Gomez, Oscar F.
Abstract: A theoretical and experimental investigation of the dynamic behaviour of preloaded cylindrical shells including the effects of meridional cracking has been carried out. A Donnell-type equation is derived to study preloaded cylindrical shell vibrations. The solution is obtained using the Galerkin Method for the following initial external loads: (i) axial compression combined with torsion; (ii) bending moment; (iii) axial compression combined with bending moment; (iv) periodic axial compression; (v) periodic bending moment. A simply supported cylindrical shell was tested under axial compression, bending moment and axial compression combined with bending moment. The results are in fair agreement with the present theoretical solution. The analytical study of the vibrations of cracked shells is carried out by introducing the modifications to the strain and kinetic energy functions expressed in terms of normal co-ordinates. It is shown that cracks reduce the natural frequency and change the nodal configuration associated with the lowest natural frequency. The results of the experiments are in excellent agreement with the theoretical predictions.
2015-11-19T08:59:29Z

http://hdl.handle.net/2381/34809 2015-11-20T03:27:20Z 2015-11-19T08:59:28Z
Title: Robust pole assignment by output feedback using optimization methods.
Authors: Oh, Myungho.
Abstract: A robust output feedback pole assignment method, which seeks to achieve a robust solution in the sense that the assigned poles are as insensitive as possible to perturbations in the system parameters, is studied. In particular, this work is concerned with pole assignment in a specified region rather than assignment to exact positions, whereby the freedom to obtain a robust solution may be realized. The robust output feedback pole assignment problem is formulated as an optimization problem with a special structure in matrix form. Efficient optimization methods and numerical algorithms for solving such a problem are proposed by introducing a concept of the derivative of a matrix valued function. The homotopy method, which is known as a globally convergent method, is applied to solve the robust output feedback pole assignment problem to overcome possible difficulties with the choice of a feasible starting point. A new algorithm based on the homotopy approach for solving the pole assignment problem is proposed. Numerical examples of the robust pole assignment problem demonstrate how the homotopy algorithm globally converges to optimal solutions regardless of initial starting points with an appropriately defined homotopy mapping. The proposed algorithms are illustrated using an aircraft case study. It is seen that the controllers obtained using robust pole assignment methods yield robust flight control and maintain the closed-loop system properties closer to the nominal ones. They are shown to be more robust than those obtained by an alternative direct pole assignment method which is frequently used to develop aircraft control strategies without attempting to optimize any robustness criterion. Indeed, the robust output feedback pole assignment method proposed in this study can be applied in control system design to achieve one important design objective: robustness.
2015-11-19T08:59:28Z

http://hdl.handle.net/2381/34810 2015-11-20T03:27:22Z 2015-11-19T08:59:28Z
Title: Pitting failure of gears.
Authors: Onions, R. A.
Abstract: Failure due to pitting fatigue has been investigated under controlled laboratory conditions. The investigations used both a realistic laboratory test rig using 1/2" face width gears and the geometrically simpler simulation of gears using a disc machine. The results obtained substantiated the earlier work of Way and Dawson. The initiation and propagation mechanisms are generally considered to hold true. However, the gear tests showed that failure could occur much more readily than with discs and therefore the application of disc tests to gears must be viewed with caution. The results suggest a fundamental difference between the pitting behaviour of gears and discs. The second part of the thesis is of a more theoretical nature. A theory of surface contact was developed along the lines of that by Greenwood and Williamson using a surface model developed by Whitehouse and Archard. These results show that a distribution of asperity curvatures increases the probability of plastic deformation. The plasticity index has been redefined in terms of a convenient two-parameter definition of surface topography. The theory has been applied to results obtained from a typical ground surface of hardened steel; when the anisotropy, which is part of such surfaces, is taken into account it is shown that only a small proportion of the contacting asperities are plastically deformed. The limitation of this form of model is discussed and a second approach is put forward using digital techniques. Theory has been developed to enable the contact of surface profiles to be simulated in a computer and the interference areas so formed have been related to the real Hertzian deformed areas of two rough surfaces. The approach is equally applicable to run-in surfaces which are not represented by existing models.
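The original Greenwood-Williamson plasticity index referred to above can be evaluated directly; the surface statistics below are illustrative values of the order expected for a ground hardened-steel surface, not data from the thesis, and the regime thresholds are the usual rules of thumb.

```python
import math

# Greenwood-Williamson plasticity index: psi = (E*/H) * sqrt(sigma/beta)
# sigma: s.d. of asperity heights, beta: mean asperity tip radius,
# E*: contact (reduced) modulus, H: indentation hardness.
# psi < ~0.6 -> contact essentially elastic; psi > ~1.0 -> largely plastic.
# All numerical values below are illustrative assumptions.

E1 = E2 = 210e9          # Young's moduli of the two steel surfaces, Pa
nu = 0.3                 # Poisson's ratio
E_star = 1.0 / ((1 - nu**2) / E1 + (1 - nu**2) / E2)   # contact modulus
H = 7.5e9                # indentation hardness of hardened steel, Pa
sigma = 0.4e-6           # asperity height standard deviation, m
beta = 100e-6            # mean asperity tip radius, m

psi = (E_star / H) * math.sqrt(sigma / beta)
regime = "elastic" if psi < 0.6 else ("mixed" if psi < 1.0 else "plastic")
```

A distribution of asperity curvatures, as the abstract notes, shifts more contacts toward the plastic side than this single-radius estimate suggests.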
The implications of this work for future research are discussed; the need for a fuller understanding of partial and micro elastohydrodynamic lubrication by theory and experiment is stressed.
2015-11-19T08:59:28Z

http://hdl.handle.net/2381/34807 2015-11-20T03:27:16Z 2015-11-19T08:59:27Z
Title: Flow resistance in ploughed upland drains: Narrow channels with uniform or composite roughness.
Authors: Flintham, T. P.
Abstract: Ploughed upland drains are straight prismatic channels of low aspect ratio. The drains are either uniformly or compositely roughened. In compositely roughened drains the bed and side-walls are differentially roughened although each roughness type is homogeneous. Upland catchments, containing extensive ploughed drainage networks, are particularly prone to flash flooding and increased sediment yield. However, the basic hydraulic data necessary to route flow through the drainage network and improve the engineering design of stable drainage channels are currently unavailable. A logarithmic flow resistance equation is developed for low aspect ratio channels, where the effective Nikuradse equivalent grain size is known. Testing against field data indicates that the relationship successfully predicts the resistance to uniform flow through upland drains. The performance of eight composite roughness formulae to predict the mean velocity in differentially roughened channels is compared. The composite roughness equations involve dividing the cross-sectional flow area into a number of sub-areas. The different methods of cross-sectional area division are considered and their effect on mean velocity prediction examined. Preferences are indicated concerning composite roughness equations which predict the mean velocity in channels of simple cross-sectional shape. Empirical equations are derived to determine the mean bed and side-wall shear stresses in straight symmetrical trapezoidal and rectangular open channels, with uniform or composite roughness. The model proposed is appropriate for stable sub-critical and super-critical flows. The equations are based on data collected from laboratory channels and should be cautiously applied to larger scale channels. Using the mean shear stress model, a design procedure is proposed to improve drainage channel stability.
2015-11-19T08:59:27Z

http://hdl.handle.net/2381/34806 2015-11-20T03:27:14Z 2015-11-19T08:59:27Z
Title: Effects of small isolated roughness elements on turbulent boundary layers.
Authors: Nigim, H. H. M.
Abstract: A series of six equilibrium turbulent boundary layer flows has been established with values of H from 1.3 to 2.3, and measurements made in them of the effect on the boundary layer development of a single two-dimensional roughness element of mainly square cross-section mounted near the start of the equilibrium region. It is shown that the local increment of the momentum thickness caused by the element is well-predicted by the flat-plate correlation of Gaudet and Johnson, a correlation which is here shown to be universally valid, and that, for all flows except for the most adverse pressure gradient, a satisfactory prediction of the subsequent boundary layer development can be made with the aid of relationships proposed by Professor Bradshaw for the change in H at the roughness element. For the flow with the largest value of H the prediction method for the development fails even in the absence of the element which, in fact, has little influence on the flow. The discrepancy between calculation and experiment is much larger than can be accounted for by normal stress terms and the reasons for this discrepancy are not entirely evident. However, the essential outcome of the experiment is clear: the incremental drag of a roughness element depends on wall variables. In consequence, the effect of an element which is of small height compared with the boundary layer thickness is negligible in flows with strongly adverse pressure gradients. It is also demonstrated that the length of the separation region behind small roughness elements decreases as the pressure gradient increases adversely.
2015-11-19T08:59:27Z

http://hdl.handle.net/2381/34808 2015-11-20T03:27:18Z 2015-11-19T08:59:27Z
Title: Slip-energy recovery techniques for control of induction machines.
Authors: Nigim, K. A. M.
Abstract: This thesis describes two different techniques for efficient control of slip energy in a slip-ring induction machine. The static Kramer system merely recovers slip power and returns it to the a.c. supply. As a result only sub-synchronous motoring or super-synchronous generating is possible. In the static Scherbius system, however, the slip power can be controlled both into and out of the secondary circuit. This allows the machine to operate as a motor and generator at both sub- and super-synchronous speeds. For wide speed range operation a current source inverter was used as this can inherently provide reversal of power flow. The operating requirements for the current source inverter operating in the secondary circuit of an induction machine have been determined. These considerations show that the current source inverter control signal must be synchronised to the secondary e.m.f. of the machine. The machine can then operate in a stable manner over a very wide speed range. The conventional analysis of the current source inverter has been developed to include the effect of the secondary slip e.m.f. which is shown to have a major effect on the commutation behaviour of the inverter. The action of the commutation circuit is affected by the phase angle between the secondary current and the slip e.m.f. This angle can be controlled electronically and the effect of this has been predicted and observed. A detailed study of the Kramer system has included analysis of the d.c. link current waveform including Fourier harmonic prediction in terms of the circuit parameters and the operating slip. The operation of the Kramer and Scherbius systems has been studied for both motoring and generating modes of the induction machine and their relative merits have been compared. In particular the novel idea of using the Scherbius system for variable speed wind energy recovery has been considered and reported in a published paper. 
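The slip-energy balance underlying both schemes can be stated in two lines. This is the ideal, loss-free power split of a slip-ring machine; the numbers are illustrative, not machine data from the thesis.

```python
# Ideal power balance of a slip-ring induction machine at slip s:
#   P_mech = (1 - s) * P_airgap,   P_slip = s * P_airgap.
# A Kramer/Scherbius scheme recovers P_slip instead of dissipating it in
# rotor resistance. Losses are neglected in this sketch.

def slip_power_split(p_airgap, s):
    """Return (mechanical power, slip power) for air-gap power and slip s."""
    return (1.0 - s) * p_airgap, s * p_airgap

# Sub-synchronous motoring: slip power flows OUT of the rotor circuit
p_mech, p_slip = slip_power_split(100e3, 0.2)     # 100 kW air-gap, 20 % slip

# Super-synchronous operation (s < 0): slip power flows INTO the rotor,
# which is why the Scherbius system needs bidirectional power flow
p_mech_super, p_slip_super = slip_power_split(100e3, -0.2)
```

The sign change of the slip power at synchronous speed is what makes the current source inverter, with its inherent power-flow reversal, the natural choice for wide speed range operation.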
Finally, suggestions have been made for further work, particularly for application to wind energy recovery.
2015-11-19T08:59:27Z

http://hdl.handle.net/2381/34805 2015-11-20T03:27:10Z 2015-11-19T08:59:26Z
Title: Substructures in computer aided structural design.
Authors: Nasreldin, Hamdy Abdelaliem.
Abstract: Structural design is basically an iterative process with successive cycles of modification and re-analysis carried out until an acceptable design is obtained. Interactive graphics techniques permit the designer to rapidly view the results of his hypothesis and enable him to formulate the design changes quickly. Consequently, more design cycles can be carried out in a given time. Structural analysis is a major part of a design cycle and considerable effort has been spent in the development of computer based analysis methods. The finite element method, in particular, is now generally accepted and has a wide variety of applications. The rapid development of the finite element method has meant that increasingly large problems can be tackled in increasingly fine detail. This poses two problems. Firstly, the large size of input data makes the input process both tedious and error prone. Secondly, the interpretation of voluminous output can delay the designer's decision on the necessary modification for the next design cycle. The application of LUISA-1 to engineering problems has demonstrated the use of interactive graphics to overcome the above problems. The length of the design cycle was reduced to a level which allowed the user to try several alternatives in a few minutes while the computer response was maintained within conversational rates. However, the system was limited to small size idealizations, mainly because of large core storage requirements. This thesis investigates the use of substructuring to overcome core size limitations in an interactive graphics system, and the resulting effects on the system response and on the length of design cycle. An interactive graphics system, based on two-level substructuring, was developed as part of the investigation. The system was limited to two dimensional problems. It was implemented on a dedicated ICL 4130 computer and operated through an ELLIOT 4280 refresh terminal.
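The core of a substructuring scheme is static condensation: internal degrees of freedom of each substructure are eliminated so that only boundary degrees of freedom are assembled at the next level. A minimal sketch follows; the 4x4 stiffness matrix is an arbitrary symmetric positive-definite example, not taken from LUISA-1 or the thesis.

```python
import numpy as np

# Static condensation (Guyan reduction) of a substructure:
#   K_red = K_bb - K_bi inv(K_ii) K_ib
#   f_red = f_b  - K_bi inv(K_ii) f_i
# where 'b' are boundary (retained) and 'i' internal (condensed) d.o.f.

K = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])   # illustrative SPD stiffness
f = np.array([1., 0., 0., 2.])

b, i = [0, 1], [2, 3]                  # boundary / internal partition

Kbb, Kbi = K[np.ix_(b, b)], K[np.ix_(b, i)]
Kib, Kii = K[np.ix_(i, b)], K[np.ix_(i, i)]

K_red = Kbb - Kbi @ np.linalg.solve(Kii, Kib)
f_red = f[b] - Kbi @ np.linalg.solve(Kii, f[i])

# Condensation is exact for static analysis: the boundary displacements
# of the reduced system match those of the full system.
u_full = np.linalg.solve(K, f)
u_bound = np.linalg.solve(K_red, f_red)
assert np.allclose(u_bound, u_full[b])
```

Only the small reduced matrices need be held in core during assembly, which is why substructuring relaxes the core-size limitation the thesis addresses.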
Applying the system to industrial problems led to the following main conclusions:
a. Employing two-level substructuring coupled with data paging techniques considerably reduces core size limitations.
b. The division into substructures helps to maintain a conversational mode of man-computer communication during the data generation and results presentation phases of design.
c. Matrix analysis using a large substructuring system is not suited to a conversational mode of communication.
d. The modularity of substructuring helps to economize in both user time and computer time.
e. Substructuring requires a rather complex data management scheme.
f. The large size of program code involved is a major factor affecting the system response.
The thesis suggests future work needed to increase the acceptability of an interactive substructuring system. This includes: extension to multi-level substructuring; application to three-dimensional problems; and implementation on a local computer linked to a large time-sharing configuration.
2015-11-19T08:59:26Z

http://hdl.handle.net/2381/34802 2015-11-19T08:59:25Z 2015-11-19T08:59:25Z
Title: On the inelastic deformation of structures subjected to variable loading.
Authors: Megahed, M. M.
Abstract: The modes of behaviour of a representative two-bar assembly with unequal areas and lengths under the simultaneous action of sustained mechanical load and cyclic thermal gradient are investigated analytically. Three types of material behaviour are used: perfect plasticity, linear kinematic hardening and linear isotropic hardening. These simple models exhibit much of the behaviour of interest in design of structural components subjected to repeated thermal loads: elastic shakedown, reversed plasticity and ratcheting. The analyses provide closed form expressions for the mechanical-thermal load bounds of the various regimes of deformation. The cyclic plastic behaviour of the structure is developed and analytical results are derived for the transient and steady state values of plastic strain. The results are applicable for a wide range of geometrical, material and loading parameters. Comparisons between perfect plasticity, kinematic and isotropic hardening models provide qualitative estimates of the cyclic inelastic behaviour of actual structural components which can be simulated by means of a two-bar assembly. The results also point out those load combinations at which thermal ratcheting experiments are more likely to yield the most useful information. In the field of new constitutive relations, a single state variable theory of inelastic deformation is developed on the basis of the Bailey-Orowan concept of creep as the outcome of two competing mechanisms: strain hardening and thermal softening. The resulting theory is capable of representing primary creep, creep recovery, the re-emergence of primary creep following a sudden increase in stress, the effects of rest periods and past deformation history, and strain-rate sensitivity. The theory is not capable, however, of reproducing the features of material behaviour under reversed loading conditions. An attempt to describe the cyclic phenomena of metals on the basis of the two-state variable concept is presented.
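The linear kinematic hardening model named above can be sketched as a standard one-dimensional return-mapping step; this is an illustrative implementation with assumed material constants, not the closed-form solutions of the thesis.

```python
# Minimal 1-D return mapping for linear kinematic hardening.
# E: elastic modulus, Hk: kinematic hardening modulus, sigma_y: yield
# stress, alpha: back stress, eps_p: plastic strain. All values assumed.

def kinematic_step(eps, eps_p, alpha, E=200e3, Hk=20e3, sigma_y=250.0):
    """Advance one strain-driven step; returns (sigma, eps_p, alpha)."""
    sigma_trial = E * (eps - eps_p)
    xi = sigma_trial - alpha                 # relative (effective) stress
    f = abs(xi) - sigma_y                    # yield function
    if f <= 0.0:                             # elastic step
        return sigma_trial, eps_p, alpha
    dgamma = f / (E + Hk)                    # plastic multiplier
    n = 1.0 if xi > 0 else -1.0              # flow direction
    eps_p += dgamma * n
    alpha += Hk * dgamma * n                 # yield surface translates
    return E * (eps - eps_p), eps_p, alpha

# Drive one bar through a strain cycle: the translating yield surface
# produces the reversed plasticity behaviour the abstract describes.
eps_p, alpha = 0.0, 0.0
history = []
for eps in [0.002, 0.004, 0.0, -0.004, 0.0]:
    sigma, eps_p, alpha = kinematic_step(eps, eps_p, alpha)
    history.append(round(sigma, 3))
```

Coupling two such bars through equilibrium and compatibility, with a cyclic thermal strain, reproduces the shakedown/reversed-plasticity/ratcheting regimes analysed in the thesis.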
The material behaviour is characterized by means of the current size of the yield surface and a dimensionless parameter which represents the shape of the plastic hardening curve. The transient growth laws for these two parameters are developed from phenomenological data on annealed OFHC copper. The predictions of the model are in close agreement with the experimental cyclic hardening behaviour of copper. The model is used to obtain the inelastic response of a two-bar assembly subjected to cyclic thermal load and the results are compared to the closed form solutions of linear hardening models. Finally, a modification is suggested to the structure of the proposed model in the light of recent work on the application of the state variable concept to the theory of plasticity. It is argued that the second parameter in the model should be taken as the current coordinates of the yield surface of the material.
2015-11-19T08:59:25Z

http://hdl.handle.net/2381/34804 2015-11-20T03:27:08Z 2015-11-19T08:59:25Z
Title: Robust multivariable control of industrial production processes: A discrete-time multi-objective approach.
Authors: Murad, Ghassan Ali.
Abstract: This thesis considers a number of important practical issues in the synthesis of discrete-time robust controllers for industrial processes. The work focuses on the control of an "unknown" SISO process (the IFAC 1993 benchmark), the design of robust model-based controllers for a MIMO industrial production process (a glass tube production process), and the design of robust MIMO controllers having integrated control and diagnostic capabilities. The industrial case studies presented are realistic in the sense that their control problems do frequently arise in engineering situations. Explicit state-space formulae for Hinfinity-based one degree-of-freedom (1-DOF) and two degrees-of-freedom (2-DOF) robust controllers are derived. They provide robust stability with respect to left coprime factor perturbations and, for the 2-DOF case, a degree of robust performance in the sense of making the closed-loop system follow a desired reference model. Robust controllers for the "unknown" plant are designed using H2 and Hinfinity optimization techniques. Explicit closed-loop performance is obtained by designing the weighting function parameters using numerical optimization techniques in the form of the method of inequalities. Methods for designing Hinfinity-based controllers that can be directly implemented in the Internal Model Control (IMC) scheme are presented. Explicit state-space formulae for Hinfinity-based IMC 1-DOF and 2-DOF robust controllers which provide robust stability and robust performance with respect to left coprime factor perturbations are derived. A technique for discrete-time model reduction is presented, with two illustrative examples. The technique is used in a detailed study of the identification and control of the glass tube production process. The production process, especially for large tube measures, is ill-conditioned and contains large time delays.
The model of the process reflects the transfer of two process inputs (mandrel pressure and drawing speed) to the tube dimensions (wall thickness and diameter). The models obtained from advanced multivariable identification are used for the design of robust IMC controllers for the process. The robust performance of the controller is demonstrated and a comparison is made with the present control system. Finally, a framework for synthesizing robust controllers which have both control and actuator failure detection capabilities is presented. Simulation results for a MIMO design example are presented which demonstrate the feasibility of this integrated design approach.
2015-11-19T08:59:25Z

http://hdl.handle.net/2381/34803 2015-11-20T03:27:06Z 2015-11-19T08:59:25Z
Title: The measurement of wall shearing stress in turbulent boundary layers.
Authors: Miller, B. L. P.
Abstract: The thesis describes the design, calibration and use of a floating element skin friction meter in smooth wall boundary layers under favourable and adverse pressure gradients. The results of an experimental investigation in turbulent, fully developed duct flow are combined with those obtained by BROWN and JOUBERT (1) to give a secondary force contour map for element Reynolds numbers (dm ur/v) between 500 and 4000 and element Euler numbers between -16 and +20. It is shown that these meters can be used in favourable pressure gradient rough wall flows and that the secondary force characteristics are similar to those obtained over smooth walls. Simple physical and mathematical models for the secondary forces are developed which show good qualitative agreement with experiment. A strongly non-equilibrium boundary layer (-1.2 < delta*/tw dp/dx < 2.6) is investigated in detail and tabulated results given. A modified form of COLES' (4) method for establishing the skin friction coefficient (Cf) from the velocity profile is developed and used to show the sensitivity of log-law methods to the coefficients assumed. It is also shown that changes in duct cross-sectional area seriously affect the relationship between wall shear stress and pressure gradient in fully developed flows.
2015-11-19T08:59:25Z

http://hdl.handle.net/2381/34799 2015-11-20T03:26:59Z 2015-11-19T08:59:23Z
Title: Advances in knowledge based signal processing: A case study in EMG decomposition.
Authors: Loudon, Gareth.
Abstract: This thesis relates to the use of knowledge based signal processing techniques in the decomposition of EMG signals. The aim of the research is to fully decompose EMG signals recorded at fairly high force levels (up to twenty percent maximum voluntary contraction) automatically into their constituent motor unit potentials to provide a fast and accurate analysis routine for the clinician. This requires the classification of non-overlapping motor unit action potentials (MUAPs) and superimposed waveforms formed from overlapping MUAPs in the signal. Firstly, digital filtering algorithms are used to reduce noise in the signal. A normalisation and compression of the filtered signal is then performed to reduce the time of the analysis. Non-overlapping MUAPs are classified using a statistical pattern recognition method. The method first describes the MUAPs by a set of features and then uses diagonal factor analysis to form uncorrelated factors from these features. An adaptive clustering technique groups together MUAPs from the same MU using the uncorrelated factors. The decomposition of superimposed waveforms is divided into two sections. The first section is a procedural method that finds a reduced set of all possible combinations of MUAPs which are capable of forming each superimposed waveform. The second section is a knowledge based analysis of the selected MUAP combinations forming each superimposed waveform. An expert system has been designed to decide which combination is the most probable by studying the motor unit firing statistics and to perform uncertainty reasoning based on fuzzy set theory. The decomposition method was tested on real and simulated EMG data recorded at different levels of maximum voluntary contraction. The different EMG signals contained up to six motor units (MUs). The new decomposition program decomposed all MUAPs in the EMG signals tested into their constituent MUs with an accuracy always greater than ninety-five percent.
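The nearest-template step at the heart of MUAP classification can be sketched as follows. The waveforms here are synthetic triangles rather than recorded EMG, the clustering is a simplified nearest-template rule with a running-mean update (a stand-in for the diagonal-factor-analysis clustering the thesis uses), and the distance threshold is arbitrary.

```python
import numpy as np

# Assign each detected spike to the closest motor-unit template; the
# accepted template adapts as a running mean of its assigned spikes.

rng = np.random.default_rng(0)

def triangle(peak, width=20, n=64, at=32):
    """A crude synthetic MUAP: a triangular pulse centred at 'at'."""
    w = np.zeros(n)
    ramp = np.linspace(0.0, peak, width // 2, endpoint=False)
    w[at - width // 2: at] = ramp
    w[at: at + width // 2] = ramp[::-1]
    return w

true_units = [triangle(1.0), triangle(2.5)]     # two motor units
templates = [t.copy() for t in true_units]
counts, correct = [0, 0], 0
threshold = 2.0                                  # max accepted distance

for _ in range(50):
    mu = int(rng.integers(0, 2))                 # true source of the spike
    spike = true_units[mu] + 0.05 * rng.standard_normal(64)
    d = [np.linalg.norm(spike - t) for t in templates]
    k = int(np.argmin(d))
    if d[k] < threshold:                         # accept, count, adapt
        counts[k] += 1
        correct += (k == mu)
        templates[k] += 0.1 * (spike - templates[k])
```

Spikes rejected by the threshold would, in the full system, be passed on as candidate superimposed waveforms for the knowledge-based stage.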
The decomposition program takes about fifteen seconds to classify all non-overlapping MUAPs in an EMG signal of length one second and, on average, an extra nine seconds to classify every superimposed waveform. Hardware limitations prevented the testing of EMG signals containing more than six MUs. The results also show that the computer analysis can simulate the reasoning of a human expert when studying a complex EMG signal.
2015-11-19T08:59:23Z

http://hdl.handle.net/2381/34800 2015-11-20T03:27:01Z 2015-11-19T08:59:23Z
Title: An interactive sizing system for reinforced concrete buildings.
Authors: Main, Andrew.
Abstract: Computer programs for structural design are few and, largely, unpopular compared with those for structural analysis. In the interaction between user and program, there is a balance between the results produced and the time involved in using the program. The failure to provide a good balance (or worthwhile interaction) is a significant defect of most design programs. Further, non-graphical output predominates, but is often unsuited to conveying results. Finally, earlier design decisions generally affect the final structure more than later decisions, but design programs do not differentiate between these stages. A system has been implemented which avoids these pitfalls. It allows the user to find the initial sizes of members (beams, columns, slabs) in a reinforced concrete building. The detailed design of the computer system is discussed. The structural design methods are based on the lower bound theorem. Design algorithms which test the adequacy of members are shown with their theory. Examples of the use of the system are presented. The sizing of a set of slabs, supporting beams and columns is shown. The sizes given by the system are more quickly found and less likely to be in error than those of hand-based methods, and costs appear to be less.
2015-11-19T08:59:23Z

Stability criteria for nonlinear multivariable control systems.
McGee, Robert William.
http://hdl.handle.net/2381/34801
2015-11-20T03:27:02Z
2015-11-19T08:59:23Z
Title: Stability criteria for nonlinear multivariable control systems.
Authors: McGee, Robert William.
Abstract: The stability of some particular classes of control systems described by ordinary nonlinear differential equations is considered. As a means of introduction to the problem, systems containing a single nonlinearity in an otherwise linear, time-invariant closed loop are examined. Stability criteria based on the frequency response of the linear part of the system are established by constructing a Liapunov function of a 'quadratic plus integral of nonlinearity' form. The problem is extended to cover those classes of control systems which contain several such nonlinear functions (i.e. multivariable control systems), and frequency-domain stability criteria are established by constructing a Liapunov function akin to that described above. It is also asserted that stability criteria less restrictive than those obtained previously for these multivariable systems may be achieved by placing certain additional restrictions on the nonlinear functions. Some classes of systems containing nonlinear functions of a most general nature are considered in later chapters of this thesis. Frequency-domain stability criteria are established with the aid of quadratic forms of Liapunov functions. Again, if the complexities of these nonlinearities are reduced, it is seen that less restrictive criteria than obtained previously may be established for these classes of systems. Emphasis is laid throughout upon the development of a unified approach to the problem of stability of the classes of systems considered. The criteria, once formulated, can be applied in practice without any further reference to the Liapunov function used.
2015-11-19T08:59:23Z

Control system design for robust stability and robust performance.
Lin, Jong-Lick.
http://hdl.handle.net/2381/34797
2015-11-20T03:26:55Z
2015-11-19T08:59:22Z
Title: Control system design for robust stability and robust performance.
Authors: Lin, Jong-Lick.
Abstract: A central problem in control system design is how to design a controller to guarantee that the closed-loop system is robustly stable and that performance requirements are satisfied despite the presence of model uncertainties and exogenous disturbance signals. The analysis problem, that is the assessment of control systems with respect to robust stability and robust performance, can be adequately solved using the structured singular value μ as introduced by Doyle. The corresponding design problem (how to choose a controller K to minimize μ) is still largely unsolved, but an approximate solution can be found using Doyle's D-K iteration. In this thesis we present an alternative algorithm, called μ-K iteration, which works by flattening the structured singular value μ over frequency. As a prelude to this, a classical loop-shaping approach to robust performance is presented for SISO systems, and is also based on flattening μ. In μ-synthesis it is often the case that real uncertainties are modelled as complex perturbations, but the conservatism so introduced can be severe. On the other hand, if real uncertainties are modelled as real perturbations then D-K iteration is not relevant. It is shown that μ-K iteration still works for real perturbations. In addition, a geometric approach for computing the structured singular value for a scalar problem with respect to real and/or complex uncertainty is described. This provides insight into the relationship between real μ and complex μ. A robust performance problem is considered for a 2-input 2-output high-purity distillation column, which is an ill-conditioned plant. Analysis reveals the potentially damaging effects of ill-conditioning on robustness.
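The D-scaling idea underlying D-K iteration can be illustrated for the simplest non-trivial structure, two scalar complex uncertainty blocks, where μ(M) is bounded above by the infimum over diagonal scalings D of the largest singular value of D M D^-1, and below by the spectral radius of M. The matrix, the block structure and the crude grid search below are illustrative assumptions, not an example taken from the thesis.

```python
import numpy as np

def mu_upper_bound(M, ds):
    """D-scaled upper bound on the structured singular value for two
    scalar complex uncertainty blocks: inf_d sigma_max(D M D^-1) with
    D = diag(d, 1).  A crude grid search stands in for the optimization
    step of D-K iteration."""
    best = np.inf
    for d in ds:
        D = np.diag([d, 1.0])
        Dinv = np.diag([1.0 / d, 1.0])
        best = min(best, np.linalg.svd(D @ M @ Dinv, compute_uv=False)[0])
    return best

M = np.array([[1.0, 10.0],
              [0.1,  2.0]])
ds = np.geomspace(1e-2, 1e2, 2001)
ub = mu_upper_bound(M, ds)                 # D-scaled upper bound on mu
lb = max(abs(np.linalg.eigvals(M)))        # spectral radius: lower bound on mu
```

For this example the optimal scaling balances the off-diagonal terms (d = 0.1 makes the scaled matrix symmetric), so the upper bound meets the spectral-radius lower bound and μ is pinned down exactly.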
A design is carried out using μ-K iteration and the "optimum" μ is compared with that obtained by Doyle and by Freudenberg for the same problem.
2015-11-19T08:59:22Z

Inverter-fed induction machine dynamics.
Lockwood, Morris.
http://hdl.handle.net/2381/34798
2015-11-20T03:26:57Z
2015-11-19T08:59:22Z
Title: Inverter-fed induction machine dynamics.
Authors: Lockwood, Morris.
Abstract: The study includes the analysis and investigation of inverter-fed squirrel cage induction machine drives. The particular drive used was a 120° square wave inverter feeding a Tubular Axle Induction Motor developed for rail traction by British Railways. The system and its operating modes, including self-excited braking, are described. The conditions under which self-excited braking can be achieved are investigated both theoretically and experimentally, and upper and lower limits to the range of permissible rotor speeds are found by several analytical methods. An original analogue model of the inverter is developed which is suitable for the investigation of inverter firing algorithms. A simpler and more efficient model is developed for the investigation of commonly used inverters. A two-axis model of the induction machine is described and used to produce an analogue simulation. State-variable analysis is used to predict the steady-state waveforms and the transfer functions of the system. A simple method of predicting the frequency response of a linear or linearised system is described. Steady-state sinusoidal analysis is used to predict the limits to self-excitation. Results from the various methods of analysis are compared with each other and with results from the real system in both the transient and steady-state modes of operation. The results from the analogue model are found to give the best agreement with those from the real system in both modes. Inverter losses are found to affect the boundaries of self-excitation. The possibility of using the analogue model in the development of microprocessor control for the traction system is discussed.
2015-11-19T08:59:22Z

Laboratory investigation into wheel/rail adhesion.
Beagley, T. M.
http://hdl.handle.net/2381/34796
2015-11-20T03:26:54Z
2015-11-19T08:59:21Z
Title: Laboratory investigation into wheel/rail adhesion.
Authors: Beagley, T. M.
Abstract: Wheel/rail adhesion is affected by the contamination that is present on the railhead. This can be broadly divided into three categories: oil, water and solid debris. The frictional phenomena associated with each of these groups were examined on a variety of laboratory simulation rigs. The interactions between each group were then explored so that a comprehensive description of wheel/rail adhesion could be established. Concepts of boundary lubrication can be used to describe the low friction of surfaces contaminated by oil and/or water. However, these concepts are shown to have their limitations when solid debris is trapped in the contact area and either significantly lowers adhesion when mixed with small quantities of water under dynamic conditions, or increases the coefficient of friction by adsorbing the oil. There is insufficient oil on most track to cause low adhesion even when the rails are wet. Under some specific circumstances debris helps form thin layers that cover the wear band and are weak enough in shear to reduce adhesion and cause wheel slip. Leaves readily form such films, although the most common constituents are rust and water. Laboratory experiments have shown that low adhesion can be caused by mixtures of rust and water, and a theoretical explanation for this has been developed based on their rheological properties. It is concluded that low wheel/rail adhesion is usually caused by a viscous paste formed of solid debris and small quantities of water. It is because Britain has such a cold, damp climate that wheel/rail adhesion on BR is such a problem.
2015-11-19T08:59:21Z

Some effects of history on turbulent flow.
Lee, Brian E.
http://hdl.handle.net/2381/34795
2015-11-20T03:26:53Z
2015-11-19T08:59:21Z
Title: Some effects of history on turbulent flow.
Authors: Lee, Brian E.
Abstract: Few physical phenomena can be adequately described without reference to their histories. This maxim applies not only to the field of experimental engineering but throughout academic disciplines to geography, biology, human relations and almost every sphere of physical activity. With reference to the field of fluid dynamics, the interpretation of this statement means that few real flow situations can be described solely in terms of local values and relationships, and that the previous history of the flow must be taken into account. An example of this is the work of Cockrell, Diamond and Jones (1)*, who showed that, for a given diffuser, the performance varied even though the inlet boundary layer thickness parameters were maintained constant, and inferred that the manner in which the boundary layers had been produced has an appreciable effect on diffuser performance. (*Figures in brackets refer to the reference listing.) This thesis has three major objectives. The first is to use our knowledge of flow history to provide an explanation of the 'overshoot' phenomenon exhibited by the growth of the boundary layer thickness parameters in duct flow. This manifests itself by the manner in which a thickness parameter grows until it reaches a stationary point, whereupon it subsequently decreases. The second objective is to attempt to expand our knowledge of flow history effects by simulating a mean velocity profile at the duct entry and observing the subsequent development in both this and the turbulence parameters of the flow. This second objective has practical applications in the production of atmospheric boundary layers and other shear flows for the purpose of model testing. The third objective is to use the experimental results obtained from duct flow in the auxiliary equation of an integral method of boundary layer calculation. This should then provide a realistic allowance for the flow history effects.
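As background to the third objective, the general shape of an integral boundary-layer method can be sketched by marching the von Kármán momentum-integral equation for a zero-pressure-gradient turbulent layer with an assumed power-law skin-friction closure. The closure constant, flow conditions and forward-Euler march below are illustrative textbook-style assumptions; the thesis's own auxiliary equation, which carries the history allowance, is not reproduced here.

```python
# Illustrative integral method: march d(theta)/dx = Cf/2 for a
# zero-pressure-gradient turbulent boundary layer, closed with an
# assumed power-law correlation Cf = 0.025 * Re_theta**-0.25.
Ue, nu = 10.0, 1.5e-5    # free-stream velocity (m/s), kinematic viscosity (m^2/s)

def march_theta(x0, x1, theta0, n=20000):
    """Forward-Euler march of the momentum thickness from x0 to x1."""
    theta, dx = theta0, (x1 - x0) / n
    for _ in range(n):
        cf = 0.025 * (Ue * theta / nu) ** -0.25
        theta += 0.5 * cf * dx
    return theta

def theta_exact(x):
    """Closed-form solution of the same ODE with theta(0) = 0."""
    return ((5.0 / 4.0) * 0.0125 * (nu / Ue) ** 0.25 * x) ** 0.8

theta_num = march_theta(0.5, 2.0, theta_exact(0.5))
theta_ref = theta_exact(2.0)
```

In a history-aware method of the kind the thesis develops, the skin-friction and shape-factor closures would not be purely local like the correlation used here; that locality is exactly the failing the auxiliary equation is meant to correct.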
Since the lack of any allowance for flow history effects is very often a major failing of integral techniques, it is possible in this instance to attempt to assess the overall usefulness of such a method. In addition to these major objectives, appendices are presented on the experimental behaviour of the local relationships, often used in calculation methods, between the turbulent shear stresses and the mean velocity profile. The distributions of these mixing lengths and eddy viscosities are compared with some of the formulations available in the literature. A description of the experimental facilities is given, which includes a section on the use of hot wire anemometers for the measurement of turbulence. Some of the likely sources of error encountered here, together with their possible magnitudes, are also presented.
2015-11-19T08:59:21Z

The prediction of tool and workpiece shapes in electro-chemical machining.
Lawrence, Peter.
http://hdl.handle.net/2381/34794
2015-11-20T03:26:51Z
2015-11-19T08:59:20Z
Title: The prediction of tool and workpiece shapes in electro-chemical machining.
Authors: Lawrence, Peter.
Abstract: The primary object of the work was to develop mathematical and other methods for predicting (a) tool shapes to machine defined work shapes, and (b) work shapes machined by defined tool shapes, operating under equilibrium conditions. Existing methods were examined, the essence of which was a "Cos theta" theory. This theory, based on an analysis of machining between plane parallel electrodes normal to the direction of motion, was found to provide a useful approximate method for designing tool surfaces. However, the theory was shown to be inadequate for curved surfaces whose inclination to the direction of motion of the cathode was less than 35°, particularly when used to predict work shapes. Equations relating inter-electrode gap and time, obtained from the plane parallel electrode analysis, were shown also to represent the surface produced by a flat cathode inclined to its direction of motion. Work and tool shape prediction procedures making use of these equations are described, and a modification to the equations to allow for overpotential is included. The current and potential distribution in the inter-electrode gap was then studied as a field problem. Two boundary conditions at the workpiece surface were identified and analytical solutions were obtained for tool shapes to produce semi-cylindrical and hemispherical work shapes, including modifications to account for overpotential at the work surface. Solutions were also obtained by an electrolytic tank analogue apparatus and by two numerical methods. A finite difference method was used successfully to predict work shapes but was found to be unsuitable for designing tool shapes. A second method was developed specifically for this purpose, using a specified work shape and its boundary conditions to build up a field solution step by step. The computations for both methods were performed by digital computer.
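The "Cos theta" theory mentioned above can be sketched as a tool-design rule: under equilibrium conditions the gap measured normal to the work surface is h_e / cos(theta), where theta is the angle between the local surface normal and the cathode feed direction and h_e is the equilibrium gap for electrodes normal to the feed. The parabolic work shape, the 2-D setting and the symbol names below are illustrative assumptions for the sketch, not the thesis's procedure.

```python
import numpy as np

def tool_from_work(x, y, h_e):
    """Offset a 2-D work profile y(x) along its local normal by
    h_e / cos(theta) to obtain the tool (cathode) profile, where the
    feed direction is the +y axis (the cos-theta design rule)."""
    dydx = np.gradient(y, x)
    cos_theta = 1.0 / np.sqrt(1.0 + dydx ** 2)   # normal vs. feed axis
    gap_n = h_e / cos_theta                      # normal gap, cos-theta rule
    nx = -dydx * cos_theta                       # unit-normal components
    ny = cos_theta
    return x + gap_n * nx, y + gap_n * ny

x = np.linspace(-1.0, 1.0, 201)
work = 0.5 * x ** 2        # illustrative parabolic work shape (depth = feed axis)
h_e = 0.05                 # equilibrium gap for electrodes normal to the feed
tx, ty = tool_from_work(x, work, h_e)
```

A consequence of the rule is that although the normal gap grows on the inclined flanks, the offset measured along the feed direction is exactly h_e everywhere, which is why the rule breaks down for steeply inclined surfaces where the field is no longer one-dimensional.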
Experimental work was undertaken to provide shapes to compare with the theoretical predictions, and good agreement was obtained. An attempt was made to measure anode overpotential for various metals under actual machining conditions and readings less than 1 volt were recorded. Discussion of the merits of the various methods and their applicability to three-dimensional problems, conclusions and suggestions for future studies complete the work.
2015-11-19T08:59:20Z

Deformation and rupture of structures due to combined cyclic plasticity and creep.
Lavender, David A.
http://hdl.handle.net/2381/34793
2015-11-20T03:26:49Z
2015-11-19T08:59:20Z
Title: Deformation and rupture of structures due to combined cyclic plasticity and creep.
Authors: Lavender, David A.
Abstract: The effect of creep-fatigue conditions on structural components is not completely understood, and so the prediction of the behaviour and lifetime of such components is often unreliable and inaccurate. One of the methods proposed to improve the predictions is continuum damage mechanics, which provides a general description of material behaviour under degrading conditions. An estimate of life is usually based on the initial behaviour of a component. However, the work of previous researchers has shown that accurate predictions of the creep life of structures require that the stress redistribution due to the growth of damage is taken into account. In this thesis, this work is extended to fatigue and the effect of fatigue damage on life and deformation is studied for multibar model structures. The non-linear kinematic hardening rule is introduced as a constitutive law for cyclic plasticity that models many aspects of the cyclic behaviour of metals. Its properties are studied and it is extended to include the effects of damage on cyclic deformation. Creep-fatigue is studied by combining the models for fatigue and creep. Using published material data, the creep-fatigue behaviour of a two-bar structure is studied and the results are compared with some experimental results. A study is made of finite element methods for solving problems involving plasticity and an example problem is solved. A model for the multiaxial behaviour of damaged material is proposed and examined for simple cases. The studies show that stress redistribution has a significant effect on fatigue life and the qualitative properties of the uniaxial models are very close to experimental observations. However, a lack of suitable and consistent experimental data on material behaviour means that the lifetime predictions and the multiaxial models are of uncertain accuracy.
2015-11-19T08:59:20Z
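The stress redistribution highlighted in this abstract can be illustrated with a minimal two-bar sketch using a Kachanov-type damage growth law. All parameter values, the equal-creep-strain-rate compatibility assumption and the forward-Euler march below are illustrative; this is not the thesis's constitutive model, which uses non-linear kinematic hardening for the cyclic plasticity and published material data.

```python
# Two parallel bars strain together under a constant load P.  Each bar
# damages by an illustrative Kachanov-type law dw/dt = B*(s/(1-w))**chi;
# as the weaker bar degrades, equilibrium shifts stress onto the other
# bar, so a life estimate frozen at the initial stresses is wrong.
A1 = A2 = 1.0                     # bar cross-sections
P = 100.0                         # constant applied load
B1, B2, chi = 1e-8, 3e-8, 3.0     # damage constants (bar 2 degrades faster)

def simulate(dt=1e-2, t_max=200.0):
    w1 = w2 = 0.0
    t, history = 0.0, []
    while t < t_max and max(w1, w2) < 0.99:
        # Equal creep-strain rates (same Norton exponent) + equilibrium
        # give sigma_i proportional to (1 - w_i):
        denom = A1 * (1 - w1) + A2 * (1 - w2)
        s1 = P * (1 - w1) / denom
        s2 = P * (1 - w2) / denom
        history.append((t, s1, s2))
        w1 += dt * B1 * (s1 / (1 - w1)) ** chi
        w2 += dt * B2 * (s2 / (1 - w2)) ** chi
        t += dt
    return history

hist = simulate()
t0, s1_0, s2_0 = hist[0]
tf, s1_f, s2_f = hist[-1]
```

By the time the weaker bar approaches rupture, nearly the whole load has migrated to the stronger bar while overall equilibrium is preserved, which is the redistribution effect the thesis argues must be tracked for accurate life prediction.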