DSpace Collection: http://hdl.handle.net/2381/4137 (2017-08-11T00:15:20Z)

http://hdl.handle.net/2381/40129
2017-08-02T14:10:44Z
Title: Modelling Share Prices as a Random Walk on a Markov Chain
Authors: Samci Karadeniz, Rukiye
Abstract: In finance, a simple yet realistic means of modelling real data is very important. Several approaches are considered to model and analyse the data presented herein. We start by considering a random walk on an additive functional of a discrete-time Markov chain, perturbed by Gaussian noise, as a model for the data. Since working with a continuous-time model is more convenient for option pricing, we then consider the renowned (and open) embedding problem for Markov chains: not every discrete-time Markov chain has an underlying continuous-time Markov chain. One of the main goals of this research is to analyse whether the discrete-time model permits extension or embedding to a continuous-time model. In addition, the volatility of the share price data is estimated and analysed by the same procedure as the share price processes themselves. This part of the research is an extensive case study of the embedding problem for financial data and its volatility.
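As a rough illustration of this first model, the sketch below simulates a share-price-like path whose increments are an additive functional of a two-state discrete-time Markov chain plus Gaussian noise. The transition matrix, increment values and noise level are hypothetical, not fitted values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain: all numbers here are invented for
# illustration, not fitted to the share-price data in the thesis.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # transition matrix of the hidden chain
f = np.array([0.5, -0.5])       # additive functional: increment per state
sigma = 0.1                     # std dev of the Gaussian perturbation

def simulate(n_steps, state=0, x0=100.0):
    """Simulate X_n = X_{n-1} + f(S_n) + eps_n, with S a Markov chain."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        state = rng.choice(2, p=P[state])       # next hidden state
        x += f[state] + sigma * rng.normal()    # perturbed increment
        path.append(x)
    return np.array(path)

path = simulate(250)            # one simulated "share price" path
```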
Another approach to modelling share price data is to consider a random walk on the lamplighter group. Specifically, we model the data as a Markov chain with a hidden random walk on the lamplighter group over Z3 and on the tensor product of groups Z2 ⊗ Z2. The lamplighter group has a specific structure in which the hidden information is actually explicit: we assume that the positions of the lamplighters are known, but not the status of the lamps. This is referred to as a hidden random walk on the lamplighter group. A biased random walk is constructed to fit the data, and Monte Carlo simulations are used to find the best fit, in the sense of the smallest trace-norm difference between the tensor product of the original transition matrices and the transition matrix estimated from the (appropriately split) data.
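The trace-norm comparison can be sketched as follows; the two 2×2 transition matrices and the stand-in "empirical" matrix below are invented for illustration, with the trace (nuclear) norm computed as the sum of singular values.

```python
import numpy as np

def trace_norm(A):
    """Trace (nuclear) norm: the sum of the singular values of A."""
    return np.linalg.svd(A, compute_uv=False).sum()

# Two hypothetical 2x2 transition matrices; their Kronecker (tensor)
# product plays the role of the model transition matrix.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.6, 0.4], [0.5, 0.5]])
model = np.kron(A, B)

# Stand-in for the matrix estimated from the split data: the model mixed
# with a little uniform noise, so it stays row-stochastic.
empirical = 0.95 * model + 0.05 * np.full((4, 4), 0.25)

discrepancy = trace_norm(model - empirical)   # the quantity minimised by the fit
```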
In addition, splitting the data is a key step in both our first and second models; the tensor product structure comes from this split. This requires us to deal with missing data, for which we apply statistical techniques such as the Expectation-Maximization algorithm and the C4.5 machine learning algorithm.
In this work we also analyse quantum data and use it to compute option prices in the binomial model.

http://hdl.handle.net/2381/39955
2017-06-26T13:44:08Z
Title: Virtual Element Methods
Authors: Sutton, Oliver James
Abstract: In this thesis we study the Virtual Element Method, a recent generalisation of the standard conforming Finite Element Method offering high order approximation spaces on computational meshes consisting of general polygonal or polyhedral elements. Our particular focus is on developing the tools required to use the method as the foundation of mesh adaptive algorithms which are able to take advantage of the flexibility offered by such general meshes.
We design virtual element discretisations of certain classes of linear second order elliptic and parabolic partial differential equations, and present a detailed exposition of their implementation aspects. An outcome of this is a concise and usable 50-line MATLAB implementation of a virtual element method for solving a model elliptic problem on general polygonal meshes, the code for which is included as an appendix. Optimal order convergence rates in the H1 and L2 norms are proven for the discretisation of elliptic problems. Alongside these, we derive fully computable residual-type a posteriori estimates of the error measured in the H1 and L2 norms for the methods we develop for elliptic problems, and in the L2(0,T;H1) and L∞(0,T;L2) norms for parabolic problems. In deriving the L∞(0,T;L2) error estimate, we introduce a new technique (which translates naturally back into the setting of conventional finite element methods) to produce estimates with effectivities which become constant for long time simulations. Mesh adaptive algorithms, designed around these methods and computable error estimates, are proposed and numerically assessed in a variety of challenging stationary and time-dependent scenarios.
We further propose a virtual element discretisation and computable coarsening/refinement indicator for a system of semilinear parabolic partial differential equations which we apply to a Lotka-Volterra type model of interacting species. These components form the basis of an adaptive method which we use to reveal a variety of new pattern-forming mechanisms in the cyclic competition model.

http://hdl.handle.net/2381/39613
2017-04-03T08:09:18Z
Title: Modern Mathematical Methods for Actuarial Sciences
Authors: Kaya, Ahmet
Abstract: In ruin theory, premium income and outgoing claims play an important role. We introduce several ruin-type mathematical models and apply various mathematical methods to find the optimal premium price for insurance companies. Quantum theory is one of the significant novel approaches used to compute the finite-time non-ruin probability. More precisely, we apply the discrete-space and continuous-space quantum mechanics formalisms (see the main thesis for details) with appropriately chosen Hamiltonians.
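For intuition only, a finite-time non-ruin probability can also be estimated by plain Monte Carlo for the classical compound-Poisson risk model; this is not the quantum formalism of the thesis, and all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def finite_time_nonruin_prob(u, c, lam, claim_mean, T, n_paths=5000):
    """Monte Carlo estimate of P(no ruin by time T) for the classical
    compound-Poisson risk model R(t) = u + c*t - S(t), with Poisson(lam)
    claim arrivals and exponential claim sizes. Parameter values are
    illustrative and not taken from the thesis."""
    survived = 0
    for _ in range(n_paths):
        t, claims, ruined = 0.0, 0.0, False
        while True:
            t += rng.exponential(1.0 / lam)   # waiting time to next claim
            if t > T:
                break
            claims += rng.exponential(claim_mean)
            if u + c * t - claims < 0.0:      # reserve just after the claim
                ruined = True
                break
        if not ruined:
            survived += 1
    return survived / n_paths

p = finite_time_nonruin_prob(u=10.0, c=1.5, lam=1.0, claim_mean=1.0, T=50.0)
```

Raising the premium rate c (or the initial reserve u) pushes this probability towards one, which is the trade-off behind optimal premium pricing.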
Several particular examples are treated via the traditional basis and the quantum mechanics formalism with different eigenvector bases. The numerical results are also obtained using the path calculation method and compared with the stochastic modelling results.
In addition, we construct various models with interest rates. For these models, optimal premium prices are calculated stochastically for independent and dependent claims, with different dependence levels, using the Frank copula method.

http://hdl.handle.net/2381/39571
2017-03-27T10:20:14Z
Title: Adaptive large-scale mantle convection simulations
Authors: Cox, Samuel Peter
Abstract: The long-term motion of the Earth's mantle is of considerable interest to geologists and geodynamicists in explaining the evolution of the planet and its internal and surface history. The inaccessible nature of the mantle necessitates the use of computer simulations to further our understanding of the processes underlying the motion of tectonic plates.
Numerical methods employed to solve the equations describing this motion lead to linear systems whose size stretches the current capabilities of supercomputers to their limits. Progress towards the satisfactory simulation of this process depends upon new mathematical and computational ideas to bring the largest problems within the reach of current computer architectures.
In this thesis we present an implementation of the discontinuous Galerkin method, coupled to a more traditional finite element method, for the simulation of this system. We also present an a posteriori error estimate for the convection-diffusion equation without reaction, using an exponential fitting technique and artificial reaction to relax the restrictions upon the derivative of the convection field that are usually imposed within the existing literature. This error bound is used as the basis of an h-adaptive mesh refinement strategy. We present an implementation of the calculation of this bound alongside the simulation and the indicator, in a parallelised C++ code, suitable for use in a distributed computing setting.
Finally, we present an implementation of the discontinuous Galerkin method in the community code ASPECT, along with an adaptivity indicator based upon the proven a posteriori error bound. We furnish both implementations with numerical examples to explore the applicability of these methods in a number of circumstances, with the aim of reducing the computational cost of large mantle convection simulations.

http://hdl.handle.net/2381/39386
2017-02-27T12:07:31Z
Title: Adaptive Discontinuous Galerkin Methods for Interface Problems
Authors: Sabawi, Younis Abid
Abstract: The aim of this thesis is to derive adaptive methods for discontinuous Galerkin approximations for both elliptic and parabolic interface problems. The derivation of adaptive methods is usually based on a posteriori error estimates. To this end, we present a residual-type a posteriori error estimator for interior penalty discontinuous Galerkin (dG) methods for an elliptic interface problem involving possibly curved interfaces, with flux-balancing interface conditions, e.g., modelling mass transfer of solutes through semi-permeable membranes. The method allows for extremely general curved element shapes employed to resolve the interface geometry exactly. Upper and lower bounds of the error in the dG-energy norm with respect to the estimator are proven. The a posteriori error bounds are subsequently used to prove a basic a priori convergence result. Moreover, a contraction property for a standard adaptive algorithm utilising these a posteriori bounds, with a bulk refinement criterion, is also shown, thereby proving that the a posteriori bounds can lead to a convergent adaptive algorithm subject to some mesh restrictions. This work is also concerned with the derivation of a new L∞(L2)-norm a posteriori error bound for the fully discrete adaptive approximation of non-linear parabolic interface problems. More specifically, the time discretization uses the backward Euler Galerkin method and the space discretization uses the interior penalty discontinuous Galerkin finite element method.
The key idea in our analysis is to adapt the elliptic reconstruction technique, introduced by Makridakis and Nochetto [48], enabling us to use the a posteriori error estimators derived for elliptic interface models and to obtain optimal order bounds in both the L∞(L2) and L∞(L2) + L2(H¹) norms. The effectiveness of all the error estimators and the proposed algorithms is confirmed through a series of numerical experiments.

http://hdl.handle.net/2381/39148
2017-01-16T11:28:58Z
Title: Multilevel sparse grid kernels collocation with radial basis functions for elliptic and parabolic problems
Authors: Zhao, Yangzhang
Abstract: Radial basis functions (RBFs) are well known for their ease of implementation, as they yield mesh-free methods [31, 37, 71, 72]. In this thesis, we modify the multilevel sparse grid kernel interpolation (MuSIK) algorithm proposed in [48] for use in Kansa's collocation method (referred to as MuSIK-C) to solve elliptic and parabolic problems. The curse of dimensionality is a significant challenge in high-dimensional approximation. A full grid collocation method requires O(N^d) nodal points to construct an approximation, where N is the number of nodes in one direction and d is the dimension. However, the sparse grid collocation method in this thesis demands only O(N log^(d-1) N) nodes. Using sparse grids we save a great deal of memory while obtaining performance comparable to that of full grids.
Moreover, the combination technique [20, 54] allows the sparse grid collocation
method to be parallelised. When solving parabolic problems, we follow Myers
et al.’s suggestion in [90] to use the space-time method, considering time as
one spatial dimension. If we apply sparse grids in the spatial dimensions and
use time-stepping, we still need O(N^2 log^(d-1) N) nodes. However, if we use the space-time method, the total number of nodes is O(N log^d N).
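These node counts can be compared directly; the sketch below evaluates the full-grid count N^d against the leading-order sparse-grid estimate N log^(d-1) N (constants omitted, so the numbers are indicative only).

```python
from math import log2

def full_grid_nodes(N, d):
    """Full tensor-product grid: N nodes in each of d directions."""
    return N ** d

def sparse_grid_nodes_estimate(N, d):
    """Leading-order sparse-grid count O(N log^(d-1) N), constant omitted."""
    return int(N * log2(N) ** (d - 1))

# Compare the two counts as the dimension grows (N = 129, i.e. 2^7 + 1).
for d in (2, 3, 4):
    print(d, full_grid_nodes(129, d), sparse_grid_nodes_estimate(129, d))
```

Already in four dimensions the sparse estimate is several orders of magnitude below the full-grid count, which is the memory saving the abstract refers to.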
In this thesis, we always compare the performance of the multiquadric (MQ) basis function and the Gaussian basis function. In all experiments, we observe that the collocation method using the Gaussian with scaling shape parameters does not converge. Meanwhile, in Chapter 3, an experiment shows that in option pricing the space-time method with MQ has a convergence rate similar to that of a time-stepping method using MQ. In the numerical experiments of Chapter 4, MuSIK-C using MQ and the Gaussian always gives more rapid convergence and higher accuracy, especially in four dimensions (T × R^3), for PDEs with smooth conditions. Compared to some recently proposed mesh-based methods, MuSIK-C shows similar performance in low-dimensional situations and better approximation in high dimensions. In Chapter 5, we combine the Method of Lines (MOL) with our MuSIK-C to obtain good convergence in pricing the one-asset European option and the Margrabe option, which have non-smooth initial conditions.

http://hdl.handle.net/2381/39146
2017-01-16T11:16:31Z
Title: Dynamic Cooperative Investment
Authors: Almualim, Anwar Hassan Ali
Abstract: In this thesis we develop dynamic cooperative investment schemes in discrete and
continuous time. Instead of investing individually, several agents may invest joint
capital into a commonly agreed trading strategy, and then split the uncertain
outcome of the investment according to the pre-agreed scheme, based on their
individual risk-reward preferences. As a result of cooperation, each investor is able
to get a share, which cannot be replicated with the available market instruments,
and because of this, cooperative investment is usually strictly profitable for all
participants, when compared with an optimal individual strategy. We describe
cooperative investment strategies which are Pareto optimal, and then propose a
method to choose the most ‘fair’ Pareto optimal strategy based on equilibrium
theory. In some cases, uniqueness and stability for the equilibrium are justified.
We study a cooperative investment problem, for investors with different risk preferences,
coming from expected utility theory, mean-variance theory, mean-deviation
theory, prospect theory, etc. The developed strategies are time-consistent; that is, the group of investors has no reason to change their mind in the middle of the investment process. This is ensured either by using a dynamic programming approach or by applying the utility model based on the compound independence axiom.
For numerical experiments, we use a scenario generation algorithm and stochastic
programming model for generating appropriate scenario tree components of the
S&P 100 index. The algorithm uses historical data simulation as well as a GARCH
model.

http://hdl.handle.net/2381/39143
2017-01-16T10:46:09Z
Title: Gaussian Process and Functional Data Methods for Mortality Modelling
Authors: Wu, Ruhao
Abstract: Modelling demographic mortality trends is of great importance due to their considerable impact on welfare policy, resource allocation and government planning. In this thesis, we propose to use various statistical methods, including Gaussian processes (GP), principal curves and multilevel functional principal component analysis (MFPCA), for forecasting and clustering of human mortality data. The thesis is composed of three main topics in mortality modelling. In the first topic, we propose a new Gaussian process regression method and apply it to the modelling and forecasting of age-specific human mortality rates for a single population. The proposed method incorporates a weighted mean function and the spectral mixture covariance function, and hence provides better performance in forecasting long-term mortality rates compared with conventional GPR methods. The performance of the proposed method is also compared with the Lee-Miller model and the functional data model of Hyndman and Ullah (2007) in the context of forecasting French total mortality rates. In the second topic, we extend mortality modelling from a single population to multiple populations simultaneously, by developing a new framework for coherent modelling and forecasting of mortality rates for multiple subpopulations within one large population. We treat the mortality of subpopulations as multilevel functional data, and a weighted multilevel functional principal component approach is proposed for modelling and forecasting the mortality rates. The proposed model is applied to sex-specific data for nine developed countries, and the forecasting results suggest that, in terms of overall accuracy, the model outperforms the independent model (Hyndman and Ullah 2007) and is comparable to the Product-Ratio model (Hyndman et al. 2013), with several additional advantages.
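As a minimal sketch of the first topic's ingredients, the code below runs zero-mean GP regression on synthetic age-specific log mortality rates; it uses a plain squared-exponential kernel rather than the weighted mean and spectral mixture covariance proposed in the thesis, and the data are synthetic.

```python
import numpy as np

def rbf_kernel(a, b, length=10.0, var=1.0):
    """Squared-exponential covariance; the thesis instead uses a spectral
    mixture covariance with a weighted mean, so this is a simplification."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean of a zero-mean GP at the test ages."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(x_test, x_train) @ alpha

# Synthetic Gompertz-like log mortality rates (hypothetical, not real data)
ages = np.arange(0.0, 100.0, 5.0)
log_rates = -8.0 + 0.07 * ages

pred_at_node = gp_predict(ages, log_rates, np.array([50.0]))   # at a training age
pred_mid = gp_predict(ages, log_rates, np.array([52.0]))       # between training ages
```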
Finally, in the third topic, we introduce a clustering method based on principal curves for clustering human mortality data as functional data. This clustering method is applied to French total mortality data to explore its potential features.

http://hdl.handle.net/2381/39140
2017-01-13T15:44:19Z
Title: Discontinuous Galerkin Methods on Polytopic Meshes
Authors: Dong, Zhaonan
Abstract: This thesis is concerned with the analysis and implementation of the hp-version
interior penalty discontinuous Galerkin finite element method (DGFEM) on computational
meshes consisting of general polygonal/polyhedral (polytopic) elements.
Two model problems are considered: general advection-diffusion-reaction boundary
value problems and time dependent parabolic problems. New hp-version a
priori error bounds are derived based on a specific choice of the interior penalty
parameter which allows for edge/face-degeneration as well as an arbitrary number
of faces and hanging nodes per element.
The proposed method employs elemental polynomial bases of total degree p (Pp-
bases) defined in the physical coordinate system, without requiring mapping from
a given reference or canonical frame. A series of numerical experiments highlighting
the performance of the proposed DGFEM are presented. In particular,
we study the competitiveness of the p-version DGFEM employing a Pp-basis on
both polytopic and tensor-product elements with a (standard) DGFEM and FEM
employing a (mapped) Qp-basis. Moreover, a careful theoretical analysis of the optimal convergence rate in p for the Pp-basis is derived for several commonly used projectors, leading to sharp exponential convergence bounds with respect to the number of degrees of freedom (dof) for the Pp-basis.
Description: File under embargo until 3rd June 2017.

http://hdl.handle.net/2381/39024
2016-12-21T16:00:17Z
Title: Representations of Quantum Conjugacy Classes of Non-Exceptional Quantum Groups
Authors: Ashton, Thomas Stephen
Abstract: Let G be a complex non-exceptional simple algebraic group and g its Lie algebra. With every point x of the maximal torus T ⊂ G we associate a highest weight module Mx over the Drinfeld-Jimbo quantum group Uq(g) and an equivariant quantization of the conjugacy class of x by operators in End(Mx). These equivariant quantizations are isomorphic for x lying on the same orbit of the Weyl group, and the modules Mx support different exact representations of the same quantum conjugacy class.
This recovers all quantizations of conjugacy classes constructed before, via special x, and also completes the family of conjugacy classes by constructing an equivariant quantization of "borderline" Levi conjugacy classes of the complex orthogonal group SO(N), whose stabilizer contains a Cartesian factor SO(2) × SO(P), 1 ≤ P < N, P ≡ N mod 2.
To achieve this, generators of the Mickelsson algebra Zq(g, g'), where g' ⊂ g is the Lie subalgebra of rank rk g' = rk g − 1 of the same type, were explicitly constructed in terms of Chevalley generators via the R-matrix of Uq(g).

http://hdl.handle.net/2381/38502
2016-11-14T11:03:11Z
Title: Equivariant Hochschild Cohomology
Authors: Koam, Ali Nasser Ali
Abstract: In this thesis our goal is to develop an equivariant version of Hochschild cohomology. In the equivariant world a group G is given which acts on objects. The first naive object that can be considered is a G-algebra, that is, an associative algebra A on which G acts via algebra automorphisms. In our work we consider two more general situations: in the first case we develop a cohomology theory for oriented algebras, and in the second case a cohomology theory for Green functors.

http://hdl.handle.net/2381/38284
2016-10-31T10:53:54Z
Title: Multilevel Adaptive Radial Basis Function Approximation using Error Indicators
Authors: Zhang, Qi
Abstract: In some approximation problems, sampling from the target function can be both expensive and time-consuming. It would be convenient to have a method for indicating where the approximation quality is poor, so that generation of new data provides the user with greater accuracy where needed.
In this thesis, the author describes a new adaptive algorithm for Radial Basis Function (RBF) interpolation which assesses the local approximation quality and adds or removes points as required to reduce the error in the specified region.
For multiquadric and Gaussian approximations, one has the flexibility of a shape parameter, which can be used to keep the condition number of the interpolation matrix at a moderate size. In this adaptive error indicator (AEI) method, an adaptive shape parameter is applied.
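A minimal sketch of multiquadric interpolation with a fixed, hypothetical shape parameter is given below; it also reports the condition number of the interpolation matrix, the quantity the adaptive shape parameter is designed to control.

```python
import numpy as np

def mq(r, c):
    """Multiquadric basis sqrt(r^2 + c^2); c is the shape parameter."""
    return np.sqrt(r ** 2 + c ** 2)

def mq_interpolate(x_nodes, y_nodes, x_eval, c):
    """Interpolate with multiquadrics; also return the condition number
    of the interpolation matrix, which the shape parameter influences."""
    A = mq(np.abs(x_nodes[:, None] - x_nodes[None, :]), c)
    coeffs = np.linalg.solve(A, y_nodes)
    B = mq(np.abs(x_eval[:, None] - x_nodes[None, :]), c)
    return B @ coeffs, np.linalg.cond(A)

x = np.linspace(0.0, 1.0, 21)      # 21 equispaced centres (hypothetical set-up)
xe = np.linspace(0.0, 1.0, 101)
approx, kappa = mq_interpolate(x, np.sin(2 * np.pi * x), xe, c=0.1)
err = np.max(np.abs(approx - np.sin(2 * np.pi * xe)))
```

Increasing c typically improves accuracy but inflates kappa; the adaptive choice in the thesis balances these two effects.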
Numerical results for test functions from the literature are given in one, two, and three dimensions to show that the method performs well. A turbine blade design problem from GE Power (Rugby, UK) is considered and the AEI method is applied to it.
Moreover, a new multilevel approximation scheme, coupled with the adaptive error indicator, is introduced in this thesis. Preliminary numerical results from this Multilevel Adaptive Error Indicator (MAEI) approximation method are shown. These indicate that the MAEI is able to represent the target function well and provides highly efficient sampling.

http://hdl.handle.net/2381/38094
2016-09-26T11:04:55Z
Title: Homotopy Types of Topological Groupoids and Lusternik-Schnirelmann Category of Topological Stacks
Authors: Alsulami, Samirah Hameed Break
Abstract: The concept of a groupoid was first introduced in 1926 by H. Brandt in his fundamental paper [7]. The idea behind it is a small category in which every arrow is invertible. This notion of groupoid can be thought of as a generalisation of the notion of a group. Namely, a group is a groupoid with only one object. After the introduction of topological and differentiable groupoids by Ehresmann in 1950 in his paper on connections [19], the concept has been widely studied by many mathematicians in many areas of topology, geometry and physics. In this thesis, we deal with topological groupoids as the main object of study. We first develop the main concepts of homotopy theory of topological groupoids. Also, we study general versions of Morita equivalence between topological groupoids, which lead us to discuss topological stacks. The main objective of this thesis is then to develop and analyse a notion of Lusternik-Schnirelmann category for general topological groupoids and topological stacks, generalising the classical work by Lusternik and Schnirelmann for topological spaces and manifolds [30] and for orbifolds and Lie groupoids as introduced by Colman [11]. Fundamental in the classical definition of the LS-category of a smooth manifold or topological space is the concept of a categorical set. A subset of a space is said to be categorical if it is contractible in the space. The Lusternik-Schnirelmann category cat(X) of a topological space X is defined to be the least number of categorical open sets required to cover X, if that number is finite. Otherwise the category cat(X) is said to be infinite. 
Here, using generalised notions of categorical subgroupoids and substacks, we generalise the notion of the Lusternik-Schnirelmann category to topological groupoids and topological stacks, with the intention of providing a new useful tool and invariant for studying homotopy types of topological groupoids and topological stacks, which will also be important for understanding the geometry and Morse theory of Lie groupoids and differentiable stacks from a purely homotopical viewpoint.

http://hdl.handle.net/2381/37977
2016-08-16T11:54:44Z
Title: The convective instability of the BEK system of rotating boundary-layer flows over rough disks
Authors: Alveroğlu, Burhan
Abstract: A numerical study investigating the effects of surface roughness on the stability properties of the BEK system of flows is introduced. The BEK system of flows occurs in many engineering applications, such as turbo-machinery and rotor-stator devices, and is therefore of great practical importance. Recent studies have been concerned with the effects of surface roughness on the von Kármán flow. The aim of this thesis is to investigate whether distributed surface roughness could be used as a passive drag reduction technique for the broader BEK system of flows. If it can, what is "the right sort of roughness"? To answer these questions, a linear stability analysis is performed using the Chebyshev collocation method to investigate the effect of particular types of distributed surface roughness, both anisotropic and isotropic, on the convective instability characteristics of the inviscid Type I (cross-flow) instability and the viscous Type II instability. The results reveal that all roughness types lead to a stabilisation of the Type I mode in all flows within the BEK family, with the exception of azimuthally-anisotropic roughness (radial grooves) within the Bödewadt flow, which causes a mildly destabilising effect. In the case of the Type II mode, the results reveal the destabilising effect of radially-anisotropic roughness (concentric grooves) on all the boundary layers, whereas both azimuthally-anisotropic and isotropic roughness have a stabilising effect on the mode for the Ekman and von Kármán flows. Moreover, an energy analysis is performed to investigate the underlying physical mechanisms behind the effects of rough surfaces on the BEK system.
The conclusion is that isotropic surface roughness is the most effective type of distributed surface roughness and can be recommended as a passive drag-reduction mechanism for the entire BEK system of flows.

http://hdl.handle.net/2381/37971
2016-08-16T10:51:51Z
Title: Mathematical Modelling of Oxygen - Plankton System under the Climate Change
Authors: Sekerci Firat, Yadigar
Abstract: Oxygen production due to phytoplankton photosynthesis is a phenomenon of key importance for the underlying dynamics of marine ecosystems. However, despite its crucial importance, not only for marine but also for terrestrial ecosystems, the coupled oxygen-plankton dynamics have largely been overlooked.
This dissertation aims to provide insight into an oxygen-plankton system using mathematical modelling. We firstly develop a ‘baseline’ oxygen-phytoplankton model which is then further developed through the addition of biologically relevant factors such as plankton respiration and the predator effect of zooplankton. The properties of the model have been studied both in the nonspatial case, which corresponds to a well mixed system with a spatially uniform distribution of species, and in the spatially explicit extension, by taking into account the transport of oxygen and movement of plankton by turbulent diffusion.
Since the purpose of this work is to reveal the oxygen dynamics, the effect of global warming is taken into consideration and modelled by various oxygen production rates and phytoplankton growth functions in Chapters 5 and 6. It is shown that sustainable oxygen production is only possible in an intermediate range of the production rate. If the oxygen production rate becomes sufficiently low or high, in the course of time the system's dynamics show abrupt changes resulting in plankton extinction and oxygen depletion. We show that the spatial system's sustainability range is larger than that of the corresponding nonspatial system. We also show that oxygen production by phytoplankton can stop suddenly if the water temperature exceeds a certain critical threshold. Correspondingly, this dissertation reveals scenarios of extinction which could potentially lead to an ecological disaster.

http://hdl.handle.net/2381/37837
2016-07-13T15:42:29Z
Title: The Fundamental Groupoid and the Geometry of Monoids
Authors: Pirashvili, Ilia
Abstract: This thesis is divided into two equal parts. We start the first part by studying the Kato-spectrum of a commutative monoid M, denoted by KSpec(M). We show that the functor M → KSpec(M) is representable and discuss a few consequences of this fact. In particular, when M is additionally finitely generated, we give an efficient way of calculating it explicitly.
We then move on to study the cohomology theory of monoid schemes in general and apply it to vector bundles and, particularly, line bundles. The isomorphism classes of the latter form the Picard group. We show that under some assumptions on our monoid scheme X, if k is an integral domain (resp. PID), then the induced map Pic(X) → Pic(Xk) from X to its realisation is a monomorphism (resp. isomorphism).
We then focus on the Pic functor and show that it respects finite products. Finally, we generalise several important constructions and notions such as cancellative monoids, smoothness and Cartier divisors, and prove important results for them.
The main results of the second part can be summed up in fewer words. We prove that for good topological spaces X the assignment U → Π₁(U) is the terminal object of the 2-category of costacks. Here U is an open subset of X and Π₁(U) denotes the fundamental groupoid of U. This result translates to the étale fundamental groupoid as well, though the proof there is completely different and involves studying and generalising Galois categories.

http://hdl.handle.net/2381/37783
2016-06-17T10:02:32Z
Title: Betting Markets: Defining odds restrictions, exploring market inefficiencies and measuring bookmaker solvency
Authors: Cortis, Dominic
Abstract: Betting markets have been of great interest to researchers as they represent a simpler set-up of financial markets. With an estimated Gross Gambling Revenue of 45bn yearly from betting on outcomes alone (excluding other gambling markets such as casino, poker and lottery), these markets deserve attention on their own merit.
This thesis provides simple mathematical derivations of a number of key statements about setting odds. It estimates the expected bookmaker profit as a function of the wagers placed and the bookmaker margin. Moreover, it shows that odds set by bookmakers should have implied probabilities that add up to at least one. Bookmakers do not require the exact probability of an outcome to have positive expected profits. They can increase profitability by having more accurate odds and by offering more multiples/accumulators. Bookmakers can lower the variation in payouts by maintaining the ratio of wagers to implied probability per outcome.
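The claim that implied probabilities should add up to at least one can be illustrated with a small sketch. The odds below are invented examples, and `implied_probabilities`/`overround` are hypothetical helper names, not from the thesis:

```python
# Hedged sketch: decimal odds -> implied probabilities and bookmaker margin.

def implied_probabilities(decimal_odds):
    """Implied probability of each outcome is the reciprocal of its decimal odds."""
    return [1.0 / o for o in decimal_odds]

def overround(decimal_odds):
    """Bookmaker margin: the amount by which implied probabilities exceed one."""
    return sum(implied_probabilities(decimal_odds)) - 1.0

# A hypothetical home/draw/away market:
odds = [2.10, 3.40, 3.60]
probs = implied_probabilities(odds)
print(round(sum(probs), 4))       # → 1.0481 (greater than one)
print(round(overround(odds), 4))  # → 0.0481
```

The excess over one is the bookmaker's margin; odds whose implied probabilities summed to less than one would admit a sure-win bet against the bookmaker.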
While bookmakers face significant regulatory pressures as well as increased taxes and levies, there is no standard industry model that can be applied to evaluate the minimum reserves for a bookmaker. Hence a bookmaker may be under/over-reserving funds required to conduct business. A solvency regime for bookmakers is presented in this work.
Furthermore, a model is proposed to forecast soccer results; it focuses on goal differences, in contrast to traditional models that predict the outcome or the goals scored per team.
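As a sketch of what a goal-difference forecast can look like: here each team's goals are assumed Poisson and the outcome probabilities are read off the simulated goal difference. This assumption and the rates are illustrative guesses, not the thesis's model:

```python
# Hedged sketch: match outcome probabilities from a simulated goal difference.
import numpy as np

rng = np.random.default_rng(0)

def outcome_probabilities(rate_home, rate_away, n=200_000):
    """Estimate P(home win), P(draw), P(away win) from the goal difference."""
    diff = rng.poisson(rate_home, n) - rng.poisson(rate_away, n)
    return (np.mean(diff > 0), np.mean(diff == 0), np.mean(diff < 0))

p_home, p_draw, p_away = outcome_probabilities(1.6, 1.1)
print(p_home + p_draw + p_away)  # sums to 1 by construction
```

Working with the goal difference collapses the two-dimensional score distribution into one dimension while still determining the match outcome.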
Significant investigations are made into the inefficiencies of betting markets. The likelihood of Brazil reaching different stages of the 2014 World Cup, as perceived by the odds, is compared to events on and off the pitch to infer bias. An analysis of over 136,000 odds for European soccer matches shows evidence of the longshot bias. Finally, this work investigates how it is possible to profit from market inefficiencies on betting exchanges during short tournaments by using a Monte Carlo simulation method as a quasi-arbitrage model.

http://hdl.handle.net/2381/37772
2016-06-17T02:17:44Z
2016-06-16T09:25:00Z
Title: kNN predictability analysis of stock and share closing prices
Authors: Shi, Yanshan
Abstract: The k nearest neighbor rule, or the kNN rule, is a nonparametric algorithm that searches for the k nearest neighbors of a query set in another set of points. In this thesis, an application of the kNN rule to the predictability analysis of stock and share returns is proposed. The first experiment tests the possibility of prediction for ‘success’ (or ‘winner’) components of four stock and share market indices in a selected time period [1]. We have developed a method of labeling each component as either a ‘winner’ or a ‘loser’, and we analyze the existence of information on the winner–loser separation in the initial fragments of the daily closing prices log–returns time series. Leave–One–Out Cross–Validation with the kNN algorithm is applied to the daily log–returns of components. Two distance measurements are used in our experiment: a correlation distance and its associated proximity. Error analysis shows that, for the HANGSENG and the DAX indices, there are clear signs that it is possible to evaluate the probability of long–term success. The correlation distance matrix histograms and 2–D/3–D elastic maps generated with ViDaExpert show that the ‘winner’ components are closer to each other and the ‘winner’/‘loser’ components are separable on elastic maps for the HANGSENG and the DAX indices, while for the indices with negative results there is no sign of separation.
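The leave-one-out kNN classification described above can be sketched as follows. The synthetic series, labels, and parameter choices are invented for illustration; the thesis works with real index components' log-returns:

```python
# Hedged sketch: leave-one-out kNN with a correlation distance d = 1 - corr.
import numpy as np

def correlation_distance(x, y):
    return 1.0 - np.corrcoef(x, y)[0, 1]

def loocv_knn(series, labels, k=1):
    """Leave one series out, label it by majority vote of its k nearest neighbors."""
    n = len(series)
    correct = 0
    for i in range(n):
        dists = [(correlation_distance(series[i], series[j]), labels[j])
                 for j in range(n) if j != i]
        dists.sort(key=lambda t: t[0])
        votes = [lab for _, lab in dists[:k]]
        if max(set(votes), key=votes.count) == labels[i]:
            correct += 1
    return correct / n

rng = np.random.default_rng(1)
up = [np.cumsum(rng.normal(0.05, 0.1, 50)) for _ in range(5)]     # 'winners'
down = [np.cumsum(rng.normal(-0.05, 0.1, 50)) for _ in range(5)]  # 'losers'
accuracy = loocv_knn(up + down, ['w'] * 5 + ['l'] * 5, k=3)
print(accuracy)
```

Trending series of the same sign correlate strongly with one another, so the correlation distance tends to place 'winners' near 'winners', which is the separation effect the elastic maps visualise.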
In the second experiment, for a selected time interval, the daily log–return time series is split into “history”, “present” and “future” parts. The kNN rule is used to search for nearest neighbors of the “present” part within a set created using the sliding window strategy. The nearest neighbors are considered as the predicted “future” part. We then use ideas from dynamical systems to regenerate the “future” part closing prices from the nearest neighbors' log–returns. Different sub–experiments are created by varying the generation of the “history” part, the market index, and the distance measurement. This approach to modeling and forecasting works for both ergodic dynamical systems and random processes.
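A minimal sketch of the sliding-window strategy, assuming Euclidean distance; the window sizes, data, and the helper name `knn_forecast` are invented for illustration:

```python
# Hedged sketch: the last window of returns is the "present"; the continuations
# of its k nearest historical windows are averaged into the predicted "future".
import numpy as np

def knn_forecast(returns, window, horizon, k=1):
    """Predict the next `horizon` returns from the k nearest history windows."""
    present = returns[-window:]
    candidates = []
    # Slide over history, keeping room for each window's continuation.
    for start in range(len(returns) - window - horizon):
        hist = returns[start:start + window]
        dist = np.linalg.norm(hist - present)
        future = returns[start + window:start + window + horizon]
        candidates.append((dist, future))
    candidates.sort(key=lambda t: t[0])
    return np.mean([f for _, f in candidates[:k]], axis=0)

rng = np.random.default_rng(2)
returns = np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 0.05, 1000)
pred = knn_forecast(returns, window=30, horizon=5, k=3)
print(pred.shape)  # → (5,)
```

On a noisy periodic signal the nearest windows match the current phase, so their continuations are a reasonable forecast; the same mechanics apply to log-returns, where predictability is what is being tested.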
The Lorenz attractor with noise is used to generate data, and the data are used in the kNN experiment with the Euclidean distance. The sliding window strategy is applied in both the test and training sets. The kNN rule is used to find the k nearest neighbors and the next ‘window’ is used as the prediction. Analysis of the relative mean squared error (RMSE) shows that k = 1 gives the best prediction, and as k → 100 the average RMSE values converge, as do the average standard deviation values. The solution Z(t) is predicted quite accurately using the kNN experiment.

http://hdl.handle.net/2381/37469
2016-04-30T02:28:11Z
2016-04-29T13:40:19Z
Title: Cohomology of tiling spaces: beyond primitive substitutions
Authors: Rust, Daniel George
Abstract: This thesis explores the combinatorial and topological properties of tiling spaces
associated to 1-dimensional symbolic systems of aperiodic type and their associated
algebraic invariants. We develop a framework for studying systems which are more
general than primitive substitutions, naturally partitioned into two instances: in the
first instance we allow systems associated to sequences of substitutions of primitive
type from a finite family of substitutions (called mixed substitutions); in the second
instance we concentrate on systems associated to a single substitution, but where
we entirely remove the condition of primitivity.
We generalise the notion of a Barge-Diamond complex, in the one-dimensional case,
to any mixed system of symbolic substitutions. This gives a way of describing
the associated tiling space as an inverse limit of complexes. We give an effective
method for calculating the Čech cohomology of the tiling space via an exact sequence
relating the associated sequence of substitution matrices and certain subcomplexes
appearing in the approximants. As an application, we show that there exists a
system of substitutions on two letters which exhibits an uncountable collection of
minimal tiling spaces with distinct isomorphism classes of Čech cohomology.
In considering non-primitive substitutions, we naturally divide this set of substitutions
into two cases: the minimal substitutions and the non-minimal substitutions.
We provide a detailed method for replacing any non-primitive but minimal substitution
with a topologically conjugate primitive substitution, and a simpler
method for replacing the substitution with a primitive substitution whose tiling
space is orbit equivalent. We show that an Anderson-Putnam complex with a collaring
of some appropriately large radius suffices to provide a model of the tiling
space as an inverse limit with a single map. We apply these methods to effectively
calculate the Čech cohomology of any substitution which does not admit a periodic
point in its subshift. Using its set of closed invariant subspaces, we provide a pair
of invariants which are each strictly finer than the usual Čech cohomology for a
substitution tiling space.

http://hdl.handle.net/2381/37027
2016-03-12T04:27:12Z
2016-03-11T16:11:58Z
Title: A classification of the point spectrum of constant length substitution tiling spaces and general fixed point theorems for tilings
Authors: Abuzaid, Dina Asaad
Abstract: We examine the point spectrum of the various tiling spaces that result from
different choices of tile lengths for substitutions of constant length on a two- or three-letter
alphabet. In some cases we establish insensitivity to changes in length. In a wide range
of cases, we establish that the typical choice of length leads to trivial point spectrum.
We also consider a problem related to tilings of the integers and their connection to fixed
point theorems. Using this connection, we prove a generalization of the Banach Contraction
Principle.

http://hdl.handle.net/2381/36616
2016-02-10T03:35:14Z
2016-02-09T10:10:00Z
Title: Multiscale principal component analysis
Authors: Akinduko, Ayodeji Akinwumi
Abstract: The problem of approximating multidimensional data with objects of lower dimension is a classical problem in complexity reduction. It is important that data approximation captures the structure(s) and dynamics of the data; however, the distortion introduced by many methods during approximation means that some geometric structure(s) of the data may not be preserved. For methods that model the manifold of the data, the quality of approximation depends crucially on the initialization of the method. The first part of this thesis investigates the effect of initialization on manifold modelling methods. Using Self Organising Maps (SOM) as a case study, we compared the quality of learning of manifold methods for two popular initialization methods: random initialization and principal component initialization. To further understand the dynamics of manifold learning, datasets were further classified into linear, quasilinear and nonlinear.
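Principal component initialization of a SOM grid can be sketched as follows: node weights are spread over the plane of the first two principal components, as opposed to being drawn at random. The grid size, data, and helper name `pca_init_grid` are illustrative; the thesis's experiments use their own datasets and SOM implementation:

```python
# Hedged sketch: place initial SOM weights on the top-2 principal component plane.
import numpy as np

def pca_init_grid(data, rows, cols):
    """Place a rows x cols grid of initial SOM weights on the top-2 PC plane."""
    mean = data.mean(axis=0)
    centered = data - mean
    # Eigen-decomposition of the covariance matrix gives the principal directions.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    pc = eigvecs[:, order[:2]]
    scale = np.sqrt(eigvals[order[:2]])  # spread nodes by one std along each PC
    grid = np.empty((rows, cols, data.shape[1]))
    for i, a in enumerate(np.linspace(-1, 1, rows)):
        for j, b in enumerate(np.linspace(-1, 1, cols)):
            grid[i, j] = mean + a * scale[0] * pc[:, 0] + b * scale[1] * pc[:, 1]
    return grid

rng = np.random.default_rng(3)
data = rng.normal(size=(500, 4))
grid = pca_init_grid(data, rows=5, cols=5)
print(grid.shape)  # → (5, 5, 4)
```

Starting the map on the principal plane gives every node a data-adjacent, ordered starting position, which is one intuition for why the two initializations can lead to different quality of learning.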
The second part of this thesis focuses on revealing geometric structure(s) in high-dimensional data using an extension of Principal Component Analysis (PCA). Feature extraction using PCA favours directions with large variance, which can obfuscate other interesting geometric structure(s) present in the data. To reveal these intrinsic structures, we analysed the local PCA structures of the dataset. An equivalent definition of PCA is that it seeks subspaces that maximize the sum of squared pairwise distances between data projections; extending this definition, we define localization in terms of scale as maximizing the sum of weighted squared pairwise distances between data projections for various distributions of weights (scales). Since for complex data various regions of the dataspace can have different PCA structures, we also define localization with regard to the dataspace. The resulting local PCA structures were represented by the projection matrices corresponding to the subspaces and analysed to reveal structures in the data at various localizations.

http://hdl.handle.net/2381/35950
2015-11-26T03:03:19Z
2015-11-25T15:53:12Z
Title: Using partially specified models to detect and quantify structural sensitivity in biological systems
Authors: Adamson, Matthew William
Abstract: Mathematical models in ecology and evolution are highly simplified representations of a complex underlying reality. For this reason, there is always a high degree of uncertainty with regards to the model specification—not just in terms of parameters, but also in the form taken by the model equations themselves. This uncertainty becomes critical for models in which the use of two different functions fitting the same dataset can yield substantially different model predictions—a property known as structural sensitivity. In this case, even if the model is purely deterministic, the uncertainty in the model functions carries through into uncertainty in the model predictions, and new frameworks are required to tackle this fundamental problem. Here, we construct a framework that uses partially specified models: ODE models in which unknown functions are represented not by a specific functional form, but by an entire data range and constraints of biological realism. Partially specified models can be used to rigorously detect when models are structurally sensitive in their predictions concerning the character of an equilibrium point by projecting the data range into a generalised bifurcation space formed of equilibrium values and derivatives of any unspecified functions. The key question of how to carry out this projection is a serious mathematical challenge and an obstacle to the use of partially specified models. We address this challenge by developing several powerful techniques to perform such a projection.

http://hdl.handle.net/2381/34581
2015-11-19T08:55:48Z
Title: Optimum shape problems in distributed parameter control theory.
Authors: Girgis, Siham Boctor.
Abstract: The work is concerned with optimum shape problems in the distributed parameter area, and it consists of four parts. In Part I we consider first the basic variational theory due to Gelfand and Fomin, emphasising the importance of the transversality condition in optimum shape situations; also in Part I we discuss an application of the basic theory in a particular problem where the state equations (the constraints) are hyperbolic in character. In Part II we consider a heat transfer problem between two streams of different temperatures, moving parallel to one another and with constant speeds, the aim being to choose the inlet conditions of one stream in order to achieve desired outlet conditions for the other stream. Two different aspects of the heat transfer problem are considered. In Part III we consider a hydrodynamic problem using shallow water theory in which we seek the optimum shape of a harbour boundary in order to redistribute the liquid energy in some desired way. Here one-dimensional and two-dimensional aspects of the problem are discussed; in the former fairly precise results are achieved, and in the latter the solution of the problem is shown to depend on the solution of coupled integral equations. In Part IV we consider the problem of the optimum shape of an axially symmetric elastic body (subject to the classical equations of elasticity) in order to minimise the axial moment of inertia or the weight of the body. An approximate method for finding the optimum shape is presented, though considerable work remains to be done in this problem.

http://hdl.handle.net/2381/34582
2015-11-20T03:16:41Z
2015-11-19T08:55:48Z
Title: Distributed parameter theory in optimal control.
Authors: Gregson, M. J.
Abstract: The main result of this work is the solution of open loop optimal control problems for counterflow diffusion processes, which occur very widely in chemical and mechanical engineering. In these processes two fluids pass each other moving in opposite directions, separated by a membrane which is permeable to heat or a chemical solute. The membrane may also take the form of a liquid-gas interface. Subject to certain simplifying assumptions, the equations describing such processes involve θ1(x,t), θ2(x,t), the temperatures or concentrations of solute of the two fluids, and u(t), v(t), time-dependent flow rates. k is a transfer coefficient which is assumed constant, and C1, C2 are thermal or solute capacities of the fluids per unit length of tube. h is an equilibrium constant; h = 1 for heat transfer. Possible controls are the inlet temperature or concentration of one stream and the flow rates, while possible objectives are the regulation of the outlet temperature or concentration of the other stream, or the maximisation of heat or solute transfer. Subsidiary results are the optimal control of simpler but related hyperbolic systems. One of these is the restricted counterflow problem, in which the controlling stream is assumed to be so massive that it is unaffected by giving up heat or solute to the controlled stream. Another is the furnace equation, in which u and w are possible controls. Different classes of problem arise according to whether the multiplicative controls u and v are subject to rigid constraints (frequently leading to "bang-bang" controls), or whether they are constants, functions of x and t, or functions of t only. Variational methods based on the maximum principle of A.I. Egorov are employed. Analytic solutions and numerical solutions using finite differences are obtained to the various problems.
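A standard pair of counterflow transfer equations consistent with the quantities named above is C1 ∂θ1/∂t + u ∂θ1/∂x = k(θ2 − θ1) and C2 ∂θ2/∂t − v ∂θ2/∂x = k(θ1 − θ2); this exact form, and all coefficient values below, are assumptions for illustration, not quoted from the thesis. The finite differences mentioned above can then be sketched with a first-order upwind scheme:

```python
# Hedged sketch: explicit upwind integration of an assumed counterflow model.
import numpy as np

def step(t1, t2, u, v, k, C1, C2, dx, dt):
    """One explicit upwind step; stream 1 flows right, stream 2 flows left."""
    n1 = t1.copy()
    n2 = t2.copy()
    # Backward difference for the rightward stream, forward for the leftward one.
    n1[1:] += dt * (-u * (t1[1:] - t1[:-1]) / dx + k * (t2[1:] - t1[1:])) / C1
    n2[:-1] += dt * (v * (t2[1:] - t2[:-1]) / dx + k * (t1[:-1] - t2[:-1])) / C2
    return n1, n2

nx, dx, dt = 50, 1.0 / 50, 0.005     # CFL number u*dt/dx = 0.25
t1 = np.zeros(nx)                    # cold stream, inlet at x = 0
t2 = np.ones(nx)                     # hot stream, inlet at x = 1
for _ in range(2000):                # integrate towards steady state
    t1, t2 = step(t1, t2, u=1.0, v=1.0, k=0.5, C1=1.0, C2=1.0, dx=dx, dt=dt)
print(float(t1[-1]))                 # outlet temperature of the cold stream
```

With these balanced illustrative coefficients the steady cold-stream outlet approaches the classical counterflow effectiveness NTU/(1 + NTU) = 1/3; the inlet values t1[0] and t2[-1] act as the fixed boundary conditions that the thesis treats as controls.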
The simplifying assumptions made are probably too severe for many of the results to be directly applicable to industry. However, the qualitative features of the optimal control of these processes are explained, and it is not too difficult to build more complex models.

http://hdl.handle.net/2381/34580
2015-11-20T03:16:56Z
2015-11-19T08:55:47Z
Title: Applications of variational theory in certain optimum shape problems in hydrodynamics.
Authors: Essawy, Abdelrahman Hussein.
Abstract: PART I
In a recent paper, Wu, T.Y. & Whitney, A.K. studied optimum shape problems in hydrodynamics. These problems are stated in the form of a singular integral equation depending on the unknown shape and an unknown singularity distribution; the shape is then to be determined so that some given performance criterion is {maximized/minimized}. In the optimum problem to be studied in this part we continue to assume that the state equation is a linear integral equation, but we generalize the Wu & Whitney theory in two different ways. This method is applied to functionals of quadratic form and a necessary condition for the extremum to be a minimum is derived.

PART II
The purpose of this part is to evaluate the optimum shape of a two-dimensional hydrofoil of given length and prescribed mean curvature which produces {maximum lift/minimum drag}. The problem is discussed in three cases, when there is a {full/partial/zero} cavity flow past the hydrofoil. The liquid flow is assumed to be two-dimensional, steady, irrotational and incompressible, and a linearized theory is assumed. Two-dimensional vortex and source distributions are used to simulate the two-dimensional {full/partial/zero} cavity flow past a thin hydrofoil. This method leads to a system of integral equations and these are solved exactly using the Carleman-Muskhelishvili technique. This method is similar to that used by Davies, T.V. We use variational calculus techniques to obtain the optimum shape of the hydrofoil in order to {maximize/minimize} the {lift/drag} coefficient subject to constraints on curvature and given length. The mathematical problem is that of extremizing a functional depending on the {vortex strength/source strength}; these three functions are related by singular integral equations.
The analytical solution for the unknown shape z and the unknown singularity distribution y has branch-type singularities at the two ends of the hydrofoil. Analytical solution by singular integral equations is discussed and the approximate solution by the Rayleigh-Ritz method is derived. A sufficient condition for the extremum to be a minimum is derived from consideration of the second variation.

PART III
The purpose of this work is to evaluate the optimum shape of a two-dimensional hydrofoil of given length and prescribed mean curvature which produces minimum drag. A thin hydrofoil of arbitrary shape is in steady, rectilinear, horizontal motion at a depth h beneath the free surface of a liquid. The usual assumptions in problems of this kind are taken as a basis, namely, the liquid is non-viscous and moving two-dimensionally, steadily and without vorticity, and the only force acting on it is gravity. With these assumptions, together with a linearization assumption, we determine the forces due to the hydrofoil beneath a free surface of the liquid. We use variational calculus techniques similar to those used in Part II to obtain the optimum shape so that the drag is minimized. A sufficient condition for the extremum to be a minimum is derived from consideration of the second variation. In this part some general expressions are established concerning the forces acting on a submerged vortex and source element beneath a free surface using Blasius' theorem.

http://hdl.handle.net/2381/34579
2015-11-20T03:16:33Z
2015-11-19T08:55:47Z
Title: Optimum shape problems for distributed parameter systems.
Authors: Edwards, Janet M
Abstract: In this thesis the variation of a functional defined on a variable domain has been studied and applied to the problem of finding the optimum shape of the domain in which some performance criterion has an extremum. The method most frequently used is one due to Gelfand and Fomin. It is applied to problems governed by first and second order partial differential equations, unsteady one-dimensional gas movements, and the problem of minimum drag on a body with axial symmetry in Stokes flow.

http://hdl.handle.net/2381/34578
2015-11-19T08:55:46Z
Title: Many-valued logics: a study of the relationship of propositional calculi and algebraic systems.
Authors: Cuninghame-Green, Raymond.
Abstract: This thesis sets out to examine the possibility of devising a theory which will give a unified account of propositional calculi and algebraic systems. Starting from a historical account of the principal ideas tributary to the main stream of theory from Boole to the present day, it presents a technical-language framework within which it is possible to develop in a uniform format substantial portions of the theories of both sorts of system. The idea of an Interpretation then leads to a discussion of Functional Completeness, and the use of Galois fields in the algebraic representation of functions. Two particular families of systems, the Protomodules and Protorings, are selected for more detailed study. Their principal decision problems are considered, their structure examined, and their relationship to familiar systems of algebra and propositional calculus displayed. The discussion then specialises again to the use of Galois fields in the solution of computational problems arising in connection with an important class of protorings, the so-called Galois Logics. One of these problems is of sufficient complexity to warrant the use of an automatic digital computer, and details of the computer program are presented in an appendix. Three other appendices are devoted to the presentation of material which evolved as by-products during the contemplation of the main issues; they are concerned with closely related topics, and are given here in support of the thesis rather than as part of the theory.

http://hdl.handle.net/2381/34577
2015-11-20T03:16:57Z
2015-11-19T08:55:45Z
Title: Parameter reduction in definition by multi-successor recursion.
Authors: Burville, J. C.
Abstract: It is well known that in primitive recursive arithmetic with a single successor the number of parameters in a definition by recursion may be successively reduced. In this thesis I examine the possibility of effecting a similar reduction in the number of parameters in a definition by recursion in a multi-successor arithmetic. The reduction process involves the discovery in multi-successor arithmetic of analogues of pairing functions and of functions which select the elements of an ordered pair. One of the difficulties in finding such functions is the construction within multi-successor arithmetic of suitable product and square root functions, and establishing the properties of these functions, and the pairing functions, within a formalisation of multi-successor arithmetic. The reduction process involves of course an examination of what functions, if any, need to be adjoined to the initial functions to secure the generality of the reduction.

http://hdl.handle.net/2381/34575
2015-11-19T08:55:44Z
Title: Composition algebras and their generators.
Authors: Wheeler, Roger F.
Abstract: The aim of this thesis is to show how the study of composition algebras and their generators has developed from a simple observation in logic made by Henry Maurice Sheffer nearly 60 years ago. The results in the algebra on 2 marks, which corresponds to classical 2-valued sentence logic, were firmly established when Emil Post wrote a monograph on the subject 30 years ago. In this dissertation, however, they are developed in a more coherent and systematic way than has been attempted before, and it is hoped that some novelty can be claimed for this exposition. More recent work has concentrated on the algebra on 3 marks (to which the author has made a published contribution) and on the general algebra. The outstanding problem in the general case has, in fact, been solved quite recently by Ivo Rosenberg. This thesis does not try to cover these later developments comprehensively; it concentrates on investigating and elucidating aspects of the subject that the author has found interesting and elegant.

http://hdl.handle.net/2381/34576
2015-11-20T03:16:46Z
2015-11-19T08:55:44Z
Title: A formalisation of the arithmetic of transfinite ordinals in a multisuccessor Equation calculus.
Authors: Williams, H. P.
Abstract: This thesis presents a syntactic development of the arithmetic of transfinite ordinal numbers. This is done by means of an Equation calculus where all statements are given in the form of equations. There are rules of inference for deriving one equation from another. Certain functions, including a countably infinite number of successor functions, are taken as primitive. New functions are defined by substitution and primitive recursion starting with the primitive functions. Such definitions constitute some of the axioms of the system. The only other axioms are two rules concerning the combination of successor functions. Fundamental for this development is one particular axiom. In this system a multisuccessor arithmetic is developed in which it is possible to prove many of the familiar results concerning transfinite ordinal numbers. In particular the associativity of addition and multiplication, as well as multiplication being left distributive with respect to addition, are proved. It is shown that each ordinal in the system can be represented in Cantor's Normal Form. An ordinal subtraction is defined and a number of results involving it are proved. It is shown that this subtraction is, in a number of respects, an inverse to addition. In particular a key equation is proved. As in Professor Goodstein's formalisation of the primitive recursive arithmetic of the natural numbers, this equation is important as it allows a difference function to be defined for which a zero value is equivalent to equality of the arguments. Inequality relations are defined and some results concerning them proved. In Chapter II it is shown, using a suitable coding, that this arithmetic can be reduced to the primitive recursive arithmetic of the natural numbers. Chapter III gives a meta-proof of the consistency of the system.
Also submitted with this thesis is a paper, The Synthesis of Logical Nets Consisting of NOR Units, which is the result of work on a logical problem carried out at the same time as the work for the thesis.

http://hdl.handle.net/2381/34574
2015-11-20T03:17:00Z
2015-11-19T08:55:43Z
Title: Towards a theory of multivariate interpolation using spaces of distributions.
Authors: Wayne, Henry.
Abstract: The research contained in this thesis concerns the study of multivariate interpolation problems. Given a discrete set of possibly complex-valued data, indexed by a set of interpolation nodes in Euclidean space, it is desirable to generate a function which agrees with the data at the nodes. Within this general framework, this work pursues and generalizes one approach to the problem. Based on a variational theory, we construct a parameterised family of Hilbert spaces of tempered distributions, detail the necessary evolution of the interpolation problem, and provide a general error analysis. Some of the more popular applications from the theory of radial basis functions are shown to arise naturally, but the theory admits many more examples, which are not necessarily radial. The general error analysis is applied to each of the applications, and taken further where possible. Connections with the theory of conditionally positive definite functions are highlighted, but are not central to the theme.

http://hdl.handle.net/2381/34573
2015-11-20T03:16:55Z
2015-11-19T08:55:43Z
Title: Some problems in the kinetic theory of plasmas.
Authors: Tapp, M. C.
Abstract: This thesis covers essentially two problems in the kinetic theory of plasmas. The first concerns the investigation of plasma oscillations in a constant electric field, a topic investigated by Akhiezer and Sitenko as early as 1956 [1]. More recently Stenflo [2] has considered the problem, in which he replaces the collision integral of Boltzmann's equation by a Fokker-Planck term and a B.G.K. term. The dispersion relations derived by Stenflo contained a number of parameters whose relative importance he did not clearly define. We have undertaken here a stability study of longitudinal oscillations of a weakly ionised gas permeated by a uniform electric field. A dispersion relation is formulated in terms of error-type functions and some computational studies are carried out for various plasma parameters of interest. The results are exhibited graphically in the form of Nyquist plots. The conclusions made by Stenflo and others regarding possible instabilities of the plasma need modification, certainly in the context of a weakly ionised electron-ion gas. The second topic covered here concerns the transport theory of relativistic gases. This has received increasing attention in recent years [3,4]. Much attention has been devoted to calculating the first order relativistic effects on the transport coefficients. Up to now only the 'Maxwellian' model, investigated by Israel [3], has been considered. The method of attack is via the Chapman-Enskog approach. In this second topic we develop a more general approach to the problem by generalising the classical spherical harmonic solution of the Boltzmann equation to the relativistic case. The theory is applied to transport problems of fully ionised plasmas in the Coulomb field.

http://hdl.handle.net/2381/34572
2015-11-20T03:16:37Z
2015-11-19T08:55:42Z
Title: Incomplete data in event history analysis.
Authors: Sutton, Christopher Julian.
Abstract: Incomplete data present a serious problem in the modelling of event histories. Two particular forms of incompleteness are in evidence for data of this form. The first is due to recording of event times in interval-censored form. For single non-repeatable events this can be accommodated by using methods for modelling grouped survival times, such as those of Prentice and Gloeckler (1978) and Finkelstein (1986). The other, more serious, problem relates to incomplete recording of follow-up measurements which would typically be included as time-dependent covariates in survival models. A number of methods exist for handling incomplete data. These include multiple imputation for variables subject to incompleteness and the application of iterative algorithms such as EM and the data augmentation algorithm. In this thesis, a method for handling both these types of incompleteness is derived, based on multiple imputation combined with an adaptation of Finkelstein's method to handle time-varying covariates. This method is then investigated via Monte Carlo simulation and applied to data arising from the annual screening of those aged 75 years and over in the town of Melton Mowbray, as performed through the local general practice. Its performance is compared with that of more traditional approaches to modelling data collected in studies of this type. It is shown that parameter estimates can be considerably affected by the choice of approach to modelling. Whilst there are some problems with the implementation of this technique, particularly with reference to the model for the multiple imputation of the repeated risk factor values, it shows promise for application to studies of this form, particularly if combined with improved models for multiple imputations.
The data from the annual screenings are assumed missing at random, but the techniques used could be extended to cover non-ignorable missing data mechanisms of known form.

http://hdl.handle.net/2381/34570
Title: Alglat for modules over fsi rings and reflexivity.
Authors: Snashall, Nicole Jane.
Abstract: For a bimodule RMΔ where R and Δ are rings with unity, alglat RMΔ is the ring of all Δ-endomorphisms of M leaving invariant every R-submodule of M. The bimodule is said to be reflexive if the elements of alglat RMΔ are precisely the left scalar multiplications by elements of R. For most of the thesis Δ = R, a commutative ring with unity. However, in the early work, some results on the general structure of alglat are obtained, and in particular, Theorem 1.9 shows that it is an inverse limit. The next section of the thesis is concerned with reflexivity, and considers rings R for which all non-torsion or all finitely generated R-modules are reflexive. Theorem 3.4 gives eight equivalent conditions on an h-local domain R to the assertion that every finitely generated R-module is reflexive, that is, R is scalar-reflexive. A local version of this property is introduced, and it is shown in Theorem 2.17 that a locally scalar-reflexive ring is scalar-reflexive. The remainder of this thesis considers alglat for all modules over an FSI ring. The local FSI rings are precisely the almost maximal valuation rings, and this is the first case to be settled. More details are then given of the structure of FSI rings and related rings. A completion is introduced in 6.4 to enable alglat to be determined for certain torsion modules over an indecomposable FSI ring. Theorem 7.3, in summarising the work of the last two chapters of the thesis, gives a complete characterisation of alglat for all modules over an FSI ring.

http://hdl.handle.net/2381/34571
Title: Successor systems. An investigation into the primitive recursive functions of generalised multisuccessor arithmetics, with applications to constructive algebra.
Authors: Stanford, Paul Hudson.
Abstract: An investigation into the primitive recursive functions of generalised multisuccessor arithmetics, with applications to constructive algebra. Submitted for the degree of Doctor of Philosophy by Paul Hudson Stanford at Leicester University, England, in 1975. The above named thesis is concerned with the extension of the notion of primitive recursion to structures other than the natural numbers. Successor systems are generalisations of the arithmetics of Vučković [2], and as a class are closed under operations corresponding to direct products and quotient formation. Given a system Σ we can also define a system Σ* which has successor functions Sax for each numeral a of Σ. The formalisation used is derived from the free variable calculus of Goodstein [1]. Various forms of recursion are considered, none of which employ more than a small number of known functions. For example, given a function g from Σ × Σ to Σ we can define f from Σ* to Σ as follows: f(0) = 0; f(Sax) = g(a, f(x)). Algebraic applications include the construction of groups and rings: actual examples range from the integers and polynomials to permutations, finite sets and ordinal numbers. Several relations which may hold between systems are investigated, as are the notions of anchored and decidable systems. (Supported by a Science Research Council grant.) One chapter deals with the case of commuting successor functions, and another considers systems with only one successor. In an appendix we briefly investigate the further generalisation obtained by using non-unary successor functions. The author expresses his thanks to all concerned, especially his supervisor, Professor R. L. Goodstein.
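The recursion scheme quoted above can be run directly once numerals of the star system are modelled as finite lists of successor labels applied to 0. The list encoding and the sample g are illustrative choices, not the thesis's formalism:

```python
# A numeral of the star system is modelled as a list of successor labels
# applied to 0: e.g. [a, b] stands for Sa(Sb(0)).

def f(numeral, g):
    """The scheme from the abstract: f(0) = 0; f(Sa x) = g(a, f(x))."""
    if not numeral:                  # the numeral 0
        return 0
    a, *rest = numeral
    return g(a, f(rest, g))

# Example: labels are numerals of the base system taken as natural
# numbers, and g(a, v) = a + v, so f sums the successor labels.
print(f([3, 1, 2], lambda a, v: a + v))  # -> 6
```

Different choices of g give different primitive recursive functions on the star system; e.g. g(a, v) = v + 1 ignores the labels and just counts successors.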
Contents of thesis: (1) Introduction, (2) The Integers, (3) Products, (4) Recursion, (5) The Star Operation, (6) Commutative systems, (7) Homomorphisms, (8) Groups, (9) Further recursion, (10) Decidable systems, (11) Single successor systems, (12) Polynomials; (A1) Small systems, (A2) Joint successor arithmetics, (A3) Polish Circles, (A4) A Formalisation of the Integers. References to abstract: [1] Goodstein, R. L., Recursive Number Theory, Amsterdam (1957). [2] Vučković, V., Partially ordered recursive arithmetics, Math. Scand. 7 (1959), 305-320.

http://hdl.handle.net/2381/34569
Title: Functional-completeness criteria for finite domains.
Authors: Schofield, P. (Peter)
Abstract: Necessary and sufficient conditions for the functional completeness of a set F of functions with variables and values ranging over N = {0,1,...,n}, where n ≥ 1, are investigated and in particular, completeness criteria for a single function are determined. Complete solutions are known in the special cases n = 1, 2, and results about these special cases which are of use in formulating general theorems are discussed. Proceeding to the general case some preliminary criteria (which presuppose that certain 2-place functions are generated by F) for the functional completeness of F are derived. These results show that the set consisting of all 2-place functions is complete. In the special case n + 1 = p (a prime number) the functions of F are shown to have a special form, and this is used in some illustrations of complete subsets. The value sequence of a function satisfying the Słupecki conditions (that is, depending on at least 2 of its argument places, and taking all n + 1 values of N) is now examined, and some properties of such a function are found. These results are then used in demonstrating the completeness of a set F which generates all 1-place functions, together with a function satisfying the Słupecki conditions. Our main results give improved sufficient conditions for the completeness of F. In particular a set F is complete if it generates a triply transitive group of permutations of N and contains either (i) only a single function or (ii) at least one function satisfying the Słupecki conditions, the latter apart from certain exceptional cases. A detailed investigation shows that these occur only when n = 2 or when n + 1 is a power of 2 and all functions of F are linear in each variable, relative to some mapping of N as a vector space over Z2. Finally a different mapping of N into Z42 is considered, and it is shown that the functions of F can be given a unique representation relative to this mapping.
This representation is then used to find some examples of complete subsets.

http://hdl.handle.net/2381/34565
Title: Formalisations of recursive arithmetic.
Authors: Rose, H. E. (Harvey Ernest)
Abstract: In this thesis we shall present a new formalisation of the theory of primitive recursive functions, which is called Ternary Recursive Arithmetic. In a recent paper, Alonzo Church described a formalisation of recursive arithmetic in which single axioms of recursion and composition (i.e. definition by explicit substitution) took the place of an infinity of such axioms in earlier codifications. Church's system, however, postulates axioms of the propositional calculus and of mathematical induction; in Ternary Recursive Arithmetic these axioms have been eliminated in the manner of Goodstein. In chapter 1 a full statement of the primitive basis of the system will be given and in chapters 2, 3 and 4 we shall present a development of it and state precisely in what sense it may be considered a formalisation of the theory of primitive recursive functions. The main motivation of this work is that it enables us to give a proof of the consistency of primitive recursive arithmetic in a much simpler system than was hitherto possible; that is, in the system consisting of Ternary Recursive Arithmetic with one additional axiom. This proof and a discussion of the Gödel incompleteness theorems are given in chapters 6 and 7. In presenting these results we have given the more routine work, which is necessary but does not form an essential part of the development, at the ends of the corresponding chapters; sections 3.7, 4.5 and 5.4 fall into this category. (Abstract shortened by UMI.)

http://hdl.handle.net/2381/34566
Title: The metatheory of the elementary equation calculus.
Authors: Bundy, A.
Abstract: Abstract not available.

http://hdl.handle.net/2381/34568
Title: The convective instability of the boundary-layer flow over families of rotating spheroids.
Authors: Samad, Abdul.
Abstract: The majority of this work is concerned with the local-linear convective instability analysis of the incompressible boundary-layer flows over prolate spheroids and oblate spheroids rotating in otherwise still fluid. The laminar boundary layer and the perturbation equations have been formulated by introducing two distinct orthogonal coordinate systems. A cross-sectional eccentricity parameter e is introduced to identify each spheroid within its family. Both systems of equations reduce exactly to those already established for the rotating sphere boundary layer. The effects of viscosity and streamline-curvature are included in each analysis. We predict that for prolate spheroids at low to moderate latitudes, increasing eccentricity has a strong stabilizing effect. However, at high latitudes of θ ≥ 60°, increasing eccentricity is seen to have a destabilizing effect. For oblate spheroids, increasing eccentricity has a stabilizing effect at all latitudes. Near the pole of both types of spheroids, the critical Reynolds numbers approach that for the rotating disk boundary layer. However, in the prolate spheroid case near the pole for very large values of e, the critical Reynolds numbers exceed that for the rotating disk. We show that high curvature near the pole of prolate spheroids is responsible for the increase in critical Reynolds number with increasing eccentricity. For both types of spheroids at moderate eccentricity, we predict that the most amplified modes travel at approximately 76% of the surface speed at all latitudes. This is consistent with the existing studies of boundary-layer flows over the related rotating-disk, -sphere and -cone geometries.
However, for large values of eccentricity, the travelling speed of the most amplified modes increases up to approximately 90% of the surface speed of oblate spheroids and up to 100% in the prolate spheroid case.

http://hdl.handle.net/2381/34567
Title: Logical systems with finitely many truth values.
Authors: Rousseau, G.
Abstract: Abstract not provided.

http://hdl.handle.net/2381/34564
Title: Transmission of guided sound waves through a layer of fluid or solid.
Authors: Romilly, N.
Abstract: The thesis considers the transmission of sound waves through a layer of fluid or solid contained in a wave-guide of a simple form. The main aim is to find the transmission coefficient for a lowest order incident mode and to find the lengths of the layer for which the transmission is a maximum or minimum. The first part of the thesis gives the exact solution for transmission through a layer of inviscid fluid, and for transmission through a layer of viscous fluid when the boundaries of the guide are rigid and lubricated. It also gives approximate solutions for transmission through a layer of viscous fluid when the boundaries of the guide are pressure-free and when they are rigid but not lubricated. The second part of the thesis considers transmission through a layer of solid. It gives the exact solution, in infinite series form, to the problem of the transmission of any incident waveguide mode through a stretched membrane contained in a rigid circular guide. It is shown that above a certain frequency an incident plane wave can never be completely transmitted or completely reflected. Below this frequency complete transmission or reflection can occur, but the frequencies at which it does occur depend on the medium surrounding the membrane. The solution is discussed and results are given for a particular case and compared with approximate solutions obtained by other authors. The same analysis is applied to transmission through a thin plate. The second part of the thesis also contains work on transmission through a thick layer of elastic solid. An exact solution is found using an approximate equation of motion for the solid which should be valid at low frequencies. An attempt is made to find a solution based on the exact equations for the solid, but it is necessary to use an approximation.

http://hdl.handle.net/2381/34562
Title: The classification of ultrafilters on N.
Authors: Pitt, R. A.
Abstract: This thesis has as its aim the classification of ultrafilters on N, by use of partitions and collections of partitions of N, and the investigation of the operations under which each class is closed and the inclusion/exclusion relationships between them. Choquet (1) asked whether there was a n.p.u.f. U on N such that for no map φ : N → N is φ(U) absolute; Mathias answered that there was by constructing a n.p.u.f. U on N with the stronger property that for no map φ : N → N is φ(U) a P point (of βN − N). In Chapter II we construct a P point U such that for no map φ : N → N is φ(U) rare and, using this result, we construct a n.p.u.f. U on N such that for no map φ : N → N is φ(U) a P point or a rare ultrafilter. We also show that if U is a n.p.u.f. on N that cannot be mapped to a rare ultrafilter then U cannot be mapped to a countable limit of absolute ultrafilters.

NOTATION: Let U be a n.p.u.f. on N and s be a partition of N into finite sets (p.o.N.i.f.s.); we will write U ⊥ s whenever F ∈ U implies |F ∩ A| ≤ 1 for each A ∈ s. Let S1, S2 be p.o.N.i.f.s.; we will write S1 ⊥ S2 whenever A ∈ S1 and B ∈ S2 imply |A ∩ B| ≤ 1.

DEFINITION: A n.p.u.f. U on N is an n(a) point (point of degree of complexity n) if for every collection S = S1, S2, ..., Sn+1 of p.o.N.i.f.s. satisfying Si ⊥ Sj for 1 ≤ i < j ≤ n+1 there is a t, 1 ≤ t ≤ n+1, such that U ⊥ St, and this is the least n for which it is true. Chapter III is devoted to the investigation of this and allied notions. We extend the idea to allow infinite degrees of complexity (s and c) and show that for any n.p.u.f. U on N there is an n ∈ {0, 1, ..., s, c} such that U is an n(a) point. We also show that for any n, n(a) P points exist. Many of the theorems give bounds for the degree of complexity of ultrafilters of the form V = U-lim Ui given the degree of complexity of U, U1, U2, ... and given that U has a certain property (e.g. U is a P point).
The first section of Chapter IV gives counterexamples to the following plausible hypotheses: 1) each 1(a) point is rapid; 2) each c(a) point is not rapid. The final section of the thesis deals with a concept appearing in a letter from Professor G. Choquet to Dr. R. O. Davies.

DEFINITION: A n.p.u.f. U on N has property c if for any pair of maps φ, ψ : N → N, φ(U) = ψ(U) implies φ and ψ agree on some member of U. We show that there is a n.p.u.f. on N that is neither a P point nor a rare ultrafilter with property c and a n.p.u.f. on N that is a P point without property c. We investigate the relationships between the class of n.p.u.f.'s with property c and the classes defined previously.

(1) G. Choquet, Deux classes remarquables d'ultrafiltres sur N, Bull. Sci. Math. (2) 92 (1968), 143-153.

http://hdl.handle.net/2381/34563
Title: The water wave - ice floe interaction and associated integral equation problems.
Authors: Porter, D.
Abstract: The water wave - ice floe interaction is introduced by reviewing the work done to date on the problem. Several mathematical models, incorporating hitherto unexplored and possibly significant mechanisms of the interaction, are then constructed and investigated. In the first place, the effect of a plane wave incident at any angle upon a semi-infinite elastic sheet of constant thickness is considered, using linearised shallow water theory. The solution for the velocity potential under the ice is discussed for various values of the physical parameters, and in the most interesting case, numerical calculations are made to determine the relevance of such factors as ice thickness and angle of incidence. Secondly, a semi-infinite sheet of variable thickness is examined and the particular case treated when this thickness has a sinusoidal form. Ranges of incident wavelengths corresponding to a progressive wave solution under the ice are calculated. Also, an ice thickness having a rectangular wave form is considered with similar results. Attention is then turned to the problem of the existence of a progressive wave in an infinite array of rigidly held, equally spaced floes. Two different approaches are employed to reduce the resulting potential problem to weakly singular integral equations, which in turn are solved by a perturbation method, and, in the general case, by a numerical technique. It is found that complex wave groups can be constructed satisfying the problem, but that simple progressive waves do not exist. In an attempt to make analytic inroads on the above mentioned integral equations, some aspects of singular integro-differential equations are investigated, and methods developed by which these may be solved. 
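One standard numerical route to integral equations of this general kind is a Nyström-type discretization: replace the integral by a quadrature rule and solve the resulting finite system. The sketch below uses an invented smooth kernel and right-hand side (chosen so the exact solution is u(x) = x) with plain trapezoidal weights; the weakly singular kernels treated in the thesis would need a product-integration rule instead:

```python
# Solve u(x) = f(x) + \int_0^1 K(x, t) u(t) dt by fixed-point iteration
# on a trapezoidal grid.  K and f are invented so that u(x) = x exactly.
n = 101
xs = [i / (n - 1) for i in range(n)]
w = [1 / (n - 1)] * n
w[0] = w[-1] = 0.5 / (n - 1)            # trapezoidal weights

K = lambda x, t: x * t / 2
f = lambda x: 5 * x / 6                 # since \int_0^1 (x t / 2) t dt = x / 6

u = [f(x) for x in xs]                  # initial guess
for _ in range(50):                     # \int |K| dt <= 1/4 < 1, so this contracts
    u = [f(x) + sum(wi * K(x, t) * ut for wi, t, ut in zip(w, xs, u))
         for x in xs]

err = max(abs(ut - t) for ut, t in zip(u, xs))
print(f"max deviation from u(x) = x: {err:.1e}")
```

For kernels without a contraction bound one would instead assemble and solve the dense linear system (I - WK)u = f directly.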
The closely associated generalised Riemann-Hilbert problem is also discussed and two integro-differential equations arising in aerodynamic theory are solved as examples of the techniques proposed.

http://hdl.handle.net/2381/34560
Title: The Hamiltonian formulation in relativity.
Authors: Palfreyman, Niall M.
Abstract: Like any major breakthrough in thinking, the theory of relativity caused a great upheaval in our attitude to science. Seventy years after the advent of relativity we are still coming to terms with the changes it has brought in our outlook. Part of this process is simply the valid translation of pre-relativistic laws and concepts into the 4-dimensional language of relativity - a problem by no means as easy as it would at first seem; the aim of this thesis is to survey the ways in which the methods of analytical mechanics may be translated into a relativistic setting. Chapter 1 provides an introduction to the work in the form of a non-rigorous discussion of the historical and mathematical development of electromagnetism, analytical mechanics and relativity, and ends with a presentation of the basics of the functional calculus. This is needed in the presentation of field theory given in chapter 2. We see two possibilities for the relativistic formulation of analytical mechanics, and field theory represents the first of these possibilities. In the absence of any real grounds for continuing on this tack we then move on to the other possibility in chapter 3, where we review the attempts of a number of authors to formulate relativistic particle mechanics as a Hamiltonian system. This then leads in chapter 4 to our own such attempt, based mainly on the work of Synge, which we have named homogeneous mechanics. After the main exposition of the theory the work of the remaining chapters 5 and 6 is then to apply the above theory (not always successfully) to a number of cases where analytical mechanics has in the past proven itself an invaluable tool: namely, the areas of symmetries and quantum theory.

http://hdl.handle.net/2381/34561
Title: Commutative multiple successor recursive arithmetics.
Authors: Partis, M. T. (Michael T)
Abstract: Recursive arithmetics are usually based on three initial functions, namely the zero, successor and identity functions. In this thesis recursive arithmetics are considered which instead of having just one successor function have a number of different successor functions. These will be represented by Sv where v ranges from 1 to n. The system is made commutative by stipulating that SuSvx = SvSux for all u and v. The notion of a primitive recursive function is introduced into this arithmetic and various basic functions are defined. Another recursive arithmetic is then constructed in which the elements are ordered sets of natural numbers. It is shown that a complete isomorphism, both functional and deductive, exists between this arithmetic and the arithmetic with n successors. It is then shown by using this isomorphism that a proof can be constructed of the key equation x + (y - x) = y + (x - y) in multiple successor recursive arithmetic. A formal equation calculus is then developed for multiple successor recursive arithmetic in which the proof of the key equation given above is derived without recourse to a doubly recursive uniqueness rule. The properties of the basic primitive recursive functions are also established. The problem of avoiding irregular models of this equation calculus is then examined and it is shown that this can be done by using relatively simple axioms. An inequality relationship is then defined and it is shown that with respect to this relationship the numbers of a multiple successor recursive arithmetic form a lattice. It is then shown that this lattice is modular and distributive. The problem of introducing limited universal and existential quantifiers is then considered. It is shown that this can be done in an arithmetic of ordered sets and hence, by the isomorphism established earlier, they can also be introduced into a multiple successor recursive arithmetic.
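Reading "-" as the truncated subtraction of recursive arithmetic (so y - x = 0 when x ≥ y), the key equation above can be checked mechanically on an initial segment of the naturals; a brute-force illustration rather than a proof in the calculus:

```python
def monus(a, b):
    """Truncated subtraction: a - b if a >= b, else 0."""
    return a - b if a >= b else 0

# The key equation x + (y - x) = y + (x - y): both sides equal max(x, y).
assert all(x + monus(y, x) == y + monus(x, y)
           for x in range(50) for y in range(50))
print("key equation verified on 0..49")
```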
Three different logical models in multiple successor recursive arithmetic are then considered. The models are of classical two-valued logic, a modified form of Heyting's intuitionist logic, and a many-valued logic. The connection between these models is examined.

http://hdl.handle.net/2381/34558
Title: Intermediate propositional logics.
Authors: McKay, C. G.
Abstract: The main object of the thesis is to investigate a variety of questions relating to the set of intermediate propositional logics. Let H denote the set of words which are intuitionist theses and let K denote the set of words which are classical theses. Then a set of words X is an intermediate (propositional) logic iff 1) H ⊆ X ⊆ K and 2) X is closed wrt modus ponens and substitution. Of special interest among intermediate logics are those which are characterised by a finite pseudocomplemented lattice. We prove the important result that every such finite logic is finitely axiomatisable. This result is one of the many consequences of the fundamental representation theorem for pseudocomplemented lattices (PLs) whereby every PL is subdirectly reducible to a set of so-called strongly compact PLs. In addition we provide a neat syntactic characterisation of finite logics, and show that H is the limit of a certain sequence of explicitly axiomatised finite logics. In addition we consider more restricted types of intermediate logics, in particular intermediate ICN logics. By generalising a result of Diego, to show that every ICN algebra with a finite number of generators is finite, we manage to prove that every finitely axiomatised intermediate ICN logic is decidable with primitive recursive bound. This generalises and completes earlier work of Bull. The same methods are then applied to obtain a proof of the decidability of all those intermediate logics obtained by adding a finite set of disjunction-free words as additional axioms to H. Many older results in the literature are then seen to be special cases of this general result. We introduce the new concept of strong undefinability of a propositional connective, and examine its relation to McKinsey's related notion. It is shown that the connectives of implication, disjunction and negation are all strongly undefinable in H, whereas conjunction is weakly definable.
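The pseudocomplemented lattices mentioned above can be made concrete with a toy example: in the lattice of divisors of 12 ordered by divisibility (meet = gcd, bottom element 1), which is finite and distributive and hence pseudocomplemented, the pseudocomplement a* is the greatest b with a ∧ b = 1. The choice of lattice is illustrative only, not one of the thesis's own constructions:

```python
from math import gcd

divisors = [d for d in range(1, 13) if 12 % d == 0]   # lattice elements

def pseudocomplement(a):
    """Greatest b under divisibility with gcd(a, b) == 1, i.e. a ∧ b = bottom."""
    candidates = [b for b in divisors if gcd(a, b) == 1]
    # The greatest candidate is the one every other candidate divides.
    return next(c for c in candidates
                if all(c % s == 0 for s in candidates))

table = {a: pseudocomplement(a) for a in divisors}
print(table)                      # e.g. 2* = 3, 3* = 4, 6* = 12* = 1
assert all(gcd(a, s) == 1 for a, s in table.items())   # a ∧ a* = bottom
```

Note that a** need not equal a here (e.g. 2** = 4), which is exactly what separates pseudocomplements from Boolean complements.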
Lastly we investigate the scope of the so-called Separation theorem in the field of intermediate logics. It is shown that certain intermediate logics treated in the literature do not possess any axiomatisation for which the Separation theorem can be proved.

http://hdl.handle.net/2381/34559
Title: Operations on generalized functions.
Authors: Özçag, Emin.
Abstract: In Chapter 1, we give some properties of distributions and introduce the notions of neutrix and neutrix limit with examples, in order to study the problem of defining the convolution product and the product of distributions. The problem of defining the distribution such that the ordinary derivative formula is satisfied for all and s = 0,1,2,... is studied in Chapter 2. In Chapter 3, we define the Beta function B(p, q) using the neutrix limit and prove that this neutrix limit exists for all p and q. In Chapter 4 we let f and g be distributions and let fn(x) = f(x)Tn(x), where Tn(x) is a certain function which converges to the identity function as n tends to infinity. We then define the neutrix convolution product f ⊛ g as the neutrix limit of the sequence {fn * g}, provided the limit h exists in the sense that N-lim n→∞ <fn * g, φ> = <h, φ> for all φ in D. The neutrix convolution products In are evaluated, from which other neutrix convolution products are deduced. The neutrix convolution product of distributions in Chapter 4 is not commutative. Therefore, in Chapter 5, we consider the commutative neutrix convolution product of distributions, *, and also evaluate the neutrix convolution product. The problem of defining the product of ultradistributions is considered in Chapter 6, and the neutrix product (Ff)(Fg) in Z', where F denotes the Fourier transform, is defined as the neutrix limit of {F(fTn).F(gTn)}. Later, we prove that the exchange formula holds. We finally define the neutrix product F(f) o G(g) of F(f) and G(g), where F and G are distributions and f and g are locally summable functions. It is proved that if f is an infinitely differentiable function with f'(x) 0 and if the neutrix product F o G exists and equals H, then the neutrix product F(f) o G(f) exists and equals H(f).
We also give an alternative approach to the form F(f(x)) in D', where F and f are distributions.

http://hdl.handle.net/2381/34556
Title: An investigation of the propagation of electromagnetic waves in some circular cylindrical waveguides using a finite difference formulation.
Authors: Lawrence, P. J.
Abstract: This thesis is concerned with the propagation of electromagnetic waves through a circular cylindrical waveguide having perfectly conducting walls. A finite difference approximation method is used to evaluate the propagation constant of the waves. The method is one of great generality. It may be used for any coaxial configuration of media inside the waveguide. In particular, the effects on propagating electromagnetic waves of a transversely magnetised ferrite tube adjacent to the waveguide wall are studied. Ferrite material is taken to have a permeability tensor of the form [image] when it is subjected to a static magnetic field along its third coordinate axis. The ferrite tube is subject to a static magnetic field formed by four magnetic poles at the corners of a square centred on the axis of the guide, like poles being at opposite corners. In the ferrite, this field leads to a permeability tensor which is dependent upon the angle in cylindrical polar coordinates when the z-axis is taken along the guide, and Maxwell's equations reduce to two simultaneous second order partial differential equations with non-constant coefficients in the Ez and Hz components of the propagating electromagnetic wave. The finite difference approximation method reduces the problem to one of solving the condition for consistency of a large number of difference equations. Values of the propagation constant which satisfy this condition are found by a trial method which involves evaluating a determinant of very high order. This evaluation is carried out by computer and use is made of the banded nature of the determinant to prevent the amount of computer store required becoming prohibitive. The validity of the method is tested by applying it to several special cases with known results and its limitations and accuracy are discussed. A hypothesis is suggested to explain the numerical results.

http://hdl.handle.net/2381/34557
Title: Decidable classes of recursive equations.
Authors: Lee, R. D.
Abstract: Many different formalisations of recursive arithmetic have been proposed, and this thesis is concerned mainly with the system proposed by R. L. Goodstein and known as the Axiom-Free Equation Calculus. As with all other formal systems of arithmetic with sufficient content, the system is incomplete and recursively undecidable. The interesting questions lie in the completeness and decidability, or otherwise, of fragments of the system. I attempt to answer some of these questions. It happens that some of the problems lead to well known questions in the theory of diophantine equations, namely Hilbert's 10th Problem, The Undecidability of Exponential Diophantine Equations, and the Integer Linear Programming Problem. In 1943 Kalmar proposed a set of functions called elementary functions, and Ilona Bereczki showed effectively that the class of equations F = 0, where F is any elementary function, is undecidable. The class of functions given by Kalmar was: variables, 1, +, ., |a - b|, [a/b], but it can easily be shown that this is the same as those formed by composition from +, ., ∸. This latter definition is the one we use. In his paper, A Decidable Fragment of Recursive Arithmetic, Goodstein showed the class of equations F = 0, where F is any function formed by composition from x + y, x.y and 1 ∸ x, is decidable. So I have attempted to extend Goodstein's result with the upper bound provided by the undecidability of the elementary equations. The main results I have obtained are: 1. If F is any function formed by composition from x + y, x.y, 1 ∸ x, x ∸ 1, Σy≤w, Πy≤w, then F = 0 is decidable, and furthermore the provability in the equation calculus of F = 0 is decidable and that this class of equations is complete. 2. If F, G are any functions formed from x + y, x.y, 1 ∸ x, x ∸ 1 by composition, then the class of equations F = G is decidable. 3. If F, G are any functions formed by composition from x + y, x ∸ y then the class of equations F = G is decidable. 4.
If F.G are any functions formed by composition from x + y, x ? y, x.y, then the class of equations F = G is decidable if and only if Hilbert's 10th Problem is decidable. 5. If F,G are any functions formed by composition from x + y, x.y, II y=w then the class of equations F = G is undecidable. 6. Presburger's Algorithm can be used to solve the Integer Linear Programming Problem - the problem was not solved until 1958.2015-11-19T08:55:36ZLattices and topologies on Newman algebras.Beazer, R.http://hdl.handle.net/2381/345552015-11-19T08:55:35Z2015-11-19T08:55:35ZTitle: Lattices and topologies on Newman algebras.
Authors: Beazer, R.
Abstract: In what was almost certainly an attempt to find a new axiom system for Boolean algebra based on distributivity and the existence of complements, M. H. A. Newman discovered a remarkable set of independent postulates defining an algebra which may be regarded as a generalization of Boolean algebra and which now bears his name. Shortly after the publication of his paper, Newman extended his discussion to a wider class of relatively complemented algebras which we call Generalized Newman algebras. Recently K. Roy investigated the properties of an algebra closely related to Newman algebra, called Dual Newman algebra, and found that it has properties similar to those of its progenitor.
The opening chapters of the thesis are devoted to a discussion of the properties of the lattices of ideals, congruence relations and filters in a Newman algebra, and the relationships between them. The concepts of inverse and sub-inverse limits of Newman algebras are introduced, some general properties are proved, and a sub-inverse limit representation is established for a particular class of Newman algebras, together with an inverse limit representation for the class of infinite, couplet, Boolean algebras. Furthermore, it is proved that a Newman algebra can be represented as a direct product of simple algebras if and only if its ideal lattice is a finite Boolean algebra.
In the following chapter we investigate, within the framework of Newman algebras, the analogues of the auto and ideal topologies on Boolean algebras discovered by P. S. Rema. It is shown that the set L1 of all ideal topologies on a Newman algebra N is a complete, Brouwerian, dually atomic lattice containing the set L0 of all auto topologies as a complete sub-lattice, and that L0 is completely isomorphic to the lattice of filters of N.
Some important types of filters in N are characterized in terms of properties of the associated auto topologies on N, and the auto topology associated with a given filter is characterized within the lattice of ideal topologies on N. Amongst the more general properties proved, we mention that the property of a topology compatible with the fundamental operations on N being an ideal (auto) topology is, in both the algebraic and the topological sense, hereditary, productive and divisible. The various connectedness properties of an ideal topologized Newman algebra (N; J) are considered in some detail; the components are exhibited as certain congruence classes of N, and necessary and sufficient conditions are found for (N; J) to be connected, locally connected and totally disconnected. Some results are obtained concerning complete ideal uniformities and compact ideal uniformities. The properties of a particular class of ideal uniformities, called chain uniformities, are investigated, and a clear-cut family of metrizable chain uniformities is exhibited. Necessary and sufficient conditions are then established for a Newman algebra endowed with a separated ideal uniformity to be metrizable.
In the closing chapters of the thesis we are concerned with the axiomatics of Dual and Generalized Newman algebras. Two new sets of axioms for Dual Newman algebra are exhibited, each containing one fewer axiom than the system due to K. Roy. A new set of axioms for Generalized Boolean algebra is found containing one fewer axiom than the system discovered by Newman, together with a new set of independent postulates, characterizing the direct product of an arbitrary Generalized Boolean algebra and a Boolean ring, which contains two fewer axioms than the system discovered by Newman.
2015-11-19T08:55:35Z