refinement using a sequence of nested, logically rectangular meshes on which the partial differential equation is discretized [4]. This technique adapts the numerical precision to regions of the model that exhibit dynamic and/or multi-scale behavior. For our work on initial data, we use Carpet, an adaptive mesh refinement and multi-patch driver [14], via code found at http://einsteintoolkit.org/ [10]. These problems have extreme data storage requirements since each level of refinement is often quite large, potentially comprising millions of grid cells, with up to 15 levels of refinement being needed for the highest resolution simulations [11]. To combat this, we look for methods to store data more efficiently, preserving the vast majority of the information content of a simulation while reducing the overall storage requirements.
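The storage pressure from a nested refinement hierarchy can be illustrated with a rough, back-of-envelope sketch. All parameters below (base grid size, 25 variables per cell, the fraction of each parent level covered by the next finer level) are hypothetical illustrations, not Carpet's actual defaults:

```python
# Illustrative back-of-envelope estimate of AMR storage growth across
# refinement levels. All numbers are hypothetical, not Carpet's defaults.

def amr_storage_bytes(base_cells, levels, vars_per_cell=25,
                      bytes_per_value=8, coverage=0.125):
    """Rough storage for a nested-mesh hierarchy: each finer level is 2x
    refined per dimension (8x denser cells in 3D) but covers only a
    fraction `coverage` of its parent's volume."""
    total = 0
    cells = base_cells
    for _ in range(levels):
        total += cells * vars_per_cell * bytes_per_value
        cells = int(cells * 8 * coverage)  # cell count on the next finer level
    return total

gb = amr_storage_bytes(base_cells=128**3, levels=15) / 1e9
print(f"~{gb:.1f} GB per snapshot of the full hierarchy")
```

With these toy numbers a single snapshot already runs to several gigabytes, which is why more efficient storage schemes are attractive.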


This dissertation touches on a representative sampling of the issues in numerical relativity. Chapter 2 describes some of the necessary theoretical background, including the Einstein equations and the 3+1 decomposition of the Einstein equations known as the ADM decomposition. Chapter 3 explains the initial value problem of general relativity. To evolve any physical system, one must start with valid initial data. The initial data must obey the laws of physics; in our case, it must satisfy constraint equations. Chapter 4 describes our initial value solver using multigrid and adaptive mesh refinement, called AMRMG (Adaptive Mesh Refinement MultiGrid). The initial data for one of the first successful binary black hole simulations was provided by AMRMG. Chapter 5 details a new class of initial data made feasible by AMRMG: distorted black holes using the puncture method. Chapter 6 describes the evolution equations as they are written in the ADM formalism. It explains the inherent instabilities encountered with this form and illustrates exponentially growing instabilities with some numerical examples. This chapter also contains an introduction to controlling constraint violations during an evolution. We use spectral methods to implement a constrained evolution. Spectral methods are well-established computational tools, but they are somewhat novel to the numerical relativity community. Chapter 7 explains spectral methods as they are most commonly used in relativity and describes the


Abstract. Lofted mineral dust over data-sparse regions presents considerable challenges to satellite-based remote sensing methods and numerical weather prediction alike. The southwest Asia domain is replete with such examples, with its diverse array of dust sources, dust mineralogy, and meteorologically driven lofting mechanisms on multiple spatial and temporal scales. A microcosm of these challenges occurred over 3–4 August 2016 when two dust plumes, one lofted within an inland dry air mass and another embedded within a moist air mass, met over the southern Arabian Peninsula. Whereas conventional infrared-based techniques readily detected the dry air mass dust plume, they experienced marked difficulties in detecting the moist air mass dust plume, becoming apparent when visible reflectance revealed the plume crossing over an adjacent dark water background. By combining information from numerical modeling, multi-satellite and multi-sensor observations of lofted dust and moisture profiles, and idealized radiative transfer simulations, we develop a better understanding of the environmental controls of this event, characterizing the sensitivity of infrared-based dust detection to column water vapor, dust vertical extent, and dust optical properties. Differences in assumptions of the dust complex refractive index translate to variations in the sign and magnitude of the split-window brightness temperature difference commonly used for detecting mineral dust. A multi-sensor technique for mitigating the radiative masking effects of water vapor via modulation of the split-window
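The split-window brightness temperature difference (BTD) test mentioned in the abstract can be sketched as follows. The channel values and the detection threshold here are illustrative only, not an operational algorithm, and real retrievals must account for the water-vapor masking the abstract describes:

```python
# Minimal sketch of the split-window brightness-temperature-difference
# (BTD) dust test: BTD = BT(~11 um) - BT(~12 um). Thresholds and sample
# brightness temperatures are illustrative, not operational values.

def split_window_btd(bt11, bt12):
    """Brightness temperature difference between ~11 and ~12 micron channels (K)."""
    return bt11 - bt12

def flag_dust(bt11, bt12, threshold=-0.5):
    """Negative BTD is a classic dry-atmosphere mineral dust signature;
    column water vapor can mask it, pushing BTD back toward positive."""
    return split_window_btd(bt11, bt12) < threshold

# Dry-air dust plume: silicate absorption depresses the 11 um channel.
print(flag_dust(bt11=288.0, bt12=290.2))
# Moist-air dust plume: water vapor masks the negative-BTD signature.
print(flag_dust(bt11=285.1, bt12=284.8))
```

The second case illustrates the detection failure mode discussed above: the dust is present, but the moist column erases the negative BTD the test relies on.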


The Spectral Einstein Code (SpEC) [123] is a numerical relativity code that can perform BBH simulations. It presently can handle mass ratios up to 10 and dimensionless spin magnitudes of almost 1 [55], and is typically used for simulations lasting dozens of orbits, although it is capable of performing hundreds of orbits given enough computing time [16]. The singularities inside the BHs are excised from the computational domain. The excised regions are chosen to lie within the BH apparent horizons, which are gauge-dependent null surfaces guaranteed to lie inside the event horizons. This ensures the outside physics are not affected. The remaining domain, which extends out to some outer boundary, is decomposed into many subdomains. Each subdomain is treated spectrally [124, 125]: all quantities are decomposed into basis functions, and the basis function expansions are truncated at some finite number of terms, which determines the resolution. The advantage of spectral methods is that for problems with smooth solutions, the numerical truncation error decays exponentially with the number of basis functions. Subdomains are split and merged, and their resolution is controlled automatically using adaptive mesh refinement. The computational grid is fixed but mapped onto the physical domain using rotations, translations, scalings, and other maps. These maps are controlled to ensure the excised regions remain inside the BH apparent horizons [126]. SpEC uses a generalized harmonic evolution system [127], which chooses coordinates x^μ to satisfy an inhomogeneous wave equation,
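The exponential convergence claimed for smooth solutions can be demonstrated with a small, generic Chebyshev interpolation experiment. This illustrates spectral convergence in general and is not SpEC's actual basis or code:

```python
# Demonstration of "spectral" (exponential) convergence: the maximum
# Chebyshev interpolation error for a smooth function drops exponentially
# as the number of basis functions grows. Test function is illustrative.
import numpy as np

def cheb_interp_error(f, n, n_test=500):
    """Max error of degree-n Chebyshev interpolation of f on [-1, 1]."""
    # Chebyshev points of the first kind (n+1 of them -> exact interpolation)
    nodes = np.cos(np.pi * (np.arange(n + 1) + 0.5) / (n + 1))
    coeffs = np.polynomial.chebyshev.chebfit(nodes, f(nodes), n)
    x = np.linspace(-1, 1, n_test)
    return np.max(np.abs(np.polynomial.chebyshev.chebval(x, coeffs) - f(x)))

f = lambda x: np.exp(x) * np.sin(5 * x)   # smooth (entire) test function
for n in (4, 8, 16, 32):
    print(n, cheb_interp_error(f, n))     # error falls off exponentially in n
```

Doubling the number of basis functions here gains many digits of accuracy at once, which is why spectral codes reach a target truncation error with far fewer grid points than finite-difference methods.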


where x_p, α_p, ω_p and v_p denote the location, circulation, vorticity and volume of the respective particle. The Lagrangian formulation of particle methods avoids the explicit discretization of the convective term in the governing transport equations and the associated stability constraints. The particle positions are modified according to the local flow. Large velocity gradients lead to local accumulation or spreading of particles, which results in a loss of accuracy of the computation. Non-linear stability implies that particle paths are not allowed to cross, which results in the time-step constraint dt ≤ C ‖∇u‖_∞^{-1}. This condition is often less demanding than classical CFL conditions. However, it implies that maintaining a regular particle distribution is an important issue. Therefore the rather irregular particle positions are projected onto a regular mesh by interpolating the particle data onto the grid. Thereafter new particles are created at regularly distributed positions. Solving the diffusion part of the equations uses explicit solvers that are stable under the condition ν dt ≤ C h², which is generally not a severe limitation for 3D computations with moderate to high Reynolds numbers.
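The remeshing step described above, projecting irregular particle positions onto a regular mesh, can be sketched in 1D with simple linear (hat-function) weights. The grid, particle positions, and strengths below are illustrative; production vortex codes typically use higher-order kernels such as M4':

```python
# Sketch of particle-to-grid remeshing in 1D with linear (hat-function)
# weights. Positions and strengths are illustrative.
import numpy as np

def remesh_1d(x_p, w_p, grid):
    """Deposit particle strengths w_p at positions x_p onto a uniform grid."""
    h = grid[1] - grid[0]
    w_grid = np.zeros_like(grid)
    for x, w in zip(x_p, w_p):
        i = int((x - grid[0]) // h)          # index of the left grid point
        i = min(max(i, 0), len(grid) - 2)
        frac = (x - grid[i]) / h             # linear interpolation weights
        w_grid[i] += (1 - frac) * w
        w_grid[i + 1] += frac * w
    return w_grid                            # new particles sit at `grid`

grid = np.linspace(0.0, 1.0, 11)
x_p = np.array([0.12, 0.48, 0.53])           # irregular particle positions
w_p = np.array([1.0, 2.0, 0.5])              # particle strengths
print(remesh_1d(x_p, w_p, grid))
```

Linear weights conserve the total strength exactly (the two weights for each particle sum to one), which is the minimal requirement for a remeshing kernel.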


and 50° from the base profile [1]. The engineered blades were screened in a simple wooden smoke tunnel, with white smoke as the working fluid. The research indicated that the blade curvature controls the degree of flow deviation from the blade surface, and it is also apparent from the research that shallow angles hold superior promise for the compressor unit's efficiency. Ujjawal and Joshi [2] designed a five-stage axial-flow compressor using the mean-line technique, and blade coordinates were generated using NACA 65410 profiles. SolidWorks modeling was used to create the first stage, and to corroborate the results, CFD simulation was performed using ANSYS CFX. Comparing the theoretical layout with the analytical results, it was observed that the findings of the computational fluid dynamics analysis are consistent within the acceptable range of the theoretical results. Ahmed et al. [3] studied the design and optimization of multistage axial compressors. The objective of the work was to define a methodology for the design and analysis of multistage axial-flow compressors. At the conception stage of a fifteen-stage compressor with inlet guide vanes (IGV), a numerical methodology was implemented to optimize effectiveness. Two computer programs were produced using Visual Basic for the design and optimization of the compressor through a meridional flow analysis under the premise of axisymmetric flow properties. Findings show that this modeling approach is much easier than regular computational methods, which involve much more modeling/programming work and machine runtime. Zhang et al. [4] explored an aerodynamic design based on the S1/S2 stream surface hypothesis that includes through-flow design on the S2 stream surface, blade profile design, and stacked blade models

What heightens this difference in approach is the fact that most standard data mining is concerned mostly with describing, but not explaining, the patterns and trends. In contrast, medicine needs those explanations because a slight difference could change the balance between life and death. For example, anthrax and influenza share the same symptoms of respiratory problems. Lowering the threshold signal in a data mining experiment may raise an anthrax alarm when there is only a flu outbreak. The converse is even more fatal: a perceived flu outbreak turns out to be an anthrax epidemic (Wong et al. 2005). It is no coincidence that we found that, in most of the data mining papers on disease and treatment, the conclusions were almost always vague and cautious. Many would report encouraging results but recommend further study. This failure to be conclusive indicates the current lack of credibility of data mining in these particular niches of healthcare.

To tackle the final part of the identification problem, namely recovering the features of the physical flow excitation from the modal excitation results of Figure 3, these results may be processed in various manners. Any identification approach used will obviously depend on what is hypothesized about the unknown excitation field under investigation. Here we will assume that the flow turbulence excitations to be identified actually behave as modeled through the general formulation developed in Part 1. For identification of the unknown excitation field, the space domain [0, L] will be decomposed in


is very time-consuming for high CNT loading cases since there may be multiple contact points between one CNT and other neighboring CNTs. However, for the low CNT loading cases discussed here, this operation is comparatively simple. The shortest distance (0.47 nm) is determined by matching the numerically predicted conductivity and piezoresistivity with the experimental ones. It is reasonable to set this shortest distance to 0.47 nm since, in general, the equilibrium distance between two independent carbon atomistic structures under the Lennard-Jones potential or van der Waals force ranges approximately from 0.3 to 0.5 nm. Furthermore, it is interesting to find that this shortest distance is much lower than the average diameter of the CNTs in the numerical simulations (i.e., 50 nm). Therefore, the range of distance d between the central lines of two cylindrical CNTs over which the tunneling effect is introduced should be [D + d_min, D + d_max], where d_min and d_max are 0.47 nm and
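The tunneling criterion above reduces to a simple range check on the centre-line distance d. In this hedged sketch, D and d_min come from the text, while d_max is left as a free parameter because its value is cut off in the excerpt:

```python
# Sketch of the tunneling-distance criterion: a tunneling pathway is
# introduced only when the centre-line distance d between two cylindrical
# CNTs of diameter D lies in [D + d_min, D + d_max]. D and d_min are taken
# from the text; d_max is a free parameter (its value is cut off above).

D_NM = 50.0        # average CNT diameter (nm), from the text
D_MIN_NM = 0.47    # shortest tunneling gap (nm), matched to experiment

def forms_tunneling_junction(d_nm, d_max_nm):
    """True if the surface-to-surface gap d - D lies in [d_min, d_max]."""
    gap = d_nm - D_NM
    return D_MIN_NM <= gap <= d_max_nm

# Illustrative distances, with a hypothetical d_max of 1.4 nm:
print(forms_tunneling_junction(50.6, d_max_nm=1.4))   # gap 0.6 nm: junction
print(forms_tunneling_junction(52.0, d_max_nm=1.4))   # gap 2.0 nm: too far
```

In a percolation simulation this check would be run over every CNT pair whose bounding regions overlap, which is why the text notes it becomes costly at high loading.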


A system with ECG monitoring and low-energy LED light stimulation is proposed. In the study, the ECG signal is processed to obtain time-domain and frequency-domain HRV measures, from which the ANS is analyzed. When the calculated values fall outside the self-defined criteria, the system reminds the user to turn on the LED to stimulate a specific body site (e.g., the Neiguan point, PC6) to balance the ANS. In addition, the operating frequency and the dosage are adjustable; the default operating frequency is 10 Hz with a 50% duty cycle. The data and control parameters can be exchanged between the embedded system and the computer over UART for storage, display, calculation, and analysis. In the future, more physiological signals, such as electroencephalogram and photoplethysmography, could be incorporated into the system as additional judgment references.

So far we have considered settings with a small number of similar domains. While this is typical of multi-task problems, real-world settings present many domains which do not all share the same behaviors. Online algorithms scale to numerous examples, and we desire the same behavior for numerous domains. Consider a spam filter used by a large email provider, which filters billions of emails for millions of users. Suppose that spammers control many accounts and maliciously label spam as legitimate. Alternatively, subsets of users may share preferences. Since behaviors are not consistent across domains, shared parameters cannot be learned. We seek algorithms robust to this behavior.

In this paper, we examine the rising phenomena found in a two-degree-of-freedom vibro-impact system as a system parameter is varied, and find similar results to [14] regarding the drop in sticking time. We demonstrate using numerical simulations that rising and multi-sliding are qualitatively equivalent bifurcation events by observing that the sticking solutions become tangent to the boundary of the sticking region at a rising event, analogous to results for sliding orbits shown by [18].


15 Li, Wei, et al. "Local Binary Patterns and Extreme Learning Machine for Hyperspectral Imagery Classification." IEEE Transactions on Geoscience and Remote Sensing 53.7 (2015): 1-13.
16 Huang, Xin, and L. Zhang. "An SVM Ensemble Approach Combining Spectral, Structural, and Semantic Features for the Classification of High-Resolution Remotely Sensed Imagery." IEEE Transactions on Geoscience and Remote Sensing 51.1 (2013): 257-272.


The rest of this paper is organized as follows. Section  provides some preliminaries. In Section , by employing the orthogonal spherical polynomial approximation and the spectral theory of compact operators, we derive the error estimates of the approximate eigenvalues and eigenfunctions. In Section , by adopting orthogonal spherical basis functions, we establish the discrete model with sparse mass and stiffness matrices, which is very efficient for finding the numerical solutions of biharmonic eigenvalue equations on the spherical domain. In Section , we provide some numerical examples to validate that the theoretical results are correct. Finally, we provide some conclusions in Section .


APPLICATION OF SPECTRAL METHODS IN ECONOMIC DATA ANALYSIS, by Richard Deane Terrell. A thesis presented to the Australian National University for the Degree of Doctor of Philosophy. The results presen[.]


If this smoothed periodogram is to produce good estimates of the average spectral density in each band, the true function f(λ) must not vary too much within each band; or if it does vary, it should vary about the centre of the band in such a way that the biases on either side cancel out. This is the most vital consideration in the estimation of spectra. The whole problem of seasonal adjustment can be viewed as one of spectral estimation, and the smoothness requirement is the reason why we first remove the trend and seasonal components, since we have prior knowledge of the location of these spectral peaks. We will then estimate the spectrum of the residuals, which we expect to be relatively smooth and which, as we
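The band-averaged (smoothed) periodogram discussed here can be sketched with a simple Daniell, i.e. moving-average, smoother. The test series and the bandwidth are illustrative, and the mean subtraction stands in for the fuller trend and seasonal removal described above:

```python
# Sketch of a band-averaged (smoothed) periodogram using a Daniell
# (moving-average) smoother. Series and bandwidth are illustrative.
import numpy as np

def smoothed_periodogram(x, m=3):
    """Periodogram of x averaged over 2m+1 neighbouring frequency bins."""
    n = len(x)
    x = x - x.mean()                          # crude trend/mean removal
    pgram = np.abs(np.fft.rfft(x)) ** 2 / n   # raw periodogram
    kernel = np.ones(2 * m + 1) / (2 * m + 1)
    return np.convolve(pgram, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.arange(256)
x = np.sin(2 * np.pi * t / 16) + 0.5 * rng.standard_normal(256)
spec = smoothed_periodogram(x)
print(np.argmax(spec))    # dominant bin, near 256/16 = 16
```

Averaging over neighbouring bins reduces the variance of the raw periodogram at the cost of bias when f(λ) varies within the band, which is exactly the trade-off the passage describes.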


In this paper, a GPM is proposed to estimate the DOA of narrow-band incident signals in a uniform linear array. The method is an expansion of the traditional PM. It uses different block structures of the array manifold to obtain a new spectral function without EVD, by which the array received data can be utilized more effectively. Simulation results show that GPM performs much better than PM at low SNR and small snapshot numbers. A corresponding GRPM is also obtained from GPM, in the manner of root-MUSIC. Its performance is very close to root-MUSIC, but much better than RPM and ESPRIT. This study can thus provide a good reference for the associated problem of DOA estimation.

[12] Sarika Tale, "Time and Frequency Domain Analysis of Heart Rate Variability Signal in Prognosis of Type 2 Diabetic Autonomic Neuropathy," International Journal of Engineering Science and Technology (IJEST), ISSN: 0975-5462, vol. 3, no. 4, April 2011.
[13] Stevan Silbernagl, Florian Lang. 2000, Color atlas of

There is an exceptional amount of computer resources needed to simulate a system that exhibits gravitational radiation. This is due to the field equations of GR, which consist of ten coupled nonlinear PDEs. A necessity for numerical relativity is finding accurate initial data to begin the simulation. Unlike in classical Newtonian physics, where the initial data consist of initial positions and velocities, in GR we need to initially specify the space-time metric and curvature. The equations for initial data, which encompass the space-time metric and curvature at time zero, are nonlinear elliptic PDEs of the form
