All categories

[1] vixra:2401.0066 [pdf]
Infinity-Cosmoi and Fukaya Categories for Lightcones
We propose some questions about Fukaya categories. Given a class of isomorphisms $0 \sim \tau$, where $\tau$ represents the truth value of a particle and $0$ is a zero object in a Fukaya category, what are its spectral homology theories? This is a variation on the works of P. Seidel and E. Riehl.
[2] vixra:2401.0059 [pdf]
Deep Learning-Based Approach for Stock Price Prediction
This paper presents a deep learning-based approach for stock price prediction in financial markets. The problem of accurately predicting future stock price movements is of crucial importance to investors and traders, as it allows them to make informed investment decisions. Deep learning, a branch of artificial intelligence, offers new perspectives for meeting this complex challenge. Deep learning models, such as deep neural networks, are capable of extracting complex features and patterns from large amounts of historical data on stock prices, trading volumes, financial news and other relevant factors. Using this data, deep learning and machine learning models can learn to recognize trends, patterns, and non-linear relationships between variables that can influence stock prices. Once trained, these models can be used to predict future stock prices. This study aims to find the most suitable model to predict stock prices using statistical learning with the deep learning and machine learning methods RNN, LSTM, GRU, SVM and Linear Regression, using data on Apple stock prices from Yahoo Finance from 2000 to 2024. The results showed that SVM modeling is not suitable for predicting Apple stock prices. In comparison, GRU showed the best performance in predicting Apple stock prices, with an MAE of 1.64 and an RMSE of 2.14, which exceeded the results of LSTM, Linear Regression and SVM. A limitation of this research is that only time series data were used. It is important to note, however, that stock price forecasting remains a complex challenge due to the volatile nature of financial markets and the influence of unpredictable factors. Although deep learning models can improve prediction accuracy, it is essential to understand that errors can still occur.
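The MAE and RMSE figures quoted in this abstract are standard regression metrics. A minimal Python sketch of how they are computed (the prices below are made up for illustration, not the paper's Apple data):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error: average absolute miss per prediction."""
    return sum(abs(a - p) for a, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large misses more than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical closing prices vs. model predictions (illustrative only).
actual    = [150.0, 152.5, 151.0, 153.2]
predicted = [151.0, 151.5, 152.0, 152.2]
print(mae(actual, predicted))
print(rmse(actual, predicted))
```

A model whose RMSE is much larger than its MAE is making a few large errors rather than many small ones; here the two agree because every miss has the same size.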
[3] vixra:2401.0058 [pdf]
Geometric Product of Two Oriented Points in Conformal Geometric Algebra
We compute and explore the full geometric product of two oriented points in conformal geometric algebra Cl(4,1) of three-dimensional Euclidean space. We comment on the symmetry of the various components, and state for all expressions also a representation in terms of point pair center and radius vectors.
[4] vixra:2401.0054 [pdf]
On the Sum of Reciprocals of Primes
Suppose that $y>0$, $0\leq\alpha<2\pi$ and $0<K<1$. Let $P^+$ be the set of primes $p$ such that $\cos(y\ln p+\alpha)>K$ and $P^-$ the set of primes $p$ such that $\cos(y\ln p+\alpha)<-K$. In this paper we prove $\sum_{p\in P^+}\frac{1}{p}=\infty$ and $\sum_{p\in P^-}\frac{1}{p}=\infty$.
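The divergence claimed here can be probed numerically: the partial sums of $1/p$ over the primes meeting the cosine condition keep growing as the prime limit grows. A sketch with illustrative parameters ($y=1$, $\alpha=0$, $K=0.5$, chosen here for demonstration, not taken from the paper):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, ok in enumerate(sieve) if ok]

def partial_reciprocal_sum(y, alpha, K, limit):
    """Sum of 1/p over primes p <= limit with cos(y*ln p + alpha) > K."""
    return sum(1.0 / p for p in primes_up_to(limit)
               if math.cos(y * math.log(p) + alpha) > K)

print(partial_reciprocal_sum(1.0, 0.0, 0.5, 1_000))
print(partial_reciprocal_sum(1.0, 0.0, 0.5, 10_000))
```

Of course a finite computation cannot establish divergence; it only illustrates that the set $P^+$ is well populated for generic parameters.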
[5] vixra:2401.0053 [pdf]
A Preliminary Theory of the Proton-Electron Mass Ratio
A recent investigation by the author revealed that the proton-electron mass ratio, a dimensionless number, can be approximated extremely well by a simple closed form expression, namely the fourth root of an integer. The result inspired the author to account for the fact. This paper details some preliminary theorizing that partly explains the fact. Why the integer has the value that it does remains an open question.
[6] vixra:2401.0049 [pdf]
Energetic Sheaves: Higher Quantization and Symmetry
This document is devoted to understanding and implementing the energy numbers, which were explicated very clearly by Emmerson in his recent paper. Through this line of reasoning, it becomes apparent that the algebra defined by the energy numbers is indeed the natural algebra for categorifying quantization. We also develop a notion of symmetric topological vector spaces, and forcing on said spaces motivated by homological mirror symmetry.
[7] vixra:2401.0048 [pdf]
Some Interesting Closed Form Expressions That Approximate Dimensionless Physical Constants
In this paper we enumerate closed form approximations of the fine structure constant, the proton-electron mass ratio, the neutron-electron mass ratio and the neutron-proton mass ratio. Instructions are given so that readers can pursue this enumeration technique for themselves.
[8] vixra:2401.0046 [pdf]
Redefining Mathematical Structure: From the Real Number Non-Field to the Energy Number Field
The traditional classification of the real numbers (R) as a complete ordered field is contested through critical examination of the field axioms, with a focus on the absence of a multiplicative inverse for zero. We propose an alternative mathematical structure based on Energy Numbers (E), deriving from quantum mechanics, which addresses the classical anomalies and fulfills the field properties universally, including an element structurally analogous to, but functionally distinct from, the zero in R.
[9] vixra:2401.0044 [pdf]
Baryon Rest Mass
A 3-manifold geometry offers an alternative theory for baryon rest mass splitting. A number of formulae are given that approximate a selection of lighter baryons within a few standard deviations of observation. The extreme density this theory necessitates is not compatible with the quark model.
[10] vixra:2401.0041 [pdf]
Co-Moving Coordinates Cannot Maintain Their Co-Moving Status in the Spatially Non-Flat Cosmological Models
It is thought that consideration of the General Relativity force law demonstrates that particles will retain their stationary status in the standard cosmological models. However this argument neglects the effects of pressure-dependent gravitational forces. When these forces are correctly included, what actually happens is that in spatially non-flat universes particles do not really remain co-moving, and indeed develop motion that is not consistent with the very symmetry condition these models were designed to manifest.
[11] vixra:2401.0040 [pdf]
Energy Loss of Electrons in Storage Rings
We examine, in general, the energy loss of electrons caused by the multiple Compton scattering of electrons on black body photons in the storage rings. We derive the scattering rate of electrons in the Planckian photon sea and then the energy loss of electrons per unit length. We discuss the possible generalization of our method in particle physics and consider a possible application of our formulas in case of motion of charged particles in the relic cosmological radiation.
[12] vixra:2401.0038 [pdf]
Source-Free Conformal Waves on Spacetime
Investigating conformal metrics on (Pseudo-)Riemannian spaces in any number of dimensions, it is shown that the pure scalar curvature R as the Lagrange density leads to a homogeneous d'Alembert equation on spacetime which allows for source-free wave phenomena. This suggests using the scalar curvature R itself, rather than the Hilbert-Einstein action R*sqrt(abs(g)), as the governing Lagrange density for General Relativity, in order to also find general, non-conformal solutions.
[13] vixra:2401.0030 [pdf]
Dictionary of Ayurveda by Dr. Ravindra Sharma and the Graphical Law
We study the Dictionary of Ayurveda by Dr. Ravindra Sharma, belonging to the Green Foundation, Dehradun, India. We draw the natural logarithm of the number of entries starting with a letter, normalised, vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BW(c=0.01), the magnetisation curve of the Ising Model in the Bragg-Williams approximation in the presence of an external magnetic field H, with $c=\frac{H}{\gamma\epsilon}=0.01$, where $\epsilon$ is the strength of coupling between two neighbouring spins in the Ising Model and $\gamma$, which is very large, is the number of nearest neighbours of a spin.
[14] vixra:2401.0029 [pdf]
FinFET Chronicles: Navigating the Silicon Horizon in the Era of Nanoarchitecture
This paper investigates FinFET transistor technology, aiming to address limitations in conventional planar CMOS transistors. The motivation stems from the escalating demand for high-performance, low-power devices in sub-10nm technology nodes. The challenges of short-channel effects, leakage currents, and scalability constraints in planar CMOS transistors have prompted exploration into novel architectures like FinFETs. This research provides an in-depth analysis of FinFETs' three-dimensional structure, fabrication, materials, and design considerations. We evaluate their advantages and limitations compared to traditional transistors in terms of power consumption, speed, and scalability. Our approach involves comparative studies utilizing simulations, material analysis, and empirical data. By merging theory with practical insights, this paper aims to offer a comprehensive view of FinFET technology's potential and challenges in modern semiconductor applications. In conclusion, this study sheds light on FinFET transistors, emphasizing their fabrication, design, and performance characteristics. It highlights their promise as a solution to semiconductor industry challenges, paving the way for next-generation electronic devices.
[15] vixra:2401.0028 [pdf]
A New Approach to Unification Part 3: Deducing Gravity Physics
In a series of 4 papers an approach to a unified physics is presented. In part 1 the foundation of such an approach is given. In part 2 it was shown how particle physics follows. In this 3rd part gravitational physics will be derived. In part 4 open fundamental questions of actual physics are answered and the concept of a new cosmology is introduced.
[16] vixra:2401.0027 [pdf]
Negative-Energy and Tachyonic Solutions in the Weinberg-Tucker-Hammer Equation for Spin 1
We considered Weinberg-like equations in the article [1] in order to construct the Feynman-Dyson propagator for spin-1 particles. An analog of the $S=1/2$ Feynman-Dyson propagator is presented in the framework of the $S=1$ Weinberg theory. This construction is based on the concept of the Weinberg field as a system of four field functions differing by parity and by dual transformations. We also analyzed the recent controversy in the definitions of the Feynman-Dyson propagator for the field operator containing the $S=1/2$ self/anti-self charge conjugate states in the papers by D. Ahluwalia et al \cite{Ahlu-PR} and by W. Rodrigues Jr. et al \cite{Rodrigues-PR,Rodrigues-IJTP}. The solution to this mathematical controversy is obvious: I proposed the necessary doubling of the Fock space (as in the Barut and Ziino works), thus extending the corresponding Clifford algebra. However, the logical interrelations of the different mathematical foundations with the physical interpretations are not so obvious. In this work we present some insights with respect to this for spins 1/2 and 1. Meanwhile, the article of N. Debergh et al considered our old ideas of doubling the Dirac equation, and other forms of T- and PT-conjugation [5]. Both algebraic equations $\mathrm{Det}(\hat p - m)=0$ and $\mathrm{Det}(\hat p + m)=0$ for $u$- and $v$- 4-spinors have solutions with $p_0=\pm E_p=\pm\sqrt{{\bf p}^2+m^2}$. The same is true for higher-spin equations (which may even have more complicated dispersion relations). Meanwhile, every textbook considers the equality $p_0=E_p$ for both $u$- and $v$-spinors of the $(1/2,0)\oplus(0,1/2)$ representation only, thus applying the Dirac-Feynman-Stueckelberg procedure for the elimination of negative-energy solutions. The recent Ziino works (and, independently, the articles of several other authors) show that the Fock space can be doubled at the quantum-field (QFT) level. We re-consider this possibility at the quantum-field level.
In this article we give additional bases for the development of the correct theory of higher-spin particles in QFT. It seems that it is impossible to consider relativistic quantum mechanics appropriately without negative energies, tachyons and the appropriate forms of the discrete symmetries, and their actions on the corresponding physical states.
[17] vixra:2401.0021 [pdf]
General Intelligent Network (GIN) and Generalized Machine Learning Operating System (GML) for Brain-Like Intelligence
This paper introduces a preliminary concept aimed at achieving Artificial General Intelligence (AGI) by leveraging a novel approach rooted in two key aspects. Firstly, we present the General Intelligent Network (GIN) paradigm, which integrates information entropy principles with a generative network, reminiscent of Generative Adversarial Networks (GANs). Within the GIN network, original multimodal information is encoded as low information entropy hidden state representations (HPPs). These HPPs serve as efficient carriers of contextual information, enabling reverse parsing by contextually relevant generative networks to reconstruct observable information. Secondly, we propose a Generalized Machine Learning Operating System (GML System) to facilitate the seamless integration of the GIN paradigm into the AGI framework. The GML system comprises three fundamental components: an Observable Processor (AOP) responsible for real-time processing of observable information, an HPP Storage System for the efficient retention of low entropy hidden state representations, and a Multimodal Implicit Sensing/Execution Network designed to handle diverse sensory inputs and execute corresponding actions.
[18] vixra:2401.0017 [pdf]
Presentation of the Cases of 4, 5 and 7 Parameters of the Bursa-Wolf Transformation
In this note, we present the Bursa-Wolf seven-parameter transformation from one geodetic system to another, showing how to determine the parameters by the method of least squares and how to calculate them numerically for the cases of 4, 5 or 7 parameters.
[19] vixra:2401.0015 [pdf]
Physics Recombined
Modern physics is no longer comprehensible to most people. This book explores the question of whether it is possible to develop a simpler and more intuitive physics. For this purpose, many ignored and overlooked physics relations are presented, which cannot be found in any textbook. These relations allow the development of a more holistic physics, which also builds a bridge to information theory and philosophy. Specifically, the book investigates, among other things, whether space consists of a structure of smallest elements, whether our universe is fractal, i.e. self-similar, whether we live in a black hole, what hidden commonalities there are in the fundamental forces, whether fundamental particles and hydrogen can be meaningfully described without quantum physics, whether black holes and fundamental particles are related, whether the formalism of special relativity really makes sense, whether particle spin can be explained classically, how gravity and quantum physics might be brought together, and what the Sommerfeld constant is all about. To shed light on these questions, parts of the works of Horst Thieme, Nassim Haramein, Dr. Randell Mills and Erik Verlinde are also presented.
[20] vixra:2401.0012 [pdf]
BERT-Based RASP: Enhancing Runtime Application Security with Fine-Tuned BERT
Runtime Application Security Protection (RASP) is crucial in safeguarding applications against evolving cyber threats. This research presents a novel approach leveraging a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) model as the cornerstone of a robust RASP solution. The fine-tuning process optimizes BERT's natural language processing capabilities for application security, enabling nuanced threat detection and mitigation at runtime. The developed RASP system harnesses BERT's contextual understanding to proactively identify and neutralize potential vulnerabilities and attacks within diverse application environments. Through comprehensive evaluation and experimentation, this study demonstrates the efficacy and adaptability of the BERT-based RASP solution in enhancing application security, thereby contributing to the advancement of proactive defense mechanisms against modern cyber threats.
[21] vixra:2401.0011 [pdf]
Linear Algebra and Group Theory
This is an introduction to linear algebra and group theory. We first review the linear algebra basics, namely the determinant, the diagonalization procedure and more, with the determinant constructed as it should be, as a signed volume. We then discuss the basic applications of linear algebra to questions in analysis. Then we get into the study of the closed groups of unitary matrices $G\subset U_N$, with some basic algebraic theory, and with a number of probability computations in the finite group case. In the general case, where $G\subset U_N$ is compact, we explain how the Weingarten integration formula works, and we present some basic $N\to\infty$ applications.
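The "determinant as a signed volume" viewpoint mentioned above is easy to see in the 2x2 case, where the determinant is the signed area of the parallelogram spanned by the rows. A minimal sketch:

```python
def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]]: the signed area of the
    parallelogram spanned by the row vectors (a, b) and (c, d)."""
    return a * d - b * c

print(det2(1, 0, 0, 1))   # unit square, positively oriented
print(det2(0, 1, 1, 0))   # same square with rows swapped: orientation flips
print(det2(2, 0, 0, 3))   # axis-aligned 2-by-3 rectangle
```

The sign records orientation: swapping the two rows negates the determinant while leaving the unsigned area unchanged.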
[22] vixra:2401.0010 [pdf]
Calculus and Applications
This is an introduction to calculus, and its applications to basic questions from physics. We first discuss the theory of functions $f:\mathbb R\to\mathbb R$, with the notion of continuity, and the construction of the derivative $f'(x)$ and of the integral $\int_a^b f(x)\,dx$. Then we investigate the case of the complex functions $f:\mathbb C\to\mathbb C$, and notably the holomorphic functions, and harmonic functions. Then, we discuss the multivariable functions, $f:\mathbb R^N\to\mathbb R^M$ or $f:\mathbb R^N\to\mathbb C^M$ or $f:\mathbb C^N\to\mathbb C^M$, with general theory, integration results, maximization questions, and basic applications to physics.
[23] vixra:2401.0009 [pdf]
Contribution to Goldbach's Conjectures
The internal structure of the natural numbers reveals the relation between the weak and strong Goldbach conjectures. Explicitly, if the weak Goldbach conjecture is true, then so is the strong Goldbach conjecture, and hence both of Goldbach's conjectures hold.
[24] vixra:2401.0008 [pdf]
Goldbach's Number Construction
Goldbach's numbers, the natural integers which satisfy Goldbach's conjectures, are all the odd integers and a subset of the even integers. Naturally, they appear in the proof of Goldbach's conjectures. In this paper, the construction of Goldbach's numbers is used as an approach to prove Goldbach's conjectures; hopefully, it will bring a happy end.
[25] vixra:2401.0007 [pdf]
Optical Fourier Transform with Direct Phase Measurement as a Computational Method
In the context of computational methods, an experiment on the optical Fourier transform with direct phase measurement (Nature, 2017) is analyzed; a correct assessment of its actual computational complexity is made, taking interference into account, a complexity that exceeds the capabilities of computational methods; and a technique for an experiment on the direct measurement of computational complexity is presented.
[26] vixra:2401.0006 [pdf]
Topological Property of Newton's Theory of Gravitation
We propose that a topological object, a gravitational knot, could exist in Newton's theory of gravitation, by assuming that the Ricci curvature tensor, in particular the metric tensor, contains a scalar field, i.e. a subset of the Ricci curvature tensor. The Chern-Simons action is interpreted as such a knot.
[27] vixra:2401.0005 [pdf]
Hidden Nonlinearity in Newton's Second Law of Gravitation in (2+1)-Dimensional Space-Time
By assuming that the Ricci curvature tensor consists of a subset (scalar) field, we propose that Newton's second law of gravitation in (2+1)-dimensional space-time, a linear equation, could have hidden nonlinearity. This subset field satisfies a non-linear subset field theory which, in the case of an empty space-time or a weak field, reduces to Newton's linear theory of gravitation.
[28] vixra:2312.0167 [pdf]
Complete Integers: Extending Integers to Allow Real Powers Have Discontinuities in Zero
We will define a superset of the integers (the complete integers), which contains the duals of the integers along parity (e.g. the odd zero, the even one, ...). Then we will see how they form a ring and how they can be used as exponents of real-number powers, in order to write functions which have a discontinuity at zero (the function itself or one of its derivatives), as for example |x| and sgn(x).
[29] vixra:2312.0160 [pdf]
Wellposedness for the Homogeneous Periodic Navier-Stokes Equation
We use the Strichartz estimate and commutator estimates to prove the decay property of the solution. The global results can then be obtained from the decay property.
[30] vixra:2312.0157 [pdf]
Investigation on Brocard-Ramanujan Problem
Exploring n! + 1 = m^2 for natural-number solutions beyond n = 4, 5, 7 confirms that no further solutions exist, validated by using the GCD Linear Combination Theorem.
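The equation n! + 1 = m^2 is easy to check by brute force for small n; the sketch below (a direct search, not the paper's GCD-based method) recovers exactly the three known Brown-number solutions:

```python
import math

def brocard_solutions(n_max):
    """Return all (n, m) with n <= n_max and n! + 1 = m^2 a perfect square."""
    sols, fact = [], 1
    for n in range(1, n_max + 1):
        fact *= n                      # running factorial, exact big-int arithmetic
        m = math.isqrt(fact + 1)       # integer square root, no float rounding
        if m * m == fact + 1:
            sols.append((n, m))
    return sols

print(brocard_solutions(100))  # -> [(4, 5), (5, 11), (7, 71)]
```

Using `math.isqrt` keeps the test exact even though 100! has 158 digits; a floating-point square root would silently fail long before that.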
[31] vixra:2312.0153 [pdf]
Active Learning for Question Difficulty Prediction
This paper focuses on question difficulty estimation (calibration), and its applications in educational scenarios and beyond. The emphasis is on the use of Active Learning to bound the minimum number of labelled samples that we need. It also explores using various SOTA methods for predicting question difficulty, with a specific focus on German textual questions using the Lernnavi dataset. The study refines preprocessing techniques for question data and metadata to improve question difficulty estimation.
[32] vixra:2312.0152 [pdf]
Diff+STN Architectures for External Orientation Correction
STNs are highly efficient at warping the input image for a downstream task. However, cascaded STNs are found to be able to learn more complex transformations. We attempt to leverage the multistep process of diffusion models to produce modules that have a similar effect to cascaded STNs.
[33] vixra:2312.0149 [pdf]
A Modified Born-Infeld Model of Electrons and a Numerical Solution Procedure
This work presents a modified Born-Infeld field theory and a numerical solution procedure to compute electron-like solutions of this field theory in the form of rotating waves of finite self-energy. For the well-known constants of real electrons, the computed solution results in a Born-Infeld parameter of 5x10^22 V/m, which is consistent with previous work.
[34] vixra:2312.0148 [pdf]
Stringy Motivic Spectra II: Higher Koszul Duality
This is a rendition of [2]. We study stringy motivic structures. This builds upon work dealing with $\mathbb{F}_p$-motives for a suitable prime $p$. In our case, we let $p$ be a long exact sequence spanning a path in a pre-geometric space. We superize a nerve from our previous study.
[35] vixra:2312.0143 [pdf]
On the Nonexistence of Solutions to a Diophantine Equation Involving Prime Powers
This paper investigates the Diophantine equation $p^r + (p+1)^s = z^2$, where $p > 3$ is prime, $s \geq 3$, and $z$ is an even integer. The focus of the study is to establish rigorous results concerning the existence of solutions within this specific parameter space. The main result presented in this paper demonstrates the absence of solutions under the stated conditions. The proof employs mathematical techniques to systematically address the case when the prime $p$ exceeds 3 and the exponent $s$ is equal to or greater than 2, while requiring the solution to conform to the constraint of an even $z$. This work contributes to the understanding of the solvability of the given Diophantine equation and provides valuable insights into the interplay between prime powers and the resulting solutions.
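The claimed nonexistence can be sanity-checked by a finite brute-force search; the sketch below (exponent ranges chosen arbitrarily for illustration, not the paper's proof technique) finds no even-z solutions for a few primes p > 3:

```python
import math

def has_solution(p, r_max, s_max):
    """Brute-force search for p^r + (p+1)^s = z^2 with z even,
    over 1 <= r <= r_max and 3 <= s <= s_max. Returns (r, s, z) or None."""
    for r in range(1, r_max + 1):
        for s in range(3, s_max + 1):
            total = p ** r + (p + 1) ** s
            z = math.isqrt(total)
            if z * z == total and z % 2 == 0:
                return (r, s, z)
    return None

# A finite check for a few primes p > 3, consistent with the paper's claim.
for p in (5, 7, 11, 13):
    print(p, has_solution(p, 12, 12))  # None for each p tested
```

In fact, for odd p the left side is odd plus a multiple of 8, hence odd modulo 4, while an even z gives z^2 divisible by 4, so the search is guaranteed to come up empty; the code merely makes that concrete.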
[36] vixra:2312.0142 [pdf]
The Tunisian Land Information System (TLIS)
To celebrate the launch of the "Tunisian Land Information System (TLIS)" project, we would like to recall some of the history of Tunisian geodesy since its inception, and describe in detail the terrestrial and spatial geodetic systems that have been used since then.
[37] vixra:2312.0141 [pdf]
Tumbug: a Pictorial, Universal Knowledge Representation Method
Since the key to artificial general intelligence (AGI) is commonly believed to be commonsense reasoning (CSR) or, roughly equivalently, discovery of a knowledge representation method (KRM) that is particularly suitable for CSR, the author developed a custom KRM for CSR. This novel KRM called Tumbug was designed to be pictorial in nature because there exists increasing evidence that the human brain uses some pictorial type of KRM, and no well-known prior research in AGI has researched this KRM possibility. Tumbug is somewhat similar to Roger Schank's Conceptual Dependency (CD) theory, but Tumbug is pictorial and uses about 30 components based on fundamental concepts from the sciences and human life, in contrast to CD theory, which is textual and uses about 17 components (= 6 Primitive Conceptual Categories + 11 Primitive Acts) based mainly on human-oriented activities. All the Building Blocks of Tumbug were found to generalize to only five Basic Building Blocks that exactly correspond to the three components {O, A, V} of traditional Object-Attribute-Value representation plus two new components {C, S}, which are Change and System. Collectively this set of five components, called "SCOVA," seems to be a universal foundation for all knowledge representation.
[38] vixra:2312.0138 [pdf]
A Promising Visual Approach to Solution of 82% of Winograd Schema Problems Via Tumbug Visual Grammar
This 2023 document is a wrapper that embeds the author's original 2022 article of the above title that has never been publicly available before. The embedded article is about Phase 1 (which is about Tumbug) and Phase 2 (which is about non-spatial reasoning) of the 5-phase Visualizer Project of the author, a project that is still in progress as of late 2023. The embedded article is currently being re-released by the author to supply more information about that project to the public, and for historical reasons. The embedded article was written before a much more thorough article about Phase 1 (viz., "Tumbug: A pictorial, universal knowledge representation method") became available in 2023, but the embedded article describes results from Phase 2 that have not yet been documented elsewhere.
[39] vixra:2312.0136 [pdf]
A Postulate-Free Treatment of Lorentz Boosts in Minkowski Space
Fundamental results of special relativity, such as the linear transformation for Lorentz boosts, and the invariance of the spacetime interval, are derived from a system of differential equations. The method so used dispenses with the need to make any physical assumption about the nature of spacetime.
[40] vixra:2312.0135 [pdf]
On the Notion of Carries of Numbers 2^n-1 and Scholz Conjecture
Applying the pothole method on the factors of numbers of the form $2^n-1$, we prove that if $2^n-1$ has carries of degree at most $$\kappa(2^n-1)=\frac{1}{2(1+c)}\left\lfloor \frac{\log n}{\log 2}\right\rfloor-1$$ for $c>0$ fixed, then the inequality $$\iota(2^n-1)\leq n-1+\left(1+\frac{1}{1+c}\right)\left\lfloor\frac{\log n}{\log 2}\right\rfloor$$ holds for all $n\in \mathbb{N}$ with $n\geq 4$, where $\iota(\cdot)$ denotes the length of the shortest addition chain producing $\cdot$. In general, we show that for all numbers of the form $2^n-1$ with carries of degree $$\kappa(2^n-1):=\left(\frac{1}{1+f(n)}\right)\left\lfloor \frac{\log n}{\log 2}\right\rfloor-1$$ with $f(n)=o(\log n)$ and $f(n)\longrightarrow \infty$ as $n\longrightarrow \infty$, for $n\geq 4$, the inequality $$\iota(2^n-1)\leq n-1+\left(1+\frac{2}{1+f(n)}\right)\left\lfloor\frac{\log n}{\log 2}\right\rfloor$$ holds.
[41] vixra:2312.0134 [pdf]
A Proof of the Wen-Yao Conjecture
In this article, we characterize the monomials in the values of the Carlitz-Goss factorial, defined on the completion of Fq(T) at a finite place, that are algebraic over Fq(T). In particular, this confirms the Wen-Yao conjecture stated in 2003, which gives a necessary and sufficient condition on a p-adic integer for the value of the Carlitz-Goss factorial at it to be algebraic over Fq(T). When restricted to rational arguments, we determine all the algebraic relations between the values taken by this function, which gives the counterpart at finite places of a result of Chang, Papanikolas, Thakur and Yu obtained in the case of the infinite place.
[42] vixra:2312.0133 [pdf]
Uchida's Identities and Simple Results of 1/0=0/0= Tan(π/2)= Cot(π/2)=0
In this note, we would like to show the simple results 1/0=0/0= tan(π/2)= cot(π/2)=0, based on the simple identities discovered by Keitaroh Uchida. The logic and results are all reasonable and exceptionally pleasant-looking for high school students.
[43] vixra:2312.0132 [pdf]
Homogenization of the First Initial-Boundary Value Problem for Periodic Hyperbolic Systems. Principal Term of Approximation
Let $\mathcal{O}\subset \mathbb{R}^d$ be a bounded domain of class $C^{1,1}$. In $L_2(\mathcal{O};\mathbb{C}^n)$, we consider a matrix elliptic second order differential operator $A_{D,\varepsilon}$ with the Dirichlet boundary condition. Here $\varepsilon >0$ is a small parameter. The coefficients of the operator $A_{D,\varepsilon}$ are periodic and depend on $\mathbf{x}/\varepsilon$. The principal terms of approximations for the operator cosine and sine functions are given in the $(H^2\rightarrow L_2)$- and $(H^1\rightarrow L_2)$-operator norms, respectively. The error estimates are of the precise order $O(\varepsilon)$ for a fixed time. The results in operator terms are derived from the quantitative homogenization estimate for approximation of the solution of the initial-boundary value problem for the equation $(\partial_t^2+A_{D,\varepsilon})\mathbf{u}_\varepsilon =\mathbf{F}$.
[44] vixra:2312.0129 [pdf]
Stringy Motivic Spectra
We consider strings from the perspective of stable motivic, homotopical QFT. Some predictions for the behavior of gauginos in both a Minkowski light cone and $5$-dimensional $\mathcal{A}d\mathcal{S}_5$-space are given. We show that there is a duality between working locking in a system of dendrites, and threshold edging at the periphery of a manifold. This work extends the work of [4] and [7] by providing a more mathematical interpretation of the realization of quasi-quanta in open topological dynamical systems. This interpretation incidentally involves the category of pure motives over $\mathfrak{C}$, and projections of fiber spectra to the category of stable homotopies.
[45] vixra:2312.0128 [pdf]
Space Time PGA in Geometric Algebra G(1,3,1)
G(1,3,1) is a degenerate-metric geometric algebra introduced in this paper as Space Time PGA [STPGA], based on the 3D Homogeneous PGA G(3,0,1) [3DPGA] and the 4D Conformal Spacetime CGA G(2,4,0) [CSTA]. In CSTA, there are flat (linear) geometric entities for hyperplane, plane, line, and point as inner product null space (IPNS) geometric entities and dual outer product null space (OPNS) geometric entities. The IPNS CSTA geometric entities are closely related, in form, to the STPGA plane-based geometric entities. Many other aspects of STPGA are borrowed and adapted from 3DPGA, including a new geometric entity dualization operation J_e that is an involution in STPGA. STPGA includes operations for spatial rotation, spacetime hyperbolic rotation (boost), and spacetime translation as versor operators. This short paper only introduces the basics of the STPGA algebra. Further details and applications may appear in a later extended paper or in other papers. This paper is intended as a quick and practical introduction to get started, including explicit forms for all entities and operations. Longer papers are cited for further details.
[46] vixra:2312.0125 [pdf]
Quadratic Phase Quaternion Domain Fourier Transform
Based on the quaternion domain Fourier transform (QDFT) of 2016 and the quadratic-phase Fourier transform of 2018, we introduce the quadratic-phase quaternion domain Fourier transform (QPQDFT) and study some of its properties, like its representation in terms of the QDFT, linearity, Riemann-Lebesgue lemma, shift and modulation, scaling, inversion, Parseval type identity, Plancherel theorem, directional uncertainty principle, and the (direction-independent) uncertainty principle. The generalization thus achieved includes the special cases of QDFT, a quaternion domain (QD) fractional Fourier transform, and a QD linear canonical transform.
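For orientation, the quadratic-phase Fourier transform underlying this construction is commonly written with a five-parameter quadratic phase; the form below is one common convention (the paper's exact normalization and parameter names may differ):

```latex
% Quadratic-phase Fourier transform with parameter set \Omega = (a,b,c,d,e);
% the QPQDFT replaces the complex exponential by its quaternionic analogue.
(Q_{\Omega} f)(u) \;=\; \int_{\mathbb{R}} f(t)\,
    e^{\,i\,\left(a t^{2} + b t u + c u^{2} + d t + e u\right)}\, dt .
```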
[47] vixra:2312.0123 [pdf]
The Transformation of Bursa-Wolf of Seven Parameters
In this note, we present the Bursa-Wolf seven-parameter transformation from one geodetic system to another, showing how to determine the 7 parameters by the method of least squares and calculate them numerically.
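The linearized small-angle form of the Bursa-Wolf transformation can be sketched as follows (the function name and parameter ordering are illustrative; the note's own notation and sign conventions may differ):

```python
# Illustrative sketch of the Bursa-Wolf (7-parameter Helmert) transformation:
# three translations (metres), three small rotations (radians), one scale
# correction. The linearized small-angle form common in geodesy is assumed.

def bursa_wolf(x, y, z, tx, ty, tz, rx, ry, rz, s):
    """Transform a Cartesian point (x, y, z) from one geodetic datum to another.

    tx, ty, tz : translations in metres
    rx, ry, rz : rotations in radians (small-angle approximation)
    s          : scale correction (dimensionless, typically a few ppm)
    """
    xp = tx + (1 + s) * (x + rz * y - ry * z)
    yp = ty + (1 + s) * (-rz * x + y + rx * z)
    zp = tz + (1 + s) * (ry * x - rx * y + z)
    return xp, yp, zp
```

With all seven parameters zero a point maps to itself, which is a quick sanity check; the parameters themselves are estimated by least squares from points known in both systems, as the note describes.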
[48] vixra:2312.0118 [pdf]
On Uniformly-accelerated Motion in an Expanding Universe
Null geodesics of a spacetime are a key factor in determining dynamics of particles. In this paper, it is argued that, within the scope of validity of Cosmological Principle where FLRW model can be safely employed, expansion of the Universe causes the null geodesics to accelerate, providing us with a universal acceleration scale a_0=cH_0. Since acceleration of null rays of spacetime corresponds to null rays of velocity space, demanding the invariance of acceleration of light a_0 yields a new metric for the velocity space which introduces time as a dimension of the velocity space. Being part of the configuration space, modification of distance measurements in velocity space alters the Euler-Lagrange equation and from there the equation of motion, Newton's Second Law. It is then seen that the resulting modification eliminates the need for Dark matter in clusters of galaxies and yields MOND as an approximation.
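As a quick numeric illustration of the scale a_0 = cH_0 claimed above (the value of H_0 below is an assumed 70 km/s/Mpc, not a figure from the paper):

```python
# Order-of-magnitude check of the acceleration scale a_0 = c * H_0,
# using an assumed Hubble constant of 70 km/s/Mpc.
c = 2.998e8            # speed of light, m/s
H0 = 70e3 / 3.086e22   # 70 km/s/Mpc converted to 1/s
a0 = c * H0            # universal acceleration scale, m/s^2 (~7e-10)
```

This comes out near 7 x 10^-10 m/s^2, the same order of magnitude as the accelerations at which MOND-like behavior is observed, consistent with the abstract's claim that MOND emerges as an approximation.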
[49] vixra:2312.0117 [pdf]
Solvable Quintic Equation X^5 - 45X + 108 = 0
We have previously proposed a quintic equation that is outside the available arguments of the solvable quintic equation. In this article, we give another quintic equation in Bring-Jerrard form and its root.
[50] vixra:2312.0116 [pdf]
Authentication Link: A Novel Authentication Architecture in IoT/IoB Environment
Authentication is the process of determining whether someone or something is who or what it claims to be, and there are many authentication methods for the digital environment. Digital authentication is divided into three main categories: 'What you have', 'What you know', and 'Who you are'. Furthermore, there are multi-factor authentications using a combination of two or more of these. However, these methods are always exposed to the risk of forgery, tampering, and theft. This paper proposes a novel authentication architecture that is suitable for the Internet of Things (IoT) and Internet of Behaviors (IoB) environment. Technologically, the proposed architecture is a token-based authentication method. However, this architecture is continuous, mimics the real analog world, and has the advantage that counterfeiting is immediately recognizable.
[51] vixra:2312.0115 [pdf]
A New Approach to Unification Part 2: Deducting Particle Physics
In a series of 4 papers an approach to a unified physics is presented. In part 1 the foundation of such an approach is given. Here in part 2 it will be shown how particle physics follows. In part 3 gravitational physics will be derived. In part 4 open fundamental questions of current physics are answered and the concept of a new cosmology is introduced.
[52] vixra:2312.0113 [pdf]
Nervous Equivariant Holonomy
One of the possible explanations for entanglement is a sort of perverse holonomy which acts on sheaves whose germs are eigenvectors for a tuple of local variables. We take baby steps towards realizing this model by introducing an equivariant form of holonomy. As a test category, we take U(1)-bundles whose outbound fibrations are Koszul nerves of degree (p+q)=n.
[53] vixra:2312.0112 [pdf]
On Wilker-Type Inequalities
In this paper, we present elementary proofs of Wilker-type inequalities involving trigonometric and hyperbolic functions. In addition, we propose some conjectures which extend and generalize the Wilker-type inequalities.
[54] vixra:2312.0111 [pdf]
On Generalized Li-Yau Inequalities
We generalize the Li-Yau inequality for second derivatives and we also establish a Li-Yau type inequality for fourth derivatives. Our derivation relies on the representation formula for the heat equation.
[55] vixra:2312.0108 [pdf]
Complete Operations
The Operator axioms have produced complete operations with real operators. Numerical computations have been constructed for complete operations. The classic calculator could only execute 7 operator operations: the $+$ operator operation (addition), $-$ operator operation (subtraction), $\times$ operator operation (multiplication), $\div$ operator operation (division), ^ operator operation (exponentiation), $\surd$ operator operation (root extraction), and log operator operation (logarithm). In this paper, we invent a complete calculator as a software calculator to execute complete operations. The experiments on the complete calculator could directly prove such a corollary: the Operator axioms are consistent.
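The seven classical operator operations enumerated above can be sketched as a table of functions (an illustration of the "classic calculator" baseline only, not of the paper's complete calculator):

```python
import math

# The seven operator operations of the "classic calculator" named in the
# abstract; the paper's "complete operations" extend beyond this fixed list.
classic_ops = {
    "+":    lambda a, b: a + b,          # addition
    "-":    lambda a, b: a - b,          # subtraction
    "*":    lambda a, b: a * b,          # multiplication
    "/":    lambda a, b: a / b,          # division
    "^":    lambda a, b: a ** b,         # exponentiation
    "root": lambda a, b: a ** (1.0 / b), # b-th root extraction
    "log":  lambda a, b: math.log(b, a), # logarithm of b to base a
}
```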
[56] vixra:2312.0105 [pdf]
Fine-tuning BERT for HTTP Payload Classification in Network Traffic
Fine-tuning pre-trained language models like Bidirectional Encoder Representations from Transformers (BERT) has exhibited remarkable potential in various natural language processing tasks. In this study, we propose and investigate the fine-tuning of BERT specifically for the classification of HTTP payload representations within network traffic. Given BERT's adeptness at capturing semantic relationships among tokens, we aim to harness its capabilities for discerning normal and anomalous patterns within HTTP payloads. Leveraging transfer learning by fine-tuning BERT, our methodology involves training the model on a task-specific dataset to adapt its pre-trained knowledge to the intricacies of HTTP payload classification. We explore the process of fine-tuning BERT to learn nuanced representations of HTTP payloads and effectively distinguish between normal and anomalous traffic patterns. Our findings reveal the potential efficacy of fine-tuned BERT models in bolstering the accuracy and efficiency of anomaly detection mechanisms within network communications.
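The payload-representation step described above can be illustrated with a small preprocessing sketch (an assumed pipeline, not the paper's exact one): raw HTTP payloads are normalized into coarse tokens suitable for a BERT-style tokenizer, with a binary normal/anomalous label attached.

```python
import re

# Illustrative preprocessing for HTTP payload classification: split a payload
# into word, digit, and punctuation tokens, and pair it with a binary label.

def payload_to_tokens(payload):
    """Split an HTTP payload into coarse tokens (words, digits, punctuation)."""
    return re.findall(r"[A-Za-z]+|[0-9]+|[^A-Za-z0-9\s]", payload)

def make_example(payload, is_anomalous):
    """Pair a tokenized payload with a label: 1 = anomalous, 0 = normal."""
    return {"tokens": payload_to_tokens(payload), "label": int(is_anomalous)}

# A payload carrying a SQL-injection fragment, labeled anomalous.
ex = make_example("GET /index.html?id=1' OR '1'='1", True)
```

In the actual fine-tuning stage these token sequences would be fed to a pre-trained BERT tokenizer and sequence-classification head; that stage is omitted here since it depends on the specific model checkpoint and dataset used.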
[57] vixra:2312.0104 [pdf]
Convergence Condition for the Newton-Raphson Method: Application in Real Polynomial Functions
The Newton-Raphson method applies to the numerical calculation of the roots of real functions, through successive approximations toward the root of the function. The Newton-Raphson method has the drawback that it does not always converge. This work establishes the convergence condition of the Newton-Raphson method for real functions in general; once the convergence condition is met, the method always converges to the root of the function. In this work, the application of the convergence condition is developed specifically for solving real polynomial functions.
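The basic scheme that such a convergence condition applies to can be sketched as follows (a minimal illustration with a simple iteration guard; the paper's specific convergence condition is not reproduced here):

```python
# Minimal Newton-Raphson iteration for a real function f with derivative
# fprime: successive approximations x <- x - f(x)/f'(x) toward a root.

def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = fprime(x)
        if d == 0:  # derivative vanished: the method breaks down here
            raise ZeroDivisionError("f'(x) = 0 at iterate")
        x = x - fx / d
    return x

# Example: the root of the polynomial x^2 - 2 starting from x0 = 1
# converges to sqrt(2).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```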
[58] vixra:2312.0100 [pdf]
On the Origins of Mass
Probability, as manifested through entropy, is presented in this study as one of the most fundamental components of physical reality. It is demonstrated that the quantization of probability allows for the introduction of the mass phenomenon. In simple terms, gaps in probability impose resistance to change in movement, which observers experience as inertial mass. The model presented in the paper builds on two probability fields that are allowed to interact. The resultant probability distribution is quantized, producing discrete probability levels. Finally, a formula is developed that correlates the gaps in probability levels with physical mass. The model allows for the estimation of quark masses. The masses of the proton and neutron are arrived at with an error of under 0.04%. The masses of sigma baryons are calculated with an error between 0.2% and 0.05%. The W boson mass is calculated with an error of under 0.5%. The model explains why the proton is stable while other baryons are not, and it gives an explanation of the origins and nature of dark matter. Throughout the text, the article illustrates that the approach required to describe the nature of mass is incompatible with the mathematical framework needed to explain other physical phenomena.
[59] vixra:2312.0093 [pdf]
Galois Connections on a Brane
On an absolute frame of reference, a Galois connection to a d-brane may be prescribed such that the data of the frame becomes locally presentable. We describe these connections briefly.
[60] vixra:2312.0092 [pdf]
Fixed Point Properties of Precompletely and Positively Numbered Sets
In this paper, we prove a joint generalization of Arslanov's completeness criterion and Visser's ADN theorem for precomplete numberings. Then we consider the properties of completeness and precompleteness of numberings in the context of the positivity property. We show that the completions of positive numberings are not their minimal covers and that the Turing completeness of any set A is equivalent to the existence of a positive precomplete A-computable numbering of any infinite family with positive A-computable numbering.
[61] vixra:2312.0090 [pdf]
Complex Curvature and Complex Radius
I define the notions of complex curvature and complex radius and prove that one of these complex numbers is exactly the inverse of the other.
[62] vixra:2312.0089 [pdf]
The Excess Mortality is Strongly Underestimated
This article analyses the conjecture that excess mortality is underestimated with the pandemic. I use the numbers from the CBS (Dutch Central Bureau for Statistics) as an example. As a baseline we take the expected mortality for 2021 and 2022 from 2019. I correct this expected mortality with the estimated number of people who died in earlier years than expected because of the pandemic. For 2021 this correction is 8K. The CBS expects the mortality to be almost equal to the estimate from 2019. Then the excess mortality increases from 16K (CBS) to 24K. I present the following idea to explain the difference. At the beginning of every year the numbers of people in year groups are usually adjusted by applying a historically determined percentage to the population at January first. Covid hits the weakest the hardest. This changes the distribution of the expected remaining life years in the year group, and thus the average expected remaining life years. Hence the percentage has to be adjusted. Then the expected mortality decreases and the excess mortality increases. The excess mortality within a year are people who, for example, died in April from covid but who would have died in October without the pandemic. With this number the total excess mortality rises by 6K to 30K. Excess mortality is divided into covid and non-covid. The large increase in non-covid deaths is striking. The analysis supports the conjecture that excess mortality is underestimated. Note: the numbers in this article are for the Netherlands. For your own country, use the appropriate numbers.
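The bookkeeping in the abstract can be retraced directly (all figures in thousands, as given above):

```python
# Reproducing the excess-mortality arithmetic from the abstract
# (Netherlands, figures in thousands): the CBS estimate is raised first
# by the baseline correction, then by the within-year excess deaths.
cbs_excess = 16          # CBS excess-mortality estimate (K)
baseline_correction = 8  # died in earlier years than expected (K)
within_year = 6          # e.g. died in April from covid instead of October (K)

corrected = cbs_excess + baseline_correction  # 24K after baseline correction
total = corrected + within_year               # 30K total excess mortality
```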
[63] vixra:2312.0088 [pdf]
Expected Mortality: Adjustment for Distribution in Age-Groups
This article discusses the influence of a disturbance like covid on the calculation of life expectancy in year groups etcetera. Life expectancies in year-groups are usually adjusted at the beginning of the year based on the population at the beginning of the year. This is done with a percentage based on previous years. This percentage is a reflection of volume. With the pandemic the weak were hit heavily by covid. A consequence is that the distribution of life expectancy in the year groups changes. This increases the life expectancy and decreases the expected mortality in the year group. Then the calculation for the year groups has to be adjusted accordingly. In this article I give an example of such an adjustment. One can adjust similar statistics likewise.
[64] vixra:2312.0087 [pdf]
On Non-Principal Arithmetical Numberings and Families
The paper studies Σ^0_n-computable families (n ⩾ 2) and their numberings. It is proved that any non-trivial Σ^0_n-computable family has a Σ^0_n-computable non-principal numbering that is complete with respect to any of its elements. It is established that if a Σ^0_n-computable family is not principal, then any of its Σ^0_n-computable numberings has a minimal cover.
[65] vixra:2312.0086 [pdf]
Gravitational Waves Background, as Well as Some UFO, FRB and Supernova Flares, Are Due to Compressibility of the Spacetime
The recently observed gravitational wave background is explained in terms of the quantum modification of the general relativity (Qmoger). Some UFO, FRB and supernova flares also can be explained in terms of Qmoger.
[66] vixra:2312.0085 [pdf]
Geometric Entity Dualization and Dual Quaternion Geometric Algebra in PGA G(3,0,1) with Double PGA G(6,0,2) for General Quadrics
In Geometric Algebra, G(3,0,1) is a degenerate-metric algebra known as PGA, originally called Projective Geometric Algebra in prior literature. It includes within it a point-based algebra, plane-based algebra, and a dual quaternion geometric algebra (DQGA). In the point-based algebra of PGA, there are outer product null space (OPNS) geometric entities based on a 1-blade point entity, and the join (outer product) of two or three points forms a 2-blade line or 3-blade plane. In the plane-based algebra of PGA, there are commutator product null space (CPNS) geometric entities based on a 1-blade plane entity, and the meet (outer product) of two or three planes forms a 2-blade line or 3-blade point. The point-based OPNS entities are dual to the plane-based CPNS entities through a new geometric entity dualization operation J_e that is defined by careful observation of the entity duals in same orientation and collected in a table of basis-blade duals. The paper contributes the new operation J_e and its implementations using three different nondegenerate algebras {G(4),G(3,1),G(1,3)} as forms of Hodge star dualizations, which in geometric algebra are various products of entities with nondegenerate unit pseudoscalars, taking a grade k entity to its dual grade 4-k entity copied back into G(3,0,1). The paper contributes a detailed development of DQGA. DQGA represents and emulates the dual quaternion algebra (DQA) as a geometric algebra that is entirely within the even-grades subalgebra of PGA G(3,0,1). DQGA has a close relation to the plane-based CPNS PGA entities through identities, which allows one to derive dual quaternion representations of points, lines, planes, and many operations on them (reflection, rotation, translation, intersection, projection), all within the dual quaternion algebra. In DQGA, all dual quaternion operations are implemented by using the larger PGA algebra.
The DQGA standard operations include complex conjugate, quaternion conjugate, dual conjugate, and part operators (scalar, vector, tensor, unit, real, imaginary), and some new operations are defined for taking more parts (point, plane, line) and taking the real component of the imaginary part by using the new operation J_e. All DQGA entities and operations are derived in detail. It is possible to easily convert any point-based OPNS PGA entity to and from its dual plane-based CPNS PGA entity, and then also convert any CPNS PGA entity to and from its DQGA entity form, all without changing orientation of the entities. Thus, each of the three algebras within PGA can be taken advantage of for what it does best, made possible by the operation J_e and identities relating CPNS PGA to DQGA. PGA G(3,0,1) is then doubled into a Double PGA (DPGA) G(6,0,2) including a Double DQGA (DDQGA), which feature two closely related forms of a general quadric entity that can be rotated, translated, and intersected with planes and lines. The paper then concludes with final remarks.
[67] vixra:2312.0075 [pdf]
Unification of Gravity and Electromagnetism
Maxwell wrote that he wanted to "leaven" his Treatise on Electromagnetism with Quaternions. Maxwell died before doing this. Silberstein and Conway accomplished this partially. This presentation claims the Lorentz condition field is the gravitational potential, resulting in a Gravitic-Electromagnetic unification.
[68] vixra:2312.0073 [pdf]
The Intent of Hume's Discussion on the Existence of the External World
Exploring the concept of the external world's existence has been a focal point within the domain of epistemological inquiry throughout the annals of philosophy. Numerous thinkers have grappled with the question of whether one can truly fathom the existence of the external world and, if so, how such comprehension can be attained. Among these intellectual explorers stands David Hume, who approaches our perceptions of the external world as deeply rooted in matters of belief. Hume critically examines the belief in the enduring and distinct presence of external entities, even when these entities escape active perception. This inquiry delves into the origins of the belief in an external world that persists independently of our cognitive processes and sensory experiences, probing the cognitive faculties responsible for shaping such convictions. Through this exploration, it is asserted that Hume's primary aim is to illuminate the epistemological significance embedded within such beliefs.
[69] vixra:2312.0067 [pdf]
The Classical Derivation of the Remnant Mass of a Quasi-Binary Black Hole
In the present article, we classically derive an analytic formula for the Remnant Mass of a Quasi-Binary Black Hole. The Quasi Black Hole concept comes from a Theory Of Everything we developed a few years ago.
[70] vixra:2312.0062 [pdf]
A New Approach to Unification Part 1: How a Toe is Possible
As, despite intense search, there is at present apparently no approach leading to a theory including the standard model of particle physics and general relativity, it is discussed whether such a theory is possible at all. In the following it is shown that a ToE is possible if the either/or condition of current unification theories for background-dependence or -independence is replaced by a both/and. In this part 1, the foundation of such a theory is presented. In the following parts 2 and 3, particle and gravitational physics are derived from this foundation, and in part 4 open fundamental questions of current physics are answered by a new interpretation of physical quantities and an outline of a new cosmology is given.
[71] vixra:2312.0058 [pdf]
Galileo's Undone Gravity Experiment: Part 2
Certain preconceptions about the physical world inherited from antiquity as yet permeate our established theories of physics and cosmology. Tacitly prominent in this world view is the fact that humans evolved on a 5.97 x 10^24 kg ball of matter.One of the consequences is the "relativistic" point of view, according to which accelerometers may or may not be telling the truth, whether they fall (a = 0) or when they are "at rest" on a planet’s surface (a > 0). The result of an experiment proposed by Galileo in 1632, but not yet performed, would unequivocally prove whether this schizoid relationship with accelerometers rings true or not.An imaginary alien civilization (of Rotonians) evolved on a rotating world in which the truthfulness of accelerometers is never doubted. Adopting a Rotonian perspective leads to a model of gravity according to which the result of Galileo’s experiment dramatically conflicts with the predictions of both Newton and Einstein.The consequences of this new perspective bear on and invite a rethink of many facets of established theories of physics and cosmology. Herein we discover that the Rotonian perspective is consistent with what we actually KNOW about the physical world and -- depending on the result of Galileo’s experiment -- it opens the door to a much more coherent, contradiction-free world view, which spans all scales of size, mass, and time.
[72] vixra:2312.0057 [pdf]
Galileo's Undone Gravity Experiment: Part 3
Failure of LIGO physicists to provide a spacetime diagram showing the simultaneous laser paths and gravitational waves propagating through their interferometer is argued to be fatal to the whole enterprise. After establishing the cogency of this argument, the seemingly "unhackable" multi-messenger event GRB170817A is similarly placed under suspicion. Claims to have detected the gravitational waves from a coalescing neutron star binary suffer the red flag of a prominent (and suspiciously placed) glitch which prevented the event from triggering a real time alert to the community. Altogether, we have many reasons to suspect that all the claims of having detected gravitational waves are false. LIGO is a hoax. Perhaps the most dramatic way to expose the charade would be to at last perform the simple gravity experiment proposed by Galileo in 1632. We predict a result that conflicts with both Newton's and Einstein's theories of gravity. If our prediction is confirmed, gravitational waves and much else about modern gravitational theory would be falsified. Even if the result of Galileo's experiment supports Newton and Einstein, we are way overdue to find out directly from Nature, instead of pretending to know, based on faith in popular theories.
[73] vixra:2312.0056 [pdf]
Bordisms and Worldlines II
This paper is a continuation of [2]. Here, we discuss twisted branes, the free loop superspace, and, in particular, a deformation of the modal lightcone which allows us to model cobordisms of generically small, portable, locally closed systems.
[74] vixra:2312.0052 [pdf]
New Exact Solution to Einsteins Field Equation Gives a New Cosmological Model
Haug and Spavieri have recently presented a new exact solution to Einstein's field equations. In this paper, we will explore how this new metric could potentially lead to a new model for the cosmos. In the Friedmann model, the cosmological constant must be introduced ad hoc in Einstein's field equations or, alternatively, directly into the Friedmann equation. However, a similar constant automatically emerges in our cosmological model directly from Einstein's original 1916 field equations, which initially did not include a cosmological constant. We will analyze this, and it appears that the cosmological constant is little more than an adjustment for the equivalence of the mass-energy of the gravitational field, which is not taken into account in other exact solutions but is addressed in the Haug and Spavieri solution. Our approach seems to indicate that the Hubble sphere can be represented as a black hole, a possibility that has been suggested by multiple authors, but this is a quite different type of black-hole universe that seems to be more friendly than that of a Schwarzschild black hole.
[75] vixra:2312.0051 [pdf]
Minimal Polynomials and Multivector Inverses in Non-Degenerate Clifford Algebras
Clifford algebras are an active area of mathematical research with numerous applications in mathematical physics and computer graphics, among many others. The paper demonstrates an algorithm for the computation of inverses of multivectors in a non-degenerate Clifford algebra of arbitrary dimension. This is achieved by translating the classical Faddeev-LeVerrier-Souriau (FVS) algorithm for characteristic polynomial computation into the language of the Clifford algebra. The FVS algorithm is implemented using the Clifford package in the open-source Computer Algebra System Maxima. Symbolic and numerical examples in different Clifford algebras are presented.
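The FVS recursion that the paper translates into Clifford algebra can be illustrated in its classical matrix form; the pure-Python sketch below (function names hypothetical) computes the characteristic-polynomial coefficients step by step and, when the matrix is invertible, obtains the inverse as a by-product:

```python
# Classical Faddeev-LeVerrier-Souriau iteration for a real n x n matrix A:
#   M_1 = I,  c_k = -tr(A M_k)/k,  M_{k+1} = A M_k + c_k I.
# The characteristic polynomial is l^n + c_1 l^(n-1) + ... + c_n, and
# A^{-1} = -M_n / c_n when c_n != 0 (i.e. A is invertible).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def fvs_inverse(A):
    n = len(A)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    M = [row[:] for row in I]  # M_1 = I
    c = None
    for k in range(1, n + 1):
        AM = mat_mul(A, M)
        c = -sum(AM[i][i] for i in range(n)) / k  # c_k = -tr(A M_k)/k
        if k < n:
            # M_{k+1} = A M_k + c_k I
            M = [[AM[i][j] + (c if i == j else 0.0) for j in range(n)]
                 for i in range(n)]
    return [[-M[i][j] / c for j in range(n)] for i in range(n)]
```

For A = diag(2, 3) the recursion yields the characteristic polynomial l^2 - 5l + 6 and the inverse diag(1/2, 1/3), matching direct computation.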
[76] vixra:2312.0050 [pdf]
Nodal Lines of Eigenfunctions of Laplacian in Plane
We prove Payne's nodal line conjecture for any bounded, simply connected, possibly non-convex domain $\Omega$ in the plane with smooth boundary; Payne conjectured that a second Dirichlet eigenfunction of the Laplacian in any simply connected bounded domain in the plane cannot have a closed nodal line.
[77] vixra:2312.0049 [pdf]
Note on the Resolution of the Equation β(φ)=Y Used in Geodesy
In this note, we give a method to solve the equation β(φ)=Y used in geodesy, where β(φ) is the length of the arc of the meridian of an ellipse or an ellipsoid from the equator to the point of geodetic latitude φ.
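One standard way to solve β(φ)=Y numerically is Newton iteration, using the fact that the derivative of the meridian arc is the meridian radius of curvature M(φ) = a(1−e²)/(1−e² sin²φ)^(3/2). The sketch below assumes GRS80-like ellipsoid parameters and a simple trapezoidal quadrature, which may well differ from the method of the note itself:

```python
import math

# Illustrative Newton solver for beta(phi) = Y on an ellipsoid.
A_AXIS = 6378137.0  # semi-major axis, metres (GRS80-like, assumed)
E2 = 0.00669438     # first eccentricity squared (approximate)

def meridian_radius(phi):
    """Meridian radius of curvature M(phi) = d beta / d phi."""
    return A_AXIS * (1 - E2) / (1 - E2 * math.sin(phi) ** 2) ** 1.5

def beta(phi, steps=2000):
    """Meridian arc length from the equator to latitude phi (trapezoid rule)."""
    h = phi / steps
    s = 0.5 * (meridian_radius(0.0) + meridian_radius(phi))
    for i in range(1, steps):
        s += meridian_radius(i * h)
    return s * h

def solve_beta(Y, phi0=0.5, tol=1e-9):
    """Newton iteration for beta(phi) = Y, with M(phi) as the derivative."""
    phi = phi0
    for _ in range(50):
        d = (beta(phi) - Y) / meridian_radius(phi)
        phi -= d
        if abs(d) < tol:
            break
    return phi
```

A round trip Y = β(φ), φ = solve_beta(Y) recovers the latitude to high accuracy, since β is smooth and strictly increasing.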
[78] vixra:2312.0047 [pdf]
Dark Energy, MOND and the Mirror Matter Universe
The purpose of this study is to entrench the Copernican principle into cosmology with regard to dark energy (DE). A dual-universe solution is proposed for both the scale and coincidence problems of DE which is simple and involves no 'fine-tuning'. It is also, in principle, testable and falsifiable. The model enables computation of the total entropy of the universe contained within the horizon, expressed holographically as a projection onto the area of the cosmic horizon in units of Planck area. We subsequently compute the Planck entropy, which takes an irreducibly simple form. A derivation of the relation $[DE]=\sqrt{m_{pl}\cdot H_0}$ is provided and we further show that this relation is valid in all (local, i.e. $H'_{\tau}=H'_0$) observer frames. We prove that the vacuum energy is exactly zero in this dual universe model. Lastly we propose that our analysis implies that the MOND paradigm is due to gravitational interaction of the two universes, and we compute the MOND acceleration scale $a_0$ and scale invariant $\mathcal{A}_0$ as a consequence of cosmology, completely independent of galaxy dynamics. Significantly, this allows us to bring the MOND paradigm into a cosmological model without modifying General Relativity.
[79] vixra:2312.0046 [pdf]
Scattering of Worldlines Along a Bordism
In this paper, ER bridges are discussed as bordisms. We treat these bordisms as fibers whose sections are holographically entangled to copies of $S^1$. Diffeomorphisms of these fibers are discussed, as well as the implications of replacing $S^1$ with the supercircle and of replacing its underlying algebra with a Lie superalgebra.
[80] vixra:2312.0045 [pdf]
The Vacuum Catastrophe Solved by Taking into Account Hawking-Bekenstein Black Hole Entropy
We will demonstrate that the vacuum catastrophe can be solved by utilizing Bekenstein-Hawking entropy and applying it to black hole type cosmology models, as well as to a large class of $R_h = ct$ models. Additionally, we will examine a recent exact solution to Einstein's field equation and explore how it may potentially resolve the vacuum catastrophe rooted in both steady-state universe and possibly growing black hole universe scenarios.
[81] vixra:2312.0042 [pdf]
Fine Structure Constant and Proton/Electron Mass Ratio
In this note a model is put forward whereby the proton has mass and charge shell radii in the ratio 1:1.68. The fine structure constant is proportional to the thickness of the shell. Two new formulae for calculating $\alpha$ are introduced. Eq 5 and Eq 6 are the centrepiece of this note. These make use of the usual set of fundamental constants, including the proton/electron mass ratio. Eq 6 gives the same value for $\alpha$ as standard formulae. However, it is suggested that in an optimal physics, this method with reasonable confidence gives a slightly lower value for $\alpha$, reliable to 12 decimal places, i.e., $0.007297352566$. Whether this is the actual fine structure constant depends on the veracity of the model and the accuracy of the proton/electron mass ratio.
[82] vixra:2312.0036 [pdf]
New Equivalent of the Riemann Hypothesis
In this article, it is demonstrated that if the zeta function does not have a sequence of zeros whose real part converges to 1, then it cannot have any zeros in the critical strip, showing that the Riemann Hypothesis is false.
[83] vixra:2312.0030 [pdf]
A 2-Pitch Structure
We have constructed a pitch structure. In this paper, we define a binary relation on the set of steps, so that the set becomes a circle set. We also define the norm of a key transpose. To apply the norm, we define a scale function on the circle set. Hence we may construct the 2-pitch structure over the circle set.
[84] vixra:2312.0027 [pdf]
Spectrum of Sunflower Hypergraphs
Hypergraphs are generalizations of graphs which have several useful applications. Sunflower hypergraphs are interesting hypergraphs, which become linear in some cases. In this paper, we discuss the Seidel spectrum of these hypergraphs.
[85] vixra:2312.0025 [pdf]
Some Remarks on the Generalization of Atlases
We generalize atlases for flat stacks over smooth bundles by constructing local-global bijections between modules of differing order. We demonstrate an adjunction between a special mixed module and a holonomy groupoid.
[86] vixra:2312.0008 [pdf]
Weighted Riemann Zeta Limits on the Real Axis
It is investigated whether for real argument s the (s−1)^{n+1}-weighted Riemann zeta ζ^{(n)}(s) limits s ↓ 1 do exist. Here, we will look into n = 0, 1. The answer to the question could very well be that assuming existence to be true gives a confusing outcome. That may support the possibility of incompleteness in concrete mathematics.
[87] vixra:2312.0001 [pdf]
Attacks Against K-Out-of-M Spacetime-Constrained Oblivious Transfer
This paper conducts a security analysis of the generalized k-out-of-m spacetime-constrained oblivious transfer protocol in the context of relativistic quantum cryptography. The introduction of this paper provides an overview of relativistic quantum cryptography and delves into the details of the spacetime-constrained oblivious transfer protocol. The subsequent sections of the paper focus on determining the success probability of various cloning and measurement attacks. The majority of the analysis will be based on the simplest case, when m = 3 and k = 2.
[88] vixra:2311.0153 [pdf]
Anthropocosmos: From Analogy to Solanthropy in a Billionth-Precision Cosmology
Diophantine treatment of Kepler's laws, the Holographic Principle, and Arthur Haas's forgotten Principle of Coherence induce a steady-state cosmology down to the billionth precision, which explains the observation of "impossible galaxies" and predicts the isothermicity of the background radiation at temperature 2.725820456 Kelvin. The connections of parameters with those of the solar system impose the Solanthropic Principle: we are alone in the universe.
[89] vixra:2311.0151 [pdf]
On the Fundamental Relation Between Universal Gravitational Constant G and Coulomb’s Constant K
In this brief paper we relate the universal gravitational constant G and Coulomb's constant k to the volumes of subatomic particles. We define a new characteristic of subatomic particles, quantum volume, which varies in inverse proportion to the mass of the subatomic particle. As an immediate corollary, we propose an explanation of the proton radius puzzle that reconciles the various seemingly contradictory results obtained, checking our postulates with a prediction of upper and lower bounds for the electron's radius which is consistent with the current experimental bounds.
[90] vixra:2311.0148 [pdf]
Two Alternative Arnowitt-Deser-Misner Formalisms Using the Conventions Adopted by Misner-Thorne-Wheeler and Alcubierre Applied to the Natario Warp Drive Spacetime
General Relativity describes the gravitational field using the geometrical line element of a given generic spacetime metric in which there is no clear difference between space and time. This generic form of the equations using tensor algebra is useful for differential geometry, where we can handle the spacetime metric tensor in a way that keeps both space and time integrated in the same mathematical entity (the metric tensor). However, there are situations in which we need to recover the difference between space and time. The 3+1 ADM formalism allows us to separate, from the generic equation of a given spacetime, the 3 dimensions of space (hypersurfaces) and the time dimension.
[91] vixra:2311.0145 [pdf]
Quasi-diagonalization and Quasi-Jordanization of Real Matrices in Real Number Field
A real matrix may not be similar to a diagonal matrix or a Jordan canonical matrix over the real number field. However, it is valuable to discuss the quasi-diagonalization and quasi-Jordanization of matrices in the field of real numbers. Because the characteristic polynomial of a real matrix is a real-coefficient polynomial, the complex eigenvalues and eigenvector chains occur in complex conjugate pairs. So we can re-select the basis vectors to quasi-diagonalize or quasi-Jordanize the matrix into blocks whose dimensions are no larger than 2. In this paper, we prove these conclusions and give a method for finding the transition matrix from the Jordan canonical form to the quasi-diagonalized matrix.
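The conjugate-pair block structure described in this abstract can be illustrated with a small worked example (my own, not taken from the paper): for an eigenvalue a + bi with eigenvector u + iw, choosing the real vectors u and w as basis vectors turns the matrix into a real 2x2 block.

```python
# Sketch (hypothetical example, not from the paper): quasi-diagonalizing a
# real 2x2 matrix whose eigenvalues are the complex conjugate pair 1 +/- 2i.
# For eigenvalue a + bi with eigenvector u + iw (u, w real), the relations
#   A u = a u - b w,   A w = b u + a w
# mean that in the basis (u, w) the matrix becomes the real block
#   [[a, b], [-b, a]].

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(P):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = P
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[3.0, -2.0], [4.0, -1.0]]      # eigenvalues 1 +/- 2i (trace 2, det 5)
u, w = [1.0, 1.0], [0.0, -1.0]      # real/imag parts of eigenvector (1, 1-i)
P = [[u[0], w[0]], [u[1], w[1]]]    # transition matrix with columns u, w

B = mat_mul(inv2(P), mat_mul(A, P))  # expected real block [[1, 2], [-2, 1]]
print(B)
```

Here the transition matrix P is built by hand from a precomputed eigenvector; a general implementation would extract the eigenvectors numerically.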
[92] vixra:2311.0144 [pdf]
A Proof of the Rationality of the Definition of Momentum in Relativity
In this paper we first use a new method--the invariance of the space-time interval together with some simple linear algebra--to derive the Lorentz transformations and four-dimensional vectors. We then discuss and prove how to define force and momentum in relativity, which has not been discussed and proved in textbooks and scientific literature. The first three dimensions of a four-dimensional momentum are defined as momentum, and the derivative of momentum with respect to time is defined as force. But there is a problem: the rationality of the definition of momentum is not usually discussed and proved. Force and momentum cannot be arbitrarily defined, because if our senses were sensitive and sophisticated enough, only a correct definition could guarantee that when we accelerate an object with a constant force, the momentum increases at a constant rate. This need not be discussed in classical mechanics, because there the force is proportional to the acceleration and the force comes before the momentum. In relativistic mechanics it is just the opposite: the momentum comes before the force, so it is important to discuss and prove how to define force and momentum in relativity. In addition, the fact that the same physical process does not depend on the space-time point means that the Lorentz transformations must be linear transformations, which is why we can derive the Lorentz transformations and four-dimensional vectors using the invariance of the space-time interval and some simple linear algebra.
[93] vixra:2311.0143 [pdf]
Formulating a Mathematical Model for Living Systems
Prigogine’s 1978 concept of dissipative structures, drawing parallels with living systems, forms the basis for exploring life’s unique traits. However, these identified similarities prove insufficient in capturing the entirety of life. To address this gap, our proposed modeling approach emphasizes the distinctive ability of living organisms to observe other systems—an attribute intricately tied to quantum mechanics’ "measurement" processes, as highlighted by Howard Pattee. This article introduces a comprehensive mathematical model centered on quantum dynamical dissipative systems, portraying living systems as entities defined by their observational capacities within this framework. The exploration extends to the core dynamics of these systems and the intricacies of biological cells, including the impact of membrane potentials on protein states. Within this theoretical structure, the model is expanded to multicellular living systems, revealing how cells observe quantum dynamical systems through protein state changes influenced by membrane potentials. The conclusion acknowledges the current theoretical status of the model, underscoring the crucial need for experimental validation, particularly regarding the superposition state of membrane proteins under the influence of an electric field.
[94] vixra:2311.0141 [pdf]
On the Variable Nature of Electric Charge of Subatomic Particles
In this paper, we propose a variable modeling for the electric charge of subatomic particles, postulating that the charge of a subatomic particle depends on its relativistic speed, with the speed of light as the main inertial reference frame. This variable modeling provides a solid explanation of the quantization of electric charge, and opens a new path of research in quantum physics.
[95] vixra:2311.0137 [pdf]
New Bounds on Mertens Function
In this brief paper we study and bound the Mertens function. The main breakthrough is the derivation of a Möbius-invertible formulation of the Mertens function which, after some transformations and the application of a generalization of the Möbius inversion formula, allows us to reach an asymptotic rate of growth of the Mertens function that proves the Riemann Hypothesis.
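For readers who want to experiment with the object being bounded, here is a minimal, self-contained sketch (not the paper's method) of the Mertens function M(x), the partial sum of the Möbius function, computed with a simple sieve.

```python
# Minimal sketch (not the paper's method): the Mertens function
# M(x) = sum of mu(n) for n <= x, with the Moebius function mu
# computed by a simple sieve.

def mobius_upto(N):
    """Return a list mu[0..N] of Moebius function values (mu[0] unused)."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(p, N + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1                 # one factor of prime p
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0                   # not squarefree
    return mu

def mertens(N):
    """Return the list [M(1), M(2), ..., M(N)]."""
    mu = mobius_upto(N)
    total, out = 0, []
    for n in range(1, N + 1):
        total += mu[n]
        out.append(total)
    return out

print(mertens(10))  # [1, 0, -1, -1, -2, -1, -2, -2, -2, -1]
```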
[96] vixra:2311.0134 [pdf]
Cooldown Time Estimation Methods for Stirling Cycle Cryocoolers
Miniature cryocoolers are small refrigerators that can reach cryogenic temperatures in the range of 60 K to 150 K. They have the capability of accumulating a small temperature drop into a large overall temperature reduction. The cooldown time is increasingly treated as a design parameter, certainly in hands-on applications. The various complicated physical processes involved in cryocooler operation make it hardly possible to explicitly simulate the temperature time response. The numerical methods for solving a typical cryocooler suffer from numerical instability, time step restrictions and high computational costs, among others. Since the operation of cryocoolers involves processes in the range of 15 Hz to 120 Hz, actually solving the cryocooler transient response would require different software tools to support the design and analysis of physical processes such as heat transfer, fluid dynamics, electromagnetics and mechanics. These processes would also require an excessive amount of calculation, incurring time and precision penalties. In this article we try to bridge the gap between the impractical explicit approach and a steady-state-based approach. A framework developed in Python for calculating the cooldown time profile of any cryocooler based on a steady-state database is introduced, utilizing a semi-analytic approach under various operating conditions. The cooldown time performance can be explored at various target and ambient temperature conditions, as can the effects of an external load, material properties or thermal capacitance on the overall cooldown time response. Two case studies based on linear and rotary cryocoolers developed at Ricor are used for verification, with good agreement between the simulated and measured values.
[97] vixra:2311.0131 [pdf]
Approaching the Value of Vacuum Permittivity Using Vacuum Ether Dipoles Concept
In this paper, we compute an approximate value of the vacuum permittivity constant (epsilon naught), with an error of less than 5%, using the ether vacuum dipoles concept (Dirac's sea). We start from first principles by defining a field equation that gives bound-state dipoles, described as stationary waves, and an equation for their polarizability density, very close to the vacuum permittivity value.
[98] vixra:2311.0127 [pdf]
Intrafunctorial Calculus: An Example Solution
This paper presents a novel approach for calculating the solution to n-Congruency Algebraist Topologies using intrafunctorial calculus equations. We use a combination of the Primal Solution to n-Congruency Algebraist Topologies and the interpspace calculus equation to calculate the finite integral associated with the algebraic equations and the corresponding solutions for n. We show how the logical operator "not" can be used in conjunction with the interpspace calculus equation both to calculate the integral and to negate the algebraic statement. We also provide an example of how this approach can be used to solve a particular form of the equations. The results discussed in this paper can be applied to a variety of problems in the field of algebraic topology. This paper explored the use of interpspace calculus and logical operations to calculate the finite integral solutions associated with the n-Congruency Algebraic Topologies equation and the associated solutions for n. We then applied this combination of methods to study the dynamics of the sun's gravitational pull on the planets orbiting it. Additionally, we developed a mathematical formulation to calculate the stability of a wormhole solution. Finally, we used the interpspace calculus equation and logical operations to derive an equation for the stability of a wormhole solution. This paper provides a comprehensive approach for studying the mathematics of stable wormholes.
[99] vixra:2311.0123 [pdf]
Relativity Cosmology
This paper presents the perspective that the universe is relative. The universe, which changes according to the observer's proper time, is dynamic. Generally, the universe expands, but if the observer accelerates enough, contraction is also possible. The Hubble parameter merely represents that change. If the Hubble parameter is dependent on the proper time, its measurement can only be uncertain. Identifying the factors that influence the proper time could potentially lead to a more accurate measurement of the universe's age, and it could also offer insights into the state of the early universe and the causes of the accelerated expansion observed in the current epoch.
[100] vixra:2311.0120 [pdf]
A Proof that the Half Retarded and Half Advanced Electromagnetic Theory is Equivalent to Maxwell's Classical Electromagnetic Theory
Dirac, Wheeler-Feynman, and Cramer proposed the electromagnetic-theory idea of a current element generating a half retarded wave and a half advanced wave. The author further refined this idea, proposed the laws of mutual energy flow and conservation of energy, and thus established a new set of electromagnetic theories. For calculating the electromagnetic wave radiation of current elements, Maxwell's electromagnetic theory requires electromagnetic radiation to meet the Silver-Müller boundary conditions. In the author's new theory, this boundary condition is replaced by the charge of the absorbers covering an infinite sphere. The author assumes that these absorbers are sinks and will generate advanced waves. The radiation of the current element is a retarded wave. This retarded wave and advanced wave form a mutual energy flow. The author believes that these mutual energy flows are photons. The sum of the energy of countless photons is the macroscopic electromagnetic radiation of the current element. This radiation should be consistent with the Poynting energy flow in classical electromagnetic theory. If the two are indeed consistent, it indicates that the two theories of electromagnetic radiation are equivalent. The author proves that the two theories are indeed equivalent. In this proof, the author also addresses an inherent loophole in Poynting's theorem. In addition, the author found that due to the introduction of sinks, both the field and the potential must be compressed to 50% of their original values. This corresponds precisely to the current generating either a 50% retarded wave or a 50% advanced wave. In this way, the author's electromagnetic theory can be seen as the lower-level electromagnetic theory underlying Maxwell's electromagnetic theory. The macroscopic electromagnetic wave is composed of countless photons. Photons are mutual energy flows, composed of retarded waves emitted by the sources and advanced waves emitted by the sinks.
[101] vixra:2311.0119 [pdf]
Zeta Function
This article delves into the properties of the Riemann zeta function, providing a demonstration of the existence of a sequence of zeros $\{z_k\}$ with $\lim \operatorname{Re}(z_k) = 1$. The exploration of these mathematical phenomena contributes to our understanding of complex analysis and the behavior of the zeta function on the critical line.
[102] vixra:2311.0110 [pdf]
Age, Amplitude of Accommodation and the Graphical Law
We look into the Age (in years) vs Amplitude of accommodation (in Diopters) of the eye. We plot the natural logarithm of the age, normalised, starting with an amplitude of accommodation of an eye, vs the natural logarithm of the amplitude of accommodation of the eye, normalised. We conclude that the Age vs Amplitude of accommodation of eyes can be characterised by a magnetisation curve of a Spin-Glass in the presence of a little external magnetic field.
[103] vixra:2311.0104 [pdf]
An Algebrologist in Wonderland
By imposing a requirement for spatial isotropy, it is possible to find an algebra with a subalgebra structure having a pattern matching that of the bosons and three families of fermions of the standard model.
[104] vixra:2311.0090 [pdf]
Finding Rational Points of Circles, Spheres, Hyper-Spheres via Stereographic Projection and Quantum Mechanics
One of the consequences of Fermat's last theorem is the existence of a countably infinite number of rational points on the unit circle, which in turn allows finding the rational points on the unit sphere via the inverse stereographic projection of the homothecies of the rational points on the unit circle. We proceed to iterate this process and obtain the rational points on the unit $S^3$ via the inverse stereographic projection of the homothecies of the rational points on the previous unit $S^2$. One may continue this iteration/recursion process ad infinitum in order to find the rational points on unit hyper-spheres of arbitrary dimension $S^4, S^5, \cdots, S^N$. As an example, it is shown how to obtain the rational points of the unit $S^{24}$ that is associated with the Leech lattice. The physical applications of our construction follow, and one finds a direct relation among the $N+1$ quantum states of a spin-N/2 particle and the rational points of a unit $S^N$ hyper-sphere embedded in a flat Euclidean $R^{N+1}$ space.
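The first two steps of the construction can be checked with exact rational arithmetic. The sketch below (my own illustration of the standard formulas; the scaling factor 2/3 is an arbitrary choice) parametrizes rational points on the unit circle and lifts a rational homothety of one of them to $S^2$ by the inverse stereographic projection from the north pole.

```python
from fractions import Fraction

def circle_point(t):
    """Rational point on the unit circle S^1 from rational parameter t."""
    t = Fraction(t)
    d = 1 + t * t
    return ((1 - t * t) / d, 2 * t / d)

def inverse_stereographic(p):
    """Lift a rational point p in R^n to the unit sphere S^n in R^{n+1}
    (inverse stereographic projection from the north pole)."""
    s = sum(x * x for x in p)           # |p|^2, an exact rational
    d = s + 1
    return tuple(2 * x / d for x in p) + ((s - 1) / d,)

# A rational point on S^1, a homothety (rational scaling) of it, and its
# lift to a rational point on S^2 -- all with exact rational arithmetic.
p1 = circle_point(Fraction(1, 2))         # (3/5, 4/5)
scaled = tuple(Fraction(2, 3) * x for x in p1)
p2 = inverse_stereographic(scaled)
print(p1, p2, sum(x * x for x in p2))     # the last sum is exactly 1
```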
[105] vixra:2311.0089 [pdf]
Prototype-Based Soft Feature Selection Package
This paper presents a prototype-based soft feature selection package (Sofes) wrapped around the highly interpretable Matrix Robust Soft Learning Vector Quantization (MRSLVQ) and the Local MRSLVQ algorithms. The process of assessing feature relevance with Sofes aligns with a comparable approach established in the Nafes package, with the primary distinction being the utilization of prototype-based induction learners influenced by a probabilistic framework. The numerical evaluation of test results aligns Sofes' performance with that of the Nafes package.
[106] vixra:2311.0086 [pdf]
On the Largest Prime Factor of the K-Generalized Lucas Numbers
Let $(L_n^{(k)})_{n\geq 2-k}$ be the sequence of $k$--generalized Lucas numbers for some fixed integer $k\ge 2$, whose first $k$ terms are $0,\ldots,0,2,1$ and each term afterwards is the sum of the preceding $k$ terms. For an integer $m$, let $P(m)$ denote the largest prime factor of $m$, with $P(0)=P(\pm 1)=1$. We show that if $n \ge k + 1$, then $P(L_n^{(k)}) > (1/86) \log \log n$. Furthermore, we determine all the $k$--generalized Lucas numbers $L_n^{(k)}$ whose largest prime factor is at most $7$.
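The definitions used in this abstract are easy to state in code; the following sketch (mine, for illustration only) generates the $k$-generalized Lucas sequence from its $k$ initial terms and computes $P(m)$ by trial division.

```python
# Sketch of the definitions in the abstract: the k-generalized Lucas
# sequence starts with k terms 0, ..., 0, 2, 1 and each later term is
# the sum of the preceding k terms; P(m) is the largest prime factor.

def k_lucas(k, count):
    """First `count` terms L_{2-k}, L_{3-k}, ... of the k-Lucas sequence."""
    terms = [0] * (k - 2) + [2, 1]     # the k initial terms
    while len(terms) < count:
        terms.append(sum(terms[-k:]))
    return terms

def largest_prime_factor(m):
    """P(m), with P(0) = P(+/-1) = 1 by convention."""
    m = abs(m)
    if m <= 1:
        return 1
    best, d = 1, 2
    while d * d <= m:
        while m % d == 0:
            best, m = d, m // d
        d += 1
    return max(best, m) if m > 1 else best

# k = 2 gives the classical Lucas numbers 2, 1, 3, 4, 7, 11, ...
print(k_lucas(2, 10), largest_prime_factor(76))
```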
[107] vixra:2311.0085 [pdf]
A Framework for Modeling, Analyzing, and Decision-Making in Disease Spread Dynamics and Medicine/Vaccine Distribution
The challenges posed by epidemics and pandemics are immense, especially if the causes are novel. This article introduces a versatile open-source simulation framework designed to model intricate dynamics of infectious diseases across diverse population centres. Taking inspiration from historical precedents such as the Spanish flu and COVID-19, and geographical economic theories such as Central place theory, the simulation integrates agent-based modelling to depict the movement and interactions of individuals within different settlement hierarchies. Additionally, the framework provides a tool for decision-makers to assess and strategize optimal distribution plans for limited resources like vaccines or cures as well as to impose mobility restrictions.
[108] vixra:2311.0084 [pdf]
Levelwise Accessible Equivalence Classes of Fibrations
For a space of directed currents, geometric data may be accessible by means of a certain $\frac{1}{n}$-type functor on a sheaf of germs. We investigate pointwise periodic homeomorphisms and their connections to foliations.
[109] vixra:2311.0081 [pdf]
A Modified Born-Infeld Model of Electrons with Realistic Magnetic Dipole Moment
The original Born-Infeld model of electrons has been used to describe static electrons without magnetic dipole moment. It is not obvious how to include the magnetic field of a realistic magnetic dipole moment in the original model. This short work proposes a small modification to the original model that might allow for experimentally observed values of electric charge and magnetic dipole moment of electrons.
[110] vixra:2311.0080 [pdf]
Unlocking Robotic Potential Through Modern Organ Segmentation
Deep learning has revolutionized the approach to complex data-driven problems, specifically in medical imaging, where its techniques have significantly raised efficiency in organ segmentation. The urgent need to enhance the depth and precision of organ-based classification is an essential step towards automation of medical operation and diagnostics. The research aims to investigate the effect and potential advantages transformer models have on binary semantic segmentation, the method utilized for the project. Hence, I employed the SegFormer model, for its lightweight architecture, as the primary deep learning model, alongside the Unet. A custom 2D computerized tomography (CT) scan dataset, CT-Org2D, was assembled through meticulous operations. Extensive experiments showed that, in contrast to the selected models, the task's simplicity required a redesigned Unet architecture with reduced complexity. This model yielded impressive results: Precision, Recall, and IOU scores of 0.91, 0.92, and 0.85 respectively. The research serves as a starting point, motivating further exploration, through different methodologies, to achieve even greater efficiency in organ segmentation.
[111] vixra:2311.0073 [pdf]
Mathematicians, Physicists and Technologists
The transformation of the branch of science we call physics did not go unnoticed when Einstein published his famous articles. In this article, I reflect on that transformation.
[112] vixra:2311.0072 [pdf]
Matemáticos, Físicos y Tecnólogos (Mathematicians, Physicists and Technologists)
No pasa inadvertida la mutación de la ciencia que denominamos física cuando Einstein publicó sus artículos famosos. Refiero a esa mutación las reflexiones contenidas en este documento. (The mutation of the science we call physics did not go unnoticed when Einstein published his famous articles. The reflections contained in this document refer to that mutation.)
[113] vixra:2311.0069 [pdf]
Linear-Time Estimation of Smooth Rotations in ARAP Surface Deformation
In recent years the As-Rigid-As-Possible with Smooth Rotations (SR-ARAP [5]) technique has gained popularity in applications where an isometric type of surface mapping is needed. The advantage of SR-ARAP is that the quality of its deformation results is comparable to more costly volumetric techniques operating on tetrahedral meshes. SR-ARAP relies on a local/global optimisation approach to minimise a non-linear least squares energy. The power of this technique resides in the local step. The local step estimates the local rotation of a small surface region, or cell, with respect to its neighbouring cells, so a local change in one cell's rotation affects the neighbouring cells' rotations and vice-versa. The main drawback of this technique is that the local step requires global convergence of rotation changes. Currently the local step is solved in an iterative fashion, where the number of iterations needed to reach convergence can be prohibitively large, so in practice only a fixed number of iterations is possible. This trade-off is, in some sense, defeating the goal of SR-ARAP. We propose a linear-time closed-form solution for estimating the codependent rotations of the local step by solving a sparse linear system of equations. Our method is more efficient than the state of the art since no iterations are needed and optimised sparse linear solvers can be leveraged to solve this step in linear time. It is also more accurate since this is a closed-form solution. We apply our method to generate interactive surface deformation; we also show how a multiresolution optimisation can be applied to achieve real-time animation of large surfaces.
[114] vixra:2311.0059 [pdf]
Divisible Cyclic Numbers
There are known to exist a number of (multiplicative) cyclic numbers, but in this paper I introduce what appears to be a new kind of number, which we call divisible cyclic numbers (DCNs), examine some of their properties and give a proof of their cyclic property. It seems remarkable that I can find no reference to them anywhere. Given their simplicity, it would be extraordinary if they were hitherto unknown.
[115] vixra:2311.0058 [pdf]
A Quantum Theory of Spacetime Events Yielding a Gravitized Standard Model Inherent to 4D, with Disruption Beyond
We present a comprehensive quantum theory of spacetime events. These events serve as a nexus where probabilities and spacetime geometry coalesce, representing perhaps the most fundamental entities that embody this synthesis. At the heart of our theory lies the 'Prescribed Measurement Problem,' an algorithm that extends the entropy maximization problem of statistical physics into the quantum and geometric domains. Employing this algorithm, we systematically extrapolate a generalized quantum theory of gravity from the measurement entropy of spacetime events, from which general relativity and the Standard Model naturally emanate as inherent outcomes. Interestingly, the theory maintains coherence exclusively within four-dimensional spacetime and encounters intrinsic disruptions beyond this dimension, highlighting a quantum-geometric justification for the four-dimensionality of our universe.
[116] vixra:2311.0052 [pdf]
On the Incompletely Predictable Problems of Riemann Hypothesis, Modified Polignac's and Twin Prime Conjectures
We validly ignore the even prime number 2. Based on the arbitrarily large number of even prime gaps 2, 4, 6, 8, 10, ...; the complete set and its derived subsets of Odd Primes fully comply with the Prime number theorem for Arithmetic Progressions. With this condition being satisfied by all Odd Primes, we argue that the Modified Polignac's and Twin prime conjectures are proven to be true when these conjectures are treated as Incompletely Predictable Problems. In so doing [and with the famous Riemann hypothesis being a special case], the generalized Riemann hypothesis formulated for the Dirichlet L-function is also supported. By broadly applying the Hodge conjecture and the Grothendieck period conjecture to the Dirichlet eta function (as proxy function for the Riemann zeta function), the Riemann hypothesis is separately proven to be true when this hypothesis is treated as an Incompletely Predictable Problem.
[117] vixra:2311.0050 [pdf]
Mathematics for Incompletely Predictable Problems Required to Prove Riemann Hypothesis, Modified Polignac's and Twin Prime Conjectures
As two different but related infinite-length equations through analytic continuation, the Hasse principle is satisfied by the Riemann zeta function, a type of equation that generates all infinitely-many trivial zeros, but this principle is not satisfied by its proxy Dirichlet eta function, a dissimilar type of equation that generates all infinitely-many nontrivial zeros. Based on two seemingly different locations that are in fact identical, all nontrivial zeros are mathematically located on the critical line or geometrically located on the Origin point. Thus we prove the location of the complete set of nontrivial zeros to be the critical line, confirming the Riemann hypothesis to be true. The Sieve of Eratosthenes, as a certain type of infinite-length algorithm, is exactly constituted by an Arbitrarily Large Number of (self-)similar infinite-length sub-algorithms that are specified by every even Prime gap. The Modified Hasse principle is satisfied by this algorithm and its sub-algorithms, which perpetually generate the Arbitrarily Large Number of all Odd Primes. Thus we prove that the Set of even Prime gaps with corresponding Subsets of Odd Primes all have cardinality Arbitrarily Large in Number, confirming Modified Polignac's and Twin prime conjectures to be true.
[118] vixra:2311.0044 [pdf]
CMB, Hawking, Planck and Hubble Scale Relations Consistent with Recent Quantization of General Relativity Theory
We are demonstrating new relationships between the Hawking temperature, the CMB temperature, and the Planck scale. When comprehended at a deep level, this is in line with recent developments in the quantization of cosmology and its connection to the Planck scale. This is also entirely consistent with a recently published approach to quantizing Einstein’s general theory of relativity.
[119] vixra:2311.0043 [pdf]
A Novel Derivation of the Reissner-Nordstrom and Kerr-Newman Black Hole Entropy from Truly Charged Spinning Point Mass Sources
Recently we have shown how the Schwarzschild Black Hole Entropy in all dimensions emerges from truly point mass sources at r = 0 due to a non-vanishing scalar curvature R involving the Dirac delta distribution in the computation of the Euclidean Einstein-Hilbert action. As usual, it is required to take the inverse Hawking temperature beta as the length of the circle S^1_beta obtained from a compactification of the Euclidean time in thermal field theory, which results after a Wick rotation, i t = tau, to imaginary time. In this work we extend our novel procedure to evaluate both the Reissner-Nordstrom and Kerr-Newman black hole entropy from truly charged spinning point mass sources.
[120] vixra:2311.0038 [pdf]
Cote’s Spiral in Neptune Great Dark Spot (GDS)
The cyclone's characteristic double-spiral shape has a mathematical equation already identified as a Cotes's spiral (Gobato et al., 2022); Lindblad (1964) similarly described the shape of double-spiral galaxies. In physics and in the mathematics of plane curves, a Cotes's spiral is one of a family of spirals classified by Roger Cotes. The image of Neptune's Great Dark Spot (GDS) captured by Voyager 2 presents a feature that resembles a Cotes's spiral. Its ellipsoidal shape is due to the rotation of the different planetary layers in opposite directions, stretching and compressing the GDS from the lower to the upper layers of Neptune's atmosphere.
[121] vixra:2311.0036 [pdf]
Roots of Real Polynomial Functions and of Real Functions
The Newton-Raphson method is the most widely used numerical method to determine the roots of Real polynomial functions, but it has the drawback that it does not always converge. The method proposed in this work establishes the convergence condition and the development of its application, and therefore will always converge towards the roots of the function. This represents a conclusive advance for the determination of roots of Real polynomial functions. According to the Abel-Ruffini theorem, the roots of polynomial functions of degree greater than 4 can, in general, only be determined by numerical calculation.
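For contrast with the proposed always-convergent method, the classical Newton-Raphson iteration the abstract refers to can be sketched as follows (a standard textbook implementation, not the paper's method); its convergence indeed depends on the starting point.

```python
# Classical Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n).
# Converges quadratically near a simple root, but only for suitable
# starting points -- the drawback the abstract addresses.

def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# Example: the real root of x^3 - 2x - 5 (Newton's own 1669 example).
f = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2
root = newton(f, df, 2.0)
print(root)   # about 2.0945514815
```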
[122] vixra:2311.0030 [pdf]
Euler's Identity, Leibniz Tables, and the Irrationality of Pi
Using techniques that show that e and pi are transcendental, we give a short, elementary proof that pi is irrational based on Euler's formula. The proof involves evaluating a polynomial using repeated applications of the Leibniz formula, as organized in a Leibniz table.
[123] vixra:2311.0020 [pdf]
The Search for Gravitational Waves: Fundamentals of Reception Technology
In order to detect and decode the phase-modulated signals of gravitational waves in noise, you need an antenna and a receiver in the $\mu$Hz range with special properties. The necessary technology is described in detail.
[124] vixra:2311.0019 [pdf]
Die Suche Nach Gravitationswellen, Grundlagen Der Empfangstechnik (The Search for Gravitational Waves: Basics of Reception Technology
Um die phasenmodulierten Signale von Gravitationswellen im Rauschen zu entdecken und zu dekodieren, benötigt man einen Empfänger im $\mu$Hz-Bereich mit speziellen Merkmalen. Die notwendige Technik wird ausführlich beschrieben. (In order to detect and decode the phase-modulated signals of gravitational waves in noise, a receiver in the $\mu$Hz range with special features is required. The necessary technology is described in detail.)
[125] vixra:2311.0015 [pdf]
Reverse Chebyshev Bias in the Distribution of Superprimes
We study the distribution of superprimes, a subsequence of prime numbers with prime indices, mod 4. Rather unexpectedly, this subsequence exhibits a reverse Chebyshev bias: terms of the form 4k + 1 are more common than those of the form 4k + 3, whereas the opposite is the case in the sequence of all primes. The effect, while initially weak and easy to overlook, tends to be several times larger than the Chebyshev bias for all primes for samples of comparable size, at least by one simple measure. By two other measures, it can be seen as fairly strong; by the same measures the ordinary Chebyshev effect is very strong. Both of these measures also imply that the reverse Chebyshev bias for superprimes is more volatile than the ordinary Chebyshev bias.
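The tallies behind the claimed bias can be reproduced with a short script (my own sketch; the sieve cutoff of 100000 is an arbitrary choice, not the paper's sample size).

```python
# Sketch: superprimes are primes with prime index (p_2 = 3, p_3 = 5,
# p_5 = 11, ...), tallied mod 4 to inspect the Chebyshev-type bias.

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, ok in enumerate(sieve) if ok]

primes = primes_upto(100000)
# p_k is 1-indexed: the k-th prime sits at primes[k - 1].
superprimes = [primes[k - 1] for k in primes if k - 1 < len(primes)]

ones = sum(1 for q in superprimes if q % 4 == 1)
threes = sum(1 for q in superprimes if q % 4 == 3)
print(len(superprimes), ones, threes)
```

Every superprime is odd (2 = p_1 has non-prime index 1), so the two residue-class counts partition the whole sample.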
[126] vixra:2311.0008 [pdf]
Evolution of Information and the Laws of Physics
This paper combines insights from information theory, physics and evolutionary theory to conjecture the existence of fundamental replicators, termed `femes'. Femes are hypothesised to cause transformations resulting in the structure and dynamics of the observable universe, classified as their phenotype. A comprehensive background section provides the foundation for this interdisciplinary hypothesis and leads to four predictions amenable to empirical scrutiny and criticism. Designed to be understood by a multidisciplinary audience, the paper challenges and complements ideas from various domains, suggesting new directions for research.
[127] vixra:2311.0007 [pdf]
A Proof for a Generalization of the Inequality from the 42nd International Mathematical Olympiad
In this paper, we present a proof for a generalization of the inequality from the 42nd International Mathematical Olympiad. The proved inequality relates to a sum involving square roots of fractions. It has various applications in mathematical analysis, optimization, or statistics. In the field of mathematical analysis, it can be used in the study of convergence. In terms of optimization, it may help establish bounds or relationships between the variables involved.
[128] vixra:2311.0005 [pdf]
Real Numbers: a New (Quantum) Look, with a Hierarchical Structure
Rational numbers Q have much more structure beyond the ordered field structure which leads to the Real Numbers as a metric completion. The modular group representation of continued fractions is used as a Number Theory "friendly" implementation of the real numbers, with a possible unification with p-adic numbers beyond the "direct sum" adeles framework. This approach also allows us to extend Fourier and Haar Wavelet Analysis by including inversion as a geometric antipode. Other applications in Mathematical Physics stem from the central role of the modular group: Belyi maps, Farey graphs and tessellations etc., which allow the study of important classes of numbers (algebraic, periods) in a systematic way. The presentation is a preliminary version of the project, stating the motivation, goals and approach.
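The continued fraction machinery the abstract builds on can be sketched in a few lines (a standard implementation, not the paper's construction): expand a rational number into partial quotients and reconstruct it through the convergent recurrence, whose steps correspond to modular group (Möbius) transformations.

```python
from fractions import Fraction

def continued_fraction(x):
    """Partial quotients [a0; a1, a2, ...] of a rational x."""
    x = Fraction(x)
    quotients = []
    while True:
        a = x.numerator // x.denominator
        quotients.append(a)
        frac = x - a
        if frac == 0:
            return quotients
        x = 1 / frac

def convergent(quotients):
    """Reconstruct the rational via the standard recurrence
    p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2}."""
    p0, q0, p1, q1 = 0, 1, 1, 0
    for a in quotients:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
    return Fraction(p1, q1)

cf = continued_fraction(Fraction(355, 113))
print(cf, convergent(cf))   # [3, 7, 16] and 355/113 recovered exactly
```

Each step of the recurrence multiplies a 2x2 integer matrix of determinant +/-1, which is how the expansion embeds into the modular group.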
[129] vixra:2310.0149 [pdf]
Seebeck Effect Shows Photon Energy Current Within Current-carrying Conductors.
If a piece of conducting material has a temperature difference between its two ends, an electromotive force is observed between the ends, with the hotter end being positive and the other negative. This is the Seebeck effect. The emf is dependent only on the temperature difference and the type of conductor material. Current physics only mentions the emf of the Seebeck effect, but has ignored another significant fact about it. Besides the observed emf, the Seebeck effect causes a radiation energy current to flow within the conductor from the hotter end towards the cooler end. The operation of a thermocouple electric cell relies on the Seebeck effect. An analysis of the operation of such a cell shows that energy transmission by current-carrying conductors has nothing to do with the magnetic fields surrounding the conductor; the actual physical mechanism of energy transmission is a photon energy current within the body of the conductor.
[130] vixra:2310.0146 [pdf]
Monte Carlo Quantum Computing Using a Sum of Controlled Few-Fermions
A restricted path integral method is proposed to simulate a type of quantum system or Hamiltonian called a sum of controlled few-fermions on a classical computer using Monte Carlo without a numerical sign problem. Related methods and systems of Monte Carlo quantum computing are disclosed for simulating quantum systems and implementing quantum computing efficiently on a classical computer, including methods and systems for simulating many-variable signed densities, methods and systems for decomposing a many-variable density into a combination of few-variable signed densities, and methods and systems for solving a computational problem via Monte Carlo quantum computing.
[131] vixra:2310.0144 [pdf]
Inversions (Mirror Images) With Respect to the Unit Circle and Division by Zero
In this note, we will consider the interesting inversion formula that was discovered by Yoichi Maeda with respect to the unit circle on the complex plane from the viewpoint of our division by zero: $1/0=0/0=0$.
[132] vixra:2310.0143 [pdf]
The Isomorphism of H4 and E8
This paper gives an explicit isomorphic mapping from the 240 real R^8 roots of the E8 Gossett 4_{21} 8-polytope to two golden ratio scaled copies of the 120 root H4 600-cell quaternion 4-polytope using a traceless 8x8 rotation matrix U with palindromic characteristic coefficients and a unitary form e^{iU}. It also shows the inverse map from a single H4 600-cell to E8 using a 4D<->8D chiral L<->R mapping function, phi scaling, and U^{-1}. This approach shows that there are actually four copies of each 600-cell living within E8 in the form of chiral H4L+phi H4L+H4R+phi H4R roots. In addition, it demonstrates a quaternion Weyl orbit construction of H4-based 4-polytopes that provides an explicit mapping between E8 and four copies of the tri-rectified Coxeter-Dynkin diagram of H4, namely the 120-cell of order 600. Taking advantage of this property promises to open the door to as yet unexplored E8-based Grand Unified Theories or GUTs.
[133] vixra:2310.0141 [pdf]
The Countdown Letters Game
We present an analysis of the letters game from the TV show Countdown using Monte Carlo methods. This game requires finding the longest possible words in a set of randomly chosen letters. We show that the probability of finding a word of length k from N given letters follows a Fermi-Dirac distribution with k as the variable and N acting as control parameter. Increasing N, we reach a fixed point where a phase transition occurs before reaching the IR fixed point as N goes to infinity. Lastly, we find the expected total number of words per game, and the number of letters one must be given in order to have a significant probability of finding all words in the dictionary.
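The sampling procedure can be sketched with a toy Monte Carlo. The word list and uniform letter draws below are stand-in assumptions; the actual game uses a full English dictionary and Countdown's vowel/consonant distribution, so the numbers are illustrative only:

```python
import random
from collections import Counter

# Stand-in dictionary -- illustrative assumption, not the paper's word list.
WORDS = ["cat", "tan", "ant", "rate", "tear", "crate", "trace", "react"]

def can_form(word, letters):
    """A word is findable if the multiset of its letters fits inside the draw."""
    need, have = Counter(word), Counter(letters)
    return all(have[c] >= m for c, m in need.items())

def p_word_of_length(k, n, trials=20000, rng=random.Random(0)):
    """Monte Carlo estimate of P(some k-letter word is formable from n letters)."""
    hits = 0
    for _ in range(trials):
        draw = [rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(n)]
        if any(len(w) == k and can_form(w, draw) for w in WORDS):
            hits += 1
    return hits / trials

for k in (3, 4, 5):
    print(k, p_word_of_length(k, 9))
```

Fitting the resulting curve in k, for various N, against a Fermi-Dirac form is then an ordinary regression step.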
[134] vixra:2310.0134 [pdf]
A Novel Derivation of Black Hole Entropy in all Dimensions from truly Point Mass Sources
It is explicitly shown how the Schwarzschild black hole entropy (in all dimensions) emerges from truly point mass sources at r = 0 due to a non-vanishing scalar curvature involving the Dirac delta distribution. It is the density and anisotropic pressure components associated with the point mass delta function source at the origin r = 0 which furnish the Schwarzschild black hole entropy in all dimensions $D \ge 4$ after evaluating the Euclidean Einstein-Hilbert action. As usual, it is required to take the inverse Hawking temperature $\beta_H$ as the length of the circle $S^1_\beta$ obtained from a compactification of the Euclidean time in thermal field theory which results after a Wick rotation, $it = \tau$, to imaginary time. The appealing and salient result is that there is no need to introduce the Gibbons-Hawking-York boundary term in order to arrive at the black hole entropy because in our case one has $\mathcal{R} \neq 0$. Furthermore, there is no need to introduce a complex integration contour to avoid the singularity as shown by Gibbons and Hawking. On the contrary, the source of the black hole entropy stems entirely from the scalar curvature singularity at the origin $r = 0$. We conclude by explaining how to generalize our construction to the Kerr-Newman metric by exploiting the Newman-Janis algorithm. The physical implications of this finding warrant further investigation since it suggests a profound connection between the notion of gravitational entropy and spacetime singularities.
[135] vixra:2310.0128 [pdf]
Geometric Sub-Bundles
Let $\mathfrak{X}$ be a topological stack, and $LocSys(\mathfrak{X})$ a local system taking varieties $v \in \mathfrak{X}$ to their projective resolutions over an affine coordinate system. Let $\alpha$ and $\beta$ be smooth charts encompassing non-degenerate loci of the upper-half plane, and let $\varphi$ be the map $\beta \circ \alpha^{-1}$. Our goal is to describe a class of vector bundles, called \emph{geometric sub-bundles}, which provide holonomic transport for n-cells (for small values of n) over a $G_\delta$-space which models the passage $\mathfrak{X} \rightrightarrows LocSys(\mathfrak{X})$. We will first establish the preliminary definitions before advancing our core idea, which succinctly states that for a pointed, stratified space $Strat_M^\ast$, there is a canonical selection of transition maps $[\varphi]$ which preserves the intersection of a countable number of fibers in some sub-bundle of the bundle $Bun_V$ over $LocSys(\mathfrak{X})$.
[136] vixra:2310.0127 [pdf]
The Electron Interaction with the Dirac Delta Pulse
After the classical approach to acceleration of a charged particle by a delta-form impulsive force, we consider the corresponding quantum theory based on the Volkov solution of the Dirac equation. We determine the modified Compton formula for the frequency of photons generated by the scattering of the delta-form laser pulse on the electron at rest. The article follows the physical ideas involved in the author's text "Electron in an Ultrashort Laser Pulse" (2003).
[137] vixra:2310.0118 [pdf]
Application of Deep and Reinforcement Learning to Boundary Control Problems
The boundary control problem is a non-convex optimization and control problem in many scientific domains, including fluid mechanics, structural engineering, and heat transfer optimization. The aim is to find the optimal values for the domain boundaries such that the enclosed domain, adhering to the governing equations, attains the desired state values. Traditionally, non-linear optimization methods, such as the Interior-Point method (IPM), are used to solve such problems. This project explores the possibilities of using deep learning and reinforcement learning to solve boundary control problems. We adhere to the framework of iterative optimization strategies, employing a spatial neural network to construct well-informed initial guesses and a spatio-temporal neural network that learns the iterative optimization algorithm using policy gradients. Synthetic data, generated from the problems formulated in the literature, is used for training, testing and validation. The numerical experiments indicate that the proposed method can rival the speed and accuracy of existing solvers. In our preliminary results, the network attains costs lower than IPOPT, a state-of-the-art non-linear IPM, in 51% of cases. The overall number of floating point operations in the proposed method is similar to that of IPOPT. Additionally, the informed initial guess method and the learned momentum-like behaviour in the optimizer method are incorporated to avoid convergence to local minima.
[138] vixra:2310.0113 [pdf]
New Maximum Interval Between Any Number and the Nearest Prime Number and Related Conjectures
In this short paper we prove that for n ≥ 2953652287 there exists some prime number between n and n + log(n), improving the best known proved bounds for the maximum interval between any number and the nearest prime number, as well as the maximum difference between two consecutive prime numbers (prime gap). We note that this result proves some open conjectures on prime gaps and maximum intervals between any number and the nearest prime number.
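The claimed window is easy to spot-check numerically. A small sketch (trial-division primality, so only practical for modest n; note the paper's bound applies only from n = 2953652287, and counterexamples such as n = 23 do exist below it):

```python
import math

def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    f = 3
    while f * f <= m:
        if m % f == 0:
            return False
        f += 2
    return True

def has_prime_in_log_window(n):
    """Is there a prime p with n < p <= n + log(n)?"""
    upper = n + math.log(n)
    p = n + 1
    while p <= upper:
        if is_prime(p):
            return True
        p += 1
    return False

# Count failures below the paper's threshold; the claim is only for n >= 2953652287.
fails = [n for n in range(2, 10000) if not has_prime_in_log_window(n)]
print(len(fails), fails[:10])
```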
[139] vixra:2310.0108 [pdf]
Application of Rational Representation in Euclidean Geometry
This book focuses on the application of rational representations to plane geometry. Most plane geometry objects, such as circles, triangles, quadrilaterals, conic curves, and their composite figures, can be represented almost exclusively in terms of rational parameters, which makes the process of computation and proof straightforward.
[140] vixra:2310.0106 [pdf]
Drag Coefficient Estimation of Low Density Objects by Free Fall Experiments
The present article investigates whether the drag coefficient of low density objects can be determined by free fall experiments with sufficient accuracy. Among other things, the drag coefficient depends on the flow velocity, which can be controlled in wind channel experiments. Free fall experiments do not offer an experimental environment with constant flow velocity; especially the later part of the movement is relevantly influenced by air drag deceleration. We theoretically estimate an average sphere drag coefficient for the relevant part of the movement of falling spheres. The results are verified by examining the drag coefficient from experimental data. Finally, we determine the drag coefficient of a model rocket, which is compared to the result of the corresponding wind channel experiment.
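The inverse problem described here — recovering C_d from a measured fall — can be sketched as a forward Euler integration of m·dv/dt = mg − ½·ρ·C_d·A·v² plus a one-parameter fit. All parameters below (sphere size, mass, a synthetic "measurement") are illustrative assumptions, not the paper's data:

```python
import math

# Assumed illustrative parameters: a low-density sphere in air.
rho_air = 1.2          # air density, kg/m^3
g = 9.81               # m/s^2
d = 0.10               # sphere diameter, m
m = 0.005              # mass, kg (low density)
A = math.pi * (d / 2) ** 2

def fall_time(height, cd, dt=1e-4):
    """Integrate m v' = m g - 0.5 rho cd A v^2 until the sphere has fallen `height`."""
    v = s = t = 0.0
    while s < height:
        a = g - 0.5 * rho_air * cd * A * v * v / m
        v += a * dt
        s += v * dt
        t += dt
    return t

def estimate_cd(height, measured_t, lo=0.1, hi=2.0):
    """Bisection on cd so the simulated fall time matches the measured one."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if fall_time(height, mid) < measured_t:
            lo = mid   # simulated fall too fast -> more drag needed
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_meas = fall_time(2.0, 0.47)   # synthetic "measurement" generated with cd = 0.47
print(round(estimate_cd(2.0, t_meas), 3))
```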
[141] vixra:2310.0102 [pdf]
Infrared Spectrum for Derivative Steroid with Potential to Treat Breast Cancer
This study applies Density Functional Theory (DFT), using the B3LYP functional, and ab initio Restricted Hartree-Fock (RHF) methods to study the infrared spectrum of the steroid 17-Iodo-androst-16-ene. The molecular structure was optimized via UFF (Universal Force Field), followed by PM3 (Parametric Method 3) with geometric optimization, after which the spectrum was obtained for several basis sets. This steroid was chosen because it could act as an aromatase enzyme inhibitor, which would make it a good candidate compound to treat breast cancer. The B3LYP functional consistently yields a lower thermal energy than RHF in all calculated basis sets, whereas RHF consistently yields a higher entropy than B3LYP. The normalized spectrum calculated with the B3LYP functional and SVP basis set has harmonic-frequency peaks at 3,241.83 cm−1 (100% absorbance) and 3,177.535 cm−1 (43.304% absorbance). The study has so far been limited to computational methods compatible with the theory of quantum chemistry.
[142] vixra:2310.0099 [pdf]
Superconducting Theory of Confined Electrons
Based on the experimental facts of angle-resolved photoemission spectroscopy (ARPES) and neutron scattering in high-temperature superconductors, a unified theoretical framework centered around polyhedron quantum-well-confined electrons is presented for superconductivity. According to the crystal structure of superconducting materials, the new theory can analytically determine the fundamental properties of copper- and iron-based superconductors, including the Fermi surface structure, the superconducting energy-gap symmetry and value, the superconducting transition temperature, and the spin resonance peaks and parity; the predictions of the theory are in good agreement with experiments. Furthermore, our research provides new insights into the microscopic nature of magnetism, spin, and the Ginzburg-Landau order parameter.
[143] vixra:2310.0098 [pdf]
Electrostatic Fields
Electrostatic fields, cornerstone elements in understanding electrical phenomena, serve as key components in diverse scientific and engineering fields. This paper elucidates the concept of electrostatic fields, explores their properties, and outlines their broad applications. We start from the basics of electric charges and their interactions, leading us to the core principles of electrostatics. A deep dive into Coulomb's law is presented to scrutinize the behavior of electrostatic fields, along with the concept of electric potential and its relationship with the electric field. We underline the instrumental role of electrostatic field analysis in practical applications like electrical power systems, electronics, and telecommunications. Furthermore, we introduce techniques to tackle electrostatic field problems and showcase their applications in engineering and technology. By providing a comprehensive review of electrostatic fields, we aim to deepen understanding and propel further research into this vital domain of electromagnetism.
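Coulomb's law and superposition, which the paper reviews, reduce to a few lines of code. A minimal 2D sketch (point charges in SI units; the dipole configuration is an illustrative example, not taken from the paper):

```python
import math

K = 8.9875517923e9  # Coulomb constant, N m^2 / C^2

def e_field(charges, point):
    """Superpose Coulomb fields of point charges: E = sum K q (r - r_i) / |r - r_i|^3."""
    ex = ey = 0.0
    for (qx, qy, q) in charges:
        dx, dy = point[0] - qx, point[1] - qy
        r3 = (dx * dx + dy * dy) ** 1.5
        ex += K * q * dx / r3
        ey += K * q * dy / r3
    return ex, ey

# A dipole: +1 nC at (-0.01, 0) and -1 nC at (+0.01, 0); field at the midpoint.
dipole = [(-0.01, 0.0, 1e-9), (0.01, 0.0, -1e-9)]
print(e_field(dipole, (0.0, 0.0)))
```

At the midpoint both charges push the field the same way, so the components add; the electric potential follows the same superposition pattern with K·q/|r − r_i|.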
[144] vixra:2310.0092 [pdf]
FDTD Computer Modelling of a Half Wavelength (4pi) Toroidal Cavity Mode With Spin and Angular Momentum
The properties of a resonant half wavelength mode, sometimes called a 4pi mode, are investigated in a toroidal cavity of large aspect ratio. No dividing wall is used but instead the field is given a poloidal (in the direction of the smaller circumference) twist. The toroidal cavity resonator equations are derived by bending a length of cylindrical waveguide into a toroid and changing the field equations from cylindrical to local toroidal. If the toroid aspect ratio is large the errors are small, but the equations must still be considered approximate, and so, in order to confirm the stability and form of the resonant modes, a finite difference time domain (FDTD) program was written to model the propagation of the fields. This also confirms that no false assumptions have been made, particularly regarding how the fields behave where the two ends of the half wave join. This is believed to be the first confirmation of the existence of a half wave toroidal mode without a dividing wall. FDTD simulations of both a toroidal (in the direction of the larger circumference) and a poloidal spinning 4pi mode were also carried out. It was observed that the presence of twist would prevent either a pure toroidal or poloidal spinning mode being produced and that the poloidally spinning field produced a stable mode with both spin and angular momentum.
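The FDTD core such a program relies on is the leapfrog Yee update. A deliberately minimal 1D free-space version (normalized units, Courant number 0.5, hard-wall boundaries) just to show the scheme's shape — the toroidal 3D implementation in the paper is of course far more involved:

```python
import math

# Minimal 1D Yee-scheme FDTD in normalized units -- a sketch of the update loop
# only; geometry, twist, and boundary joining are specific to the paper.
N = 400
ez = [0.0] * N   # electric field
hy = [0.0] * N   # magnetic field, staggered half a cell

for t in range(1000):
    for i in range(N - 1):                 # H update (half time step)
        hy[i] += 0.5 * (ez[i + 1] - ez[i])
    for i in range(1, N):                  # E update (half time step)
        ez[i] += 0.5 * (hy[i] - hy[i - 1])
    ez[N // 4] += math.exp(-((t - 30) ** 2) / 100.0)   # soft Gaussian source

print(max(abs(v) for v in ez))
```

With the Courant number at 0.5 the scheme is stable, so the injected pulse simply propagates and reflects off the perfectly conducting ends without growing.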
[145] vixra:2310.0084 [pdf]
New Method for High-Accuracy Determination of the Time-Span of Electron-Photon Interaction Based on Quantized Beer-Lambert Absorbance
Actual determination of the time-span in absorption spectroscopy and the variation of absorbance vs. concentration are crucial in chemistry and biology. We investigated the molecular absorption spectra of 1,4-diaminoanthraquinone at concentrations of 20-90 μM. We report a violation and quantization of absorbance in the Beer-Lambert law and propose an alternative, quantum-mechanical explanation for the inherent phenomenon. Upon laser pulse excitation, a photon-transfer excited state with a lifespan of 17-96 attoseconds is formed for different concentrations of samples. We furnish a general equation for the electrostatic field and the wave function of photons and its correlation with absorbance. Further, we propose an intricate relationship between the electromagnetic field generated by particles and its wavefunction.
[146] vixra:2310.0063 [pdf]
A New Realist Formulation of Quantum Theory
This article presents a new way of looking at and understanding quantum physics through the lens of a novel realist framework. It addresses core issues of realism, locality, and measurement. It proposes a general quantum ontology consisting of two field-like entities, called W-state and P-state, that respectively account for the wave- and particle-like aspects of quantum systems. Unlike Bohmian mechanics, however, it does not take the conjunction of wave and particle literally. W-state is a generalization of the wavefunction, but has ontic stature and is defined on the joint time-frequency domain. It constitutes a non-classical local reality, consisting of superpositions of quantum waves writ small. P-state enforces entanglement obligations and mediates the global coordination within quantum systems required to bring about wavefunction collapse in a causal fashion consistent with special relativity. The framework brings quantum theory much closer to general relativity. The two share common language, concepts, and principles. It offers a sensible alternative to the Copenhagen dispensation, which actively discourages - indeed, oracularly proscribes - inquiry that seeks to explain quantum mechanics more deeply than the fact that the mathematical formalism works.
[147] vixra:2310.0061 [pdf]
Machine Learning Methods in Algorithmic Trading: An Experimental Evaluation of Supervised Learning Techniques for Stock Price
In the dynamic world of financial markets, accurate price predictions are essential for informed decision-making. This research proposal outlines a comprehensive study aimed at forecasting stock and currency prices using state-of-the-art Machine Learning (ML) techniques. By delving into the intricacies of models such as Transformers, LSTM, Simple RNN, NHits, and NBeats, we seek to contribute to the realm of financial forecasting, offering valuable insights for investors, financial analysts, and researchers. This article provides an in-depth overview of our methodology, data collection process, model implementations, evaluation metrics, and potential applications of our research findings. The research indicates that the NBeats and NHits models exhibit superior performance in financial forecasting tasks, especially with limited data, while Transformers require more data to reach their full potential. Our findings offer insights into the strengths of different ML techniques for financial prediction, highlighting specialized models like NBeats and NHits as top performers, thus informing model selection for real-world applications.
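The evaluation metrics involved (MAE, RMSE) and a naive last-value baseline — which any of the listed models should outperform to be useful — are easy to state concretely. The prices below are made-up numbers for illustration only:

```python
import math

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root mean squared error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

# Naive last-value baseline: tomorrow's forecast is today's price.
prices = [100.0, 101.5, 101.0, 103.2, 104.0, 103.5]
pred = prices[:-1]
true = prices[1:]
print(mae(true, pred), rmse(true, pred))
```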
[148] vixra:2310.0053 [pdf]
Phenomenological Velocity
The intent of this paper is to provide a simple focus on the mathematical concept and solution of phenomenological velocity, to shine light on a worthy topic for mathematicians and physicists alike. Phenomenological velocity is essential to the formulation of a gestalt cosmology. The bibliography of this paper provides references to the extensive research I have conducted on the topic. I have performed conditional integrals of the phenomenological velocity in its most liberated standard-algebraic form, and I have shown that the computational phenomenological velocity satisfies its real-analytic solution when not using the speed of light in scientific notation to obtain the computational version, thus demonstrating that it is a valid solution. Phenomenological velocity has profound consequences for the foundations of physics as civilization moves to a galactic scale and information is communicated at the quantum level; because it is such a mathematical reality, it ought not be ignored when considering topics from hidden dimensions (a real, algebraic technique) and relativity to gravity and dark matter. It gives us a new perspective on how we perceive the meaning of velocity itself with prägnanz, and with that new meaning, perspectives can change. I hope the reader will investigate the combined research I have performed on this topic, available through the works in this bibliography, to fully understand the nature of the arguments made within; this points the direction for future research, perhaps even with intent to encourage experimental design.
[149] vixra:2310.0050 [pdf]
Ratios of Exponential Functions, Interpolation
We describe models of proportions depending on some independent quantitative variables. An explicit formula for inverse matrices facilitates interpolation as a way to calculate the starting values for iterations in nonlinear regression with logistic functions or ratios of exponential functions.
[150] vixra:2310.0048 [pdf]
Anomaly From the 189 Filtered Earthquakes
A specific magnitude budget for the detection of 44 nuclear earthquakes near large urban areas subject to a natural seismic hazard was crucial for finding the typical parameters of the nuclear earthquakes: $5.9-7.9~M_w$, from 1 January 1960 to 15 September 2023, $R_i < 160~km$ with the index $i$ spanning the 1230 largest cities, and a maximal horizontal shaking ratio $10^{M_{w,i}}/R_i^2$ for each specific city $i$ satisfying these typical parameters. This allows us to build a filter $\mathcal{F}_Z$ and to filter out a total of 189 earthquakes around 372 cities with a relatively low background of natural earthquakes with respect to the nuclear ones. Including the 189 filtered earthquakes around these 372 cities, there is a total of 393 cities having enough seismic data with respect to a sufficiently large background of recent smaller earthquakes around the same cities ($M_w \geq M_{w,0} = 4.0$, 1980-2022, $R_i < R_{max} = 160~km$, the Gutenberg-Richter law, and $\Delta N_i \geq 10^{5.9-4.0} > 79$ in the case of an absence of filtered earthquakes with the filter $\mathcal{F}_Z$), in order to derive the probability estimation of having such a maximal horizontal shaking ratio. Finally, there is an 8.4-$\sigma$ anomaly from a statistical excess of the maximal horizontal shaking ratio of the earthquakes filtered with $\mathcal{F}_Z$ and with a probability estimation cutoff of $<0.43$ (there is an artifact close to 1 arising from an exponential behavior inside the probability estimation formula). Of course, when taking random positions for the 1230 largest cities, that 8.4-$\sigma$ anomaly vanishes completely.
[151] vixra:2310.0047 [pdf]
Transforming Education Through AI, Benefits, Risks, and Ethical Considerations
The integration of Artificial Intelligence (AI) into education has the potential to revolutionize traditional teaching and learning methods. AI can offer personalized learning experiences, streamline administrative tasks, enhance feedback mechanisms, and provide robust data analysis. Numerous studies have demonstrated the positive impact of AI on both student outcomes and teacher efficiency. However, caution must be exercised when implementing AI in education, considering potential risks and ethical dilemmas. It is essential to use AI as a tool to support human educators rather than replace them entirely. The adoption of AI in education holds the promise of creating more inclusive and effective learning environments, catering to students of diverse backgrounds and abilities. As AI technology continues to advance, the education sector can anticipate even more innovative applications, further shaping the future of learning. This abstract provides an overview of the multifaceted landscape of AI in education, highlighting its potential benefits, associated challenges, and the importance of responsible integration.
[152] vixra:2310.0046 [pdf]
The Philosophical and Mathematical Implications of Division by 0/0 = 1 in Light of Einstein’s Theory of Special Relativity
The enigma of dividing zero by zero, $0/0$, has perplexed scholars across philosophy, mathematics, and physics, remaining devoid of a clear-cut solution. This lingering conundrum leaves us in an unsatisfactory position, as there emerges a genuine necessity for such divisions, particularly in scenarios involving tensor components that are both set at zero. This article endeavors to grapple with this profound issue by leveraging the insights of Einstein's theory of special relativity. Surprisingly, when we wholeheartedly embrace the ramifications of this theory, it becomes evident that zero divided by zero must equate to one: $0/0 = 1$. Essentially, we are confronted with a pivotal decision: either embrace the feasibility and definition of dividing zero by zero, in accordance with Einstein's theory of special relativity, or reevaluate the integrity of this fundamental theory itself. This exploration delves into the profound consequences arising from this critical choice.
[153] vixra:2310.0045 [pdf]
Infinity Tensors, the Strange Attractor, and the Riemann Hypothesis: An Accurate Rewording of the Riemann Hypothesis Yields Formal Proof
The Riemann Hypothesis can be reworded to indicate that the real part of one half is always balanced at the infinity tensor, by stating that the Riemann zeta function has no more than an infinity tensor's worth of zeros on the critical line. For something to be true in proof, it often requires an outside perspective. In other words, there must be some exterior, alternate perspective or system on or applied to the hypothesis from which the proof can be derived; two perspectives, essentially, must agree. Here, a fractal web with an infinitesimal 3D strange attractor is theorized as present at the solutions to the Riemann zeta function, and in combination with the infinity tensor yields an abstract mathematical object from which the rewording of the Riemann zeta function can be derived. From the rewording, the law that mathematical sequences can be expressed in more concise and manageable forms is applied and the proof is manifested. The mathematical law that any sequence can be expressed in simpler and more concise terms, ∀s ∃s′⊆s: ∀φ: s⊆φ ⇒ s′⊆φ, is the final key to the proof when comparing the real and imaginary parts. Parker Emmerson is affiliated with the now-defunct Marlboro College, where he attained his B.A. in Psychology and Philosophy with a focus on the mathematics of perception in 2010.
[154] vixra:2310.0041 [pdf]
Existence of a Prime in the Interval Between n^2 and n^2 + εn
Oppermann's conjecture states that there is a prime number between n^2 and n^2 + n for every positive integer n. First we show that all integers between x^2 and x^2 + εx can be written as x^2 + i > 4p, where 1 ≤ i ≤ εx and p = (x − m − 2)^2 + j, in which j is a number in the interval 1 ≤ j ≤ ε(x − m − 2). We then prove a generalization of Oppermann's conjecture, i.e., that there is a prime number in the interval between n^2 and n^2 + εn for 0 < ε ≤ 1.
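The ε-generalization stated here is easy to probe empirically for small n (a sketch with trial division only; a finite scan of course proves nothing):

```python
def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    f = 3
    while f * f <= m:
        if m % f == 0:
            return False
        f += 2
    return True

def prime_in_interval(n, eps=1.0):
    """Return a prime p with n^2 < p <= n^2 + eps*n, or None if none exists."""
    p = n * n + 1
    while p <= n * n + eps * n:
        if is_prime(p):
            return p
        p += 1
    return None

# Empirical check of the generalized claim; eps = 1 is Oppermann's original case.
assert all(prime_in_interval(n) for n in range(2, 500))
print(prime_in_interval(10), prime_in_interval(10, 0.5))
```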
[155] vixra:2310.0037 [pdf]
A New Pairing Mechanism Via Chiral Electron-Hole Condensation in Non-BCS Superconductors
A novel chiral electron-hole (CEH) pairing mechanism is proposed to account for non-BCS superconductivity. In contrast to BCS Cooper pairs, CEH pairs exhibit a pronounced affinity to antiferromagnetism for superconductivity. The gap equations derived from this new microscopic mechanism are analyzed for both s- and d-wave superconductivity, revealing marked departures from the BCS theory. Unsurprisingly, CEH naturally describes superconductivity in strongly-correlated systems, necessitating an exceedingly large coupling parameter ($\lambda>1$ for s-wave and $\lambda>\pi/2$ for d-wave) to be efficacious. The new mechanism provides a better understanding of various non-BCS features, especially in cuprate and iron-based superconductors. In particular, CEH, through quantitative comparison with experimental data, shows promise in solving long-standing puzzles such as the unexpectedly large gap-to-critical-temperature ratio $\Delta_0/T_c$, the lack of gap closure at $T_c$, superconducting phase diagrams, and a non-zero heat-capacity-to-temperature ratio $C/T$ at $T=0$ (i.e., the ``anomalous linear term''), along with its quadratic behavior near $T=0$ for d-wave cuprates.
[156] vixra:2310.0035 [pdf]
Exploring the Association Between SNP Rs7903146 and Type 2 Diabetes in a Bangladeshi Population
Type 2 diabetes, often referred to as T2D, is a widespread health condition. This study focuses on the specific genetic variation in a single nucleotide polymorphism (SNP), rs7903146, in the gene TCF7L2 and its potential association with T2D. We examined DNA samples from Bangladeshi individuals to see whether the genetic variant rs7903146 is associated with T2D in this community. We discovered that the CC variant of rs7903146 was extremely prevalent, appearing in all of the samples from people without T2D and in the majority of the samples from people with T2D, out of the 38 sequences examined (16 from people without T2D and 22 from people with T2D). In a small proportion of T2D patients there was also a less prevalent variant, CT. Surprisingly, the Bangladeshi population in this study did not show a clear association between rs7903146 and T2D. This counters what several earlier investigations have found, and it opens the door for future research to fully comprehend the genetic and environmental causes of T2D and to build preventative treatments for this quickly increasing global health problem.
[157] vixra:2310.0032 [pdf]
Adaptive Posterior Distributions for Uncertainty Analysis of Covariance Matrices in Bayesian Inversion Problems for Multioutput Signals
In this paper we address the problem of performing Bayesian inference for the parameters of a nonlinear multi-output model and the covariance matrix of the different output signals. We propose an adaptive importance sampling (AIS) scheme for multivariate Bayesian inversion problems, which is based on two main ideas: the variables of interest are split in two blocks, and the inference takes advantage of known analytical optimization formulas. We estimate both the unknown parameters of the multivariate nonlinear model and the covariance matrix of the noise. In the first part of the proposed inference scheme, a novel AIS technique called adaptive target AIS (ATAIS) is designed, which alternates iteratively between an IS technique over the parameters of the nonlinear model and a frequentist approach for the covariance matrix of the noise. In the second part of the proposed inference scheme, a prior density over the covariance matrix is considered, and the cloud of samples obtained by ATAIS is recycled and re-weighted to obtain a complete Bayesian study of the model parameters and covariance matrix. ATAIS is the main contribution of the work. Additionally, the inverted layered importance sampling (ILIS) is presented as a possible compelling algorithm (but based on a conceptually simpler idea). Different numerical examples show the benefits of the proposed approaches.
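As a much-reduced illustration of the importance-sampling machinery involved — this is plain self-normalized IS on a toy 1D Gaussian inversion, not the paper's ATAIS or ILIS schemes, and the model and numbers are assumptions:

```python
import math
import random

rng = random.Random(1)

# Toy inversion problem: y = theta + noise, noise ~ N(0, 1); one observation.
y_obs = 1.5

def log_post(theta):
    """Unnormalized log-posterior: N(0, 2^2) prior times Gaussian likelihood."""
    return -theta**2 / (2 * 4.0) - (y_obs - theta) ** 2 / 2.0

def snis_mean(n=50000, prop_mu=0.0, prop_sig=3.0):
    """Self-normalized importance sampling with a wide Gaussian proposal."""
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(prop_mu, prop_sig)
        logq = -(x - prop_mu) ** 2 / (2 * prop_sig**2)  # up to a constant
        w = math.exp(log_post(x) - logq)
        num += w * x
        den += w
    return num / den

print(snis_mean())  # analytic posterior mean here is y_obs * 4 / (4 + 1) = 1.2
```

An adaptive scheme such as ATAIS would additionally move the proposal parameters between iterations and alternate with an update of the noise covariance; the weighting step above stays the same.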
[158] vixra:2310.0031 [pdf]
Fictitious Currents as a Source of Electromagnetic Field
In this paper we introduce the idea of electric fictitious currents for the electromagnetic field. Electric fictitious currents are currents that arise in electrodynamics when we change the topology of space. We show, with a specific example, how fictitious currents may be the source of magnetic moment of a singularity.
[159] vixra:2310.0030 [pdf]
The Deep Relation of the Non-Trivial Zeros of the Riemann Zeta Function with Electromagnetism and Gravity
In one of my previous articles (https://vixra.org/abs/1701.0042) I already demonstrated the deep relationship of this Riemann function (the canonical partition function of the imaginary parts of the non-trivial zeros of the Riemann zeta function) with gravity and electromagnetism. In this work we recover it to show its deep connection with a new, very simple function that uses the imaginary part of the first zero of the Riemann zeta function together with another well-known function of the non-trivial zeros of the zeta function. Its extraordinary simplicity, with no random terms appearing, and the fact that in both functions (the canonical partition function of the imaginary parts of the non-trivial zeros of the zeta function and the new one, derived from the imaginary part of the first zero of zeta) the Planck mass, Newton's gravitational constant and the elementary electric charge are present, implies that their coincidence is completely impossible. For some time now, both mathematicians and physicists have been trying to demonstrate the Riemann hypothesis from quantum mechanics, but so far this has not been achieved. Our work proceeds in reverse and demonstrates that the Riemann zeta function for non-trivial zeros plays an essential role in quantum mechanics and in a possible unification theory, as will be observed in the equations that we will show. We even dare to conjecture that the physical baryon density parameter (2018 value: 0.0224 ± 0.0001) is obtained with a function involved in this work (the non-trivial zeros of the zeta function).
[160] vixra:2310.0027 [pdf]
A New Solution to the Strong CP Problem
We suggest a new solution to the strong CP problem. The solution is based on the proper use of the boundary conditions for the QCD generating functional integral. It obeys the principle of renormalizability of Quantum Field Theory and does not involve new particles such as axions.
[161] vixra:2310.0023 [pdf]
Solving Particle-Antiparticle and Cosmological Constant Problems
Following the results of our publications, we argue that the fundamental objects in particle theory are not elementary particles and antiparticles but objects described by irreducible representations (IRs) of the de Sitter (dS) algebra. One might ask why, then, experimental data give the impression that particles and antiparticles are fundamental and that there are conserved additive quantum numbers (electric charge, baryon number and others). The reason is that, at the present stage of the universe, the contraction parameter $R$ from the dS to the Poincare algebra is very large and, in the formal limit $R\to\infty$, one IR of the dS algebra splits into two IRs of the Poincare algebra corresponding to a particle and its antiparticle with the same masses. The problem of why the quantities $(c,\hbar,R)$ have the values they do does not arise because they are contraction parameters for transitions from more general Lie algebras to less general ones. The baryon asymmetry of the universe problem then does not arise either. At the present stage of the universe (when the semiclassical approximation is valid), the phenomenon of cosmological acceleration (PCA) is described without uncertainties as an inevitable kinematical consequence of quantum theory in the semiclassical approximation. In particular, it is not necessary to invoke dark energy, the physical meaning of which is a mystery. In our approach, background space and its geometry (metric and connection) are not used and $R$ has nothing to do with the radius of dS space, but, in the semiclassical approximation, the results for PCA are the same as in General Relativity if $\Lambda=3/R^2$, i.e., $\Lambda>0$, and there is no freedom in choosing the value of $\Lambda$.
[162] vixra:2310.0018 [pdf]
Vortex Model of Plane Poiseuille Flow of Non-Newtonian Fluid
We present a description of plane Poiseuille flow of a non-Newtonian time-independent fluid based on symmetric equations, which take into account both the longitudinal motion and the rotation of the vortex tubes. This model has an analytical solution in the form of a two-parametric velocity distribution, which is in good agreement with velocimetry data in microchannels. The advantage of this approach is that, in contrast to the Ostwald-de Waele power law, it provides a more accurate approximation of experimental velocity profiles for different Reynolds numbers by model profiles corresponding to the same viscosity parameter. We believe that this simple model can be useful for making adequate estimates of the parameters of non-Newtonian time-independent fluids in engineering hydrodynamics.
[163] vixra:2310.0016 [pdf]
Requiring Negative Probabilities from "The Thing" Researched, Else that Thing Doesn’t Exist, is Insufficient Ground for Any Conclusion
It is demonstrated that the statistical method of the famous Aspect-Bell experiment requires negative densities and negative probabilities from "the thing" researched, or else that thing does not exist. The thing here refers to Einstein's hidden variables. This requirement in the experiment is absurd, and so the results of such an experiment are meaningless.
[164] vixra:2310.0014 [pdf]
The Growth of the Universe: Another Explanation for Redshift
At present, the idea that our space-time is emergent, as a kind of low-energy phase transition in some kind of condensed matter or superconductor, is becoming quite popular. This space-time forms as a kind of new network, whose volume, measured in Planck units, equals the number of network nodes. The growth of such a network and the increase in its volume occur not through the stretching of existing cells but through the addition of new cells. Considering the growth of such a network analogously to the growth of the volume of a new phase, one can naturally explain the redshift and many other problems of modern cosmology.
[165] vixra:2310.0011 [pdf]
What is Entanglement?
Entanglement is information distributed over the parts of a system, classical or quantum. It can be modeled as a reduction of the structure group, of horizontal symmetries, extending the Gauge Theory paradigm. Many conflicting interpretations are rooted in considering quantum properties as ``intrinsic'', e.g. in Point-Form QFT / Gauge Theory, in needing a causal connection for ``changing'' the unknown state of the partner particle, or in confirming that QM is (was) ``incomplete'': EPR. The natural explanation of entanglement starts with Einstein's time synchronization and takes advantage of the advances in our models of particle physics and quantum computing. The ``external variables'' emerge from the ``intrinsic variables'', and the entanglement relations are in fact generalizations of conservation laws. New aspects are discussed: beyond the Noether Theorem, from gauge fiber to horizontal symmetry groups; Hopf algebras model the change of symmetry group, analogous to creation-annihilation of particle-antiparticle pairs, but for information, corresponding to entanglement. Relations with the quantum eraser, the 2-slit experiment and retrocausality are discussed.
[166] vixra:2310.0005 [pdf]
A String Model of Particles in 6D
A model for particles based on preons in chiral, vector and tensor/graviton supermultiplets of unbroken global supersymmetry is engineered. The framework of the model is little string theory. Some phenomenological results are discussed.
[167] vixra:2309.0157 [pdf]
Exploring the Accelerating Expansion of the Universe
The relative velocity between objects affects the interaction between them. The effect caused by the chase between objects is called the generalized Doppler effect. The speed at which gravitational field energy propagates is finite, so there is also a chase relationship between the gravitational field energy and the object. This paper explores whether the Doppler effect of the gravitational field can cause the slow expansion of planetary orbits, and then asks whether the accelerating expansion of the universe also arises from this Doppler effect of the gravitational field.
[168] vixra:2309.0149 [pdf]
Hyperparameter Optimization and Interpretation in Machine Learning
Machine learning has undergone tremendous advancements, paving the way for a myriad of applications across industries. In the midst of this progress, the significance of hyperparameter tuning and model evaluation cannot be overstated, as they play a critical role in achieving optimal model performance. This project delves into the realm of ML model optimization and evaluation, harnessing Bayesian Optimization, SHAP (SHapley Additive exPlanations), and traditional evaluation metrics. By focusing on a decision tree classifier, the study investigates the efficiency of various hyperparameter tuning methods, the interpretability of model decisions, and the robustness of performance metrics. Preliminary results suggest that Bayesian Optimization may offer advantages in efficiency over traditional tuning methods. Furthermore, SHAP values provide deeper insights into model decision-making, fostering better transparency and trust in ML applications.
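The tuning loop the abstract describes can be sketched with the simplest baseline it compares against: random search over a hyperparameter space. This is a minimal illustration, not the paper's experiment; the objective function and search space below are hypothetical stand-ins for a cross-validated decision-tree score.

```python
import random

def random_search(objective, space, n_trials=200, seed=0):
    """Sample hyperparameter settings uniformly and keep the best one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical stand-in for a cross-validated score of a decision tree,
# peaked at max_depth=5, min_samples_leaf=3 (invented numbers).
def toy_score(cfg):
    return -((cfg["max_depth"] - 5) ** 2 + (cfg["min_samples_leaf"] - 3) ** 2)

space = {"max_depth": list(range(1, 6)), "min_samples_leaf": list(range(1, 6))}
best_cfg, best_score = random_search(toy_score, space)
```

A Bayesian optimizer replaces the uniform sampling step with a surrogate model that proposes promising configurations, which is where the efficiency gains the abstract reports would come from.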
[169] vixra:2309.0146 [pdf]
A Polynomial Solution for the 3-Sat Problem
In this paper, an algorithm is presented that solves the 3-SAT problem in a polynomial runtime of O(n^3), which would imply P=NP. The 3-SAT problem asks whether a logical expression, consisting of clauses of up to 3 literals connected by OR and interconnected by AND, is satisfiable. For the solution a new data structure, the 3-SAT graph, is introduced. It groups the clauses of a 3-SAT expression into coalitions, which contain all clauses whose literals consist of the same variables. The nodes of the graph represent the variables connecting the corresponding coalitions. An algorithm R is introduced that identifies relations between clauses by transmitting markers, called upgrades, through the graph, making use of implications. The algorithm starts sequentially for every variable and creates start upgrades, one for the variable's negated and one for its non-negated literals. It is shown that the start upgrades have to lie within a specific clause pattern, called an edge pattern, to mark the beginning or end of an unsatisfiable sequence. The algorithm eventually identifies other kinds of patterns within the upgraded clauses. Depending on the pattern, the algorithm either sends the upgrades on through the graph, creates new following upgrades to extend the upgrade path (a subgraph storing all previous upgrades), or, if all connector literals of a pattern have received upgrades of the same path or two corresponding following upgrades circle, marks the upgrade as circling. If an upgrade circles, it is unsatisfiable on its path. It is proven that if, after several execution steps of algorithm R, two corresponding start upgrades circle, then the expression is unsatisfiable, and that if no upgrade steps are possible anymore and the algorithm has not returned unsatisfiable, the expression is satisfiable.
Algorithm R is similar to existing algorithms that solve 2-SAT in polynomial time, which also make use of implications on a graph.
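For contrast with the claimed polynomial algorithm R, the standard exponential-time baseline for 3-SAT can be stated in a few lines. This is a minimal brute-force sketch of the problem being solved, not the paper's algorithm; the DIMACS-style clause encoding is a common convention, not taken from the paper.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively test all 2^n assignments.

    Each clause is a tuple of non-zero ints: +i means variable i,
    -i means its negation (DIMACS-style encoding).
    """
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 OR x2 OR x3) AND (NOT x1 OR NOT x2 OR NOT x3) is satisfiable.
sat = brute_force_sat([(1, 2, 3), (-1, -2, -3)], 3)
# x1 AND NOT x1 is not.
unsat = brute_force_sat([(1,), (-1,)], 1)
```

The 2^n loop is exactly what a polynomial 3-SAT algorithm would have to avoid; for 2-SAT, the known polynomial solutions the abstract mentions do so via implication graphs and strongly connected components.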
[170] vixra:2309.0143 [pdf]
The Application of the Theory of Variable Speed of Light on the Universe
By applying the theory of variable speed of light, the galactic redshift can be described as a phenomenon which only SEEMS to be a movement, and it can be explained by the variation of the cosmic gravitational potential. An alternative concept of the universe is developed, in which only very general assumptions about the properties of the universe are made. The worldview arising from it is simpler than the Standard Model. The theory of variable speed of light is able to describe the Hubble diagram consistently in a new way, nevertheless without having to introduce any parameters.
[171] vixra:2309.0139 [pdf]
Jacobi and Lagrangian Formulation of the Classical Cosmological Equations
Classical mechanics has been a well-established field for many years, but there are still challenges that can be addressed using modern techniques. When dealing with classical mechanics problems, the first step is usually to construct a mathematical expression called the Hamiltonian from a known function called the Lagrangian. This involves standard procedures for establishing relationships such as the Poisson bracket, canonical momenta, the Euler-Lagrange equations, and the Hamilton-Jacobi relations. In this paper, we focus on a specific problem in the calculus of variations: finding the Lagrangian function that, when used in the Euler-Lagrange equation, produces a given differential equation. To tackle this problem, we employ two distinct methods to determine the Lagrangian and, subsequently, the Hamiltonian for the cosmological equations derived from General Relativity. These equations describe the motion of celestial objects in the universe and are of second order.
[172] vixra:2309.0137 [pdf]
Geometry, Symmetries, and Quantization of Scalar Fields in de-Sitter Space-Time
The paper commences by examining the geometric properties of de Sitter space-time, with a specific focus on the isometries generated by Killing vectors. It also investigates various metrics that are applicable to specific regions of the space-time, revealing that in the distant future the symmetries exhibit a local structure similar to that of $R^3$. Furthermore, the classical Klein-Gordon equation is solved in this space-time, leading to the discovery that energy is not conserved. The solutions to the Klein-Gordon equation yield intriguing outcomes that could enable observations from the early inflationary era. Finally, the primary objective of the paper is to comprehensively examine a quantized scalar field in the de Sitter background, exploring the solutions for the two-point function and analyzing their behavior at both early and late times.
[173] vixra:2309.0133 [pdf]
Solving Cosmological Constant Problem
Physicists usually believe that physics cannot (and should not) derive the values of c and ћ but should derive the value of the cosmological constant Λ. This problem has been considered fundamental since the phenomenon of cosmological acceleration (PCA) was discovered in 1998. This phenomenon is usually considered in the framework of General Relativity (GR), where the main uncertainty is how the background space is treated. If it is flat, PCA is usually treated as a manifestation of dark energy and (as acknowledged in the literature) its nature is currently a mystery. On the other hand, if it is curved, the problem arises of why Λ has the value it does. However, in our approach, based only on universally recognized results of physics, the solution of the problem contains no uncertainties because PCA is an inevitable kinematical consequence of quantum theory in the semiclassical approximation. Since the de Sitter (dS) algebra is semisimple, it is the most general ten-dimensional Lie algebra, as it cannot be obtained by contraction from other ten-dimensional Lie algebras. Let R be the parameter of contraction from the dS algebra to the Poincare one. Then the problem of why the quantities (c,ћ,R) have the values they do does not arise because they are contraction parameters for transitions from more general Lie algebras to less general ones. In our approach, background space and its geometry (metric and connection) are not used but, in the semiclassical approximation, the result for PCA is the same as in GR if Λ=3/R^2.
[174] vixra:2309.0131 [pdf]
Double Field Theory: Uniting Gauge and Gravity Theories Through the Double Copy of Yang-Mills
In the realm of theoretical physics, scientists have long been intrigued by the link between gauge theories and gravity. This study explores a fascinating idea called the "double copy technique," which reveals a deep connection between these two seemingly different theories. While gauge theories, like Yang-Mills, describe basic interactions in a simple and elegant way, gravity, despite its symmetry, is a complex and challenging theory in the quantum world. This paper investigates "double field theory" (DFT) and its connection to the double copy method. This connection shows that gauge and gravity theories are remarkably similar at the quantum level. The double copy technique essentially transforms gravity into a sort of "squared" gauge theory, offering insights into how color and kinematics are related in gauge theories. By carefully studying the mathematics and the important equations, this paper explains how this connection is tied to color-kinematics duality. The study concludes by introducing the DFT action, which is derived through the double copy method and does not rely on a specific background. This surprising result highlights how the complexity of gravity can be beautifully linked to the simplicity of Yang-Mills theory.
[175] vixra:2309.0127 [pdf]
On Configuration Space
A particular class of real manifolds (Hermitian spaces) naturally model smooth, possibly complex n-spaces. We show how to realize such a space as a restriction of a super-smooth stack using a compass. We also discuss the classical relationship between iterated loop spaces and the configuration space of a particle.
[176] vixra:2309.0126 [pdf]
Exploring the Potential Shifts in Our Understanding of Space and Time Through Quantum Gravity
In this paper, I tried to provide an overview of how various quantum gravity approaches prompt us to reconsider our understanding of space and time. The primary focus has been on two prominent contenders: string theory and loop quantum gravity. However, it’s important to bear in mind that these theories remain unverified and lack a universally accepted consensus. As we navigate through these ideas, it becomes evident that our conventional notions of space and time might necessitate a fundamental shift. The very fabric of spacetime could reveal intricacies that challenge our prior assumptions. Moreover, our exploration will encompass diverse viewpoints on the nature of time within the realm of quantum gravity.
[177] vixra:2309.0125 [pdf]
Theory of Compton Effect in Dielectric Medium
We determine the Compton effect from the Volkov solution of the Dirac equation for a process in a medium with index of refraction n. The Volkov solution involves the mass shift, or mass renormalization, of the electron. We determine the modified Compton formula for the considered physical situation. The index of refraction causes the wavelengths of the scattered photons to be shorter for some angles than the wavelengths of the original photons. This is the anomalous Compton effect. Since the wavelength shift for visible light is only 0.01 percent, the Compton effect for visible light in a dielectric medium can be performed by well-educated experimenters.
[178] vixra:2309.0118 [pdf]
Frenet's Trihedron of the Second Order
Based on the remarkable property of the Darboux vector of being perpendicular to the normal, I define a new trihedron associated with curves in space and prove that this trihedron also satisfies Frenet's formulas. Unlike the previous paper, where I used the trigonometric form of Frenet's formulas for simplicity, in this paper I construct a proof based only on curvature and torsion, respectively the darbuzian and the lancretian.
[179] vixra:2309.0117 [pdf]
Triedrul Lui Frenet de Ordinul al Doilea (Frenet's Trihedron of the Second Order)
Based on the remarkable property of the Darboux vector of being perpendicular to the normal, I define a new trihedron associated with curves in space and prove that this trihedron also satisfies Frenet's formulas. Unlike the previous paper, where I used the trigonometric form of Frenet's formulas for simplicity, in this paper I construct a proof based only on curvature and torsion, respectively the darbuzian and the lancretian.
[180] vixra:2309.0115 [pdf]
On the Mechanics of Quasi-Quanta Realization
We model an absolute reference frame using a pullback on a certain locally trivial line bundle. We demonstrate that this pullback is unrealizable in $mathbb{R}^4$. We devote section 3 to process-based thinking.
[181] vixra:2309.0109 [pdf]
The Geometric Collatz Correspondence
The Collatz Conjecture, one of the most renowned unsolved problems in mathematics, presents a deceptive simplicity that has perplexed both experts and novices. Distinctive in nature, it leaves many unsure of how to approach its analysis. My exploration into this enigma has unveiled two compelling connections: firstly, a link between Collatz orbits and Pythagorean triples; secondly, a tie to the problem of tiling a 2D plane. The latter association suggests a potential relationship with Penrose tilings, which are notable for their non-repetitive tiling of the plane. This quality, reminiscent of the unpredictable yet non-repeating trajectories of Collatz sequences, provides a novel avenue to probe the conjecture's complexities. To clarify these connections, I introduce a framework that interprets the Collatz function as a process that maps each integer to a unique point on the complex plane. In a curious twist, my exploration into the 3D geometric interpretation of the Collatz function has nudged open a small yet intriguing door to a potential parallel in the world of physics. A subtle link appears to manifest between the properties of certain objects in this space and the atomic energy spectral series of hydrogen, a fundamental aspect of quantum mechanics. While this connection is in its early stages and the depth of its significance is yet to be fully unveiled, it subtly implies a merging where pure mathematics and applied physics might come together. The findings in this paper have led me to pursue the development of a new type of number I call a Cam number, which stands for "complex and massive", indicating a number whose properties on one hand act like a scalar and on the other hand act like a complex number. Cam numbers can be thought of as having somewhat dual identities which reveal their properties and behavior under iteration of the Collatz function. This paper serves as a motivator for the pursuit of a theory of Cam numbers.
[182] vixra:2309.0104 [pdf]
Interpretation of the Double-Slit Experiment Based on the Quantum Light
The traditional understanding of the double-slit experiment, which serves as a classic demonstration of wave-particle duality, is being reconsidered due to new insights into the role of the central barrier between the slits. Contrary to the expectation of seeing two stripes on the screen when treating light as particles, the pattern can be more complex. This complexity arises from the interaction of light with the central barrier, where it is absorbed and re-emitted in the form of surface plasmon polaritons (SPPs). These SPPs travel along the barrier's surface and contribute to the observed interference pattern. If their progress is interrupted by a Geiger counter, the pattern is altered, suggesting that the particle nature of light alone can sufficiently explain the phenomena. This challenges the traditional wave-particle duality interpretation and calls for a more nuanced understanding of quantum behavior.
[183] vixra:2309.0101 [pdf]
Conservation of Baryon and Lepton Number is an Effect of Electric and Magnetic Charges
The conservation of baryon number and lepton number has not yet been explained. Here I present a new nomenclature where I redefine isospin and hypercharge. By doing so I explain baryon and lepton number conservation as an effect of the electric-magnetic duality and the U(1) x U(1) gauge symmetry of quantum electromagnetodynamics. By using this method I predict the quantum numbers of an octet of magnetic monopoles. Another surprising result is that both leptons and quarks have nonzero magnetic isospin, a new quantum number.
[184] vixra:2309.0093 [pdf]
Bi-Verse Theory Visualizing Universe as One Side of a Thread
In this theory, titled Bi-verse, we aim to provide a unique mathematical explanation for the current nature of the universe. By conceptualizing the world as a thread, we introduce a novel approach in which particles are generated by the oscillations of this thread. This paper sets our work apart by offering a new framework that diverges from traditional theories about the nature of the universe.
[185] vixra:2309.0088 [pdf]
Collatz Conjecture Proof for Special Integer Subsets and a Unified Criterion for Twin Prime Identification
This paper presents a proof of the Collatz conjecture for a specific subset of positive integers: those formed by multiplying a prime number p greater than three with an odd integer u derived using Fermat's little theorem. Additionally, we introduce a novel screening criterion for identifying candidate twin primes, extending our previous work linking twin primes (p and p+2) with the equation 2^(p−2) = pu + v, where unique solutions for u and v are required. This unified criterion offers a promising approach to twin prime identification within a wider range of integers, further advancing research in this mathematical domain.
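The Collatz iteration underlying the claimed subset result is easy to check numerically. The sketch below is a plain orbit check, not the paper's proof; the particular primes and odd multipliers used as examples are illustrative choices, not taken from the paper.

```python
def collatz_reaches_one(n, max_steps=10_000):
    """Iterate n -> n/2 (n even) or 3n+1 (n odd); report whether 1 is reached."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return n == 1

# Spot-check products p*u of an odd prime p > 3 with odd integers u.
checked = all(collatz_reaches_one(p * u)
              for p in (5, 7, 11, 13)
              for u in range(1, 200, 2))
```

Such a check can only confirm individual orbits, of course; the abstract's contribution is an argument for the whole subset at once.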
[186] vixra:2309.0086 [pdf]
Natural Units, Pi-Groups and Period Laws
In the context of QFT and Gauge Theory, the introduction of Natural Units, as a ``quantization in disguise'', combined with Buckingham's Pi-Theorem, provides a direct connection with de Rham periods, as also hinted by Feynman amplitudes, Dessins d'Enfant, Belyi map models of baryon modes, etc. A program emerges: Physics Laws as Period Laws, with Alpha, as an element of the Pi-groups, a period. Our models of Physical Reality emerge from the union of Cohomological Physics and Number Theory, helping us understand ``the unreasonable effectiveness of Mathematics''. An overview of the Network Model is included, with impacts on the Sciences in general. Further prospects for understanding the fine structure constant are presented.
[187] vixra:2309.0084 [pdf]
The Prescribed Measurement Problem: Toward a Contention-Free Formulation of Quantum Physics
Quantum mechanics, though empirically validated, confronts numerous interpretative challenges, predominantly centered around the quantum measurement problem. Addressing these challenges, we introduce the "Prescribed Measurement Problem," which serves as an inversion to the traditional wavefunction collapse problem. Rather than axiomatizing the entire framework, our approach emphasizes the axiomatization of a sequence of prescribed measurements, highlighting their complex-phase attributes and inherent linearity. Leveraging entropy maximization techniques specific to these measurements, we recover the core elements of quantum mechanics: the Schrödinger equation, Born rule, complex Hilbert spaces, unitary evolution, and self-adjoint operators. Collectively, this approach offers a comprehensive and equivalent formulation of quantum mechanics that integrates measurement outcomes while sidestepping the traditional measurement problem.
[188] vixra:2309.0082 [pdf]
Theory of Electrons System
A self-consistent Lorentz equation is proposed and solved for electrons and the structures of particles and atomic nuclei. The static properties and decays are derived, all meeting experimental data. The equation of general relativity with solely the electromagnetic field is discussed as the basis of this theory.
[189] vixra:2309.0076 [pdf]
Prototype-based Feature Selection with the Nafes Package
This paper introduces Nafes as a prototype-based feature selection package designed as a wrapper centered on the highly interpretable and powerful Generalized Matrix Learning Vector Quantization (GMLVQ) classification algorithm and its local variant (LGMLVQ). Nafes utilizes the learned relevances evaluated by the mutation validation scheme for Learning Vector Quantization (LVQ), which iteratively converges to selected features that relevantly contribute to the prototype-based classifier decisions.
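The prototype-based classifiers Nafes wraps descend from plain LVQ1, which can be sketched in a few lines. This is a minimal illustration of the prototype-update idea only, not the GMLVQ/LGMLVQ algorithms themselves (those additionally learn a relevance matrix, whose diagonal is what the package uses to score features); the 2-D toy clusters are invented.

```python
import random

def lvq1_fit(xs, ys, prototypes, proto_labels, lr=0.05, epochs=50, seed=0):
    """LVQ1: move the nearest prototype toward same-class samples, away otherwise."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            x, y = xs[i], ys[i]
            dists = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in prototypes]
            k = dists.index(min(dists))
            sign = 1.0 if proto_labels[k] == y else -1.0
            prototypes[k] = [p + sign * lr * (a - p)
                             for a, p in zip(x, prototypes[k])]
    return prototypes

def lvq_predict(x, prototypes, proto_labels):
    dists = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in prototypes]
    return proto_labels[dists.index(min(dists))]

# Two invented 2-D clusters around (0, 0) and (5, 5).
rng = random.Random(1)
xs = [[rng.gauss(0, 0.5), rng.gauss(0, 0.5)] for _ in range(30)] + \
     [[rng.gauss(5, 0.5), rng.gauss(5, 0.5)] for _ in range(30)]
ys = [0] * 30 + [1] * 30
protos = lvq1_fit(xs, ys, [[1.0, 1.0], [4.0, 4.0]], [0, 1])
```

GMLVQ replaces the squared Euclidean distance with a learned quadratic form; the learned per-feature relevances are then the quantities Nafes thresholds for feature selection.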
[190] vixra:2309.0075 [pdf]
Microcontrollers: A Comprehensive Overview and Comparative Analysis of Diverse Types
This review paper provides a comprehensive overview of five popular microcontrollers: AVR, 8052, PIC, ESP32, and STM32. Each microcontroller is analyzed in terms of its architecture, peripherals, development environment, and application areas. A comparison is provided to highlight the key differences between these microcontrollers and assist engineers in selecting the most appropriate microcontroller for their specific needs. This paper serves as a valuable resource for beginners and experienced engineers alike, providing a comprehensive understanding of the different microcontrollers available and their respective applications.
[191] vixra:2309.0067 [pdf]
The Planets of BH1
A star — about the size of the Sun — orbits the nearby black hole BH1. The duo emits a gravitational wave that affects Earth's atmospheric pressure. With this receiving antenna and an extremely narrow-band receiver, we measure and evaluate the signal of the binary system. It is phase modulated with seven different frequencies that obey a simple formation law. The parameters of the PM allow the orbital periods, masses and positions of the planets to be estimated. The unexpectedly high values of the modulation index suggest that gravitational waves propagate slower than the speed of light.
[192] vixra:2309.0063 [pdf]
Tumor Angiogenic Optimizer: a New Bio-Inspired Based Metaheuristic
In this article, we propose a new metaheuristic inspired by the morphogenetic cellular movements of endothelial cells (ECs) that occur during the tumor angiogenesis process. The algorithm starts with a random initial population. In each iteration, the best candidate is selected as the tumor, while the other individuals in the population are treated as ECs migrating toward the tumor, following coordinated dynamics through a spatial relationship between tip and follower ECs. The mathematical model of EC movements in angiogenic morphogenesis is detailed in the article. This algorithm has an advantage over similar optimization metaheuristics: the model parameters are already configured according to the modeling of the tumor angiogenesis phenomenon, sparing researchers from initializing them with arbitrary values. Subsequently, the algorithm is compared against well-known benchmark functions, and the results are validated through a comparative study with Particle Swarm Optimization (PSO). The results demonstrate that the algorithm is capable of providing highly competitive outcomes. The proposed algorithm is also applied to a real-world problem. The results show that the proposed algorithm is effective in solving constrained optimization problems, surpassing other known algorithms.
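The "population migrates toward the current best" scheme the abstract describes can be sketched generically. This is a minimal best-guided migration loop under assumed parameters, not the paper's tip/follower EC dynamics; the sphere function is a standard benchmark, not necessarily one the paper uses.

```python
import random

def migrate_to_best(f, dim, pop=20, iters=300, step=0.5, noise=0.1, seed=3):
    """Generic best-guided migration: every individual moves part of the way
    toward the current best solution, plus a small random perturbation."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(xs, key=f)[:]
    for _ in range(iters):
        for i, x in enumerate(xs):
            xs[i] = [xi + step * (bi - xi) + rng.gauss(0, noise)
                     for xi, bi in zip(x, best)]
        cand = min(xs, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

sphere = lambda v: sum(c * c for c in v)   # standard benchmark: minimum at 0
best = migrate_to_best(sphere, dim=3)
```

The abstract's claimed advantage is that its migration parameters come from the angiogenesis model rather than being hand-tuned like `step` and `noise` here.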
[193] vixra:2309.0062 [pdf]
Expressing Even Numbers Beyond 6 as Sums of Two Primes
The "strong Goldbach conjecture" posits that any even number exceeding 6 can be represented as the sum of two prime numbers. This study explores this hypothesis, leveraging the constancy of odd-integer counts and cumulative sums within the positive integers. By identifying odd prime numbers p_{α1} and p_{α2} within the intervals [3, n] and (n, 2n−2), we demonstrate a transformative process grounded in the unchanging nature of odd-number counts and their cumulative sums. Through this process, we establish the equation 2n = p_{α1} + p_{α2}, offering a significant stride toward unraveling the enigmatic core of the strong Goldbach conjecture.
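The conjecture's statement is easy to verify numerically for small even numbers. The sketch below is a plain exhaustive check, not the paper's interval-counting argument; the search bound of 2000 is an arbitrary choice for illustration.

```python
def goldbach_pair(n):
    """Return odd primes (p, q) with p + q = n, searching p in [3, n/2]; None if absent."""
    def is_prime(k):
        if k < 2:
            return False
        if k % 2 == 0:
            return k == 2
        d = 3
        while d * d <= k:
            if k % d == 0:
                return False
            d += 2
        return True
    for p in range(3, n // 2 + 1, 2):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number above 6 up to 2000 decomposes, e.g. 100 = 3 + 97.
all_found = all(goldbach_pair(n) is not None for n in range(8, 2001, 2))
```

The first prime found always lies in [3, n/2] and its partner in (n/2, n−3), mirroring the two intervals the abstract works with.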
[194] vixra:2309.0058 [pdf]
The Synchrotron Spectrum from the Volkov Solution of the Dirac Equation
We derive the power spectrum of synchrotron radiation from the Volkov solution of the Dirac equation and from the S-matrix. We also generalize the Bargmann-Michel-Telegdi equation for the spin motion to the case where it involves the radiation term. This equation plays a crucial role in the spin motion of protons in the LHC and at Fermilab. Axion production in a magnetic field described by the Volkov solution is discussed.
[195] vixra:2309.0055 [pdf]
Common Points of Parallel Lines and Division by Zero Calculus
In this note, we will consider some common points of two parallel lines on the plane from the viewpoint of the division by zero calculus. Usually, we will consider that there are no common points or the common point is the point at infinity for two parallel lines. We will, surprisingly, introduce a new common point for two parallel lines from the viewpoint of the division by zero calculus.
[196] vixra:2309.0049 [pdf]
The Sum of Positive and Negative Prime Numbers are Equal
This paper unveils a profound equation that harnesses the power of natural numbers to establish a captivating theorem: the balance between the summation of positive and negative prime numbers, intricately linked through the medium of natural numbers. As a corollary, the essence of natural numbers emerges as a testament to the harmonious interplay between even and odd elements. Notably, we expose the remarkable revelation that odd numbers find expression both as the aggregate of prime divisors and as the sum of prime numbers, fusing diverse mathematical concepts into an elegant unity. This work reshapes the landscape of number theory, illuminating the hidden connections between primes, naturals, and their arithmetical compositions.
[197] vixra:2309.0047 [pdf]
Spin Angular Momentum Explained by the Classical Quantum Model
This paper proposes a new picture of spin angular momentum. In the conventional picture of spin, the precession of the axis is based on the assumption that the electron has an acceleration. In this study we first consider the case where the acceleration is expressed as a simple harmonic oscillator and the precession as a sinusoidal function. In this case, a double angle appears in the outer product of the Thomas precession, confirming that an angular velocity of one revolution of space can be obtained with half the circumference of the circle. It also shows the results of operations in which a single electron can take both up and down spin depending on the time transition. Next we consider the case of Lorentz contraction of the circumference in the direction of the axis of rotation. Einstein pointed out that in a rotating coordinate system the ratio of circumference to diameter is not pi. This study proposes that the Lorentz contraction is the cause of the anomalous magnetic moment. The anomalous magnetic moment is regarded as a Lorentz contraction of the rotational angular momentum. As a result, the average oscillation velocity of the electron within Compton wavelengths is calculated to be about four percent of the speed of light. Furthermore, the Schwarzschild radius from general relativity is included in the scope of the consideration to predict the size of the electron.
[198] vixra:2309.0043 [pdf]
On Bifurcations and Beauty
This paper focuses on two ideas: the beginning focuses on standard and chaotic bifurcations, and the end focuses on beauty through mathematical coincidences. The scope of the bifurcation side is ambitious: relating bifurcation theory not only to the logistic map but also to prime spirals, the Riemann hypothesis, the Lambert W function, the Collatz conjecture, the Mandelbrot set, and music theory. The scope of the beauty side is similar: a proposed sequence that is opposite to the primes in some sense, finite sequences with peculiar properties, Fibonacci-like sequences, trees of primitive Pythagorean triples, Babylonian math, Grimm's conjecture, and Shell sort. Rather than providing rigorous analysis, my goal is to revitalize qualitative mathematics.
[199] vixra:2309.0036 [pdf]
Fundamental Constants: Uniting Gravity and the Accelerated Expansion of the Universe Through the Higgs Boson (Spanish version)
In this article, a new reinterpretation of the curved geometry of spacetime is proposed, in which spacetime is considered to undergo a longitudinal contraction. This effect manifests itself in changes in the spacetime metric that determine how distances and time intervals are measured in that region; that is, a variation in the scale, size, or apparent length of spacetime. This reinterpretation is compatible with the field equations of Einstein and Maxwell. Newton's universal gravitational constant G, the Hubble constant H(0) for the accelerated expansion of the universe, and the cosmological constant Λ associated with the hypothetical dark energy can be obtained and approximated through this new approach, in which the mass of the Higgs boson, with its privileged and unique characteristics, plays a transcendental role in answering a multitude of open questions in physics and in modern cosmology. The reinterpretation of curved geometry as spacetime contraction provides a new framework for a much better understanding of gravity. By obtaining very close values of the universal gravitational constant, it is possible to determine the force inverse to gravity that is responsible for the accelerated expansion of the universe. This is possible thanks to Gauss's divergence theorem, where the charge distribution determined by the Coulomb constant within the framework of the multipole expansion defined by electromagnetism constitutes a rather solid analogy, being inversely proportional to gravity, by means of which it is possible to obtain and calculate with great precision the value of the Hubble constant H(0). The cosmological constant Λ, considered as a possible dark energy driving the accelerated expansion of the universe, can also be obtained and explained through this new approach.
The reinterpretation of the curved geometry of gravity as a spacetime contraction would affect the properties of the expansion of spacetime, where the interpretation of the contraction of the universe described by General Relativity must be reinterpreted, understood, and accepted as gravity itself at any scale.
[200] vixra:2309.0035 [pdf]
Fundamental Constants: Uniting Gravity and the Accelerated Expansion of the Universe Through the Higgs Boson
This article presents a new reinterpretation of the curved geometry of spacetime, where it is considered that spacetime undergoes longitudinal contraction. This effect is manifested in changes in the spacetime metric that determine how distances and temporal intervals are measured in that region. In other words, a variation in the scale, size, or apparent length of spacetime. This reinterpretation is compatible with Einstein's field equations and Maxwell's equations. The universal gravitational constant of Newton, G, the Hubble constant for the accelerated expansion of the universe, H(0), and the cosmological constant associated with hypothetical dark energy, Λ, can be obtained and approximated using this new approach, where the mass of the Higgs boson with its unique and privileged characteristics plays a crucial role in addressing numerous open questions in physics and modern cosmology. The reinterpretation of curved geometry through spacetime contraction provides a new framework for better understanding gravity. By obtaining very close values of the universal gravitational constant, it is possible to determine the inverse force to gravity responsible for the accelerated expansion of the universe. This is achievable through Gauss's divergence theorem, where the charge distribution determined by the Coulomb constant within the framework of multipolar expansion defined by electromagnetism constitutes a quite solid analogy, being inversely proportional to gravity. This allows for the precise calculation of the value of the Hubble constant, H(0). The cosmological constant Λ, considered as a potential dark energy driving the accelerated expansion of the universe, can also be obtained and explained through this new approach. 
The reinterpretation of the curved geometry of gravity as spacetime contraction would affect the properties of spacetime expansion, where the interpretation of the universe's contraction described by General Relativity must be reinterpreted, understood, and accepted as gravity itself at any scale.
[201] vixra:2309.0026 [pdf]
Numerical Calculation of Roots of Real Polynomial Functions, Convergent Method
The Newton-Raphson method is the most widely used numerical method to determine the roots of real polynomial functions, but it has the drawback that it does not always converge. The method proposed in this work establishes a convergence condition and develops its application, and therefore always converges towards the roots of the function. This represents a conclusive advance for the determination of roots of real polynomial functions. According to the Abel-Ruffini theorem, the roots of polynomial functions of degree greater than 4 can, in general, only be determined by numerical calculation.
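The paper's specific convergence condition is not reproduced in the abstract. For comparison, a standard way to make Newton's method always converge on a sign-changing bracket is to safeguard it with bisection, as in this hypothetical sketch:

```python
def safeguarded_newton(f, df, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b], where f(a) * f(b) < 0.

    A Newton step is accepted only when it stays inside the current
    bracket; otherwise the method falls back to bisection, so the
    bracket always shrinks and the iteration always converges.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # shrink the bracket around the sign change
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        d = df(x)
        x_new = x - fx / d if d != 0 else None
        # accept the Newton step only if it lands inside the bracket
        if x_new is None or not (a < x_new < b):
            x_new = 0.5 * (a + b)
        x = x_new
    return x

# Wallis' classic example: x^3 - 2x - 5 = 0, root near 2.0945514...
root = safeguarded_newton(lambda x: x**3 - 2*x - 5,
                          lambda x: 3*x**2 - 2, 2.0, 3.0)
```

This hybrid keeps Newton's fast local convergence while inheriting the guaranteed (if slower) convergence of bisection.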
[202] vixra:2309.0024 [pdf]
Calculation of Nth Partial Sums S_n of Power Series and Its Relationship with the Calculation of Bernoulli Numbers
In this work, the general formula for the n-th partial sums S_n of sums of powers of the form 1^n + 2^n + . . . + m^n is obtained by an algebraic method, and this formula is applied to obtain the Bernoulli numbers by a new, simple recursive method.
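The abstract's own algebraic derivation is not given; a standard route in the same spirit uses the recursion for the Bernoulli numbers that falls out of matching coefficients in the power-sum formula, here sketched with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0 .. B_n (convention B_1 = -1/2), via the classical recursion
    sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)] + [Fraction(0)] * n
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

def power_sum(n, m):
    """1^n + 2^n + ... + m^n from Faulhaber's formula
    (using B_1 = +1/2 so the sum runs from 1 to m)."""
    B = bernoulli(n)
    if n >= 1:
        B[1] = Fraction(1, 2)
    return sum(comb(n + 1, j) * B[j] * Fraction(m) ** (n + 1 - j)
               for j in range(n + 1)) / (n + 1)
```

For example, `power_sum(2, 10)` reproduces 1 + 4 + ... + 100 = 385 exactly.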
[203] vixra:2309.0023 [pdf]
Knot in Weak-Field Geometrical Optics
We construct the geometric optical knot in 3-dimensional Euclidean (vacuum or weak-field) space using the Abelian Chern-Simons integral and the variables (the Clebsch variables) of the complex scalar field, i.e. the function of amplitude and the phase related to the refractive index. The result of numerical simulation shows that in vacuum or weak-field space, there exists such a knot.
[204] vixra:2309.0020 [pdf]
Hilbert and Pólya Conjecture, Dynamical System, Prime Numbers, Black Holes, Quantum Mechanics, and the Riemann Hypothesis
In mathematics, the search for exact formulas giving all the prime numbers, certain families of prime numbers, or the n-th prime number has generally proved to be vain, which has led to contenting oneself with approximate formulas [8]. The purpose of this article is to give a simple function that produces the list of all prime numbers. I then give a generalization of this result and show a link with quantum mechanics and the attraction of black holes. I also give a new proof of Lemma 1, which gave a proof of the Riemann hypothesis [4]. Finally, another new proof of the Riemann hypothesis is given, from which I deduce a proof of the Hilbert-Pólya conjecture.
[205] vixra:2309.0004 [pdf]
The Computation of P and NP with Photophonon Stargates
We give a discourse on symmetry and singularity and the construction of photophonon stargates, and use them to create computers that decide and verify languages, including proofs, in polynomial time. Photophonons are quasiparticles, or synonymously stargates, that form from the oscillatory folding of singularities of cosmic light and cosmic sound with a synergetion, a novel quasiparticle. We shall find that, at each step in the computation of languages of any complexity, there exists a corresponding emission spectrum of photophononics, and where upon examination, we observe when P = NP.
[206] vixra:2308.0207 [pdf]
Triangular Simplifying and Recovering: A Novel Geometric Approach for Four Color Theorem
The Four Colour Theorem is one of the mathematical problems with a fairly short history. The problem originated from coloring areas on a map, but has been dealt with using graph theory and topology. Since the discovery of the problem there have been many attempted proofs, but in 1976 a computer-assisted proof was accepted. That proof worked by showing that a large set of graphs or patterns can be colored with four colors. The algorithm proposed here aims to show that all graphs satisfy the four color theorem regardless of their topology, and that the four color problem no longer has non-deterministic polynomial time complexity.
[207] vixra:2308.0197 [pdf]
On the Existence of Solutions to Erdős-Straus Type Equations
We apply the notion of the olloid to show that the family of Erdős-Straus type equations $$\frac{4^{2^l}}{n^{2^l}}=\frac{1}{x^{2^l}}+\frac{1}{y^{2^l}}+\frac{1}{z^{2^l}}$$ has solutions for all $l \geq 1$, provided the equation $$\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$$ has a solution for a fixed $n>4$.
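The olloid machinery is not reproduced here, but the base-case equation can be checked numerically. A hypothetical brute-force search for unit-fraction decompositions, using exact rationals and the elementary bounds n/4 < x <= 3n/4:

```python
from fractions import Fraction

def erdos_straus(n):
    """Brute-force search for x <= y <= z with 4/n = 1/x + 1/y + 1/z.
    For x: 1/x < 4/n <= 3/x gives n/4 < x <= 3n/4; similarly for y
    against the remainder r = 4/n - 1/x."""
    target = Fraction(4, n)
    for x in range(int(1 / target) + 1, int(3 / target) + 1):
        r = target - Fraction(1, x)
        if r <= 0:
            continue
        for y in range(max(x, int(1 / r) + 1), int(2 / r) + 1):
            s = r - Fraction(1, y)
            # s must itself be a unit fraction 1/z with z >= y
            if s > 0 and s.numerator == 1 and s.denominator >= y:
                return x, y, s.denominator
    return None

# e.g. 4/5 = 1/2 + 1/4 + 1/20
solution = erdos_straus(5)
```

Since `Fraction` reduces automatically, testing `s.numerator == 1` is an exact unit-fraction check.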
[208] vixra:2308.0193 [pdf]
The Planets of the Binary Star TZ Mensae
The two stars in TZ Mensae emit gravitational waves with a frequency of 2.70 $\mu$Hz, which may be measured here on Earth. The decoding of the phase modulations of the GW shows eleven companions. The orbital times of the planets fit well with the predictions of Dermott's law. The physical interpretation of the result of the phase modulation is difficult, but allows the masses of the planets to be estimated.
[209] vixra:2308.0192 [pdf]
The Planets of the Binary Star Cygnus X1
The black hole in Cygnus X1 orbits a supergiant and emits gravitational waves with a frequency of 4.134 $\mu$Hz. The GW can be measured here on Earth. The decoding of the phase modulations of the GW shows eight companions. The orbital times of the planets fit well with the predictions of Dermott's law. The physical interpretation of the result of the phase modulation is difficult, but allows the masses of the planets to be estimated.
[210] vixra:2308.0184 [pdf]
The Planets of the Binary Star RR Caeli
The two stars in RR Caeli emit gravitational waves with a frequency of 76.22 $\mu$Hz, which may be measured here on Earth. The decoding of the phase modulations of the GW shows nine companions -- including the planet already discovered with electromagnetic waves. Identical values of the orbital frequency are measured with both methods. The orbital times of the remaining planets fit well with the predictions of Dermott's law. The physical interpretation of the result of the phase modulation is difficult, but allows the masses of the planets to be estimated.
[211] vixra:2308.0183 [pdf]
Linear Compositional Regression
We study the properties of regression coefficients when the sum of the dependent variables is one, i.e., the dependent variables are compositional. We show that the sum of the intercepts is equal to one and the sum of the other corresponding regression coefficients is zero. We do this for simple linear regressions and also for a more general case using matrix notation. The last part treats the case when the dependent variables do not sum to one. We simplify the well-known formula derived by the use of Lagrange multipliers.
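The stated property follows from the linearity of least squares: summing the per-component fits is the same as regressing the row-sum (a constant 1) on the predictors. A small numerical illustration (synthetic data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 200, 3, 4                  # samples, predictors, components

X = rng.normal(size=(n, p))
# K dependent variables that sum to one in every row (compositional)
raw = rng.random(size=(n, K))
Y = raw / raw.sum(axis=1, keepdims=True)

# OLS for each component: column j of `coef` is (intercept_j, slopes_j)
A = np.hstack([np.ones((n, 1)), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

intercept_sum = coef[0].sum()        # sums to 1 across components
slope_sums = coef[1:].sum(axis=1)    # each predictor's slopes sum to 0
```

Because the design matrix already contains an intercept column, the fit of the constant target 1 is exact (intercept 1, slopes 0), which is exactly the column-sum of the individual fits.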
[212] vixra:2308.0180 [pdf]
The Fine Structure Constant: Revisited
A comparison between hydrodynamics (NSE) and gauge theory vector potential flow (e.g. the Hopf solution of NSE) yields alpha as the Reynolds number: eddies as Feynman loops, etc. It explains the QED grading by alpha, and the lifetimes of particles (graded by powers of alpha) as a dissipation process. The theory of alpha can be formulated via the Schrodinger operator spectrum for the hydrogen atom and the Boltzmann partition function, when related to the Hopf fibration (Kepler problem on S3, magnetic topological monopole in the gauge theory formulation as an exact solution of NSE) for one loop (electronic orbital). The computation of the fine structure constant uses finite symmetry groups corroborated with H. Jehle's loopforms model of the electron (Hopf fibration with connection and vector potential flow). The article brings together research material towards achieving such a goal. A program emerges: physics laws as period laws, and alpha an element of Pi-groups of periods.
[213] vixra:2308.0179 [pdf]
"LAHEL": An AI-Generated Content Approached LAwHELper to Personal Legal Advice
In certain developing countries, public awareness of legal rights is increasing, leading to a growing demand for legal consultation. However, the time and monetary costs associated with consulting professional lawyers remain high. Concurrently, there are two major impacts of computer science on the current legal sector. First, within government and public prosecution systems, information systems have accumulated vast amounts of structured and semi-structured data, offering significant economic value and potential for exploration. However, few people have attempted to mine these data resources. Second, intelligent dialogue systems have matured, but dialogue systems specifically tailored for the legal domain have not yet emerged. Considering these two trends, we introduce LAHEL, a legal consultation system developed by a team of nine individuals over the course of two years, dedicated to addressing the aforementioned issues. The system comprises three components: search, human dialogue systems, and robot dialogue systems. Its primary contributions are twofold: exploring the application of AI in legal consultation and summarizing lessons learned from the design of legal consultation systems.
[214] vixra:2308.0177 [pdf]
Natural Number Infinite Formula and the Nexus of Fundamental Scientific Issues
Within this paper, we embark on a comprehensive exploration of the profound scientific issues intertwined with the concept of the infinite within the realm of natural numbers. Through meticulous analysis, we delve into three distinct perspectives that shed light on the nature of natural number infinity. By considering the framework of time reference, we confront and address the inherent challenges that arise when contemplating the infinite. Furthermore, we navigate the intricate relationship between the infinite and fundamental scientific questions, seeking to unveil novel insights and resolutions. In a departure from conventional viewpoints, our examination of natural number infinity takes on a relativistic dimension, scrutinizing the role of time and the observer's perspective. Strikingly, as we delve deeper into the foundational strata, we uncover the pivotal significance of relativity not only in physics but also in mathematics. This realization propels us towards a more holistic and consistent mathematical framework, underlining the inextricable link between the infinitude of natural numbers and the essential constructs of time and perspective.
[215] vixra:2308.0170 [pdf]
The Planets of the Binary Star R Canis Majoris (R Cma, HD 57167)
The two stars in R Canis Majoris emit gravitational waves with a frequency of 20.33 $\mu$Hz, which may be measured here on Earth. The decoding of the phase modulations of the GW shows nine companions -- one was suspected using electromagnetic waves. The orbital times fit well with the predictions of Dermott's law. The physical interpretation of the result of the phase modulation is difficult, but allows the masses of the planets to be estimated.
[216] vixra:2308.0169 [pdf]
Thickenings in the Ring System of the Binary Star V4046 Sagittarii
The protoplanetary disk orbiting V4046 Sagittarii may contain planets. The central binary system $A1-A2$ emits gravitational waves with a frequency of 9.56 $\mu$Hz, which may be measured here on Earth. The decoding of the phase modulations of the GW provides evidence that there are twelve rings that are not rotationally symmetric. Some may contain young planets. The orbital times fit well with the predictions of Dermott's law. The physical interpretation of the result of the phase modulation is difficult, but allows the asymmetry of the mass distribution to be estimated.
[217] vixra:2308.0164 [pdf]
The Simple Structure of Prime Numbers
The prime numbers have a pseudo-random structure, and this structure is not simple. In this paper, we analyze the behavior of prime numbers and examine their internal structure.
[218] vixra:2308.0160 [pdf]
The General Relativity in the Radius of Particles
Through Minkowski's geometry and the equation for the curvature of a light ray, derived from the theory of general relativity, we deduce that in the presence of a mass, a constant volume must always be subtracted from the volume containing that mass, decreasing the radius of a sphere containing the mass. We thus show that the same physical phenomenon also leads to a reduction in the mass radius of particles with mass.
[219] vixra:2308.0154 [pdf]
Electrostatic Polyhedron
I minimize the electric potential of N charges on a sphere; the minimum potential optimizes the distances between the charges, and it is possible to obtain polyhedra from the N charge positions.
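This is the classical Thomson problem. The abstract does not describe the minimization procedure used; a hypothetical sketch with projected gradient descent (for N = 4 the minimizer is the regular tetrahedron, edge length sqrt(8/3)):

```python
import numpy as np

def thomson(N, steps=3000, lr=0.05, seed=1):
    """Minimize the Coulomb energy sum_{i<j} 1/|x_i - x_j| of N unit
    charges constrained to the unit sphere, by gradient descent with
    projection back onto the sphere after each step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(N, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        d = x[:, None, :] - x[None, :, :]              # pairwise differences
        r = np.linalg.norm(d, axis=-1)
        np.fill_diagonal(r, np.inf)                    # ignore self-terms
        # gradient of the energy w.r.t. x_i: -sum_j (x_i - x_j) / r_ij^3
        g = -(d / r[..., None] ** 3).sum(axis=1)
        x -= lr * g
        x /= np.linalg.norm(x, axis=1, keepdims=True)  # project to sphere
    return x

charges = thomson(4)   # should converge to a regular tetrahedron
```

For larger N the same routine recovers the familiar configurations (octahedron for N = 6, and so on), up to a global rotation.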
[220] vixra:2308.0146 [pdf]
One Third Crucial Theorem for the Refoundation of Elementary Set Theory and the Teaching of that Discipline to Future Generations
For a given infinite countable set A, we demonstrate that A is an infinite countable set if and only if A is equal to an infinite countable set indexed to infinity. Said otherwise, we demonstrate that A is an infinite countable set iff there exists an infinite number of non-empty, distinct elements a_i ≠ ∅, i ∈ N*, with a_i ≠ a_j for all i, j ∈ N*, i ≠ j, such that A = ⋃_{i=1}^{+∞} {a_i}. On this occasion, for infinite countable sets constituted by the union of two given infinite countable sets A and B, A′ = [A ∪ B]_{P(A′)}, we introduce the notion of an undetermined infinite countable set in order to designate infinite countable sets for which an explicit indexation is not determined even though such an indexation must necessarily exist.
[221] vixra:2308.0142 [pdf]
Generalized Relativistic Transformations in Clifford Spaces and Their Physical Implications
A brief introduction of the Extended Relativity Theory in Clifford Spaces (C-space) paves the way to the explicit construction of the generalized relativistic transformations of the Clifford multivector-valued coordinates in C-spaces. The most general transformations furnish a full mixing of the grades of the multivector-valued coordinates. The transformations of the multivector-valued momenta follow, leading to an invariant generalized mass M in C-spaces which differs from m. The proper mass m appearing in the relativistic dispersion relation E^2 - p^2 = m^2 no longer remains invariant under the generalized relativistic transformations. It is argued how this finding might shed some light on the cosmological constant problem, dark energy, and dark matter. We finalize with some concluding remarks about extending these transformations to phase spaces and about Born reciprocal relativity. An appendix is included with the most general (anti) commutators of the Clifford algebra multivector generators.
[222] vixra:2308.0141 [pdf]
On a Solution of "P vs NP" Millennium Prize Problem Based on the Subset Sum Problem
Given a set of distinct non-negative integers X^n and a target certificate S, parametrized as: ∃ X^k ⊆ X^n, ∑_{x_i ∈ X^k} x_i = S (k = |X^k|, n = |X^n|). We present a polynomial solution of the subset sum problem with time complexity T ≤ O(n^2) and space complexity S ≤ O(n^2), so that P = NP.
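The claimed O(n^2) construction is not reproduced in the abstract. For reference, the classical dynamic-programming solver below is pseudo-polynomial (O(n·S), polynomial in the *value* S but not in the input size), which is the bar any genuinely polynomial algorithm would have to beat:

```python
def subset_sum(xs, S):
    """Classical DP for subset sum: return a subset of xs summing to S,
    or None. Time/space O(n*S) -- pseudo-polynomial, not polynomial."""
    parent = {0: None}             # reachable sum -> (previous sum, element)
    for x in xs:
        for s in list(parent):     # snapshot: each element used at most once
            t = s + x
            if t <= S and t not in parent:
                parent[t] = (s, x)
    if S not in parent:
        return None
    subset, s = [], S
    while s != 0:                  # walk back through the parent links
        s, x = parent[s]
        subset.append(x)
    return subset
```

For example, `subset_sum([3, 34, 4, 12, 5, 2], 9)` returns a subset such as `[5, 4]`.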
[223] vixra:2308.0140 [pdf]
The Planets of the Binary Star Epsilon Ursae Minoris
So far, no planets have been discovered in the binary star system, which is 300 LY away. The two stars emit gravitational waves with a frequency of 586 nHz, which can be measured here on Earth. The decoding of the phase modulations of the GW shows eleven companions -- unseen by electromagnetic waves. The orbital times fit well with the predictions of Dermott's law. The physical interpretation of the result of the phase modulation is difficult, but allows the masses of the planets to be estimated.
[224] vixra:2308.0132 [pdf]
Modeling Learning Behavior of Students of Mathematics
This paper introduces an innovative approach to comprehending and modeling the collective behaviors of students within the context of mathematics education. The core objective is to present a comprehensive mathematical framework capable of addressing the entire spectrum of behaviors exhibited in mathematics classrooms. We introduce a novel SIR-based model, tailored to capture behaviors under the influence of individual students. Additionally, we propose that interactions among students across different classrooms can serve as a regulatory mechanism for these behaviors. To validate our approach, we conduct a series of simulations that demonstrate the practicality and significance of our model. This paper significantly contributes to the advancement of our understanding of student behaviors in the realm of mathematics education and their mathematical representation. By bridging the gap between mathematical modeling and the intricate dynamics of student conduct, this work provides valuable insights into the behaviors displayed in math classrooms.
[225] vixra:2308.0131 [pdf]
Exact Sum of Prime Numbers in Matrix Form
This paper introduces a novel approach to represent the nth sum of prime numbers using column matrices and diagonal matrices. The proposed method provides a concise and efficient matrix form for computing and visualizing these sums, promising potential insights in number theory and matrix algebra. The innovative representation offers a new perspective to explore the properties of prime numbers in the context of matrix algebra.
[226] vixra:2308.0130 [pdf]
Connected Old and New Prime Number Theory with Upper and Lower Bounds
In this article, we establish a connection between classical and modern prime number theory using upper and lower bounds. Additionally, we introduce a new technique to calculate the sum of prime numbers.
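The abstract's new summation technique is not reproduced here; as a baseline any bound can be checked against, the exact sum of primes up to N via a sieve of Eratosthenes:

```python
def prime_sum(N):
    """Exact sum of all primes <= N, via a sieve of Eratosthenes."""
    if N < 2:
        return 0
    sieve = bytearray([1]) * (N + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            # mark all multiples of i starting at i*i as composite
            sieve[i * i::i] = bytearray(len(range(i * i, N + 1, i)))
    return sum(i for i in range(2, N + 1) if sieve[i])

# 2 + 3 + 5 + 7 = 17
total = prime_sum(10)
```

By the prime number theorem this sum grows like N^2 / (2 ln N), which is the quantity upper and lower bounds of the kind discussed would bracket.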
[227] vixra:2308.0129 [pdf]
Inner Product of Two Oriented Points in Conformal Geometric Algebra in Detail
We study in full detail the inner product of oriented points in conformal geometric algebra and its geometric meaning. The notion of oriented point is introduced and the inner product of two general oriented points is computed, analyzed (including symmetry) and graphed in terms of point to point distance, and angles between the distance vector and the local orientation planes of the two points. Seven examples illustrate the results obtained. Finally, the results are extended from dimension three to arbitrary dimensions n.
[228] vixra:2308.0127 [pdf]
The Planets of the Binary Star Alpha Coronae Borealis (Alpha CrB)
The Infrared Astronomical Satellite has detected an excess of infrared radiation in the vicinity of Alpha CrB. This suggests the presence of a large disc of dust and material around the star, prompting speculation of a planetary or proto-planetary system. Although the binary system is only 76.5 light years away, no planets have been discovered so far. The two stars emit gravitational waves with a frequency of 1333.426 nHz, which can be measured here on Earth. The decoding of the phase modulations of the GW shows twelve companions -- unseen by electromagnetic waves. The orbital times fit well with the predictions of Dermott's law. The physical interpretation of the result of the phase modulation is difficult, but allows the masses of the planets to be estimated.
[229] vixra:2308.0125 [pdf]
A Solvable Sextic Equation
This paper presents a solvable sextic equation under the condition that several coefficients of the polynomial are restricted to depend on the preceding or following coefficients. We can solve a sextic equation by restricting one or two of the seven available coefficients, and by solving a bisextic equation and a quintic equation. We can also find the arbitrary coupling coefficients that generate a new solvable sextic equation.
[230] vixra:2308.0124 [pdf]
Embedding of Octonion Fourier Transform in Geometric Algebra of R^3 and Polar Representations of Octonion Analytic Signals in Detail
We show how the octonion Fourier transform can be embedded and studied in Clifford geometric algebra of three-dimensional Euclidean space Cl(3,0). We apply a new form of dimensionally minimal embedding of octonions in geometric algebra, that expresses octonion multiplication non-associativity with a sum of up to four (individually associative) geometric algebra product terms. This approach leads to new polar representations of octonion analytic signals and signal reconstruction formulas.
[231] vixra:2308.0121 [pdf]
The Planets of the Binary Star Kepler-35
The two stars in Kepler-35 emit gravitational waves with a frequency of 1116 nHz, which can be measured here on Earth. The decoding of the phase modulations of the GW shows twelve companions -- one was discovered using electromagnetic waves. The orbital times fit well with the predictions of Dermott's law. The physical interpretation of the result of the phase modulation is difficult, but allows the masses of the planets to be estimated.
[232] vixra:2308.0116 [pdf]
An ADMM Algorithm for a Generic L0 Sparse Overlapping Group Lasso Problem
We present an alternating direction method of multipliers (ADMM) for a generic overlapping group lasso problem, where the groups can overlap in an arbitrary way. Meanwhile, we prove lower and upper bounds for both the $\ell_1$ sparse group lasso problem and the $\ell_0$ sparse group lasso problem. We also propose algorithms for computing these bounds.
[233] vixra:2308.0112 [pdf]
Mutation Validation for Learning Vector Quantization
Mutation validation as a complement to existing applied machine learning validation schemes has been explored in recent times. Exploratory work on this model-validation scheme for Learning Vector Quantization (LVQ) remains to be done. This paper proposes mutation validation as an extension to existing cross-validation and holdout schemes for Generalized LVQ and its advanced variants. The mutation validation scheme provides a responsive, interpretable, intuitive and easily comprehensible score that complements existing validation schemes employed in the performance evaluation of the prototype-based LVQ family of classification algorithms. This paper establishes a relation between the mutation validation scheme and the goodness-of-fit evaluation for four LVQ models: Generalized LVQ, Generalized Matrix LVQ, Generalized Tangent LVQ and Robust Soft LVQ. Numerical evaluation regarding these models' complexity and effects on test outcomes places the mutation validation scheme above cross-validation and holdout schemes.
[234] vixra:2308.0105 [pdf]
Analytic and Parameter-Free Formula for the Neutrino Mixing Matrix
A parameter-free analytic expression for the PMNS matrix is derived which fits numerically all the measured matrix components at 99.7% confidence. Results are proven within the microscopic model and also lead to a prediction of the leptonic Jarlskog invariant $J_{PMNS}=-0.0106$. An outlook is given on the treatment of the CKM matrix.
[235] vixra:2308.0102 [pdf]
An Algebraic Structure of Music Theory
We may define a binary relation. Then a nonempty finite set equipped with the binary relation is called a circle set. And we define a bijective mapping of the circle set, and the mapping is called a shift. We may construct a pitch structure over a circle set. And we may define a tonic and step of a pitch structure. Then the ordered pair of the tonic and step is called the key of the pitch structure. Then we define a key transpose along a shift. And a key transpose is said to be regular if it consists of stretches, shrinks and a shift. A key transpose is regular if and only if it satisfies some hypotheses.
[236] vixra:2308.0098 [pdf]
Harmonic Graphs Conjecture: Graph-Theoretic Attributes and their Number Theoretic Correlations
The Harmonic Graphs Conjecture states that there exists an asymptotic relation involving the Harmonic Index and the natural logarithm as the order of the graph increases. This conjecture, grounded in the novel context of Prime Graphs, draws upon the Prime Number Theorem and the sum of divisors function to unveil a compelling asymptotic connection. By carefully expanding the definitions of the harmonic index and the sum of divisors function, and leveraging the prime number theorem's approximations, we establish a formula that captures this intricate relationship. This work is an effort to contribute to the advancement of graph theory, introducing a fresh lens through which graph connectivity can be explored. The synthesis of prime numbers and graph properties not only deepens our understanding of structural complexity but also paves the way for innovative research directions.
[237] vixra:2308.0094 [pdf]
A Theory of Physical Time and its Impact on Physics Applications
The story I wish to tell in this work starts with Newton's mathematical time, which is not a true definition of time but only its measure. It leaves out a deeper understanding of the true nature of time. When applying physics to the broader universe, numerous anomalies appear. One change in the definition of time, and these anomalies, galaxy rotation, dark matter, electric and magnetic properties, the speed of light, Hubble's law, dark energy, the Big Bang, the CMB, and the Pioneer anomaly all disappear, while physical theories are left unchanged.
[238] vixra:2308.0075 [pdf]
Improved Memory-guided Normality with Specialized Training Techniques of Deep SVDD
Deep learning techniques have shown remarkable success in various tasks, including feature learning, representation learning, and data reconstruction. Autoencoders, a subset of neural networks, are particularly powerful in capturing data patterns and generating meaningful representations. This paper presents an investigation into combining Deep SVDD with memory modules.
[239] vixra:2308.0072 [pdf]
The Planets of the Binary Star Kepler-34
The two stars in Kepler-34 emit gravitational waves with a frequency of 833 nHz, which can be measured here on Earth. Decoding the phase modulations of the GW reveals twelve companions, one of which was previously discovered using electromagnetic waves. The orbital periods fit very well with the predictions of Dermott's law. The physical interpretation of the phase-modulation result is difficult, but it allows the masses of the planets to be estimated.
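The quoted frequency follows from the dominant quadrupole emission of a circular binary, which radiates at twice the orbital frequency; a quick sanity check (a reader's sketch, assuming Kepler-34's published orbital period of roughly 27.8 days):

```python
# GW frequency of a circular binary: the dominant quadrupole emission
# occurs at twice the orbital frequency, f_gw = 2 / P_orb.
P_orb_days = 27.8             # published orbital period of Kepler-34 (approx.)
P_orb_s = P_orb_days * 86400  # convert days to seconds
f_gw_hz = 2.0 / P_orb_s
print(f"{f_gw_hz * 1e9:.0f} nHz")  # ≈ 833 nHz, matching the abstract
```

The same relation reproduces the frequencies quoted for TOI-1338 and Kepler-16 from their orbital periods.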
[240] vixra:2308.0070 [pdf]
The Planets of the Binary Star Toi-1338/bebop-1
The two stars in TOI-1338 emit gravitational waves with a frequency of $1.58~\mu$Hz, which can be measured here on Earth. Decoding the phase modulations of the GW reveals seven companions, two of which were discovered using electromagnetic waves. The orbital periods fit very well with the predictions of Dermott's law. The physical interpretation of the phase-modulation result is difficult, but it allows the masses of the planets to be estimated.
[241] vixra:2308.0068 [pdf]
Quantum Gravity Framework: 1.0. A Framework of Principles for Quantum General Relativity with Time and Measurement
The purpose of this article is to outline a framework of concepts and principles to combine quantum mechanics and general relativity so that time and measurement (reduction) are present as integral parts of the basic foundations. First, the problem of time in quantum gravity and the measurement problem in quantum mechanics are briefly reviewed, and the popular proposals to tackle these two problems are briefly discussed. Next, on the already known foundations of quantum mechanics, a framework of principles of dynamics is built: 1) Self-Time Evolution: Newton's first law is reinterpreted to define time, 2) Local Measurement by Local Reduction: quantum diffusion theory is adapted, and 3) Global Evolution by Global Reduction. Ideas on how to apply the framework to study quantum general relativistic physics are discussed. Further, more general and modified forms of some of these principles are also discussed. The theoretical elements in the framework to be made concrete by further theoretical and experimental investigations are listed. Revision information is included.
[242] vixra:2308.0067 [pdf]
Quantum Gravity Framework 2.0: A Complete Dynamical Framework of Principles for Quantization of General Relativity
Developing Planck-scale physics requires addressing the problems of time, quantum reduction, determinism and the continuum limit. In this article, on the already known foundations of quantum mechanics, a set of dynamical proposals is built on fully constrained discrete models: 1) Self-Evolution: flow of time in the phase space of a single-point system, 2) Local Measurement by Local Reduction: the quantum diffusion equation is rederived with different assumptions, 3) Evolution of a multipoint discrete manifold of systems through a dynamically chosen foliation, and 4) Continuum limit and determinism, enforced by adding terms to the action and averaging. The proposals are applied to various physical scenarios: 1) minisuperspace reduced cosmology of an isotropic and homogeneous universe with a scalar field, 2) an expanding universe with perturbations, and 3) a Newtonian universe. Ways to experimentally test the theory are discussed.
[243] vixra:2308.0066 [pdf]
Quantum Gravity Framework 3: Relative Time Formulation and Simple Applications to Derive Conventional Hamiltonians
In this paper we present Quantum Gravity Framework 3.0, which introduces relative time formulations and discusses their applications. The conventional Hamiltonian of bulk matter is derived from the quantum gravity Hamiltonian, and the derivation of Hamiltonians in the context of fields is briefly discussed.
[244] vixra:2308.0065 [pdf]
Quantum Gravity Framework 4: Fully Path Integral Framework, Structure Formation and Consciousness in the Universe
In this paper I give a major update of the quantum gravity framework project. The heuristic conceptual framework proposed in previous versions is expanded to include structure formation and consciousness in the universe. A path-integral version of decoherence in curved space-time is introduced as the major update. We then discuss philosophical insights into structure formation in the universe and consciousness, and introduce various mathematical concepts to describe them.
[245] vixra:2308.0063 [pdf]
A Conjecture On σ(n) Function
We know many arithmetical functions [1], such as ϕ(n), σ(n) and τ(n). In this paper we discuss σ(n) and present a striking observation, which we later state as a conjecture.
[246] vixra:2308.0056 [pdf]
Unexpected Connection Between Triangular Numbers and the Golden Ratio
We find out that when a sum of five consecutive triangular numbers, $S_5(n)= T(n)+...+T(n+4)$, is also a triangular number $T(k)$, the ratios of consecutive terms of $a(i)$ that represent values of $n$ for which this happens, tend to $phi^2$ or $phi^4$ as $i$ tends to infinity, where $phi$ is the Golden Ratio. At the same time, the ratios of consecutive terms $S_5(a(i))$ tend to $phi^4$ or $phi^8$. We also note that such ratios that are the powers of $phi$ can appear in the sequences of triangular numbers that are also higher polygonal numbers, one case of which are the heptagonal triangular numbers.
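The condition is easy to probe by brute force; a minimal search (a reader's sketch, not the authors' method) recovers the first few values of $a(i)$:

```python
from math import isqrt

def is_triangular(x):
    # x = k(k+1)/2  iff  8x + 1 is a perfect square
    s = isqrt(8 * x + 1)
    return s * s == 8 * x + 1

def T(n):
    return n * (n + 1) // 2

# Values of n for which T(n) + T(n+1) + ... + T(n+4) is itself triangular
a = [n for n in range(1, 600) if is_triangular(sum(T(n + k) for k in range(5)))]
print(a)  # [2, 29, 80, 563]
```

The ratios of successive terms already begin to approach the quoted powers of $\phi$ as the search bound grows.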
[247] vixra:2308.0053 [pdf]
Critique of Logical Foundations of Induction and Epistemology
This is my latest book (in Arabic). It is a theoretical analysis and assessment of the logical foundations of induction and epistemology (related also to the theory of probability).
[248] vixra:2308.0049 [pdf]
The Planets of the Binary Star Kepler-16
The two stars in Kepler-16 emit gravitational waves with a frequency of 563.5 nHz, which can be measured on Earth. Decoding the phase modulations of the GW reveals eleven companions. The orbital periods fit very well with the predictions of Dermott's law, an improved version of the Titius-Bode rule. The physical interpretation of the phase-modulation result is difficult, but it allows the masses of the planets to be estimated.
[249] vixra:2308.0046 [pdf]
Four-Dimensional Newtonian Relativity
The constancy of the speed of light seems to imply that time and space are not absolute; however, we intend to show that that is not necessarily the case. In this article, we construct an alternative formulation to the theory of special relativity from the concepts of absolute time and absolute space defined by Newton and from the hypothesis that physical space is four-dimensional. We prove this formulation is mathematically equivalent to Einstein's theory by deriving the Lorentz transformation from the Galilean transformation for frames of reference in four-dimensional Euclidean space.
[250] vixra:2308.0038 [pdf]
Notes of Black Holes
This article invokes non-standard analysis throughout physics to gain insight into how to holographically ``de-centralize'' hidden conformal modes (large central tower OPEs). Ultimately, measurements, continuum, and singularity mechanics are dualized against Wilson partitions, chaos representation, and RG flow to produce a candidate universal topological field theory. Many results are shown along the way, but the primary results established are: sub-harmonic chaos in $U(1)$ gauge theory is identified and quantized in the flat-space celestial hologram; a virial model of circuit information is asymptotically probed and found dual to a Hubble-like equation of state; and a background model of loop information in Einstein's gravity is found to be a loop gauge theory of $d=4$ $2\times 2$ Gaussian Unitary Ensembles, establishing it as a candidate of non-perturbative loop QCD in gravity. The state preparation of this GUE $2\times 2$ M-gauge simultaneously produces a post-selected supersymmetry algebra (over the canonical log-partition) which survives all possible no-go tests. Finally, the canonical $2\times 2$ GUE partition state identifies a $\frac{1}{8}$-BPS topological phase measurement prepared as a quasi-continuous state of information decay; the partially conformal background is identified as a $\frac{1}{4}$-BPS shadow and given a mechanism of spin entanglement. Notably, the cosmological hierarchy problem is also resolved, the fine-structure constant is derived (up to 5 orders of magnitude) using analytic black hole decay, and a new 21-pt emergent universal holographic constraint bound between celestial gravitons and quantum information stability is shown at loop level, which further resolves the naturalness of $d=4$ emergent spacetime and the directedness of time.
[251] vixra:2308.0036 [pdf]
Pricing of European Options using GBM and Heston Models in C++
The valuation of financial derivatives, particularly options, has long been a topic of interest in finance. Among the various methods developed for option pricing, the Monte Carlo simulation stands out due to its versatility and capability to model complex financial instruments. In this article, we apply the Monte Carlo method to price European options using two prominent models: the Geometric Brownian Motion (GBM) and the Heston model. While the GBM model assumes constant volatility and offers simplicity, it often falls short in capturing real market dynamics. Conversely, the Heston model introduces stochastic volatility, providing a more nuanced representation of market behaviors. Leveraging the computational efficiency of C++, our simulations reveal distinct price paths for each model. The GBM paths exhibit smooth trajectories, while the Heston paths are more varied, reflecting its allowance for stochastic volatility. Statistical analyses further underscore a significant difference in the final stock prices generated by the two models. The Heston model's prices display a broader distribution, capturing the model's inherent variability. Additionally, autocorrelation analyses suggest a more intricate autoregressive structure for the Heston model. In conclusion, while the GBM model provides simplicity and predictability, the Heston model offers a richer, albeit more complex, representation, especially in volatile market scenarios. This article offers a comparative study of the GBM and Heston models, shedding light on their utility under varying market conditions.
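The GBM leg of the comparison can be sketched in a few lines (Python rather than the article's C++; the parameters are illustrative, not taken from the paper). Under GBM the terminal price has a closed-form distribution, so a single exact step suffices:

```python
import numpy as np

def mc_european_call(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Monte Carlo price of a European call under GBM.

    S_T = S0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z), Z ~ N(0, 1).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(st - K, 0.0)          # call payoff at maturity
    return np.exp(-r * T) * payoff.mean()     # discounted expectation

price = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0)
print(round(price, 2))  # close to the Black-Scholes value of about 10.45
```

The Heston model would replace the single exact step with a discretized two-factor scheme (e.g. Euler with full truncation for the variance process), which is where the broader terminal distributions described in the abstract come from.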
[252] vixra:2308.0034 [pdf]
The Gravity Waves from the Binary Galaxies by JWST
The energy spectrum of gravitons emitted by a black hole binary as gravitational waves is calculated in the first part of the article. Then the total quantum loss of energy is calculated in the Schwinger theory of gravity. Using an analogy with binary stars, we calculate the graviton spectrum of the binary galaxies discovered by NASA's JWST.
[253] vixra:2308.0032 [pdf]
The Planets of the Binary Star HD 75747
So far it has been assumed that the star system HD 75747, also known as HR 3524 or RS Chamaeleontis (RS Cha), has only one companion. A gravitational wave at $13.86~\mu$Hz is calculated from the orbital period of 1.67 days. Decoding the phase modulations of the GW reveals eleven companions. The orbital periods fit very well with the predictions of Dermott's law, an improved version of the Titius-Bode rule.
[254] vixra:2308.0028 [pdf]
Growth in Matrix Algebras and a Conjecture of Perez-Garcia, Verstraete, Wolf and Cirac
Let S be a family of n x n matrices over a field such that, for some integer l, the products of length l of the matrices in S span the full n x n matrix algebra. We show that this holds for any positive integer l > n^2 + 2n − 5.
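The spanning condition is easy to test numerically for toy examples; a sketch (the matrices below are illustrative, not from the paper):

```python
import itertools
import numpy as np

def span_dim_of_products(mats, length):
    """Dimension of the linear span of all length-`length` products of mats."""
    prods = []
    for combo in itertools.product(mats, repeat=length):
        p = np.eye(len(mats[0]), dtype=int)
        for m in combo:
            p = p @ m
        prods.append(p.flatten())          # vectorize each product
    return np.linalg.matrix_rank(np.array(prods))

# Toy 2x2 family: a swap matrix and a rank-one projector.
X = np.array([[0, 1], [1, 0]])
Y = np.array([[1, 0], [0, 0]])
dim = span_dim_of_products([X, Y], 2)
print(dim)  # 4 -> the length-2 products already span all of M_2
```

Here XX = I, XY, YX and YY are linearly independent, so the family spans the full 2x2 matrix algebra at length 2, well inside the paper's bound.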
[255] vixra:2308.0025 [pdf]
Analytic Proof of The Prime Number Theorem
In this paper, we shall prove the Prime Number Theorem, providing a brief introduction to the famous Riemann Zeta Function and using its properties.
[256] vixra:2308.0024 [pdf]
Quasi-Perfect Numbers Have at Least 8 Prime Divisors
Quasi-perfect numbers satisfy the equation σ(N) = 2N + 1, where σ is the sum-of-divisors function. By computation, it is shown that no quasi-perfect number has fewer than 8 prime divisors. For testing purposes, quasi-multiperfect numbers are also examined.
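The defining equation is cheap to probe with a divisor-sum sieve (a small sketch; the paper's computation reaches far larger bounds than this):

```python
def sigma_sieve(limit):
    """sigma[n] = sum of divisors of n, for all n <= limit."""
    sigma = [0] * (limit + 1)
    for d in range(1, limit + 1):
        for multiple in range(d, limit + 1, d):
            sigma[multiple] += d
    return sigma

LIMIT = 10**5
sig = sigma_sieve(LIMIT)
quasi = [n for n in range(2, LIMIT + 1) if sig[n] == 2 * n + 1]
print(quasi)  # [] -- no quasi-perfect number in this range
```

No example is known at all; the open question is whether any exists, which is why the paper works with constraints on the number of prime divisors instead.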
[257] vixra:2308.0017 [pdf]
Vehicle Longitudinal Dynamics Model
This technical report presents a MATLAB Simulink model that represents the longitudinal dynamics of an actual vehicle with remarkable accuracy. Through validation against empirical data, the model demonstrates a close adherence to the real-world behaviour of a vehicle, encompassing key aspects such as acceleration, braking, and velocity control. With its versatility and applicability in various engineering domains, this model is a valuable tool for automotive research, aiding in developing advanced control systems and autonomous driving technologies.
[258] vixra:2308.0016 [pdf]
Analogies and Decisive Formulas
By presenting 13 correlations involving Avogadro's number, Jean Perrin definitively imposed the idea of the atom, i.e. the negation of the infinitely small. Here, we are talking about both the negation of the infinitely large and the "infinitely insignificant" advocated by officials. We are therefore alone in the Universe, which the James Webb telescope should confirm. Already, the observation of old galaxies instead of the baby galaxies predicted by standard cosmology destroys the latter. This failure of cosmology has brought science to a standstill. In particular, CERN failed to detect the expected supersymmetric particles, forgetting Eddington's prophecy of Proton-Tau hyper-symmetry, here linked to cosmology. What we haven't realized is that understanding the quantum world requires cosmology, which therefore appears to be the essential discipline. In fact, it even provides a limit to science, since there is a PERMANENT CREATION of neutrons. Although too weak to be measured directly, this is manifest in the baby galaxies revealed by Halton Arp. The lack of mastery of quantum phenomena is evident in the turbulent history of the laser, still misunderstood, in the quantum Hall and Josephson effects, and in the lack of understanding of climato-cosmic effects. Unlike Perrin's relations, ours are directly verifiable by everyone, which finally rehumanizes Science.
[259] vixra:2308.0013 [pdf]
Cosmological Redshift as a Function of Relative Cosmic Time: Introducing a Stationary Light Model
This paper evaluates cosmological redshift as a function of the relative cosmic age of the emitter and observer of light, suggesting a model of spatial expansion that radically differs from the present interpretation. This Stationary Light model suggests that the propagation of light and expansion of space are synonymous. We can yield close numerical agreement of the distance/redshift relation to λCDM without the consideration of any conventional or theoretical forces, and thus no consideration of density (Ω).
[260] vixra:2308.0012 [pdf]
Connecting de Broglie's Inner Frequency to the Hubble Constant: a New Road to Quantum Cosmology?
While working on the concept of space volume absorption, as underlying classical Newtonian gravity, I got the idea to connect de Broglie's idea of an inner frequency to the Hubble constant. The space absorption concept brings gravity conceptually in line with Hubble's space expansion and allows balancing Hubble space volume expansion with space volume absorption. This reproduced Friedmann's critical density formula. The introduction of the concept of the rate of space volume absorption, a Lorentz scalar, leads to an expression for a quantized bubble of space absorption of the size of the largest nucleus. The mass independent formula for the volume of this quantized bubble of space absorption combines Friedmann's formula and de Broglie's formula and thus integrates the universal constants of Newton, Hubble, Planck and Einstein. I noticed a conceptual similarity between thus quantized space and the sub-quantum medium of the Bohm-Vigier-de Broglie theory.
[261] vixra:2308.0011 [pdf]
Significance of the Number Space and Coordinate System in Physics for Elementary Particles and the Planetary System
In physics, a single center of gravity is assumed for forces. However, at least 3 fixed points π, π<sup>2</sup>, π<sup>3</sup>are required as the center, orthograde for the 3 spatial dimensions. With this approach, the universe can be understood as a set of rational numbers Q. This is to be distinguished from how we see the world, a 3-dimensional space with time. Observations from the past is the subset Q<sup>+</sup> for physics. A system of 3 objects, each with 3 spatial coordinates on the surface and time, is sufficient for physics. For the microcosm, the energy results from the 10 independent parameters as a polynomial P(2). For an observer, the local coordinates are the normalization for the metric. Our idea of a space with revolutions of 2π gives the coordinates in the macrocosm in epicycles. For the observer this means a transformation of the energies into polynomials P(2π). This is used to simulate the energies of a system. c can be calculated from the units meter and day.π/2 c m day = r<sub>Earth</sub><sup>2</sup>This formula provides the equatorial radius of the earth with an accuracy of 489 m. Orbits can be calculated using polynomials P(2π) and orbital times in the planetary system with P(8). 
A common constant can be derived from h, G and c with the consequence for H0:h G c<sup>5</sup>s<sup>8 </sup>/m<sup>10</sup> ( π<sup>4</sup>- π<sup>2</sup>- π<sup>-1</sup> - π<sup>-3</sup>) <sup>1/2</sup> H0<sub>theory</sub>= π<sup>1/2</sup>3 h G c<sup>3</sup> s<sup>5</sup>/m<sup>8</sup>A photon consists of 2 entangled electrons e<sup>-</sup> and e<sup>+</sup>m<sub>neutron</sub> / m<sub>e</sub>=(2π)<sup>4</sup> +(2π)<sup>3</sup>+(2π)<sup>2</sup>-(2π)<sup>1</sup>-(2π)<sup>0</sup>-(2π)<sup>-1</sup>+2(2π)<sup>-2</sup>+2(2π)<sup>-4</sup>-2(2π)<sup>-6</sup> +6(2π)<sup>-8</sup> = 1838.6836611 Theory: 1838.6836611 m<sub>e</sub> measured: 1838.68366173(89) m<sub>e</sub>For each charge there is an energy C = -π+2π<sup>-1</sup>- π<sup>-3</sup>+2π<sup>-5</sup>-π<sup>-7</sup>+π<sup>-9</sup>- π<sup>-12</sup>Together with the neutron mass, the result for the proton is: m<sub>proton</sub>=m<sub>neutron</sub> + C m<sub>e</sub>= 1836.15267363 m<sub>e</sub>Fine-structure constant:1/α= π<sup>4</sup>+ π<sup>3</sup>+ π<sup>2</sup>-1- π<sup>-1</sup> + π<sup>-2</sup>- π<sup>-3</sup> + π<sup>-7</sup> - π<sup>-9</sup>- 2 π<sup>-10</sup>-2 π<sup>-11</sup>-2 π<sup>-12</sup> = 137.035999107The muon and tauon masses as well as calculations for the inner planetary system are given.
[262] vixra:2308.0010 [pdf]
Alone in the Universe
The history of science shows the effectiveness of analogies, complementing rigorous formalism. But this practice has been neglected, leading to the current bankruptcy of official cosmology. Here are 30 formulas giving the Hubble radius, including 3 linked to the Solar System and two linking the cosmos and the Egyptian Nombrol 3570, linked to the 17th power of the golden ratio, holographically defining the meter from the terrestrial radius. In addition to these 5 specific relationships, there are 14 directly solanthropic relationships. This is comparable to Jean Perrin's book "Les atomes", which brings together 13 independent formulas involving Avogadro's number. But these relations are precise to the nearest thousandth, giving a total improbability of $10^{-3\times 44} = 10^{-132}$, whereas for Perrin's 10% precise relations it is more like $10^{-1\times 13} = 10^{-13}$. But Perrin definitively imposed the idea of the atom, i.e. the negation of the infinitely small. Here, we are talking about both the negation of the infinitely large and the "infinitely insignificant" advocated by officials. We are therefore alone in the Universe, which the James Webb telescope should confirm, after the bright rejection of initial Big Bang cosmology by the "Universe breakers" galaxies.
[263] vixra:2307.0162 [pdf]
Stringy Phenomenology with Preon Models
Following Pati, we compare the global symmetries of our topological supersymmetric preon model with the heterotic E_8 x E_8 string theory. We include Pati's supergravity-based preon model in this work and compare the preon interactions of his model to ours. Based on the preon-string symmetry comparison and preon phenomenological results, we conclude that the fundamental particles are likely preons rather than standard model particles.
[264] vixra:2307.0161 [pdf]
Gravitational Time Dilation, Relativistic Gravity Theory, Schwarzschild's Physically Sound Original Metric and the Consequences for Cosmology
It is natural to assume that the expanding universe was arbitrarily compact in the sufficiently remote past, in which state gravitational time dilation strongly affected its behavior. We first regard gravitational time dilation as the speed time dilation of a clock falling gravitationally from rest. Energy conservation implies that this depends solely on the Newtonian gravitational potential difference between the ends of the clock's trajectory. To extend this to the relativistic domain we work out relativistic gravity theory. The metric result it yields for gravitational time dilation is consistent with our Newtonian gravitational potential result in the Newtonian limit. However, the Robertson-Walker metric form for the universe implies complete absence of gravitational time dilation. Since we assume the universe was once arbitrarily compact, we turn instead to the metric for a static gravitational point source, but find that its textbook form puts a sufficiently compact universe inside an event horizon. This is due to transformation of the three radial functions which describe a static, spherically-symmetric metric into only two before inserting that now damaged metric form into the Einstein equation. Schwarzschild's original metric solution, which isn't in textbooks, involved no such transformation and therefore is physically sound; we obtain from it a picture of a universe which had an outburst of star and galaxy formation in the wake of its inflation.
[265] vixra:2307.0152 [pdf]
The Eight Planets of the Kepler-47 Star System
Eight planets of the binary star Kepler-47 modulate the gravitational wave at $3.1~\mu$Hz. Three of them are already known. The measured orbital periods fit very well with the predictions of Dermott's rule, an improved version of the Titius-Bode rule. The phase-modulation measurement technique is explained in detail.
[266] vixra:2307.0148 [pdf]
A Differential Datalog Interpreter
The core reasoning task for datalog engines is materialization: the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de facto method of computing it is the recursive application of inference rules. Because this is a costly operation, datalog engines must provide incremental materialization, that is, adjust the computation to new data instead of restarting from scratch. One major caveat is that deleting data is notoriously more involved than adding it, since one has to take into account all data that has been entailed from what is being deleted. Differential Dataflow is a computational model that provides efficient incremental maintenance of iterative dataflows, notably with equal performance between additions and deletions, along with work distribution. In this paper we investigate the performance of materialization with three reference datalog implementations, one of which is built on top of a lightweight relational engine, while the other two are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations.
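The recursive rule application the abstract refers to is typically implemented as semi-naive evaluation, which joins only the newly derived facts in each round. A minimal sketch for transitive closure (illustrative; not one of the three implementations benchmarked in the paper):

```python
def seminaive_tc(edges):
    """Semi-naive evaluation of:
       path(x, y) :- edge(x, y).
       path(x, y) :- path(x, z), edge(z, y).
    """
    path = set(edges)
    delta = set(edges)
    while delta:
        # Join only the newly derived facts against the base relation,
        # instead of re-deriving everything from scratch each round.
        new = {(x, y) for (x, z) in delta for (z2, y) in edges if z == z2}
        delta = new - path
        path |= delta
    return path

print(sorted(seminaive_tc({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Incremental deletion is harder precisely because removing an edge invalidates derived `path` facts, which is what counting- or differential-based maintenance schemes address.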
[267] vixra:2307.0146 [pdf]
Structural Embeddings of Tools for Large Language Models
It is evident that the current state of Large Language Models (LLMs) necessitates the incorporation of external tools. The lack of straightforward algebraic and logical reasoning is well documented and has prompted researchers to develop frameworks that allow LLMs to operate via external tools. The ontological nature of tool utilization for a specific task can be well formulated with a Directed Acyclic Graph (DAG). The central aim of the paper is to highlight the importance of graph-based approaches to LLM-tool interaction in the near future. We propose an exemplary framework to guide the orchestration of exponentially increasing numbers of external tools with LLMs, where the objectives and functionalities of tools are graph-encoded hierarchically. Assuming that textual segments of a Chain-of-Thought (CoT) can be imagined as tools as defined here, the graph-based framework can pave new avenues in that particular direction as well.
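A tool-dependency DAG of the kind described can be encoded as a mapping from each tool to its prerequisites and linearized with a standard topological sort (the tool names below are hypothetical, not from the paper):

```python
from graphlib import TopologicalSorter

# Hypothetical tool DAG: each tool maps to the set of tools whose
# outputs it consumes before it can run.
tool_dag = {
    "web_search": set(),
    "retriever": set(),
    "summarizer": {"web_search", "retriever"},
    "calculator": set(),
    "report_writer": {"summarizer", "calculator"},
}

# A valid execution order for an LLM orchestrator to invoke the tools.
order = list(TopologicalSorter(tool_dag).static_order())
print(order)
```

Any valid order places each tool after all of its prerequisites, which is the structural guarantee a DAG-based orchestration framework relies on.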
[268] vixra:2307.0135 [pdf]
Khasi-Jaintia Jaids( Surnames) and the Graphical law
We study the Khasi-Jaintia jaids (surnames). We plot the natural logarithm of the number of Khasi-Jaintia jaids beginning with a letter, normalised, against the natural logarithm of the rank of the letter, normalised. We conclude that the Khasi-Jaintia jaids (surnames) can be characterised by the magnetisation curve, BW(c=0), of the Ising model in the Bragg-Williams approximation in the absence of an external magnetic field H, i.e. $c=\frac{H}{\gamma\epsilon}=0$.
[269] vixra:2307.0129 [pdf]
A Proof of the Legendre Conjecture
If the Legendre conjecture does not hold, then all integers in the interior of $(n^2, (n+1)^2)$ are composite. Counting composite integers shows that the ratio of the number of odd composites to the number of odd integers in the interior of $(n^2, (n+1)^2)$ is smaller than one. Consequently, the Legendre conjecture holds.
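The conjecture's statement, a prime in every interval $(n^2, (n+1)^2)$, is easy to verify for small $n$ (a numerical spot check, not a substitute for the claimed proof):

```python
def sieve(limit):
    """Byte-array sieve of Eratosthenes up to `limit`."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit**0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
    return is_prime

N = 500
is_prime = sieve((N + 1) ** 2)
# Intervals strictly between n^2 and (n+1)^2 containing no prime at all
gaps_without_prime = [
    n for n in range(1, N + 1)
    if not any(is_prime[m] for m in range(n * n + 1, (n + 1) ** 2))
]
print(gaps_without_prime)  # [] -- every interval checked contains a prime
```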
[270] vixra:2307.0125 [pdf]
Khasi-Jaintia Jaids (Surnames)
We collect and put together the jaids (surnames) of the Khasi-Jaintia tribes of Meghalaya, India, in this paper. There are more jaids than presented in the paper.
[271] vixra:2307.0121 [pdf]
Training Self-supervised Class-conditional GAN with Virtual Labels
Class-conditional GAN is a conditional GAN that can generate class-conditional distribution. Among class-conditional GANs, class-conditional InfoGAN can generate class-conditional data through a self-supervised (unsupervised) method without a labeled dataset. Instead, class-conditional InfoGAN requires optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing the optimal categorical latent distribution (prior probability). The proposed model consists of a discriminator, a classifier, and a generator, and uses three losses. The first loss is the cross-entropy classification loss to predict the conditional vector of the fake data. The classifier is trained with the classification loss. The second loss is the CAGAN loss for class-conditional data generation. The conditional vector of the real data predicted by the classifier is used for CAGAN loss. The generator and discriminator are trained with CAGAN loss. The third loss is the classifier gradient penalty loss. The classifier gradient penalty loss regularizes the slope of the classifier's decision boundary so that the decision boundary converges to a local optimum over a wide region. Additionally, the proposed method updates the categorical latent distribution with a predicted conditional vector of real data. As training progresses, the entropy of the categorical latent distribution gradually decreases and converges to the appropriate value. The converged categorical latent distribution becomes appropriate to represent the discrete part of the data distribution. The proposed method does not require labeled data, optimal categorical latent distribution, and a good metric to measure the distance between data.
[272] vixra:2307.0118 [pdf]
A Solvable Quintic Equation
This article presents a solvable quintic equation under the condition that several coefficients of the quintic are restricted to depend on the other coefficients. We can solve a quintic equation by restricting two coefficients among the four coefficients available. If a quintic equation has a quadratic factor $(x^2 + b_1 x + b_0)$, then we get two simultaneous equations, which can be solved by using a sextic equation under restriction.
[273] vixra:2307.0117 [pdf]
Networked Robots Architecture for Multiple Simultaneous Bidirectional Real Time Connections
The Architecture for Networked Robots presented in this paper is designed so that entities at the enterprise level, such as a Java application, can access multiple robots with real-time, two-way, on-demand reading of sensors and control over robot motion (actuators). If an application can simultaneously access the sensors of multiple robots, then sophisticated algorithms can be developed to coordinate their movement. Simultaneous, combined, full knowledge of all aspects of the robots' sensors and motion control opens up the capability to make multiple robots act in a coordinated and purposeful way. In addition, the Networked Robots Architecture allows multiple enterprise entities to have simultaneous access to the same robot through a real-time connection. For example, while a Java application is monitoring and controlling a robot, another entity such as an HTML5 WebSocket client can also control and monitor the same robot through a web browser. A multi-threaded WebSocket server with routing is combined with a separate multi-threaded TCP/IP server called the Frontline Server. The robots connect through the Frontline Server, which creates a thread per connection and connects to the WebSocket server. The WebSocket server accepts connections from enterprise applications (e.g. Java based) and remote web-based applications. Each robot has a unique identification (48 bits, represented in hexadecimal) and a truncated WebSocket session ID that is maintained throughout the connections, including in the robot's firmware. Both WiFi LAN and 4G LTE WAN are supported, with robots in both networks accessible through the Internet.
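The routing idea, a 48-bit hexadecimal robot ID whose messages fan out to every subscribed session, can be sketched as follows (class and method names are illustrative, not from the paper):

```python
import secrets
from collections import defaultdict

class RoutingTable:
    """Maps robot IDs to the set of client sessions subscribed to them,
    so several enterprise entities can monitor one robot at once."""

    def __init__(self):
        self.subscribers = defaultdict(set)

    def connect_robot(self):
        # 48-bit identifier rendered as 12 hex digits, as described above
        return f"{secrets.randbits(48):012x}"

    def subscribe(self, robot_id, session_id):
        self.subscribers[robot_id].add(session_id)

    def route(self, robot_id, message):
        # Fan a robot's sensor message out to every subscribed session.
        return [(session, message) for session in sorted(self.subscribers[robot_id])]

table = RoutingTable()
rid = table.connect_robot()
table.subscribe(rid, "java-app-1")     # enterprise Java application
table.subscribe(rid, "browser-ws-7")   # HTML5 WebSocket client
print(table.route(rid, "sensor:lidar=3.2m"))
```

In the actual architecture this bookkeeping would live in the WebSocket server, with the Frontline Server feeding it one thread per robot connection.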
[274] vixra:2307.0115 [pdf]
The Fate of Supersymmetry in Topological Quantum Field Theories
We analyze the role of supersymmetry in nature. We extend our previous model of particles and cosmology beyond its critical energy scale at about $10^{16}$ GeV. We assume that there are three main phases in the evolving universe: the first is a topological gravity phase, the second a brief Chern-Simons phase, and the third the standard model (SM) gauge phase. In our scenario supersymmetry (SUSY) appears in all phases, but in the third phase it is confined in topological preons, which form quarks and leptons. The confined SUSY (cSUSY) is supported by the lack of observation of squarks and sleptons. cSUSY also provides a natural mechanism for matter-antimatter asymmetry. The possible relationship of this tentative scenario to quantum gravity and the role of UV-completeness are disclosed.
[275] vixra:2307.0114 [pdf]
Poynting's Theorem and Undecidability of The Logic of Causality in Light of EPR Completeness Condition
The most elementary empirical truth associated with any experiment involving light (electromagnetic radiation) propagation is the distinction between the source (region of cause) and the detector (region of effect), i.e. ``cause/effect'' distinction, based on which one can speak of ``distance between source and detector'', ``propagation from source to detector'' and, therefore, ``action at a distance'', ``velocity of propagation''. According to EPR's completeness condition, ``cause/effect'' distinction should be taken into account in a theory that is supposed to provide explanations for such an experiment, the simplest one being the Hertz experiment. Then, in principle, one can decide whether ``cause before effect'' or ``cause after effect'' i.e. the logic of causality remains decidable. I show that, working with Maxwell's equations and ``cause/effect'' distinction to explain Hertz experiment, Poynting's theorem is unprovable. It is provable if and only if ``cause/effect'' distinction is erased by choice through an act of free will, but the logic of causality becomes undecidable. The current theoretical foundation behind the hypothesis of `light propagation' comes into question as theoretical optics is founded upon Maxwell's equations and Poynting's theorem. A revisit to the foundations of electrodynamics, with an emphasis on the interplay among logic, language and operation, seems necessary and motivated.
[276] vixra:2307.0113 [pdf]
Part III of the Correction of The Selected Collection of Exams of Geodesy and Mathematical Cartography From The German School
This is Part III, the last part, of the correction of the collection of exams of geodesy and mathematical cartography. These exams are from the German school, namely from the Institute of Geodesy of the University of Stuttgart, where the eminent professor Erik W. Grafarend (1939-2020) taught geodesy courses and in particular mathematical cartography. This is an opportunity for French-speaking students to share in the German methodology.
[277] vixra:2307.0112 [pdf]
How to Receive Continuous Gravitational Waves
The Kepler space telescope and the Transiting Exoplanet Survey Satellite (TESS) have discovered numerous multiple star systems emitting GW. Some of these have planets that phase modulate the GW and allow the planetary mass to be determined. The detection techniques are explained in detail using examples.
[278] vixra:2307.0111 [pdf]
Verfahren Zum Empfang Von Kontinuierlichen Gravitationswellen (How to Receive Continuous Gravitational Waves)
Mit dem Kepler space telescope und dem Transiting Exoplanet Survey Satellite (TESS) wurden zahlreiche Mehrfach-Sternsysteme entdeckt, die GW abstrahlen. Manche davon besitzen Planeten, die die GW phasenmodulieren. Die Nachweistechniken werden an Beispielen ausführlich erklärt.

The Kepler space telescope and the Transiting Exoplanet Survey Satellite (TESS) have discovered numerous multiple star systems emitting GW. Some of these have planets that phase modulate the GW and allow the planetary mass to be determined. The detection techniques are explained in detail using examples.
[279] vixra:2307.0109 [pdf]
Quantum Resonances in Perfect Cosmology
As predicted, the "Universe breakers" galaxies are one more confirmation of the Perfect Cosmology (steady-state model). Moreover, the main cosmological parameters show quantum resonances involving the large Lucas and Eddington numbers. This confirms that matter is a matter-antimatter oscillation and dark matter a quadrature one. This also confirms the BIPM $G$ value, larger by $1.7\times 10^{-4}$ than the official one.
[280] vixra:2307.0108 [pdf]
Boys Localization and Pipek-Mezey Localization of Internal Coordinates and New Intermolecular Coordinates in Turbomole
Local internal coordinates were achieved with simplified versions of the localization methods of Boys and Pipek-Mezey. By truncation of the orthogonality tails, these internal coordinates can be made even more local. The simplified versions of the Boys localization and of the Pipek-Mezey localization can be applied to any set of orthonormal vectors, provided one can assign a location to each component of these vectors. New intermolecular coordinates for supermolecules were implemented in Turbomole. The generation of internal coordinates becomes more stable, and the geometry optimization is much faster in some cases.
[281] vixra:2307.0104 [pdf]
Quaternionic Generalization of Telegraph Equations
Using non-commutative space-time quaternion algebra, we present a generalization of the one-dimensional and three-dimensional telegraph equations, which are widely applied to describe the propagation of an electromagnetic signal in communication lines, as well as particle diffusion and heat transfer. It is shown that the system of telegraph equations can be represented in compact form as a single quaternion equation taking into account the space-time properties of physical quantities. The distinctive features of the one-dimensional and three-dimensional telegraph equations are discussed.
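For reference, the classical one-dimensional telegraph equation that such quaternionic formulations compactify can be written in its standard transmission-line form (this is the textbook equation, not a formula taken from the paper itself):

```latex
\[
\frac{\partial^{2} u}{\partial x^{2}}
  = LC\,\frac{\partial^{2} u}{\partial t^{2}}
  + \left(RC + GL\right)\frac{\partial u}{\partial t}
  + RG\,u
\]
% u: line voltage (or current); R, L, G, C: resistance, inductance,
% conductance and capacitance per unit length of the line.
```

Setting $R = G = 0$ recovers the lossless wave equation, while the first-order term models damping of the propagating signal.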
[282] vixra:2307.0103 [pdf]
A Comparative Analysis of Smart Contract Fuzzers’ Effectiveness
This study presents a comparative analysis of randomized testing algorithms, commonly known as fuzzers, with a specific emphasis on their effectiveness in catching bugs in Solidity smart contracts. We employ the non-parametric Mann-Whitney U-test to gauge performance, defined as the ``time to break invariants per mutant'', using altered versions of the widely-forked Uniswap v2 protocol. We conduct 30 tests, each with a maximum duration of 24 hours or 4,294,967,295 runs, and evaluate the speed at which the fuzzers Foundry and Echidna can breach any of the protocol's 22 invariant properties for each of the 12 mutants, created both with mutation testing tools and with manual bug injection methods. The research shows significant performance variability between runs for both Foundry and Echidna depending on the instances of mutated code. Our analysis indicates that Foundry was able to break invariants faster in 9 out of 12 tests, Echidna in 1 out of 12 tests, and in the remaining 2 tests the difference in performance between the two fuzzers was not statistically significant. The paper concludes by emphasizing the necessity for further research incorporating additional fuzzers and real-world bugs, and lays the groundwork for more precise and rigorous evaluations of fuzzer effectiveness.
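The Mann-Whitney comparison used in the study can be illustrated with a minimal sketch. The timing data below are hypothetical, and a real analysis would use a full implementation such as `scipy.stats.mannwhitneyu`, which also supplies the p-value; the helper here only computes the U statistic itself:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y: the number of pairs
    (xi, yj) with xi < yj, counting ties as one half."""
    return sum((xi < yj) + 0.5 * (xi == yj) for xi in x for yj in y)

# Hypothetical times-to-break-invariant (hours) for two fuzzers on one mutant.
foundry = [0.5, 1.2, 2.0, 3.1]
echidna = [1.8, 2.5, 4.0, 6.3]

u = mann_whitney_u(foundry, echidna)
# U ranges from 0 to len(x)*len(y); values near either extreme indicate
# that one sample is systematically smaller (faster) than the other.
print(u)  # -> 13.0
```

With 4 observations per side, U can be at most 16, so 13.0 leans toward Foundry being faster here; the test then asks whether such a value is plausible under the null hypothesis of no difference.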
[283] vixra:2307.0102 [pdf]
Gravity's Bridge: Investigating the Link Between Parallel Space, Negative Matter, and Dark Matter's Interaction with Our Universe
The circle and the straight line are two expressions of the same mathematical principle, revealing the intricate interplay between order and chaos in the universe. This principle also explains why the proper time for light is null. In this study, we explore the concept of light acceleration, which is a property shared with all matter in the universe. Similarly, we delve into the abstract nature of dark matter, viewing it as a body in the universe whose components remain unknown, yet which interacts with our world only through gravity. Notably, negative matter bends the fabric of spacetime according to a novel model, revealing the existence of two universes or spaces interconnected through gravity.
[284] vixra:2307.0084 [pdf]
To Understand The Universe: "Follow the Qi!"
The role and importance of the EM 4-vector potential extends beyond physics theory. It is conceptually connected with the concepts of ether, biofield and qi. Instead of finding an "umbrella theory", we would benefit more from understanding all this knowledge as a Network of Theories, with correspondences, translations and implications; this is reminiscent of Topos Theory. At the experimental and technological level, the Superconducting Quantum Computing area provides a valuable lesson. At the other level, that of complex biological systems, the theory of chakras, meridians and acupuncture invites us to find a common framework, in the sense of Cybernetics, enriched with Quantum Computing (hardware and software). A contribution to the EPR debate is included: time-sync needs to be supplemented by space-alignment. In conclusion, matter defines and "follows" the Qi flow as a "reference frame", in the spirit of General Relativity, but at the level of a gauge theory connection.
[285] vixra:2307.0079 [pdf]
Part II of the Correction of The Selected Collection of Exams of Geodesy and Mathematical Cartography From The German School
This is Part II of the correction of the collection of exams of geodesy and mathematical cartography. These exams are from the German school, namely from the Institute of Geodesy of the University of Stuttgart, where the eminent professor Erik W. Grafarend (1939-2020) taught geodesy courses and in particular mathematical cartography. This is an opportunity for French-speaking students to share in the German methodology.
[286] vixra:2307.0077 [pdf]
Gravity Extensions Equation and Complex Spacetime
This hypothesis presents a possible extension of Einstein's field equations \cite{1}\cite{2}\cite{3}\cite{4} which reduces to his field equation by contractions. Basic concepts such as the description of the inertial system and the definition of a physical observer are discussed. The field equation predicts the existence of exactly four-dimensional space-time, as only four-dimensional space-time has an equal number of unknowns for each term of the equation. The equation itself can be written in two forms, mixed and fully covariant:
\begin{gather}
R^{\rho}_{\mu\sigma\nu}-\frac{1}{2}R_{\sigma\kappa}g^{\kappa\rho}g_{\mu\nu}=\kappa T_{\mu\kappa}g^{\kappa\rho}g_{\sigma\nu}\\
R_{\phi\mu\sigma\nu}-\frac{1}{2}R_{\sigma\phi}g_{\mu\nu}=\kappa T_{\mu\phi}g_{\sigma\nu}
\end{gather}
This model relates the field of matter to the curvature of space-time in a direct way: if matter is not present at a given point, space-time is simply flat there, which makes it a requirement that the energy-momentum tensor does not vanish in the presence of space-time curvature. In this work, I do not give the exact solutions of the equations, only their derivation and their form in a particular case. The field equation can be reduced to a statement of equality between the energy-momentum tensor and the Ricci tensor:
\begin{gather}
R_{\mu\nu}=-2\kappa T_{\mu\nu}
\end{gather}
I also present a possible way to quantize the field equations using complex space-time. This gives the quantum field equation, which can be written as
\begin{gather}
\left(R_{\phi\mu\sigma\nu}\right)^{\dagger}R_{\phi\mu\sigma\nu}=\kappa\left(R_{\phi\mu\sigma\nu}\right)^{\dagger}\left(\kappa T_{\mu\phi}g_{\sigma\nu}+\frac{1}{2}R_{\sigma\phi}g_{\mu\nu}\right)
\end{gather}
This equation uses a special complex space-time and generates real invariants by using complex conjugates and transpositions. This removes singularities from the field theory, as only normalizable fields that do not possess any kind of infinity are a consequence of this normalization.
[287] vixra:2307.0057 [pdf]
Analytical Models of Plane Turbulent Wall-bounded Flows
We present the theoretical description of plane turbulent wall-bounded flows based on the previously proposed equations for vortex fluid, which take into account both the longitudinal flow and the vortex tubes rotation. Using the simple model of eddy viscosity we obtain the analytical expressions for mean velocity profiles of steady-state turbulent flows. In particular we consider near-wall boundary layer flow as well as Couette, Poiseuille and combined Couette-Poiseuille flows. In all these cases the calculated velocity profiles are in good agreement with experimental data and results of direct numerical simulations.
[288] vixra:2307.0056 [pdf]
An Automatic Counting System of Small Objects in Noisy Images with a Noisy Labelled Dataset: Computing the Number of Microglial Cells in Biomedical Images
Counting immunopositive cells on biological tissues generally requires either manual annotation or (when available) rough automatic systems for scanning signal surface and intensity in whole slide imaging. In this work, we tackle the problem of counting microglial cells in biomedical images that represent lumbar spinal cord cross-sections of rats. Note that counting microglial cells is typically a time-consuming task that additionally entails extensive personnel training. We skip the task of detecting the cells and focus only on the counting problem. First, a linear predictor is designed based on the information provided by filtered images, obtained by applying color threshold values to the labelled images in the dataset. Non-linear extensions and other improvements are presented. The choice of the threshold values is also discussed. Different numerical experiments show the capability of the proposed algorithms. Furthermore, the proposed schemes could be applied to different counting problems of small objects in other types of images (from satellites, telescopes, and/or drones, to name a few).
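A linear predictor of this kind can be sketched in a few lines: regress the manual cell count against a scalar feature of the threshold-filtered image, such as the number of surviving pixels. The data and feature below are hypothetical illustrations, not the authors' actual pipeline:

```python
def fit_linear(x, y):
    """Ordinary least squares fit y ~ a*x + b for 1-D data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical training data: pixels surviving the color threshold in each
# labelled image, and the manually counted number of microglial cells.
pixels = [1200, 3400, 5600, 8100]
cells = [12, 34, 55, 80]

a, b = fit_linear(pixels, cells)
predict = lambda p: a * p + b  # estimated cell count for a new image
```

Non-linear extensions would replace the single feature with several (for example, pixel counts at multiple thresholds) and fit them jointly.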
[289] vixra:2307.0053 [pdf]
Every Convex Pentagon Has Some Vertex Such that the Sum of Distances to the Other Four Vertices is Greater Than Its Perimeter
In this paper, the case n = 5 of problem 1.345 of the Crux Mathematicorum journal, proposed by Paul Erdös and Esther Szekeres in 1988, is solved. The problem was solved for n ≥ 6 by János Pach and the solution published in the Crux Mathematicorum journal, leaving the case n = 5 open to the reader. In September 2021, user23571113 posed this problem in the post https://math.stackexchange.com/questions/4243661/prove-thatfor-one-vertex-of-a-convex-pentagon-the-sum-of-distances-to-the-othe/4519514#4519514, and it has finally been solved.
[290] vixra:2307.0051 [pdf]
Part I of The Selected Collection of Exams of Geodesy and Mathematical Cartography From The German School
This is Part I of the correction of the collection of exams of geodesy and mathematical cartography. These exams are from the German school, namely from the Institute of Geodesy of the University of Stuttgart, where the eminent professor Erik W. Grafarend (1939-2020) taught geodesy courses and in particular mathematical cartography. This is an opportunity for French-speaking students to share in the German methodology.
[291] vixra:2307.0049 [pdf]
Some 3D-Determinant Properties for Calculating of Cubic-Matrix of Order 2 and Order 3
In this paper we study some properties of determinant calculation for cubic-matrices of order 2 and order 3. These properties are analogous to properties of determinants of square matrices; we prove and note which of them are applicable (and which are not, in some details) to this concept for cubic-matrices of orders 2 and 3. All results in this paper are presented in detail in the theorem proofs.
[292] vixra:2307.0044 [pdf]
A Software Infrastructure for CS Research Dissemination
Reading research papers is integral to computer science education, especially at the graduate and senior undergraduate levels. Students, as well as researchers, spend much time understanding research work. While this is an essential part of computer science education, little work has been done to understand, aid, and formally assess research dissemination processes and methodologies. This short paper summarizes work in progress to build a comprehensive software infrastructure for understanding and disseminating research. The tool distributes various media and files with the research paper that aid in understanding the research paper. It enables researchers to provide documents, videos, media, data, code, etc., related to their research work through a single, well-organized, easy-to-use interface. It allows easy organizing of online discussion groups and research talks to help improve understanding. This short paper summarizes the tool's structure. It highlights the use of the software infrastructure to enhance and formally assess the comprehension, evaluation, and synthesis of research for CS graduate and senior undergraduate students.
[293] vixra:2307.0029 [pdf]
Traversable Wormholes in (2+1) Dimensions and Gravity’s Rainbow
We investigate the traversable wormholes in (2 + 1) dimensions in the context of Gravity’s Rainbow which may be one of the approaches to quantum gravity. The cases in the presence of cosmological constant and Casimir energy are studied.
[294] vixra:2307.0028 [pdf]
Six Measurement Problems of Quantum Mechanics
The notorious ‘measurement problem’ has been roving around quantum mechanics for nearly a century since its inception, and has given rise to a variety of ‘interpretations’ of quantum mechanics, which are meant to evade it. We argue that no less than six problems need to be distinguished, and that several of them classify as different types of problems. One of them is what traditionally is called ‘the measurement problem’. Another of them has nothing to do with measurements but is a profound metaphysical problem. We also critically analyse T. Maudlin’s (1995) well-known statement of ‘three measurement problems’, and the clash of the views of H. Brown (1986) and H. Stein (1997) on one of the six measurement problems. Finally, we summarise a solution to one measurement problem which has been largely ignored but tacitly, if not explicitly, acknowledged.
[295] vixra:2307.0027 [pdf]
The IRPL Model of Cosmology
A new cosmological model is proposed that does not require dark energy, yet presents characteristics and trends that are almost comparable to those of the standard model. It differs from the standard model by an "extra path factor" that comes from a central hypothesis and results in an additional distance due to the gravitational radius. This additional distance causes the matter density parameter to rise from 0.5 to 1 from the big bang to the present, which gives rise to a non-zero pressure that drives the present acceleration phase of the universe's expansion. Remarkably, the halving of the density during nucleosynthesis solves the primordial lithium problem, although it introduces a deuterium problem. Finally, the resulting model solves the Hubble tension and the S8 tension, and satisfies all the constraints derived from the most recent accurate measurements of the baryon acoustic oscillation and the angular power spectrum of the cosmic microwave background, despite having one less parameter due to the absence of dark energy. The same hypothesis explains the rotating motion of galaxies on a small scale and produces consequences that are comparable to those of the modified Newtonian dynamics (MOND) theory. However, although the proposed model respects the same principles and physics as the standard model, it needs to be reinterpreted within the framework of the more original space of light to appreciate the naturalness of the hypothesis and its profound implications.
[296] vixra:2307.0022 [pdf]
Generation Mechanism for the Sun's Poloidal Magnetic Field
A sequence of physical processes was established that forms the cause-and-effect relationship between the observed alternating poloidal magnetic field of the Sun and a non-electromagnetic factor external to the Sun. It has been shown that the nonuniformity of the Sun's orbital motion about the Solar system barycenter promotes the emergence inside the Sun of the conditions for generation of the alternating poloidal component of the Sun's magnetosphere, having a period of about 20 years. Keywords: Sun's poloidal magnetic field, inversion, flip, Jupiter, Saturn.
[297] vixra:2307.0019 [pdf]
Gravitational Density $\Omega = 3/10$ and Boson Density $\Omega^2/2$ in the Perfect Cosmology
Perfect cosmology (steady-state) is characterized by a single parameter, implying the critical condition with an invariant Hubble radius. This implies that the non-relativistic gravitational energy of the Universe is $\Omega M c^2$, with $\Omega = 3/10$, whose quantum form implies the baryon density $\Omega^2/2 = 0.045$. Thus, dark matter could be interpreted as an out-of-phase matter-antimatter oscillation.
[298] vixra:2307.0014 [pdf]
Relative and Absolute Stellar Aberration
If we talk about Stellar Aberration, then we think of the form of Stellar Aberration that was first discovered and explained by Bradley. In addition to Bradley's Stellar Aberration, which can also be defined as Relative Stellar Aberration, we will define Absolute Stellar Aberration based on just one measurement. Hereafter we will refer to the Absolute Stellar Aberration as "ASA". We will try to explain in a few words why it is necessary to measure and interpret Stellar Aberration in this way. Suppose we performed two measurements of the Doppler Effect within six months. If we don't know the results of those measurements, but only the difference between them, then we cannot determine the radial velocities with which the observer moves with respect to the star. We will prove that similar reasoning can be applied in the case of Stellar Aberration as defined by Bradley. Knowing only the difference between two measurements of the Stellar Aberration, we are not able to determine the transverse velocities with which the observer moves with respect to the line of sight, but only their difference. Using the results of "ASA" measurements, we will determine a Stationary Frame of Reference and after that derive formulas for Relative and Absolute Stellar Aberration.
[299] vixra:2307.0011 [pdf]
A New Model Suggesting a Mass Difference Between Electron and Positron at 10 ppb
Background/Objectives: The primary objective is to investigate a new theoretical model approach to fundamental particles, especially the electron and positron. The model utilizes the concept of energy density limits and finds an acceptable interpretation of a speed-of-light reference frame. Due to its consistent nature, this enables us to implement these limits without breaking Lorentz invariance. This new model employs mass-less current loops at the speed of light to construct a candidate for a stable, self-contained system, which can be perceived as either an electron or positron, depending on its configuration. Methods: This is a purely theoretical work where all figures were generated by LaTeX constructs to illustrate the concepts. However, there are referenced measurement results that are important for the discussion. The mathematics is on a basic level, although the paper is dense with deductions and formulas. Only calculus and general mathematical maturity are needed, as well as knowledge of special relativity, electromagnetism and some basic atomic and particle physics. Results: We evaluate the resultant angular momentum and derive a formula that aligns with Bohr's renowned assumption about angular momentum in his atomic model. This method not only provides insights into the enigmatic number 137 in physics but also suggests a potential discrepancy between the masses of the electron and positron, with a relative error of 10 ppm in the measurement. This difference is too subtle for existing measurement techniques. Conclusions: The main result of this paper is a model that bases its approach on electromagnetic theory and deduces stable constellations, resembling particles, within this model. The theory does introduce the controversial prediction that the particle and antiparticle masses differ, using a deduction of a formula for the mass. It is also quite possible, as we quantize the difference, that this prediction can be clarified by forthcoming measurement projects. We also deduce a couple of soundness features of the model, such as deriving Bohr's condition for angular momentum in his atomic model and explaining how this can be used to deduce the actual measured angular momentum. The invariance of angular momentum and charge is also proven as a result of the model.
[300] vixra:2307.0008 [pdf]
General Conjecture on the Optimal Covering Trails in a K-Dimensional Cubic Lattice
We introduce a general conjecture involving minimum-link covering trails for any given k-dimensional grid n × n × ··· × n, belonging to the cubic lattice ℕ^k. In detail, if n is above two, we hypothesize that the minimal link length of any covering trail, for the above-mentioned set of n^k points in the Euclidean space ℝ^k, is equal to h(n, k) = (n^k − 1)/(n − 1) + c·(n − 3), where c = k − 1 iff h(4, 3) = 23, c = 1 iff h(4, 3) = 22, or even c = 0 iff h(4, 3) = 21.
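As a quick sanity check, the conjectured link length h(n, k) is easy to evaluate. The helper below only illustrates the formula from the abstract; the constant c is supplied by the caller, since the conjecture leaves its value open pending the true value of h(4, 3):

```python
def h(n, k, c):
    """Conjectured minimal link length of a covering trail for the
    n x n x ... x n grid (k factors), with the free constant c."""
    assert n > 2, "the conjecture is stated for n above two"
    return (n**k - 1) // (n - 1) + c * (n - 3)

# The three cases the conjecture distinguishes for the 4 x 4 x 4 grid:
print(h(4, 3, c=3 - 1))  # c = k - 1 -> 23
print(h(4, 3, c=1))      # -> 22
print(h(4, 3, c=0))      # -> 21
```

Note that for n = 3 the correction term vanishes regardless of c, so the formula reduces to the pure geometric-series count (3^k − 1)/2.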
[301] vixra:2306.0173 [pdf]
Testing Special Relativity With an Infinite Arm Interferometer
The Michelson-Morley experiment and its resolution by the special theory of relativity form a foundational truth in modern physics. In this paper I propose an equivalent relativistic experiment involving a single-source interferometer having infinite arms. Further, we debate the possible outcomes of such an experiment and in doing so uncover a conflict between special relativity and the symmetry of nature. I demonstrate this conflict by the method of reductio ad absurdum.
[302] vixra:2306.0167 [pdf]
The Impossibility of the Long-Distance Quantum Correlation in an Example
In this brief report we point out that the example in the famous paper of D. Bohm and Y. Aharonov in 1957 might not realize the long-distance quantum correlation proposed by Einstein, Rosen and Podolsky in 1935. The reason is presented briefly.
[303] vixra:2306.0164 [pdf]
The Birth Mechanism of the Universe from Nothing and New Inflation Mechanism
There was a model claiming the birth of the universe from nothing, but its specific mechanism for the birth and expansion of the universe was very poor. According to the energy-time uncertainty principle, during Δt an energy fluctuation of ΔE is possible, but this energy fluctuation should have reverted back to nothing. However, there is also a gravitational interaction during the time Δt, and if the negative gravitational self-energy exceeds the positive mass-energy during this Δt, the total energy of the corresponding mass distribution becomes negative, that is, a negative mass state. Because there is a repulsive gravitational effect between negative masses, this mass distribution expands. Thus, it is possible to create an expansion that does not go back to nothing. Calculations show that if the quantum fluctuation occurs for a time less than Δt=(3/10)^(1/2)t_p ~ 0.77t_p, then an energy fluctuation of ΔE > (5/6)^(1/2)m_pc^2 ~ 0.65m_pc^2 must occur. But in this case, because of the negative gravitational self-energy, ΔE will enter the negative energy (mass) state before the time Δt. Because there is a repulsive gravitational effect between negative masses, ΔE cannot contract, but expands. Thus, the universe does not return to nothing, but can exist. The Gravitational Potential Energy Model provides a means of distinguishing whether the existence of the present universe is an inevitable event or an event with a very low probability. It also presents a new model for the process of inflation, the accelerating expansion of the early universe. This paper also provides an explanation for why the early universe started in a dense state and solves the vacuum catastrophe problem. Additionally, when the negative gravitational potential energy exceeds the positive energy, it can produce an accelerated expansion of the universe.
Through this mechanism, inflation, which is the accelerated expansion of the early universe, and dark energy, which is the cause of the accelerated expansion of the recent universe, can be explained at the same time.
[304] vixra:2306.0157 [pdf]
Rencontres for Equipartite Distributions of Multisets of Colored Balls into Urns
A multiset of u·b balls contains u different colors and b balls of each color. Randomly distributing them across u urns with b balls per urn, what is the probability that no urn contains at least two balls of a common color? We reduce this problem to the enumeration of u × u binary matrices with constant row and column sum b, and provide an explicit table of probabilities for small b and u.
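For very small b and u the probability can be checked by brute force. This sketch enumerates all distinct arrangements of the colored balls directly, rather than via the binary-matrix reduction used in the paper:

```python
from fractions import Fraction
from itertools import permutations

def prob_no_repeated_color(u, b):
    """Probability that no urn receives two balls of a common color when
    u*b balls (u colors, b of each) are split into u urns of b balls."""
    balls = tuple(c for c in range(u) for _ in range(b))
    good = total = 0
    # Distinct orderings of the multiset are equally likely; the first b
    # positions form urn 0, the next b form urn 1, and so on.
    for arr in set(permutations(balls)):
        total += 1
        if all(len(set(arr[i*b:(i+1)*b])) == b for i in range(u)):
            good += 1
    return Fraction(good, total)

print(prob_no_repeated_color(2, 2))  # -> 2/3
```

This exhaustive approach only scales to tiny cases (the permutation set grows as (u·b)!), which is exactly why the paper's matrix enumeration is needed for the general table.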
[305] vixra:2306.0153 [pdf]
A Natural Explanation of Cosmological Acceleration
The problem of cosmological acceleration (PCA) is usually considered in the framework of General Relativity and here the main uncertainty is how the background space is treated. In the approaches where it is flat, PCA is usually treated as a manifestation of dark energy and (as acknowledged in the literature) currently its nature is a mystery. On the other hand, if the background space is curved then a problem arises why the observed value of the cosmological constant is as is. Following the results of our publications, we show that the solution of PCA does not contain uncertainties because cosmological acceleration is an inevitable kinematical consequence of quantum theory in semiclassical approximation. In this approach, background space and its geometry (metric and connection) are not used and the cosmological constant problem does not arise.
[306] vixra:2306.0148 [pdf]
Adapted Metrics for a Modified Coulomb/Newton's Potential
Modified Theories of Gravity include spin dependence in General Relativity, to account for additional sources of gravity instead of the dark matter/energy approach. The spin-spin interaction is already included in the effective nuclear force potential, and theoretical considerations and experimental evidence hint at the hypothesis that gravity originates from such an interaction, under an averaging process over spin directions. This invites us to continue the line of theory initiated by Einstein and Cartan, based on tetrads and spin effects modeled by connections with torsion. As a first step in this direction, the article considers a new modified Coulomb/Newton law accounting for the spin-spin interaction. The physical potential is geometrized through specific affine connections and specific semi-Riemannian metrics, canonically associated to it, acting on a manifold or at the level of its tangent bundle. Freely falling particles in these "toy Universes" are determined, showing interesting behavior and unexpected patterns.
[307] vixra:2306.0140 [pdf]
The Determinant of Cubic-Matrix of Order 2 and Order 3, Some Basic Properties and Algorithms
Based on geometric intuition, in this paper we try to give an idea of, and visualize, the meaning of determinants for cubic-matrices. We analyze the possibilities of developing the concept of the determinant for three-index 3D matrices. We define the concept of the determinant for cubic-matrices of order 2 and order 3, and study and prove some basic properties for the calculation of determinants of cubic-matrices of order 2 and 3. Furthermore, we have tested several square-determinant properties and noted that these properties are also applicable to this concept of 3D determinants.
[308] vixra:2306.0139 [pdf]
Alone in the Neguentropic Universe
Perfect cosmology is confirmed by the unifying role of the negentropic field at 14.125 Kelvin. The terminal term of the Topological Axis f(30) is thus related to the particle parameters to within a few billionths. The natural theoretical framework of Pythagorean integers is confirmed by the involvement of the Lucas-Lehmer series, rational approximations of pi, and sporadic groups. The period 19.137 ms (Neuron), defined by the constants of Newton, Planck and Fermi, is close to the average between the universal period of 13.8 billion years, given by the three-minute calculation, and that of the electron. The gap is identified, to the nearest billionth, with the musical interval 419/417. Highly singular relationships link human and astrophysical parameters, introducing the Solanthopic Principle. This is the end of several paradoxes: that of Fermi, of information, of the cosmological constant, and of vacuum energy.
[309] vixra:2306.0135 [pdf]
The Elemental Property of Primes and Small Gaps Between Primes
The solution to the Twin Prime Conjecture lies in the elemental property of primes. We construct a sequence of consecutive primes; analyzing and handling it by combining the elemental property of primes with statistical theory reveals that the Twin Prime Conjecture is true.
[310] vixra:2306.0119 [pdf]
The Fabbrini Problem
This document intends to emphasize some aspects of a recent algorithm capable of generating a secret key by transmitting information over a public channel. Given that the scheme’s construction is engaging and represents a topical innovation, we deem it useful to refer to it as "The Fabbrini Problem", after its author.
[311] vixra:2306.0103 [pdf]
Nachweis Der Begleiter Algol-D Und Algol-e Und Die Geschwindigkeit Von Gravitationswellen (Detection of the Companions Algol-D and Algol-e and the Speed of Gravitational Waves)
In den Langzeitdaten des Luftdruckes findet man die GW des Dreifach-Sternsystems Algol bei $8.073~\mu$Hz, die mit vier unterschiedlichen Frequenzen phasenmoduliert ist. Zwei Frequenzen können dem Erdorbit um die Sonne und Algol-C zugeordnet werden. Die beiden anderen könnten durch zwei bisher nicht entdeckte Sterne Algol-D und Algol-E erzeugt werden. Der hohe Modulationsindex aller vier PM lässt sich nur erklären, wenn die Ausbreitungsgeschwindigkeit der Gravitationswellen deutlich kleiner ist als die Lichtgeschwindigkeit.

In the long-term data of the air pressure one finds the GW of the triple star system Algol at $8.073~\mu$Hz, which is phase-modulated with four different frequencies. Two frequencies can be assigned to the Earth's orbit around the Sun and to Algol-C. The other two could be generated by two previously undiscovered stars, Algol-D and Algol-E. The high modulation index of all four PM can only be explained if the propagation speed of the gravitational waves is significantly lower than the speed of light.
[312] vixra:2306.0102 [pdf]
Detection of the Companions Algol-D and Algol-e and the Speed of Gravitational Waves
In the long-term data of the air pressure one finds the GW of the triple star system Algol at $8.073~\mu$Hz, which is phase-modulated with four different frequencies. Two frequencies can be assigned to Earth's orbit around the Sun and to Algol-C. The other two could be generated by two previously undiscovered stars, Algol-D and Algol-E. The high modulation index of all four PM can only be explained if the propagation speed of the gravitational waves is significantly lower than the speed of light.
[313] vixra:2306.0100 [pdf]
Sprache Als Konstrukt: Eine Untersuchung Der Implikationen Des Linguistischen Nominalismus (Language as a Construct: an Examination of the Implications of Linguistic Nominalism)
This paper explores the issue of stable meanings and references in language. It sheds light on the debate over whether words and linguistic symbols can have fixed and unchanging meanings that enable reliable communication and understanding. The text considers different perspectives and theories on the meaning of words, the relationship between language and the world, and the role of context and interpretation in communication. Arguments are presented for the possibility of stable meanings and references, while skeptical views emphasize that meanings are subjective and contextual.
[314] vixra:2306.0099 [pdf]
Boolean Structured Autoencoder Convolutional Deep Learning Network (BSautoconvnet)
In this paper, I propose a new Boolean Structured Autoencoder Convolutional Deep Learning Network (BSautoconvnet) built on top of BSconvnet, based on the concept of monotone multi-layer Boolean algebra. I show that this network achieves a significant improvement in accuracy over an ordinary ReLU autoencoder convolutional deep learning network, with far fewer parameters, on the CIFAR10 dataset. The model is evaluated by visual inspection of the quality of the reconstructed images against ground truth and against reconstructions by publicly available models.
[315] vixra:2306.0098 [pdf]
The Series Limit of Sum_k Cos(a Log K)/[k Log k]
The slowly converging series $\sum_{k\ge 2} \cos(a\log k)/[k\log k]$ is evaluated numerically for $a = 1/2, 1, 3/2, \ldots, 4$. After some initial terms, the infinite tail of the sum is replaced by the integral of the associated interpolating function, an Exponential Integral, and the "second form" of the Euler-Maclaurin corrections is derived from the analytic equations for higher-order derivatives.
[316] vixra:2306.0088 [pdf]
Dark Matter and Dark Energy
I have been working on the fundamental laws of physics for a long time. During this time, I realized that gravity does not behave as Newtonian theory predicts, and this misleads us into inferring Dark Matter. The relationship between distance and gravitational force varies with distance. Gravitational properties vary for every point in empty space and have some limits. The gravity equation varies with some value between $1/r$ and $1/r^2$ for the farthest or closest available distance. However, empty space also has a gravitational and expansion effect. This study aims to analyze and discuss these two phenomena.
[317] vixra:2306.0083 [pdf]
Kara Madde ve Kara Enerji (Dark Matter and Dark Energy)
I have been working on the fundamental laws of physics for a long time. During this time, I realized that gravity does not behave as Newtonian theory predicts, and this misleads us into inferring Dark Matter. The relationship between distance and gravitational force varies with distance. Gravitational properties vary for every point in empty space and have some limits. The gravity equation varies with some value between $1/r$ and $1/r^2$ for the farthest or closest available distance. However, empty space also has a gravitational and expansion effect. This study aims to analyze and discuss these two phenomena.
[318] vixra:2306.0081 [pdf]
Statistics of L1 Distances in the Finite Square Lattice
The L1 distance between two points in a square lattice is the sum of the horizontal and vertical absolute differences of the Cartesian coordinates and - as in graph theory - also the minimum number of edges to walk to reach one point from the other. The manuscript contains a Java program that computes, in a finite square grid of fixed shape, the number of point pairs as a function of that distance.
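The quantity tabulated in the entry above (the paper's own program is in Java) can be sketched in a few lines of Python; this brute-force enumeration over all point pairs is offered only as an illustration, not the paper's implementation:

```python
from collections import Counter
from itertools import combinations

def l1_pair_counts(n):
    """Count point pairs in an n x n square grid by their L1 distance."""
    points = [(x, y) for x in range(n) for y in range(n)]
    counts = Counter(abs(x1 - x2) + abs(y1 - y2)
                     for (x1, y1), (x2, y2) in combinations(points, 2))
    return dict(counts)
```

For the 2x2 grid this yields {1: 4, 2: 2}: four side-adjacent pairs and two diagonal pairs.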
[319] vixra:2306.0074 [pdf]
Collision Entropy Estimation in a One-Line Formula
We address the unsolved question of how best to estimate the collision entropy, also called quadratic or second-order Rényi entropy. Integer-order Rényi entropies are synthetic indices useful for the characterization of probability distributions. In recent decades, numerous studies have been conducted to arrive at valid estimates of them starting from experimental data, so as to derive suitable classification methods for the underlying processes, but optimal solutions have not been reached yet. Limited to the estimation of collision entropy, a one-line formula is presented here. The results of some specific Monte Carlo experiments give evidence of the validity of this estimator even for the very low densities of data spread in high-dimensional sample spaces. The method's strengths are unbiased consistency, generality and minimum computational cost.
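The abstract does not reproduce its one-line formula, but a standard unbiased estimator of the collision probability $\sum_i p_i^2$ (the fraction of coincident ordered sample pairs) yields a comparable one-line collision-entropy estimate. The sketch below is a stand-in under that assumption, not necessarily the paper's estimator:

```python
import math
from collections import Counter

def collision_entropy_estimate(samples):
    """Estimate H2 = -log(sum_i p_i^2) in nats from i.i.d. samples.
    The collision probability is estimated without bias as the number of
    coincident ordered pairs divided by n*(n-1).  Note: the formula fails
    (log of zero) when no two samples coincide."""
    n = len(samples)
    counts = Counter(samples)
    p2_hat = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return -math.log(p2_hat)
```

For the sample [0, 0, 1, 1] the estimated collision probability is 4/12 = 1/3, so the estimate is log 3.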
[320] vixra:2306.0072 [pdf]
Fully Non-Local Optimization as Origin of Quantum Randomness
An elemental scheme for an alternative theory to quantum mechanics is proposed. The aim is to reproduce quantum phenomenology avoiding intrinsic randomness and action at a distance, but allowing temporal and spatial non-locality. The hypothesis is that all particle dynamics are driven by a non-local but real optimization principle which determines trajectories by minimizing/maximizing a quantity. This quantity is computed uniformly over an unbounded cluster of events no matter when or where they take place. These events are understood as the points where a possible path forks and each option contributes differently to the optimized quantity. In this article the mechanism is sketched using a toy model for the measurement of a particle of spin 1/2 (or two for the entangled case) where the events computed for the particle path are kept discrete and dual. The calculation that follows aims to provide a natural yet non-local explanation of the violation of Bell inequalities without the requirement of any intrinsic randomness or express ‘hidden variables’. No method or formalism from current quantum mechanics is intended to be used. Only the experimental outcomes for measurements of spin-1/2 particles are considered.
[321] vixra:2306.0067 [pdf]
Relativistic Interferometry Using Aqueous Waves
In this paper we investigate the geometry and sequence of events within a Michelson-Morley interferometer and generalise our findings into the aqueous domain. In doing so we uncover a conflict between the predictions of special relativity and the symmetry of nature.
[322] vixra:2306.0066 [pdf]
The Integration of Modern Technologies in Education
In this article, we discuss the need for and methods of incorporating modern technologies into the educational system. We highlight possible changes in the Polish high school curriculum that utilize the internet and artificial intelligence. We also stress the importance of interdisciplinary learning and creating opportunities for students to learn by participating in educational projects that have real-life applications.
[323] vixra:2306.0061 [pdf]
İkiz Asallar Kestirimi İspatı (Proof for Twin Prime Conjecture)
Twin primes are prime numbers that differ by 2. Are there infinitely many twin primes?
[324] vixra:2306.0055 [pdf]
Introducing Proteus: a Mega Prompt with Personality, Skills and Dynamic Logic Based Internal Prompt Manipulation
There have been significant improvements in directing large language models (LLMs) to answer logic-based questions such as mathematical reasoning tasks. This has resulted in near-perfect performance on these types of problems, with accuracy levels in the mid-nineties using state-of-the-art models (GPT-4). Achieving this level of accuracy has previously required a multi-prompt approach to elicit better performance from LLMs. This paper introduces a new prompt paradigm termed the "Mega prompt" and further introduces Proteus, a state-of-the-art mega prompt that has been used to achieve a new level of accuracy of 97% on the GSM8K math data set.
[325] vixra:2306.0052 [pdf]
Competences in Ontology-based Enterprise Architecture Modeling: Zooming In and Out
Competence-based approaches have received increased attention, as the demand for qualified people with the right combination of competences establishes itself as a major factor of organizational performance. This paper examines how competences can be incorporated into Enterprise Architecture modeling: (i) we identify a key set of competence-related concepts such as skills, knowledge, and attitudes, (ii) analyze and relate them using a reference ontology (grounded on the Unified Foundational Ontology), and (iii) propose a representation strategy for modeling competences and their constituent elements leveraging the ArchiMate language, discussing how the proposed models can fit in enterprise competence-based practices. Our approach is intended to cover two tasks relevant to the combined application of Enterprise Architecture and Competence Modeling: `zooming in' on competences, revealing the relations between competences, knowledge, skills, attitudes and other personal characteristics that matter in organizational performance, and `zooming out' of competences, placing them in the wider context of other personal competences and overall organizational capabilities.
[326] vixra:2306.0035 [pdf]
Illustrative Axiomatic Derivation of the Special Lorentz Transformation from Merely the Properties of Empty Space and Inertial Systems
The Lorentz transformation is derived merely from the properties of space and time when space is empty, together with Galilean relativity. Additional postulates about the speed of light, reciprocity, and others are not necessary. Straight world lines are bijectively mapped onto straight world lines. This known fact is exploited in an illustrative manner, which is extremely useful for teaching special relativity, in particular at an elementary level. Moreover, the approach described here, (i), provides an example of strict physical thinking, (ii), corrects a widespread erroneous belief, see the over-next paragraph, and, (iii), presents an elementary introduction to the largely unknown hyperbolic rotations (the common rotations are circular). The transformation to be found is represented as a kind of rotation times a Lorentzian ‘scale factor’. This crucially simplifies the calculations and is much easier to grasp than a rather abstract ansatz with unknown coefficients. The rotation is proven to be hyperbolic rather than circular. After that, the scale factor turns out to equal unity in a most direct manner. The reciprocity property of the transformation is obtained as a by-product. It is not special relativity that makes an additional assumption to justify the appearance of a seemingly additional natural constant, the speed of light in vacuum c; rather, classical mechanics does, whence c disappears. Two common basic assumptions of classical mechanics lead not to the Galileo but to the Lorentz transformation. The existence of a maximum speed of bodies is shown to be a purely kinematic effect, too. Einstein's second postulate is obtained as a by-product.
[327] vixra:2306.0029 [pdf]
A Selected Collection of Exams of Geodesy and Mathematical Cartography From The German School
This paper contains a selected collection of exams of geodesy and mathematical cartography. These exams are from the German school, namely from the Institute of Geodesy of the University of Stuttgart where the eminent professor Erik W. Grafarend (1939-2020) taught geodesy courses and in particular mathematical cartography. This is an opportunity for French-speaking students to share the German methodology.
[328] vixra:2306.0015 [pdf]
Organic Semiconductors and Transistors: State-of-the Art Review
This paper reviews organic semiconductors and the theory behind their development, with a brief history of this technology. Much of it discusses the distribution of energy bands in organic semiconductors, and the optical properties and luminescence effects of organic materials such as phthalocyanine and polymers such as poly(para-phenylene-vinylene). It also discusses OFETs and applications of optical amplifiers.
[329] vixra:2306.0013 [pdf]
A New Way to Write The Newtonian Gravitational Equation Resolves What The Cosmological Constant Truly Is
We demonstrate that there is a way to represent Newtonian gravity in a form that strongly resembles Einstein's field equation, although it remains a fundamentally different type of equation. In the non-relativistic regime, it becomes necessary to introduce a cosmological constant ad hoc in order to align it with observations, similar to Einstein's field equation. Interestingly, in 1917, Einstein also inserted a cosmological constant ad hoc into the Newtonian equation during his discussion of incorporating it into his own field equation. At that time, the cosmological constant was added to maintain consensus, which favored a steady-state universe. However, with the discovery of cosmological redshift and the shift in consensus towards an expanding universe, Einstein abandoned the cosmological constant. Then, around 1999, the cosmological constant was reintroduced to explain observations of distant supernovae. Currently, the cosmological constant is once again a topic of great interest and significance. Nevertheless, we will demonstrate that the cosmological constant is likely an ad hoc adjustment resulting from a failure to properly account for relativistic effects in strong gravitational fields. We are able to derive the cosmological constant and show that it is linked to corrections for relativistic effects in strong gravitational fields. In our model, this constant holds true for any strong field but naturally assumes different values, indicating that it is not truly a constant. Its value is constant only for the mass under consideration; for example, for the Hubble sphere, it always has the same value. Additionally, we will demonstrate how relativistic modified Newtonian theory also seems to resolve the black hole information paradox by simply removing it. This theory also leads to the conservation of spacetime. In general relativity theory, there are several significant challenges.
One of them is how spacetime can change over time, transitioning from infinite curvature at the beginning of the assumed Big Bang to essentially flat spacetime when the universe ends up in cold death, while still maintaining conservation of energy all the way from the Big Bang to the assumed cold death of the universe. Can one really get something from nothing?
[330] vixra:2306.0009 [pdf]
Algorithmic Computation of Multivector Inverses and Characteristic Polynomials in Non-Degenerate Clifford Algebras
Clifford algebras provide the natural generalizations of complex numbers, dual numbers and quaternions into the concept of non-commutative Clifford numbers. The paper demonstrates an algorithm for the computation of inverses of such numbers in a non-degenerate Clifford algebra of an arbitrary dimension. The algorithm is a variation of the Faddeev--LeVerrier--Souriau algorithm and is implemented in the open-source Computer Algebra System Maxima. Symbolic and numerical examples in different Clifford algebras are presented.
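The Faddeev-LeVerrier recursion that the paper adapts to Clifford numbers is easiest to see in its classical matrix form. The sketch below (plain Python with exact rationals, not the paper's Maxima code) recovers the characteristic-polynomial coefficients and, when the last coefficient is nonzero, the inverse:

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def faddeev_leverrier(A):
    """Faddeev-LeVerrier recursion: M_1 = A, c_k = -tr(M_k)/k,
    M_{k+1} = A (M_k + c_k I).  Returns the coefficients
    [1, c_1, ..., c_n] of lambda^n + c_1 lambda^{n-1} + ... + c_n
    and A^{-1} = -(M_{n-1} + c_{n-1} I)/c_n when c_n != 0."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    prev = [row[:] for row in I]        # M_0 + c_0 I = I
    M = [row[:] for row in A]           # M_1 = A
    for k in range(1, n + 1):
        c = -sum(M[i][i] for i in range(n)) / k
        coeffs.append(c)
        if k == n:
            break
        prev = [[M[i][j] + c * I[i][j] for j in range(n)] for i in range(n)]
        M = mat_mul(A, prev)
    inverse = None
    if coeffs[-1] != 0:
        inverse = [[-prev[i][j] / coeffs[-1] for j in range(n)]
                   for i in range(n)]
    return coeffs, inverse
```

For A = [[2, 1], [1, 3]] this gives the characteristic polynomial lambda^2 - 5 lambda + 5 and the exact inverse [[3/5, -1/5], [-1/5, 2/5]]; the Clifford-algebra version replaces the matrix products and trace with their multivector analogues.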
[331] vixra:2306.0003 [pdf]
Deep Learning for Physics Problems: A Case Study in Continuous Gravitational Waves Detection
Deep learning has become a powerful tool for solving a wide variety of problems, including those in physics. In this paper, we explore the use of deep learning for the detection of continuous gravitational waves. We propose two different approaches: one based on time-domain analysis and the other based on frequency-domain analysis. Both approaches achieve nearly the same performance, suggesting that deep learning is a promising technique for this task. The main purpose of this paper is to provide an overview of the potential of deep learning for physics problems. We do not provide a performance-measured solution, as this is beyond the scope of this paper. However, we believe that the results presented here are encouraging and suggest that deep learning is a valuable tool for physicists.
[332] vixra:2306.0001 [pdf]
The Solution to the Measurement Problem
There has been a lot of talk about the measurement problem. While the physics of it has been (at least mostly) rigorous, the underlying philosophy has been nothing short of a complete catastrophe. This paper will establish the true philosophical context and integrate the appropriate science into it, as it should have happened from the beginning. Bad philosophy didn’t allow it. I also highlight the false dichotomy underlying the measurement problem so that it can be detected more easily for those who do not engage with philosophy in an extensive manner.
[333] vixra:2305.0168 [pdf]
Quantum Impedance Networks and the Fermilab Accelerator Complex Evolution
Physics topics to be covered in the upcoming Fermilab ACE Science Workshop include neutrino science, dark matter experiments, muons and the muon collider, and new physics ideas [1]. Quantum Impedance Networks (QINs) sit in the latter, in new physics ideas. They encompass the neutrino, dark matter, muon, and muon collider programs. This note outlines how the new physics synthesis of Geometric Algebra (Clifford algebra in the geometric representation) and quantum impedance networks of wavefunction interactions is potentially helpful to those programs. https://indico.fnal.gov/event/59663/
[334] vixra:2305.0166 [pdf]
Boolean Structured Convolutional Deep Learning Network (BSconvnet)
In this paper, I propose a new Boolean Structured Convolutional Deep Learning Network (BSconvnet) built on top of BSnet, based on the concept of monotone multi-layer Boolean algebra. I show that this network achieves a significant improvement in accuracy over an ordinary ReLU convolutional deep learning network, with far fewer parameters, on the CIFAR10 dataset.
[335] vixra:2305.0162 [pdf]
Our Universe: To Model and To Simulate
Due to advancements in Quantum Computing, we can now simulate ``Our Universe'', based on observations, experiments and models we already have. A comparison between Low Energy Physics and HEP helps us understand the foundations, including the role of the vector potential, the quantum phase, and how Space, Time and classical concepts emerge from the quantum formalism, e.g. the Standard Model. We will focus on the U(1)-gauge theory, as a paradigm not only of Electromagnetism, but also for the quark fields of QCD. The ultimate test: understanding the fine structure constant!
[336] vixra:2305.0157 [pdf]
The Poems of Tennyson and the Graphical Law
We study The Poems of Tennyson, edited by Christopher Ricks. We draw the natural logarithm of the number of titles of the poems, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that The Poems of Tennyson, edited by Christopher Ricks, can be characterised by BP(4, $\beta H=0$), i.e. the magnetisation curve in the Bethe-Peierls approximation of the Ising Model, in the presence of four nearest neighbours and in the absence of an external magnetic field H, with $\beta H = 0$. $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature and $k_{B}$ is the tiny Boltzmann constant.
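The construction described in the entry above can be sketched for any list of titles. The function below uses one plausible reading of the normalisation (dividing each logarithm by its maximum value), which may differ in detail from the paper's, and the test data are hypothetical:

```python
import math
from collections import Counter

def normalised_loglog(titles):
    """Count titles per initial letter, rank letters by count
    (rank 1 = most frequent), and return the points
    (ln(rank)/ln(max rank), ln(count)/ln(max count)) that the
    graphical-law papers plot.  Needs at least two distinct letters
    and a top count above 1, otherwise the normalisers vanish."""
    counts = Counter(t[0].upper() for t in titles if t and t[0].isalpha())
    ordered = sorted(counts.values(), reverse=True)
    max_count, max_rank = ordered[0], len(ordered)
    return [(math.log(rank) / math.log(max_rank),
             math.log(count) / math.log(max_count))
            for rank, count in enumerate(ordered, start=1)]
```

By construction the most frequent letter maps to (0, 1) and the rarest to (1, 0), so every curve lives in the unit square, which is what makes the comparison with the normalised Bethe-Peierls magnetisation curve possible.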
[337] vixra:2305.0155 [pdf]
Exploring the Potential Connection Between Fermat's Last Theorem and Quantum Gravity
This paper investigates the potential connection between Fermat's Last Theorem and quantum gravity, aiming to bridge the gap between seemingly unrelated fields of mathematics and physics. Fermat's Last Theorem, formulated by Pierre de Fermat in the 17th century, states that no non-trivial solutions exist for the equation $x^n + y^n = z^n$, where n > 2 and x, y, z are integers. Its proof was established by Andrew Wiles in 1995. On the other hand, quantum gravity seeks to unify general relativity and quantum mechanics, describing gravity as a curvature of spacetime and the fundamental particles and forces that constitute the universe, respectively.
[338] vixra:2305.0147 [pdf]
The Penguin Encyclopedia of Places by W. G. Moore and the Graphical law
We study The Penguin Encyclopedia of Places by W. G. Moore. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.04$), i.e. the Bethe-Peierls curve in the presence of four nearest neighbours and a little external magnetic field H, with $\beta H = 0.04$. $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature and $k_{B}$ is the tiny Boltzmann constant.
[339] vixra:2305.0130 [pdf]
Energy and Probability Density Computation for Muonic Hydrogen
This paper discusses the quantum energy levels of muonic hydrogen considering only the Coulomb interaction. The computational analysis in this paper will help the reader get a good understanding of how different muonic hydrogen is from its electronic counterpart. Energy eigenvalues are calculated by a numerical method and the probability density distribution of the muon around the nucleus is plotted for different quantum states using special functions. The code is written in Python. A comparative study of emission spectra between muonic hydrogen and hydrogen is also discussed.
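The scale of the difference is visible already at the Bohr-model level: binding energies scale with the reduced mass of the orbiting particle. The sketch below is a back-of-envelope estimate using standard mass ratios, not the paper's numerical eigenvalue method:

```python
# Bohr-level estimate: E_n = -13.605693 eV * (mu / m_e) / n^2,
# with the reduced mass mu = m_mu * m_p / (m_mu + m_p).
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy (infinite nuclear mass limit uses m_e)
M_MU = 206.7682830       # muon mass in electron masses
M_P = 1836.15267343      # proton mass in electron masses

def muonic_energy(n):
    """Energy of level n of muonic hydrogen in eV, Coulomb interaction only."""
    mu = M_MU * M_P / (M_MU + M_P)   # reduced mass in electron masses
    return -RYDBERG_EV * mu / n**2
```

The n = 1 level comes out near -2.53 keV, roughly 186 times deeper than hydrogen's -13.6 eV, which is why muonic transitions fall in the X-ray band.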
[340] vixra:2305.0122 [pdf]
Pythagorean Triples and the Binomial Formula
The article shows the possibility of constructing Pythagorean triples using the binomial formula, and provides a theorem that gives an alternative proof of the infinitude of Pythagorean triples and confirms the close connection between the Pythagorean Theorem and the binomial formula.
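The abstract does not state the formula itself; one classical family that follows directly from the binomial identity $(b+1)^2 = b^2 + 2b + 1$ is sketched below, and it may differ from the article's construction:

```python
def odd_leg_triple(n):
    """For n >= 1, return the Pythagorean triple with legs 2n+1 and
    2n^2+2n.  It works because consecutive squares differ by an odd
    number: (b+1)^2 - b^2 = 2b + 1, and with b = 2n^2+2n that odd
    difference is exactly (2n+1)^2."""
    a = 2 * n + 1
    b = 2 * n * n + 2 * n
    return a, b, b + 1
```

One distinct triple per n (3, 4, 5), (5, 12, 13), (7, 24, 25), ... already shows there are infinitely many.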
[341] vixra:2305.0120 [pdf]
The Twin Fallacy
In this paper it is proposed that the clock or age difference predicted in the well known twin paradox thought experiment of special relativity is not a real effect, but only arises because proper clock rate changes when a real clock is transported to a state of higher kinetic energy have not been considered. The kinematic time dilation of SR is cancelled exactly by an increase in proper clock rate that arises due to an increase in optical electron transition frequency when taking into account the relativistic mass increase of a moving atomic clock.
[342] vixra:2305.0109 [pdf]
Harnessing AI in Quantitative Finance: Predicting GDP using Gradient Boosting, Random Forest, and Linear Regression Models
Predicting key macroeconomic indicators such as Gross Domestic Product (GDP) is a critical task in quantitative finance and economics. Precise forecasts of GDP can help in policy-making, investment decisions, and understanding the overall economic health of a country. Machine learning has emerged as a powerful tool in this domain, offering sophisticated techniques for modeling complex systems and making predictions. This project presents a comparative analysis of three machine learning models — Gradient Boosting Regressor, Random Forest Regressor, and Linear Regression — for predicting GDP. Our aim is to assess their performance and identify the model that provides the most accurate forecasts.
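As a dependency-free illustration of the simplest of the three models compared above, the sketch below fits a one-feature ordinary-least-squares line; the data are toy values, and the project itself presumably uses a library pipeline rather than this hand-rolled baseline:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with one feature:
    a = cov(x, y) / var(x), b = mean(y) - a * mean(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b
```

The ensemble models (gradient boosting, random forests) improve on this baseline precisely when the indicator-to-GDP relationship is non-linear, which is the comparison the project sets out to quantify.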
[343] vixra:2305.0105 [pdf]
The GW of the Vela Pulsar and the Receiving Pattern of the Livingston Interferometer
After compensation for phase modulation and frequency drift, the pulsar's GW can be detected in the records of the interferometer Livingston. The signatures agree with the known values measured with electromagnetic waves. The measured amplitude modulation of the GW in the daily rhythm shows the directivity of the antenna.
[344] vixra:2305.0104 [pdf]
Detection of Abnormalities in Blood Cells Using a Region-based Segmentation Approach and Supervised Machine Learning Algorithm
Screening (the slide-reading stage) is a manual human activity in cytology which consists of the inspection or analysis by the cytotechnician of all the cells present on a slide. Segmentation of blood cells is an important research question in hematology and other related fields. Since this activity is human-based, detection of abnormal cells becomes difficult. Nowadays, medical image processing has become a very important discipline for computer-aided diagnosis, in which many methods are applied to solve real problems. Our research work is in the field of computer-assisted diagnosis on blood images for the detection of abnormal cells. To this end, we propose a hybrid segmentation method to extract the correct shape of the nuclei, extract features, and classify them using SVM and KNN binary classifiers. In order to evaluate the performance of the hybrid segmentation and the choice of classification model, we carried out a comparative study between our hybrid segmentation method followed by our SVM classification model and a segmentation method based on global thresholding followed by a KNN classification model. The experiments carried out on the 62 images of blood smears show that the SVM binary classification model gives an accuracy of 97% with hybrid segmentation versus 57% with global thresholding, and 95% for the KNN classification model. As our dataset was not balanced, we evaluated precision, recall, F1 score and cross-validation with the Stratified K-Fold cross-validation algorithm for each of these segmentation methods and classification models. We obtain respectively 93.75%, 98.712% and 99% for hybrid segmentation, reflecting its effectiveness compared to global fixed-threshold segmentation and the KNN classification model.
To evaluate the performance of these models we obtained the following results: 77% mean accuracy for the SVM and 61% mean accuracy for the KNN, and 84% mean test accuracy for the SVM versus 74% for the KNN, making the SVM the best-performing model.
[345] vixra:2305.0100 [pdf]
A Computational Approach to Interest Rate Swaps Pricing
In this paper, we discuss the computational model for pricing interest rate swaps using the QuantLib library in Python. This paper provides the practical implications of financial computational theory in the context of interest rate swaps, with an in-depth analysis of its present value, fair rate, duration, and convexity.
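Independently of QuantLib, the core present-value arithmetic behind the quantities the paper analyses (present value and fair rate) can be sketched against a flat annually-compounded curve. This is a simplified stand-in with hypothetical inputs, not the paper's QuantLib implementation:

```python
def swap_metrics(fixed_rate, flat_rate, years, notional=1.0):
    """Value a plain-vanilla swap with annual payments against a flat
    annually-compounded discount curve.  The floating leg of a par swap
    is worth notional * (1 - D_n); the fixed leg is rate * annuity,
    where the annuity is the sum of the discount factors."""
    discounts = [(1 + flat_rate) ** -t for t in range(1, years + 1)]
    annuity = sum(discounts)
    float_leg = notional * (1 - discounts[-1])
    fixed_leg = notional * fixed_rate * annuity
    par_rate = (1 - discounts[-1]) / annuity   # fixed rate making PV zero
    pv_receiver_fixed = fixed_leg - float_leg
    return pv_receiver_fixed, par_rate
```

On a flat curve the fair (par) rate collapses to the curve rate itself, so a swap struck at that rate has zero present value; duration and convexity then follow by bumping `flat_rate` and re-valuing.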
[346] vixra:2305.0099 [pdf]
The East and West Philosophies: A Comparison of Geometric and Algebraic Structures
The East and West philosophies are often compared based on their cultural differences and historical backgrounds. However, this paper aims to compare these two philosophical traditions using mathematical concepts. The Eastern philosophy can be compared to a geometric structure, where there is a strong sense of determinism and order. On the other hand, the Western philosophy can be compared to an algebraic structure, where there is a high degree of uncertainty and a need for observation and experimentation. This paper argues that these two structures represent different ways of understanding the world, and that both have their strengths and limitations.
[347] vixra:2305.0097 [pdf]
A Tribute to the Memory of Prof. Helmut Moritz (1933-2022)
This paper is a tribute to the memory of professor and geodesist Helmut Moritz who passed away in December 2022. We present a paper about the theory of geodetic refraction written by him and presented during the International Symposium " Figure of the Earth and Refraction ",Vienna, March 14th-l7th, 1967.
[348] vixra:2305.0096 [pdf]
Detection of the GW of the Crab Pulsar in the Ligo and Virgo O3b Series
After compensation for phase modulation and frequency drift, the pulsar's GW can be detected in the records of all three interferometers. The signatures agree with the known values measured with electromagnetic waves.
[349] vixra:2305.0093 [pdf]
Convergent Method for the Numerical Calculation of Roots of Real Polynomial Functions
The Newton-Raphson method is the most widely used numerical method for finding roots of real polynomial functions, but it has the drawback that it does not always converge. The method proposed in this work establishes a convergence condition and therefore will always converge towards the roots of the equation.
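The abstract does not spell out the convergence condition it establishes. One standard way to make Newton iteration unconditionally convergent on a bracketing interval is to fall back to bisection whenever the Newton step leaves the bracket; the sketch below illustrates that generic safeguard, not the author's specific method:

```python
def safeguarded_newton(f, df, lo, hi, tol=1e-12, max_iter=200):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) have
    opposite signs.  A Newton step is taken only when it stays inside
    the current bracket; otherwise the midpoint is used, so the bracket
    shrinks and convergence is guaranteed."""
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        raise ValueError("root not bracketed")
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # keep the sub-interval that still contains the sign change
        if flo * fx < 0:
            hi, fhi = x, fx
        else:
            lo, flo = x, fx
        d = df(x)
        step_ok = d != 0 and lo < x - fx / d < hi
        x = x - fx / d if step_ok else 0.5 * (lo + hi)
    return x
```

On the classic test polynomial $x^3 - 2x - 5$ with bracket [2, 3] this converges to the root near 2.0945514815, retaining Newton's fast local convergence while never escaping the bracket.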
[350] vixra:2305.0092 [pdf]
The Cherenkov Radiation from Dipole and the Lorentz Contraction
The power spectral formula of the Cherenkov radiation of a dipole is derived in the framework of source theory. The distance between the charges of the dipole is relativistically contracted, which manifests in the spectral formula. Knowledge of the spectral formula can then be used to verify the Lorentz contraction of the relativistic length of the dipole. A feasible experiment for the verification of the dipole contraction is suggested.
[351] vixra:2305.0091 [pdf]
Calculation of the Rest Masses of Neutron and Proton by Polynomials with Base π in Relation to the Electron
Nature can be understood as a set of rational numbers, Q. This is to be distinguished from how we see the world, a 3-dimensional space with time. Observations and physics are the subset Q<sup>+</sup>. As described in general relativity, 10 independent equations are required. The micro world also requires these ten parameters in quanta. This allows the description and simulation of nature as a polynomial of ten parameters P(2). Imagining a space with revolutions of 2π provides the basis for polynomials at P(2π). E.g. m<sub>neutron</sub>/m<sub>e</sub> = (2π)<sup>4</sup> + (2π)<sup>3</sup> + (2π)<sup>2</sup> - (2π)<sup>1</sup> - (2π)<sup>0</sup> - (2π)<sup>-1</sup> + 2(2π)<sup>-2</sup> + 2(2π)<sup>-4</sup> - 2(2π)<sup>-6</sup> + 6(2π)<sup>-8</sup> = 1838.6836611; measured: 1838.68366173(89) m<sub>e</sub>. The charge operator for all particles is: C = -π + 2π<sup>-1</sup> - π<sup>-3</sup> + 2π<sup>-5</sup> - π<sup>-7</sup> + π<sup>-9</sup> - π<sup>-12</sup>. Together with the neutron mass, the result for the proton is: m<sub>proton</sub> = m<sub>neutron</sub> + C m<sub>e</sub> = 1836.15267363 m<sub>e</sub>. For an observer and two objects, from the torque and angular momentum alone, a common constant of h, G, and c can be derived, giving a ratio of meters and seconds: h G c<sup>5</sup> s<sup>8</sup>/m<sup>10</sup> √(π<sup>4</sup> - π<sup>2</sup> - 1/π - 1/π<sup>3</sup>) = 1.00000. Fine structure constant: 1/α = π<sup>4</sup> + π<sup>3</sup> + π<sup>2</sup> - 1 - π<sup>-1</sup> + π<sup>-2</sup> - π<sup>-3</sup> + π<sup>-7</sup> - π<sup>-9</sup> - 2π<sup>-10</sup> - 2π<sup>-11</sup> - 2π<sup>-12</sup> = 137.035999107. The ratios of energies are raw natural data; length and time are derived values. The calculations go beyond quantum theory and general relativity. E.g. 2π c m day = (Earth's diameter)<sup>2</sup>. This formula provides the equatorial radius of the Earth with an accuracy of 489 m.
From the details of the radius and rotation of the Sun, the radii and orbits can be calculated using polynomials P(2π), and orbital periods in the planetary system with P(8). E.g. R<sub>Moon</sub>/(R<sub>Earth</sub> + R<sub>Moon</sub>) = 2<sup>3</sup>/(2π), rel. error = 1.0001. Mercury: r<sub>Periapsis</sub> = 696342 km · √((2π)<sup>5</sup>/2 - (2π)<sup>4</sup>/2 + (2π)<sup>3</sup>) = 46006512 km, rel. error = 1.0001; r<sub>Apoapsis</sub> = 696342 km · √((2π)<sup>5</sup> - 0·(2π)<sup>4</sup> + (2π)<sup>3</sup>) = 69775692 km, rel. error = 1.0005. r<sub>Venus</sub>/r<sub>Mercury</sub> = 6123.80/2448.57 = 2.5009.
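The neutron-to-electron mass polynomial quoted in the abstract can be checked numerically; a minimal sketch (the helper name is hypothetical, and the coefficients are exactly those listed above):

```python
import math

def neutron_mass_ratio():
    """Evaluate the abstract's polynomial in powers of 2*pi for m_neutron/m_e."""
    x = 2 * math.pi
    # Exponent -> coefficient, exactly as listed in the abstract.
    coeffs = {4: 1, 3: 1, 2: 1, 1: -1, 0: -1, -1: -1, -2: 2, -4: 2, -6: -2, -8: 6}
    return sum(c * x ** k for k, c in coeffs.items())

print(round(neutron_mass_ratio(), 7))  # close to the quoted 1838.6836611
```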
[352] vixra:2305.0088 [pdf]
Design and Implementation of a Real-time Portfolio Risk Management System
The purpose of this report is to describe the design and implementation of a real-time portfolio risk management system. The system is developed in Python and utilizes pandas and numpy libraries for data management and calculations. With the advent of high-frequency trading, risk management has become a crucial aspect of the trading process. Traditional risk management practices are often not suitable due to the high-speed nature of these trades. Therefore, there is a need for a real-time risk management system that can keep pace with high-frequency trades.
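The report does not specify which risk metrics the system computes; as one illustrative sketch of a pandas/numpy calculation such a system might run, here is a historical value-at-risk estimate (function name, column names, and parameters are hypothetical):

```python
import numpy as np
import pandas as pd

def portfolio_var(returns: pd.DataFrame, weights: np.ndarray, alpha: float = 0.95) -> float:
    """Historical value-at-risk of a weighted portfolio (positive number = loss)."""
    port = returns.to_numpy() @ weights   # per-period portfolio returns
    return -np.quantile(port, 1 - alpha)  # loss at the (1 - alpha) tail

# Toy example: two assets, fixed weights, simulated daily returns.
rng = np.random.default_rng(0)
rets = pd.DataFrame(rng.normal(0, 0.01, size=(500, 2)), columns=["AAA", "BBB"])
w = np.array([0.6, 0.4])
print(f"95% one-period VaR: {portfolio_var(rets, w):.4f}")
```

In a streaming setting the same function would simply be re-evaluated on a rolling window of the most recent returns.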
[353] vixra:2305.0084 [pdf]
Double Spiral Galaxies and the Extratropical Cyclone in South Georgia and the South Sandwich Islands
The work is focused on the comparative analysis of the shape of spiral galaxies and the subtropical cyclone that formed north of Georgia Island and passed north of the South Sandwich Islands, in the South Atlantic Ocean. Subtropical cyclones with double spirals appear to be common in these areas of the South Atlantic. A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form between the equator and the 50th parallel. In mathematics, a spiral is a curve which emanates from a point, moving farther away as it revolves around the point. The characteristic shape of hurricanes, cyclones and typhoons is a spiral. The extratropical cyclone (EC) has a double spiral shape, whose mathematical equation has already been defined as Cotes's spiral by Gobato et al. (2022); similarly, Lindblad (1964) showed the shape of a double spiral.
[354] vixra:2305.0083 [pdf]
The Frame Mechanics: a Euclidean Representation of the Special Relativity
This document proposes another representation of the well-known effects of Einstein's special relativity. This is not a new theory but an alternative to four vectors in Minkowski spacetime, using Euclidean geometry that shows all the effects of special relativity in 2 or 3 dimensional diagrams. The mathematics used will voluntarily remain as simple as possible for the sake of accessibility.
[355] vixra:2305.0082 [pdf]
Dynamical Geodesy
It is a tribute to the memory of my professor of geodesy Jacques Le Menestrel. We give a numerical version of the two first chapters of his booklet "Dynamical Geodesy". It is part of his complete geodesy course, taught in the 70s of the last century, at the Ecole Nationale des Sciences Géographiques (ENSG), France.
[356] vixra:2305.0069 [pdf]
A Category is a Partial Algebra
A category consists of arrows and objects. We may define a language L = {dom, cod, ∘}. Then a category is a partial algebra of the language L. Hence a functor is a homomorphism of partial algebras, and a natural transformation of functors is a natural transformation of homomorphisms. And we may define a limit of a homomorphism like a limit of a functor. Then a limit of a homomorphism forms a homomorphism of partial algebras.
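The abstract's view of a category as a partial algebra over {dom, cod, ∘} can be sketched directly; here is a minimal hypothetical encoding in Python in which composition is a partial operation, defined only when codomain and domain match:

```python
class FiniteCategory:
    """A finite category as a partial algebra over the signature {dom, cod, compose}."""
    def __init__(self, arrows, dom, cod, comp):
        self.arrows, self._dom, self._cod, self._comp = set(arrows), dom, cod, comp
    def dom(self, f): return self._dom[f]
    def cod(self, f): return self._cod[f]
    def compose(self, g, f):
        # Partial operation: g∘f is defined only when cod(f) == dom(g).
        if self.cod(f) != self.dom(g):
            return None  # undefined in the partial algebra
        return self._comp[(g, f)]

# Two objects A, B with identities and one arrow u: A -> B.
C = FiniteCategory(
    arrows={"idA", "idB", "u"},
    dom={"idA": "A", "idB": "B", "u": "A"},
    cod={"idA": "A", "idB": "B", "u": "B"},
    comp={("u", "idA"): "u", ("idB", "u"): "u",
          ("idA", "idA"): "idA", ("idB", "idB"): "idB"},
)
print(C.compose("idB", "u"))  # "u"
print(C.compose("u", "idB"))  # None: cod(idB) = B, but dom(u) = A
```

A functor would then be an ordinary homomorphism of such structures, respecting dom, cod, and the partial composition wherever it is defined.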
[357] vixra:2305.0062 [pdf]
A Simple Market Making Strategy for the S&P 500 Index Using Synthetic Bid-Ask Spread Data
Market making is a crucial component of financial markets, providing liquidity to market participants by quoting both buy (bid) and sell (ask) prices for an asset. The main objective of a market maker is to profit from the bid-ask spread while managing inventory risk. In this paper, we implement a simple market making strategy for the S&P 500 index using synthetic bid-ask spread data.
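The paper's exact strategy parameters are not given in the abstract; the following toy sketch only illustrates the general idea described, quoting a fixed half-spread around a synthetic random-walk mid price and marking the remaining inventory to mid at the end (all names and numbers are illustrative assumptions):

```python
import numpy as np

def run_market_maker(n_steps=1000, half_spread=0.5, sigma=1.0, seed=0):
    """Quote bid/ask around a random-walk mid price; fills arrive at random."""
    rng = np.random.default_rng(seed)
    mid = 4000.0          # synthetic index-level mid price
    cash, inventory = 0.0, 0
    for _ in range(n_steps):
        mid += rng.normal(0, sigma)            # mid-price random walk
        bid, ask = mid - half_spread, mid + half_spread
        if rng.random() < 0.5:                 # a buyer lifts our ask
            cash += ask; inventory -= 1
        if rng.random() < 0.5:                 # a seller hits our bid
            cash -= bid; inventory += 1
    return cash + inventory * mid              # mark remaining inventory to mid

print(f"terminal P&L: {run_market_maker():.2f}")
```

Each completed round trip earns the full spread; the unmanaged inventory term is exactly the risk a real strategy would bound with position limits or quote skewing.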
[358] vixra:2305.0058 [pdf]
Problem with the Derivation of Navier-Stokes Equations
The English idiom "Where there’s a will, there’s a way" means that if someone really wants to do something, she or he will find a way to do it.
[359] vixra:2305.0057 [pdf]
Michelson-Morley Experiment Emission Theory vs Postulate 2 of Special Relativity
Regarding the interference of the two light beams, we usually think of it as interference between two light beams of the same wavelength and frequency. However, if the interferometer of the Michelson-Morley experimental device accurately records the interference of the two beams of the same wavelength, that is negative evidence for the correctness of Postulate 2 of special relativity.
[360] vixra:2305.0053 [pdf]
Rutherford Cross Section in the Laboratory Frame-Part III
In this pedagogical article, we elucidate the direct derivation of the classical non-relativistic differential Rutherford scattering cross section, in the laboratory frame, of two electrons, à la relativistic quantum mechanics as presented in the book of Bjorken and Drell.
[361] vixra:2305.0009 [pdf]
Bajo Electrico - Experimento (Electric Bass - Experiment)
Solamente por curiosidad, arrollé alambre envainado sobre la carcaza de cada pastilla. Denomino pastilla al dispositivo sensible que capta en forma electromagnética las vibraciones de las cuerdas. El bajo tiene dos pastillas y ambas carcazas están envueltas por el mismo alambre, sin ser cortado. Esto equivale a construir sobre cada carcaza un bobinado individual y después conectarlos en serie. Después de envolver ambas carcazas quedan libres los dos extremos del alambre. Antes de conectar algo entre ellos probé el instrumento y noté un cambio en el sonido. Es decir que sin colocar algo material para cerrar el circuito se verifica una acción efectiva, audible aunque no es intensa. Después construí una bobina y la puse en serie con un capacitor. Conectando esta serie entre ambos extremos libres del alambre que envuelve a las pastillas cerré el circuito. Modificando iterativamente la bobina y probando capacitores de valores distintos llegué a una condición que exhibió una acción muy evidente, que optimizó el comportamiento del bajo. Los detalles están en el desarrollo de este documento.<p>Just out of curiosity, I wrapped sheathed wire over the housing of each pickup. This is the name given to the sensitive device that electromagnetically captures the vibrations of the strings. The bass has two pickups and both cases are wrapped by the same wire, without being cut. This is equivalent to building an individual winding on each case and then connecting them in series. After wrapping both casings, the two ends of the wire remain free. Before connecting anything between them I tested the instrument and noticed a change in sound. That is to say that without placing something material to close the circuit, an effective action is verified, audible although it is not intense. Then I built a coil and put it in series with a capacitor. Connecting this series between both free ends of the wire that surrounds the pickups I closed the circuit.
Iteratively modifying the coil and trying capacitors of different values, I arrived at a condition that exhibited a very evident action, which optimized the behavior of the bass. The details are in the development of this document.
[362] vixra:2305.0008 [pdf]
W. B. Yeats, The Poems and the Graphical Law
We study W. B. Yeats, The Poems, edited by Daniel Albright. We draw the natural logarithm of the number of the titles of the poems, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that W. B. Yeats, The Poems, can be characterised by BW(c=0.01), the magnetisation curve of the Ising Model in the Bragg-Williams approximation in the presence of an external magnetic field, H. $c=\frac{H}{\gamma \epsilon}=0.01$ with $\epsilon$ being the strength of coupling between two neighbouring spins in the Ising Model and $\gamma$ representing the number of nearest neighbours of a spin, which is very large.
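The plotting procedure described here, natural log of the normalised title count per starting letter against the natural log of the normalised letter rank, can be sketched as follows (a simplified reconstruction, not the authors' code):

```python
import math
from collections import Counter

def letter_rank_points(titles):
    """(ln(rank/rank_max), ln(count/count_max)) for titles grouped by first letter."""
    counts = Counter(t[0].upper() for t in titles if t and t[0].isalpha())
    ordered = sorted(counts.values(), reverse=True)  # rank 1 = most common letter
    n_max, r_max = ordered[0], len(ordered)
    return [(math.log(r / r_max), math.log(n / n_max))
            for r, n in enumerate(ordered, start=1)]

# Tiny illustrative title list (not the actual Yeats corpus).
pts = letter_rank_points(["A Vision", "Adam's Curse", "Byzantium", "The Tower"])
print(pts)
```

The resulting points are what would then be compared against the Bragg-Williams magnetisation curve BW(c=0.01).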
[363] vixra:2305.0007 [pdf]
First Principles of Mathematics
A dependent type theory is proposed as the foundation of mathematics. The formalism preserves the structure of mathematical thought, making it natural to use. The logical calculus of the type theory is proved to be syntactically complete. Therefore it does not suffer from the limitations imposed by Gödel’s incompleteness theorems. In particular, the concept of mathematical truth can be defined in terms of provability.
[364] vixra:2305.0006 [pdf]
Bio-Inspired Simple Neural Network for Low-Light Image Restoration: A Minimalist Approach
In this study, we explore the potential of using a straightforward neural network inspired by the retina model to efficiently restore low-light images. The retina model imitates the neurophysiological principles and dynamics of various optical neurons. Our proposed neural network model reduces the computational overhead compared to traditional signal-processing models while achieving results similar to complex deep learning models from a subjective perceptual perspective. By directly simulating retinal neuron functionalities with neural networks, we not only avoid manual parameter optimization but also lay the groundwork for constructing artificial versions of specific neurobiological organizations.
[365] vixra:2305.0003 [pdf]
A Probabilistic Proof of the Multinomial Theorem Following the Number $a_n^p$
In this note, we give an alternate proof of the multinomial theorem following the number $A_n^p$ using a probabilistic approach. Although the multinomial theorem following the number $A_n^p$ is basically a combinatorial result, our proof may be simpler for a student familiar with only basic probability concepts.
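For reference, the identity being reproved is the standard multinomial theorem (stated here in conventional notation, not in the note's $A_n^p$ formulation):

```latex
(x_1 + x_2 + \dots + x_k)^n
  = \sum_{n_1 + n_2 + \dots + n_k = n}
    \binom{n}{n_1, n_2, \dots, n_k} \, x_1^{n_1} x_2^{n_2} \cdots x_k^{n_k},
\qquad
\binom{n}{n_1, \dots, n_k} = \frac{n!}{n_1!\, n_2! \cdots n_k!}.
```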
[366] vixra:2305.0001 [pdf]
Algebraic-Geometry Tools for Particle Physics
The TOI Platonic groups already improved the SM, allowing to explain what fermion generations, quark flavors and the CKM and PMNS mixing matrices are and how to compute them theoretically. After the unification of the fundamental forces, including Gravity, the next step is to remodel Electroweak Theory as a theory of transitions of states. The tools needed to extend the Membrane Theory from Platonic and Archimedean Klein geometries, to account for the nuclear shells, baryon spectrum etc., are: modular curves, Belyi pairs, dessins d'enfants and Belyi morphisms. This approach allows to interpret mass as monodromy (curvature of the quark fields connection of EM type), introducing many other finite groups: Galois groups and associated tools (Riemann surfaces, divisors, periods, tessellations, Fuchsian groups etc.). In this way an intrinsic String Theory touches base with the Standard Model.
[367] vixra:2304.0228 [pdf]
Foundations of Differential Geometric Algebra
Tools built on these foundations enable computations based on multi-linear algebra and spin groups using the geometric algebra known as Grassmann algebra or Clifford algebra. This foundation is built on a direct-sum parametric type system for tangent bundles, vector spaces, and also projective and differential geometry. Geometric algebra is a mathematical foundation for differential geometry, which can be used to simplify the Maxwell equations to a single wave equation due to the geometric product. Introduction of geometric algebra to engineering science disciplines will be easier with programmable foundations. In order to devise an expressive and performance-oriented language for efficient discrete differential geometric algebra with the Grassmann elements, an efficient computer algebra representation was programmed. With this unifying mathematical foundation, it is possible to improve the efficiency of multi-disciplinary research using geometric tensor calculus by relying on universal mathematical principles. Tools built on universal differential geometric algebra provide a natural geometric language for the Helmholtz decomposition and Hodge-de Rham co/homology.
[368] vixra:2304.0222 [pdf]
The Asymptotic Squeeze Principle and the Binary Goldbach Conjecture
In this paper, we prove the special squeeze principle for all sufficiently large $n \in 2\mathbb{N}$. This provides an alternative proof for the asymptotic version of the binary Goldbach conjecture in \cite{agama2022asymptotic}.
[369] vixra:2304.0221 [pdf]
Multinomial Development
In this paper we obtain the multinomial theorem following the numbers $A_n^p$ and $C_n^p$ (a generalization of Vandermonde's identity). Using this notion we obtain generalizations of products of numbers in arithmetic progression, arithmetic regression, and their sum. From the generalisation we propose (define) the arithmetic sequences product.
[370] vixra:2304.0220 [pdf]
New Expressions of Various Spin Particle Equations and Their Quantization
This is a version in Chinese. Compared to the previous English version, I have added more new chapters and proved some old conjectures. Overall, I have enriched, perfected, and further developed relativity, particle physics and quantum field theory in this book. Generally, a rigorous, analytical, elegant description method is adopted. I try my best to impart a mathematical and physical aesthetic feeling to the entire article. The first seventeen chapters of this book are the basic parts. Several very useful mathematical tools have been proposed. Specially, I have independently developed and created the constant invariant tensors analysis method for physical research. And I have restated classical physics in my own way. Most of the content belongs to the fields of classical field theory and quantum mechanics. The chapters from Chapter 18 onward are the advanced parts. Most of the content belongs to the field of quantum field theory. In particular, a new quantization program is given. According to this program, the quantization of arbitrary spin linear particles is completed in arbitrary space-time. These have greatly enriched and expanded the content of quantum field theory.
[371] vixra:2304.0218 [pdf]
New Prime Number Theory
This paper introduces a novel approach to estimating the distribution of prime numbers by leveraging insights from partition theory, prime number gaps, and the angles of triangles. I apply this methodology to infinite sums and nth terms, and propose several ways of defining the nth term of a prime number. By using the Ramanujan infinite series of natural numbers, I am able to derive a value for an infinite series of prime numbers. Overall, this work represents a significant contribution to the field of prime number theory and sheds new light on the relationship between prime numbers and other mathematical concepts.
[372] vixra:2304.0209 [pdf]
Complex Circles of Partition and the Squeeze Principle
In this paper we continue the development of the circles of partition by introducing the notion of complex circles of partition. This is an enhancement of such structures from subsets of the natural numbers as base sets to the complex area as base and bearing set. The squeeze principle as a basic tool for studying the possibilities of partitioning of numbers is demonstrated.
[373] vixra:2304.0208 [pdf]
Messung Der Gravitationswellen Des Krebspulsars (2) in Den Ligo and Virgo O3b Daten<br> Measurement of the Gravitational Waves of the Crab Pulsar (2) in the Ligo and Virgo O3b Data
Nach Kompensation von Phasenmodulation und Frequenzdrift kann das GW des Pulsars in den Aufzeichnungen aller drei Interferometer nachgewiesen werden. Die Signaturen stimmen mit den bekannten Werten überein, die mit elektromagnetischen Wellen gemessen wurden. (mit Ergänzungen)<p>After compensation for phase modulation and frequency drift, the pulsar's GW can be detected in the recordings of all three interferometers. The signatures agree with the known values measured with electromagnetic waves. (with additions)
[374] vixra:2304.0207 [pdf]
Rudyard Kipling's Verse and the Graphical Law
We study the Rudyard Kipling's Verse, Definitive edition. We draw the natural logarithm of the number of the titles of the verses, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Rudyard Kipling's Verse can be characterised by BW(c=0.01), the magnetisation curve of the Ising Model in the Bragg-Williams approximation in the presence of an external magnetic field, H. $c=\frac{H}{\gamma \epsilon}=0.01$ with $\epsilon$ being the strength of coupling between two neighbouring spins in the Ising Model and $\gamma$ representing the number of nearest neighbours of a spin, which is very large.
[375] vixra:2304.0197 [pdf]
On the Geometry of Axes of Complex Circles of Partition Part 1
In this paper we continue the development of the circles of partition by introducing a certain geometry of the axes of complex circles of partition. We use this geometry to verify the condition in the squeeze principle in special cases with regards to the orientation of the axes of complex circles of partition.
[376] vixra:2304.0192 [pdf]
Irrationality of Pi Using Just Derivatives
The quest for an irrationality of pi proof that can be incorporated into an analysis (or a calculus) course is still extant. Ideally a proof would be well motivated and use in an interesting way the topics of such a course. In particular $e^{\pi i}$ should be used and the more easily algebraic of derivatives and integrals -- i.e. derivatives. A further worthy goal is to use techniques that anticipate those needed for other irrationality and, maybe even, transcendence proofs. We claim to have found a candidate proof.
[377] vixra:2304.0191 [pdf]
On Clifford-valued Actions, Generalized Dirac Equation and Quantization of Branes
We explore the construction of a generalized Dirac equation via the introduction of the notion of Clifford-valued actions, and which was inspired by the work of [1],[2] on the De Donder-Weyl theory formulation of field theory. Crucial in this construction is the evaluation of the $exponentials$ of $multivectors$ associated with Clifford (hypercomplex) analysis. Exact $matrix$ solutions (instead of spinors) of the generalized Dirac equation in $ D = 2,3$ spacetime dimensions were found. This formalism can be extended to curved spacetime backgrounds like it happens with the Schroedinger-Dirac equation. We conclude by proposing a wave-functional equation governing the quantum dynamics of branes living in $ C$-spaces (Clifford spaces), and which is based on the De Donder-Weyl Hamiltonian formulation of field theory.
[378] vixra:2304.0189 [pdf]
Mr. Fabri's Mistakes
A couple of mistakes made by Mr. Elio Fabri in the Italian Google group "Scienza Fisica" are corrected. The first concerns the calculation of an integral which, according to Mr. Fabri, had to be null, showing that, instead, it was anything but null. The second concerns an error repeated several times by Mr. Fabri according to which the Riemann tensor calculated in the Langevin spacetime would still be null. Unfortunately, a sad truth emerges: what is really zero is Mr. Fabri's understanding of what he is saying.
[379] vixra:2304.0187 [pdf]
On the Indication from Pioneer 10/11 Data of an Anomalous Acceleration
The physics of Hubble's law, motivated by the anomalous acceleration of Pioneer 10/11, suggests that the Hubble constant corresponds to a gravitational field of the universe. The deviation from linearity, which was shown by observations of high redshift Type Ia supernovae, is shown to be an apparent result of the velocity of light being affected by the gravitational field of the universe. The Hubble constant inferred from Pioneer 10/11 data is ~ 69 km/s/Mpc.
[380] vixra:2304.0181 [pdf]
The Randomness in the Prime Numbers
The prime numbers have a very irregular pattern. The problem of finding a pattern in the prime numbers is a long-standing open problem in mathematics. In this paper, we try to solve the problem axiomatically, and we propose some natural properties of prime numbers.
[381] vixra:2304.0170 [pdf]
Gerhart Enders as a Scientist
This is a largely revised and extended translation of my article 'Gerhart Enders als Wissenschaftler. Zum 90. Geburtstag am 17. Oktober 2014, Brandenburgische Archive 32 (2015) 77-79, https://opus4.kobv.de/opus4-slbp/files/8026/Brandenburgische+Archive+32.pdf
[382] vixra:2304.0169 [pdf]
Integrability of Continuous Functions in 2 Dimensions
In this paper it is shown that the Banach space of continuous, R^2- or C-valued functions on a simply connected either 2-dimensional real or 1-dimensional complex compact region can be decomposed into the topological direct sum of two subspaces, a subspace of integrable (and conformal) functions, and another one of unintegrable (and anti-conformal) functions. It is shown that complex integrability is equivalent to complex analyticity. This can be extended to real functions. The existence of a conjugation on that Banach space will be proven, which maps unintegrable functions onto integrable functions. The boundary of a 2-dimensional simply connected compact region is defined by a Jordan curve, which is known to topologically divide the domain into two disconnected regions. The choice of which of the two regions is to be the inside defines the orientation. The conjugation above will be seen to be the inversion of orientation. Analyticity, integrability, and orientation on R^2 (or C) therefore are intimately related.
[383] vixra:2304.0168 [pdf]
On the Quantization of Electric and Magnetic Charges and Fluxes. Axiomatic Foundation of Non-Integrable Phases. Two Natural Speeds Different from $c$
We present an axiomatic foundation of non-integrable phases of Schrödinger wave functions and use it for interpreting Dirac's 1931 pioneering article in terms of the electromagnetic 4-potential. The quantization of the electric charge in terms of $e$ implies the quantization of the dielectric flux through closed surfaces $\bar{\Psi} := \oiint \vec{D} \cdot \mathrm{d}\vec{S}$ in terms of the `Lagrangean' dielectric flux quantum $\Psi_D = e$. The quantization of the analogous magnetic monopole charge in terms of $g$ implies the quantization of the magnetic flux through closed surfaces $\bar{\Phi} := \oiint \vec{B} \cdot \mathrm{d}\vec{S}$ in terms of the `Diracian' magnetic induction flux quantum $\Phi_B = g = h/e$, and vice versa. Here, the question is raised whether the quantization of the magnetic charge (and hence field) in a given volume depends on the total electric charge in that volume. Furthermore, we have $\Phi_B/\Psi_D = g/e = h/e^2 = R_\text{K}$, the von Klitzing constant, the basic resistance of the quantum Hall effect. $R_\text{K}$ and the vacuum permittivity $\varepsilon_0$ and permeability $\mu_0$, respectively, combine to two natural speed constants different from that of light in vacuum, $c$. This article is an extension of a talk presented (in German) before the Physical Society at Berlin, Feb. 15, 2023, https://www.dpg-physik.de/veranstaltungen/2023/mhb-agsen-2023-02-15.
[384] vixra:2304.0166 [pdf]
Proof of the Triple and Twin Prime Conjectures Using the Sundaram Sieve Method
Yitang Zhang proved in 2013 that there are infinitely many pairs of prime numbers differing by at most 70 million; it has since been proved that there are infinitely many pairs of prime numbers differing by at most 246. In this paper, we use the sieve method invented by Sundaram in 1934 to find the solutions for triple primes and twin primes, and find the general solution formula of each subset, i.e., an + b, such as 3n + 1, 5n + 2, 7n + 3, 9n + 4, 11n + 5, 13n + 6, 15n + 7, 17n + 8, · · · in 2mn + n + m, modulo x respectively (x ≤ 3 takes prime). This general solution formula is used to prove the triple prime conjecture and the twin prime conjecture.
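The sieve of Sundaram referred to removes numbers of the form 2mn + m + n (equivalently i + j + 2ij) from 1..n and maps each survivor k to the odd prime 2k + 1; a minimal sketch of the sieve itself, not the paper's own code:

```python
def sundaram(n):
    """Sieve of Sundaram: all primes up to 2n + 1."""
    marked = set()
    for i in range(1, n + 1):
        # Mark every i + j + 2ij <= n; survivors k yield the odd primes 2k + 1.
        for j in range(i, (n - i) // (2 * i + 1) + 1):
            marked.add(i + j + 2 * i * j)
    return [2] + [2 * k + 1 for k in range(1, n + 1) if k not in marked]

print(sundaram(10))  # [2, 3, 5, 7, 11, 13, 17, 19]
```

The subsets 3n + 1, 5n + 2, 7n + 3, ... quoted above are exactly the arithmetic progressions swept out by fixing m = 1, 2, 3, ... in 2mn + m + n.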
[385] vixra:2304.0153 [pdf]
Serious Problems in Standard Complex Analysis Texts From The Viewpoint of Division by Zero Calculus
In this note, we shall refer to some serious problems in the standard complex analysis textbooks, which may have been considered common facts for many years, from the viewpoint of the division by zero calculus. We shall state clearly our opinions with reference to the new book: V. Eiderman, An Introduction to Complex Analysis and the Laplace Transform (2022).
[386] vixra:2304.0149 [pdf]
Abraham-Lorentz Force and the Dirac-Sea
We describe two kinds of equations of motion in classical electrodynamics: the dynamical laws for the charges, and the equations for the electromagnetic (EM) field.
[387] vixra:2304.0145 [pdf]
Identification of Universal Features in the Conductivity of Classes of Two-Dimensional QFTs Using the Ads/cft Correspondence
We study the electrical conductivity of strongly disordered, strongly coupled quantum field theories, holographically dual to non-perturbatively disordered uncharged black holes. The computation reduces to solving a diffusive hydrostatic equation for an emergent horizon fluid. We demonstrate that a large class of theories in two spatial dimensions have a universal conductivity independent of disorder strength, and rigorously rule out disorder-driven conductor-insulator transitions in many theories. We present a (fine-tuned) axion-dilaton bulk theory which realizes the conductor-insulator transition, interpreted as a classical percolation transition in the horizon fluid. We address aspects of strongly disordered holography that can and cannot be addressed via mean-field modeling, such as massive gravity.
[388] vixra:2304.0144 [pdf]
The Navier-Stokes Equations from a Minimal Effective Field Theory
We use an effective Schwinger-Keldysh field theory of long-range massless modes to derive the Navier-Stokes equations as an energy-momentum balance equation. The fluid will be invariant under the linear subgroup of the volume-preserving diffeomorphisms, which are the non-linear, time-independent spatial translations.
[389] vixra:2304.0137 [pdf]
A Domain Wall Model
We consider a general axion-dilaton model with vanishingly small temperature. Under certain conditions, it is very likely that we can have a stable domain wall structure on the horizon.
[390] vixra:2304.0136 [pdf]
Strongly Disordered Metals and Disorder-Driven Metal-Insulator Transitions in Holography
Recently, much progress has been made on understanding transport properties of strongly coupled quantum field theories by employing gauge-gravity duality. However, a theory of transport at finite density and temperature is still lacking for strongly disordered systems. We reduce the computation of direct current electrical conductivity, for a wide variety of strongly disordered holographic systems with no background charge density, to the solution of a linear differential equation dependent only on data on the black hole horizon of the bulk theory. Some strongly coupled theories in two spatial dimensions have a universal conductivity, independent of disorder strength. We realize disorder-driven holographic metal-insulator transitions through the percolation of poorly-conducting regions across the black hole horizon. We compare results from our exact realizations of holographic disorder with simpler approaches to the problem, such as massive gravity.
[391] vixra:2304.0125 [pdf]
Shear Excitations of the Electron Star Diffusion Out of Thermal Equilibrium
The electron star is a holographic bulk setup which consists of a non-extremal AdS-Reissner-Nordström black brane and an ideal gas of electrons. The gravitational system is dual to a field theory with interacting fractionalised and mesonic degrees of freedom, and is thermodynamically favoured over a pure black brane scenario. The electron gas in this Einstein-Maxwell-fluid theory is treated as being at zero temperature. The system is thus gravitationally stable but is not in thermodynamic equilibrium. After analysing thermodynamic properties of the background, we compute the quasi-normal mode spectrum and correlation functions of gauge-invariant quantities on the boundary to study momentum and charge transport in the shear sector. We perform a detailed analysis of the hydrodynamic diffusion mode dispersion relation and compare our numerical results with thermodynamic predictions. We show that they only agree at very low temperature and near the transition to a purely black brane background. We thus conclude that, in accordance with expectations, hydrodynamics and thermodynamics cannot successfully describe a system out of thermal equilibrium. This provides further evidence for the importance of holographic studies of thermalisation, hydrodynamisation and out-of-equilibrium phenomena.
[392] vixra:2304.0124 [pdf]
The Missed Physics
Crucial developments in physics, checkable by everyone, have been missed, excluding the Universe expansion and the initial Big Bang model, and re-establishing the cosmological steady-state model. The single electron cosmology gives a close estimation of the Hubble length, meaning the matter is in fact a matter-antimatter oscillation in a Permanent Bang cosmology, where dark matter would be an out of phase oscillation. The nuclear fusion cosmic model gives the background temperature 2.73 Kelvin, validating Hoyle's prediction of permanent neutron creation, an ultimate limit of physics. The Diophantine treatment of the Kepler laws induces the Space-Time quantification in a Total Quantum Physics, pushing back the Planck wall by a factor 10^{61}, possibly resolving the vacuum energy dilemma. The three-body gravitational hydrogen model explains the Tachyonic Three Minutes Formula giving half the Hubble radius, thus its critical mass, showing the Universe is a Particle in the Cosmos, whose radius would be deduced from holographic Space-Time Quantification. The Kotov Doppler-free oscillation rehabilitates the tachyonic physics of the bosonic string theory in the Octonion Topological Axis prolonging the Quaternion Periodic Table, implying the string-spin identification, and gives the gravitational constant G, compatible with the BIPM measurements but 2x10^{-4} larger than the official value.
[393] vixra:2304.0120 [pdf]
On the Equation X + (X/X) = X
In this note, we shall refer to the equation X + (X/X) = X from our division by zero and division by zero calculus ideas against the Barukčić's idea.
[394] vixra:2304.0118 [pdf]
Training GPT4 on Quantum Impedance Networks
A recent paper submitted to the 79th annual Gravity Research Foundation essay competition entitled "chatGPT explains Quantum Gravity" [1] was written in collaboration with GPT3.5. With minimal prompting the bot has generated plausible coherent explanations for what it calls QINs (quantum impedance networks) of the unstable particle spectrum, massless neutrino oscillation, muon collider topological lifetime enhancement, and quantum gravity at the Compton, Planck, and cosmological scales [2]. One goal of that paper was to minimize pretraining, to find out what the bot already 'understood' before introducing new ideas. A similar training process on GPT4 has completed the first three dialogs; the first two are presented here. GPT4 is much deeper and appears much more coherent, and poses its own unique challenges to learning and teaching. How to train a generative transformer on QINs? It seems the essential next step is to introduce visualGPT, to train on the images [1, 3]. Humility is to be curious and willing to learn. chatGPT appears both humble and very powerful, in some sense an ideal collaborator; when facts matter, a model to mirror as best one can.
[395] vixra:2304.0117 [pdf]
Pythagorean Nature
Crucial Pythagorean scientific developments, checkable by everyone, have been missed, refutating the Universe expansion and the initial Big Bang model, imposing the cosmological steady-state model. The single electron cosmology gives a close estimation of the Hubble length, meaning the matter is in fact a matter-antimatter oscillation in a Permanent Bang cosmology, where dark matter is an out of phase oscillation. The nuclear fusion cosmic model gives the background temperature 2.73 Kelvin, validating the Hoyle's prediction of permanent neutron creation, an ultimate limit of physics. The Diophantine treatment of the Kepler laws induces the Space-Time quantification in a Total Quantum Physics, pushing back the Planck wall by a factor 10^{61}, resolving so the vacuum energy dilemma. The three-body gravitational hydrogen model explains the Tachyonic Three Minutes Formula giving half the Hubble radius, thus its critical mass, showing the Universe is a Particle in the Cosmos, whose radius is deduced from holographic Space-Time Quantification. The Kotov Doppler-free oscillation rehabilitates the tachyonic physics of the bosonic string theory in the Octonion Topological Axis prolonging the Quaternion Periodic Table, implying the string-spin identification and gives G, compatible with the BIPM measurements but 2x10^{-4} larger than the official value. This confirms the Higgs mass is tied to the third perfect couple 495-496. The so-called "free parameters", as well as the Archimedes pi, are confirmed to be computation basis, in liaison with the Holography and Holic Principles, opening the way to a revolution in mathematics where the Happy Family of the sporadic groups and the Egyptian Nombrol 3570 are central. The parameter values are deduced in the ppb domain by Optimal Polynomial Relations involving the Large Lucas Prime Number, the forth (last) term of the Combinatorial Hierarchy. 
The photon-neutrino background manages to divide this prime number by holographic terms respecting the electron-hydrogen symmetry. The data analysis rehabilitates Wyler's and Eddington's theories, which correctly predicted the Proton-Tau supersymmetry. The tachyonic synthesis defines the Neuron, the characteristic time of the neuro-musical Human, corresponding to 418/8 Hz, three octaves below the A-flat for the 442.9 tuning. The Total Quantum Physics introduces the Human Measure Mass x Height, and connects with the Solar system, the CMB and the DNA through musical scales, introducing the Cosmobiology where the CMB is identified with the genetic code of the Universe (Truncated by viXra Admin).
[396] vixra:2304.0114 [pdf]
Delayed Choice Quantum Erasure: The Path Information and Complementarity
Photon wave functions collapse into particles only after one discerns their path, as was observed in certain experiments. However, scientists like Wheeler and Scully contemplated that this causality and the uncertainty principle could be violated through quantum erasure. Complementarity and the availability of path information are sufficient to explain quantum mechanics. Scientists widely debate this claim; in the process, they often try to reinterpret the tenets of quantum mechanics. Qureshi employs microscopic-macroscopic entanglement to save causality. Qureshi also posits that the experimenter's active choice of Hilbert space basis determines wave or particle nature. Qureshi insists that when the experimenter measures the photon in the x-basis, it entangles with his novel qubit which-way detector and the two detectors that constitute the screen, and shows interference. If the basis choice is z, then the interference is destroyed. In this paper, we examine the shortcomings of Qureshi's analysis. The distinction between evolution and measurement is not acknowledged. The entanglement of a photon with the experimental apparatus smears the quantum-classical distinction. Qureshi forgets that the screen in quantum experiments can be a single entity. The quantum qubit which-way detector he contemplates will likely function classically. In assigning a measurement basis to the photon, Qureshi forgets that the phase change due to path difference in the experimental setup does not influence quantum measurement; thus, it does not contribute to a distinct quantum state. Scientists have studied wave-particle duality using entangled photons: entanglement alone cannot destroy the interference. The photon can choose its wave-particle option randomly; the experimenter's role is thus inactive.
Wave-nature formulation is not derived from a Hilbert space basis, and the mathematical formulation in quantum mechanics is meant to predict probabilities in the particle nature; it does not say anything about the photon's physical realization. We observe that complementarity ensures that causality violations are possible.
[397] vixra:2304.0111 [pdf]
Solution to Infinity Problem Based on Scattering Matrix Using Time-Evolution Operators Without Needing Renormalization
The current situation of research challenging the demanding tasks of renormalization implies that the present framework of quantum scattering theory does not offer good prospects, and therefore it is necessary to construct a new theory able to solve the infinity problem fundamentally, in a general way. Our purpose is to construct an alternative mathematical formulation capable of ensuring the convergence of the scattering matrix without relying on renormalization theory, thus preventing overlapping divergences of the scattering matrix in principle. We demonstrate that the infinity problem is due mainly to the mathematical representation of the scattering operator and present, as a solution to the problem, alternative mathematical representations of the scattering matrix in terms of the local and global time-evolution operators, which replace the Dyson series and do not need the Feynman diagram. Importantly, the obtained results clarify that, substantially, the infinity problem of the scattering matrix does not exist. Ultimately, we draw the conclusion that it is possible to conceive of an alternative to the conventional scattering theory, and our formalism, as a new proposal, can lay the foundation for formulating a consistent theory without infinity and renormalization.
[398] vixra:2304.0110 [pdf]
Beyond the Standard Model: Neutrino Oscillations and the Search for New Physics
The interpretation that the positive and negative solutions of the Dirac equation are particles and antiparticles is a common and widespread one. In this paper, we aim to extend the 0-Sphere Electron Model to explore the internal structure of neutrino oscillations. From another point of view, the positive and negative solutions could correspond to a process in which two particles emit and absorb energy. This paper proposes a new model for the internal structure of neutrinos, in which a Lissajous curve arises from two energy oscillators obtained from the positive and negative solutions of the Dirac equation. This model, the 0-Sphere Neutrino Model, assumes the existence of oscillators with thermal energy that is converted into kinetic energy, and includes two oscillators with different vibrational frequencies that are described by Lissajous curves.
[399] vixra:2304.0104 [pdf]
Gamma-Ray Bursts and Fermi Bubbles
According to a recent calculation, 10^58 erg of radiant energy was released by Sgr A* when it formed the Fermi bubbles. Here, it is argued that this explosion constituted a long gamma-ray burst.
[400] vixra:2304.0102 [pdf]
The Maxwell Equations With Radiative Corrections
The one-loop radiative correction to the photon propagator can be graphically represented by the Feynman diagram of the second order. The physical meaning of this diagram is that a photon can exist in the intermediate state with an electron and a positron as virtual particles. The photon propagation function based on such a process with an electron-positron pair is derived. The modified Lagrangian of the electromagnetic field is derived, supposing the modified propagator of the photon. The Schwinger source method of quantum field theory is applied. Then, the corresponding Maxwell equations are derived from the new Lagrangian.
[401] vixra:2304.0093 [pdf]
Rydberg Atoms as Tests of Coulomb's Law Over Colossal Distances
Guided by the problem of flat galaxy rotation curves in cosmology, it is argued that deviations from the Coulomb potential might be observed in the microscopic analogues of galaxies, namely Rydberg atoms. It is found that such deviations might occur in Rydberg atoms with principal quantum numbers greater than 780.
[402] vixra:2304.0073 [pdf]
The Time Travel Interpretation of the Bible
We describe the Biblical work of ages as a time travel program for saving humanity from extinction. God's existence is proven as a consequence of the existence of time travel, which is assumed. We present the case that Abraham's grandson Jacob, also called Israel, is Satan. We make the case that the Israelites are described as God's chosen people in the Bible despite their identity as the children of Satan because God's Messiah is descended from Abraham through Satan. They are chosen as the ancestors of the Messiah rather than as Satan's children. We propose an interpretation in which God commanded Abraham to kill his son Isaac to prevent Isaac from becoming the father of Satan. We suggest that God stayed Abraham's hand above Isaac because preventing the existence of Satan would also prevent the existence of Satan's descendant, the Messiah. The history of the Israelites is summarized through Jesus and Paul. This book is written so that the number of believers in the world will increase.
[403] vixra:2304.0060 [pdf]
Pesticide Residues in Children’s Diets: A Comprehensive Review of Health Risks, Agroecology, and Policy Implications
The presence of pesticide residues in children's diets is a significant public health concern. This comprehensive review examines the prevalence of pesticide residues in children's diets, the potential health risks associated with exposure to these chemicals, and the role of agroecology in promoting healthier food options. I've conducted a literature review, comparative analysis, policy analysis, and case studies to gain a deeper understanding of this issue and identify potential policy changes and interventions needed to address it. My findings indicate that promoting agroecology and organic food production can reduce the reliance on chemical pesticides and provide healthier food options for children. Furthermore, implementing and expanding farm-to-school programs and prioritizing organic food in school feeding programs can help reduce children's exposure to pesticide residues. By raising public awareness and investing in research on the health impacts of pesticide residues, we can work towards creating a safer and healthier food system for future generations.
[404] vixra:2304.0051 [pdf]
"The Beauty and The Beast"
The Standard Model of Elementary Particle Physics is an amazing and beautiful achievement of theory, experiment and technology, explaining the foundations of Physics in terms of three out of the four interactions considered fundamental. The Classification of Finite Simple Groups is, too, an amazing achievement in Mathematics. Recent advancements in the understanding of the SM lead to an unexpected "encounter" between the two: lepton masses are related to the Monster and VOAs, via the j-invariant of elliptic curves ... under The Moonshine. But there is a "contender": Platonic and Archimedean solids (models for baryons, like the proton and neutron) can be represented as Dessins d'Enfant, introduced by Grothendieck in the 1960s, using Belyi maps! Who will win the heart of the Beauty? The main concepts will be defined, and pictures will help bring the subject to the understanding of a general audience.
[405] vixra:2304.0049 [pdf]
On Goldbach Conjecture and Twin Prime Conjecture Part One: History, Development and Doubt
In this paper, we introduce the Goldbach Conjecture and the Twin Prime Conjecture: their history, development, and public dissemination in China. We also raise doubts about the effectiveness of Analytic Number Theory.
[406] vixra:2304.0048 [pdf]
A New Approach to the Foundation of Quantum Theory and Mathematics
...The concept of infinitesimals was proposed by Newton and Leibniz. In those days, people knew nothing about elementary particles and atoms and thought that, in principle, any substance can be divided into any number of parts. But now it is clear that as soon as we reach the level of elementary particles, further division is impossible. After all, even the very name ''elementary particle'' says that such a particle has no parts, that is, it cannot be divided into 2, 3, etc. So, there are no infinitesimals in nature, and the usual division is not universal: it makes sense only up to some limit. Would this not seem obvious? And then it is clear that fundamental quantum physics must be built without infinitesimals. It would seem that everyone understands that the construction of such a physics is far from an easy task, and attempts at such a construction should be encouraged. However, my stories, described below, show that, as a rule, the establishment not only does not encourage such attempts, but does everything to ensure that results in this direction are not published. In addition to the infinitesimal problem, I describe other problems for which I proposed new approaches, but since they are not in the spirit of what the establishment does, I had big problems with publication. Of all these problems, there is one that probably overshadows all the others: the dark energy problem. In all my works on this problem I explain that there are no problems with explaining the cosmological acceleration, and therefore dark energy and quintessence are nonsense. It would seem that if the establishment is honest, then they should at least read [1] and directly say whether I do not understand something, or they do not. But they pretend that they do not notice my publications on this topic.
[407] vixra:2304.0047 [pdf]
Samsad Bengali-English Dictionary and The Graphical Law
We study the entries of the Samsad Bengali-English Dictionary, compiled by Sailendra Biswas, first edition. We draw the natural logarithm of the number of entries starting with a letter, normalised, vs the natural logarithm of the rank of the letter, normalised. We conclude that the dictionary can be characterised by BW(c=0.01), the magnetisation curve of the Ising Model in the Bragg-Williams approximation in the presence of an external magnetic field H. Here $c=\frac{H}{\gamma\epsilon}=0.01$, with $\epsilon$ being the strength of coupling between two neighbouring spins in the Ising Model and $\gamma$ representing the number of nearest neighbours of a spin, which is very large.
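The rank-vs-count procedure described in this abstract can be sketched as follows (a minimal illustration; the abstract does not fully specify the normalisation, so division by the maximum rank and the maximum count is assumed here, and the word list and function name are hypothetical):

```python
import math
from collections import Counter

def graphical_law_points(entries):
    """Normalised (log-rank, log-count) points: count entries per initial
    letter, rank letters by count (rank 1 = most common), then return
    ln(rank/rank_max) paired with ln(count/count_max)."""
    counts = Counter(word[0].lower() for word in entries if word)
    ranked = sorted(counts.values(), reverse=True)
    k_max, n_max = len(ranked), ranked[0]
    return [(math.log(k / k_max), math.log(n / n_max))
            for k, n in enumerate(ranked, start=1)]

# Toy demonstration with hypothetical headwords: counts are c=3, a=2, b=1,
# so the top-ranked letter maps to a log-count of exactly 0.
points = graphical_law_points(["apple", "ant", "bear", "cat", "cow", "crow"])
```

Fitting such points against the Bragg-Williams magnetisation curve, as the paper does, would then be a separate curve-fitting step.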
[408] vixra:2304.0046 [pdf]
A Modified Born-Infeld Model of Electrons as Rotating Waves
This work presents a modified Born-Infeld field theory that might include electron-like solutions in the form of rotating waves of finite self-energy. Preliminary numerical experiments suggest that the proposed model might show quantum mechanical features in a classical field theory.
[409] vixra:2304.0045 [pdf]
On the Nature of the Poloidal Component of the Sun's Magnetosphere
In this work we have found a sequence of physical processes which allowed us to reveal the cause-and-effect relationship between the observed poloidal alternating magnetic field of the Sun and a factor of a non-electromagnetic nature which is external to the Sun. The specific character of the Sun's orbital motion about the Solar system barycenter promotes the emergence, inside the Sun, of the conditions necessary for generation of the observed poloidal component of the Sun's magnetosphere without participation of the Sun's own rotation. Using a pendulum as a model, we have shown how the external impact upon the Sun initiates, within the Sun itself, forced oscillations, and it is these oscillations that induce the poloidal alternating magnetic field; in its turn, this magnetic field defines the cyclic character of the Wolf numbers. We have also revealed a process which allowed us to introduce a natural event-time scale that may be quite useful in planning observations and also in systematizing and synchronizing fragments of the available time series of observations manifesting the Sun's poloidal magnetic field, which is about 20 years in period. Keywords: Sun's poloidal magnetic field, inversion, pole flip, barycenter, Jupiter, Saturn, Wolf numbers
[410] vixra:2304.0015 [pdf]
Harnessing Fusion Energy: A Novel Approach to Plasma Confinement and Stabilization
Fusion energy has long been considered the holy grail of clean and sustainable power generation. However, the practical realization of fusion energy has been hindered by the challenges associated with plasma confinement and stabilization. In this article, I propose a novel approach to address these issues, combining advanced magnetic confinement techniques with innovative plasma stabilization methods. Our approach aims to significantly improve the efficiency and feasibility of fusion energy, bringing us closer to a sustainable energy future.
[411] vixra:2304.0014 [pdf]
Tailoring Geometries and Magnetic Configurations in Magnetochiral Nanotubes for Enhanced Spin-Wave Properties: Towards Energy-Efficient, High-Density 3D
The development of energy-efficient, high-density three-dimensional (3D) magnonic devices has garnered significant interest due to their potential for revolutionizing information processing and storage technologies. Building upon recent findings on spin-wave modes in magnetochiral nanotubes with axial and circumferential magnetization, this study investigates the effects of tailored geometries and magnetic configurations on the spin-wave properties of these nanostructures. By employing advanced simulation techniques, experimental methods, and theoretical analysis, we explore the interplay between geometry, magnetization, and spin-wave dynamics in magnetochiral nanotubes. Our results reveal that specific combinations of geometrical parameters and magnetic configurations lead to enhanced spin-wave properties, paving the way for the design of novel 3D magnonic devices with improved performance and energy efficiency. Furthermore, we demonstrate the potential of these optimized magnetochiral nanotubes for various applications, including logic nanoelements and vertical through-chip vias in 3D magnonic device architectures. This study not only advances our understanding of spin-wave dynamics in magnetochiral nanotubes but also provides a foundation for the development of next-generation magnonic devices.
[412] vixra:2304.0013 [pdf]
Fusion Energy: Current Progress and Future Prospects
Fusion energy, the process of combining light atomic nuclei to form heavier nuclei, offers the potential for a clean, safe, and virtually limitless energy source. As the world faces increasing energy demands, climate change, and diminishing fossil fuel reserves, the pursuit of fusion energy has become more critical than ever. This article provides an overview of the current state of fusion energy research, discussing the main approaches to achieving fusion, such as magnetic confinement fusion (tokamaks and stellarators) and inertial confinement fusion (laser-driven and heavy-ion-driven). I highlight the progress made in major experimental facilities, including ITER, the National Ignition Facility, Wendelstein 7-X, and the Joint European Torus, and outline the key challenges that must be overcome before fusion energy can become a viable and widely used energy source. Furthermore, I explore future prospects and potential developments in fusion energy research, emphasizing the importance of continued investment, international collaboration, and public-private partnerships in advancing this transformative energy source. The pursuit of fusion energy is crucial for securing a sustainable energy future and combating the adverse effects of climate change, making it a vital area of research for the benefit of humanity.
[413] vixra:2304.0012 [pdf]
Exploring the Turing Complete Universe: Implications for Universe Generators and Optimal Policy Autonomous Games in Addressing the Fundamental Question of Existence
This paper delves into the concept of a Turing complete universe, exploring its implications for the best-policy zero-player game and addressing the fundamental question of why anything exists or how something has always existed. I begin by examining the potential of a Turing complete universe to construct a universe maker, a recursive loop of universes within universes. Subsequently, I investigate the implications of this universe maker in creating a best-policy zero-player game, assessing its potential to answer the fundamental question of existence. Furthermore, I evaluate the possible applications of this research, such as generating new universes and probing the boundaries of reality. Lastly, I contemplate the potential implications and applications of this research, including the possibility of unraveling the mysteries of the universe and addressing the age-old question of existence. While this research holds the potential to offer insights into the nature of reality and the ultimate question, its theoretical nature necessitates a long-term research plan to further explore its implications.
[414] vixra:2304.0005 [pdf]
Peer2Panel: Democratizing Renewable Energy Investment With Liquid and Verifiable Tokenized Solar Panels
With an expected investment cost of roughly $100 trillion within the next decades, renewable energy is at the heart of the United Nations' transition to net-zero emissions by 2050. Unfortunately, there are several challenges associated with these investments, such as low exit liquidity and the hassle of going through centralized agencies. Investment in renewable energy is thus currently mostly limited to governments, corporations, and wealthy individuals. At Peer2Panel (P2P), we address these issues by tokenizing solar panels into unique SolarT NFTs on the Ethereum blockchain, where we function as an intermediary between a token-owning individual and a physical solar panel. Apart from panel installation and maintenance, our role is to redistribute the profits from the generated energy to the SolarT holders, thus making investments in SolarTs transparent, democratic, and liquid. In fact, ownership of a SolarT token gives a direct ownership interest in the solar panels owned by P2P, which can then be exchanged freely on-chain, removing most of the hassles of the traditional energy market. In addition, P2P leverages the most recent innovations from the decentralized finance (DeFi) ecosystem to offer SolarT-collateralized loans, instant liquidity, and multi-chain interoperability to its investors.
[415] vixra:2304.0003 [pdf]
Computational Consciousness
Computational consciousness is a novel hypothesis that aims to replicate human consciousness in artificial systems using Multithreaded Priority Queues (MPQs) and machine learning models. The study addresses the challenge of processing continuous data from various categories, such as vision, hearing, and speech, to create a coherent and context-aware system. The proposed model employs parallel processing and multithreading, allowing multiple threads to run simultaneously, each executing a machine learning model. A priority queue manages the execution of threads, prioritizing the most important ones based on the subjective importance of events determined by GPT-3. The model incorporates short-term and long-term memory, storing information generated at each moment, and uses an Evolutionary Algorithm (EA) for training the machine learning models. A preliminary experiment was conducted using Python 3.9.12, demonstrating the technical feasibility of the hypothesis. However, limitations such as the lack of a comprehensive environment, absence of load balancing, and GPT-3 API constraints were identified. The significance of this study lies in its potential contribution to the understanding of consciousness and the development of Artificial General Intelligence (AGI). By exploring the integration of multiple threads of execution and machine learning models, this work provides a foundation for further research and experimentation in the field of computational consciousness. Addressing the limitations and potential criticisms will help strengthen the model's validity and contribute to the understanding of this complex phenomenon.
[416] vixra:2304.0001 [pdf]
A Proof of the Twin Prime Conjecture
It is well known to mathematicians that there is an infinite number of primes, as proven via simple logic by Euclid in the 4th century BC [1,2] and confirmed by Leonhard Euler in 1737 [3]. In 1846 the French mathematician Alphonse de Polignac [4] proposed that any even number can be expressed in infinite ways as the difference between two consecutive primes. Since then, and perhaps even as far back as Euclid, mathematicians have been trying to prove that there is an infinite number of TWIN PRIMES. In this paper a relatively simple proof is presented that there is indeed an infinity of TWIN PRIMES, based on a new approach without any assumptions.
[417] vixra:2303.0149 [pdf]
On Emergence of Relativistic Time from Quantum Phase
Previously, an analysis of the emergence of 3D-space from quark fields in the context of QCD suggested a similar analysis of the relation between quantum phase and relativistic time. As a follow-up, Einstein's synchronization analysis, in the context of the quantum theory of EM, leads to the emergence of Minkowski Space-Time from quantum phase in the context of Scalar EM. An essential step in this direction was made by the Kaluza-Klein approach to the unification of gravity and electromagnetism. This research program suggests that GR can be derived from the SM, via gauge theory, with Gravity of quantum origin emerging macroscopically as geometry. In essence this is a "reverse engineering" of the historical development of modern theories from classical ones, taking the quantum level as primary, in order to derive the classical models we call "reality".
[418] vixra:2303.0144 [pdf]
Solving Triangles Algebraically
Quaterns are a new measure of rotation. Since they are defined in terms of rectangular coordinates, all of the analogous trigonometric functions become algebraic rather than transcendental. Rotations, angle sums and differences, vector sums, cross and dot products, etc., all become algebraic. Triangles can be solved algebraically. Computer algorithms use truncated infinite sums for the transcendental calculations of these quantities. If rotations were expressed in quaterns, these calculations would be simplified by a few orders of magnitude, with the potential to greatly reduce computing time. The archaic Greek letter koppa is used to represent rotations in quaterns, rather than the traditional Greek letter theta. Because calculations utilizing quaterns are algebraic, simple rotation in the first two quadrants can be done "by hand" using "pen and paper." Using the approximate methods outlined towards the end of the paper, triangles may be solved approximately, with an error of less than 3%, using algebra and a few simple formulas.
[419] vixra:2303.0129 [pdf]
Emulador - Emulator
It is a circuit that uses two transistors in a novel configuration. It minimizes distortion to the point of being undetectable. It combines the properties of a preamplifier and an operational amplifier. The output level is regulated by the gain control and not by a volume potentiometer. It has two inputs, one non-inverting and one inverting. The latter makes it possible to create, for example, a very simple and very pure mixer. As a preamplifier it is ideal for microphones, because the output signal is strictly identical in shape to the input signal.
[420] vixra:2303.0118 [pdf]
A Note On Gravity Field And Gravimetry
In this short note, we present some elements of the gravity field and gravimetry. In addition, we return to applications to altitude definitions and precision leveling observations, as well as distance reductions.
[421] vixra:2303.0115 [pdf]
Zero-shot Transferable and Persistently Feasible Safe Control for High Dimensional Systems by Consistent Abstraction
Safety is critical in robotic tasks. Energy-function-based methods have been introduced to address the problem. To ensure safety in the presence of control limits, we need to design an energy function that results in persistently feasible safe control at all system states. However, designing such an energy function for high-dimensional nonlinear systems remains challenging. Considering the fact that there are redundant dynamics in high-dimensional systems with respect to the safety specifications, this paper proposes a novel approach called abstract safe control. We propose a system abstraction method that enables the design of energy functions on a low-dimensional model. We can then synthesize the energy function with respect to the low-dimensional model to ensure persistent feasibility. The resulting safe controller can be directly transferred to other systems with the same abstraction, e.g., when a robot arm holds different tools. The proposed approach is demonstrated on a 7-DoF robot arm (14 states), both in simulation and in the real world. Our method always finds feasible control and achieves zero safety violations in 500 trials on 5 different systems.
[422] vixra:2303.0112 [pdf]
The Radiative Corrections to the Coulomb Law and Bohr Energy
The photon propagation function with an electron-positron pair is determined from the effective emission and absorption sources. The Schwinger source method of quantum field theory is applied. Then, the Coulomb potential and Bohr energy with radiative corrections are determined.
[423] vixra:2303.0111 [pdf]
Calculation of the Rest Masses of Elementary Particles Using Polynomials with the Base π
By restricting k to rational numbers, the Schrödinger wave equation Ψ = Ae<sup>-i/ℏ(Et+mrdr/dt)</sup> = Ae<sup>-i π k</sup> can be converted into a polynomial with the base π. Ultimately, this leads to the action S for each object:<br>S<sub>Object</sub> =(2π)<sup>4</sup> E<sub>t</sub> + (2π)<sup>3</sup> E<sub>r</sub> i<sup>t</sup> + (2π)<sup>2</sup> E<sub>φ</sub> i<sup>t-1</sup> + (2π)E<sub>θ</sub> i<sup>t-2</sup><br> t in Z If 2 objects and an observer have a common center of gravity, the energies can be related and calculated using a single polynomial. The integer quantum numbers E<sub>t</sub>, E<sub>r</sub>, E<sub>φ</sub> and E<sub>θ</sub> ensure cohesion and lead to the four fundamental natural forces. Our worldview, with 3 isotropic dimensions x, y and z and rotations with 2π must be distinguished from this. The polynomials are transformed by simple operators (addition) for parity, time and charge. The 3 spatial dimensions result from regularly recurring parity operators. Numerous calculations are given for the orbits in the solar system and for the masses of the elementary particles, e.g.:<br>m<sub>neutron</sub> / m<sub>e</sub> =(2π)<sup>4</sup> +(2π)<sup>3</sup>+(2π)<sup>2</sup>-(2π)<sup>1</sup>-(2π)<sup>0</sup>-(2π)<sup>-1</sup>+2(2π)<sup>-2</sup>+2(2π)<sup>-4</sup>-2(2π)<sup>-6</sup> +6(2π)<sup>-8</sup>=1838.6836611<br> The charge operator for all particles is:<br>C = - π + 2π<sup>-1</sup> - π<sup>-3</sup> + 2π<sup>-5</sup> - π<sup>-7</sup> + π<sup>-9</sup> - π<sup>-12</sup><br>Together with the neutron mass, the result for the proton is:<br>m<sub>proton</sub>=m<sub>neutron</sub> + C m<sub>e</sub>= 1836.15267363 m<sub>e</sub><br>The probabilities for the correct representation of the neutron and proton mass have been calculated and are greater than 0.99997. 
The muon and tauon masses can be calculated in the same way.<br>Fine structure constant:<br> 1/α = π<sup>4</sup>+π<sup>3</sup>+π<sup>2</sup>-1- π<sup>-1</sup> + π<sup>-2</sup>-π<sup>-3</sup> + π<sup>-7</sup> - π<sup>-9</sup>- 2π<sup>-10</sup>-2π<sup>-11</sup>-2π<sup>-12</sup><br>For an observer and two objects, from the torque and angular momentum alone, a common constant of h, G, and c can be derived, giving a ratio of meters and seconds:<br>h G c<sup>5</sup> s<sup>8</sup> /m<sup>10</sup> √(π<sup>4</sup> - π<sup>2</sup> - π<sup>-1</sup> - π<sup>-3</sup>) = 1.00000<br>How fast the surface of a body moves relative to its radius is determined purely by the smallest possible ratio of its rational coordinates.<br>2π c(α) meter orbital period = √(diameter)<br>2π c meter day = (Earth's diameter)<sup>2</sup>
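The stated mass polynomials are plain arithmetic in powers of π, so they can be evaluated directly. A minimal check (the variable names are ours; the coefficients and reference values are those quoted in the abstract):

```python
import math

TWO_PI = 2 * math.pi
PI = math.pi

# Claimed neutron-to-electron mass ratio as a polynomial in powers of 2*pi
neutron_ratio = (TWO_PI**4 + TWO_PI**3 + TWO_PI**2 - TWO_PI - 1
                 - TWO_PI**-1 + 2 * TWO_PI**-2 + 2 * TWO_PI**-4
                 - 2 * TWO_PI**-6 + 6 * TWO_PI**-8)

# Claimed charge operator C (in units of the electron mass)
C = -PI + 2 * PI**-1 - PI**-3 + 2 * PI**-5 - PI**-7 + PI**-9 - PI**-12

# The abstract defines the proton ratio as neutron ratio plus C
proton_ratio = neutron_ratio + C

print(round(neutron_ratio, 7))  # 1838.6836611, as stated in the abstract
print(round(proton_ratio, 5))   # 1836.15267
```

Evaluating the expressions does reproduce the two numbers quoted above to the printed precision; whether those polynomials have any physical significance is, of course, the paper's claim, not a checked fact.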
[424] vixra:2303.0109 [pdf]
Post Pandemic, Social Media Pedagogy: Math with TI Calculator Menu Programs
Laments from teachers of all stripes are growing. Looking out from behind the podium, one constantly sees a sea of bored, confused, seemingly moribund students staring at their iPhones, maybe still wearing Covid masks. Between the pandemic, social media, and traditional sage-on-a-stage teaching, teachers reside: a kind of prehistoric fish out of any known water. In this article we propose a solution using TI-84 CE calculators in novel and controversial ways. The idea is to show how to animate students via increased teacher-student and student-student interactions. It is field tested: life does come back into the classroom with the techniques given in this article: get an easy A using your cool calculator, filling in a shell provided by the teacher.
[425] vixra:2303.0102 [pdf]
The Riemann Hypothesis Is True: The End of the Mystery
In 1859, Georg Friedrich Bernhard Riemann announced the following conjecture, called the Riemann Hypothesis: the nontrivial roots (zeros) $s=\sigma+it$ of the zeta function, defined by $\zeta(s)=\sum_{n=1}^{+\infty}\frac{1}{n^s}$ for $\Re(s)>1$, have real part $\sigma=1/2$. In this note, I give a proof that $\sigma=1/2$ using an equivalent statement of the Riemann Hypothesis concerning the Dirichlet eta function.
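For context (textbook material, not part of the note): the Dirichlet eta function referred to is the alternating series $\eta(s)=\sum_{n\ge 1}(-1)^{n-1}n^{-s}$, convergent for $\Re(s)>0$ and related to zeta by $\eta(s)=(1-2^{1-s})\zeta(s)$. A quick numerical sanity check at $s=2$, where $\eta(2)=\pi^2/12$:

```python
import math

def eta(s: float, terms: int) -> float:
    """Partial sum of the Dirichlet eta (alternating zeta) series."""
    return sum((-1) ** (n - 1) / n ** s for n in range(1, terms + 1))

approx = eta(2.0, 100_000)
exact = math.pi ** 2 / 12          # eta(2) = (1 - 2**(1-2)) * zeta(2)
print(abs(approx - exact) < 1e-9)  # True: alternating-series error < 1/terms**2
```

The alternating-series bound guarantees the truncation error is below the magnitude of the first omitted term.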
[426] vixra:2303.0101 [pdf]
A Specific Magnitude Budget for the Detection of 36 Nuclear Earthquakes Near Large Urban Areas Subject to a Natural Seismic Hazard.
Multiple underground nuclear explosions may trigger the rupture of seismic faults and mimic a natural earthquake. Moreover, multiple nuclear explosions can be spatially arranged (on a vertical line, for instance) and temporally synchronized in order to significantly reduce the P-waves (except inside the two spherical cones along the vertical line arrangement). A Specific Magnitude Budget, with the relevant elementary approximations, is accurate enough to compare unambiguously the energy of the stress drop over the fault rupture and the energy of the radiated seismic waves. Indeed, for the largest natural earthquakes precisely recorded ($6.9 \leq M_w \leq 7.3$), we very conservatively define their average seismic radiation efficiency as $0.25$. It follows from that definition that the natural seismic radiation efficiency ranges between $0.113$ and $0.520$ around the average $0.269$ (the natural Specific Magnitude Budget ranges between $\Delta_{nat}^{min} M_Z=-0.630$ and $\Delta_{nat}^{max} M_Z=-0.189$ around the average $\Delta_{nat}^{mean} M_Z=-0.380$). On the other hand, the nuclear seismic radiation efficiency ranges between $1.185$ and $1113$ around the average $81.9$ (the nuclear Specific Magnitude Budget ranges between $\Delta^{min}_{nuc} M_Z=0.049$ and $\Delta^{max}_{nuc} M_Z=2.031$ around the average $\Delta_{nuc}^{mean} M_Z=1.275$). In practice, the natural seismic radiation efficiency is always $2.278\times$ smaller than the nuclear seismic radiation efficiency (an artificial gap of the Specific Magnitude Budget $\Delta_{gap} M_Z=0.238$ is found). Indeed, to provoke a more powerful stress drop over the fault rupture with multiple underground explosions, accurate information about the future epicenters would have to be known, which is impossible in practice. Lowering the energy of the multiple underground nuclear explosions too much would also increase the risk of not triggering the rupture of a seismic fault at all.
[427] vixra:2303.0091 [pdf]
Collatz Conjecture
The paper analyzes the number of zeros in the binary representation of a natural number. The analysis is carried out using the concept of the fractional part of a number, which arises naturally when finding a binary representation. This idea relies on the fundamental property of the Riemann zeta function, which is constructed using the fractional part of a number. Understanding that the ratio of the fractional and integer parts, by analogy with the Riemann zeta function, expresses deep laws of numbers explains the essence of this work. For the Syracuse sequence of numbers that appears in the Collatz conjecture, we use a binary representation that allows us to obtain a uniform estimate for all terms of the series, and this estimate depends only on the initial term of the Syracuse sequence. This estimate immediately leads to the solution of the Collatz conjecture.
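For readers unfamiliar with the objects involved: the Syracuse sequence is the subsequence of odd terms of the Collatz trajectory, and the quantity analyzed is the number of zeros in each term's binary representation. A minimal illustration (our code, not the paper's):

```python
def syracuse(n: int):
    """Yield the odd terms of the Collatz trajectory starting at odd n."""
    assert n % 2 == 1
    while n != 1:
        yield n
        n = 3 * n + 1
        while n % 2 == 0:   # strip factors of 2 to reach the next odd term
            n //= 2
    yield 1

def binary_zero_count(n: int) -> int:
    """Number of zeros in the binary representation of n."""
    return bin(n)[2:].count("0")

seq = list(syracuse(27))
print(seq[:5])                                   # [27, 41, 31, 47, 71]
print([binary_zero_count(k) for k in seq[:5]])   # [1, 3, 0, 1, 3]
```

The trajectory of 27 is a standard example of a long Syracuse sequence that nonetheless reaches 1.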
[428] vixra:2303.0082 [pdf]
A Boolean Algebra over a Theory
Suppose that L is a first-order language. Let L† denote the union of L and {t, f}, where t (true) and f (false) are nullary operations. We may define a binary relation '≤' such that the sentence set Φ of the language L† is a preordered set. We may then construct a boolean algebra Φ/∼, denoted Φ̃, by an equivalence relation '∼'; Φ̃ is a partially ordered set. Let A be a structure of the language L. If Th(A) is the theory of A, then Th†(A) is an ultrafilter. If Ψ ⊂ Φ̃ is a finitely generated filter, then Ψ is principal. We may define the kernel of a homomorphism of the boolean algebra Φ̃ such that the kernel is a filter; conversely, a filter is a kernel under certain assumptions.
[429] vixra:2303.0078 [pdf]
3D Map of the Universe - A Big Misunderstanding [?]
It is impossible to create a correct 3D map of the observable part of the universe due to the fact that what we see around us is not a three-dimensional space. Each object we observe is distant from us not only in space but also in time. What we see is a fragment of spacetime, and if we try to imagine it as a three-dimensional space, various deformations and incorrect determinations of the distances and sizes of distant objects occur. Regardless of whether real space is curved or flat, the observed part of the universe can be modeled as a system of spheres (which differ in the time it took for the light to reach us from a given sphere) distributed in a certain way in spacetime. In order to correctly imagine the observable part of the universe, a four-dimensional map is necessary. In this paper, I present one possible solution for constructing a 4D map of the universe. One may be surprised at how big the differences can be in comparison to a 3D map, which treats the observable part of the universe as three-dimensional space.
[430] vixra:2303.0076 [pdf]
Hall Effect Thruster Design Via Deep Neural Network for Additive Manufacturing
Hall effect thrusters are one of the most versatile and popular electric propulsion systems for space use. Industry trends towards interplanetary missions drive advances in the design of such propulsion systems. It is understood that correct sizing of the discharge channel in a Hall effect thruster greatly impacts performance. Since the complete physics model of such a propulsion system is not yet optimized for fast computations and design iterations, most thrusters are designed using so-called scaling laws. This work, however, focuses on a rather novel approach, which is outlined less frequently in the literature than the ordinary scaling design approach. Using deep machine learning, it is possible to create a predictive performance model that can effortlessly produce a Hall thruster design with the required characteristics, using far less computing power than design from scratch and offering far more flexibility than the usual scaling approach.
[431] vixra:2303.0070 [pdf]
Neoclassical Physics Presentation
Since 1905, physics has lost its soul, the intimate contact with reality and reason. The modern era has brought a revolution, a profound change in the representations we have of the world. But the old classical Newtonian physics is resisting; it has not yet said its last word. Throughout this article, we will show that it provides an elegant, clear and precise answer to current questions in physics. Despite appearances, it remains consistent with all the empirical, experimental data that it can explain while maintaining a requirement for realism and rationality.
[432] vixra:2303.0059 [pdf]
Exobiological DNA 1bna based on Silicon
Silicon has been investigated as a possible alternative basis for DNA. The DNA double helix sometimes doubles up again: researchers have now found this quadruple-stranded form in healthy human cells for the first time, and speculate that the quadruplex structure forms to hold the molecule open and facilitate the reading of the genetic code and thus the production of proteins. G-quadruplex DNA is a four-stranded structure that can form a 'knot' in the DNA of living cells. In these terms, the possibility of a DNA macromolecule based on silicon was analyzed through computational calculations, via MM (Molecular Mechanics) and the ab initio Restricted Hartree-Fock (RHF) method on a simple STO-3G (Slater-type orbitals with 3 Gaussians) basis. Starting from the basic structure of 1bna, conditions without the presence of carbon were assumed, replacing it with silicon. A cluster of G-quadruplex DNA was obtained, forming a cocoon, with great potential for the pharmaceutical industry in capturing molecules foreign to human DNA.
[433] vixra:2303.0057 [pdf]
The Problem of Particle-Antiparticle in Particle Theory
The title of this workshop is: "What comes beyond standard models?". Standard models are based on standard Poincaré invariant quantum theory (SQT). Here, irreducible representations (IRs) of the Poincaré algebra are such that in each IR the energies are either ≥ 0 or ≤ 0. In the first case, IRs are associated with particles and in the second case with antiparticles, while particles for which all additive quantum numbers (electric charge, baryon and lepton quantum numbers) equal zero are called neutral. However, SQT is a special degenerate case of finite quantum theory (FQT) in the formal limit p→∞, where p is a characteristic of a ring in FQT. In FQT, one IR of the symmetry algebra describes a particle and its antiparticle simultaneously, and there are no conservation laws of additive quantum numbers. One IR in FQT splits into two standard IRs with positive and negative energies as a result of symmetry breaking in the formal limit p→∞. The construction of FQT is one of the most fundamental (if not the most fundamental) problems of particle theory.
[434] vixra:2303.0054 [pdf]
Deriving Measurement Collapse Using Zeta Function Regularisation and Novel Measurement Theory
This paper shows how an application of zeta function regularisation to a physical model of quantum measurement yields a solution to the problem of wavefunction collapse. A realistic measurement ontology is introduced, which is based on particle distinguishability being imposed by the measurement process entering into the classical regime. Based on this, an outcome function is introduced. It is shown how regularisation of this outcome function leads to apparent collapse of the wavefunction. Some possible experimental approaches are described.
[435] vixra:2303.0051 [pdf]
On the Topology of Problems and Their Solutions
In this paper, we study the topology of problems and their solution spaces introduced in our first paper. We introduce and study the notions of separability and of quotient problem and solution spaces. These notions will form a basic underpinning for further studies on this topic.
[436] vixra:2303.0047 [pdf]
Statistical Independence in Quantum Mechanics
Algebraic mistakes in using non-relativistic functions betrayed Dirac's elegant derivation of the relativistic equation of quantum mechanics and exposed a shortcoming of special relativity. It was a serious mistake because that famous paper became a model for theorists to follow, producing an unending stream of nonsense. The mistake was compounded because it hid the fact that special relativity was still incomplete. Multiple independent spaces are required both to generate dynamics and to produce particle properties. The concept of statistical independence of spaces that encapsulate quantum objects, fields and particles was necessary for physics to have a relativistic basis for both massive particles and massless fields. The example that will be developed is the origin of the solar neutrino survival data, which requires the electron neutrino to be massless as originally proposed by Pauli. The analysis renders a proof of the original quantum conjecture by Planck and Einstein that radiation is quantized, and of how inertia for massive particles is generated.
[437] vixra:2303.0032 [pdf]
MOND from FLRW
After proving that the universal acceleration scale of MOND is the acceleration of light in an expanding Universe, it is shown that accelerating null rays require a modification of the metric of velocity space, hence the differential of velocity. Consequently the canonical momentum, and from there the law of motion are changed. After some approximations, MOND's essence is vindicated and it is seen as a necessary consequence of the acceleration of the Universe.
[438] vixra:2303.0030 [pdf]
A Quark Fields Emergent Space-Time and Gravity
The idea of using the gauge theory paradigm of the SM to couple the Quark-Lepton Model of matter and fields to Space-Time, as an emergent structure, is suggested. The color QCD quark fields, as SU(2)-connections of EM type, allow one to define an adapted Space-Time metric compatible with the SU(3)-symmetry. It uses the double role of the EM U(1)-connection, with the electric charge of the electron as a 4th color T, alongside the RGB quark colors. Quantum Flavor-Dynamics (the Weak Force) corresponds to the breaking of symmetry due to finite subgroups of symmetry of SU(2), of Platonic type, and plays the role of a theory of transitions of states defined by a principal quantum number in 3D, which we call "generation", similar to the case of electron transitions in an atom (2D finite subgroups). Gravity is a nuclear spin-spin dependent interaction, resulting from the fractional electric charge of quarks as U(1)-components of the quark fields. This allows one to derive Newtonian gravity as an emergent force, always attractive, by averaging over quark spin directions, providing a solution to the Hierarchy Problem. The compatibility between the fiber structure (SM Gauge Theory) and base connections (Space-Time and EM) is conjectured to allow, in principle, relating the SM and General Relativity. This approach provides a unified treatment of the four fundamental interactions. Several other remarks are included, e.g. regarding the relation between the emergent Space-Time approach and String Theory, GUTs, TOEs and Quantum Gravity.
[439] vixra:2303.0028 [pdf]
New Classifications of Labeled Rooted Growing Trees and Their Application to Simplifying of the Tree Representation of Functions
The article deals with labeled rooted growing trees. Research in this area, carried out by the author over the past 35 years, has led to the creation of the concept of tree classification of labeled graphs. This concept is the mathematical basis of the tree sum method, aimed at simplifying the representations of the coefficients of power series in classical statistical mechanics. This method was used to obtain tree representations of the Mayer coefficients of the expansions of pressure and density in terms of activity degrees, which are free from the asymptotic catastrophe. The same method was used to obtain tree representations of the coefficients of the expansion of the ratio of activity to density in terms of activity degrees. All these representations for n ≥ 7 are much simpler than the comparable Ree-Hoover representations according to the complexity criteria defined on these representations. Tree representations of the coefficients of the expansion of the m-particle distribution function into a series in terms of activity degrees were also obtained. All the above representations of the coefficients of power series obtained by the tree sum method are free from the asymptotic catastrophe. In order to provide a mathematical basis for constructing new, even less complex representations of the coefficients of these power series, further development of the concept of tree classification of labeled graphs was required. As part of solving this problem, the article proposes new classifications of labeled rooted growing trees. On their basis, a theorem is formulated and proved that is the basis for simplifying tree representations of functions, that is, representations of a function as a sum of products of functions labeled by trees.
[440] vixra:2303.0027 [pdf]
Some Notes on Fermion Masses in the Tetron Model
Quark and lepton masses and mixings are considered in the framework of the microscopic model. The most general ansatz for the interactions among tetrons leads to a Hamiltonian $H_T$ involving Dzyaloshinskii-Moriya (DM), Heisenberg and torsional isospin forces. Diagonalization of the Hamiltonian provides for 24 eigenvalues which are identified as the quark and lepton masses. While the masses of the third and second family arise from DM and Heisenberg type of isospin interactions, light family masses are related to torsional interactions among tetrons. Neutrino masses turn out to be special in that they are given in terms of tiny isospin non-conserving DM, Heisenberg and torsional couplings. The approach not only leads to masses, but also allows to calculate the quark and lepton eigenstates, an issue, which is important for the determination of the CKM and PMNS mixing matrices. Compact expressions for the eigenfunctions of $H_T$ are given. The almost exact isospin conservation of the system dictates the form of the lepton states and makes them independent of all the couplings in $H_T$. Much in contrast, there is a strong dependence of the quark states on the coupling strengths, and a promising hierarchy between the quark families shows up.
[441] vixra:2303.0018 [pdf]
Polynomial Natural Solution of the Pythagorean Equation
While playing unsuccessfully with Fermat's Last Theorem, I noticed that this way of playing, applied to the Pythagorean equation a²+b²=c², leads to a general polynomial solution in natural numbers. The game combines arithmetic, common sense and logic. It is a bit like teaching and learning mathematical and geometric subjects with the help of colorful manipulatives, as is done at the most elementary levels of teaching. The only thing different in the adult experience is operating with mentally conceived objects; the way of operating with them does not change.
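The abstract does not spell out its polynomial solution. For comparison, the classical Euclid parametrization is the standard polynomial family generating Pythagorean triples (textbook material, not necessarily the author's construction):

```python
from math import gcd

def euclid_triple(m: int, n: int) -> tuple[int, int, int]:
    """Euclid's polynomial parametrization of Pythagorean triples:
    a = m^2 - n^2, b = 2mn, c = m^2 + n^2 for integers m > n >= 1."""
    assert m > n >= 1
    return m * m - n * n, 2 * m * n, m * m + n * n

def is_primitive(a: int, b: int, c: int) -> bool:
    """A triple is primitive when its three entries share no common factor."""
    return gcd(gcd(a, b), c) == 1

a, b, c = euclid_triple(2, 1)
print((a, b, c), a * a + b * b == c * c)                        # (3, 4, 5) True
print(euclid_triple(3, 2), is_primitive(*euclid_triple(3, 2)))  # (5, 12, 13) True
```

Every primitive triple arises this way for coprime m, n of opposite parity.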
[442] vixra:2303.0007 [pdf]
One Procedure for Determining the Astronomical Azimuth of a Direction
This note, written after my participation in the 1982 astronomical campaign for observations of 8 Laplace points, presents the procedure for determining the astronomical azimuth of a direction by observing the pole star and noting the time of its observation, i.e. the so-called time method.
[443] vixra:2303.0005 [pdf]
Particles of Space-Time: A Brief 'Experimental' Approach
Some theoretical physics models take space-time to be discontinuous. We concur. Further, we suggest that all of space-time consists of real, charged, Planck-scale, particles that move in an imaginary manifold under the influence of a force acting between those particles. We propose such a force, an amalgam of the strong and electro-weak forces. In addition, the gravitational force and the force between charged particles is included. We then drop a large number of the particles into the manifold and observe their motions. The motions can be observed in 3D on a dedicated URL. This lets us explore various quantum mechanical phenomena.
[444] vixra:2302.0149 [pdf]
Equation of Motion for the Throat of Wormhole in Three Dimensional Gravity
We investigate the equation of motion for the throat of a wormhole in three dimensions, both classically and quantum mechanically. The minisuperspace model is applied to the latter case. Our main purpose is to treat the motion of the throat in the same way as the wave function of the universe of Hartle-Hawking. The resulting wave function may have a Yukawa-potential-like solution. Some parts of this article are preliminary.
[445] vixra:2302.0148 [pdf]
General Relativity Theory of Numbers
In this paper we show that a thorough understanding of numbers is possible only if we present them as a value in relation to a certain reference measure. Commonly, we use the number 1 as a reference measure; however, it does not always have to be 1, it can be any other number. To fully understand the meaning of numbers, we have to maintain their natural form, which is a quotient of a value to a reference measure. Only by keeping this form can we do mathematics properly and appreciate its natural beauty.
[446] vixra:2302.0137 [pdf]
Interpretation of Higher Order Field Terms
In this paper, the higher-order terms are evaluated for an electric potential field equation derived from a unified classical electrostatic-gravitational field theory. It is shown that the higher-order terms represent higher-order gravitational interactions arising from gravity coupling to itself. In the low-voltage limit, these correspond approximately to the magnitude and sign expected from classical gravitation. In the high-voltage limit, these terms take a hyperbolic trigonometric form and appear to have a finite sum. This strongly suggests that the theory will not require renormalization even in higher dimensions. After evaluation of the mathematics, a new mechanistic explanation for these self-gravitation terms is also provided from the new theory.
[447] vixra:2302.0134 [pdf]
Deterministic Degradation Process for Diffusion GAN and Its Inversion
Recently, diffusion models have shown impressive generative performance. However, they have the disadvantage of having a high latent dimension and slow sampling speed. To increase the sampling speed of diffusion models, diffusion GANs have been proposed. But the latent dimension of diffusion GANs using non-deterministic degradation is still high, making it difficult to invert the generative model. In this paper, we introduce an invertible diffusion GAN that uses deterministic degradation. Our proposed method performs inverse diffusion using deterministic degradation without a model, and the generator of the GAN is trained to perform the diffusion process with the latent random variable. The proposed method uses deterministic degradation, so the latent dimension is low enough to be invertible.
[448] vixra:2302.0132 [pdf]
Generalized Proof of Uncertainty Relations in Terms of Commutation Relation and Interpretation Based on Action Function
The uncertainty principle is the most important principle for the foundations of quantum mechanics, but no consensus on its interpretation has yet been reached, which gives rise to debates about its physical nature. In this work, we address the problem of its foundation from a different aspect, presenting an alternative formulation for proving the uncertainty relations in a general way in terms of commutation relations and the action function. The relationship between the de Broglie relation and the uncertainty principle is studied from a new angle. As a result, it is demonstrated that the de Broglie relation is the foundation of the uncertainty principle. Starting with the de Broglie relation as the origin of the problem, we show the logical context in which the de Broglie relation provides the form of the wave function, and the determined form of the wave function in turn leads to the conception of operators for quantum mechanics; thus it is possible to provide, with the help of the operators and wave function, a generalized proof of the uncertainty principle as the law governing an ensemble of states. As a decisive solution to the problem, an interpretation of the uncertainty principle in terms of the action function is offered that gives a consistent explanation in agreement with known physical phenomena. Eventually, we show the necessity and possibility of reassessing and improving the foundation and interpretation of the uncertainty principle as a leading principle of quantum mechanics.
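For reference, the standard commutator-based uncertainty relation that such generalized proofs build on is the Robertson inequality (textbook material, not the paper's new formulation): for self-adjoint operators $\hat{A}$, $\hat{B}$ with standard deviations $\sigma_A$, $\sigma_B$ in a given state,

```latex
\sigma_A \,\sigma_B \;\ge\; \frac{1}{2}\left|\left\langle [\hat{A},\hat{B}] \right\rangle\right|,
\qquad\text{so that}\qquad
[\hat{x},\hat{p}] = i\hbar
\;\Longrightarrow\;
\sigma_x \,\sigma_p \ge \frac{\hbar}{2}.
```

The position-momentum case on the right is the familiar Heisenberg bound recovered as a special case.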
[449] vixra:2302.0127 [pdf]
Semi-Stable Quiver Bundles Over Gauduchon Manifolds
In this paper, we prove the existence of the approximate $(\sigma,\tau)$-Hermitian Yang--Mills structure on the $(\sigma,\tau)$-semi-stable quiver bundle $\mathcal{R}=(\mathcal{E},\phi)$ over compact Gauduchon manifolds. An interesting aspect of this work is that the argument on the weakly $L^{2}_1$-subbundles is different from [Álvarez-Cónsul and García-Prada, Comm Math Phys, 2003] and [Hu--Huang, J Geom Anal, 2020].
[450] vixra:2302.0126 [pdf]
A Novel Quantum Belief Entropy for Uncertainty Measure in Complex Evidence Theory
In this paper, a new quantum representation of the complex basic belief assignment (CBBA) is proposed. In addition, a novel quantum belief entropy is proposed to measure the uncertainty of a CBBA in complex evidence theory.
[451] vixra:2302.0116 [pdf]
Clifford Algebra Cl(0,6) Approach to Beyond the Standard Model and Naturalness Problems
Is there more to Dirac's gamma matrices than meets the eye? It turns out that the gamma zero operator can be split into three components. This revelation facilitates the expansion of Dirac's space-time algebra to Clifford algebra Cl(0,6). The resultant rich geometric structure can be leveraged to establish a combined framework of gravity and beyond the standard model, wherein a gravi-weak interaction between the vierbein field and the extended weak gauge field is allowed. Inspired by the composite Higgs model, we examine the vierbein field as an effective description of the fermion-antifermion condensation. The compositeness of space-time manifests itself at an energy scale which is different from the Planck scale. We propose that the regular Lagrangian terms including the cosmological constant are of quantum condensation origin, thus possibly addressing the naturalness problem. The Clifford algebra approach also permits a weaker form of charge conjugation without particle-antiparticle interchange, which leads to a Majorana-type mass that conserves lepton number. Additionally, in the context of spontaneous breaking of two global U(1) symmetries, we explore a three-Higgs-doublet model which could explain the fermion mass hierarchies.
[452] vixra:2302.0110 [pdf]
Theory of Surfaces: Application To The Equipotential Surfaces of The Normal Field of Gravity
Recently, in an article, we reviewed some models of the equipotential surfaces of the normal field of gravity relative to the Earth. In this note, we apply the important theorems of surface theory to the surface obtained from the second model, whose normal gravity potential is given by:$$ U= \frac{Gm}{r}+\frac{1}{2}\omega^2\left(x^2+y^2\right) $$
[453] vixra:2302.0106 [pdf]
Unification Without Extra Dimensions
In this paper we develop a modified Lagrangian for electrostatics based on the concept of voltage non-additivity, using recently derived voltage boost transformation laws. Using a one-dimensional physical space as a model, we solve the Euler-Lagrange equations for the electrostatic potential. It is shown that the modified electrostatic potential takes the form of a hyperbolic tangent function, approximately linear around the origin with horizontal asymptotes at the positive and negative Planck voltages. Moreover, it is shown that the modified electrostatic Lagrangian automatically gives rise to classical gravitation as the leading correction term in the Taylor expansion of the solution. This provides a new interpretation of gravitation as a corrective term for electrostatics at very high voltages, wherein that term helps keep the potential from ever reaching the Planck voltage. It is therefore a somewhat unique example of a unified field theory that does not assume the existence of extra dimensions. Future directions are discussed, including possible implications for quantum gravity.
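The Taylor-expansion claim can be made concrete with a toy form (ours, not necessarily the paper's exact solution): writing the saturating potential as $V_P \tanh(\phi/V_P)$, with $V_P$ the Planck voltage and $\phi$ the ordinary electrostatic potential,

```latex
V_P \tanh\!\left(\frac{\phi}{V_P}\right)
= \phi \;-\; \frac{\phi^{3}}{3\,V_P^{2}} \;+\; \mathcal{O}\!\left(\frac{\phi^{5}}{V_P^{4}}\right),
```

so the potential is ordinary electrostatics near the origin, the cubic term is the leading correction of the kind the abstract identifies with gravitation, and the potential saturates at $\pm V_P$ as $\phi \to \pm\infty$.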
[454] vixra:2302.0103 [pdf]
Phase Changes and Interactions of Energy and Matter in the Universe Viewed Through Temperature Change
The universe is composed of energy and matter. Matter can be measured by weight; energy can be measured in terms of temperature. The higher the density, the higher the temperature, and the lower the density, the lower the temperature. If our universe started from a small point with a very high temperature, then as the universe expands, the density of energy decreases and so the temperature lowers too. When the temperature lowers, some of the energy undergoes a phase change into matter, and vice versa. When matter is created, an interaction takes place. At $10^{13}K$, imps (invisible material particles, aka dark matter) are created, and a gravitational field is formed as a result of graviton emission acting on them, by which they gain mass. Down quarks and up quarks interact through their own intrinsic properties. This interaction is called the quark interaction. When two down quarks and one up quark meet with an imp to create a neutron, the force resulting from the quark interaction is confined inside the neutron. The quark interaction confines the strong force when quarks form a neutron, mediates the electromagnetic field and electromagnetic force when a neutron transforms into a proton, and binds the nucleons to form heavier particles through the strong nuclear interaction.
[455] vixra:2302.0100 [pdf]
You Will Never Be Alone Again
This paper will investigate the concept of a Turing complete universe and its implications for the best-policy zero-player game, as well as the fundamental question of why anything exists at all, or how something has always existed. We will begin by analyzing the implications of a Turing complete universe and how it can be used to construct a universe maker, a recursive loop of universes within universes. We will then examine the implications of this universe maker and how it can be used to create a best-policy zero-player game. We will assess the implications of this game and how it can be used to answer the fundamental question of why anything exists at all, or how something has always existed. Additionally, we will evaluate the potential applications of this research and its implications for the future, such as the potential to generate new universes and explore the limits of reality. Finally, we will consider the potential implications of this research, such as the potential to uncover the mysteries of the universe and answer the age-old question of why anything exists. This research has the potential to provide insight into the nature of reality and the answer to the ultimate question; however, due to its theoretical nature, a long-term research plan is necessary to further explore its implications.
[456] vixra:2302.0099 [pdf]
Route Planning Algorithms for Efficient 3D Printing
General 3D printers use Fused Deposition Modeling (FDM) and Stereolithography (SLA) technologies to print 3D models. However, turning the nozzle on and off during FDM or SLA extrusion causes unwanted results. This project created an experimental 3D model slicer named Embodier that generates continuous extrusion paths whenever possible. This enables 3D printers to draw out the printing layers accurately in a stable manner. Furthermore, the slicer partitions the outlines into tree structures for efficiency and applies a flood-fill algorithm for water-tightness validation. Lastly, a 3D printing simulator was also created to visualize the printed paths in 3D for a more intuitive review of the Embodier slicer. The end result is that a single continuous-extrusion-path slicer is not only possible, but can also be optimized for performance and robustness in practice.
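The water-tightness validation mentioned above can be sketched with a flood fill over a rasterized layer. This is a simplified stand-in, assuming a 0/1/2 occupancy grid rather than Embodier's actual outline trees: if outside air can reach any interior cell, the layer outline leaks.

```python
from collections import deque

def is_watertight(grid: list[list[int]]) -> bool:
    """Flood-fill from the border: 0 = air, 1 = wall, 2 = interior.
    Returns False if outside air reaches any interior cell."""
    rows, cols = len(grid), len(grid[0])
    queue = deque((r, c) for r in range(rows) for c in range(cols)
                  if (r in (0, rows - 1) or c in (0, cols - 1))
                  and grid[r][c] != 1)
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == 2:
            return False  # outside air reached the interior: leak
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in seen and grid[nr][nc] != 1:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return True

closed = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 2, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
leaky  = [[0, 0, 0, 0, 0],
          [0, 1, 0, 1, 0],   # gap in the wall
          [0, 1, 2, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
print(is_watertight(closed), is_watertight(leaky))  # True False
```

The same reachability idea extends to a 3D voxelization when validating the whole model rather than a single layer.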
[457] vixra:2302.0097 [pdf]
Complex Analysis and Theory of Reproducing Kernels
The theory of reproducing kernels is fundamental and beautiful, with many applications in analysis, numerical analysis and data science. In this paper, some essential results in complex analysis derived from the theory of reproducing kernels are introduced briefly.
[458] vixra:2302.0094 [pdf]
Dark Matter and Dark Energy: Specifications that Associate with Data
This paper suggests a specification for dark matter and an explanation for dark energy. This paper features two key hypotheses. First, this paper assumes that nature includes six isomers of most elementary particles. Five of the six isomers associate with dark matter. Second, this paper assumes that multipole expansions can prove useful regarding gravity. Some terms in the expansions associate with gravitational attraction. Some terms associate with dilution of attraction. Dilution can associate with mutual repulsion between objects and with dark energy. This paper suggests that those two assumptions lead to explanations for data that pertain to the rate of expansion of the universe, the formation of galaxies, and other aspects of cosmology and astrophysics.
[459] vixra:2302.0084 [pdf]
Why Bell’s Experiment is Meaningless
We demonstrate that a Bell experiment asks the impossible of a Kolmogorovian correlation. An Einstein locality explanation in Bell’s format is therefore excluded beforehand by way of the experimental and statistical method followed.
[460] vixra:2302.0073 [pdf]
Interpretation of Correspondence Principle Based on Examination of Existence of Isomorphic Mapping Between Observables and Operators
It still remains an important open question in the interpretation of the foundations of quantum mechanics to thoroughly elucidate the essence and significance of the correspondence principle. We focus, from a new mathematical aspect, on the review of the correspondence principle to gain a correct understanding of the principle. As a result, we show that there does not exist an algebraic isomorphism between the algebra of the observables and that of the quantum operators, and therefore the previous interpretation of the correspondence principle, aiming to provide all the operators corresponding to physical quantities, is inconsistent from the mathematical viewpoint. Furthermore, it is demonstrated that the correspondence between physical quantities and quantum operators is possible within canonically conjugate observables constituting the action, while classical and quantum quantities satisfy one and the same dynamical relation. Moreover, it is shown that the classical limit of quantum mechanics can be explained not by the correspondence principle but by the de Broglie relation and the operator equations.
[461] vixra:2302.0067 [pdf]
Multi-Class Product Counting and Recognition for Automated Retail Checkout: a Survey Paper of the 6th ai City Challenge Track4
Track 4 of the 6th AI City Challenge specifically focuses on implementing accurate and automatic check-out systems in retail stores. The challenge includes identifying and counting products as they move along a retail checkout conveyor belt, despite obstacles such as occlusion, movement, and similarity between items. I was on the evaluation team for this track, where I evaluated the methods of the top-performing teams on hidden Testset B along with my professor, David C. Anastasiu, who is on the organizing team of the challenge. Teams were provided with a combination of real-world and synthetic data for training and were evaluated on their ability to accurately recognize products in both closed and open-world scenarios, as well as on the efficiency of their programs. The team with the highest combined score for effectiveness and efficiency was declared the winner. The goal of this track is to accurately identify and count products as they move along a retail checkout lane, even if the items are similar or obscured by hands. Distinguishing this track from others, only synthetic data was provided for training models. The synthetic data included a variety of environmental conditions to train models on, while real-world validation and test data were used to evaluate the performance of models in realistic scenarios.
[462] vixra:2302.0060 [pdf]
Bangiya Sabdakosh and The Graphical Law
We study the Bangiya Sabdakosh, a Bengali-Bengali lexicon compiled by the late Haricharan Bandyopadhyay. We draw the natural logarithm of the number of words, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the dictionary can be characterised by BW(c=0.01), the magnetisation curve of the Ising Model in the Bragg-Williams approximation in the presence of an external magnetic field, H. $c=\frac{H}{\gamma \epsilon}=0.01$, with $\epsilon$ being the strength of coupling between two neighbouring spins in the Ising Model and $\gamma$ representing the number of nearest neighbours of a spin, which is very large.
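The plotting recipe in this abstract (and in the other "graphical law" entries below) is concrete enough to sketch: normalise the per-letter counts by the largest count, normalise the ranks by the largest rank, and take natural logarithms of both. As an illustration only (the letter counts are invented, and `graphical_law_points` is a hypothetical helper, not code from the paper):

```python
import math

def graphical_law_points(counts):
    """Return (ln normalised rank, ln normalised count) pairs for a
    mapping of initial letter -> number of entries, as used in the
    'graphical law' plots."""
    sorted_counts = sorted(counts.values(), reverse=True)
    k_max = max(sorted_counts)          # normalise counts by the largest
    n = len(sorted_counts)              # normalise ranks by the largest rank
    return [(math.log(rank / n), math.log(k / k_max))
            for rank, k in enumerate(sorted_counts, start=1)]

# invented counts of dictionary entries per initial letter
counts = {"a": 1200, "b": 900, "k": 700, "p": 400, "t": 150}
for x, y in graphical_law_points(counts):
    print(f"{x:+.3f} {y:+.3f}")
```

The resulting points could then be compared against a Bragg-Williams magnetisation curve such as BW(c=0.01).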
[463] vixra:2302.0055 [pdf]
Non-Volatility Property and Pinched Hysteresis Loops of 2-terminal Devices and Memristors
It is well known that memristors can be classified into four classes according to the complexity of their mathematical representation. Furthermore, these four classes of memristors are used to qualitatively simulate many of the experimentally measured pinched hysteresis loops. In this paper, we define 2-terminal devices which do not belong to the above four classes of memristors but have the same non-volatility property as the ideal memristor. We then study the non-volatile mechanism of these devices and of the memristors that can retain the previous value of the state even when the driving signal is set to zero. We show that the ideal generic memristors and the generalized 2-terminal devices can have interesting applications, namely non-volatile multi-valued memories and two-element chaotic oscillators, if we remove the condition that no state change occurs after the zero driving signal. We also show that the 2-terminal devices and the four classes of memristors can exhibit a wide variety of pinched hysteresis loops similar to those measured experimentally. Furthermore, we show that a wide variety of Lissajous curves are possible, depending on whether the direction of the Lissajous curve is clockwise or counterclockwise and on which quadrants the Lissajous curve passes through.
[464] vixra:2302.0052 [pdf]
A Proposal for Mass Variation under Gravity
We review the special relativistic properties of a rotating system of coordinates, as considered by Einstein, in his initial considerations of time and length changes in gravity. Then, following Einstein's application of the principle of equivalence, we propose that an object's energy gain implies a gravitational potential dependent mass increase (GPDM). We then show how this mass-energy change is a function of frequency changes in light and hence time dilation. Applying this to gravitational lensing, we present predictions of GPDM in galaxies and large gravitational bodies. We explore various possible cosmological implications, including rapid clumping in the early universe, the formation of the cosmic web and dark matter-like phenomena. Experiments to test the theory in the terrestrial domain are also suggested.
[465] vixra:2302.0050 [pdf]
Detection of the Gravitational Wave of the Crab Pulsar in the O3b Series from Ligo
After compensation for phase modulation and frequency drift, the pulsar's GW can be detected in the records of all three interferometers. The signatures agree with the known values measured with electromagnetic waves.
[466] vixra:2302.0042 [pdf]
Neuro-symbolic Meta Reinforcement Learning for Trading
We model short-duration (e.g. day) trading in financial markets as a sequential decision-making problem under uncertainty, with the added complication of continual concept-drift. We therefore employ meta reinforcement learning via the RL2 algorithm. It is also known that human traders often rely on frequently occurring symbolic patterns in price series. We employ logical program induction to discover symbolic patterns that occur frequently as well as recently, and explore whether using such features improves the performance of our meta reinforcement learning algorithm. We report experiments on real data indicating that meta-RL is better than vanilla RL and also benefits from learned symbolic features.
[467] vixra:2302.0041 [pdf]
Note About The Equipotential Surface of The Normal Gravity Field U=U_0
The object of this note is to present the equation of the surface which defines the equipotential field of gravity $U=U_0$ for certain models of the normal potential of gravity $U$.
[468] vixra:2302.0038 [pdf]
Interpretation of Superposition of Eigen States and Measurement Problem Concerning Statistical Ensemble
The measurement problem is an important open question for the interpretation of the foundations of quantum mechanics. For the purpose of solving this problem, we focus from a new angle on the interpretation of the superposition principle that is the origin of the measurement problem. As a result, we show that the measurement problem at issue cannot arise, provided the mathematical and physical aspects of the superposition principle are considered correctly. Our work demonstrates that since any mathematical superposition of eigenstates is never a new eigenstate, the superposed state at issue, if any, should be interpreted simply as a statistical ensemble of possible states that occur sequentially, instead of a mixed state indicative of simultaneous occurrence. Actually, this view leads to the conclusion that the concept of the currently accepted state vector and the motivation of the measurement problem have no perfect ground from both mathematical and physical aspects. Furthermore, using a mainly mathematical method rather than thought experiments, we offer a realistic interpretation of the superposition of eigenstates based on an ensemble of quantum states, thereby helping to capture the essence of the measurement problem, which actually is not implicated in Apparatus and Observer.
[469] vixra:2302.0026 [pdf]
Samsad Bangla Abhidan and The Graphical Law
We study the entries of the dictionary, the Samsad Bangla Abhidan, compiled by Sailendra Biswas, in its fifth edition. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the dictionary can be characterised by BW(c=0.01), the magnetisation curve of the Ising Model in the Bragg-Williams approximation in the presence of an external magnetic field, H. $c=\frac{H}{\gamma \epsilon}=0.01$, with $\epsilon$ being the strength of coupling between two neighbouring spins in the Ising Model and $\gamma$ representing the number of nearest neighbours of a spin, which is very large.
[470] vixra:2302.0019 [pdf]
Some Deep Properties of the Green Function of Q. Guan on the line of Suita-Saitoh-Yamada's Conjectures
In this paper, we refer to some deep results of Q. Guan on the Green function, concerning the conjugate analytic Hardy $H_2$ norm and the line of Oikawa-Sario's problems (Suita's conjecture, Saitoh's conjecture and Yamada's conjecture), and we propose new related open problems. In particular, Q. Guan and his colleagues examined the deep properties of the inversion of the normal derivative of the Green function, the property of the level curves of the Green function, and the magnitude of the logarithmic capacity.
[471] vixra:2302.0017 [pdf]
A Monte Carlo Packing Algorithm for Poly-Ellipsoids and Its Comparison with Packing Generation Using Discrete Element Model
Granular materials appear frequently in geotechnical engineering, petroleum engineering, materials science and physics. The packings of a granular material play a very important role in its mechanical behavior, such as the stress-strain response, stability, permeability and so on. Although packing is such an important research topic that its generation has attracted a lot of attention for a long time in theoretical, experimental, and numerical work, the packing of granular materials is still a difficult and active research topic, especially the generation of random packings of non-spherical particles. To this end, we generate packings of the same particles, with the same shapes, numbers, and size distribution, using a geometric method and a dynamic method separately. Specifically, we extend one of the Monte Carlo models for spheres to ellipsoids and poly-ellipsoids.
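The abstract does not specify its Monte Carlo model, but the simplest geometric packing generator of this family, random sequential addition for spheres, gives the flavor: propose random positions and reject any trial that overlaps an already placed particle. A minimal sketch under that assumption (all names and parameters hypothetical):

```python
import math
import random

def rsa_sphere_packing(box=1.0, radius=0.05, attempts=2000, seed=0):
    """Random sequential addition of equal spheres in a cubic box:
    each trial center is accepted only if it is at least one
    diameter away from every previously placed center."""
    rng = random.Random(seed)
    centers = []
    for _ in range(attempts):
        c = (rng.uniform(radius, box - radius),
             rng.uniform(radius, box - radius),
             rng.uniform(radius, box - radius))
        if all(math.dist(c, o) >= 2 * radius for o in centers):
            centers.append(c)
    return centers

spheres = rsa_sphere_packing()
volume_fraction = len(spheres) * (4 / 3) * math.pi * 0.05 ** 3
print(len(spheres), round(volume_fraction, 3))
```

Extending such a generator to ellipsoids or poly-ellipsoids mainly replaces the distance test with an orientation-dependent overlap test, which is where the difficulty the authors mention lies.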
[472] vixra:2302.0010 [pdf]
Tutorial: The Galilean Transformations’ Conflict with Electrodynamics, and its Resolution Using the Four-Potentials of Constant-Velocity Point Charges
Acceleration is invariant under the Galilean transformations, which implies that a system moving at a nonzero constant velocity doesn't undergo acceleration it isn't already subject to when it is at rest. However a charged particle moving at a nonzero constant velocity in a static magnetic field undergoes acceleration it isn't subject to when it is at rest in that field (Faraday's Law or the Lorentz Force Law), and the needle of a magnetic compass moving at a nonzero constant velocity in a static electric field undergoes deflection it isn't subject to when it is at rest in that field (Maxwell's Law). The Galilean transformations therefore conflict with electrodynamics, and must be modified. Einstein obtained the modified Galilean transformations via postulating that the speed of light in empty space is fixed at the value c, which in fact is a consequence of electrodynamics rather than a postulate. Here we instead read off the space part of a modified constant-velocity Galilean transformation from the four-potential of a point charge moving at that constant velocity; its time part then follows from its space part plus either relativistic reciprocity (a fundamental property of the unmodified Galilean transformations) or the fixed speed c of light.
[473] vixra:2301.0150 [pdf]
Neutron Lifetime Anomaly and Mirror Matter Theory
This paper reviews the puzzles in modern neutron lifetime measurements and related unitarity issues in the CKM matrix. It is not a comprehensive and unbiased compilation of all historic data and studies, but rather a focus on compelling evidence leading to new physics. In particular, the largely overlooked nuances of different techniques applied in material and magnetic trap experiments are clarified. Further detailed analysis shows that the ``beam'' approach of neutron lifetime measurements is likely to give the ``true'' $\beta$-decay lifetime, while discrepancies in ``bottle'' measurements indicate new physics at play. The most feasible solution to these puzzles is a newly proposed ordinary-mirror neutron ($n-n'$) oscillation model under the framework of mirror matter theory. This phenomenological model is reviewed and introduced, and its explanations of the neutron lifetime anomaly and possible non-unitarity of the CKM matrix are presented. Most importantly, various new experimental proposals, especially lifetime measurements with small/narrow magnetic traps or under super-strong magnetic fields, are discussed in order to test the surprisingly large anomalous signals that are uniquely predicted by this new $n-n'$ oscillation model.
[474] vixra:2301.0142 [pdf]
Some Unifications Needed in Particle Physics
The current crisis in Elementary Particle Physics requires a few new unification ideas: fermions and bosons, leptons and quarks, spin-parity and flavor, etc., in order to resolve several problems in fundamental physics. Some possibilities involving well known mathematical models are suggested and a few questions are raised. By now it is clear that the four fundamental interactions of the Standard Model can be related at low energies, with a natural change of viewpoint regarding what quarks are, replacing the pointwise particle concept, beyond String Theory, with that of the qubit as a basic state, at the level of Quantum Computing. Understanding the role of the neutrino is also a recognized major point of today's Physics.
[475] vixra:2301.0134 [pdf]
Correlation Between Substance Representing that Tier and Its Typical Price in Several Games Using a Tier System
Substances representing tier (Iron, Bronze, Silver, Gold, Platinum, Diamond) and its typical price (USD/gram) in several games using a tier system have a positive correlation [1, 2, 5].
[476] vixra:2301.0127 [pdf]
Quantum Corrections to the Alfven Waves in the Tokamak and Iter Plasma
The hydrodynamical model of quantum mechanics based on the Schroedinger equation is combined with the magnetohydrodynamical term to form the so-called quantum magnetohydrodynamic equation. It is shown that the quantum correction to the Alfven waves follows from this new equation. A possible generalization is considered for the so-called nonlinear Schroedinger equation and for the situation where dissipation is described by the Navier-Stokes equation.
[477] vixra:2301.0122 [pdf]
Relations Between e, π and Golden Ratios
We write out relations between the base of natural logarithms (e), the ratio of the circumference of a circle to its diameter (π), and the golden ratios of the additive p-sequences. An additive p-sequence is a natural extension of the Fibonacci sequence in which every term is the sum of p previous terms given p>=1 initial values called seeds.
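The "golden ratio" of an additive p-sequence can be computed directly from the definition in the abstract: iterate the recurrence and take the limit of the ratio of successive terms, which is the real root greater than 1 of $x^p = x^{p-1} + \cdots + x + 1$. A small sketch (the function name is ours, not the paper's):

```python
def p_golden_ratio(p, iterations=200):
    """Limit of the ratio of successive terms of an additive
    p-sequence: every term is the sum of the p previous terms,
    starting from p seeds equal to 1."""
    seq = [1] * p                      # p >= 1 initial values (seeds)
    for _ in range(iterations):
        seq.append(sum(seq[-p:]))      # sum of the p previous terms
    return seq[-1] / seq[-2]

print(p_golden_ratio(2))  # ~1.618..., the classical golden ratio
print(p_golden_ratio(3))  # ~1.839..., the tribonacci constant
```

For p = 1 the sequence is constant and the ratio is 1; as p grows, the ratio approaches 2.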
[478] vixra:2301.0115 [pdf]
The Generic Action Principles of Conformal Supergravity
All N = 4 conformal supergravities in four space-time dimensions are constructed. These are the only N = 4 supergravity theories whose actions are invariant under off-shell supersymmetry. They are encoded in terms of a holomorphic function that is homogeneous of zeroth degree in scalar fields that parametrize an SU(1, 1)/U(1) coset space. When this function equals a constant, the Lagrangian is invariant under continuous SU(1, 1) transformations. Based on the known non-linear transformation rules of the Weyl multiplet fields, the action of N = 4 conformal supergravity is constructed up to terms quadratic in the fermion fields. The bosonic sector corrects a recent result in the literature. The construction of these higher-derivative invariants also opens the door to various applications for non-conformal theories.
[479] vixra:2301.0110 [pdf]
Connections Between the Plastic Constant, the Circle and the Cuspidal Cubic
The unit circle and the cuspidal cubic curve have been found to intersect at coordinates that can be defined by the Plastic constant, which is defined as the real solution of the cubic equation x^3 = x + 1. This report explores the connections between the algebraic properties of the Plastic constant and the geometric properties of the circle and this curve.
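Assuming the cuspidal cubic is taken in its standard form y^2 = x^3 (the abstract does not spell this out), the intersection with the unit circle x^2 + y^2 = 1 has abscissa satisfying x^3 + x^2 = 1, and x = 1/ρ solves this whenever ρ^3 = ρ + 1. A quick numerical check of that claim (function name and method are ours):

```python
def plastic_constant(tol=1e-12):
    """Solve x^3 = x + 1 by bisection; the real root lies in (1, 2)."""
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** 3 - mid - 1 < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rho = plastic_constant()
x = 1 / rho                  # candidate intersection abscissa
y = x ** 1.5                 # point on the cuspidal cubic y^2 = x^3
print(rho)                   # ~1.3247179572...
print(x ** 2 + y ** 2)       # ~1.0, so the point also lies on the unit circle
```

The algebra behind the check: dividing ρ^3 = ρ + 1 by ρ^3 gives 1 = 1/ρ^2 + 1/ρ^3, i.e. x^2 + x^3 = 1 for x = 1/ρ.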
[480] vixra:2301.0102 [pdf]
The Liquid Metallic Hydrogen Model of the Sun: Modelling a Density Profile of the Chromosphere
In recent years, the liquid metallic hydrogen model has proven to be a viable alternative to the standard solar model, almost exclusively due to the work of Pierre-Marie Robitaille (2002, 2009, 2011, 2013). By modeling the density of both the liquid metallic and the molecular state of hydrogen from first principles, the pressure at the phase transition can be estimated, resulting in about 550 GPa. In the liquid metallic hydrogen model, this phase transition defines the photosphere, which can therefore be considered a real surface, as this is consistent with many observations. However, considerable pressure must be exerted by the chromosphere above, which is assumed to consist of molecular hydrogen, albeit in a compressed, liquid form. Deriving a relation between pressure and density from the above considerations, it can be shown that the chromosphere has an approximate thickness of not more than 8000 km, in agreement with observations.
[481] vixra:2301.0101 [pdf]
Modeling Bias in Vaccine Trials Relying on Fragmented Healthcare Records
COVID-19 vaccine trials depend on the localization of vaccination records for each trial subject. Misclassification bias occurs when vaccination records cannot be localized or uniquely identified. This bias may be significant in trials where the trial subjects’ vaccination and health records are distributed between more than one database. The potential for this bias is present in numerous published COVID-19 vaccine trials. A model is proposed for estimation of the magnitude of this bias on apparent vaccine efficacy. In the model, misclassification is always in the direction from partial or fully vaccinated status to unvaccinated status. The model predicts a disproportionate effect of vaccination status misclassification on the apparent vaccine efficacy when population vaccination rates are high.
[482] vixra:2301.0098 [pdf]
On the Twin Prime Conjecture
Every prime number $p \geq 5$ has the form $6x-1$ or $6x+1$. We call $x$ the generator of $p$. Twin primes are distinguished by a common generator for each pair. Therefore it makes sense to search for the twin primes on the level of their generators. This paper presents a new approach to proving the Twin Prime Conjecture by a method that extracts all twin primes on the level of the twin prime generators. We define the $\om{p_n}$-numbers $x$ as numbers for which $6x-1$ and $6x+1$ are coprime to the primes $5,7,\ldots,p_n$. By dint of the average size $\bd(p_n)$ of the $\om{p_n}$-gaps we can prove the Twin Prime Conjecture.
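The generator idea in the abstract is easy to make concrete: a twin prime pair (other than (3, 5)) is exactly a value x for which both 6x-1 and 6x+1 are prime. A small sketch (function names ours, and `is_prime` is a naive trial-division test, not the paper's method):

```python
def is_prime(n):
    """Naive trial-division primality test, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def twin_prime_generators(limit):
    """All x <= limit such that 6x-1 and 6x+1 are both prime;
    each such x is the common generator of a twin prime pair."""
    return [x for x in range(1, limit + 1)
            if is_prime(6 * x - 1) and is_prime(6 * x + 1)]

print(twin_prime_generators(20))
# -> [1, 2, 3, 5, 7, 10, 12, 17, 18],
#    i.e. the pairs (5,7), (11,13), (17,19), (29,31), ...
```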
[483] vixra:2301.0096 [pdf]
Why Gravity Cannot be a Classical Force
Some properties of general fields are reviewed. Electrostatics is a natural, universal paradigm, part of classical field theory. Newton's Gravity Theory, on the other hand, has properties that suggest a quantum origin, via an internal property akin to spin. Such an abstract "internal" property breaks isotropy of pointwise singularities of fields, yielding an attractive force. Theory of Gravity based on the quark structure of nucleons is consistent with the above considerations.
[484] vixra:2301.0088 [pdf]
The Theory of Plafales. P vs NP Problem Solution (Sections 1-7)
This paper is dedicated to a rigorous review of the theory of plafales, which describes the properties and applications of a new mathematical object. As a consequence of the created theory we give a proof of the equality of complexity classes P and NP.
[485] vixra:2301.0082 [pdf]
On Solutions to Erdos-Straus Equation Over Certain Integer Powers
We apply the notion of the olloid to show that the Erd\H{o}s-Straus equation $$\frac{4}{n^{2^l}}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$$ has solutions for all $l\geq 1$, provided the equation $$\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$$ has a solution for a fixed $n>4$.
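The base case of the Erdős-Straus equation can be checked by brute force: subtract candidate unit fractions from 4/n and test whether the remainder is itself a unit fraction. A sketch with exact rational arithmetic (the search bound and function name are our own choices, unrelated to the paper's olloid technique):

```python
from fractions import Fraction

def erdos_straus(n, x_max=200):
    """Brute-force search for 4/n = 1/x + 1/y + 1/z with x <= y,
    returning (x, y, z) or None if nothing is found within x_max."""
    target = Fraction(4, n)
    for x in range(1, x_max):
        r1 = target - Fraction(1, x)
        if r1 <= 0:
            continue
        for y in range(x, x_max):
            r2 = r1 - Fraction(1, y)
            if r2 <= 0:
                continue
            if r2.numerator == 1:      # remainder is a unit fraction 1/z
                return x, y, r2.denominator
    return None

print(erdos_straus(5))   # -> (2, 4, 20): 1/2 + 1/4 + 1/20 = 4/5
print(erdos_straus(25))  # a solution for the square case n^2 as well
```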
[486] vixra:2301.0075 [pdf]
Sanchayita and the Graphical Law
We study the Sanchayita, a collection of poems of Rabindranath Tagore. We draw the natural logarithm of the number of titles of the poems of the Sanchayita, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Sanchayita can be characterised by the magnetisation curve of the Ising Model in the Bragg-Williams approximation in the absence of an external magnetic field.
[487] vixra:2301.0067 [pdf]
Dark Matter and Dark Energy Found
The Reality-Sucks theory can compute both the dark matter and the dark energy in the required range. The ratio between dark and ordinary matter is estimated between 83.0% and 85.7%. Dark matter and dark energy together constitute between 94.4% and 95.8% of the total mass-energy content. Similar results should be obtained using the Λ-CDM model.
[488] vixra:2301.0062 [pdf]
On Anti-Gravity
This is an informal overview in support of "antigravity" as a real, observable and reproducible phenomenon that we now begin to understand. This corresponds to the highest energy level of interaction of two nucleons, nuclear spin dependent, in addition to the state that yields Newtonian Gravity, which is an average over spin directions, achieving a lower energy level, attractive at equilibrium, and a much weaker effective force (Hierarchy Problem). The Standard Model based Theory of Gravity is recalled. Some of the lab experimental methods to control gravitational interaction are reviewed: microwave Dynamical Nuclear Orientation (Polarization) based (Alzofone, Hutchinson a.a.) and rotation based, which requires superconductivity (Podkletnov, Ning Li) to take advantage of the properties of the Bose-Einstein condensate. Other observations and methods supporting the theory and experiments are recalled. A program to help implement the technology, within the new paradigm in Physics, is proposed: open source research and implementation.
[489] vixra:2301.0058 [pdf]
Light Refraction and Gravitational Redshift
The inductive-inertial phenomenon is the precondition of the E/M waves, while the spin oscillations of the electron cause the E/M formations. The photon is the autonomous motion of the E/M wave with constant photon length, and the number of its fundamental E/M waves determines its wavelength. The light speed is determined as the transmission speed of the disturbance into the tense elastic-dynamic space. However, the light speed depends on the cohesive pressure, which is proportional to the square of the distance from the Universe center, and therefore it is a local constant in our region. The change of cohesive pressure in electric fields directly affects the change in the light speed, which can be attributed to photon refraction phenomena. The deviation of E/M waves in dynamic fields occurs, of course, in the gravitational field as well. It is proved that light has gravity only in the back half-space, resulting in the gravitational redshift of the stars' spectrum, while a gravitational blueshift cannot be detected, since there is no gravity in the front half-space of the E/M wave.
[490] vixra:2301.0057 [pdf]
An Additional Note On Least Squares Method
In this additional note on least squares, we come back to a few theorems of the theory of errors and of least squares which are not well enough known, even among geodesists. This is due to the absence of documentation in French on the subject. This note was inspired by a work of the Russian school, by the mathematician Yori Vladimirovich LINNIK (1915-1972), on the theory of least squares, published in French in 1963.
[491] vixra:2301.0054 [pdf]
The Kilogram, Inertial Mass, Gravitational Mass and Types of Universal Free Fall
When the gravitational motion of test objects is studied, two distinct concepts of mass are invoked in principle, namely, inertial mass and (passive) gravitational mass. If inertial mass and gravitational mass are quantities, then according to the definition of ``quantity'' available in the standard literature of metrology, there need to be specific units corresponding to inertial mass and gravitational mass. The possibilities of such definitions and the associated obstacles of reason are discussed. A recent classification of ``kilogram'' as a unit of inertial mass, which depends on the realization of the kilogram through the atom count method involving the use of matter wave interferometers, is critically analyzed. The process of reasoning that leads to such a classification is misleading because the equivalence between inertial mass and gravitational mass is an implicit assumption, in the process, both at the macroscopic level (mass measurement of a silicon sphere) and the microscopic level (mass measurement of a silicon atom with atom wave interferometers), resulting in no a priori distinction between inertial mass and gravitational mass in the concerned literature. Further inquiry reveals a crucial, but hitherto unexplained, difference between the working principles of neutron wave interferometers and atom wave interferometers. In neutron wave interferometers, the universal free fall of neutrons is attributed to the equality of inertial mass and gravitational mass, whereas in atom wave interferometers the distinction between inertial mass and gravitational mass is not made in the first place and the universal free fall of atoms is attributed to either the mass difference of the different species of atoms or different energy states of the same species of atom. Such observations result in the recognition of different {\it types} of universal free fall that are studied in the literature which, however, has hitherto not been recognized assertively.
[492] vixra:2301.0050 [pdf]
The Θ-Brane Theory : A Canvas for the Universe
Among the different models of representation of the universe, the Friedmann positive curvature model often comes up. It is a model using the universal, harmonious and simple shape of the sphere, therefore a closed but unlimited universe, uniform in all its directions. The figure on the right is a simplified representation. However, to properly contain the universe, this shape must have one dimension more than the conventional sphere of our 3D world (called a 2-sphere in mathematics). It is therefore a 3-sphere, which takes one dimension more than the 2-sphere: its surface becomes a hyper-surface, therefore a volume, and the volume it occupies becomes a space with 4 Euclidean dimensions. Thus, in this model, the universe resides in the hypersurface of the 3-sphere and requires 4D space to unfold. We also know that the universe is essentially made up of vacuum at an extremely low average density of around 1 atom/m³. In this context, the legitimate question that can be asked is how the universe, being so empty, can ensure its cohesion in the even emptier 4D space which surrounds it both inside and outside of its hyper-surface. The present article responds to this by hypothesizing a solid and perfectly elastic hyper-membrane which occupies the entire hyper-surface of the 3-sphere and which thus serves as a support structure for the universe. Once well characterized, this hyper-membrane, named here Θ-Brane, can then serve as a support for the universe itself but also as a support for the propagation of its waves, for the movement of its matter without drag effect, and finally support its various energy fields, all this without contradiction with the two postulates of Special Relativity. Furthermore, the presence of this hyper-membrane makes it possible to provide explanations for matter-antimatter asymmetry, the exclusive helicity of neutrinos-antineutrinos, gravitation based on quantum fluctuations of the vacuum, and more.
[493] vixra:2301.0041 [pdf]
On The Current Physics Crises and The Hierarchy Problem
The crisis in fundamental physics is by now openly reported and acknowledged. The new paradigm, based on the Network Model, the quark field of baryons and mesonic nuclear bonds, proposes a reinterpretation of the Standard Model and of fundamental experiments in quantum physics that claims to resolve major debates, including the Hierarchy Problem. Recall that Electro-Gravity Magneto-Dynamics is the theory of the long range component of the quark field, which is still missed by, for instance, the Yukawa potential, color QCD or chiral Effective Field Theory. The short range component described by the nuclear effective potential of chiral Effective Field Theory is consistent with Yukawa's meson model of nuclear forces, unified by the above approach. The experiments verifying this model of Gravity are recalled: F. Alzofon and D. Sarkadi. Is there a relation between the two? Further confirmations of the new Theory of Gravity come from Podkletnov's and Ning Li's research and experiments. The new paradigm is claimed to be consistent with the already existing Standard Model, with its accurate predictions but containing historical misinterpretations, proposing a change in our point of view and goals.
[494] vixra:2301.0040 [pdf]
Improving ML Algorithmic Time Complexity Using Quantum Infrastructure
With the rising popularity of machine learning in the past decade, a stronger urgency has been placed on drastically improving computational technology. Despite recent advancements in this industry, the speed at which our technologies can complete machine learning tasks continues to be its most significant bottleneck. Modern machine learning algorithms are notorious for requiring a substantial amount of computational power. As the demand for computational power increases, so does the demand for new ways to improve the speed of these algorithms. Machine learning researchers have turned to leverage quantum computation to significantly improve their algorithms' time complexities. This counteracts the physical limitations that come with the chips used in our technology today. This paper questions current classical machine learning practices by comparing them to their quantum alternatives and addressing the applications and limitations of this new approach.
[495] vixra:2301.0034 [pdf]
On Born Reciprocal Relativity, Algebraic Extensions of the Yang and Quaplectic Algebra, and Noncommutative Curved Phase Spaces
After a brief introduction to Born's reciprocal relativity theory, we review the construction of the deformed Quaplectic group that is given by the semi-direct product of U(1,3) with the deformed (noncommutative) Weyl-Heisenberg group corresponding to noncommutative fiber coordinates and momenta $[X_a, X_b] \not= 0$; $[P_a, P_b] \not= 0$. This construction leads to more general algebras given by a two-parameter family of deformations of the Quaplectic algebra, and to further algebraic extensions involving antisymmetric tensor coordinates and momenta of higher ranks $[X_{a_1 a_2 \cdots a_n}, X_{b_1 b_2 \cdots b_n}] \not= 0$; $[P_{a_1 a_2 \cdots a_n}, P_{b_1 b_2 \cdots b_n}] \not= 0$. We continue by examining algebraic extensions of the Yang algebra in extended noncommutative phase spaces and compare them with the above extensions of the deformed Quaplectic algebra. A solution is found for the exact analytical mapping of the non-commuting $x^\mu, p^\mu$ operator variables (associated to an $8D$ curved phase space) to the canonical $Y^A, \Pi^A$ operator variables of a flat $12D$ phase space. We explore the geometrical implications of this mapping which provides, in the {\it classical} limit, the embedding functions $Y^A(x,p), \Pi^A(x,p)$ of an $8D$ curved phase space into a flat $12D$ phase space background. The latter embedding functions determine the functional forms of the base spacetime metric $g_{\mu\nu}(x,p)$, the fiber metric of the vertical space $h^{ab}(x,p)$, and the nonlinear connection $N_{a\mu}(x,p)$ associated with the $8D$ cotangent space of the $4D$ spacetime. Consequently, one has found a direct link between noncommutative curved phase spaces in lower dimensions and commutative flat phase spaces in higher dimensions.
[496] vixra:2301.0022 [pdf]
Properties of Elementary Particles, Dark Matter, and Dark Energy
This paper suggests new elementary particles, a specification for dark matter, and modeling regarding dark-energy phenomena. Thereby, this paper explains data that other modeling seems not to explain. Suggestions include some methods for interrelating properties of objects, some catalogs of properties, a method for cataloging elementary particles, a catalog of all known and some method-predicted elementary particles, neutrino masses, quantitative explanations for observed ratios of non-ordinary-matter effects to ordinary-matter effects, qualitative explanations for gaps between data and popular modeling regarding the rate of expansion of the universe, and insight regarding galaxy formation and evolution. Key assumptions include that nature includes six isomers of most elementary particles and that stuff that has bases in five isomers underlies dark-matter effects. Key new modeling uses integer-arithmetic equations; stems from, augments, and does not disturb successful popular modeling; and helps explain aspects and data regarding general physics, elementary-particle physics, astrophysics, and cosmology.
[497] vixra:2301.0013 [pdf]
Where Are the "Hidden Variables" Hidden?
We aim to find a place where hypothetical hidden variables of quantum mechanics might be accommodated. We consider the possibility that hidden variables belong to the Calabi-Yau manifold, the space of six extra dimensions, appearing in the superstring theory.
[498] vixra:2301.0012 [pdf]
The Double Split Experiment
The 2-slit experiment is explained by using the Network Model, which then transfers to other experiments: entanglement, delayed choice, quantum erasure, etc. In essence, the De Broglie pilot wave on a Feynman path is an aspect of a fermionic quantum channel transmitting bosonic excitations between two nodes of the quantum network formed as part of the experiment. This model is called the Network Model. An essential feature of the Network Model is that it evolves in transient-steady state cycles, akin to machine learning: The Living Universe. Other considerations involving Space-Time, antimatter, Heisenberg uncertainty relations, fermion-boson unification, etc., are included.
[499] vixra:2301.0011 [pdf]
Abridged Riemann's Last Theorem
The central idea of this article is to introduce and prove a special form of the zeta function as a proof of Riemann's last theorem. The newly proposed zeta function contains two sub-functions, namely $f_1(b,s)$ and $f_2(b,s)$. The unique property of $\zeta(s)=f_1(b,s)-f_2(b,s)$ is that as $b$ tends toward infinity, the equality $\zeta(s)=\zeta(1-s)$ is transformed into an exponential expression for the zeros of the zeta function. At the limiting point, we simply deduce that the exponential equality is satisfied if and only if $\mathrm{Re}(s)=1/2$. Consequently, we conclude that the zeta function cannot be zero unless $\mathrm{Re}(s)=1/2$, hence proving Riemann's last theorem.
[500] vixra:2212.0218 [pdf]
Light Propagation
Light propagation is a cornerstone of the construction of A. Einstein's SR theory, and the comment he made in 1921 is still relevant. The existence or non-existence of a medium for light propagation has not yet been given a clear answer. It is admitted nowadays that light propagates in "vacuum", without knowing its nature. Even if we call "vacuum" the medium for the propagation of light, so as not to say "ether", it is still a mystery how this medium behaves inside transparent matter in motion relative to space. The negation of the term "ether" comes from debates about experiments made in the 19th century, when stellar aberration was discovered, which gave rise to Augustin Fresnel's proposition of the "entrainment of ether by matter". Hippolyte Fizeau validated this concept with an experiment in which light propagates through moving water, which was confirmed by Michelson a few years later, in water and air. To have a clearer idea of the cohabitation of the two entities, matter and ether, Michelson raised the issue of the behaviour of light in matter moving through ether ("ether wind"), which was not treated by Fizeau's experiment. The Michelson and Morley experiment was devoted to demonstrating the effect of the earth's motion through space on light propagation. The results obtained are still controversial. In this paper we show that the results obtained by Michelson, which were not null, can be explained by a new model of light propagation in transparent matter in motion through space.
[501] vixra:2212.0217 [pdf]
On Quark Field and Nuclear Physics
The quark field unifies the four interactions of the Standard Model. SU(2)-Nuclear Physics, as an analog of U(1)-chemistry, is related to discrete symmetry groups corresponding to quark flavors, supporting Dr. Moon's Model of the nucleus. Reinterpreting the Weak Force as modeling transitions of Klein geometries of baryons via Quark Line Diagrams, in particle accelerator experiments and Nuclear Physics, is attempted. The Nuclear Force is a resultant of the exchange of mesons as two-way quark "bonds" between nucleons, similar to electronic bonds in chemistry. An effective potential has terms corresponding to the Coulomb force, Gravity, and the Nuclear Force, with applications to Gravity Control and Cold Fusion / Biological Transmutations. Further considerations regarding supersymmetry and the Network Model of Quantum Physics are included.
[502] vixra:2212.0209 [pdf]
A Complete Proof of the Conjecture C Smaller Than Rad^{1.63}(abc)
In this paper, we consider the $abc$ conjecture. We give a proof that the conjecture $c < rad^{1.63}(abc)$ is true, which constitutes the key to resolving the $abc$ conjecture.
[503] vixra:2212.0205 [pdf]
A Framework for Uncertain Spatial Trajectory Processing
There are many factors that affect the precision and accuracy of location data. These factors include, but are not limited to, environmental obstructions (e.g., high buildings and forests), hardware issues (e.g., malfunctioning and poor calibration), and privacy concerns (e.g., users who do not want to release their location). These factors lead to uncertainty about a user's location, which in turn affects the quality of location-aware services. This paper proposes a novel framework called UMove (for uncertain movements) to manage trajectories of moving objects under location uncertainty. The UMove framework employs the connectivity (i.e., links between edges) and constraints (i.e., travel time and distance) of the road network graph to reduce the uncertainty of an object's past, present, and projected locations. To accomplish this, UMove incorporates (i) a set-based pruning algorithm to reduce or eliminate uncertainty from imprecise trajectories; and (ii) a wrapper that can extend user-defined probability models designed to predict future locations of moving objects under uncertainty. Intensive experimental evaluations based on real data sets of GPS traces prove the efficiency of the proposed UMove framework. In terms of accuracy, for past exact-location inference, UMove achieves rates from 88% to 97% for uncertain regions with sizes of 75 meters and 25 meters respectively; for future exact-location inference, rates can reach up to 72% and 82% for 75-meter and 25-meter uncertain regions.
[504] vixra:2212.0199 [pdf]
New Law or Boundary Condition of Electromagnetic Wave Theory
According to Maxwell's electromagnetic theory, or classical electromagnetic theory, a changing current on an antenna induces an electromagnetic wave around the antenna, which can propagate in space. When the electromagnetic wave propagates to the receiving antenna, it transfers electromagnetic energy and momentum to the receiving antenna, which thereby receives the electromagnetic signal. The above discussion is the standard description of electromagnetic waves in almost all electromagnetic field textbooks. The author finds this description wrong. According to the Wheeler-Feynman absorber theory, the radiation generated by the transmitting antenna is determined not only by the changing current of the transmitting antenna, but also by the current changes of the environmental materials. The material absorbing the electromagnetic wave affects the transmitting antenna by radiating the advanced wave. The author supports the Wheeler-Feynman view. Maxwell's electromagnetic theory needs to satisfy the Silver-Müller radiation condition. This condition actually describes a good absorber material arranged in the far field of the transmitting antenna, which can absorb all the radiated electromagnetic waves. Considering this, the author establishes a new electromagnetic theory, in which all transmitting antennas and absorber materials are near the origin. It is assumed that no matter on the sphere with infinite radius can absorb electromagnetic waves, so electromagnetic waves cannot transmit electromagnetic energy to infinity. The author adds this idea to Maxwell's electromagnetic theory, that is, a boundary condition that radiation does not overflow the universe. This boundary condition must conflict with Maxwell's electromagnetic theory, because the far field of Maxwell's electromagnetic theory can only take the Silver-Müller radiation condition.
However, the author finds that Maxwell's equations can be relaxed appropriately. In fact, the author relaxes the mutual energy principle, which is equivalent to Maxwell's equations. The relaxed mutual energy principle can accommodate the new boundary condition that radiation does not overflow the universe. This constitutes a new electromagnetic theory.
[505] vixra:2212.0195 [pdf]
A Note on the Standard Model
The four fundamental interactions of the SM can be unified as a quark field, by using the Hopf fibration to model the basic building block of matter: qubit space (software viewpoint) / quark structure of the neutron (hardware). This approach uses a much richer mathematical structure in lieu of the GUT approach via a larger symmetry group and recycling the gauge theory paradigm, while still missing Gravity. Quarks are not independent particles. The unified field is the quark field, a type $(2,1)$ vector field associated with neutrons, breaking the $SO(3)$-symmetry of classical or quantum pointwise charges. Under interactions with the environment it decays into the constituents of the stable form, the hydrogen atom. Weak interaction is not a force, but rather a transition between modes of vibration of baryons. The Strong Force needs to be redesigned as a nuclear force, instead of being tailored to confine quarks. Gravity is a correction to EM as the main long-range component of the quark field. Motion adds dynamical aspects to Gravity, including induction due to mass currents, which has been experimentally proved. Applications to Gravity Control, experimentally verified, and to controlling Cold Fusion / Transmutations, also experimentally observed, are briefly mentioned.
[506] vixra:2212.0185 [pdf]
The Graphical Law Behind the Head Words of Dictionary Kannada and English Written by W. Reeve, Revised, Corrected and Enlarged by Daniel Sanderson
We study the head words of the Dictionary Kannada and English written by W. Reeve, revised, corrected and enlarged by Daniel Sanderson. We draw the natural logarithm of the number of head words, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BW(c=0.01), the magnetisation curve of the Ising Model in the Bragg-Williams approximation in the presence of an external magnetic field, H. Here $c=\frac{H}{\gamma\epsilon}=0.01$, with $\epsilon$ being the strength of coupling between two neighbouring spins in the Ising Model and $\gamma$ representing the number of nearest neighbours of a spin, which is very large. Moreover, we put forth a parallelism with a Bengali dictionary.
[507] vixra:2212.0176 [pdf]
Efficient Integration of Perceptual VAE into Dynamic Latent Scale GAN
Dynamic latent scale GAN is a method to train an encoder that inverts the generator of a GAN with maximum likelihood estimation. In this paper, we propose a method to improve the performance of dynamic latent scale GAN by efficiently integrating a perceptual VAE loss into dynamic latent scale GAN. When a dynamic latent scale GAN is trained with a normal i.i.d. latent random variable and the latent encoder is integrated into the discriminator, the sum of the predicted latent random variable of real data and a scaled normal noise follows a normal i.i.d. random variable. This random variable can be used for both VAE and GAN training. Considering the intermediate layer output of the discriminator as a feature encoder output, the generator can be trained to minimize the perceptual VAE loss. Also, inference and backpropagation for the perceptual VAE loss can be integrated into those for GAN training. Therefore, perceptual VAE training does not require additional computation. Also, the proposed method does not require a prior loss or variance estimation like VAE.
[508] vixra:2212.0171 [pdf]
The Problem of the Causality in the Atomic World
Interaction theories are usually based on a relativistically invariant Lagrange function. This function is generally known and accepted for the electromagnetic interaction. The variation of that Lagrangian leads to the system of the coupled Maxwell-Dirac equations. It contains a non-linear term. If one neglects this term, one obtains the well-known linear Dirac equation and rules for determining the correct values of the spectral lines of atoms. However, one cannot then describe the radiation process and has to introduce the quantum hypothesis. But if the non-linear term is also taken into account, there are solutions of the system that describe the emission of "quantum jumps" in space and time with correct frequencies. This is demonstrated in the presented work for hydrogen and helium atoms. It explains the entangled eigenfunctions in the context of a classical near-field theory. Further problems like diffraction effects, photo effects and relativistic transformation of the field tensor are discussed. The aim of the work is a proposal of an alternative to the statistical interpretation of the quantum theory in the context of a classical near-field theory.
[509] vixra:2212.0167 [pdf]
A Note on Dark Matter and Dark Energy
The hypothetical dark matter and dark energy play the role of sources of gravity, supplementing what General Relativity predicts based on the observed masses in the Universe. This brief note accounts for this additional G-force / potential based on the Theory of Gravity of quantum origin, emerging from the quark structure of matter. This supplemental gravity is due to the presence of fractional charges of quarks, which yield different interaction strengths depending on the polarization of the spin directions of neutrons and protons. In the presence of the high-intensity magnetic fields of rotating systems (neutron stars, galaxies, etc.) the change in polarization of spins affects the intensity of the gravitational field. Note that such a dependence cannot be accounted for by GR via only the metric of space-time. This is consistent with theories of Modified Gravity on fiber bundles, including spin for instance, as effective theories of Gravity without dark matter and dark energy.
[510] vixra:2212.0164 [pdf]
Consistent Supergravity Theories in Diverse Dimensions
This article reviews some properties of the low-energy effective actions for consistent supergravity models. We summarize the current state of knowledge regarding quantum gravity theories with minimal supersymmetry. We provide an elegant extension of the theory and give definitions of the anomaly-free models in advanced supergravity constructions. The deep relation between anomalies and inconsistency is emphasized in this research. The conditions for anomaly cancellation in these supergravity theories typically constitute certain types of equations. For completeness of the theoretical framework, we include anomaly-free models, which are consistent supergravity theories.
[511] vixra:2212.0163 [pdf]
The SP-multiple-alignment Concept as a Generalisation of Six Other Variants of "Information Compression via the Matching and Unification of Patterns"
This paper focusses on the powerful concept of SP-multiple-alignment, a key part of the SP System (SPS), meaning the SP Theory of Intelligence and its realisation in the SP Computer Model. The SPS is outlined in an appendix. More specifically, the paper shows with examples how the SP-multiple-alignment construct may function as a generalisation of six other variants of 'Information Compression via the Matching and Unification of Patterns' (ICMUP). Each of those six variants is described in a separate section, and in each case there is a demonstration of how that variant may be modelled via the SP-multiple-alignment construct.
[512] vixra:2212.0162 [pdf]
Improved Bound for the Number of Integral Points in a Circle of Radius R Larger Than 1
Using the method of compression, we prove an inequality related to the Gauss circle problem. Let $\mathcal{N}_r$ denote the number of integral points in a circle of radius $r>0$; then we have $$2r^2\bigg(1+\frac{1}{4}\sum\limits_{1\leq k\leq \lfloor \frac{\log r}{\log 2}\rfloor}\frac{1}{2^{2k-2}}\bigg)+O\bigg(\frac{r}{\log r}\bigg) \leq \mathcal{N}_r \leq 8r^{2}\bigg(1+\sum\limits_{1\leq k\leq \lfloor \frac{\log r}{\log 2}\rfloor}\frac{1}{2^{2k-2}}\bigg)+O\bigg(\frac{r}{\log r}\bigg)$$ for all $r>1$.
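The quantity $\mathcal{N}_r$ bounded above can be checked numerically by direct enumeration. The following is a minimal brute-force sketch (my own illustration, not the paper's compression method); the function name `lattice_points` is an assumption for this example:

```python
import math

def lattice_points(r):
    """Count N_r: integer points (x, y) with x^2 + y^2 <= r^2."""
    m = math.floor(r)
    total = 0
    for x in range(-m, m + 1):
        # For each x, the admissible y range is symmetric about 0.
        y_max = math.floor(math.sqrt(r * r - x * x))
        total += 2 * y_max + 1
    return total
```

For small radii this reproduces the classical values of the Gauss circle problem, e.g. $\mathcal{N}_1 = 5$ and $\mathcal{N}_2 = 13$.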
[513] vixra:2212.0157 [pdf]
Mirror Symmetry for New Physics Beyond the Standard Model in 4D Spacetime
The two discrete generators of the full Lorentz group $O(1,3)$ in $4D$ spacetime are typically chosen to be the parity inversion symmetry $P$ and the time reversal symmetry $T$, which are responsible for the four topologically separate components of $O(1,3)$. Under general considerations of quantum field theory (QFT) with internal degrees of freedom, mirror symmetry is a natural extension of $P$, while $CP$ symmetry resembles $T$ in spacetime. In particular, mirror symmetry is critical as it doubles the full Dirac fermion representation in QFT and essentially introduces a new sector of mirror particles. Its close connection to T-duality and Calabi-Yau mirror symmetry in string theory is clarified. Extensions beyond the Standard Model can then be constructed using both left- and right-handed heterotic strings guided by mirror symmetry. Many important implications, such as supersymmetry, chiral anomalies, topological transitions, Higgs, neutrinos, and dark energy, are discussed.
[514] vixra:2212.0144 [pdf]
Models of Pen-PL Gaussian Games in Non-cooperative Communication
Consider non-cooperative pen games where both players act strategically and heavily influence each other. In spam and malware detection, players exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time. The result shows Pen-PL Games have a probability distribution that approximates a Gaussian distribution according to some probability distribution defined over the respective strategy set. With quadratic cost functions and multivariate Gaussian processes, evolving according to first order auto-regressive models, we show that Pen-PL "smooth" curve signaling rules are optimal. Finally, we show that computing a socially optimal Pen-PL network placement is NP-hard and that this result holds for all P-PL-G distributions.
[515] vixra:2212.0143 [pdf]
Fifa World Cup 2022 and Gini Coefficient up to the Final
In this paper we study the FIFA World Cup 2022 matches up to the Final. In the Group Stage, for the decided matches, the probability for a team with a lower relative Gini coefficient to win was 22/38 ((22-3)/(38-4), i.e. 19/34, in the most conservative estimate). The probability to win with a lower HDI was 18/38, or 1/2.11, i.e. less than half. The probability to win with a lower Gini coefficient in the next three stages was 1/2. In the Round of Sixteen stage, the probability to win with a lower HDI swung in the opposite direction to 5/8, i.e. more than half, as in the previous World Cup. In the Quarter Final stage, the probability for a team with a lower relative HDI to win was 3/4, i.e. it swung further to the right. In the Semi Final stage, the probability for a team with a lower relative HDI to win was 1/2, i.e. it swung back to the center. In the Third Place Play-off, the team with the lower relative Gini coefficient and higher HDI won. In the Final, the team with the higher relative Gini coefficient and lower HDI won. The overall probability to win with a lower relative Gini coefficient was 30/54 (27/50).
[516] vixra:2212.0141 [pdf]
On the Finiteness of Sequences of Even Squarefree Fibonacci Numbers
Let $2p_1p_2\cdots p_{k-1}$ be an even squarefree Fibonacci number with $k$ distinct prime factors. For each positive $k$, such numbers form an integer sequence. We conjecture that each such sequence has only a finite number of terms. In particular, the factorization data for the first 1000 Fibonacci numbers suggests that there is only one such term for $k = 2$, 5 for $k = 3$, and 8 for $k = 4$. We also renew attention to the fact that a proof that there are infinitely many squarefree Fibonacci numbers remains lacking. Some approach to proving this, emerging from our study, is suggested.
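The sequences in question can be enumerated directly for small Fibonacci numbers. The sketch below (my own illustration, using naive trial division rather than the paper's factorization data; all names are assumptions for this example) lists the first few even squarefree Fibonacci numbers together with their number $k$ of distinct prime factors:

```python
def prime_factorization(n):
    """Prime factorization {p: exponent} via trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def even_squarefree_fibonacci(count):
    """First `count` even squarefree Fibonacci numbers, each paired
    with its number k of distinct prime factors."""
    out = []
    a, b = 1, 1
    while len(out) < count:
        a, b = b, a + b
        if a % 2 == 0:
            f = prime_factorization(a)
            if all(e == 1 for e in f.values()):  # squarefree check
                out.append((a, len(f)))
    return out
```

For instance, the first three such numbers are $F_3 = 2$, $F_9 = 34 = 2\cdot 17$, and $F_{15} = 610 = 2\cdot 5\cdot 61$, with $k = 1, 2, 3$ respectively.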
[517] vixra:2212.0129 [pdf]
Knot, Refractive Index, and Scalar Field
We construct the geometric optical knot in 3-dimensional Euclidean (flat) space of the Abelian Chern-Simons integral using the variables (the Clebsch variables) of the complex scalar field, i.e. the function of amplitude and the phase, where the phase is related to the refractive index.
[518] vixra:2212.0123 [pdf]
Explanatory Principle no 1 in Quantum Physics: the Entropy-Lessness of Physical Subsystems with Apparent Retrocausality
The EPR phenomenon, usually described with the use of the rather mystical process of quantum entanglement, is consistently explained here as a consequence of the ambivalence of the time direction in entropy-less physical subsystems of our world.
[519] vixra:2212.0116 [pdf]
On Tetration Theory
In this paper I explain and compare some of the different notations and properties of the tetration concept. I then provide some tables of reference for numerical tetration.
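For reference, the usual reading of tetration as iterated exponentiation, $^{n}b$ meaning a power tower of $n$ copies of $b$ evaluated from the top down, can be computed as follows (a minimal sketch of the standard definition, not of any particular notation compared in the paper):

```python
def tetration(b, n):
    """Compute ^n b: a power tower of n copies of b, evaluated top-down.
    The empty tower (n = 0) is 1 by convention."""
    result = 1
    for _ in range(n):
        result = b ** result
    return result
```

Evaluating top-down matters: $^{3}2 = 2^{(2^2)} = 16$, not $(2^2)^2 = 16$ by coincidence, while $^{4}2 = 2^{16} = 65536$.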
[520] vixra:2212.0113 [pdf]
Oxford Dictionary of Chemistry, the Seventh Edition and the Graphical Law
We study the Oxford Dictionary of Chemistry, the seventh edition. We draw the natural logarithm of the number of head entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0$), i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0$, in the absence of an external magnetic field, H. $\beta$ is $\frac{1}{k_B T}$ where T is temperature and $k_B$ is the tiny Boltzmann constant.
[521] vixra:2212.0100 [pdf]
Neutro-BCK-Algebra
This paper introduces the novel concept of Neutro-BCK-algebra. In a Neutro-BCK-algebra, the outcome of any given two elements under an underlying operation (neutrosophication procedure) has three cases: appurtenance, non-appurtenance, or indeterminate; while for an axiom: equal, non-equal, or indeterminate. This study investigates Neutro-BCK-algebras and shows that Neutro-BCK-algebras are different from BCK-algebras. The notion of Neutro-BCK-algebra generates the new concepts of NeutroPoset and Neutro-Hasse diagram for NeutroPosets. Finally, we consider an instance of application of the Neutro-BCK-algebra.
[522] vixra:2212.0080 [pdf]
A Survey on Deep Transfer Learning and Edge Computing for Mitigating the COVID-19 Pandemic
Global Health sometimes faces pandemics, as it is currently facing the COVID-19 disease. The spreading and infection factors of this disease are very high. A huge number of people from most countries were infected within six months of its first reported appearance, and it keeps spreading. The required systems are not ready at some stages for any pandemic; therefore, mitigation with existing capacity becomes necessary. On the other hand, the modern era largely depends on Artificial Intelligence (AI), including Data Science; Deep Learning (DL) is one of the current flag-bearers of these techniques. It could be used to mitigate COVID-19-like pandemics in terms of stopping the spread, diagnosis of the disease, drug & vaccine discovery, treatment, and many more. But DL requires large datasets as well as powerful computing resources. A shortage of reliable datasets of a running pandemic is a common phenomenon. So, Deep Transfer Learning (DTL) would be effective, as it learns from one task and could work on another. In addition, Edge Devices (ED) such as IoT devices, webcams, drones, intelligent medical equipment, and robots are very useful in a pandemic situation. These types of equipment make the infrastructures sophisticated and automated, which helps to cope with an outbreak. But they are equipped with low computing resources, so applying DL there is also a bit challenging; therefore, DTL would also be effective there. This article scholarly studies the potential and challenges of these issues. It describes relevant technical backgrounds and reviews the related recent state-of-the-art. This article also draws a pipeline of DTL over Edge Computing as a future scope to assist in the mitigation of any pandemic.
[523] vixra:2212.0072 [pdf]
Penguin Dictionary of Physics, the Fourth Edition, by John Cullerne, and the Graphical Law
We study the Penguin Dictionary of Physics, the fourth edition, by John Cullerne. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.01$), i.e. the Bethe-Peierls curve in the presence of four nearest neighbours and a little external magnetic field, $\beta H=0.01$. $\beta$ is $\frac{1}{k_B T}$ where T is temperature and $k_B$ is the tiny Boltzmann constant.
[524] vixra:2212.0063 [pdf]
Application of Oscillation Symmetry to Decay Mode Fractions of Several Mesons and Baryons
The oscillation symmetry is applied with success to the decay mode fractions of several mesons and baryons. The periods found display a "like quantification" behaviour.
[525] vixra:2212.0056 [pdf]
Introduction to SuperHyperAlgebra and Neutrosophic SuperHyperAlgebra
In this paper we recall our concepts of the $n$th-Power Set of a Set, SuperHyperOperation, SuperHyperAxiom, SuperHyperAlgebra, and their corresponding Neutrosophic SuperHyperOperation, Neutrosophic SuperHyperAxiom and Neutrosophic SuperHyperAlgebra. In general, in any field of knowledge, one actually encounters SuperHyperStructures (or more accurately, $(m, n)$-SuperHyperStructures).
[526] vixra:2212.0052 [pdf]
Fifa World Cup 2022 and Gini Coefficient up to the Round of Sixteen
In this paper we study the FIFA World Cup 2022 matches up to the Round of Sixteen. We find that the empirical pattern of a lower Gini coefficient deciding a higher probability of a country winning a match continues in the FIFA World Cup 2022 up to the Round of Sixteen. Moreover, in the Group Stage, the probability to win with a lower HDI was 18/37, or 1/2.06, i.e. less than half. In the Round of Sixteen stage the probability to win with a lower HDI swung in the opposite direction to 5/8, i.e. more than half, as in the previous World Cup.
[527] vixra:2212.0045 [pdf]
Bell’s Theorem and Einstein’s Worry About Quantum Mechanics
Using the local dependency of the probability density of local hidden variables on the instrument settings, it is demonstrated that Bell's correlation formulation is incomplete. This result concurs with a previous computational violation, close to the quantum correlation, obtained with a computer model based on Einstein locality principles.
[528] vixra:2212.0015 [pdf]
A Dictionary of Sindhi Literature by Dr. Motilal Jotwani and the Graphical Law
We study A Dictionary of Sindhi Literature by Dr. Motilal Jotwani. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0$), i.e. a magnetisation curve in the Bethe-Peierls approximation of the Ising model with four nearest neighbours in the absence of an external magnetic field, i.e. $\beta H=0$. $\beta$ is $\frac{1}{k_B T}$ where T is temperature, H is the external magnetic field and $k_B$ is the tiny Boltzmann constant.
[529] vixra:2212.0011 [pdf]
A Proof of the Scholz Conjecture on Addition Chains
Applying the pothole method on the factors of numbers of the form $2^n-1$, we prove the inequality $$\iota(2^n-1)\leq n-1+\iota(n)$$ where $\iota(n)$ denotes the length of the shortest addition chain producing $n$.
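The quantity $\iota(n)$ in the stated inequality can be verified by exhaustive search for small $n$. The following sketch (an illustration of the definition only, not of the paper's pothole method; function names are my own) finds the shortest addition chain length by iterative-deepening DFS over strictly increasing chains:

```python
def shortest_chain_length(target):
    """Length iota(target): fewest additions in an addition chain
    1 = a_0 < a_1 < ... < a_r = target, each a_i a sum of two earlier terms."""
    if target == 1:
        return 0
    limit = 1
    while True:
        if _dfs([1], target, limit):
            return limit
        limit += 1

def _dfs(chain, target, limit):
    last = chain[-1]
    if last == target:
        return True
    steps_left = limit - (len(chain) - 1)
    # Prune: even doubling at every remaining step cannot reach the target.
    if steps_left == 0 or (last << steps_left) < target:
        return False
    # Candidates: sums of any two chain elements, kept strictly increasing.
    cands = sorted({a + b for a in chain for b in chain
                    if last < a + b <= target}, reverse=True)
    for c in cands:
        chain.append(c)
        if _dfs(chain, target, limit):
            return True
        chain.pop()
    return False
```

With this one can confirm the inequality for small $n$, e.g. $\iota(2^3-1)=\iota(7)=4 \leq 3-1+\iota(3)=4$.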
[530] vixra:2212.0006 [pdf]
Comment on Lev I. Verkhovsky's 'Memoir on the Theory of Relativity and Unified Field Theory'
In his remarkable 'Memoir on the Theory of Relativity and Unified Field Theory', Lev I. Verkhovsky has reanimated the scaling factor which occurs in the pioneering articles on the Lorentz transformation by Lorentz (1904), Einstein (1905), and Poincaré (1906). In this comment, we briefly look at their determination of the scaling factor and then show that his reanimation is not successful.
[531] vixra:2211.0170 [pdf]
Discovering and Programming the Cubic Formula
Solving a cubic polynomial using a formula is possible: such a formula exists. In this article we connect various dots from a pre-calculus course and attempt to show how the formula could be discovered. Along the way we make a TI-84 CE menu-driven program that allows for experiments and confirmations of speculations, and eventually a working program that solves all cubic polynomials.
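The kind of formula the article builds toward can be sketched in Python (rather than the article's TI-84 CE BASIC) via Cardano's classical method: depress the cubic with $x = t - b/(3a)$, then write $t = u + v$ where $u^3, v^3$ solve a quadratic. Function names here are my own:

```python
import cmath

def solve_cubic(a, b, c, d):
    """All three roots of a*x^3 + b*x^2 + c*x + d = 0 (a != 0),
    via Cardano's formula with complex arithmetic."""
    # Depress: x = t - b/(3a) turns the cubic into t^3 + p*t + q = 0.
    p = (3*a*c - b*b) / (3*a*a)
    q = (2*b**3 - 9*a*b*c + 27*a*a*d) / (27*a**3)
    shift = -b / (3*a)
    # Cardano: u^3 and v^3 are roots of z^2 + q*z - (p/3)^3 = 0.
    disc = cmath.sqrt((q/2)**2 + (p/3)**3)
    u = (-q/2 + disc) ** (1/3)
    if abs(u) < 1e-12:              # avoid the degenerate branch u = 0
        u = (-q/2 - disc) ** (1/3)
    omega = complex(-0.5, 3**0.5/2)  # primitive cube root of unity
    roots = []
    for k in range(3):
        uk = u * omega**k            # the three cube roots of u^3
        vk = -p / (3*uk) if abs(uk) > 1e-12 else 0
        roots.append(uk + vk + shift)
    return roots
```

Complex arithmetic handles the casus irreducibilis (three real roots) automatically; for $x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)$ the routine recovers the roots 1, 2, 3 up to floating-point error.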
[532] vixra:2211.0168 [pdf]
Weyl and Majorana for Neutral Particles
We compare various formalisms for neutral particles. It is found that they contain unexplained contradictions. Next, we investigate the spin-1/2 and spin-1 cases in different bases, and we look for relations with the Majorana-like field operator. We show explicitly the incompatibility of the Majorana ansätze with the Dirac-like field operators in both the original Majorana theory and its generalizations. Several explicit examples are presented for higher spins too. It seems that only calculations in the helicity basis give mathematically and physically reasonable results.
[533] vixra:2211.0148 [pdf]
Puzzling, Very Slow Oscillations of the Air Pressure in Europe
If one compares long-term recordings of the air pressure measured by neighboring barometers, one observes synchronous oscillations at certain frequencies for which there is no known cause. Are they excited by gravitational waves?
[534] vixra:2211.0139 [pdf]
A Refined Pothole Method and the Scholz Conjecture on Addition Chains
Applying the pothole method on the factors of numbers of the form $2^n-1$, we prove the inequality $$\iota(2^n-1)\leq \frac{3}{2}n-\left\lfloor \frac{n-2}{2^{\lfloor \frac{\log n}{\log 2}-1\rfloor+1}}\right\rfloor-\left\lfloor \frac{\log n}{\log 2}-1\right\rfloor+\frac{1}{4}(1-(-1)^n)+\iota(n)$$ where $\lfloor \cdot \rfloor$ denotes the floor function and $\iota(n)$ the length of the shortest addition chain producing $n$.
[535] vixra:2211.0130 [pdf]
The Incompleteness of the Schroedinger Equation
The Schroedinger equation with the nonlinear term is derived in the framework of the Dirac heuristics. The particle behaves classically in case its mass is infinite. The nonlinear term involves a new physical constant b. The constant b can be measured by the same methods that were used in the case of the Casimir effect; of course, the experimental procedure relies on well-educated experimenters. New experiments, different from the Zeilinger ones, are proposed, with Faraday simplicity. The article is an extended and perfected version of earlier articles by the author (Pardy, 1993; 1994; 2001).
[536] vixra:2211.0123 [pdf]
Preventing Advanced Eugenics and Generational Testosterone Decline
The goal of the paper is to prevent eugenics against testosterone by disclosing its possible shapes, because the first step in preventing it is to know that the idea/weakness exists. We cannot protect ourselves from something that we do not know exists. We should not neglect the fact that testosterone levels have been dropping substantially in the last decades [1] [2] [3] [4], across generations, and independently of age.
[537] vixra:2211.0120 [pdf]
On the Irrationality of Riemann Zeta Functional Value at Odd Integers
In this article we provide a proof of the irrationality of ζ(2n+1) ∀n ∈ N. In our attempt, we also construct an upper bound for the Zeta values at odd integers. It is interesting to see how the irrationality of Zeta values at even positive integers, combined with Dirichlet's irrationality criterion and this bound, accelerates our proof, case by case.
[538] vixra:2211.0102 [pdf]
One Second Crucial Theorem for the Refoundation of Elementary Set Theory and the Teaching of that Discipline to Future Generations
For a given infinite countable set constituted by the union of infinitely many non-empty, finite or infinite countable, disjoint sub-sets, A = ∪_{i∈N*} A_i, ∀i ∈ N*, A_i ≠ ∅, ∀i, j ∈ N*, i ≠ j, A_i ∩ A_j = ∅, we demonstrate that it is legitimate to partition A into a finite or infinite number of sub-sets that are themselves constituted by a finite or infinite number of sub-sets of A when the initial order of indexation of the sub-sets of A maintains a strictly increasing order in each sub-set.
[539] vixra:2211.0095 [pdf]
A Formula of the Dirichlet Character Sum
In this paper, we use the Fourier series expansion of a function of a real variable to give a formula for calculating the Dirichlet character sum, and four special examples are given.
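For concreteness, the simplest non-trivial real Dirichlet character is the Legendre symbol modulo an odd prime, whose sum over a full period vanishes; a minimal sketch of the object being summed (an illustration, not the paper's formula):

```python
def legendre_chi(a, p):
    """Legendre symbol (a/p), a real Dirichlet character mod an odd prime p,
    computed by Euler's criterion a^((p-1)/2) mod p."""
    a %= p
    if a == 0:
        return 0
    return -1 if pow(a, (p - 1) // 2, p) == p - 1 else 1

def character_sum(p, upto):
    """Partial sum of the character over 1..upto."""
    return sum(legendre_chi(a, p) for a in range(1, upto + 1))
```

Over a full period the sum is zero, since there are as many quadratic residues as non-residues; the interesting behaviour lies in the partial sums.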
[540] vixra:2211.0093 [pdf]
Empirical Equations that Relate the Minimum Degree of a Faithful Complex Representation of the Monster Group Whose Dimension is 47 X 59 X 71 = 196,883, with Parameters of the Standard Model
In this work, the deep relationship between the monster group and the standard model is once again manifested. This was already manifested in the calculation of the entropy of black holes by proving E. Witten's conjecture. Some parameters obtained are the mass of the Higgs boson, the value of the Higgs vacuum, the fine-structure constant at zero momentum, and the entropy between the Planck mass and the mass of the electron. Despite being purely empirical equations, their logical consistency is based on the exclusive use of dimensionless parameters of the standard model. This makes us think about their relevance and not coincidence. The dimension of the monster group, 196883, and its prime divisors, 47, 59 and 71, are the basis of these equations. We also think that the monster group plays an essential role in a theory of quantum gravity and even a theory of everything. It is even possible that the 26 exceptional groups are also involved.
[541] vixra:2211.0091 [pdf]
On Problems and Their Solution Spaces
We introduce and develop the logic of existence of solutions to problems. We use this theory to answer a question of Florentin Smarandache in logic. We answer this question in the negative.
[542] vixra:2211.0087 [pdf]
A Greek and English Lexicon by H.G. Liddell et al. Simplified by Didier Fontaine and the Graphical Law
We study a Greek dictionary, A Greek and English Lexicon by H.G. Liddell et al. simplified by Didier Fontaine. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.02$), i.e. the Bethe-Peierls curve in the presence of four nearest neighbours and little external magnetic field, H, with $\beta H= 0.02$. $\beta$ is $\frac{1}{k_{B}T}$ where T is temperature and $k_{B}$ is the tiny Boltzmann constant.
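The construction behind this graphical-law analysis — count entries per initial letter, rank the letters, then plot normalised log-count against normalised log-rank — can be sketched in a few lines (the word list here is a hypothetical stand-in for the dictionary's headwords):

```python
from collections import Counter
from math import log

# Hypothetical headword list standing in for the dictionary's entries
words = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta",
         "theta", "iota", "kappa", "ambrosia", "basilica", "dogma"]

counts = Counter(w[0] for w in words)            # entries per initial letter
ranked = sorted(counts.values(), reverse=True)   # rank 1 = most entries
n_max, r_max = ranked[0], len(ranked)
# (ln(rank/rank_max), ln(count/count_max)) pairs, as plotted in the paper
points = [(log(r / r_max), log(n / n_max)) for r, n in enumerate(ranked, 1)]
```

The paper then compares the resulting curve against magnetisation curves of the Ising model in the Bethe-Peierls approximation.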
[543] vixra:2211.0081 [pdf]
Neutrinos and the Speed of Light
While neutrino particles are being extensively studied in modern physics, researchers are still debating whether these particles travel at the speed of light. While theoretical methods suggest that this cannot be the case, experiments have failed to show any significant difference between the neutrino speed and c. In this paper, we suggest that neutrinos actually travel at the speed of light, and propose a simple theoretical approach to compute the masses corresponding to neutrino quantum states, using the observed flavour distribution. We also introduce the idea that using imaginary masses for some of these quantum states would resolve the current dead-end researchers face when studying the speed of neutrinos.
[544] vixra:2211.0079 [pdf]
(Anti) de Sitter Geometry, Complex Conformal Gravity-Maxwell Theory from a $Cl (4, C)$ Gauge Theory of Gravity and Grand Unification
We present the deep connections among (Anti) de Sitter geometry, complex conformal gravity-Maxwell theory, and grand unification, from a gauge theory of gravity based on the complex Clifford algebra Cl(4,C). Some desirable results are found, like a plausible cancellation mechanism of the cosmological constant involving an algebraic constraint between $e^a_\mu$ and $b^a_\mu$ (the real and imaginary parts of the complex vierbein).
[545] vixra:2211.0068 [pdf]
A Note About The Determination of Integer Coordinates of Elliptic Curves - Part II, v1 -
In this paper, we consider an elliptic curve $(E)$ given by the equation $y^2=f(x)=x^3+px+q$ with $p,q \in \mathbb{Z}$ not both zero. We study the conditions verified by $(p,q)$ so that there exists $(x,y) \in \mathbb{Z}^2$, the coordinates of a point of the elliptic curve $(E)$ given by the equation above. Key words: elliptic curves, integer points, solutions of degree three polynomial equations, solutions of Diophantine equations.
[546] vixra:2211.0063 [pdf]
General and Consistent Explanation of Tunnel Effect Based on Quantum-Statistical Interpretation
We present an alternative quantum-statistical approach to electron tunneling through the potential barrier, which is distinguished from the conventional interpretation. In our approach, the tunnel effect is treated in both its statistical aspect and its quantum aspect. The conventional interpretation of the tunnel effect, based purely on the wave property of a single electron, cannot satisfactorily elucidate the dynamics of electron motion in the potential barrier because the interpretation violates the universal law of energy conservation, just as the subtle term `tunnel effect' implies. In this work, we clarify the fact that the tunnel effect has statistical aspects too, and explain it both by applying electron statistics and by considering the quantum restriction imposed by the potential barrier on an electron surmounting the barrier instead of tunneling. Therefore, our interpretation satisfies the law of energy conservation and naturally explains all the characteristics of tunneling, including the influence of temperature as the statistical aspect. The consideration of the quantum restriction determined by the potential barrier leads to a satisfactory explanation of the quantum properties of tunneling. Finally, we offer a complete and general explanation of the tunnel effect as a phenomenon of quantum plus statistical origin, thus demonstrating that tunneling substantially depends on its quantum-statistical nature.
[547] vixra:2211.0061 [pdf]
Dictionary of Culinary Terms by Philippe Pilibossian and the Graphical Law
We study the Dictionary of Culinary Terms by Philippe Pilibossian. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0$), i.e. the Bethe-Peierls curve in the presence of four nearest neighbours and no external magnetic field, H, with $\beta H= 0$. $\beta$ is $\frac{1}{k_{B}T}$ where T is temperature and $k_{B}$ is the tiny Boltzmann constant.
[548] vixra:2211.0052 [pdf]
Bell’s Theorem, EPR and Locality
In a technical report under the auspices of The Nobel Committee for Physics, dated 4 October 2022, we find these claims: (i) Bell’s first inequality was a spectacular theoretical discovery; (ii) Bell showed mathematically that no hidden variable theory would be able to reproduce all the results of quantum mechanics; (iii) Bell provided a proof ... that all attempts to construct a local realist model of quantum phenomena are doomed to fail. Against such claims, and with a special focus on EPR’s criteria, this personal note shows: (i) that all such claims are flawed; (ii) that Bell’s theorem and Bell’s inequality fall to straightforward considerations; (iii) that, consequently, for their part in proving that EPR were right: we are indebted to those who develop the sources; hopefully en route to wholistic mechanics -- a commonsense quantum mechanics -- as we celebrate the birth of Olivier Costa de Beauregard, 111 years ago.
[549] vixra:2211.0050 [pdf]
Tornadoes Analysis Concordia, Santa Catarina, Southern Brazil, 2022 Season
Large storms such as tornadoes and extratropical cyclones have become increasingly common in southern Brazil. The season of strong storms has been increasingly evident between autumn and winter in southern Brazil, as with hurricane Catarina. Several tornadoes were evidenced in 2022 in this region; the ones discussed here are from Itá and Concórdia, which crossed rural areas of Santa Catarina, causing great damage to their poultry production. As the main focus, the Concórdia tornado was analyzed and classified as category F1, with approximate dimensions of 100 m in diameter.
[550] vixra:2211.0049 [pdf]
Vacío Y Energía (Vacuum and Energy)
The proposal of this document is to show that the energy of the vacuum and the task of producing the vacuum are directly and immediately linked. Neither in the classrooms nor in the student bibliography is this detail mentioned. The objective is to remedy the deficiency.
[551] vixra:2211.0046 [pdf]
On a Conjecture of Erdos on Additive Basis of Large Orders
Using the methods of multivariate circles of partition, we prove that for any additive base A [...] holds for sufficiently large values of k provided the counting function [...] is an increasing function for all k sufficiently large.
[552] vixra:2211.0037 [pdf]
Incompatible Solar Altitude Angle During the Apollo 11 EVA from Elementary Ecliptic Calculations
From elementary ecliptic calculations, we found a solar altitude angle of $7.87°$ (not more than $9.87°$ with respect to the ground of the Tranquility base) instead of a minimal solar altitude angle of $16.2°$ at the official lunar landing site (Tranquility base) during the Apollo 11 EVA (extravehicular activity). Since the sidereal rotation period of the moon is particularly large, during the period when both astronauts were outside the Lunar Module of the Apollo 11 mission, the solar angle variation in the lunar sky was only $0.888°$. Moreover, the smaller the solar altitude angle, the greater the absolute precision of the solar altitude angle reached by shadow measurements on a horizontal surface.
[553] vixra:2211.0030 [pdf]
Transformation of Altitudes From One Geodetic System to Another
With the introduction of GPS (Global Positioning System) technology, which provides the user with his three-dimensional (X,Y,Z) position in the global geocentric system called WGS84 (World Geodetic System 1984), it is necessary to know the transformation of altitudes from the world geodetic system to the national or local geodetic system. We present below some models for transforming altitudes between geodetic systems.
[554] vixra:2211.0029 [pdf]
Convergent Fundamental Constants of the Universe
Several theoretical equations are derived which determine the vacuum permittivity, vacuum permeability, free impedance and speed of light to a highly convergent value on the order of $10^{-8}$ to $10^{-9}$. Their dependence is related to the particle horizon, Hubble horizon, Planck length, proton wavelength and the fine structure constant. Due to these highly convergent equations, an image of both the photon and the vacuum may start to emerge.
[555] vixra:2211.0024 [pdf]
A Real TI-83 Craps Simulation Teaches Beginning Probability
Simple craps, a Vegas casino game, is easily modeled using a TI-83's programming, list, and statistics features. Roll two dice. If the coming-out roll, as it is called, is 2, 3, or 12, the player immediately loses. If a 7 or 11 is rolled, they immediately win. If any of the remaining totals is rolled, the player wins if they roll that number again before rolling a 7; if not, they lose. A program can be initialized with a bankroll and a standard, non-changing bet. Using the list feature, 999 rolls, the maximum size of a list, can be stored in a built-in list. A bar chart for the distribution of numbers is easily generated and confirms the calculated probabilities. The code to mimic the game is straightforward. The user repeatedly plays the game until the stake is 0, an inevitability given, say, a stake of $100 and a bet of $5. This certainty instills the truth: it's a loser's game.
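The same rules translate directly from TI-83 BASIC to any language; a minimal sketch (the pass-line win probability is 244/495 ≈ 0.493, so the simulated rate below should land near it):

```python
import random

def play_craps(rng):
    """Play one round of simple craps; return True if the player wins."""
    roll = rng.randint(1, 6) + rng.randint(1, 6)
    if roll in (7, 11):            # natural: immediate win
        return True
    if roll in (2, 3, 12):         # craps: immediate loss
        return False
    point = roll                   # otherwise, roll until the point or a 7
    while True:
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if roll == point:
            return True
        if roll == 7:
            return False

rng = random.Random(42)
wins = sum(play_craps(rng) for _ in range(20000))
win_rate = wins / 20000            # just under 1/2: a loser's game
```

With a fixed bet, the negative edge per round makes eventual ruin of any finite stake a near certainty, which is the lesson the abstract draws.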
[556] vixra:2211.0015 [pdf]
The Acceleration of Multi-Factor Merton Model on FPGA
Credit risk stands for the risk of losses caused by unwanted events, such as the default of an obligor. The management of portfolio credit risk is crucial for financial institutions. The multi-factor Merton model is one of the most widely used tools for modelling credit risk in financial institutions. Typically, the implementation of the multi-factor Merton model involves Monte Carlo simulations, which are time-consuming. This significantly restricts its usability in daily credit risk measurement. In this report, we propose an FPGA architecture for credit-risk measurement in the multi-factor Merton model. The presented architecture uses a variety of optimization techniques, such as kernel vectorization and loop unrolling, to optimize the performance of the FPGA implementation. The evaluation results show that, compared to a basic C++ implementation running on a single-core Intel i5-4210 CPU, our proposed FPGA implementation can achieve an acceleration of up to 22 times, with a precision loss of less than 10^{-8}.
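To see why the model invites acceleration, here is what the Monte Carlo loop looks like in a single-factor, homogeneous-portfolio simplification of the Merton setup (a stand-in for the report's multi-factor FPGA kernel; all parameter values are illustrative):

```python
import math
import random
from statistics import NormalDist

def expected_default_rate(pd=0.02, rho=0.2, n_sims=5000, seed=7):
    """Monte Carlo estimate of the mean portfolio default rate in a
    one-factor Merton model: an obligor defaults when its asset value
    sqrt(rho)*Z + sqrt(1-rho)*eps falls below the threshold Phi^{-1}(pd)."""
    nd = NormalDist()
    c = nd.inv_cdf(pd)                  # default threshold
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        z = rng.gauss(0.0, 1.0)         # systematic factor draw
        # conditional default probability of one obligor given z
        total += nd.cdf((c - math.sqrt(rho) * z) / math.sqrt(1 - rho))
    return total / n_sims
```

Each simulation draw is independent of the others, which is exactly the property that makes the loop amenable to vectorized, deeply pipelined FPGA implementations.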
[557] vixra:2211.0014 [pdf]
Parallel Parameter Estimation for Gilli-Winker Model Using Multi-Core CPUs
Agent-based modeling is a powerful tool that is widely used to model global financial systems. When the parameters of the model are appropriate, the price time series generated by the model exhibit marked similarities with actual financial time series and even reproduce some of their statistical characteristics. Using Kirman's ant model as a prototype, this report systematically explored Gilli and Winker's parameter optimization method. In view of some limitations of this method, this report proposes some improvements, including a local-restart strategy to enhance the convergence ability of the original optimization method, as well as incorporating simulated annealing into the original method to help the algorithm escape from local optima. Furthermore, since the parameter optimization of agent-based models tends to be very time-consuming, an acceleration method was also proposed to speed up this procedure. In the end, the presented methods have been validated on the EUR/USD exchange rate.
[558] vixra:2211.0007 [pdf]
The Black Hole Fallacy
It is shown here that the currently accepted solution in general relativity predicting black holes and an event horizon is unphysical. The spacetime outside a point mass is entirely regular, the effective gravitational mass decreases to zero as a test object comes into close proximity with it, and free-fall velocities do not exceed the speed of light. The ring-like radio images of galactic centres can be attributed to gravitational lensing of regular point sources.
[559] vixra:2210.0166 [pdf]
Hidden Premises in Galois Theory
This is a primer for Chapter 3 of Hadlock's book Field Theory and Its Classical Problems: Solution by Radicals. We take a rather naive perspective, consider the linear and quadratic cases afresh, and work out what is really meant by solving a polynomial by radicals. There are what we consider to be several hidden premises that some students might be subconsciously puzzled about.
[560] vixra:2210.0159 [pdf]
Tetravalent Logic in Mathematics and Physics
In this work, we solve the main mathematical puzzles of the UMMO file, which contains several thousand postal letters sent since the 1960s addressing many fields such as philosophy, mathematics, human sciences, biology, cosmology, and theoretical physics, among others. "The UMMO affair" refers to more than 200 listed documents, representing at least 1300 typed pages (the "Ummite letters"), which are said to have been sent since 1966 to numerous recipients, in particular in Spain and France, by editors - the Ummites - claiming to be extraterrestrials on an observation mission on Earth with four centuries of technological advance over terrestrial human technologies. We demonstrate the importance of angular tetravalent logic in mathematics and theoretical physics. As an example, we give a proof of Fermat's last theorem using angular tetravalent logic, as suggested by the Ummites. Then, still using tetravalent logic, we pierce the secrets of the universe: we explain the reasoning which proves the existence of a twin universe, give the mathematical formula for the folding of space-time which separates the two twin universes, and finally explain why the curvature of the universe is necessarily negative.
[561] vixra:2210.0155 [pdf]
Lecture Notes On Celestial Mechanics: Elements of Central Configuration For Undergraduate Students (Part I)
From the lectures for an advanced course on celestial mechanics which Prof. Richard Moeckel gave in Trieste in 1994 on the topic of central configurations of the n-body problem - one of his favorites - I have decided to develop a part as an introductory course for undergraduate students, adding more details (gray boxes) to the proofs so that they can be understood at that level. It is based on the handwritten notes from the 1994 Trieste course. Part I of the notes comprises three chapters: chapter 1, introduction; chapter 2, the two-body problem; chapter 3, special solutions to the n-body problem.
[562] vixra:2210.0139 [pdf]
The General Hohmann Transfer
An analytical method is presented for tangent transfers (Hohmann-type transfers) between non-coaxial elliptical orbits. Since Hohmann transfers are thought not to apply to non-coaxial orbits, this method generalizes the Hohmann transfer, typically used only between circular orbits. Since tangent transfers are less complex, requiring no change in direction, they offer an alternative to other orbital transfer and rendezvous methods.
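As a baseline for the circular-orbit case the paper generalizes from, the classic Hohmann transfer delta-v follows directly from the vis-viva equation; a minimal sketch (the LEO-to-GEO radii are illustrative inputs):

```python
import math

MU_EARTH = 3.986004418e14          # m^3/s^2, Earth's gravitational parameter

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Total delta-v (m/s) for the classic Hohmann transfer between
    coplanar circular orbits of radii r1, r2, via the vis-viva equation."""
    a = (r1 + r2) / 2                        # semi-major axis of transfer ellipse
    v1 = math.sqrt(mu / r1)                  # circular speed at departure
    v2 = math.sqrt(mu / r2)                  # circular speed at arrival
    vp = math.sqrt(mu * (2 / r1 - 1 / a))    # transfer-orbit speed at r1
    va = math.sqrt(mu * (2 / r2 - 1 / a))    # transfer-orbit speed at r2
    return abs(vp - v1) + abs(v2 - va)
```

For a LEO (6678 km) to GEO (42164 km) transfer this gives roughly 3.9 km/s, the familiar textbook figure; both burns here are tangent to the orbits, which is the property the paper extends to non-coaxial ellipses.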
[563] vixra:2210.0137 [pdf]
Electron and Electrons System
This article tries to unify the four basic forces by Maxwell's equations, the only experimental theory. Self-consistent Maxwell equations with the e-current coming from the matter current are proposed, and are solved for electrons and the structures of particles and atomic nuclei. The static properties and decays are reasoned out, all meeting experimental data. The equation of general relativity purely with the electromagnetic field is discussed as the base of this theory. In the end, the elementary conformity between this theory and QED and the weak theory is discussed.
[564] vixra:2210.0135 [pdf]
The Witten Conjecture About the Entropy of Black Holes and the Relationship of the Monster Group With String Theory
In 2007, E. Witten suggested that the AdS/CFT correspondence yields a duality between pure quantum gravity in (2+1)-dimensional anti de Sitter space and extremal holomorphic CFTs. Pure gravity in 2+1 dimensions has no local degrees of freedom, but when the cosmological constant is negative, there is nontrivial content in the theory, due to the existence of BTZ black hole solutions. Extremal CFTs, introduced by G. Höhn, are distinguished by a lack of Virasoro primary fields at low energy, and the moonshine module is one example. Part of Witten's proposal is that Virasoro primary fields are dual to black-hole-creating operators, and as a consistency check, he found that in the large-mass limit, the Bekenstein-Hawking semiclassical entropy estimate for a given black hole mass agrees with the logarithm of the corresponding Virasoro primary multiplicity in the moonshine module. In the low-mass regime, there is a small quantum correction to the entropy; e.g., the lowest-energy primary fields yield ln(196883) ≈ 12.19, while the Bekenstein-Hawking estimate gives 4π ≈ 12.57.
[565] vixra:2210.0119 [pdf]
On the Distribution of Perfect Numbers and Related Sequences via the Notion of the Disc
In this paper we investigate some properties of perfect numbers and associated sequences using the notion of the disc induced by the sum-of-divisors function $\sigma$. We reveal an important relationship between perfect numbers and abundant numbers.
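The objects involved are elementary to compute; a minimal sketch of $\sigma$ and the perfect/abundant classification (trial division, for illustration only):

```python
def sigma(n):
    """Sum of the divisors of n, including 1 and n itself."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_perfect(n):
    """n is perfect when sigma(n) = 2n, e.g. 6 and 28."""
    return sigma(n) == 2 * n

def is_abundant(n):
    """n is abundant when sigma(n) > 2n, e.g. 12."""
    return sigma(n) > 2 * n
```

Perfect numbers sit exactly on the boundary sigma(n) = 2n that separates deficient from abundant numbers, which is the relationship the disc construction is built around.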
[566] vixra:2210.0112 [pdf]
De Sitter Symmetry and Neutrino Oscillations
Although the phenomenon of neutrino oscillations is confirmed in many experiments, the theoretical explanation of this phenomenon in the literature is essentially model dependent and is not based on rigorous physical principles. We propose an approach where the neutrino is treated as a massless elementary particle in anti-de Sitter (AdS) invariant quantum theory. In contrast to standard Poincare invariant quantum theory, an AdS analog of the mass squared changes over time even for elementary particles. Our approach naturally explains why, in contrast to the neutrino, the electron, muon and tau lepton do not have flavors changing over time, and why the number of solar neutrinos reaching the Earth is around a third of the number predicted by the standard solar model.
[567] vixra:2210.0095 [pdf]
One First Crucial Theorem for the Re-foundation of Elementary Set Theory and the Teaching of That Discipline to Future Generations
For a given infinite countable set A = ∪_{i∈N*}{a_i}, ∀i ∈ N*, a_i ≠ ∅, we demonstrate that it is legitimate to partition A into a finite or infinite number of infinite countable sub-sets when the initial order of indexation of the elements of A maintains a strictly increasing order in each sub-set. On this occasion we introduce two new formalisms allowing us to signify the fact that it is A that has been partitioned, and the fact that A and the latter partition of A belong to the same class comprising infinitely many infinite countable sets constituted by the same infinite countable sub-sets that constitute the partition of A.
[568] vixra:2210.0089 [pdf]
Extending F1 Metric: Probabilistic Approach
This article explores an extension of the well-known F1 score used for assessing the performance of binary classifiers. We propose a new metric using a probabilistic interpretation of precision, recall, specificity, and negative predictive value. We describe its properties and compare it to common metrics. Then we demonstrate its behavior in edge cases of the confusion matrix. Finally, the properties of the metric are tested on a binary classifier trained on a real dataset.
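The four rates named in the abstract all come from the confusion matrix; a minimal sketch of the standard F1 alongside one possible symmetric extension (the harmonic mean of all four rates is a hypothetical stand-in here, not the paper's definition):

```python
def rates(tp, fp, fn, tn):
    """Precision, recall, specificity, and negative predictive value
    from the four confusion-matrix cells."""
    return (tp / (tp + fp), tp / (tp + fn),
            tn / (tn + fp), tn / (tn + fn))

def f1(tp, fp, fn, tn):
    """Standard F1: harmonic mean of precision and recall only."""
    p, r, _, _ = rates(tp, fp, fn, tn)
    return 2 * p * r / (p + r)

def extended_f(tp, fp, fn, tn):
    """Hypothetical symmetric variant: harmonic mean of all four rates,
    so performance on the negative class also enters the score."""
    rs = rates(tp, fp, fn, tn)
    return len(rs) / sum(1 / x for x in rs)
```

Unlike F1, any metric built on all four rates is invariant under swapping the positive and negative class labels, which matters in the edge cases the article examines.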
[569] vixra:2210.0076 [pdf]
Properties of Tiny Objects and Vast Things
Physics has yet to complete a catalog of properties of objects, complete a list of elementary particles, describe dark matter, and explain dark energy phenomena. This paper shows modeling that catalogs properties (such as charge and mass) and elementary particles (such as quarks and gluons). The catalog of properties includes known properties and suggests new properties. The catalog of elementary particles includes all known elementary particles and suggests new elementary particles. The modeling has bases in integer arithmetic and complements popular modeling that has bases in space-time coordinates. This paper shows applications that combine popular modeling, the expanded set of properties, and the expanded set of elementary particles. Applications describe dark matter, explain known ratios of dark matter effects to ordinary matter effects, point to possible resolutions for so-called tensions (between data and popular modeling) regarding dark energy phenomena, and suggest insight about galaxy evolution.
[570] vixra:2210.0074 [pdf]
The N-Body Problem (Le Problème de Mouvement de N Corps)
The object of this paper is the problem of the motion of three bodies subjected to the attraction of gravitation. In section 1, we write the equations of motion, then give the ten first integrals of motion. We treat the equations of motion of an artificial satellite around the earth in section 2, where we deduce the three laws of Kepler and give the resolution of the equations of motion of the artificial satellite. In section 3, we consider the case of the motion of two bodies. Finally, in section 4, we give details of the equations of motion of three bodies and develop the inverse squares of the distances to first order. By neglecting the mass of one body, we treat the problem called the restricted movement of two bodies.
[571] vixra:2210.0071 [pdf]
Algorithm for Identification and Classification of Datasets Assisted by kNN
Tinnitus retraining therapy is to be supported with the help of an algorithm in combination with the kNN algorithm. The neurophysiological model is now used in the training of many audiologists and has found wide application in tinnitus therapy. Tinnitus retraining therapy has been heralded as a major advance in alleviating tinnitus perception. The goal of the research was to reduce the loudness of the tinnitus in study participants for a short period of time so that they could learn to deal with the hearing problems more easily. The algorithm I developed helps with the patient's decision making, and the kNN algorithm predicts the next frequency in each iteration.
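A generic nearest-neighbour vote of the kind used for such per-iteration predictions can be sketched as follows (the 1-D features and labels are illustrative, not the paper's data):

```python
from collections import Counter

def knn_predict(samples, query, k=3):
    """Majority vote among the k nearest neighbours of `query`,
    using 1-D Euclidean distance; samples are (feature, label) pairs."""
    nearest = sorted(samples, key=lambda s: abs(s[0] - query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Illustrative use: predict a class for a new measurement
samples = [(1, "a"), (2, "a"), (3, "b"), (10, "b"), (11, "b")]
prediction = knn_predict(samples, 2)
```

In the therapy setting described, the features would be previous patient responses and the predicted label the next test frequency; that mapping is the paper's, not shown here.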
[572] vixra:2210.0061 [pdf]
Lagrange Multipliers and Adiabatic Limits I
Critical points of a function subject to a constraint can be detected either by restricting the function to the constraint or by looking for critical points of the Lagrange multiplier functional. Although the critical points of the two functionals, namely the restriction and the Lagrange multiplier functional, are in natural one-to-one correspondence, this need not be true for their gradient flow lines. We consider a singular deformation of the metric and show by an adiabatic limit argument that close to the singularity we have a one-to-one correspondence between gradient flow lines connecting critical points of Morse index difference one. We present a general overview of the adiabatic limit technique in the article [FW22b]. The proof of the correspondence is carried out in two parts. The current part I deals with linear methods leading to a singular version of the implicit function theorem. We also discuss possible infinite dimensional generalizations in Rabinowitz-Floer homology. In part II [FW22a] we apply non-linear methods and prove, in particular, a compactness result and uniform exponential decay independent of the deformation parameter.
[573] vixra:2210.0057 [pdf]
Lagrange Multipliers and Adiabatic Limits II
In this second part to [FW22a] we finish the proof of the one-to-one correspondence of gradient flow lines of index difference one between the restricted functional and the Lagrange multiplier functional for deformation parameters of the metric close to the singular one. In particular, we prove that, although the metric becomes singular, we have uniform bounds for the Lagrange multiplier of finite energy solutions and all its derivatives. This uniform bound is the crucial ingredient for a compactness theorem for gradient flow lines of arbitrary deformation parameter. If the functionals are Morse we further prove uniform exponential decay. We finally show combined with the linear theory in part I that if the metric is Morse-Smale the adiabatic limit map is bijective. We present a general overview of the adiabatic limit technique in the article [FW22b].
[574] vixra:2210.0054 [pdf]
Complex Circles of Partition and the Asymptotic Lemoine Conjecture
Using the methods of the complex circles of partition (cCoPs), we study interior and exterior points of such structures in the complex plane. In analogy with quotient groups in group theory, we define quotient cCoPs. With these we can prove an asymptotic version of the Lemoine Conjecture.
[575] vixra:2210.0047 [pdf]
Towards the Explanation of Galaxy Rotation Curves
We suggest a new explanation of the flatness of galaxy rotation curves without invoking dark matter. For this purpose a new gravitational tensor field is introduced in addition to the metric tensor.
[576] vixra:2210.0044 [pdf]
An Alternative Derivation of the Hamiltonian of Quantum Electrodynamics
We derive a proto-Hamiltonian of quantum electrodynamics (QED) from the coupled Dirac equation by quantizing the electromagnetic field. We then introduce a process of eliminating the gauge symmetry via separation of variables, and argue that this does not break the Lorentz covariance of the theory. From this approach, we obtain a Hamiltonian that is similar to the conventional one of QED. We conclude the paper short of making the Dirac sea reinterpretation, where one would otherwise reinterpret the negative-energy solutions to the Dirac equation as antiparticles.
[577] vixra:2210.0039 [pdf]
A Solution of a Quartic Equation
This solution is equal to L. Ferrari's if we simply change the inner square root $\sqrt{w}$ to $\sqrt{\alpha + 2y}$. This article shows the shortest way to obtain a resolvent cubic for a quartic equation, as well as the solution of a quartic equation.
[578] vixra:2210.0034 [pdf]
On the Verification of the Multiverse
We outline a proposal for an experimental test of Everett’s many-worlds interpretation of quantum mechanics that could potentially verify the existence of a multiverse. This proposal is based on a quantum field theory formulation of many-worlds through the path integral formalism and a careful choice of the vacuum state.
[579] vixra:2210.0030 [pdf]
Approximation by Power Series of Functions
Derivative-matching approximations are constructed as power series built from functions. The method assumes the knowledge of special values of the Bell polynomials of the second kind, for which we refer to the literature. The presented ideas may have applications in numerical mathematics.
[580] vixra:2210.0029 [pdf]
Solutions to the Exponential Diophantine 1 + P_1^x + P_2^y + P_3^z = W^2 for Distinct Primes P_1, P_2, P_3
We list non-negative integer solutions (x, y, z, w) to the 5-term exponential Diophantine equation 1 + p_1^x + p_2^y + p_3^z = w^2 for three distinct primes p_1 < p_2 < p_3 <= 113, obtained by exhaustive search in the exponents x, y, z <= 60.
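The exhaustive search itself is a triple loop with an integer square-root test; a minimal sketch with a small exponent bound (the paper searches up to 60):

```python
from math import isqrt

def solutions(p1, p2, p3, max_exp=12):
    """Exhaustively list (x, y, z, w) with 1 + p1^x + p2^y + p3^z = w^2
    for exponents up to max_exp."""
    found = []
    for x in range(max_exp + 1):
        for y in range(max_exp + 1):
            for z in range(max_exp + 1):
                s = 1 + p1**x + p2**y + p3**z
                w = isqrt(s)               # exact integer square-root test
                if w * w == s:
                    found.append((x, y, z, w))
    return found
```

For example, with (p_1, p_2, p_3) = (2, 3, 5) the search finds 1 + 2^2 + 3^1 + 5^0 = 9 = 3^2, i.e. (x, y, z, w) = (2, 1, 0, 3).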
[581] vixra:2210.0028 [pdf]
An Interconnected System of Energy
In this paper, I aim to describe a system of interconnected energy models. These models produce inferences that may enhance our understanding of classical and quantum mechanics. From the formulated models, an empirical test is derived to validate the proposed light projection hypothesis. It is understood that a chasm exists between our classical understanding of relativity and the subatomic descriptions of quantum mechanics. Many theories provide solutions to this discrepancy: from superstrings to loop quantum gravity and other numerous paradigms, each with their significant contributions to furthering our knowledge of theoretical physics. The system of models I present provides a foundation for reframing the underlying assumptions of special relativity and an alternate description of quantum activity. By simplifying our understanding of time, we are able to shift to homogeneous coordinates as a viable basis. From homogeneous coordinates, we can create a spatial framework of reality. The spatial framework allows us to hypothesize the function of black holes and discern the projections of higher dimensions of light. The projections of light hint toward a cosmic camera obscura that a pinhole camera matrix can model. We can then view the entire structure through a different lens and relate the layers of light projections to a Deep Restricted Boltzmann Network (DRBN) representation of quantum information.
[582] vixra:2210.0026 [pdf]
Reduction Formulas of the Cosine of Integer Fractions of Pi
The powers of some cosines of integer fractions pi/n of the half circle allow a reduction to lower powers of the same angle. These are tabulated in the format sum_{i=0}^{[n/2]} a_i^n cos^i(pi/n) = 0; n = 2, 3, 4, ... Related expansions of the Chebyshev polynomials T_n(x) and factorizations of T_n(x)+1 are also given.
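The link to the factorizations of T_n(x)+1 can be checked numerically: since T_n(cos t) = cos(nt), the angles (2k+1)pi/n give roots of T_n(x)+1, and in particular cos(pi/n) is such a root for every n. A small sketch, assuming only the standard three-term recurrence:

```python
from math import cos, pi

def chebyshev_T(n, x):
    """Chebyshev polynomial of the first kind via the recurrence
    T_0 = 1, T_1 = x, T_{n+1} = 2x T_n - T_{n-1}."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

# cos(pi/n) is a root of T_n(x) + 1 for every n >= 1,
# because T_n(cos(pi/n)) = cos(pi) = -1.
for n in range(1, 12):
    assert abs(chebyshev_T(n, cos(pi / n)) + 1) < 1e-9
```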
[583] vixra:2210.0015 [pdf]
The Radiation of RLC Circuit with the Longitudinal Capacitor
The RLC circuit is generalized in such a way that the capacitor has a longitudinal form and the components are all in series with the voltage source. The medium in the capacitor is a dielectric with index of refraction n. The change of the amount of charge on the left and right sides of the capacitor generates in the medium a special radiation which is not the Cerenkov radiation, nor the Ginzburg transition radiation, but an original radiation which must be confirmed in laboratories. We have calculated the spectral form of the radiation. It depends on the index of refraction n of the capacitor medium. A defect in the medium is involved in the spectral form and can be compared with the original medium. Such a comparison is the analog of the Heyrovsky-Ilkovic procedure in electro-chemistry (Heyrovsky et al., 1965).
[584] vixra:2210.0013 [pdf]
The Asymptotic Binary Goldbach and Lemoine Conjecture
In this paper we use the theory of circles of partition, developed earlier by the authors, to investigate possibilities to prove the binary Goldbach as well as the Lemoine conjecture. We state the squeeze principle and its consequences when the set of all odd prime numbers is the base set. With this tool we can prove asymptotic versions of the binary Goldbach as well as the Lemoine conjecture.
[585] vixra:2209.0167 [pdf]
A Problem on Sums of Powers
We pose a problem which is motivated by Newton's identity on sums of powers. We prove two special cases using algebraic manipulations. The method used is inefficient to prove all cases.
[586] vixra:2209.0159 [pdf]
A Note on Neutrino Oscillations, or Why Miniboone and Microboone Data Are Compatible
The theoretical scenario introduced in Ref. [1] implies a new approach to flavour mixing and neutrino oscillations. It not only predicts non-vanishing neutrino masses, but also uniquely determines their values as functions of the age of the universe. We compare here the predicted neutrino mixing with the experimental data from atmospheric (Super-Kamiokande) and accelerator neutrinos (MiniBooNE, MicroBooNE), finding substantial agreement with the experiments. In particular, a mismatch between MiniBooNE and MicroBooNE data is predicted that fits the measured data. Within this scenario, we can also make a prediction for the forthcoming SBND and ICARUS neutrino experiments.
[587] vixra:2209.0157 [pdf]
Multiple Reflection Interference Experiments Violating the Conservation of Energy Law
The conservation law of energy asserts that the total energy must always remain constant, even when its form changes. This conservation law also holds in electromagnetism (optics), where the total energy of light incident on and output from a system must be equal. However, we have found a phenomenon in which the total energy of the interference light emitted from a multiple-reflection interferometer is greater than the energy of the incident light. This increase is stable in time and can be explained by wave optics. The energy conservation law is valid when averaged over a region sufficiently wider than the interference fringe period, but when the beam width is narrower than the fringe period, the total light intensity increases or decreases and the conservation law does not hold.
[588] vixra:2209.0156 [pdf]
Light Beam Traveling with Varying Energy
Light emitted from a multiple reflection device using a mirror and a half-mirror does not show interference fringes when the relative angle between the mirrors is zero, and the energy of the incident light and the outgoing light coincide at each point. On the other hand, when the relative angle is non-zero, interference fringes are observed depending on the angle. We have reported that the total energy of the incident and outgoing beams do not match (that is, the law of conservation of energy does not hold) when the relative angle is small (the fringe spacing is wide) and the incident beam width is narrow. Furthermore, the light beam emitted from this multiple reflector has the interesting property of changing in intensity as it propagates. We have experimentally confirmed this change.
[589] vixra:2209.0153 [pdf]
Technical Report for WAIC Challenge of Financial QA under Market Volatility
This technical report presents the 1st winning model for Financial Community Question-and-Answering (FCQA), a task newly introduced in the Challenge of Financial QA under Market Volatility in WAIC 2022. FCQA aims to respond to users' queries in financial forums with the assistance of heterogeneous knowledge sources. We address this problem by proposing a graph-transformer-based model for efficient multi-source information fusion. As a result, we won first place out of 4278 participating teams and outperformed the second place by 5.07 times on BLEU.
[590] vixra:2209.0134 [pdf]
New Principles of Differential Equations VI
This paper uses Z transformations to get the general solutions of many second-order, third-order and fourth-order linear PDEs for the first time, and uses the general solutions to obtain the exact solutions of many typical definite solution problems. We present the Z4 transformation for the first time and use it to solve a specific case. We successfully get the Fourier series solution by the series general solution of the one-dimensional homogeneous wave equation, which successfully solves a famed unresolved debate in the history of mathematics.
[591] vixra:2209.0132 [pdf]
Universal and Automatic Elbow Detection for Learning the Effective Number of Components in Model Selection Problems
We design a Universal Automatic Elbow Detector (UAED) for deciding the effective number of components in model selection problems. The relationship with the information criteria widely employed in the literature is also discussed. The proposed UAED does not require the knowledge of a likelihood function and can be easily applied in diverse applications, such as regression and classification, feature and/or order selection, clustering, and dimension reduction. Several experiments involving synthetic and real data show the advantages of the proposed scheme with benchmark techniques in the literature.
[592] vixra:2209.0123 [pdf]
Spectral Information Criterion for Automatic Elbow Detection
We introduce a generalized information criterion that contains other well-known information criteria, such as the Bayesian information criterion (BIC) and the Akaike information criterion (AIC), as special cases. Furthermore, the proposed spectral information criterion (SIC) is also more general than the other information criteria, e.g., since knowledge of a likelihood function is not strictly required. SIC extracts geometric features of the error curve and, as a consequence, it can be considered an automatic elbow detector. SIC provides a subset of all possible models, with a cardinality that is often much smaller than the total number of possible models. The elements of this subset are "elbows" of the error curve. A practical rule for selecting a unique model within the set of elbows is suggested as well. Theoretical invariance properties of SIC are analyzed. Moreover, we test SIC in ideal scenarios, where it always provides the optimal expected results. We also test SIC in several numerical experiments: some involving synthetic data, and two experiments involving real datasets. They cover real-world applications such as clustering, variable selection, and polynomial order selection, to name a few. The results show the benefits of the proposed scheme. Matlab code related to the experiments is also provided. Possible future research lines are finally discussed.
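The geometric flavor of automatic elbow detection can be illustrated with a simple baseline, the maximum-distance-to-chord rule; this is an illustration only, not the SIC of the paper:

```python
from math import hypot

def elbow_index(errors):
    """Return the index of the 'elbow' of a decreasing error curve:
    the point farthest from the chord joining the first and last
    points (a common geometric baseline, not the paper's SIC)."""
    n = len(errors)
    x0, y0 = 0.0, errors[0]
    x1, y1 = float(n - 1), errors[-1]
    chord = hypot(x1 - x0, y1 - y0)
    best_i, best_d = 0, -1.0
    for i, e in enumerate(errors):
        # perpendicular distance from (i, e) to the chord
        d = abs((x1 - x0) * (e - y0) - (y1 - y0) * (i - x0)) / chord
        if d > best_d:
            best_i, best_d = i, d
    return best_i

# Error curve that flattens after the third model:
print(elbow_index([10.0, 5.0, 2.5, 2.0, 1.9, 1.85]))  # -> 2
```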
[593] vixra:2209.0121 [pdf]
Synopsis of Main Theories of Gravity
There are many theories of gravity. We summarize a few points regarding the theories of gravity of Newton, Einstein, Alzofon and Ionescu. The first two are ``static'', implying that gravity is a fundamental interaction, while the latter two are ``dynamic'', deriving gravity from the structure of elementary particles and predicting gravity control. Further explanations are provided in other dedicated articles. That gravity is a quantum effect is also advocated by other researchers on the SM with discrete symmetry groups [Potter, p. 3].
[594] vixra:2209.0112 [pdf]
Falsification of the Atomic Model
A mathematical proof is presented showing that the contemporary, widely accepted model of the atom must be false. In particular, the charge distribution of separated charges violates Gauss's law. It is further shown that quantum mechanics cannot be used as an excuse for this impossible object to exist. Then a major flaw in the Rutherford gold foil experiment is discussed. Finally, the conclusion is drawn that there cannot be an atomic nucleus and that the charges, and thus the mass, must be somehow more equally distributed across the volume of the atom.
[595] vixra:2209.0107 [pdf]
Locally Uniform Approximations and Riemann Hypotheses (Fourth Revised)
This paper offers a breakthrough in proving the veracity of the original Riemann hypothesis, and extends the validity of its method to the Dedekind zeta functions, the Hecke L-functions (hence the Artin L-functions), and the Selberg class. First we parametrize the Riemann surface $\mathbf{S}$ of the $\log$-function, on which we shrink the scale of each chosen parameter; this depends on a chosen natural number $Q_{N_{0}}$, a common multiple of all the denominators derived from a pre-set choice of rational numbers approximating the values $\log(k+1)$ for integers $k$ with $0\leq k\leq N$. Then in (1.7) we define the mapping $-Q_{N_{0}}\log(\cdot)$ to pull the truncated Dirichlet $\eta$-function $f_{N}(s)$ back to be re-defined on $\mathbf{S}$; after that we shrink all the points so that their absolute values are all less than $1$ and closer to $1$. We apply the Euler transformation to the alternating series of the Dirichlet $\eta$-functions $f(s)$ defined in (1.4), then we build up the locally uniform approximation of Theorem 4.7 for $f(s)$, established on any given compact subset of the right half complex plane. In the second part we define the functions $\phi(s)$ formulated in (6.1); by a specific property of the functions $\phi(s)$, we have asymptotics (Theorem 6.5) similar to those of Theorem 4.7 and obtain the result of Theorem 6.8. With the locally uniform estimation Lemma 5.10, finally in Theorem 5.9 and Theorem 6.9 we employ Theorem 5.8 and Theorem 6.8 to solve the Riemann hypothesis for the Dedekind zeta functions, the Hecke L-functions, the Artin L-functions, and the Selberg class, for all of which the nontrivial zeros are contained in the vertical line $Re(s)=1/2$. Finally, the $\gamma(s)$-factor of each Dirichlet series $D(s)$ formulated in (1.4) has, by Theorem 6.9, neither zeros nor poles in the critical strip $0<Re(s)<1$, and the non-existence of Siegel zeros for such Dirichlet series $D(s)$ is confirmed.
[596] vixra:2209.0102 [pdf]
Non-Linear Phenomena of the Torsion Field Communication Sessions
The torsion field can be used for communication purposes because of the non-local phenomena related to the objects which generate this field. Torsion field communication (TFC), which is non-electromagnetic, is very different from the normal electromagnetic communication approach. These new properties bring some advantages. This work introduces a series of TFC experiments between Beijing and New York accomplished by the authors. A SEVA instrument was employed as the receiver in New York, and a scalar wave generator, based on two resonant Tesla coils, was used as the TF transmitter. As the authors believe, conceptions like non-locality, nonlinearity, quantum entanglement, the Field Gyroscope and the Synchronicity related to them, all having a common root, are the carriers of this non-electromagnetic phenomenon. During these sessions, some accompanying, interesting non-linear phenomena occurred.
[597] vixra:2209.0092 [pdf]
Measurement of the Frequency Drift of the Binary Star System HM Cancri (2)
The binary star system HMC (J0806.3+1527) emits gravitational waves near 6220.5 $\mu$Hz, which can be detected with superconducting gravimeters. The signal amplitude can be raised by in-phase addition of many data sets to the point where the second derivative of the orbital frequency can also be measured precisely. Knowledge of the frequency evolution is necessary to understand the transport of matter in this mysterious stellar system.
[598] vixra:2209.0091 [pdf]
An Elementary Proof of Franel Number Recurrence Relation
In this paper, we provide an elementary proof of the Franel number recurrence relation. Consequently, we derive a recurrence relation involving the sum of the third powers of binomial coefficients.
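For reference, a commonly cited form of the Franel recurrence (stated here as background, since the abstract does not display it) can be checked numerically against the defining sum of cubed binomial coefficients:

```python
from math import comb

def franel(n):
    """Franel number: the sum of the cubes of the binomial coefficients."""
    return sum(comb(n, k) ** 3 for k in range(n + 1))

# Check the standard three-term recurrence
#   n^2 f(n) = (7n^2 - 7n + 2) f(n-1) + 8 (n-1)^2 f(n-2)
for n in range(2, 20):
    lhs = n**2 * franel(n)
    rhs = (7 * n**2 - 7 * n + 2) * franel(n - 1) + 8 * (n - 1) ** 2 * franel(n - 2)
    assert lhs == rhs
```

The first few values are 1, 2, 10, 56, 346 (OEIS A000172).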
[599] vixra:2209.0089 [pdf]
Attention Weighted Fully Convolutional Neural Networks for Dermatoscopic Image Segmentation
The goal of this project was to develop a fully convolutional neural network (FCNN) capable of identifying the region of interest (ROI) in dermatoscopic images. To achieve this goal, a U-Net style model was developed for this task and enhanced with an attention module which operated on the extracted features. The addition of this attention module improved our model's semantic segmentation performance and increased pixel-level precision and recall by 4.0% and 4.6% respectively. The code used in this paper can be found on the project GitHub page: https://github.com/Michael-Blackwell/CapstoneProject
[600] vixra:2209.0086 [pdf]
The Modern Interpretation of Buddhist Cosmology (Excerpts from Parts 1, 2 and 3)
These excerpts from the first three parts of "The Modern Interpretation of Buddhist Cosmology" comprise 18 chapters, about 80% of the complete text, and are made freely available to researchers and interested readers. The selection, in three parts and eighteen chapters, offers systematic, logically consistent and verifiable answers to many still poorly resolved questions of Buddhist studies. Part 1, "Foundations" (10 chapters): based on the spatial scales of certain objects described in the sutras, we argue, from small to large, for a relation of "correspondence without equivalence" between them and many things in the world we observe. For several questions of Buddhist scripture and legend that have remained open for millennia, we give verifiable answers consistent both with the internal logic of the texts (the scriptures corroborate one another) and with modern scientific observation, covering the objects behind such famous terms as "Mount Sumeru", "the Four Great Continents", "the heavenly palaces", "the eighteen hells" and "the three-thousandfold world system". Our study indicates that these objects are neither illusory nor purely mythological fancy. Part 2, "Extended Analysis" (5 chapters): first, we argue in detail that many things in the sutras' descriptions of "Jambudvipa" and "Uttarakuru" are closely related to familiar history (e.g., ancient Egypt), culture (e.g., the pharaohs) and natural landscapes. Second, we argue that the scriptural account of the world's "formation, abiding, decay and emptiness" contains a "three-stage nebular hypothesis" of the formation of the solar system. Third, following the internal logic of the myths, we derive the necessary attributes of the mythological objects, including location, size and time, showing the self-consistency of the mythology; our study also shows partial similarity and consistency between the mythological figures of Eastern and Western religious cultures. Further, with many cases of scientific observation we argue that a relation of "correspondence without equivalence" holds between mythological events in the sutras and natural phenomena of the real world. Finally, our analysis of the scriptural account of the origins of humanity and civilization shows it to be logically self-consistent and, to a certain degree, similar to the corresponding accounts of other religious classics (Genesis in the Old Testament), i.e., approximately matching in terminology and chronology. Part 3, "Other Questions and Controversies" (3 chapters): first, we analyze certain contradictions among different translations; our study indicates that the Tripitaka exhibits a peculiar phenomenon of highly intelligent tampering with and interference in the texts. Second, we find that some scriptural descriptions are confused or distorted, and we correct several numerical values. Finally, we respond directly to many highly controversial topics, including the authority of the textual sources, forgery of scriptures and far-fetched interpretations, and defend our readings. This article contains excerpts (Parts 1, 2 and 3) from the author's book "The Modern Interpretation of Buddhist Cosmology"; Simplified Chinese version only.
[601] vixra:2209.0069 [pdf]
Predictive Signals Obtained from Bayesian Network and the Prediction Quality
In this paper, we propose a method for learning signals related to a data frame $D_{1}$. The learning algorithm is based on the largest entropy variations of a Bayesian network. The method makes it possible to obtain an optimal Bayesian network having a high likelihood with respect to the signals $D_{1}$. From the learned optimal Bayesian network, we show how to infer new signals $D_{2}$, and we also introduce the prediction quality $\Delta_{CR}$, allowing one to evaluate the predictive quality of the inferred signals $D_{2}$. We then infer a large number (10000) of candidate signals $D_{2}$ and select the predictive signals $D_{2}^{*}$ having the best prediction quality. Once the optimal signals $D_{2}^{*}$ are obtained, we impose the same order of scatter (computed from the Mahalanobis distance) on the points of the signals $D_{2}^{*}$ as on those of the signals $D_{1}$.
[602] vixra:2209.0062 [pdf]
Measurement of the Frequency Drift of the Binary Star System HM Cancri
The binary star system HMC (J0806.3+1527) emits gravitational waves near 6220.5 $\mu$Hz, which can be detected with superconducting gravimeters. A newly developed method improves the signal-to-noise ratio of the signal so much that the second derivative of the orbital frequency can now also be precisely measured. This frequency evolution is needed to understand the transport of matter in this mysterious star system.
[603] vixra:2209.0060 [pdf]
Generalized (σ,τ)-Derivations on Associative Rings Satisfying Certain Identities
The main purpose of this paper is to study a number of results concerning the generalized (σ,τ)-derivation D associated with the derivation d of a semiprime ring and a prime ring R such that D and d are zero-power valued on R, where the mappings σ and τ act as automorphisms. Precisely, this article is divided into two sections: in the first section, we focus on the generalized (σ,τ)-derivation D associated with the derivation d of the semiprime ring and prime ring R, while in the second section, we study the effect of compositions of generalized (σ,τ)-derivations of a semiprime ring and prime ring R such that D has period (n − 1) on R, for some positive integer n.
[604] vixra:2209.0055 [pdf]
New Quantum Spin Perspective and Space-Time of Mind-Stuff
The fundamental building block of loop quantum gravity (LQG) is the spin network, which is used to quantize physical space-time in LQG. Recently, a novel notion of quantum spin was proposed using the basic concepts of the spin network. This perspective redefines the notion of quantum spin and also introduces a novel definition of the reduced Planck constant. The implications of this perspective are not limited to quantum gravity; they are also found in quantum mechanics. Using this perspective, we also propose a quantization of the mind-stuff. The similarity between physical space-time and the space-time of the mind-stuff provides novel notions for studying space-time scientifically as well as philosophically. A comparison between physical space-time and the space-time of the mind-stuff is also presented.
[605] vixra:2209.0047 [pdf]
Demystifying the Mystery of Quantum Superposition
A magician, in a way analogous to quantum mechanics, throws spectators into bewildering surprise by exhibiting a magic trick similar to quantum superposition. The trick appears strange, weird and counter-intuitive, like quantum superposition, as long as the underlying secret behind its working is unknown. In the present article, the mystery of quantum superposition is demystified at the single-quantum level. Also, the counterfactual reality and the causality in Young's double-slit and Wheeler's delayed-choice experiments, respectively, are pointed out.
[606] vixra:2209.0040 [pdf]
The Yang Algebra, Born Reciprocal Relativity Theory and Curved Phase Spaces
We begin with a review of the basics of the Yang algebra of noncommutative phase spaces and Born Reciprocal Relativity. A solution is provided for the exact analytical mapping of the non-commuting $x^\mu, p^\mu$ operator variables (associated to an $8D$ curved phase space) to the canonical $Y^A, \Pi^A$ operator variables of a flat $12D$ phase space. We explore the geometrical implications of this mapping, which provides, in the classical limit, the embedding functions $Y^A(x,p), \Pi^A(x,p)$ of an $8D$ curved phase space into a flat $12D$ phase space background. The latter embedding functions determine the functional forms of the base spacetime metric $g_{\mu\nu}(x,p)$, the fiber metric of the vertical space $h^{ab}(x,p)$, and the nonlinear connection $N_{a\mu}(x,p)$ associated with the $8D$ cotangent space of the $4D$ spacetime. A review of the mathematical tools behind curved phase spaces, Lagrange-Finsler, and Hamilton-Cartan geometries follows. This is necessary in order to answer the key question of whether or not the solutions found for $g_{\mu\nu}, h^{ab}, N_{a\mu}$ as a result of the embedding also solve the generalized gravitational vacuum field equations in the $8D$ cotangent space. We finalize with an Appendix containing the key calculations involved in solving the exact analytical mapping of the $x^\mu, p^\mu$ operator variables to the canonical $Y^A, \Pi^A$ ones.
[607] vixra:2209.0038 [pdf]
Zeta-Padé SRWS Theory with Lowest Order Approximation
In my previous preprint on SRWS-zeta theory [Y. Ueoka, viXra:2205.014, 2022], I proposed an approximation of a rough averaged summation of the typical critical Green function for the Anderson transition in the orthogonal class. In this paper, I remove the rough approximate summation for the series of the typical critical Green function by replacing the summation with an integral. A Padé approximant is used to take the summation. The perturbation series of the critical exponent nu of the localization length from the upper critical dimension is obtained. The dimensional dependence of the critical exponent is again directly related to the Riemann zeta function. A degree of freedom in the lower critical dimension improves the estimate compared with previous studies. When I fix the lower critical dimension equal to two, I obtain an estimate of the critical exponent similar to the fitting-curve estimate [E. Tarquini et al., Phys. Rev. B 95 (2017) 094204].
[608] vixra:2209.0034 [pdf]
Application of the Oscillation Symmetry to Various Theoretical Hadronic Masses
Several theoretical results for meson and baryon masses are compared to experimental data using the oscillation symmetry method. This method allows one to compare the calculated masses to the experimental ones, whereas these last are clearly less
[609] vixra:2209.0015 [pdf]
A Note on the Distortion in Length of the Tunisian Lambert Map Projection
In this note, we study the variation of the distortion in length m as a function of the geodetic latitude phi for the Tunisian Lambert map projection.
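For background (Snyder's notation, not taken from the note itself), the scale factor, i.e. the distortion in length, of a Lambert conformal conic projection on an ellipsoid can be written

```latex
m(\varphi) = \frac{n\,\rho(\varphi)\,\sqrt{1-e^{2}\sin^{2}\varphi}}{a\cos\varphi},
\qquad
\rho(\varphi) = a\,F\,t(\varphi)^{n},
\qquad
t(\varphi) = \frac{\tan\!\left(\pi/4-\varphi/2\right)}
{\left[\dfrac{1-e\sin\varphi}{1+e\sin\varphi}\right]^{e/2},}
```

where a and e are the semi-major axis and eccentricity of the ellipsoid, and the cone constant n and the constant F are fixed by the standard parallels of the projection.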
[610] vixra:2209.0009 [pdf]
A Chern-Simons Model for Baryon Asymmetry
In search of a phenomenological model that would describe physics from the Big Bang to the Standard Model (SM), we propose a model with the following properties: (i) above an energy of about $\Lambda_{cr} > 10^{16}$ GeV there are Wess-Zumino supersymmetric preons and Chern-Simons (CS) fields, (ii) at $\Lambda_{cr} \sim 10^{16}$ GeV spontaneous gauge symmetry breaking takes place in the CS sector and the generated topological mass provides an attractive interaction to equal-charge preons, (iii) well below $10^{16}$ GeV the model reduces to the standard model with essentially pointlike quarks and leptons, having a radius $\sim 1/\Lambda_{cr} \sim 10^{-31}$ m. The baryon asymmetry turns out to have a fortuitous ratio $n_B/n_\gamma \ll 1$.
[611] vixra:2209.0007 [pdf]
FaithNet: A Generative Framework in Human Mentalizing
In this paper, we first review some of the innovations in modeling mentalizing. Broadly, this involves building models of computing a World Model and Theory of Mind (ToM). A simple framework, FaithNet, is then presented, with concepts like persistence, continuity, cooperation and preference represented as faith rules. FaithNet defines a generative model that can sample faith rules. Our FaithNet utilizes a general-purpose conditioning mechanism based on cross-attention, offering computations that best explain observed real-world events under a Bayesian criterion.
[612] vixra:2209.0005 [pdf]
BeatNet: CRNN and Particle Filtering for Online Joint Beat, Downbeat and Meter Tracking
The online estimation of rhythmic information, such as beat positions, downbeat positions, and meter, is critical for many real-time music applications. Musical rhythm comprises complex hierarchical relationships across time, rendering its analysis intrinsically challenging and at times subjective. Furthermore, systems which attempt to estimate rhythmic information in real-time must be causal and must produce estimates quickly and efficiently. In this work, we introduce an online system for joint beat, downbeat, and meter tracking, which utilizes causal convolutional and recurrent layers, followed by a pair of sequential Monte Carlo particle filters applied during inference. The proposed system does not need to be primed with a time signature in order to perform downbeat tracking, and is instead able to estimate meter and adjust the predictions over time. Additionally, we propose an information gate strategy to significantly decrease the computational cost of particle filtering during the inference step, making the system much faster than previous sampling-based methods. Experiments on the GTZAN dataset, which is unseen during training, show that the system outperforms various online beat and downbeat tracking systems and achieves comparable performance to a baseline offline joint method.
[613] vixra:2209.0002 [pdf]
Doppler Effect and One-Way Speed of Light
The Doppler effect is one of the most important and most elementary phenomenon in nature. If we accept that fact, then there is no need to use any other theories to understand its essence. There is only one Doppler effect, regardless of whether the signal between the sender and the receiver travels directly or through a medium. Therefore, there is only one formula that is valid in all cases. The goal of this paper is to determine that formula. Using the obtained results, we will show that the one-way speed of light can be measured by physical experiment.
[614] vixra:2208.0172 [pdf]
A Novel 1D State Space for Efficient Music Rhythmic Analysis
Inferring music time structures has a broad range of applications in music production, processing and analysis. Scholars have proposed various methods to analyze different aspects of time structures, such as beat, downbeat, tempo and meter. Many state-of-the-art (SOFA) methods, however, are computationally expensive. This makes them inapplicable in real-world industrial settings where the scale of the music collections can be millions. This paper proposes a new state space and a semi-Markov model for music time structure analysis. The proposed approach turns the commonly used 2D state spaces into a 1D model through a jump-back reward strategy, reducing the state-space size drastically. We then utilize the proposed method for causal joint beat, downbeat, tempo, and meter tracking, and compare it against several previous methods. The proposed method delivers performance similar to the SOFA joint causal models with a much smaller state space and a more than 30-times speedup.
[615] vixra:2208.0171 [pdf]
Singing Beat Tracking With Self-supervised Front-end and Linear Transformers
Tracking beats of singing voices without the presence of musical accompaniment can find many applications in music production, automatic song arrangement, and social media interaction. Its main challenge is the lack of strong rhythmic and harmonic patterns that are important for music rhythmic analysis in general. Even for human listeners, this can be a challenging task. As a result, existing music beat tracking systems fail to deliver satisfactory performance on singing voices. In this paper, we propose singing beat tracking as a novel task, and propose the first approach to solving this task. Our approach leverages semantic information of singing voices by employing pre-trained self-supervised WavLM and DistilHuBERT speech representations as the front-end and uses a self-attention encoder layer to predict beats. To train and test the system, we obtain separated singing voices and their beat annotations using source separation and beat tracking on complete songs, followed by manual corrections. Experiments on the 741 separated vocal tracks of the GTZAN dataset show that the proposed system outperforms several state-of-the-art music beat tracking methods by a large margin in terms of beat tracking accuracy. Ablation studies also confirm the advantages of pre-trained self-supervised speech representations over generic spectral features.
[616] vixra:2208.0169 [pdf]
Correlation Method for Finding Gravitational Waves in LIGO Data
The method for finding the signal from a gravitational wave in the interferometric data of the LIGO observatory is demonstrated on the test example of the gravitational event GW150914. The method is based on correlation analysis, reasoning from the fact that the shape of the signal to be found is known, as well as on methods of computational statistics. The developed method was applied to the search for signals in the LIGO data over a 32-second time frame inside the standard data block, 16 s before and 16 s after the GW150914 event. The performance of the method and its advantages for the analysis of signals in noise are shown. The paper includes an analysis of the possibility of the existence of other useful signals in the noise signal where the chirp of the event GW150914 was detected.
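The core of correlation analysis with a known waveform can be sketched as a toy matched filter on synthetic data; this is an illustration of the idea, not the paper's LIGO pipeline:

```python
import math
import random

def cross_correlate(data, template):
    """Correlation of the known template against the data at every lag."""
    L = len(template)
    return [sum(template[i] * data[k + i] for i in range(L))
            for k in range(len(data) - L + 1)]

# A chirp-like template buried in Gaussian noise, recovered
# by locating the peak of the cross-correlation.
random.seed(1)
template = [math.sin(0.2 * i * i / 8.0) for i in range(64)]
true_offset = 200
data = [random.gauss(0.0, 0.3) for _ in range(512)]
for i, t in enumerate(template):
    data[true_offset + i] += t

corr = cross_correlate(data, template)
detected = max(range(len(corr)), key=corr.__getitem__)
```

The peak of `corr` sits at the lag where the template aligns with the buried signal; the rapidly varying phase of the chirp makes off-peak correlations small.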
[617] vixra:2208.0166 [pdf]
Analysis of the 2021 Instant Run-Off Elections in Utah
We analyzed the 2021 Ranked Choice Voting elections in Utah County and Moab (the county seat of Grand County), focusing on the Instant Runoff Voting (IRV) algorithm. We found three issues: (1) The fractions of ballots discarded and those that needed rectification exceeded 10% in 7 of the 17 races (across 4 municipalities), indicating a considerable degree of voter confusion. (2) Four different election pathologies were detected in the Council (Seat 1) race in Moab: failure to elect a consensus winner, monotonicity failure and two participation failures. A ``spoiler'' candidate, which IRV proponents have claimed this method prevents, was also detected. (3) Four towns elected two seats by discarding the winner of the first seat, then re-running IRV with the resulting modified ballots to determine the winner of the second seat. In all four cases this caused the second-place finisher in the first election not to win the second seat but to finish second again, somewhat frustratingly for them. Overall, IRV elected the same winner as standard plurality in 15 of 16 races with >= 3 candidates, with the single changed outcome in the Vineyard City Council (Seat 2) race. We also analyzed the data from a recount viewpoint, implementing the Utah Election Code rules for automatic recounts versus the ``Exact Margin of Victory (EMOV)'' method of Blom et al. According to the former, a recount was justified in two races, namely the Springville (4yr) and Moab City Councils, but only Moab was actually recounted. However, EMOV showed that recounting Springville would almost certainly have been inconsequential. As far as we know, this is the first analysis of its kind of IRV elections in the state of Utah, and it highlights the paradoxical properties of the IRV algorithm, often incorrectly dismissed as too rare to worry about, showing that these unfortunately do occur in real-world elections and hence really are worrisome. In conjunction with the above-mentioned ballot issues, these problems cast doubt on the wisdom of IRV for Utah. We believe there are better alternatives to IRV, e.g. ``range voting,'' and these should become part of a debate towards fundamentally rethinking the program.
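The IRV elimination loop the analysis refers to can be sketched in a few lines; the ballot profile below is a hypothetical "center squeeze" illustrating the consensus-winner failure mentioned above (the pairwise-preferred candidate B is eliminated first, and the IRV winner differs from the plurality winner):

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the
    fewest first-choice votes, transferring each ballot to its next
    surviving choice, until some candidate holds a majority."""
    active = {c for b in ballots for c in b}
    while True:
        counts = Counter(next(c for c in b if c in active)
                         for b in ballots if any(c in active for c in b))
        total = sum(counts.values())
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > total or len(active) == 1:
            return leader
        active.remove(min(counts, key=counts.get))

# Hypothetical profile: B beats A and C head-to-head (Condorcet winner),
# yet B has the fewest first choices and is eliminated first.
ballots = ([("A", "B", "C")] * 4
           + [("C", "B", "A")] * 3
           + [("B", "C", "A")] * 2)
```

Here plurality elects A (4 first-choice votes), but under IRV B's two ballots transfer to C, who wins 5-4.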
[618] vixra:2208.0163 [pdf]
Unusual Boards and Meeples
We introduce boards other than the usual chessboard. Further, we define meeples which can move in ways other than the usual chess meeples. We ask whether these meeples can reach every field, as a knight can reach every field on the chessboard.
[619] vixra:2208.0158 [pdf]
A Sheaf on a Lattice
A sheaf is constructed on a topological space. But a topological space is a bounded distributive lattice. Hence we may construct a sheaf of lattices on a bounded distributive lattice. Then we define a stalk of the sheaf at a chain in a bounded distributive lattice. And we define a morphism of sheaves, which is induced by a homomorphism of the bounded distributive lattices. Then the kernel and image of the morphism are subsheaves. A sheaf is obtained by gluing sheaves together.
[620] vixra:2208.0154 [pdf]
A Formula for Electron Mass Calculation Depending Only on the Four Fundamental Physical Constants
Here we present a novel formula for the mass of the electron as a function of only four fundamental constants of electromagnetism and gravitation: the fine structure constant, Planck's constant, the speed of light, and the gravitational constant. The result is obtained by a derivation in three steps. First, the electron is modeled as a decreasing standing wave, a solution of the propagation equation coming from Maxwell's relations with a particular wave number k0, so that the internal energy is distributed according to the square of the amplitude of the wave. Second, the hypothesis is made that the field of the wave has lower and upper bounds; assuming quantization of the kinetic momentum, we find that the logarithm of the quotient of these bounds can be identified with the inverse of the fine structure constant. Third, it is proposed that vacuum space is a granular fluid medium; the value k0 is such that it implies opacity of the electron wave to gravitons, which are the elementary dynamic corpuscles of the fluid.
[621] vixra:2208.0148 [pdf]
Sur L'Unification des Réseaux Géodésiques : Cas de la Société Nationale des Industries Minières de Mauritanie (SNIM)
This paper presents some solutions on how to unify the geodetic terrestrial networks established by the geometers of the National Mauritanian Society of the Mining Industries (SNIM).
[622] vixra:2208.0146 [pdf]
Some Philosophical Point of View Around Special and General Theory of Relativity and a Solution on Nature of Gravity and Time
In this paper, I show how the circular definition in special relativity and the unknowable question in general relativity become clear with a new definition of time. In the end, I find the creation of everything out of space and motion.
[623] vixra:2208.0131 [pdf]
On the General Erdős-Moser Equation Via the Notion of Olloids
We introduce and develop the notion of the olloid. We apply this notion to study a variant and a generalized version of the Erdős-Moser equation under some special local condition.
[624] vixra:2208.0130 [pdf]
Possible Modification of Coulomb's law at Low Field Strengths
Assuming that the analogy between Gravity and Electricity is universal, and that modification of gravity is favoured over modification of Newton's Second Law in order to solve the problem of flat galaxy rotation curves, I show that there exists a critical scale $E_0 = a_0/\sqrt{4\pi\epsilon_0 G}$ for electric field strength, which is approximately 1.39 volts per meter. If the assumptions are sound, for field strengths well below this value Coulomb's law must be reconsidered.
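The quoted critical field can be checked numerically. This is a minimal sketch; the abstract does not state which value of the MOND acceleration scale it uses, so the commonly quoted $a_0 \approx 1.2\times 10^{-10}\,$m/s$^2$ is assumed here.

```python
import math

# SI constants; a0 is the MOND critical acceleration (assumed value).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
a0 = 1.2e-10             # MOND acceleration scale, m/s^2 (assumption)

# Critical electric field strength from the gravity/electricity analogy:
E0 = a0 / math.sqrt(4 * math.pi * eps0 * G)
print(f"E_0 = {E0:.2f} V/m")   # close to the quoted ~1.39 V/m
```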
[625] vixra:2208.0125 [pdf]
Investigation of the Gravitational Wave of the Binary Star System J0651+2844
The binary star system J0651 is expected to be one of the brightest sources of gravitational waves in our galaxy. Despite its known frequency, the radiation could not be detected so far. A new method eliminates the strong phase modulation caused by the Earth's orbit and drastically reduces the bandwidth. Therefore, the GW may be identified in the records of numerous superconducting gravimeters. The determined parameters of the GW agree well with the results of previous observations in the optical domain and the predictions of relativity.
[626] vixra:2208.0111 [pdf]
Recurrence for the Atkinson-Steenwijk Integrals for Resistors in the Infinite Triangular Lattice
The integrals $R_{n,n}$ obtained by Atkinson and van Steenwijk for the resistance between points of an infinite set of unit resistors on the triangular lattice obey P-finite recurrences. The main cause of these are similarities uncovered by partial integrations of their integral representations with algebraic kernels. All $R_{n,p}$ resistances to points with integer coordinates n and p relative to an origin in the lattice can be derived recursively.
[627] vixra:2208.0093 [pdf]
Dictionary of all Scriptures and Myths by G. A. Gaskell and the Graphical law
We study the Dictionary of all Scriptures and Myths by G. A. Gaskell. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.04$), i.e. the Bethe-Peierls curve in the presence of four nearest neighbours and a little external magnetic field, H, with $\beta H = 0.04$. $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature and $k_{B}$ is the Boltzmann constant.
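The normalised log-log construction described above (used again in several entries below) can be sketched as follows. The headword list is a hypothetical stand-in for Gaskell's dictionary, and the fit against the BP(4, $\beta H$) magnetisation curve itself is omitted.

```python
import math
from collections import Counter

# Hypothetical stand-in for dictionary headwords; the study uses the
# actual per-letter entry counts of the dictionary under analysis.
headwords = ["abacus", "abbey", "beacon", "bear", "bell", "cat",
             "cedar", "cell", "cliff", "dawn", "dew", "echo"]

counts = Counter(w[0] for w in headwords)
# Rank letters by descending entry count (rank 1 = most entries).
ranked = sorted(counts.values(), reverse=True)
n_max = ranked[0]
k_max = len(ranked)
# Normalised log-log coordinates: ln(rank/max rank) vs ln(count/max count),
# the data compared against the Bethe-Peierls magnetisation curve.
points = [(math.log(k / k_max), math.log(n / n_max))
          for k, n in enumerate(ranked, start=1)]
for x, y in points:
    print(f"{x:+.3f} {y:+.3f}")
```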
[628] vixra:2208.0079 [pdf]
A Proposal for More Economic Fuel Use at Lagrange Points
This paper demonstrates why, regarding fuel consumption, it is more sensible to perform stationkeeping at a Lagrange point as often as possible, i.e. when the thrust needed is greater than 12 cm/s for the James Webb Space Telescope. Fuel can be saved by striving to correct the orbit each time as early as manageable. With such a change, the conservative estimate for the mission lifetime increase is over 12%.
[629] vixra:2208.0075 [pdf]
A Dictionary of Zoology by Michael Allaby and the Graphical Law
We study A Dictionary of Zoology, the fourth edition, by Michael Allaby from the Oxford University Press. We draw the natural logarithm of the number of head entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.01$), i.e. the Bethe-Peierls curve in the presence of four nearest neighbours and a little external magnetic field, with $\beta H = 0.01$. $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature and $k_{B}$ is the Boltzmann constant.
[630] vixra:2208.0071 [pdf]
The Promised Science
The aim of the paper is to unify the natural sciences with the human sciences. Unification rests on the foundation of the original thesis at the center of this paper, which concerns the Ontology of Being. In reality, in the absence of an Ontology of Being, every science, insofar as it concerns reality, is up in the air, unaware of its foundation. The method of investigation consists in testing its explanatory and clarifying power by addressing the fundamental questions of both areas, in particular physics, philosophy, and the mind-body problem. A true Ontology of Being, in fact, must be able to bring all the sciences back to itself, and therefore to clarify their structure as they branch out, explaining them in their being identical and in their being different. The result is that the proposed ontological thesis, which derives from the synergy of the acquired results of both sciences, reacts on them, purifying their foundations and therefore paving the way to their safer advancement. Indeed, the natural and human sciences, each in their own domains and with their own methods, apparently so distant, concern the two opposite moments of the same being. In other words, they are complementary to each other, as the act is to the potency, as external is to internal, as death is to life, as the consciousness is to the soul, as electricity is to gravitation; each makes no sense without the other but needs the other and is completed in the other. The promised science is the overcoming in synthesis of these two opposite moments.
[631] vixra:2208.0066 [pdf]
Tribute to The Memory of My Friend and Colleague Abdelkader Sellal, Engineer Geodesist (1946-2017) - V3- September 2022 -
This document presents a tribute to the memory of my colleague and friend Abdelkader Sellal, a retired geodesist engineer from the Algerian Institute of Cartography and Remote Sensing. I published a report he had written in 2004, where he had proposed projects concerning the modernization of Algerian Geodesy, in particular: establishment of a Network of Permanent GPS Stations and a Basic Geodetic Network; GPS calculations, re-adjustment and redefinition of the Algerian national geodetic network using adjustment software; definition of a System and a Reference of Altitudes in Algeria; and the Geoid: studies on the determination of a gravimetric geoid in Algeria.
[632] vixra:2208.0063 [pdf]
Quantum Impedance Networks of Dark Matter and Energy
Dark matter has two independent origins in the impedance model. Geometrically, extending two-component Dirac spinors to the full 3D Pauli algebra eight-component wavefunction permits calculating quantum impedance networks of wavefunction interactions. Impedance matching governs amplitude and phase of energy flow. While the vacuum wavefunction is the same at all scales, flux quantization of wavefunction components yields different energies and physics as scale changes, with corresponding enormous impedance mismatches when moving far from Compton wavelengths, decoupling the dynamics. Topologically, extending wavefunctions to the full eight components introduces magnetic charge, the pseudoscalar dual of scalar electric charge. Coupling to the photon is the reciprocal of electric, inverting the fundamental lengths (Rydberg, Bohr, classical, and Higgs) about the charge-free Compton wavelength $\lambda = h/mc$. To radiate a photon, Bohr cannot be inside Compton, Rydberg inside Bohr,... Topological inversion renders magnetic charge `dark'. Dark energy mixes geometry and topology, translation and rotation gauge fields. Impedance matching to the Planck length event horizon exposes an identity between gravitation and mismatched electromagnetism. Fields of wavefunction components propagate away from the confinement scale, and are reflected back by the vacuum wavefunction mismatches they excite. This attenuation of the `Hawking graviton' wavefunction results in exponentially increasing wavelengths, ultimately greater than the radius of the observable universe. Graviton oscillation between translation and rotation gauge fields exchanges linear and angular momentum, and is an invitation to modified Newtonian dynamics.
[633] vixra:2208.0050 [pdf]
An Example of the Division by Zero Calculus Appeared in Conformal Mappings
We introduce an interesting example of conformal mappings (the Joukowski transform) from the viewpoint of the division by zero calculus. We give an interpretation of the identity, for $a > b > 0$, $$\frac{\rho + 1/\rho}{\rho - 1/\rho} = \frac{a}{b}, \quad \rho = \sqrt{\frac{a+b}{a-b}},$$ for the case $a = b$.
[634] vixra:2208.0049 [pdf]
Make Two 3D Vectors Parallel by Rotating Them Around Separate Axes
To help fill the need for examples of introductory-level problems that have been solved via Geometric Algebra (GA), we show how to calculate the angle through which two unit vectors must be rotated in order to be parallel to each other. Among the ideas that we use are a transformation of the usual GA formula for rotations, and the use of GA products to eliminate variables in simultaneous equations. We will show the benefits of (1) examining an interactive GeoGebra construction before attempting a solution, and (2) considering a range of implications of the given information.
[635] vixra:2208.0041 [pdf]
General Base Decimals with the P-Series of Calculus Shows All Zeta(n) Irrational
We give a new approach to the question of whether or not all integer arguments of Zeta greater than one are irrational. Currently only Zeta(2n) and Zeta(3) are known to be irrational. We show that using the denominators of the terms of Zeta(n)-1 = z_n as decimal bases gives all rational numbers in (0,1) as single decimals (property one). We also show the partial sums of z_n are not given by such single digits when the denominators of the partial sums' terms are used as number bases (property two). Next, using integrals for the p-series, contracting upper and lower bounds for the partial sum remainders of z_n are generated. Assuming z_n is rational, it is expressible as a single decimal using the denominator of a term of z_n (property one), and eventually these bounds will consist of infinite decimals (property two) with their first decimal equal to this single decimal. But as no single decimal can be between two infinite decimals with the same first digit, a contradiction is derived and all z_n are proven irrational.
[636] vixra:2208.0037 [pdf]
Energy is Conserved in General Relativity
In this article, the author demonstrates that there is a huge contradiction between the statements made in the famous literature about general relativity regarding the vanishing covariant divergence of the energy-momentum tensor of matter representing a conservation law. It is reasoned which of these contradictory standpoints are correct and which are not. The author points out why pseudotensors cannot represent the energy density of the gravitational field. Contrary to the statements in the famous literature about general relativity, the energy density of the gravitational field is shown to be described by a tensor. Moreover, the author demonstrates that in general relativity there necessarily exists the conservation of total energy, momentum, and stress regarding the completed version of Einstein's field equations which is that one with the cosmological constant, whereby the latter one takes on a completely new meaning that solves the cosmological constant problem. This new interpretation of the cosmological constant also explains the dark energy and the dark matter phenomenon. The modified Poisson equation, that is obtained from Einstein's field equations with the cosmological constant in the limit of weak gravitational fields, approximately meets the requirement of conservation of total energy in Newton's theory of gravity, whereby flat rotation curves of spiral galaxies are obtained.
[637] vixra:2208.0027 [pdf]
Towards Settling Doubly Special Relativity
In viXra:2201.0082, I put forward a method to resolve the contradiction between the existence of Planck units (length, time, mass) and Special Relativity (length contraction, time dilation, relativistic mass). In this note, I present the final solution, but postpone the derivation and implications to a detailed future treatment.
[638] vixra:2208.0025 [pdf]
Fermat's Last Theorem: A Proof by Contradiction
In this paper I offer an algebraic proof by contradiction of Fermat's Last Theorem. Using an alternative to the standard binomial expansion, $(a+b)^n = a^n + b\sum_{i=1}^{n} a^{n-i}(a+b)^{i-1}$, with a and b nonzero integers and n a positive integer, I show that a simple rewrite of the equation stating the theorem, $A^p + (A+b)^p = (2A+b-c)^p$, with A, b and c positive integers, entails the contradiction of two positive integers that sum to less than zero, $$(2f+g)(f+g)(f+g+b)\sum_{i=1}^{p-2}(2f+g)^{p-2-i}(3f+2g+b)^{i-1} + (f+b)(f+g)(3f+2g+b)^{p-2} + fb(3f+2g+b)^{p-2} < 0,$$ with f and g positive integers. This contradiction shows that the rewrite has no non-trivial positive integer solutions and proves Fermat's Last Theorem.
[639] vixra:2208.0024 [pdf]
Unified Modeling that Explains Dark Matter Data, Dark Energy Effects, and Galaxy Formation Stages
Physics lacks a confirmed description of dark matter, has yet to develop an adequate understanding of dark energy, and includes unverified conjectures regarding new elementary particles. This essay features modeling that addresses those problems and explains otherwise unexplained data. Our modeling starts from five bases — multipole expansions for the electromagnetic and gravitational fields associated with an object, the list of known elementary particles, some aspects of mathematics for isotropic harmonic oscillators, concordance cosmology, and a conjecture that the universe includes six isomers of most elementary particles. The multipole expansions — which have use in conjunction with Newtonian kinematics modeling, special relativity, and general relativity — lead to a catalog of kinematics properties such as charge, magnetic moment, mass, and repulsive gravitational pressure. The multipole expansions also point to all known elementary particles, some properties of those particles, and properties of some would-be elementary bosons and elementary fermions. The harmonic-oscillator mathematics points to Gauge symmetries regarding some elementary bosons. The would-be elementary fermions lack charge and would measure as dark matter. The conjecture regarding six isomers of most elementary particles rounds out and dominates our specification for dark matter. Five of the isomers form the basis for most dark matter. Our modeling explains ranges of observed ratios of dark matter effects to ordinary matter effects — for the universe, galaxy clusters, two sets of galaxies observed at high redshifts, three sets of galaxies observed at modest redshifts, and one type of depletion of cosmic microwave background radiation. 
Our description of repulsive gravitational pressure points toward resolution for tensions — between data and modeling — regarding the recent rate of expansion of the universe, resolution for possible tensions regarding large-scale clumping, and resolution for possible tensions regarding interactions between neighboring galaxies. Our work regarding gravity, dark matter, and elementary particles suggests characterizations for eras that might precede the inflationary epoch, a mechanism that might have produced baryon asymmetry, mechanisms that govern the rate of expansion of the universe, and insight about galaxy formation and evolution.
[640] vixra:2208.0019 [pdf]
Approximating Roots and π Using Pythagorean Triples
Methods approximating the square root of a number use recursive sequences. They do not have a simple formula for generating the seed value for the approximation, so instead they use various algorithms for choosing the first term of the sequences. Section 1 introduces a new option, based upon the number of digits of the radicand, for selecting the first term. This new option works well at all scales. This first term will then be used in a traditional recursive sequence used to approximate roots. Section 2 will apply the method shown in Section 1 to approximate pi using Archimedes' method, which then no longer requires different algorithms at different scales for seed values. Section 3 will introduce new recursive sequences for approximating roots using Pythagorean triples. Section 4 will then use the same new method to approximate pi.
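The digit-count seeding idea can be sketched as follows. This is an illustration under assumptions: the classical Babylonian recursion stands in for the paper's "traditional recursive sequence", and the seed rule $10^{\lceil d/2 \rceil}$ for a $d$-digit radicand is a stand-in for the exact formula of Section 1.

```python
def sqrt_approx(n, iterations=8):
    """Approximate sqrt(n) with the Babylonian recursion x -> (x + n/x)/2,
    seeded from the digit count of n: a d-digit number has a square root
    near 10^(d/2), so 10^ceil(d/2) is a workable seed at every scale.
    (Illustrative seed rule, not the paper's exact formula.)"""
    d = len(str(int(n)))
    x = 10.0 ** ((d + 1) // 2)      # digit-based seed
    for _ in range(iterations):
        x = (x + n / x) / 2
    return x

print(sqrt_approx(2))          # ~1.41421356
print(sqrt_approx(12345678))   # ~3513.64
```

Because the seed is already within a factor of about 10 of the true root, the quadratically convergent recursion needs only a handful of iterations at any magnitude.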
[641] vixra:2208.0018 [pdf]
A Mystery is Solved: Gravitational Waves Generate the Constant Hum of the Earth
25 years ago, weak permanent oscillations were discovered in the records of gravimeters, which are not excited by earthquakes and are also not natural resonances of the Earth. Proposed causes like wind and ocean waves are ruled out because of their unreliability. The oscillation at 836.69 $\mu$Hz exhibits the most typical modulations expected for gravitational waves. Their frequency stability can also explain the surprising, previously unknown phase coherence discovered in the records of eight gravimeters. The gravitational waves emitted by the countless binary star systems in our galaxy are probably the cause of the strikingly strong background noise of gravimeters.
[642] vixra:2208.0016 [pdf]
A Lower Bound for Length of Addition Chains
In this paper we show that the shortest length $\iota(n)$ of addition chains producing numbers of the form $2^n-1$ satisfies the lower bound $$\iota(2^n-1) \geq n + \left\lfloor \frac{\log(n-1)}{\log 2} \right\rfloor$$ where $\lfloor \cdot \rfloor$ denotes the floor function.
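The bound can be checked against exhaustive search for small n. The brute-force chain search below is only a verification aid, not the paper's method; it assumes the standard facts that a shortest chain may be taken strictly increasing and never needs elements above the target.

```python
import math

def shortest_chain_length(target):
    """Brute-force shortest addition chain length via iterative deepening.
    Feasible only for small targets; used here to check the bound."""
    if target == 1:
        return 0
    def dfs(chain, depth):
        last = chain[-1]
        if last == target:
            return True
        if depth == 0 or last << depth < target:   # doubling-bound prune
            return False
        sums = sorted({a + b for a in chain for b in chain
                       if last < a + b <= target}, reverse=True)
        return any(dfs(chain + [s], depth - 1) for s in sums)
    length = 1
    while not dfs([1], length):
        length += 1
    return length

for n in range(2, 7):
    target = 2 ** n - 1
    bound = n + math.floor(math.log2(n - 1))
    actual = shortest_chain_length(target)
    print(n, target, bound, actual)
    assert bound <= actual   # the paper's lower bound holds
```

For n = 2..6 the bound gives 2, 4, 5, 7, 8, which the search confirms are exactly the shortest chain lengths for 3, 7, 15, 31, 63.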
[643] vixra:2208.0007 [pdf]
A Simple Proof that Goldbach's Conjecture is True
An induction proof shows Goldbach's conjecture is correct. It is as simple as can be imagined. A table consisting of two rows is used. The lower row counts from 0 to any n and the top row counts down from 2n to n. All columns will have numbers that add to 2n. Using a sieve, all composites are crossed out and only columns with primes are left. For the base case of k=5, suppose that primes on the lower row always map to composites on the top and that this results in too many composites on the top. This is true for this base case. Suppose it is true for k=n; then the shifts and additions necessary for the k=n+1 case maintain this property of too many composites on top. The contrapositive is that there exists a prime on the bottom that maps to a prime on top, and Goldbach is established: the sum of these two primes is 2(n+1).
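The two-row table can be scanned directly. The sketch below checks, for each even number in a sample range, that some column holds two primes; this verifies instances of the conjecture, not the paper's induction argument.

```python
def is_prime(m):
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(two_n):
    """Scan the two-row table: bottom row k = 0..n, top row 2n-k.
    Return the first column in which both entries are prime."""
    for k in range(2, two_n // 2 + 1):
        if is_prime(k) and is_prime(two_n - k):
            return k, two_n - k
    return None

# Every even number in a sample range has at least one prime column.
for m in range(4, 1000, 2):
    assert goldbach_pair(m) is not None
print(goldbach_pair(100))   # (3, 97)
```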
[644] vixra:2208.0003 [pdf]
Intrinsic Correlation Between Superconductivity and Magnetism
Based on the real-space Mott insulator model, we establish a unified pairing, coherence and condensation mechanism of superconductivity. Motivated by Dirac's magnetic monopole and Maxwell's displacement current hypothesis, we demonstrate that electric and magnetic fields are intrinsically relevant. An isolated proton or electron creates an electric field, whereas a quantized proton-electron pair creates a magnetic field. The electric dipole vector of the proton-electron pair is the Ginzburg-Landau order parameter in the superconducting phase transition. The Peierls-like dimerization pairing transition of the electron-proton electric dipole lattice leads to the symmetry breaking of the Mott insulating state and the emergence of superconducting and magnetic states. This theoretical framework can comprehensively explain all superconducting phenomena. Our research sheds new light on electron spin, magnetic monopoles, and the symmetry of Maxwell's equations.
[645] vixra:2207.0183 [pdf]
The WKB Limit of the Duffin-Kemmer-Petiau Equation
The equivalent system of equations corresponding to the Duffin-Kemmer-Petiau (DKP) equation is derived and the WKB approximation of this system is found. It is proved that the Lorentz equation follows from the new DKP-Pardy system.
[646] vixra:2207.0170 [pdf]
Combinatorial Twelvefold Way, Statistical Mechanics and Inclusion Hypothesis
There are three different ways of counting microstates for indistinguishable particles and distinguishable energy levels. Two of them correspond to Bosons and Fermions (and anyons, which interpolate between the two), but the third one, which is not considered so far, is when we require a `dual' of the Exclusion Principle to hold: in each energy level (state) there must exist at least one particle. I call this `the Inclusion Hypothesis' and propose the statistics as a possibility of existence of a third kind of particles.
[647] vixra:2207.0165 [pdf]
A Dictionary of the Mikir Language by G. D. Walker and the Graphical law
We study a Dictionary of the Mikir Language by G. D. Walker. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.02$), i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0.02$. $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature, H is external magnetic field and $k_{B}$ is the Boltzmann constant.
[648] vixra:2207.0150 [pdf]
Simple O(1) Query Algorithm for Level Ancestors
This note describes a very simple O(1) query time algorithm for finding level ancestors. This is basically a serial (re)-implementation of the parallel algorithm. Earlier, Menghani and Matani described another simple algorithm; however, their algorithm takes O(log n) time to answer queries. Although the basic algorithm has preprocessing time of O(n log n), by having additional levels the preprocessing time can be reduced to almost linear or linear.
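For contrast with the note's structure, here is the naive O(1)-query baseline: precompute every root-to-node path so that a level-ancestor query is one array lookup. This uses O(n^2) space and is not the note's algorithm; it only illustrates the query interface LA(v, d).

```python
def build_ancestor_table(parent):
    """parent[v] is v's parent (parent[root] = -1).  Precompute, for every
    node, its full root-to-node path so that LA(v, d) -- the ancestor of v
    at depth d -- becomes a single array lookup."""
    n = len(parent)
    path = [None] * n
    def build(v):
        if path[v] is None:
            path[v] = ([] if parent[v] == -1 else build(parent[v])) + [v]
        return path[v]
    for v in range(n):
        build(v)
    return path

def level_ancestor(path, v, d):
    return path[v][d]          # O(1) per query

# Tiny path-shaped tree: 0 <- 1 <- 2 <- 3
parent = [-1, 0, 1, 2]
table = build_ancestor_table(parent)
print(level_ancestor(table, 3, 1))   # ancestor of node 3 at depth 1
```

The note's contribution is achieving the same O(1) query with far smaller (near-linear) preprocessing.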
[649] vixra:2207.0148 [pdf]
Erratum to "Tables of Integral Transforms" by A. Erdelyi, W. Magnus, F. Oberhettinger & F. G. Tricomi (1953), p. 61 (4)
The integral (4) on page 61 in the "Tables of Integral Transforms", the Fourier Cosine Transform of a product of a Gaussian and a symmetric sum of two Parabolic-Cylinder Functions, is erroneous. A more general integral is derived here.
[650] vixra:2207.0146 [pdf]
Generalized Attention Mechanism and Relative Position for Transformer
In this paper, we propose a generalized attention mechanism (GAM) by first suggesting a new interpretation for the self-attention mechanism of Vaswani et al. Following the interpretation, we provide descriptions of different variants of attention mechanism which together form GAM. Further, we propose a new relative position representation within the framework of GAM. This representation can be easily utilized for cases in which elements next to each other in the input sequence can be at random locations in the actual dataset/corpus.
[651] vixra:2207.0139 [pdf]
Applied Phase Modulation (for Astronomers)
Phase modulation of gravitational waves often occupies a large bandwidth and degrades the S/N. This paper explains the basics, properties and implications of PM with examples from astronomy as well as how to eliminate PM. The focus is on the search for gravitational waves.
[652] vixra:2207.0126 [pdf]
On the Notion of Quantum `indistinguishability'
Based on the possibility of `indistinguishability' not being a binary property of quantum particles I argue that allowing for fractional quanta to occur can provide a means to `distinguish' so far-indistinguishable quantum particles.
[653] vixra:2207.0121 [pdf]
A Static, Stable Universe: Curvature-Cosmology
Curvature-cosmology is a tired-light cosmology that predicts a well-defined static and stable universe. Since it is a complete challenge to the big bang paradigm, it can only be judged by its agreement with direct cosmological observations. It predicts a universe of a hydrogen plasma with a temperature of $2.456\times10^9$ K [observed: $2.62\times 10^9$ K] and a cosmic background radiation temperature of 2.736 K [observed: 2.725 K]. It has only one parameter, which is the density of the cosmic plasma. In addition this paper provides a new, simpler raw-data analysis for Type Ia supernovae which provides excellent predictions for the redshift variation of Type Ia supernova light curve width and magnitude. A new discovery is the intrinsic magnitude distribution. The analysis of 746,922 quasars provides important cosmological information on the distribution of intrinsic magnitudes and the density distribution of quasars. Other major observations that are shown to be consistent with Curvature-cosmology are: Tolman surface density, galaxy clusters, angular size, galaxy distributions, X-ray background radiation, and quasar variability. It does not need inflation, dark matter or dark energy.
[654] vixra:2207.0119 [pdf]
On a Certain Inequality on Addition Chains
In this paper we prove that there exists an addition chain producing $2^n-1$ of length $\delta(2^n-1)$ satisfying the inequality $$\delta(2^n-1) \leq 2n-1-2\left\lfloor \frac{n-1}{2^{\lfloor \frac{\log n}{\log 2} \rfloor}} \right\rfloor + \left\lfloor \frac{\log n}{\log 2} \right\rfloor$$ where $\lfloor \cdot \rfloor$ denotes the floor function.
[655] vixra:2207.0114 [pdf]
Knot in Geometrical Optics
We treat geometrical optics as an Abelian $U(1)$ local gauge theory, the same as the Abelian $U(1)$ Maxwell gauge theory. We propose that there exists a knot in the 3-dimensional Euclidean (flat) space of geometrical optics (the eikonal equation), as a consequence of the existence of a knot in Maxwell's theory in a vacuum. We formulate the Chern-Simons integral using an eikonal. We obtain the relation between the knot (the geometric optical helicity, an integer number) and the refractive index.
[656] vixra:2207.0113 [pdf]
Why Black Holes Cannot Disappear?
The antigravity force is the corresponding buoyancy force, according to the physical law of buoyancy (Archimedes' principle), but for the dynamic space. As a Universal antigravity force, it causes centrifugal accelerated motion of the galaxies with radial direction to the periphery of the Universe, and as a nuclear antigravity force it is that on which the architecture of the nuclei model is based. Also, as a particulate antigravity force, it prevents the further gravitational collapse and destruction of the vacuum bubbles (Higgs bosons) in the core of the neutrons that build the black holes in the form of grid space matter, consisting of polyhedral cells, like bubbles in a foamed liquid. Therefore, matter has the same fundamental form both during the beginning of the Genesis of the primary neutron and during its final gravitational collapse in the cores of the stars.
[657] vixra:2207.0107 [pdf]
The Clifford-Yang Algebra, Noncommutative Clifford Phase Spaces and the Deformed Quantum Oscillator
Starting with a brief review of our prior construction of $n$-ary algebras in noncommutative Clifford spaces, we proceed to construct in full detail the Clifford-Yang algebra, which is an extension of the Yang algebra in noncommutative phase spaces. The Clifford-Yang algebra allows one to write down the commutators of the $noncommutative$ polyvector-valued coordinates and momenta which are compatible with the Jacobi identities and the Weyl-Heisenberg algebra, and paves the way for a formulation of Quantum Mechanics in Noncommutative Clifford spaces. We continue with a detailed study of the isotropic $3D$ quantum oscillator in $noncommutative$ spaces and find the energy eigenvalues and eigenfunctions. These findings differ considerably from the ordinary quantum oscillator in commutative spaces. We find that QM in noncommutative spaces leads to very different solutions, eigenvalues, and uncertainty relations than ordinary QM in commutative spaces. The generalization of QM to noncommutative Clifford (phase) spaces is attained via the Clifford-Yang algebra. The operators are now given by the generalized angular momentum operators involving polyvector coordinates and momenta. The eigenfunctions (wave functions) are now more complicated functions of the polyvector coordinates. We conclude with some important remarks.
[658] vixra:2207.0103 [pdf]
Integration Of Ampere Force And Tripled Railgun Design
This paper presents an analytical integration of Ampere's force law. It then estimates the operational characteristics of a railgun based on Ampere's force law. Operating at a current of 300 kA, a 4 m long tripled railgun may fire a 1 kg projectile reaching an exit speed of 2020 m/s (Mach 5.9) with a kinetic energy of 2.04 MJ. It is estimated that the ohmic loss is just about 3% of the kinetic energy. When the operation of the railgun is analyzed based on the Lorentz magnetic force, there is difficulty in identifying the precise seat of the railgun recoil. In contrast, the analysis based on Ampere's force law can precisely specify the seat of recoil of the railgun; it is at the `empty space' in the interface separating the atoms of the rails and the atoms of the gun breech. During firing, contrary to expectation, the rails would be under tension and not compression.
[659] vixra:2207.0097 [pdf]
Simulating the Trajectory of Differently Charged Particles in Cyclotron Using Gnu Octave
Computer simulation is essential to experimental physics research, as well as to the design, commissioning and operation of particle accelerators. In this work, we have tried to simulate the particle trajectories for different input values. From the simulation, we can observe the radius of each particle's trajectory. In particular, we have considered the electron, helium atom, proton, tritium atom, and deuteron, and obtained their corresponding trajectories. The whole process is carried out in GNU Octave.
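The paper's simulations are in GNU Octave; as a language-neutral sketch (Python here), the key observable, the cyclotron radius r = mv/(|q|B), can be computed directly for some of the listed particles. Field strength and speed are illustrative values, not the paper's inputs.

```python
# Approximate SI particle data; B and v are illustrative assumptions.
q_e = 1.602176634e-19          # elementary charge, C
particles = {
    "electron": (9.109e-31, -q_e),   # mass (kg), charge (C)
    "proton":   (1.673e-27,  q_e),
    "deuteron": (3.344e-27,  q_e),
}

B = 1.5       # magnetic flux density, T (illustrative)
v = 1.0e6     # speed perpendicular to B, m/s (illustrative)

# In a uniform field the trajectory is a circle of radius r = m v / (|q| B),
# the quantity read off from the simulated trajectories.
for name, (m, q) in particles.items():
    r = m * v / (abs(q) * B)
    print(f"{name:9s} r = {r:.3e} m")
```

Heavier particles at the same speed trace proportionally larger circles, which is exactly the ordering visible in such trajectory plots.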
[660] vixra:2207.0095 [pdf]
On a Related Thompson Problem in $\mathbb{R}^k$
In this paper we study the global electrostatic energy behaviour of mutually repelling charged electrons on the surface of a unit-radius sphere. Using the method of compression, we show that the total electrostatic energy $U_k(N)$ of $N$ mutually repelling particles on a sphere of unit radius in $\mathbb{R}^k$ satisfies the lower bound $$U_k(N) \gg_{\epsilon} \frac{N^{2}}{\sqrt{k}}.$$
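The $N^2/\sqrt{k}$ scale can be illustrated numerically: even a random configuration of unit charges on the sphere in $\mathbb{R}^k$ has energy of that order. A random configuration only upper-bounds the minimum-energy one, so this illustrates the scale of the bound rather than verifying it.

```python
import math, random

def random_unit_vector(k, rng):
    # Normalised Gaussian sampling is uniform on the (k-1)-sphere.
    v = [rng.gauss(0, 1) for _ in range(k)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def coulomb_energy(points):
    """Total electrostatic energy sum_{i<j} 1/|x_i - x_j| of unit charges."""
    e = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(points[i], points[j])))
            e += 1.0 / d
    return e

rng = random.Random(0)
k, N = 4, 40
pts = [random_unit_vector(k, rng) for _ in range(N)]
u = coulomb_energy(pts)
print(f"U_{k}({N}) approx {u:.1f},  N^2/sqrt(k) = {N * N / math.sqrt(k):.1f}")
```

Both printed quantities come out in the same few-hundreds range, consistent with the quadratic-in-N, inverse-root-in-k scaling of the stated bound.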
[661] vixra:2207.0089 [pdf]
Oxford Dictionary of Biology by Robert S. Hine and the Graphical law
We study the Oxford Dictionary of Biology, the eighth edition, by Robert S. Hine. We draw the natural logarithm of the number of entries, head as well as all, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.01$), i.e. the Bethe-Peierls curve in the presence of four nearest neighbours and a little external magnetic field, with $\beta H = 0.01$. $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature and $k_{B}$ is the Boltzmann constant.
[662] vixra:2207.0086 [pdf]
Multidimensional Numbers
A multidimensional number will not be viewed as a single real scalar value, but rather as a set of scalar values, each associated with a dimension. This gives rise to variations of "complex numbers", and consequently, Euler's formula. The properties of complex numbers, such as the product of magnitudes being equal to the magnitude of the products, may also be applicable in the case of multidimensional numbers, depending on how they are constructed. Although similar ideas exist, such as a hypercomplex number, the differences will be discussed.
[663] vixra:2207.0077 [pdf]
The Introduction of the Entraining Force and Its Consequences for the Interpretation of Red-Shift
The main intention of this paper is to start an open discussion about a new model with the potential to provide a better understanding of several observations within astronomy. As a first step, a model of an entraining force is introduced, based on the observation that the distances between earth and moon, and between earth and sun, are slightly increasing over the year. Further explanation is given on how to calculate the entraining force and its corresponding counter-torque, similar to the law of induction but for moving matter. An outlook is then given on how this force may contribute to a better understanding of how accretion disks or planetary ring systems form, or how it may drive the differential rotation on the surface of gas giants but also within galaxies, related to their rotation curves. In a second step, the consequences of the concept are described in relation to the current interpretation of red-shift. If the model of the entraining force is further verified, the concept of the expanding universe might have to be revised.
[664] vixra:2207.0071 [pdf]
On the Integral Inequality of Some Trigonometric Functions in $\mathbb{R}^n$
In this note, we prove the inequality \begin{align}\bigg|\int\limits_{|a_n|}^{|b_n|}\int\limits_{|a_{n-1}|}^{|b_{n-1}|}\cdots\int\limits_{|a_1|}^{|b_1|}\cos\bigg(\frac{\sqrt[4s]{\sum\limits_{j=1}^{n}x^{4s}_j}}{||\vec{a}||^{4s+1}+||\vec{b}||^{4s+1}}\bigg)dx_1dx_2\cdots dx_n\bigg|\leq \frac{\bigg|\prod_{i=1}^{n}|b_i|-|a_i|\bigg|}{|\Re(\langle a,b\rangle)|}\nonumber\end{align} and \begin{align}\bigg|\int\limits_{|a_n|}^{|b_n|}\int\limits_{|a_{n-1}|}^{|b_{n-1}|}\cdots\int\limits_{|a_1|}^{|b_1|}\sin\bigg(\frac{\sqrt[4s]{\sum\limits_{j=1}^{n}x^{4s}_j}}{||\vec{a}||^{4s+1}+||\vec{b}||^{4s+1}}\bigg)dx_1dx_2\cdots dx_n\bigg|\leq \frac{\bigg|\prod_{i=1}^{n}|b_i|-|a_i|\bigg|}{|\Im(\langle a,b\rangle)|}\nonumber\end{align} under some special conditions.
[665] vixra:2207.0070 [pdf]
The Weber Nucleus as a Classical and Quantum Mechanical System
Wilhelm Weber's electrodynamics is an action-at-a-distance theory which has the property that equal charges inside a critical radius become attractive. Weber's electrodynamics inside the critical radius can be interpreted as a classical Hamiltonian system whose kinetic energy is, however, expressed with respect to a Lorentzian metric. In this article we study the Schrödinger equation associated with this Hamiltonian system, and relate it to Weyl's theory of singular Sturm-Liouville problems.
[666] vixra:2207.0068 [pdf]
Linear Formulation of Square Peg Problem Test Function
In this paper, we develop a set of linear constraints to test whether 4 points form a square. Traditionally, people use Euclidean distances to test whether 4 points form a square: they form a square if the four sides are of equal length and the diagonals are of equal length. Our test function, using a set of linear constraints, is much simpler, avoiding the quadratic operations of the Euclidean-distance test. This is needed in the future to prove the Square Peg Problem for an arbitrary closed curve.
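The paper's exact constraint set is not reproduced in the abstract; one way such a linear test can look is the following sketch, in which, for vertices taken in cyclic order, each side must equal the previous side rotated by 90 degrees. That condition involves only linear equalities (exact coordinates, e.g. integers, are assumed):

```python
# Sketch of a linear square test (not necessarily the paper's exact
# constraints): ordered vertices A,B,C,D form a square iff each side is the
# previous side rotated by 90 degrees -- no distances or squares needed.

def is_square(a, b, c, d):
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = a, b, c, d
    sx, sy = bx - ax, by - ay                 # side vector AB
    if sx == 0 and sy == 0:                   # degenerate: all points coincide
        return False
    # BC must be AB rotated 90 deg, and CD must be the reversal of AB.
    return (cx - bx, cy - by) == (-sy, sx) and \
           (dx - cx, dy - cy) == (-sx, -sy)

print(is_square((0, 0), (1, 0), (1, 1), (0, 1)))   # unit square -> True
```

For unordered point sets one would run this test over the distinct cyclic orderings of the four points.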
[667] vixra:2207.0062 [pdf]
Wave Function Collapse Visualization
Wave Function Collapse initializes an output bitmap in a completely unobserved state, where each pixel value is in a superposition of colors of the input bitmap (so if the input was black-and-white then the unobserved states are shown in different shades of grey). The coefficients in these superpositions are real numbers, not complex numbers, so it does not do actual quantum mechanics, but it was inspired by QM. Here, we match each tile to a tile value pixel by pixel, naming each edge a "socket". Since in code the tiles would be matched in a random order, we rotate them into a specific order to match socket to socket, which indicates the overlapping of tiles as the superposition of several eigenstates. The algorithm was first introduced in 2016 by Maxim Gumin and can generate procedural patterns from a sample image or from a collection of tiles. So we are just visualizing it in a mathematical way.
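The socket idea can be sketched as follows (a toy illustration, not the authors' code): each tile stores four edge labels, a rotation cyclically shifts them, and two tiles may sit next to each other when the facing sockets agree.

```python
# Toy sketch of tile "sockets" (edge labels) and rotation. Two tiles are
# compatible neighbours when the facing sockets match.

def rotate(sockets, times=1):
    """Sockets listed as (top, right, bottom, left); rotate 90 degrees CW."""
    for _ in range(times % 4):
        top, right, bottom, left = sockets
        sockets = (left, top, right, bottom)   # old left edge becomes new top
    return sockets

def fits_right_of(a, b):
    """True if tile b may be placed immediately to the right of tile a."""
    return a[1] == b[3]          # a's right socket must equal b's left socket

pipe = ("0", "1", "0", "1")      # horizontal connector
elbow = ("1", "1", "0", "0")
print(fits_right_of(pipe, rotate(elbow, 2)))   # True after two rotations
```

During collapse, each cell keeps only the tile orientations whose sockets remain compatible with its already-decided neighbours.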
[668] vixra:2207.0060 [pdf]
The Prime Number Theorem and Prime Gaps
Suppose there exists $m > 0$ such that $g_n = O((\log p_n)^m)$. Then for all $k > 0$ there exists $M \in \mathbb{N}$ such that $n \geq M$ implies $g_n := p_{n+1} - p_n < p_n^k$, where $p_n$ is the $n$th prime number, $O$ is big-O notation, and $\log$ is the natural logarithm. This leads to corollaries for the Andrica conjecture and the Oppermann conjecture.
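The asserted gap bound can be checked numerically on a small range (an illustration only, not a proof); here the bound g_n < p_n^k is tested with k = 0.75 over the primes below 10000:

```python
# Numerical illustration (not a proof): check the gap bound g_n < p_n**k
# for k = 0.75 over consecutive primes below a small limit.

def primes_below(limit):
    """Simple sieve of Eratosthenes returning all primes < limit."""
    sieve = bytearray([1]) * limit
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_below(10_000)
k = 0.75
violations = [p for p, q in zip(ps, ps[1:]) if q - p >= p ** k]
print("violations below 10000:", violations)
```

No violations appear on this range, consistent with the statement; of course, a finite check says nothing about the asymptotic claim.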
[669] vixra:2207.0057 [pdf]
The Arrew Theorem Prover
Arrew (Arrow Rewriter) is a mathematical system (theorem prover) that allows expressing and working with formal systems. It relies on a simple substitution rule and set equality to derive theorems.
[670] vixra:2207.0056 [pdf]
Designing Potential Drugs That Can Target Sars-COV-2’s Main Protease: A Proactive Deep Transfer Learning Approach Using LSTM Architecture
Drug discovery is a crucial step in the process of delivering a new drug to the market, one that can take up to 2-3 years, which is all the more penalizing given the current global pandemic caused by the outbreak of the novel coronavirus SARS-CoV-2. Artificial Intelligence methodologies have shown great potential in resolving tasks in various domains such as image classification and sound recognition, and in recent years have proved to be the go-to for generative tasks such as music sequences and text generation, as well as for solving problems in biology. The goal of this work is to harness the power of these architectures, using a generative recurrent neural network with long short-term memory (LSTM) gating, to generate new, non-existing molecules that can bind to the main COVID-19 protease, which is a key agent in the transcription and replication of the virus, and thus can act as a potential drug that can neutralize the virus inside an infected host. As of today, there are no specific targeted therapeutic agents to treat the disease, and all existing treatments are very limited. Known drugs in clinical trials such as Hydroxychloroquine and Remdesivir showed binding energies with SARS-CoV-2's main protease of -5.3 and -6.5 respectively, while the newly generated molecules exhibited scores reaching -13.2.
[671] vixra:2207.0052 [pdf]
Numerical Derivatives
The idea of this work is to present software that allows us to quickly and numerically calculate values of f', f'', f''' and f^(IV) at the points where they are required, especially with a view to estimating the error in problems involving ordinary and partial differential equations. The calculation of these values by means of numerical methods is of great help in solving these problems, as it saves a lot of time. The routines presented have been written in Google Inc.'s Go language, following our policy of making the most of the "21st century C", which is a very fast, comfortable tool with sufficient accuracy for the proposed applications. We hope that this study will be useful for professional mathematicians as well as scientists from other areas and engineers who need to calculate the error in their equations or the rates of change associated with a whole range of physical applications.
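The authors' Go routines are not reproduced in the abstract; one common choice for numerically evaluating the first four derivatives (not necessarily the paper's exact stencils) is the set of central-difference formulas, sketched in Python:

```python
# Central-difference sketches for the first four derivatives, each O(h^2)
# accurate; step sizes are chosen to balance truncation and round-off error.
import math

def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d3(f, x, h=1e-3):
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)

def d4(f, x, h=1e-2):
    return (f(x + 2*h) - 4*f(x + h) + 6*f(x) - 4*f(x - h) + f(x - 2*h)) / h**4

x = 0.5
print(d1(math.sin, x), math.cos(x))   # the two values agree to many digits
```

Comparing such finite-difference values against known derivatives is exactly the kind of error estimation the abstract describes.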
[672] vixra:2207.0028 [pdf]
Cosmology of Inevitable Flat Space
In the combined theory of Special Relativity and Quantum Mechanics (c-SRQM), the upper limit of local acceleration is constrained to c2/A, where c is the speed of light and A is the diameter of the event horizon of the smallest black hole in nature - called the Unit Black Hole (UBH). In this article, a new cosmological model is proposed wherein the flatness of the universe is inevitable from the onset. The theory indicates that at any given moment of the cosmic evolution, the age of the universe can be expressed as some integer multiple of the cosmological time constant A/c. The integer multiple 1, signifies the end of the Big Bang at which the initial conditions undergo a sudden change. The known universe is then shown to be the observable portion of a much bigger structure - named the grand universe - which is originated from a Primordial Black Hole (PBH) expanding with the limit rate c2/A at time A/c. It is shown that the dipole in the Cosmic Microwave Background (CMB) could be explained by the anisotropy in the gravitational redshift of the grand universe. Moreover, a best fit to the observational Hubble diagram is obtained when the absolute luminosity of type Ia supernovae is constrained to 3.02e9 times that of the sun. The age of the universe is found to be 15.96e9 (years). The new age is higher than that of the standard cosmology by 2.14e9 (years), therefore, reducing the age discrepancy between the universe and the old metal-deficient stars. The actual value of the Hubble constant Ho is found to be 40.83 (km/sec/Mpc). The discrepancy with the current estimates of the constant is due to neglecting the gravitational redshift of the grand universe in the current standard cosmology.
[673] vixra:2207.0013 [pdf]
Proofs of Four Conjectures in Number Theory: Beal's Conjecture, Riemann Hypothesis, The abc and c Smaller Than R^{1.63} Conjectures
This monograph presents the proofs of 4 important conjectures in the field of number theory: the Beal conjecture; the Riemann Hypothesis; the c smaller than R^{1.63} conjecture; and the abc conjecture. We give all the proofs in detail.
[674] vixra:2207.0011 [pdf]
Proof of 16 Formulas for the Barnes Function
Several months ago, I published in the papers "Values of Barnes Function" and "Another Values of Barnes Function and Formulas" a total of 16 conjectural formulas that I found with unusual methods. So, in this article, I write the proofs of the 16 formulas.
[675] vixra:2207.0001 [pdf]
Experimental Results of a Simple Pendulum and Inelastic Collisions
In this article, theoretical and experimental values for a simple pendulum and an inelastic collision of two pendulums are calculated, through numerical methods with the Python language and analysis of the experimental data. An accuracy of 98.46% for the collision time and 97.78% for the speed during impact is demonstrated.
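The theoretical quantities involved can be sketched in a few lines of Python (assumed parameters, not the authors' experimental data): the small-angle period T = 2*pi*sqrt(L/g), the impact speed from energy conservation, and the common speed after a perfectly inelastic collision from momentum conservation.

```python
# Sketch of the textbook formulas behind a pendulum-collision experiment;
# the length and release angle below are assumed for illustration.
import math

def period(length_m, g=9.81):
    """Small-angle period of a simple pendulum."""
    return 2 * math.pi * math.sqrt(length_m / g)

def inelastic_speed(m1, v1, m2, v2=0.0):
    """Common speed after a perfectly inelastic collision (momentum conservation)."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

L = 0.50                       # assumed pendulum length (m)
theta = math.radians(30)       # assumed release angle
v_impact = math.sqrt(2 * 9.81 * L * (1 - math.cos(theta)))  # energy conservation
print(f"T = {period(L):.3f} s, impact speed = {v_impact:.3f} m/s")
```

Comparing such predictions with timed and filmed measurements yields accuracy figures like those quoted in the abstract.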
[676] vixra:2206.0168 [pdf]
Bernoulli Sums of Powers, Euler-Maclaurin Formula and Proof that Riemann Hypothesis is True
In 1859, the German mathematician Georg Friedrich Bernhard Riemann made one of his most famous publications, "On the Number of Prime Numbers less than a Given Quantity", when he was developing his explicit formula to give the exact number of primes less than a given number x, in which he conjectured that "all non-trivial zeros of the zeta function have a real part equal to 1/2". Riemann was sure of his statement, but he could not prove it, and it has remained one of the most important unproven hypotheses for 163 years. In this paper, we prove that the Riemann Hypothesis is true, based on the Bernoulli power sum, the Euler-Maclaurin formula and their relation to the Riemann zeta function.
[677] vixra:2206.0164 [pdf]
What Makes Goldbach's Conjecture Correct
A direct proof shows Goldbach's conjecture is correct. It is as simple as can be imagined. A table consisting of two rows is used: the lower row counts from 0 up to any n, and the top row counts down from 2n to n. All columns then contain pairs of numbers that add to 2n. Using a sieve, all composites are crossed out and only columns with primes are left. Without loss of generality, an example shows that primes that sum to 2n will always be left in such columns.
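The two-row table can be illustrated in Python (an illustration of the construction only, not of the claimed proof): scan the columns (a, 2n - a) for a from 0 to n and keep those where both entries are prime.

```python
# Illustration of the two-row table for 2n: bottom row counts 0..n, top row
# counts 2n down to n; every column sums to 2n, and the columns where both
# entries are prime give the Goldbach partitions of 2n.

def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def goldbach_columns(n):
    """Prime pairs (a, 2n - a) with a <= n, found by scanning the columns."""
    return [(a, 2 * n - a) for a in range(n + 1)
            if is_prime(a) and is_prime(2 * n - a)]

print(goldbach_columns(10))   # partitions of 20: [(3, 17), (7, 13)]
```

The conjecture amounts to the claim that this list is non-empty for every n >= 2.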
[678] vixra:2206.0156 [pdf]
Extension of Dirac Equation by the Charge Creation-Annihilation Field Model and Simulation for Formation Process of Atomic Orbitals by FDTD Method
We have reported that Maxwell's equations should be extended by using the charge creation-annihilation field to treat charge generation-recombination in semiconductor devices. Considering the charge creation-annihilation field, the extended Maxwell and Dirac equations can be given by the same formula, consisting of the dual 4-vector fields and an 8 by 8 spatially symmetric differential operator matrix. Using FDTD 3D simulations, we found that a Dirac field wave packet with a velocity well below that of light can be stably created without explicit consideration of Zitterbewegung, although this is difficult in 1D simulations. We calculated the Dirac field propagation in the electric central force potential and succeeded in simulating the formation process of atomic orbitals based on the extended Dirac equation by this method, without any physical approximation, for the first time. A small unstable orbital appears at first, rapidly grows, and finally becomes a large stable orbital with the same radius as the Bohr radius divided by the atomic number, as given by the Schrödinger equation. This result can be regarded as a proof of the correctness of the charge creation-annihilation field model.
[679] vixra:2206.0154 [pdf]
Energy Current And Photoelectricity Theory
This paper continues the development of the author's aether Simple Unified Theory (SUT) and the wave-pulse theory of light. The theory may be called photoelectricity, a replacement for electromagnetism based on Maxwell's equations. In contemporary electromagnetism, energy transmission in current-carrying conductors is explained by the Poynting theory: it is the surrounding magnetic fields of the conductor that are responsible for energy transmission. This paper argues that such an explanation is not convincing. It is hypothesized that the actual mechanism of energy transmission is through apulses (aether wave pulses, almost photon-like) being absorbed and re-emitted within the conductors. This is the basis of the novel concept of the energy current in an electrical circuit. The paper also touches on various related aspects of physics including Ampère's force law. An integration method for Ampère's forces is explained. Various experiments involving Ampère's longitudinal forces are re-examined. Faraday's law of electromagnetic induction for AC alternators is explained as aether apulses being emitted within the magnets, jumping the air gap, and entering the armature winding of the alternator; this is the energy-current source for the conversion from mechanical to electrical energy in AC alternators.
[680] vixra:2206.0152 [pdf]
Sixty-Six Theses: Next Steps and the Way Forward in the Modified Cosmological Model
The purpose is to review and lay out a plan for future inquiry pertaining to the modified cosmological model (MCM) and its overarching research program. The material is modularized as a catalog of open questions that seem likely to support productive research work. The main focus is quantum theory, but the material spans a breadth of physics and mathematics. Cosmology is heavily weighted, and some Millennium Prize problems are included. A comprehensive introduction contains a survey of falsifiable MCM predictions and associated experimental results. Listed problems include original ideas deserving further study as well as investigations of others' work when it may be germane. A longstanding and important conceptual hurdle in the approach to MCM quantum gravity is resolved with a framework for quantum cosmology time arrow eigenstates. A new elliptic curve application is presented. With several exceptions, the presentation is high-level and qualitative. Formal analyses are mostly relegated to the future work which is the topic of this book. Sufficient technical context is given that third parties might independently undertake the suggested work units.
[681] vixra:2206.0149 [pdf]
Simplest Integrals for the Zeta Function and its Generalizations Valid in All C
In this paper we derive the possibly simplest integral representations for the Riemann zeta function and its generalizations (the Lerch function, $\Phi(e^m,-k,b)$, the Hurwitz zeta, $\zeta(-k,b)$, and the polylogarithm, $\mathrm{Li}_{-k}(e^m)$), valid in the whole complex plane relative to all parameters, except for singularities. We also present the relations between each of these functions and their partial sums. This allows one to obtain, for example, the Taylor series expansion of $H_{-k}(n)$ about $n=0$ (when $-k$ is a positive integer, we obtain a finite Taylor series, which is nothing but the Faulhaber formula). With these relations, one can also obtain the simplest integral representation of the derivatives of the zeta function at zero. The method used requires evaluating the limit of $\Phi\left(e^{2\pi i x},-2k+1,n+1\right)+\pi i x\,\Phi\left(e^{2\pi i x},-2k,n+1\right)/k$ when $x$ goes to $0$, which in itself already constitutes an interesting problem.
[682] vixra:2206.0147 [pdf]
Similarity of a Ramanujan Formula for $\pi$ with Plouffe's Formulae, and Use of This for Searching of Physical Background for Some Guessed Formula for the Elementary Physical Constants
The paper comprises two parts. The first part discusses the similarity between one of Ramanujan's formulae for $\pi$ and Plouffe's formulae using the Bernoulli numbers. This similarity helps in determining either that the similarity is only accidental, or that we can derive the Ramanujan formula in this way. It also helps in setting up a calculation system for estimating the probabilities with which we can obtain guessed formulae for $\pi$ that are very accurate and very simple. (We only consider formulae that are not approximations of exact formulae for $\pi$.) The second part discusses various guessed formulae for the fine structure constant and for the other physical constants, and how the above probability calculation would help estimate whether these formulae have a physical basis or are only accidental.
[683] vixra:2206.0142 [pdf]
FASFA: A Novel Next-Generation Backpropagation Optimizer
This paper introduces the fast adaptive stochastic function accelerator (FASFA) for gradient-based optimization of stochastic objective functions. It works based on Nesterov-enhanced first and second momentum estimates. The method is simple and effective during implementation because it has intuitive/familiar hyperparameterization. The training dynamics can be progressive or conservative depending on the decay rate sum. It works well with a low learning rate and mini-batch size. Experiments and statistics showed convincing evidence that FASFA could be an ideal candidate for optimizing stochastic objective functions, particularly those generated by multilayer perceptrons with convolution and dropout layers. In addition, the convergence properties and regret bound provide results aligning with the online convex optimization framework. As a first of its kind, FASFA addresses the growing need for diverse optimizers by providing next-generation training dynamics for artificial intelligence algorithms. Future experiments could modify FASFA based on the infinity norm.
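The abstract does not give the FASFA update rule; as a purely hypothetical sketch of the ingredients it names, a NAdam-style step with Nesterov-enhanced first and second moment estimates might look like this (this is NOT the published FASFA algorithm):

```python
# Hypothetical sketch of an optimizer step with Nesterov-enhanced moment
# estimates (NAdam-style). NOT the published FASFA update rule.
def nesterov_adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad                 # first moment estimate
    v = b2 * v + (1 - b2) * grad * grad          # second moment estimate
    # Nesterov look-ahead: mix the current gradient back into the bias-corrected
    # first moment before taking the step.
    m_hat = (b1 * m + (1 - b1) * grad) / (1 - b1 ** (t + 1))
    v_hat = v / (1 - b2 ** (t + 1))
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# minimize f(x) = x^2 starting from x = 3.0
x, m, v = 3.0, 0.0, 0.0
for t in range(2000):
    x, m, v = nesterov_adam_step(x, 2 * x, m, v, t, lr=0.05)
print(x)   # close to the minimizer 0
```

The decay rates b1 and b2 play the role of the "decay rate sum" the abstract mentions: larger values give more conservative, heavily smoothed dynamics.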
[684] vixra:2206.0141 [pdf]
The Holographic Complexity on Extremal Branes with Exceptional Higher Derivative Interactions
The philosophy of the presented multiversum doctrina dominum article relates to the coloring of the theoretical framework with respect to holographic complexity on extremal branes in exclusive higher-dimensional representations. We examine holographic complexity in the doubly holographic model introduced in the current literature to study quantum extremal islands. We focus on the holographic complexity volume proposal for boundary subregions in the island phase. Exploiting the Fefferman-Graham expansion of the metric and other geometric quantities near the extremal brane, we derive the leading contributions to the complexity and interpret these in terms of the generalized volume of the island derived from the induced higher-curvature gravity action on the extremal brane. We discuss the interpretation of path integral optimization as a uniformization problem in even dimensions. This perspective allows for a systematic construction of the higher-dimensional path integral complexity in holographic conformal field theories in terms of Q-curvature actions. Motivated by these exceptional results, we propose a generalization of the higher-dimensional derivative actions of exotic extremal branes.
[685] vixra:2206.0128 [pdf]
The Positive Energy Conditions in a Near Black Hole.
I speculate that the positive energy conditions are maintained by quantum events, in which case I suggest it is unlikely they can be relied on in the extreme conditions associated with a black hole.
[686] vixra:2206.0125 [pdf]
Chiral Symmetry of Neutrino
In this paper, (i) a new representation for the gamma matrices is given, in which the absence of positive-helicity neutrinos and, respectively, negative-helicity antineutrinos is confirmed theoretically; (ii) the equivalence of the Dirac equation for mass m with the Proca equation for mass 2m is proved; and (iii) a discrete symmetry group for weak and strong interactions built with 4 unitary and 4 nilpotent operators is proposed. The cosmological constant predicted by the theory is Λ = 2πG(c/ℏ)^3 m^4, where m is the neutrino mass.
[687] vixra:2206.0124 [pdf]
Convergent Fine-Structure Constant Using the Lambert Function
Here a correlation to the exact fine-structure constant is found. This derivation suggests that the fine-structure constant can be theoretically determined as a Lambert function that utilizes the spectrum range of all the energy modes (radiation modes) that fit inside the observable universe, from the particle horizon down to the Planck length. Alternatively, this could also be interpreted as the Lambert function of the particle horizon in natural units. Several methods use hyperbolic geometry to achieve full convergence. A compilation of various convergent equations is found to represent the fine-structure constant.
[688] vixra:2206.0117 [pdf]
Against Finitism: A Criticism of Norman J. Wildberger
If you study mathematics you are probably aware of the foundational crisis that mathematics went through at the beginning of the 20th century. The three broad schools of thought, namely constructivism, intuitionism and formalism, collided and, judging by the approach used today by most mathematicians, we can easily say that formalism emerged victorious in some sense. However, while debates regarding the foundations of mathematics have subsided over the years, they aren't dead. One such school of mathematics which still sees considerable traffic is finitism. In this article, we will be analysing the criticism of a finitist named Norman J. Wildberger and trying to defend the current axiomatic mathematical systems against it.
[689] vixra:2206.0106 [pdf]
Regarding Geometrization of MOND
One of the candidates for a resolution of the problem of dark matter is the Modified Newtonian Dynamics, which modifies the Newtonian gravity so as to fit the data. One of the key open problems of this theory which can have important empirical consequences is that of its geometrization. In this note I argue that this problem has a simple solution: the metric tensor in MOND is not the gravitational potential itself.
[690] vixra:2206.0105 [pdf]
Some Facts about Relations and Operations of Algebras
Let A be a σ-algebra. Suppose that Θ is a congruence of A. Then Θ is a subalgebra of A×A. If φ is an automorphism from A to A, then (φ,φ) is an automorphism of A×A, and it is obvious that (φ,φ)(Θ) is a congruence of A. Let B be a σ-algebra and ψ a homomorphism from A to B. Then B′ := ψ(A) is a subalgebra of B, and (ψ,ψ)(Θ) is a congruence of B′. If ψ is an epimorphism, then (ψ,ψ)(Θ) is a congruence of B. Suppose that A is the category of all σ-algebras. Let A,B ∈ A and ψ: A → B be a homomorphism. Then the pullback A ⊓B A is isomorphic to a congruence of A. An n-ary relation of an algebra A is a subset of A^n. If the relation satisfies some conditions, then it is a subalgebra of A^n. The set of languages is a lattice. The set of compositions of the operations in a language σ forms an algebra.
[691] vixra:2206.0101 [pdf]
On the Number of Points Included in a Plane Figure with Large Pairwise Distances
Using the method of compression we show that the number of points that can be placed in a plane figure with mutual distances at least $d>0$ satisfies the lower bound \begin{align} \gg_2 d^{\epsilon}\nonumber \end{align}for some small $\epsilon>0$.
[692] vixra:2206.0096 [pdf]
The Intuitive Root of Classical Logic, an Associated Decision Problem and the Middle Way
We revisit Boole's reasoning regarding the equation ``$x.x=x$'' that sowed the seeds of classical logic. We discuss how he considered both ``$0.0=0$'' and ``$0.0\neq 0$'' in the ``same process of reasoning''. This can either be seen as a contradiction, or it can be seen as a situation where Boole could not decide whether ``$0.0=0$'' is universally valid -- an elementary ``decision problem'' in the words of Hilbert and Ackermann. We conclude that Boole's reasoning, that included a choice of ignorance, was founded upon the middle way of the Buddha, later mastered by Nagarjuna. From the modern standpoint, the situation can be likened to Turing's halting problem which resulted from the use of automatic machines and the exclusion of choice machines.
[693] vixra:2206.0095 [pdf]
The Undecidable Charge Gap and the Oil Drop Experiment
Decision problems in physics have been an active field of research for quite a few decades resulting in some interesting findings in recent years. However, such research investigations are based on a priori knowledge of theoretical computer science and the technical jargon of set theory. Here, I discuss a particular, but a significant, instance of how decision problems in physics can be realized without such specific prerequisites. I expose a hitherto unnoticed contradiction, that can be posed as a decision problem, concerning the oil drop experiment and thereby resolve it by refining the notion of ``existence'' in physics. This consequently leads to the undecidability of the charge spectral gap through the notion of ``undecidable charges'' which is in tandem with the completeness condition of a theory as was stated by Einstein, Podolsky and Rosen in their seminal work. Decision problems can now be realized in connection to basic physics, in general, rather than quantum physics, in particular, as per some recent claims.
[694] vixra:2206.0088 [pdf]
Test of Oscillation Symmetry Applied to Some Physical Properties of Various Hydrocarbons
The oscillation symmetry is applied with success to some physical properties (densities, boiling points, and melting points) of different hydrocarbons: alkanes, cycloalkanes, alkenes, alkynes, alkadienes, and polycyclic aromatic hydrocarbons. It is also applied to hydrosilicons. It allows one to tentatively predict possible values for several unknown properties. The same shape of oscillation describes, sometimes after renormalization, the "mass data" of several particle families, nuclei families and the alkane melting-point "data". The periods of oscillation exhibit discrete values, as if they were quantized.
[695] vixra:2206.0076 [pdf]
A Lower Bound for Multiple Integral of Normalized Log Distance Function in $\mathbb{R}^n$
In this note we introduce the notion of the local product on a sheet and associated space. As an application, we prove that for $\langle a,b \rangle>e^e$ then \begin{align} \int \limits_{|a_n|}^{|b_n|} \int \limits_{|a_{n-1}|}^{|b_{n-1}|}\cdots \int \limits_{|a_1|}^{|b_1|}\bigg|\log \bigg(i\frac{\sqrt[4s]{\sum \limits_{j=1}^{n}x^{4s}_j}}{||\vec{a}||^{4s+1}+||\vec{b}||^{4s+1}}\bigg)\bigg| dx_1dx_2\cdots dx_n\nonumber \\ \geq \frac{\bigg|\prod_{j=1}^{n}|b_j|-|a_j|\bigg|}{\log \log (\langle a,b\rangle)}\nonumber \end{align} for all $s\in \mathbb{N}$, where $\langle,\rangle$ denotes the inner product and $i^2=-1$.
[696] vixra:2206.0075 [pdf]
A Formula for the Function π(x) to Count the Number of Primes Exactly if 25 ≤ X ≤ 1572 with Python Code to Test it v. 4.0
This paper shows a very elementary way of counting the number of primes below a given number with total accuracy. It is the function π(x) for 25 ≤ x ≤ 1572.
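The paper's formula itself is not reproduced in the abstract; any candidate formula can be checked against an exact sieve-based count of π(x) on the stated range, for example:

```python
# Baseline exact prime count via a sieve, useful for checking any candidate
# formula for pi(x) on the range 25 <= x <= 1572 (the paper's own formula
# is not reproduced here).

def prime_pi_table(limit):
    """Return a list pi where pi[n] is the number of primes <= n."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    pi, count = [0] * (limit + 1), 0
    for n in range(limit + 1):
        count += sieve[n]
        pi[n] = count
    return pi

pi = prime_pi_table(1572)
print(pi[25], pi[100], pi[1572])   # 9 25 248
```

A candidate closed-form expression would then be verified by comparing it to `pi[x]` for every x in the range.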
[697] vixra:2206.0072 [pdf]
Special Affine Fourier Transform for Space-Time Algebra Signals in Detail
We generalize the space-time Fourier transform (SFT) [1] to a special affine Fourier transform (SASFT, also known as offset linear canonical transform) for 16-dimensional space-time multivector Cl(3,1)-valued signals over the domain of space-time (Minkowski space) R^{3,1}. We establish how it can be computed in terms of the SFT, and introduce its properties of multivector coefficient linearity, shift and modulation, inversion, Rayleigh (Parseval) energy theorem, partial derivative identities, a directional uncertainty principle and its specialization to coordinates. All important results are proven in full detail. [1] E. Hitzer, Quaternion Fourier Transform on Quaternion Fields and Generalizations. Adv. Appl. Clifford Algebras 17(3), pp. 497-517 (2007), DOI: https://doi.org/10.1007/s00006-007-0037-8.
[698] vixra:2206.0051 [pdf]
Proof of Riemann Hypothesis
This paper is a trial to prove the Riemann hypothesis according to the following process. 1. We form an identity in x from the equation that gives the Riemann zeta function ζ(s) its analytic continuation and the 2 expressions (1/2+a±bi, 1/2−a±bi) for the non-trivial zero points of ζ(s). 2. We find that the above identity holds only at a = 0. 3. Therefore the non-trivial zero points of ζ(s) must be 1/2 ± bi, because a cannot have any value but zero.
[699] vixra:2206.0049 [pdf]
Blocking Aircraft
By arranging a small decoy between an aircraft and an AAM and making the decoy meet the AAM, or by various other means, we free the aircraft from the AAM's threat.
[700] vixra:2206.0044 [pdf]
A Solution to the Sign Problem Using a Sum of Controlled Few-Fermions
A restricted path integral method is proposed to simulate a type of quantum system or Hamiltonian called a sum of controlled few-fermions on a classical computer using Monte Carlo without a numerical sign problem. Then a universality is proven to assert that any bounded-error quantum polynomial time (BQP) algorithm can be encoded into a sum of controlled few-fermions and simulated efficiently using classical Monte Carlo. Therefore, BQP is precisely the same as the class of bounded-error probabilistic polynomial time (BPP), namely, BPP = BQP.
[701] vixra:2206.0028 [pdf]
Generalizing The Mean
I want to find a constructive extension of the average from the Hausdorff measure and the integral with respect to that measure, as in the averages (from Tim Bedford and Albert M. Fisher) given for fractals, such that it gives a unique and satisfying average for nowhere continuous functions defined on non-fractal sets, measurable in the sense of Caratheodory, without a gauge function.
[702] vixra:2206.0021 [pdf]
Engagement Capstone Projects: a Collaborative Approach to a Case Study in Psychoacoustics
Undergraduates in Spanish universities conclude their Bachelor of Science in Telecommunication Engineering with a capstone project. In recent years, students in technical degrees often postpone this last step due to an accelerated entry into the labour market or disappointment about the capstone project development. This article presents an approach which attempts to overcome these challenges: \textit{Engagement} capstone projects. The authors, lecturers in two Spanish universities (Universidad Politécnica de Madrid and Universidad Rey Juan Carlos), supported by the French company EOMYS, manage this educational project. Students become responsible for their contribution to a free, libre and open software project, which provides sound quality metrics based on psychoacoustics. They have the opportunity to work in a collaborative and international environment with industrial partners. The presentation of the technological platform shows the educational benefits of the employed tools: Python, GitHub and Jupyter Notebook. A student survey and the supervisors' feedback support an analysis which helps improve the methodology as well as verify the benefits: better supervision, the development of social and professional skills, and useful community work. Finally, a couple of examples of Engagement capstone projects give insight into the results of this educational strategy.
[703] vixra:2206.0018 [pdf]
On Riemann Hypothesis
A line of study of the Riemann Hypothesis is proposed, based on a comparison with Weil zeros and a categorification of the duality between Riemann zeros and prime numbers. The three cases of coefficients (complex, p-adic and finite fields) are also related.
[704] vixra:2205.0155 [pdf]
Algorithm for Finding the Nth Root of Modulo P
For { p-1 = q^L*m ( ∤ q^x ∨ | q^x (x larger than L))}, it is a deterministic algorithm. The previously created calculation method was for a single prime number, but a method to calculate with multiple primes has been added. The original calculation method has also been partially modified. To find the nth root, we need to factor n into prime factors. In some cases, primitive roots are needed. If you don't know these, use the Tonelli-Shanks algorithm.
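As a hedged illustration (my own sketch, not the paper's algorithm): the easy case of the nth-root problem, when gcd(n, p−1) = 1 for a prime p, reduces to a single exponentiation with the inverse exponent, and Tonelli-Shanks-style machinery is only needed otherwise.

```python
from math import gcd

def nth_root_mod_p(a: int, n: int, p: int) -> int:
    """Return x with x**n congruent to a (mod p), assuming p prime and gcd(n, p-1) == 1."""
    assert gcd(n, p - 1) == 1, "shortcut only valid in this easy case"
    d = pow(n, -1, p - 1)   # n^{-1} mod (p-1), available in Python 3.8+
    # (a^d)^n = a^{dn} = a^{1 + k(p-1)} which is congruent to a (mod p) by Fermat
    return pow(a, d, p)
```

When a prime power q^L divides p−1 and q divides n, the inverse exponent does not exist and a Tonelli-Shanks-type algorithm, as the abstract notes, is required.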
[705] vixra:2205.0154 [pdf]
Unprovability of First Maxwell's Equation in Light of EPR's Completeness Condition
Maxwell's verbal statement of Coulomb's experimental verification of his hypothesis, concerning force between two electrified bodies, is suggestive of a modification of the respective computable expression on logical grounds. This modification is in tandem with the completeness condition for a physical theory, that was stated by Einstein, Podolsky and Rosen in their seminal work. Working with such a modification, I show that the first Maxwell's equation, symbolically identifiable as ``$\vec{\nabla}\cdot\vec{E}=\rho/\epsilon_0$'' from the standard literature, is {\it unprovable}. This renders Poynting's theorem to be {\it unprovable} as well. Therefore, the explanation of `light' as `propagation of electromagnetic energy' comes into question on theoretical grounds.
[706] vixra:2205.0151 [pdf]
On the Infinitude of Cousin Primes
In this paper we prove that there are infinitely many cousin primes by deducing the lower bound \begin{align} \sum \limits_{\substack{p\leq x\\p,p+4\in \mathbb{P}\setminus \{2\}}}1\geq (1+o(1))\frac{x}{2\mathcal{C}\log^2 x}\nonumber \end{align}where $\mathcal{C}:=\mathcal{C}(4)>0$ is fixed and $\mathbb{P}$ is the set of all prime numbers. In particular it follows that \begin{align} \sum \limits_{p,p+4\in \mathbb{P}\setminus \{2\}}1=\infty\nonumber \end{align}by taking $x\longrightarrow \infty$ on both sides of the inequality. We start by developing a general method for estimating correlations of the form \begin{align} \sum \limits_{n\leq x}G(n)G(n+l)\nonumber \end{align}for a fixed $1\leq l\leq x$, where $G:\mathbb{N}\longrightarrow \mathbb{R}^{+}$.
[707] vixra:2205.0146 [pdf]
Zeta-Padé SRWS Theory with Approximation of Averaged Summation
In my previous paper about Statistical Random Walk Summation (SRWS) theory [1], I proposed a new expansion of the typical critical Green function for the Anderson transition in the orthogonal class. In this paper, I perform an approximate summation for the series of the typical critical Green function. A Padé approximant is used to take the summation. A new approximate expression for the critical exponent ν of the localization length is obtained. The dimensional dependence of the critical exponent is directly related to the Riemann zeta function. Thus, number theory and the critical phenomena of the Anderson transition are connected. I therefore call this method the zeta-Padé SRWS theory. The existence of a lower critical dimension is understood via the infinitude of the prime numbers. Besides this, the analogy with statistical mechanics also becomes clear.
[708] vixra:2205.0138 [pdf]
Cosmological Redshift: Accelerating Expansion or Quantum Phenomenon
The postulate of tired light can be cast in terms of an elementary quantum of energy lost from a photon during each cycle. The uncertainty in time associated with the quantum of energy is the Hubble time. Given uncertainty at this cosmological scale, it is argued that complementarity between received photon energy and observed distant time dilation at the source overcomes a common objection to tired light. Observed supernova redshift, luminosity distance, and distant time dilation tend to support two possibilities for a quantum interpretation of the redshift.
[709] vixra:2205.0137 [pdf]
Along the Side of the Onsager's Solution, the Ekagi Language-Part Three
We continue to consult the Ekagi-Dutch-English-Indonesian Dictionary by J. Steltenpool. In this short note, we remove all the multiple countings of an entry in a letter's section which occurred in the companion paper "Along the side of the Onsager's solution, the Ekagi language; viXra: 2205.0065 [Condensed Matter]". We plot the natural logarithm of the number of entries starting with a letter, denoted f and normalised, against the natural logarithm of the rank of the letter, denoted k. We find that $\frac{\ln f}{\ln f_{max}}$ vs $\frac{\ln k}{\ln k_{lim}}$ is matched by the graph of the reduced magnetisation vs the reduced temperature of the exact Onsager solution of the two-dimensional Ising model in the absence of an external magnetic field.
[710] vixra:2205.0136 [pdf]
A Method for Studying the Anderson Transition in the Orthogonal Symmetry Class Employing a Random Walk Expansion, the Statistics of Asymptotic Walks and Summation Method
I propose a method to study the Anderson transition in the orthogonal symmetry class. This method employs a virtual lattice characterised by an arbitrary spectral dimension instead of a concrete lattice with a given integer or fractal dimension. This method makes it possible to simulate numerically an infinite-size system on a computer. Moreover, the computational complexity does not increase exponentially as the dimensionality increases. Thus, we can avoid the curse of dimensionality. Also, we can estimate the critical exponent numerically without resorting to the finite size scaling method often used in previous numerical studies of critical phenomena.
[711] vixra:2205.0134 [pdf]
Hom-Sets Category
Let C be a category. Suppose that the hom-sets of C are small. Let CH be a category consisting of the hom-sets of C. Then we define a morphism of CH by a pair of morphisms 〈ν,μ〉. The morphism is monic if and only if ν is epi and μ is monic. An object HomC(P, E) ∈ CH is an injective object if and only if P is a projective object and E is an injective object. There exists a bifunctor T : (C ↓ A)op × (B ↓ C) → (Hom(A, B) ↓ CH), and the bifunctor T is bijective. There exist products in CH if and only if there exist products and coproducts in C. There exist pullbacks in CH if and only if there exist pushouts and pullbacks in C.
[712] vixra:2205.0132 [pdf]
Comprehending the Euler-Riemann Zeta Function and a Proof of the Riemann Hypothesis
This paper will prove that the Riemann Hypothesis is true, based on the following statements: -The resulting value of the Euler-Riemann zeta function ζ(k) is the center of a spiral on the complex plane, where k ∈ C. -The center of this spiral when ζ(k) = 0 coincides with the origin of coordinates of the complex plane. -There exists a function related to this spiral, obtained from Bernoulli's sum of powers, which allows one to calculate the zeta function.
[713] vixra:2205.0128 [pdf]
Surrounding Matter Theory: First Mathematical Developments
Surrounding Matter Theory, an alternative theory to dark matter, suggests mathematical developments. Two of them are presented. Then they are used in the study of a different interpretation of General Relativity (GR) principles using a four-momentum in place of the stress-energy tensor. It is shown that the surrounding effect prevailing in Surrounding Matter Theory also appears as the inner part of such a model. A surrounding effect in the context of particle physics is attempted.
[714] vixra:2205.0124 [pdf]
The Gravitational Wave of the Crab Pulsar in the O3b Series from Ligo
Identification of the Crab pulsar spectral line in the records of LIGO and measurement of the frequency drift. After removing the known frequency drift of the Crab pulsar, sufficiently long data segments from the LIGO interferometers can be narrow-band filtered in order to reduce the noise. The spectral line at 59.23 Hz is clearly visible in 84 records of L1, H1 and V1. The spectral lines of other pulsars of neighboring frequency can be separated well due to different values of the frequency drift.
[715] vixra:2205.0122 [pdf]
A Novel Complex Intuitionistic Fuzzy Set
The intuitionistic fuzzy set has been widely applied to decision-making, medical diagnosis, pattern recognition and other fields, because of its powerful ability to represent and address the uncertainty of information. In this paper, we propose a novel complex intuitionistic fuzzy set.
[716] vixra:2205.0121 [pdf]
An Introduction to the Radioactive Hypersensitivity of the Intelligence Development and the Anthropological Statistical Anomalies.
The present article highlights a specific list of well-known scientific observations and facts. It also presents a very compact formula for the upper bound of the total number of neural reconnections of a neural network over its whole life cycle.
[717] vixra:2205.0117 [pdf]
A Proof of the Line Like Kakeya Maximal Function Conjecture
In this paper we will prove the Kakeya maximal function conjecture in a special case when tube intersections behave like points. We achieve this by showing there exist large essentially disjoint tube-subsets.
[718] vixra:2205.0111 [pdf]
Traditional and Scalar Antennas: Pythagorean Relationship
The topic of scalar antennas was covered in a previous paper, located at the following address: https://vixra.org/abs/2205.0017. This document reports the intimate physical and mathematical relationship between both types of antennas, traditional and scalar, governed by orthogonality and associated with a right triangle, which acquires an illuminating meaning.
[719] vixra:2205.0109 [pdf]
A Brief Note on the Asymmetry of Time
Consider a granular space-time where the grains can move freely in x, y, z, and also t, but where there is a very small, symmetry-breaking increased probability of a grain moving forward rather than backward in time. At this juncture, there is no discernible arrow of time. In order that masses not be pulled apart by moving grains, we assume that grains clump together when the grains hold mass. The more mass there is, the more grains are in the clump. We show that as the clump grows larger, the probability increases of the clump moving forward in time as opposed to moving backward. So when the clump size grows to measurable dimensions, the arrow of time points consistently forward.
[720] vixra:2205.0106 [pdf]
On a Variant of Brocard's Problem Via the Diagonalization Method
In this paper we introduce and develop the method of diagonalization of functions $f:\mathbb{N}\longrightarrow \mathbb{R}$. We apply this method to show that equations of the form $\Gamma_r(n)+k=m^2$ have a finite number of solutions $n\in \mathbb{N}$ with $n>r$ for any fixed $k,r\in \mathbb{N}$, where $\Gamma_r(n)=n(n-1)\cdots (n-r)$ denotes the $r^{th}$ truncated Gamma function.
[721] vixra:2205.0103 [pdf]
The 3D and 2D Cerenkov Effect with Massive Photons
The equations of massive electrodynamics are derived and the power spectrum formula for the 3-dimensional Cerenkov radiation of massive photons is found. It is argued that the massive Cerenkov effect can be observed in superconductive media, ionosphere plasma, waveguides and in particle laboratories. The same is valid for the Cerenkov radiation in a 2-dimensional medium, including silicene. The 2D Cerenkov effect with massless photons was observed by leading world laboratories (Adiv et al., 2022).
[722] vixra:2205.0099 [pdf]
The Limit of a Strategic Mapping of a Recursive Fibonacci Sequence
Let $F_1, F_2, F_3, \ldots, F_n$ represent the sequence of Fibonacci elements. Let us define F to be the parent set of all Fibonacci elements. G and G′ are subsets of F such that G is a given set of consecutive Fibonacci elements of finite order k and G′ is defined to be a shift on G of l degrees, where l ∈ N. Let R = min(r1, r2, ...) denote the set of remainders obtained such that rn ∈ F. For a given G of order k, we show that there is a strategic mapping operator ϕ: (G × G) −→ R defined by ϕ(g ⊗ g′h) = r, where (G × G) represents the Cartesian product and g, h ∈ G, g′ ∈ G′. The strategic map ϕ exists up to the (l + 1)th transition, with its limit $F_{n+(l+1)}$. We consider a special introductory case of |G|, |G′| = 4 to illustrate the results, thereby proving the "Fundamental Theorem of the limit of a strategic map of a Fibonacci sequence [Thomas Theorem] and its consequences".
[723] vixra:2205.0090 [pdf]
The Generating Function Technique and Algebraic Ordinary Differential Equations
In the past, theorems have shown that individuals can implement a (formal) power series method to derive solutions to algebraic ordinary differential equations, or AODEs. First, this paper will give a quick synopsis of these "bottom-up" approaches while further elaborating on a recent theorem that established the (modified) generating function technique, or [m]GFT, as a powerful method for solving differential equations. Instead of building a (formal) power series, the latter method uses a predefined set of (truncated) Laurent series comprised of polynomial linear, exponential, hypergeometric, or hybrid rings to produce an analytic solution. Next, this study will utilize the [m]GFT to create several analytic solutions to a few example AODEs. Ultimately, one will find [m]GFT may serve as a powerful "top-down" method for solving linear and nonlinear AODEs.
[724] vixra:2205.0087 [pdf]
Granular Spacetime: The Nature of Time
Granular space-time posits that everything can be expressed as a function of space-time and matter, and this includes the quantum wave function Ψ. To give a geometric interpretation of Ψ, we first need to examine time. The fact that the wave function is complex results in the time dimension also being complex, with the imaginary component being rolled up. The symmetry of time is deduced.
[725] vixra:2205.0086 [pdf]
Planck Length, Planck Time and Speed of Gravity When Taking into Account Relativistic Mass with No Knowledge of G, h or c
In this paper, we take into account Lorentz’s relativistic mass and then derive formulas for the Planck length and the Planck time that are not dependent on any other constants. Thus we can find the Planck length, the Planck time, and also the speed of gravity, from gravitational observations without any knowledge of any physical constants. This is in strong contrast to what has been, and currently is, thought to be the case. Since we take into account relativistic mass, our formulas are also fully accurate for a strong gravitational field. We will claim general relativity theory cannot be fully precise in respect to strong gravitational fields. For example, general relativity theory leads to an imaginary time dilation factor when at the Planck length distance from a Planck mass, but when taking into account Lorentz’s relativistic mass, the time dilation works properly all the way down to, and including, the Planck length distance.
[726] vixra:2205.0084 [pdf]
On the Length of Addition Chains Producing $2^n-1$
Let $\delta(n)$ denote the length of an addition chain producing $n$. In this paper we prove that there exists an addition chain producing $2^n-1$ whose length satisfies the inequality $$\delta(2^n-1)\lesssim n-1+\iota(n)+\frac{n}{\log n}+1.3\log n\int \limits_{2}^{\frac{n-1}{2}}\frac{dt}{\log^3t}+\xi(n)$$ where $\xi:\mathbb{N}\longrightarrow \mathbb{R}$. As a consequence, we obtain the inequality $$\iota(2^n-1)\lesssim n-1+\iota(n)+\frac{n}{\log n}+1.3\log n\int \limits_{2}^{\frac{n-1}{2}}\frac{dt}{\log^3t}+\xi(n)$$ where $\iota(n)$ denotes the length of the shortest addition chain producing $n$.
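The shortest-chain length ι(n) appearing in such bounds can be checked by brute force for small n. The sketch below is my own illustration (not the paper's method): it searches star chains, where each new term uses the previous one, by iterative deepening; star chains are known to realize ι(n) for all n below 12509.

```python
def shortest_addition_chain_length(n: int) -> int:
    """Length of a shortest star addition chain 1 = a_0 < ... < a_r = n."""
    if n == 1:
        return 0
    limit = 1
    while True:
        if _search([1], n, limit):
            return limit
        limit += 1

def _search(chain, n, limit):
    last = chain[-1]
    if last == n:
        return True
    steps = len(chain) - 1
    if steps == limit:
        return False
    # pruning: even doubling at every remaining step cannot overshoot this bound
    if last << (limit - steps) < n:
        return False
    for i in range(len(chain) - 1, -1, -1):
        s = last + chain[i]          # star step: always add to the last element
        if s <= n and _search(chain + [s], n, limit):
            return True
    return False
```

For n = 4 this already illustrates the classical Scholz bound for Mersenne numbers: ι(2^4 − 1) = ι(15) = 5 = 4 − 1 + ι(4).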
[727] vixra:2205.0066 [pdf]
On the Number of Integral Points in the Annular Region Induced by Spheres in $\mathbb{R}^k$
Using the method of compression we show that the number of integral points in the annular region induced by two $k$ dimensional spheres of radii $r$ and $R$ with $R>r$ satisfies the lower bound \begin{align} \mathcal{N}_{R,r,k} \gg (R^{k-1}-r^{k+\delta})\sqrt{k}.\nonumber \end{align}for some small $\delta>0$ with $k>\frac{\delta(\log r)}{\log R-\log r}$.
[728] vixra:2205.0063 [pdf]
Quantum Computing Using Chaotic Numbers
Quantum mechanics and computation have a major problem called the measurement problem [7] [19]. This has given physicists a very hard time over the years. When I first looked into the problem, my approach was simple: find a new number system that can accommodate the uncertainty of a quantum particle. The paper deals with the mathematics of uncertainty, which has solved 2 millennium prize problems [4], [5] and the quantum measurement problem very efficiently. We divide chaos into two parts, low chaos and high chaos, and then we find the desired value [19] inside the intersection of both. This helps us find something in an ℵ3 >>> ∞. This takes the problems around us to the next level: if we are able to control a chaos, then we can achieve pretty much anything.
[729] vixra:2205.0055 [pdf]
The Ehrhart Volume Conjecture Is False in Sufficiently Higher Dimensions in $\mathbb{R}^n$
Using the method of compression, we show that the volume $Vol(K)$ of a ball $K$ in $\mathbb{R}^n$ with a single lattice point in its interior as center of mass satisfies the lower bound \begin{align} Vol(K)\gg \frac{n^n}{\sqrt{n}}\nonumber \end{align}thereby disproving the Ehrhart volume conjecture, which claims that the upper bound \begin{align} Vol(K) \leq \frac{(n+1)^n}{n!}\nonumber \end{align}must hold for all convex bodies with the required property.
[730] vixra:2205.0050 [pdf]
FC1: A Powerful, Non-Deterministic, Symmetric Key Cipher
In this paper we describe a symmetric key algorithm that offers an unprecedented grade of confidentiality. Based on the uniqueness of the modular multiplicative inverse of a positive integer a modulo n and on its computability in polynomial time, this non-deterministic cipher can easily and quickly handle keys of millions or billions of bits whose length an attacker does not even know. The algorithm's primary key is the modulo, while the ciphertext is given by the concatenation of the modular inverses of blocks of plaintext whose length is randomly chosen within a predetermined range. In addition to the full specification, we present a working implementation of it in the Julia Programming Language, accompanied by real examples of encryption and decryption.
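The stated core idea, replacing each block by its modular inverse, can be sketched as follows. This is a minimal illustration with hypothetical helper names (`encrypt_block`, `decrypt_block`), not the FC1 specification, and it ignores the random block-length and concatenation machinery.

```python
from math import gcd

def encrypt_block(block: int, n: int) -> int:
    # the inverse exists iff gcd(block, n) == 1; a real cipher must encode
    # plaintext blocks so this condition always holds
    assert 0 < block < n and gcd(block, n) == 1
    return pow(block, -1, n)   # modular multiplicative inverse (Python 3.8+)

def decrypt_block(cipher: int, n: int) -> int:
    # inverting the inverse recovers the block: (b^{-1})^{-1} = b (mod n)
    return pow(cipher, -1, n)
```

The symmetry of the scheme follows from the inverse being an involution modulo n, so the same operation serves for encryption and decryption.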
[731] vixra:2205.0049 [pdf]
Five More Proofs of the Cosine Addition Formula (Inspired by Mark Levi's Perpetuum Mobile Proof)
Inspired by Mark Levi's wonderful proof of the Cosine addition formula, that showed that it follows from the sad fact that Perpetual Motion is impossible, we recall five other proofs.
[732] vixra:2205.0042 [pdf]
From Neutrino Masses to the Full Size of the Universe
Our universe is a 3-dimensional elastic substrate which once condensed and is now expanding within some higher dimensional space. The elastic substrate is built from tiny invisible constituents, called tetrons, with bond length about the Planck length and binding energy the Planck energy. All ordinary matter particles are quasiparticle excitations of the tetrons gliding on the elastic medium. Since the quasiparticles fulfill Lorentz covariant wave equations, they perceive the universe as a 3+1 dimensional spacetime continuum lacking a preferred rest system. Any type of mass/energy induces curvature on the spacetime continuum as determined by the Einstein equations. The 24 known quarks and leptons arise as eigenmode excitations of a tetrahedral fiber structure, which is made up from 4 tetrons and extends into 3 additional `internal' dimensions. While the laws of gravity are due to the elastic properties of the tetron bonds, particle physics interactions take place within the internal fibers. I will concentrate on three of the most intriguing features of the model: (i) Understanding small neutrino masses from the conservation of isospin, and, more generally, calculating the spectrum of quark and lepton masses. This is obtained from the tetron model's interpretation of the Higgs mechanism. As a byproduct, the connection between the large top mass and the electroweak symmetry breaking becomes apparent. (ii) The possibility to determine the full size of the universe from future dark energy measurements. This is obtained from the tetron model's interpretation of the dark energy effect. In the course of discussion, the dark energy equation of state, i.e. the equation of state of the elastic tetron background, will be derived. (iii) Finally, the origin of the big bang `Hubble tension' within the tetron scheme will be elucidated, and deviations from the standard picture such as a varying Newton constant are discussed.
[733] vixra:2205.0039 [pdf]
A Hundred Attacks in Distributed Systems
The objective of any security system is the capacity to keep a secret. It is vital to keep the data secret when it is stored as well as when it is sent over a network. Nowadays, many people utilize the internet to access various resources, and several businesses employ a dispersed environment to give services to their users. As a result, a more secure distributed environment is required, in which all transactions and processes can be completed safely and effectively. It is critical in a distributed system environment to deliver reliable services to users at any time and from any place. As an example of a distributed system, Blockchain is a unique distributed system that has faced many attacks despite its security mechanism. Security is a top priority in a distributed setting. This paper organizes many attacks that Byzantine users may apply to take advantage of the loyal users of a system. A wide range of previous articles considered diverse types of attacks. However, we could not find a well-organized document that helps scientists consider different attacking aspects while designing a new distributed system. A hundred of the most essential kinds of attacks are categorized and summarized in this article.
[734] vixra:2205.0037 [pdf]
Geometrical Optics as U(1) Local Gauge Theory in Curved Space-time
We treat geometrical optics as an Abelian $U(1)$ local gauge theory in vacuum curved space-time. We formulate the eikonal equation in (1+1)-dimensional vacuum centrally symmetric curved space-time using the null geodesic of the Schwarzschild metric and obtain a relation between mass and the $U(1)$ gauge potential.
[735] vixra:2205.0036 [pdf]
Cosmological Scale Versus Planck Scale: As Above, So Below!
We will demonstrate that the mass (equivalent mass) of the observable universe divided by the universe radius is exactly identical to the Planck mass divided by the Planck length. This only holds true in the Haug universe model that takes into account Lorentz’s relativistic mass, while in the Friedmann model of the universe the critical mass of the universe divided by the Hubble radius is exactly equal to 1/2*mp/lp . This is much more than just a speculative approximation, for the findings are consistent with a new, unified, quantum gravity theory that links the cosmological scale directly to the Planck scale.
[736] vixra:2205.0035 [pdf]
Dark Energy is Gravitational Potential Energy or Energy of the Gravitational Field
When a bound object acts gravitationally, the gravitational action of its gravitational potential energy is also included. Therefore, even in the case of the universe, the gravitational action of gravitational potential energy must be considered. Gravitational potential energy generates a repulsive force because it has a negative equivalent mass. Mass energy (Mc^2) is an attractive component, and the equivalent mass (-M_gs) of gravitational potential energy is a repulsive component. Therefore, if |(-M_gs)c^2| < Mc^2, there is decelerated expansion, and if |(-M_gs)c^2| > Mc^2, there is accelerated expansion. |(-M_gs)c^2| = Mc^2 is the inflection point from decelerated to accelerated expansion. The source of dark energy is presumed to be gravitational self-energy and an increase in mass due to the expansion of the particle horizon. The dark energy effect occurs because all positive energy (mass) entering the particle horizon produces negative gravitational potential energy. While mass energy is proportional to M, gravitational self-energy increases faster because it is proportional to -M^2/R. Accordingly, an effect of increasing dark energy occurs. I present Friedmann's equations and a dark energy function obtained through the gravitational self-energy model. There is no cosmological constant, and dark energy is a function of time. This model predicts an inflection point where dark energy becomes larger and more important than the energy of matter and radiation. Since the observable universe is almost flat and the mass density is very low, a correspondence principle between general relativity and Newtonian mechanics is established. Therefore, gravitational potential energy or gravitational self-energy can be a good approximation to the dark energy.
[737] vixra:2205.0030 [pdf]
The Kinematics of Keplerian Velocity Imposes Another Interpretation of Newtonian Gravitation
The velocity of any Keplerian orbiter is well known, but its time derivative is a centripetal acceleration, not an attractive one. Furthermore, the rectilinear accelerated trajectory of Newton's attraction is not part of the Keplerian conics. Newton's postulate of attraction is therefore not consistent with Kepler's laws. We demonstrate this geometric reality by means of the actual kinematics and expose its consequences, from falling bodies to the rotation speed of galaxies, passing through Einstein's equivalence principle and the stability of the solar system.
[738] vixra:2205.0029 [pdf]
On Class Field Theory from a Group Theoretical Viewpoint
The main goal of Class Field Theory, of characterizing abelian field extensions in terms of the arithmetic of the rationals, is achieved via the correspondence between Arithmetic Galois Theory and classical (algebraic) Galois Theory, as formulated in its traditional form by Artin. The analysis of field extensions, primarily of the way rational primes decompose in field extensions, is proposed, in terms of an invariant of the Galois group encoding its structure. Prospects of the non-abelian case are given in terms of Grothendieck's Anabelian Theory.
[739] vixra:2205.0028 [pdf]
On Addition Chains of Fixed Degree
In this paper we extend the so-called notion of addition chains and prove an analogue of Scholz's conjecture on this chain. In particular, we obtain the inequality $$\iota^{\lfloor \frac{n-1}{2}\rfloor}(2^n-1)\leq n+\iota(n)$$ where $\iota(n)$ and $\iota^{\lfloor \frac{n-1}{2}\rfloor}(n)$ denotes the length of the shortest addition chain and the shortest addition chain of degree $\lfloor \frac{n-1}{2}\rfloor$, respectively, producing $n$.
[740] vixra:2205.0020 [pdf]
Arithmetic Galois Theory (Part II)
A brief historic introduction to Galois Theory is followed by "Arithmetic Galois Theory", which applies the concepts of Galois objects to the category Z of cyclic groups.
[741] vixra:2205.0019 [pdf]
On the Average Number of Integer Powered Distances in $\mathbb{R}^k$
Using the method of compression we obtain a lower bound for the average number of $d^r$-unit distances that can be formed from a set of $n$ points in the Euclidean space $\mathbb{R}^k$. By letting $\mathcal{D}_{n,d^r}$ denote the number of $d^r$-unit distances~($r>1$~fixed) that can be formed from a set of $n$ points in $\mathbb{R}^k$, we obtain the lower bound \begin{align} \sum \limits_{1\leq d\leq t}\mathcal{D}_{n,d^r}\gg n\sqrt[2r]{k}\log t\nonumber \end{align}for a fixed $t>1$.
[742] vixra:2205.0014 [pdf]
Einsteintensor - Grundlagen und Berechnung (Einstein Tensor: Basics and Calculation)
The paper is concerned with the mathematical basics for the calculation of the Einstein tensor. The Einstein tensor is part of Einstein's field equations in General Relativity.
[743] vixra:2205.0010 [pdf]
The Possibility of Silicon-Based Life
Silicon is the most obvious potential substitute for carbon, and the possibility of silicon-based life is the focus of this work. We analyze the sites of action of four silicon-based exobiological nanomolecules, determined by the distribution of electrical charges around the nanomolecules' atoms, called ASi, CSi, GSi and TSi. The Van der Waals radius distribution calculations have been determined via ab initio Hartree-Fock methods, Unrestricted and Restricted (UHF and RHF), using the Effective Core Potential (ECP) minimal basis set and cc-pVTZ (correlation-consistent valence-only triple-zeta basis sets). Polymers can also be assembled as chains of alternating elements such as Si-C, Si-O, and B-N. Alternation with carbon is used to some extent in terran organisms (such as C-C-N in proteins and C-C-C-O-P-O in nucleic acids), and silated compounds play important structural roles in the cells of many organisms on Earth.
[744] vixra:2205.0008 [pdf]
Generating and Deconstructing Prime Numbers
Prime numbers have a rich structure, when viewed as sizes of finite fields. Iteration of an analysis as Klein geometry yields their deconstruction into simpler primes: the POSet structure. Reversing the process is Euclid's trick of generating new primes. A generalization of this is used by McCanney to cover the set of primes away from primorials as centers. This fast algorithm has a ``propagation'' flavor. Generating primes in this manner is also related with Goldbach's Conjecture.
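Euclid's trick of generating new primes mentioned above can be sketched directly (a minimal illustration of my own, with trial division standing in for real factoring): the product of any finite set of primes, plus one, has a prime factor outside the set.

```python
def euclid_new_prime(primes):
    """Return a prime not in the given finite set of primes."""
    m = 1
    for p in primes:
        m *= p
    m += 1
    # m leaves remainder 1 modulo every prime in the set,
    # so its smallest prime factor is necessarily a new prime
    q = 2
    while m % q != 0:
        q += 1
    return q
```

For example, starting from {2, 3, 5} the construction yields 2·3·5 + 1 = 31, while {2, 7} yields 15, whose smallest prime factor 3 lies outside the set.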
[745] vixra:2205.0006 [pdf]
General Solutions of Ordinary Differential Equations and Division by Zero Calculus - New Type Examples
We have examined many examples of the relation between general solutions with singular points of ordinary differential equations and division by zero calculus; here we introduce a new type of example that appeared from the general solution of an ordinary differential equation.
[746] vixra:2204.0176 [pdf]
Why Particle Ontology is Unavoidable in Quantum Mechanics?
Using the quantum formalism, the question ``Why is particle ontology unavoidable in quantum mechanics?'' is analyzed. The frequently voiced inference that ``particles appear to be fuzzy and spread out, i.e., they seem to be in multiple states at once'' is shown to be inconsistent with the quantum formalism.
[747] vixra:2204.0175 [pdf]
The Cosmology of the Instant Reconstruction of the Path of Light
A new cosmological model is presented, with characteristics and trends very similar to those of the standard model, but without dark energy. It differs from the standard model essentially by a constant of integration, which derives from the hypothesis at the centre of this work and gives rise to an extra spatial distance and an extra fictitious component of matter. Due to these extra parts, the density parameter of matter is no longer constant but increases from 0.5 to 1 from the beginning of time to the present day, although the universe is homogeneous and isotropic and the total amounts of energy and matter are constant. Consequently, the new model, which has one parameter fewer, satisfies all the constraints arising from the current accurate measurements of the BAO and the angular power spectrum of the CMB, with the values of the density parameters of matter which, according to the theory, apply in each context. Analogously, it solves the Hubble tension and the primordial lithium problem, although it introduces a deuterium problem. Finally, it shows that it is the pressure of matter due to its variability, not dark energy, that drives the current acceleration phase of the expansion of the universe, which started at z ≅ 0.5099, when the universe was 7.99 billion years old, about 5 billion years ago. On a small scale, the same hypothesis has effects very similar to those of the MOND theory and explains the rotational motion of galaxies.
[748] vixra:2204.0172 [pdf]
Assuming c Less Than Rad^2(abc), The abc Conjecture Is False
In this paper, we consider the abc conjecture. Assuming that c<rad^2(abc) is true, we give an elementary proof that the abc conjecture is false, using an equivalent statement.
[749] vixra:2204.0169 [pdf]
Distribution of Leptons by Van Der Waals Radius in Exobiological Nanomolecules
The focus of this work is the analysis of the action sites of four exobiological nanomolecules, determined by the distribution of electrical charges around the nanomolecules' atoms, called ASi, CSi, GSi and TSi. The Van der Waals radius distribution calculations have been carried out via ab initio Hartree-Fock methods, Unrestricted and Restricted (UHF and RHF), using the Effective Core Potential (ECP) minimal basis set and the CC-pVTZ basis set (correlation-consistent valence-only triple-zeta). The study has so far been limited to computational ab initio methods. The results are compatible with the theory of quantum chemistry, but their experimental verification depends on advanced techniques for their synthesis and laboratory preparation for experimental biochemistry.
[750] vixra:2204.0162 [pdf]
New Schwarzschild Black Hole Solution for Kerr-Newman-Like Black Holes
Here $\Lambda$ is the cosmological constant associated with $\left(g^{\theta\theta}\right)^{2}$~\cite{3}. We assume that both the macroscopic system and the microscopic system are closed systems: the total entropy change of the system is zero, the entropy of the macroscopic open system increases, and the entropy of the microscopic system decreases. We know that AdS space supports the AdS/CFT correspondence, while dS space presents serious difficulties (experiment shows that the universe is dS spacetime). Assuming that there is a spontaneous entropy-reduction process in the microscopic system, AdS space can evolve into dS space. We obtain a new Schwarzschild black hole, and we observe a similar situation between the new Schwarzschild black hole and the Kerr-Newman-like black hole.
[751] vixra:2204.0161 [pdf]
Space-Time Quantification
The quantification of Length and Time in Kepler's laws implies an angular momentum quantum, identified with the reduced Planck constant, showing a mass-symmetry with the Newtonian constant G. This leads to the Diophantine Coherence Theorem, which generalizes the synthetic resolution of the Hydrogen spectrum by Arthur Haas, three years before Bohr. The Length quantum breaks the Planck wall by a factor 10^61, and the associated Holographic Cosmos is identified as the source of the Background Radiation in the Steady-State Cosmology. An Electricity-Gravitation symmetry, connected with the Combinatorial Hierarchy, defines the steady-state Universe with an invariant Hubble radius of 13.812 billion light-years, corresponding to 70.793 (km/s)/Mpc, a value deposited (1998) in a Closed Draft at the Paris Academy, confirmed by the WMAP value and the recent Carnegie-Chicago Hubble Program, and associated with the Eddington number and the Kotov-Lyuty non-local oscillation. This definitively confirms the Anthropic Principle and the Diophantine Holographic Topological Axis, rehabilitating the tachyonic bosonic string theory. The Holographic Principle uses the Archimedes pi-value 22/7. This specifies $G$, compatible with the BIPM measurements, but at 6 sigma from the official value, which is defined by merging discordant measurements.
[752] vixra:2204.0145 [pdf]
Spiral Galaxies and Powerful Extratropical Cyclone in the Falklands Islands
A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form between the equator and the 50th parallel. In mathematics, a spiral is a curve which emanates from a point, moving farther away as it revolves around the point. The characteristic shape of hurricanes, cyclones and typhoons is a spiral. There are several types of spirals, and determining the characteristic equation of the spiral that the Extratropical Cyclone (EC) fits is the goal of this work. The study demonstrates a double spiral for the EC, similarly to how Lindblad (1964) used a double spiral to describe the structure of spiral galaxies. Despite the limited data obtained for the EC that passed through the southern tip of South America, west and east of the Falkland Islands, everything indicates that short-occurrence ECs exhibit the double spiral structure, but with the structure of a Cotes double spiral.
[753] vixra:2204.0143 [pdf]
The Series of Reciprocals of The Primes Diverges
This paper gives a detailed proof of Euler's theorem on the divergence of the series of reciprocals of the primes. The key idea is to assume the series converges and then derive a contradiction.
[754] vixra:2204.0141 [pdf]
W Boson?
Beta decay is the second grand Cosmic event of the Universe and is interpreted through the increased cohesive pressure, the electric entity of the macroscopically neutral neutron, and the inductive-inertial phenomenon. This is a new interpretation of the E/M and weak nuclear forces. Their integration with the strong nuclear and gravitational forces is achieved with the unified field of the dynamic space, which is identical with the Higgs field, while the homonymous boson is located in the cores of particles and in black holes, as space holes after the collapse of stellar matter. The unconfirmed results (found by the Fermilab accelerator team) that the W boson is more massive, which would suggest deviations from the Standard Model and possibly an as yet undiscovered fifth force of Nature, no longer need to be interpreted or justified.
[755] vixra:2204.0134 [pdf]
On the Number of Integral Points on the Boundary of a K-Dimensional Sphere
Using the method of compression, we show that the number of integral points on the boundary of a $k$-dimensional sphere of radius $r$ satisfies the lower bound \begin{align} \mathcal{N}_{r,k} \gg r^{k-1}\sqrt{k}.\nonumber \end{align}
[756] vixra:2204.0130 [pdf]
The Structure of The Proton and the Calculation of Its G-factor (Some Recent Results from Ether Electrodynamics)
Some recent results of a more rigorous electrodynamics than Special Relativity or Quantum Electrodynamics are summarized with some pedagogy. The results include the correct explanation of line radiation, the correct interpretation of the g-factor, the introduction of the precessional mass, the dismissal of the half-quantum, the deduction of the proton structure from first principles, and a ``T-shirt'' calculation of the proton g-factor.
[757] vixra:2204.0129 [pdf]
Explorando la Relatividad Especial (Exploring the Special Relativity)
Can a change of variable alter the information contained in a theory? If the answer is yes, the final part of this document is wrong. If the answer is no, then a precise velocity value is implicit in Special Relativity, uniquely determined by the theory: 0.74872022894058(...) C. What epistemological consequences would a specific velocity value determined by the structure of the theory have? Special Relativity is based on its own postulates. They should be reviewed if a specific velocity value other than C, implicit in the schema, is unacceptable.
[758] vixra:2204.0126 [pdf]
Scientific Method and Game Theory as Basis of Knowledge and Language
We use methods of science (parts of falsificationism) and game theory (focal points) as a foundation of knowledge and language. We draw some parallels to human sensory experience, using recent progress in AI, and demonstrate how we know basic facts about space, ourselves, and other people. We then demonstrate how we can understand and construct language with these methods, giving examples from the Tok Pisin language. We then demonstrate the viability of this approach for the clarification of philosophy: our theory is a good answer to many linguistic conundrums given in Wittgenstein's "Philosophical Investigations". We also demonstrate an application to other philosophical problems.
[759] vixra:2204.0125 [pdf]
On Weyl Zeros
We investigate the zeros of the Betti portion of the Weil rational zeta function for elliptic curves, towards a direct understanding of the Weil conjectures. Examples are provided and various directions of investigation are considered.
[760] vixra:2204.0123 [pdf]
On Riemann Zeros and Weil Conjectures
The article aims to motivate the study of the relations between the Riemann zeros and the zeros of the Weil polynomial of a hyper-elliptic curve over finite fields, beyond the well-known formal analogy. The non-trivial distribution of the p-sectors of the Riemann spectrum, recently studied by various authors, represents evidence of a yet unknown algebraic structure exhibited by the Riemann spectrum, supporting the above investigations. This preparatory article consists essentially of a review of the topics involved and the ``maze'' of relationships to be clarified subsequently. Examples are provided and further directions of investigation are suggested. If successful, this is a viable, possibly new approach to proving the Riemann Hypothesis, with hindsight from the proof in finite characteristic and function fields.
[761] vixra:2204.0116 [pdf]
The Metric of Parallel Universe
In this paper, I illustrate how to obtain an equation describing a parallel universe in terms of our universe's parameters, such as velocity, the speed of light, the scale factor, and so forth. This work was done by theoretical methods. I used the Robertson-Walker metric and the definition of a metric to obtain an equation that is the metric for a parallel universe. Finally, I found 10 connecting points between the two universes. I assumed three hypotheses for this project.
[762] vixra:2204.0115 [pdf]
Gravidynamics of an Affine Connection on a Minkowski Background
In this paper, a post-Riemannian formalism is constructed based on a minimalistic set of modifications and suggested as the framework for a classical alternative to General Relativity (GR) which, notably, can be formulated in Minkowski spacetime. Following the purely geometrical exposition, arguments are advanced for the transport of matter and radiation, a Lagrangian quadratic in the gravitational field strengths is considered, and several of the resulting properties are analyzed in brief. Simple models are then set up to explore the astrophysical and cosmological reach of the proposed ideas, including their potential (and so far tentative) agreement with the ``classical tests'' of GR. Some arguments are also presented towards quantization within the proposed formalism, and a few other issues are discussed.
[763] vixra:2204.0114 [pdf]
Domination Number of Edge Cycle Graphs
Let G = (V, E) be a simple connected graph. A set S ⊂ V is a dominating set of G if every vertex in V \ S is adjacent to some vertex in S. The domination number γ(G) of G is the minimum cardinality taken over all dominating sets of G. An edge cycle graph of a graph G is the graph G(Ck) formed from one copy of G and |E(G)| copies of Pk, where the ends of the i-th edge are identified with the ends of the i-th copy of Pk. In this paper, we investigate the domination number of G(Ck), k ≥ 3.
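The definition of the domination number above can be made concrete with a small brute-force check; this is only an illustration of γ(G) on a toy graph, not of the paper's results on edge cycle graphs:

```python
from itertools import combinations

def domination_number(vertices, edges):
    """Brute-force gamma(G): the smallest |S| such that every vertex
    is either in S or adjacent to some vertex of S."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(1, len(vertices) + 1):
        for S in combinations(vertices, size):
            s = set(S)
            if all(v in s or adj[v] & s for v in vertices):
                return size

# gamma(C4) = 2: two opposite vertices dominate the 4-cycle,
# while no single vertex does.
assert domination_number(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]) == 2
```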
[764] vixra:2204.0110 [pdf]
Ġasaq: Provably Secure Key Derivation
This paper proposes Ġasaq, a provably secure key derivation method that, given access to a true random number generator (TRNG), allows communicating parties that have a pre-shared secret password p to agree on a secret key k that is indistinguishable from truly random numbers, with a guaranteed entropy of min(H(p), |k|). Ġasaq's security guarantees hold even in a post-quantum world under Grover's algorithm, and even if it turns out that P = NP. Such strong security guarantees, similar to those of the one-time pad (OTP), became attractive after the introduction of Băhēm, a similarly provably secure symmetric cipher that is strong enough to shift the cipher's security bottleneck to the key derivation function. State-of-the-art key derivation functions, such as the PBKDF, or even memory-hard variants such as Argon2, are not provably secure, but rather not fully broken yet. They do not guarantee against needlessly losing password entropy; that is, the output key could have an entropy lower than the password's entropy, even when that entropy is less than the key's bit length. In addition, they assume that P != NP and, even then, their key space is square-rooted under Grover's algorithm; none of these are limitations of Ġasaq. Using such key derivation functions as the PBKDF or Argon2 is acceptable with conventional ciphers, such as ChaCha20 or AES, since they, too, suffer the same limitations; hence none of them is a bottleneck for the other, just as a glass door is not a security bottleneck for a glass house. But why would people secure their belongings in a glass structure, to justify a glass door, when they could use a reinforced steel structure at a similar cost? This is where Ġasaq comes in: to offer Băhēm the reinforced steel door that matches its security.
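The entropy bound stated in the abstract is easy to state numerically. The helper below is a hypothetical illustration of the guarantee min(H(p), |k|) only; it is not part of Ġasaq's construction:

```python
def guaranteed_key_entropy(password_entropy_bits: float, key_len_bits: int) -> float:
    # Ġasaq's stated guarantee: the derived key carries min(H(p), |k|) bits,
    # i.e. no password entropy is needlessly lost below the key length cap.
    return min(password_entropy_bits, float(key_len_bits))

# A ~77.5-bit passphrase with a 128-bit key: the password is the limit.
assert guaranteed_key_entropy(77.5, 128) == 77.5
# A 300-bit password: the key length caps the achievable entropy.
assert guaranteed_key_entropy(300.0, 128) == 128.0
```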
[765] vixra:2204.0107 [pdf]
佛经宇宙观的现代解读 (第1部分节选) the Modern Interpretation of Buddhist Cosmology (Excerpts from Part 1)
This article contains excerpts (Part 1) from the author's book "The Modern Interpretation of Buddhist Cosmology", in Simplified Chinese. The excerpt comprises the 10 chapters of Part 1, "Basic Overview", which provide systematic, logically consistent, verifiable answers to many long-unresolved questions in Buddhist studies, for researchers and interested readers. According to the spatial scales of objects described in the Buddhist sutras, we demonstrate, from small to large, a "correspondence non-equivalence" (or "correspondence non-identity") relationship between them and many things in the world we observe. For a number of questions unresolved for thousands of years in Buddhist myths and legends, verifiable answers are given that conform to the internal logical consistency of the scriptures (which corroborate one another) and to modern scientific observation. These include the objects referred to by such famous Buddhist terms as "Mount Sumeru (Meru)", "Four major continents", "Heavenly Palace", "Eighteen hells", and "One billion worlds". Our research shows that these objects are neither ethereal nor purely mythological. For more content see: https://vixra.org/abs/2209.0086
[766] vixra:2204.0105 [pdf]
On Prime Numbers and Riemann Zeros
Intuitively, prime numbers of ``Number systems'' (rings) are the building blocks of their elements. We start from natural numbers and Gaussian integers to explain more general frameworks, like the structure theorem for finitely generated Abelian groups. We end with a 1 million dollar puzzle, the Riemann Hypothesis, and point to the fact that prime numbers are dual to the Riemann zeros. Some easy references are provided.
[767] vixra:2204.0104 [pdf]
On Galois Theory with an Invitation to Category Theory
Galois theory in the category of cyclic groups studies the automorphism groups of the cyclic group extensions and the corresponding Galois connection. The theory can be rephrased in dual terms of quotients, corresponding to extensions, when viewed as covering maps. The computation of Galois groups and stating the associated Galois connection are based on already existing work regarding the automorphism groups of finite p-adic groups. The initial goals for developing such a theory were: pedagogical, to introduce the basic language of Category Theory, while exposing the student to core ideas of Galois Theory, but also targeting applications to the Galois Theory of cyclotomic extensions. Some aspects of Abelian Class Field Theory and Anabelian Geometry are also mentioned.
[768] vixra:2204.0103 [pdf]
Spiral Galaxies and Extratropical Cyclone
A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form between the equator and the 50th parallel. In mathematics, a spiral is a curve which emanates from a point, moving farther away as it revolves around the point. The characteristic shape of hurricanes, cyclones and typhoons is a spiral. There are several types of spirals, and determining the characteristic equation of the spiral that the Extratropical Cyclone (EC) fits is the goal of this work. The study demonstrates a double spiral for the extratropical cyclone, similarly to how Lindblad (1964) used a double spiral to describe the structure of spiral galaxies. Despite the limited data obtained for the EC that passed through the southern tip of South America, west and east of the Falkland Islands, everything indicates that short-occurrence ECs exhibit the double spiral structure, but with the structure of a Cotes double spiral.
[769] vixra:2204.0097 [pdf]
Explicit Approximate Formula for the Critical Exponent in Orthogonal Class using the Multi-points Summation Method
I suggest a new explicit formula for the dimensional dependence of the critical exponent of the Anderson transition, taking into account the high-dimensional asymptotic behavior and using the multi-point summation method. The asymptotic expansion at infinite dimension is estimated from numerical data. Combining the known asymptotic series at dimension two and at infinite dimension using the multi-point summation method, I obtain a useful approximation formula for the critical exponent in the orthogonal class.
[770] vixra:2204.0090 [pdf]
A Statistical Test of Gravitational Wave Events
Here I show some statistics of all 93 gravitational wave (GW) events observed by LIGO in 3 phases during the last 6 years, with 3, 8 and 82 GW events in each phase, respectively. The detection sensitivity in the O3 phase was increased by 40\% over that in the O2 phase. The co-working ratio of the two LIGO observatories was 0.42 (O2 phase) and 0.60 (O3 phase), respectively. The product of sensitive volume and time (VT) was thus increased by a factor of $1.4^3 \times (0.60/0.42) \approx 4$. Statistical analyses of all 93 GW events suggest that the observations so far do not meet the intuitive expectation that, with higher detection sensitivity and longer observation time, we should observe more GW events.
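The VT scaling factor quoted above can be checked directly: the sensitivity gain enters as the cube of the horizon distance (a volume), and the coincident duty cycle enters linearly (the numbers are those given in the abstract):

```python
# Sensitive volume-time (VT) scaling from the abstract's figures.
sensitivity_gain = 1.4          # O3 horizon distance is 40% larger than O2's
duty_o2, duty_o3 = 0.42, 0.60   # co-working ratios of the two observatories

# Volume scales with the cube of the horizon distance; effective observing
# time scales with the coincident duty cycle.
vt_factor = sensitivity_gain**3 * (duty_o3 / duty_o2)
print(round(vt_factor, 2))  # → 3.92, i.e. roughly a factor of 4
```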
[771] vixra:2204.0081 [pdf]
On the K Continuity of a Functor
We examine the concept of $K-$continuity of a functor from two perspectives: one considering $K-$continuity as given in some formulations of Shape theory and the other as a restriction of the usual definition of the continuity of a functor. We show that under a certain condition the concept of $K-$continuity from Shape theory includes the concept of $K-$continuity arising from the usual definition of continuity.
[772] vixra:2204.0074 [pdf]
Matter Theory on EM field
This article tries to unify the four basic forces via the Maxwell equations, the only experimental theory. Self-consistent Maxwell equations with the e-current coming from the matter current are proposed and solved for electrons and for the structures of particles and atomic nuclei. The static properties and decays are reasoned out, all meeting experimental data. The equation of general relativity purely with the electromagnetic field is discussed as the basis of this theory. In the end, the elementary correspondence between this theory and QED and the weak theory is discussed.
[773] vixra:2204.0073 [pdf]
On Factorization of Multivectors in Cl(2,1) by Exponentials and Idempotents
In this paper we consider general multivector elements of Clifford algebras Cl(2,1), and look for possibilities to factorize multivectors into products of blades, idempotents and exponentials, where the exponents are frequently blades of grades zero (scalar) to n (pseudoscalar). We will succeed mostly, with a minor open case remaining.
[774] vixra:2204.0064 [pdf]
Băhēm: A Provably Secure Symmetric Cipher
This paper proposes Băhēm, a symmetric cipher such that, when it is used with a pre-shared secret key k, no cryptanalysis can degrade its security below H(k) bits of entropy, even under Grover's algorithm and even if it turns out that P = NP. Băhēm's security is very similar to that of the one-time pad (OTP), except that it does not impose on the communicating parties the inconvenient constraint of generating a large random pad in advance of their communication. Instead, Băhēm allows the parties to agree on a small pre-shared secret key, such as |k| = 128 bits, and then generate their random pads in the future as they go. For any operation, be it encryption or decryption, Băhēm performs only 4 exclusive-or operations (XORs) per cleartext bit, including its 2 overhead bits. If it takes a CPU 1 cycle to perform an XOR between a pair of 64-bit variables, then a Băhēm operation takes 4 / 8 = 0.5 cycles per byte. Further, all Băhēm operations are independent, so a system with n CPU cores can perform the operation in 0.5 / n CPU cycles per byte of wall-clock time. While Băhēm has an overhead of 2 extra bits per encrypted cleartext bit, its early single-threaded prototype implementation achieves faster /decryption/ than OpenSSL's ChaCha20, despite the fact that Băhēm's ciphertext is 3 times larger than ChaCha20's. This supports the claim that the 2-bit overhead is practically negligible for most applications. Băhēm's early prototype has a slower /encryption/ time than OpenSSL's ChaCha20 due to its use of a true random number generator (TRNG). However, this can be trivially optimised by gathering the true random bits in advance, so that Băhēm gets the entropy conveniently when it runs. Aside from Băhēm's use as a provably secure general-purpose symmetric cipher, it can also be used in some applications, such as password verification, to enhance existing hash functions so that they become provably one-way, by using Băhēm to encrypt a predefined string with the hash as the key. A password is then verified if its hash decrypts the Băhēm ciphertext to retrieve the predefined string.
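The OTP comparison and the cycle-count arithmetic in the abstract can be sketched as follows. The XOR pad below is a plain one-time pad, shown only as the point of comparison; it is not Băhēm's actual construction:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def otp_encrypt(msg: bytes):
    pad = os.urandom(len(msg))       # truly random pad from the OS entropy pool
    return pad, xor_bytes(msg, pad)  # ciphertext reveals nothing without the pad

pad, ct = otp_encrypt(b"attack at dawn")
assert xor_bytes(pad, ct) == b"attack at dawn"  # decryption is the same XOR

# The abstract's throughput estimate: 4 XORs per cleartext bit means
# 4 * 8 = 32 bit-XORs per byte; a 64-bit-wide XOR does 64 bit-XORs per
# cycle, hence 32 / 64 = 0.5 cycles per byte.
cycles_per_byte = (4 * 8) / 64
assert cycles_per_byte == 0.5
```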
[775] vixra:2204.0062 [pdf]
Current Survey of Clifford Geometric Algebra Applications
We extensively survey applications of Clifford geometric algebra in recent years (mainly 2019–2022). This includes engineering, electrical engineering, optical fibers, geographic information systems, geometry, molecular geometry, protein structure, neural networks, artificial intelligence, encryption, physics, signal, image and video processing, and software.
[776] vixra:2204.0057 [pdf]
A Note on Interpreting Special Relativity
In this note I discuss Einstein's theory of special relativity, derive the coordinate transformations in a simple manner, and obtain expressions for time dilation, the relativistic Doppler effect and length contraction. In Einstein's original paper a factor $\phi$ appears, which he sets equal to unity. However, $\phi$ can be reinterpreted as a scale factor when coordinate time is defined via a single coordinate clock and the time delay of receiving information via light rays is included. Using Einstein's definition of coordinate time, a moving sphere is construed as an ellipsoid of revolution, whereas the practical definition of time intervals used here, implying simultaneity of reception rather than simultaneity of occurrence, shows it is a rotated sphere.
[777] vixra:2204.0054 [pdf]
Algorithm for Finding Q^k-th Root of a
A description of the algorithm for finding the q^k-th root of a. There is no essential difference from the calculation method presented in the previous version; some additions and changes have been made.
[778] vixra:2204.0050 [pdf]
Assuming C Less Than Rad^2(abc) and the Beal's Conjecture Hold, Then the Abc Conjecture is False
In this paper, assuming that the conjecture c<rad^2(abc) and Beal's Conjecture hold, I give, using elementary logic, a proof that the abc conjecture is false.
[779] vixra:2204.0040 [pdf]
Saving Proof-of-Work by Hierarchical Block Structure: Bitcoin 2.0?
We argue that the current Proof of Work based consensus algorithm of the Bitcoin network suffers from a fundamental economic discrepancy between the real-world transaction costs incurred by miners and the wealth that is being transacted. Put simply, whether one transacts 1 satoshi or 1 bitcoin, the same amount of electricity is needed when including this transaction into a block. The notorious Bitcoin blockchain problems, such as its high energy usage per transaction or its scalability issues, are, either partially or fully, mere consequences of this fundamental economic inconsistency. We propose making the computational cost of securing the transactions proportional to the wealth being transferred, at least temporarily. First, we present a simple incentive based model of Bitcoin's security. Then, guided by this model, we augment each transaction by two parameters, one controlling the time spent securing this transaction and the second determining the fraction of the network used to accomplish this. The current Bitcoin transactions are naturally embedded into this parametrized space. Then we introduce a sequence of hierarchical block structures (HBSs) containing these parametrized transactions. The first of these HBSs exploits only a single degree of freedom of the extended transaction, namely the time investment, but it already allows for transactions with a variable level of trust together with aligned network fees and energy usage. In principle, the last HBS should scale to tens of thousands of timely transactions per second while preserving what the previous HBSs achieved. We also propose a simple homotopy based transition mechanism which enables us to relatively safely and continuously introduce new HBSs into the existing blockchain. Our approach is constructive and as rigorous as possible, and we attempt to analyze all aspects of these developments, at least at a conceptual level. The process is supported by evaluation on recent transaction data.
[780] vixra:2204.0033 [pdf]
The Irrationality of Odd and Even Zeta Values
We show that using the denominators of the terms of $\zeta(n)-1=z_n$ as decimal bases gives all rational numbers in (0,1) as single decimals. We also show the partial sums of $z_n$ are not given by such single digits using the partial sum's terms. These two properties yield a proof that $z_n$ is irrational. As partials require denominators exceeding the denominators of their terms, possible single decimal convergence points are, using properties of decimal expansions, systematically eliminated.
[781] vixra:2204.0031 [pdf]
Improvement of Prime Number Theorem using the Multi-Point Summation Method
I propose a new approximate asymptotic formula for the prime number theorem. The new formula is derived by the multi-point summation method. It has an additional term expressed with elementary functions and gives a better estimate of the prime-counting function from small values to large values. It also satisfies the asymptotic formula in the n → ∞ limit.
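As a baseline for what such formulas approximate, a minimal sieve can compare the exact prime-counting function π(x) with the crude leading PNT term x/ln x; this illustrates only the target quantity, not the author's multi-point formula:

```python
import math

def prime_count(n: int) -> int:
    """Exact prime-counting function pi(n) via the sieve of Eratosthenes."""
    if n < 2:
        return 0
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

# pi(x) versus the leading PNT term x / ln x
for x in (100, 10_000, 1_000_000):
    print(x, prime_count(x), round(x / math.log(x)))
```

The gap between the two columns is exactly what refined approximations, such as additional elementary-function terms, aim to close.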
[782] vixra:2204.0030 [pdf]
On the Need to Generalize the Theory of Algorithms
Traditionally, the concept of an algorithm is introduced into the theory through a sequence of elementary steps leading to the solution of a problem, and parallel algorithms are considered as a technical solution external to the Theory of Algorithms, which allows speeding up the execution process. However, a number of physical processes currently used for computing, such as quantum computing, do not fit into the framework of the predictions of the Theory of Algorithms, in particular --- in terms of computational complexity, which suggests that our understanding of parallel computing processes, limited by the framework of the classical Theory of Algorithms, may not be complete. A qualitative leap in the Theory of Computability is possible if parallel algorithms are understood as a generalization of the classical ones within the framework of the hypothetical Theory of Parallel Algorithms. In this paper, pre-quantum physical processes are considered, which are already beyond the scope of the classical Theory of Algorithms. Conceptual primitives suitable for the analysis of parallel flows are proposed.
[783] vixra:2204.0024 [pdf]
21 cm Quantum Amplifier
Hydrogen, the most common element in the universe, is almost invisible in atomic form, though it is common as a minor contaminating component in most terrestrial compounds. Atomic hydrogen and its isotopes are the only chemically active atoms whose valence electron is not screened from the nucleus. This unique property leads to a rich spectroscopic behavior when weakly bonded to other molecules, surfaces, or embedded within solids. The spectra's origin lies in rotational nuclear degrees of freedom that become active when the atoms are polarization-bonded to other structures. Free neutral atomic hydrogen is difficult to detect by its 1420.4 MHz emission even in objects as large as the local Virgo cluster of galaxies. Our surprise was in detecting intense signals with an inexpensive receiver near 1420.4 MHz in the spectral band reserved for radio astronomy, where broadcasting is forbidden. These signals behaved like emissions from slightly perturbed 1S atomic hydrogen possessing rotational states with very small energy shifts. These signals are ubiquitous when there is any low-level electromagnetic noise present.
[784] vixra:2204.0021 [pdf]
Generalized Branes in Noncommutative Clifford Spaces
Starting with a brief review of our prior construction of $n$-ary algebras, based on the relation among the {\bf n}-ary commutators of noncommuting spacetime coordinates $ [ X^1, X^2, \ldots, X^n ] $ with the polyvector valued coordinates $X^{123 \ldots n} $ in noncommutative Clifford spaces, $ [ X^1, X^2, \ldots, X^n ] = n ! ~X^{123 \ldots n} $, we proceed to construct generalized brane actions in noncommutative matrix coordinate backgrounds in Clifford spaces ($C$-spaces). An instrumental role is played by the Clifford-valued field $\Phi (\sigma^A) = \Phi^M (\sigma^A) \Gamma_M $, which allows one to construct a matrix realization of the $n$-ary algebra of the form $ {\bf X}^M \equiv \Phi^{ -1} ( \sigma^A) \Gamma^M \Phi (\sigma^A) $, given in terms of the world manifold's polyvector-valued coordinates $\sigma^A$ of the generalized brane, and which by construction $satisfy$ the $n$-ary algebra. One then learns that it is the presence of matter which $endows$ the spacetime points with a noncommutative algebraic structure. We finalize with an extension of coherent states in $C$-spaces and provide a preliminary study of strings in target $C$-space backgrounds.
[785] vixra:2204.0019 [pdf]
Geometrical Optics as U(1) Local Gauge Theory in Flat Space-Time
We treat geometrical optics as the classical limit of quantum electrodynamics, i.e. an Abelian $U(1)$ local gauge theory in flat space-time. We formulate the eikonal equation in a (1+1)-dimensional Minkowskian (flat) space-time and find that the refractive index is a function of the $U(1)$ gauge potential.
[786] vixra:2204.0014 [pdf]
Asymptotics of Solutions of Differential Equations with a Spectral Parameter
The main goal of this paper is to construct the so-called Birkhoff-type solutions for linear ordinary differential equations with a spectral parameter. Such solutions play an important role in direct and inverse problems of spectral theory. In Section 1, we construct the Birkhoff-type solutions for n-th order differential equations. Section 2 is devoted to first-order systems of differential equations.
[787] vixra:2204.0008 [pdf]
Possibility of AdS Space Evolving Into dS Space
This article puts forward a hypothesis. In this article, the derivative of the cosmological constant is positive, and there is a possibility that the constant evolves from negative in the early universe to positive in the later period. We assume that both macroscopic and microscopic systems are closed systems and the entropy of the system increases to 0. The entropy of the macroscopic open system increases and the entropy of the microscopic system decreases. We know that AdS space can constitute AdS/CFT theory, but there are serious difficulties with dS space (experimentally, the universe is dS spacetime). Assuming that the microsystem has a spontaneous entropy-reduction process, AdS space can evolve into dS space.
[788] vixra:2204.0004 [pdf]
A Different Look at Gravity
This paper presents a new formula for the gravitational force, formula (1). It is based on the following reaction of an electron antineutrino with a proton: $\bar{\nu}_e + p^+ \rightarrow n + e^+$, and is only the principal component of the possible gravitational forces that may exist in the Universe. I assume that all interactions that are weak, do not belong to the four types of known interactions, and decrease with the square of the distance can also be considered as gravitational interactions. Gravity is a product of the so-called weak interactions, if one interprets this reaction and the formula for the gravitational force associated with it correctly. The new formula uses the numerical values measured by Cowan and Reines in an experiment conducted by these two physicists with electron antineutrinos to determine the probability of the above reaction occurring. In the new formula, there is a constant value for the energy density of relic electron antineutrinos which, however, only to a limited extent guarantees the stability of the gravitational forces, since there are neutrino sources in the Universe and even in our immediate surroundings, such as the Sun or even nuclear reactors. The gravitational field is not a fictitious property of space, but is directly related to the transfer of momentum and energy of neutrinos to particles of matter. I assume that the mathematical formulas used in this work are understandable to anyone with some interest in mathematics and physics.
[789] vixra:2204.0003 [pdf]
A Different Look at the Power of the Sun
Many philosophers believe that our world can only be described accurately using mathematical equations. Mathematical equations allow a strictly defined interpretation that can serve as a reflection, or picture, of reality in the Universe, and it is supposed that this is the only possible and allowed view of our world. However, it is possible to find mathematical equations that allow the calculation of certain numerical values related to our physical world, not found, however, in physics or astrophysics books. These equations presumably allow a new, different interpretation of the reality of our Universe. But what should we think about it, if a single property, e.g. the gravitational force, could be calculated using different mathematical equations? Here a mathematical formula is presented from which it is possible to calculate the energy density of solar radiation on the surface of the Sun, and thus the total power of the Sun, without using the Solar Constant. There is as yet no theoretical model describing any physical phenomena from which this formula would follow. One possible conclusion from this formula is that physical constants like the Proton mass or the Gravitational Constant are not constants in the Universe and are not even constants in the Milky Way galaxy in which the Sun is located. I assume that the mathematical formulae presented here are understandable to most people interested in physics. At the same time it is one of the following three works: "A Different Look at the Power of the Sun", "A Different Look at the Hydrogen Atom" and "A Different Look at Gravity", which allow one to explain, with the help of simple mathematical equations of classical physics, the reality in our Universe more simply and comprehensibly, assuming that new models of physical phenomena will arise from which the mathematical solutions presented here result.
[790] vixra:2204.0001 [pdf]
Relativistic Gravitational Equations
The emphasis is not on precisely specifying the physical meaning of the scheme of the proposed gravitational equations, but on the calculation process from which they derive, and on whether they can be consistent and provide a calculational alternative that allows greater simplicity in obtaining results acceptably close to those already verified by general relativity.
[791] vixra:2203.0184 [pdf]
A New Permittivity of the Rotational Electric Field
The electric field in Maxwell's equations can be written as a sum of the rotational and the irrotational electric fields. In this paper, it will be shown that Maxwell's equations are formulated such that the permittivity of the rotational electric field is set to 1.0, while the permittivity of the irrotational electric field is commonly denoted as \epsilon_r. Faraday's law can be reformulated as a slightly more general equation, so that a non-unity permittivity of the rotational electric field is possible. Although only a theoretical formulation is proposed, a way by which the permittivity of the rotational electric field could be measured is discussed.
[792] vixra:2203.0183 [pdf]
Collatz Conjecture: An Order Machine
Collatz conjecture (the 3n+1 problem) is an application of Cantor's isomorphism theorem (Cantor-Bernstein) under recursion. The set of 3n+1 for all odd positive integers n is an order isomorphism for (odd X, 3X+1). The other (odd X, 3X+1) linear order has been discovered as a bijective order-embedding, with values congruent to powers of four. This is demonstrated using a binomial series as a set rule, then showing the isomorphic structure, mapping, and cardinality of those sets. Collatz conjecture is representative of an order machine for congruence to powers of two. If an initial value is not congruent to a power of two, then the iterative program operates the (odd X, 3X+1) order isomorphism until an embedded value is attained. Since this value is a power of four, repeated division by two tends the sequence to one. Because this same process occurs regardless of the initial choice of positive integer, Collatz conjecture is true.
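The "order machine" reading suggests a simple computational check (a minimal sketch with our own function names and bound, not the paper's formal construction): iterate the Collatz map until some iterate is a power of two, after which repeated halving necessarily ends at one.

```python
def is_power_of_two(n: int) -> bool:
    # A positive power of two has exactly one set bit.
    return n > 0 and (n & (n - 1)) == 0

def reaches_power_of_two(n: int, max_steps: int = 10**6) -> bool:
    """Run the Collatz iteration until some iterate is a power of two."""
    steps = 0
    while not is_power_of_two(n) and steps < max_steps:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return is_power_of_two(n)

# Every starting value up to 10**4 hits a power of two, hence reaches 1.
print(all(reaches_power_of_two(n) for n in range(1, 10**4)))
```

The power-of-two test plays the role of the "embedded value" at which the order machine halts and division by two takes over.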
[793] vixra:2203.0174 [pdf]
Spacetime as a Whole
There is no formal difference between particles and black holes. This formal similarity lies in the intersection of gravity and quantum theory; quantum gravity. Motivated by this similarity, 'wave-black hole duality' is proposed, which requires having a proper energy-momentum tensor of spacetime itself. Such a tensor is then found as a consequence of 'principle of minimum gravitational potential'; a principle that corrects the Schwarzschild metric and predicts extra periods in orbits of the planets. In search of the equation that governs changes of observables of spacetime, a novel Hamiltonian dynamics of a Pseudo-Riemannian manifold based on a vector Hamiltonian is adumbrated. The new Hamiltonian dynamics is then seen to be characterized by a new 'tensor bracket' which enables one to finally find the analogue of Heisenberg equation for a 'tensor observable' of spacetime.
[794] vixra:2203.0170 [pdf]
The Spanning Method and the Lehmer Totient Problem
In this paper we introduce and develop the notion of spanning of integers along functions $f:\mathbb{N}\longrightarrow \mathbb{R}$. We apply this method to a class of problems that require determining whether equations of the form $tf(n)=n-k$ have a solution $n\in \mathbb{N}$ for a fixed $k\in \mathbb{N}$ and some $t\in \mathbb{N}$. In particular, we show that \begin{align} \# \{n\leq s~|~t\varphi(n)+1=n,~t,n \in \mathbb{N}\}\geq \frac{s}{2\log s}\prod \limits_{p | s}(1-\frac{1}{p})^{-1}+O(1)\nonumber \end{align}where $\varphi$ is the Euler totient function.
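The equation $t\varphi(n)+1=n$ is equivalent to $\varphi(n) \mid n-1$, the condition behind Lehmer's totient problem; a small search (an illustrative sketch with our own naming and bounds, not the paper's method) shows that up to a modest limit the only solutions are the primes, for which $t=1$.

```python
def totients(limit: int) -> list[int]:
    """Euler's totient for 0..limit via a standard sieve."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:  # p is prime: phi[p] untouched so far
            for k in range(p, limit + 1, p):
                phi[k] -= phi[k] // p
    return phi

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def lehmer_solutions(limit: int) -> list[int]:
    """All n in [2, limit] with phi(n) dividing n - 1."""
    phi = totients(limit)
    return [n for n in range(2, limit + 1) if (n - 1) % phi[n] == 0]

# A composite solution would be a counterexample to Lehmer's conjecture;
# up to 10**4 only primes appear.
print(all(is_prime(n) for n in lehmer_solutions(10**4)))
```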
[795] vixra:2203.0160 [pdf]
What is the Dark Matter?
The structure of the particles of galactic systems, together with their gravity tails, has created new dynamics for them, resulting in their chaotic motion. So, the search for an unknown form of dark matter is no longer necessary. This gravity deviation concerning moving bodies is a criterion for defining absolute motion. Hence, the inability to detect uniform motion in inertial systems has been lifted.
[796] vixra:2203.0153 [pdf]
Are Gravimeters Sensitive Enough to Measure Gravitational Waves?
Calculations show that the sensitivity of common gravimeters is sufficient to measure GW in a wide frequency range around 0.1 Hz. Initial evaluations have confirmed that it is possible to extract the coordinates and frequency drift of known binary star systems with good accuracy from multi-year data records of gravimeters distributed around the world. This opens the possibility of an Earth-based search for continuous GW several years before LISA.
[797] vixra:2203.0130 [pdf]
Measurement of a Continuous Gravitational Wave Near 2619.9 µHz
Superconducting gravimeters respond to deformations of the test body Earth by gravitational waves. The frequencies of continuous gravitational waves are identified by means of selective integration of long-term data. The modified superhet method is suitable for determining the direction of the GW source. The measured frequency deviation of the phase modulation exceeds the upper limit allowed by the Doppler effect. This problem can be solved by assuming that the propagation velocity of GW is lower than the speed of light.
[798] vixra:2203.0128 [pdf]
Every Sufficiently Large Even Number Is the Sum of Two Primes
The binary Goldbach conjecture asserts that every even integer greater than $4$ is the sum of two primes. In this paper, we prove that there exists an integer $K_\alpha > 4$ such that every even integer $x > p_k^2$ can be expressed as the sum of two primes, where $p_k$ is the $k$th prime number and $k > K_\alpha$. To prove this statement, we begin by introducing a type of double sieve of Eratosthenes as follows. Given a positive even integer $x > 4$, we sift from $[1, x]$ all those elements that are congruent to $0$ modulo $p$ or congruent to $x$ modulo $p$, where $p$ is a prime less than $\sqrt{x}$. Therefore, any integer in the interval $[\sqrt{x}, x]$ that remains unsifted is a prime $q$ for which either $x-q = 1$ or $x-q$ is also a prime. Then, we introduce a new way of formulating a sieve, which we call the sequence of $k$-tuples of remainders. By means of this tool, we prove that there exists an integer $K_\alpha > 4$ such that $p_k / 2$ is a lower bound for the sifting function of this sieve, for every even number $x$ that satisfies $p_k^2 < x < p_{k+1}^2$, where $k > K_\alpha$, which implies that $x > p_k^2 \; (k > K_\alpha)$ can be expressed as the sum of two primes.
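The double sieve described in the abstract can be sketched directly (a minimal illustration with our own naming; it checks the stated survivor property for one even x, not the paper's bounds): sift from [1, x] every element congruent to 0 or to x modulo each prime below sqrt(x), then inspect the survivors in [sqrt(x), x].

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def double_sieve_survivors(x: int) -> list[int]:
    """Sift from [1, x] every i with i % p == 0 or i % p == x % p,
    for each prime p < sqrt(x); return the unsifted part of [sqrt(x), x]."""
    keep = [True] * (x + 1)
    p = 2
    while p * p < x:
        if is_prime(p):
            for i in range(1, x + 1):
                if i % p == 0 or i % p == x % p:
                    keep[i] = False
        p += 1
    start = int(x ** 0.5)
    return [i for i in range(start, x + 1) if keep[i]]

# Every survivor q is prime and x - q is 1 or prime, as the abstract claims.
x = 1000
print(all(is_prime(q) and (x - q == 1 or is_prime(x - q))
          for q in double_sieve_survivors(x)))
```

A survivor has no prime factor below sqrt(x), so it is prime, and x minus a survivor likewise has no such factor, so it is 1 or prime.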
[799] vixra:2203.0117 [pdf]
Corrections about V.S. Adamchik's Papers
I study several papers of V.S. Adamchik and find several mistakes concerning integrals and Melzak's product. At the same time, I give more general formulas for three integrals.
[800] vixra:2203.0113 [pdf]
Where it is Shown that the Oscillation Symmetry is Also Verified in the Physical Properties of the Periodic Table of the Atomic Elements
The oscillation symmetry is applied with success to some physical atomic properties of many elements of the Periodic Table. It allows one to tentatively predict possible values for several unknown properties. A regularity is observed between oscillating periods. These values, for the different bodies studied, take discrete values as if they were quantized.
[801] vixra:2203.0110 [pdf]
Ordinary Scalars Will Undergo Topological Phase Transitions Under $f(R, \phi)$
In this article, we perform a simulation: when the boundary conditions are preset, the ratio of the temperatures of the two systems is a complex number, which is consistent with $f(R, \phi)$ theory; a new solution (prediction) then appears, namely that ordinary scalars will undergo topological phase transitions under $f(R, \phi)$.
[802] vixra:2203.0107 [pdf]
Split Property in Black-hole Information Problem and the Stability of de Sitter Space-time
The split property in the black hole information paradox, the problem of the loss of information from black holes, is discussed by means of a novel approach based on the evolution of a quantum scalar field in a background that contains a black hole. We show that the paradox cannot be resolved by assuming the split property. A new definition of the area of the horizon is proposed. The horizon area is an observable, and the value of this observable for a black hole is related to its entropy. The entropy of a black hole has been derived. The loss of information from a black hole is a consequence of the loss of entropy. We also calculate the Hawking temperature and some thermodynamic quantities of black holes.
[803] vixra:2203.0096 [pdf]
On the Solution of the Strong Gravitational Field, the Solution of the Singularity Problem, the Origin of Dark Energy and Dark Matter
In order to apply general relativity to a strong gravitational field, the gravitational self-energy of the object itself must be considered. By considering the gravitational self-energy, it is possible to solve the singularity problem, which is the biggest problem with general relativity. When an object acts gravitationally, the gravitational action of its gravitational potential energy is also included. Therefore, even in the case of the universe, the gravitational action of gravitational potential energy must be considered. Gravitational potential energy generates a repulsive force because it has a negative equivalent mass. For the observable universe, I calculated the negative gravitational self-energy, which is approximately three times greater than the positive mass energy, and which can explain the accelerated expansion of the universe. The source of dark energy is presumed to be gravitational self-energy and an increase in mass due to the expansion of the particle horizon. The effect of dark energy occurs because matter and galaxies entering the particle horizon contribute to the total gravitational potential energy. While mass energy is proportional to M, gravitational self-energy increases faster because it is proportional to -M^2/R. Accordingly, an effect of increasing dark energy occurs. I present Friedmann's equations and a dark energy function obtained through the gravitational self-energy model. There is no cosmological constant; dark energy is a function of time. This model predicts an inflection point where dark energy becomes larger and more important than the energy of matter and radiation.
[804] vixra:2203.0092 [pdf]
On the Scholz Conjecture
In this paper we prove an inequality relating the length of addition chains producing numbers of the form $2^n-1$ to the length of the shortest addition chain producing their exponents. In particular, we obtain the inequality $$\delta(2^n-1)\leq n-1+\iota(n)+G(n)$$ where $\delta(n)$ and $\iota(n)$ denote the length of an addition chain and of the shortest addition chain producing $n$, respectively, with $G:\mathbb{N}\longrightarrow \mathbb{R}$.
[805] vixra:2203.0087 [pdf]
Frequentist and Bayesian Analysis Methods for Case Series Data and Application to Early Outpatient Covid-19 Treatment Case Series of High Risk Patients
When confronted with a public health emergency, significant innovative treatment protocols can sometimes be discovered by medical doctors at the front lines based on repurposed medications. We propose a very simple hybrid statistical framework for analyzing the case series of patients treated with such new protocols, which enables a comparison with our prior knowledge of expected outcomes in the absence of treatment. The goal of the proposed methodology is not to provide a precise measurement of treatment efficacy, but to establish the existence of treatment efficacy, in order to facilitate the binary decision of whether the treatment protocol should be adopted on an emergency basis. The methodology consists of a frequentist component that compares a treatment group against the probability of an adverse outcome in the absence of treatment, and calculates an efficacy threshold that has to be exceeded by this probability, in order to control the corresponding $p$-value, and reject the null hypothesis. The efficacy threshold is further adjusted with a Bayesian technique, in order to also control the false positive rate. A selection bias threshold is then calculated from the efficacy threshold to control for random selection bias. Exceeding the efficacy threshold establishes efficacy by the preponderance of evidence, and exceeding the more demanding selection bias threshold establishes efficacy by the clear and convincing evidentiary standard. The combined techniques are applied to case series of high-risk COVID-19 outpatients who were treated using the early Zelenko protocol and the more enhanced McCullough protocol. The resulting efficacy thresholds are then compared against our prior knowledge of mortality and hospitalization rates of untreated high-risk COVID-19 patients, as reported in the research literature.
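The frequentist component can be illustrated with a short sketch (illustrative only: the exact-binomial model, the function names and the bisection are our assumptions, not the paper's full methodology): given n treated patients and d adverse outcomes, find the smallest untreated event probability p0 at which the observation would reject the null hypothesis at level alpha.

```python
from math import comb

def binom_tail(d: int, n: int, p: float) -> float:
    """P(X <= d) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d + 1))

def efficacy_threshold(n: int, d: int, alpha: float = 0.05) -> float:
    """Smallest assumed untreated event probability p0 such that observing
    <= d events among n treated patients gives P(X <= d | p0) <= alpha.
    The tail probability is monotone decreasing in p0, so bisection applies."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_tail(d, n, mid) <= alpha:
            hi = mid
        else:
            lo = mid
    return hi

# With 100 treated patients and zero deaths, an untreated mortality rate
# above roughly 3% already rejects the null at the 5% level.
print(efficacy_threshold(100, 0))
```

If the literature-reported untreated rate exceeds this threshold, the observed case series is evidence of efficacy in the sense described above.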
[806] vixra:2203.0073 [pdf]
Problems and Solutions of Black Hole Cosmology
The Black Hole Cosmology, the idea that "the universe we observe is the interior of a black hole", was proposed in the 1970's. However, this Black Hole Cosmology is known to have several fatal flaws. In a black hole, the singularity exists in the future, whereas in the real universe, the singularity exists in the past. And, while the objects inside a black hole are moving toward the singularity, the observed universe is an expanding universe, so the two look completely opposite to each other. Moreover, since black holes are imagined to decompose humans into atomic units through strong tidal forces, the claim that humans are living inside black holes has not been seriously considered. In this study, I will solve the singularity problem and prove that humans can live inside a sufficiently large black hole. Inside a universe black hole, there is an almost flat space-time larger than the observable universe. There is also the possibility of solving the problem of cosmic expansion inside a black hole. Therefore, I would like to request new interest and research on Gravitational Self-Energy and Black Hole Cosmology through this study.
[807] vixra:2203.0070 [pdf]
What is the Value of the Function x/x at x=0? What is 0/0?
It is a great pity that confusion still surrounds the very famous problem of 0/0 and the value of the elementary function x/x at x=0. In this note, we discuss these problems in an elementary and self-contained way in order to provide a good understanding for general readers.
[808] vixra:2203.0061 [pdf]
Obtaining Information About Nature with Finite Mathematics
The main goal of this note is to explain that classical mathematics is a special degenerate case of finite mathematics in the formal limit p→∞, where p is the characteristic of the ring or field in finite mathematics. This statement is not philosophical but has been rigorously proved mathematically in our publications. We also describe phenomena which finite mathematics can explain but classical mathematics cannot. Classical mathematics involves limits, infinitesimals, continuity etc., while finite mathematics involves only finite numbers.
[809] vixra:2203.0056 [pdf]
Flyby Radio Doppler and Ranging Data Anomalies Are Due to Different Inbound and Outbound Velocities in the CMB Rest Frame
The COBE, WMAP, and Planck data analyses exhibit that the CMB rest frame can be seen as a fundamental, absolute space, the CMB-space. All Earth flyby radio Doppler data anomalies can be resolved by applying the general, classical Doppler formula (CMB-Doppler formula) of first order for two-way signals between earthbound Deep Space Network stations and a spacecraft during an Earth flyby. For that purpose, the annually varying absolute velocity vector $\mathbf{u}_e$ of Earth is used, derived from the absolute velocity vector of the solar system barycenter, $\mathbf{u}_{sun}$, of magnitude $u_{sun} = 369.82 \pm 0.11$ km/s, in the direction of the constellation Crater, near Leo. Together with the relative, asymptotic inbound and outbound velocity vectors $\mathbf{v}_{in}$ and $\mathbf{v}_{out}$ in the equatorial frame, we obtain the absolute inbound and outbound velocity vectors $\mathbf{u}_{in}$ and $\mathbf{u}_{out}$ in the equatorial frame. The relative, asymptotic inbound and outbound velocities are actually equal in magnitude ($v_{in} = v_{out}$), while the magnitudes of the absolute inbound and outbound velocities of a spacecraft are in general different ($u_{in} \neq u_{out}$), leading to the apparent anomaly. Thus the use of the CMB-Doppler formula explains the positive or negative differences in energy so far considered as residuals. The measured, different absolute velocities in the CMB rest frame explain the supposed radar ranging data residuals as well.
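The central kinematic point, equal relative speeds but unequal absolute speeds in the CMB frame, is easy to reproduce numerically (the vectors below are purely illustrative, not mission data):

```python
def norm(v):
    # Euclidean magnitude of a velocity vector.
    return sum(c * c for c in v) ** 0.5

u_e = (368.0, 30.0, 0.0)   # illustrative absolute Earth velocity, km/s
v_in = (5.0, 3.0, 1.0)     # asymptotic inbound velocity, km/s
v_out = (-5.0, 3.0, 1.0)   # same speed, direction rotated by the flyby

# Relative inbound/outbound speeds are equal ...
print(abs(norm(v_in) - norm(v_out)) < 1e-12)

# ... but the absolute speeds in the CMB frame differ,
# which is the claimed source of the "anomalous" energy change.
u_in = tuple(a + b for a, b in zip(u_e, v_in))
u_out = tuple(a + b for a, b in zip(u_e, v_out))
print(norm(u_in) != norm(u_out))
```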
[810] vixra:2203.0050 [pdf]
A Proof that Zeta(n >= 2) is Irrational
We show that using the denominators of the terms of Zeta(n)-1 = z_n as decimal bases gives all rational numbers in (0,1) as single decimals. We also show that the partial sums of z_n are not given by such single digits using the partial sums' terms. These two properties yield a proof that z_n is irrational.
[811] vixra:2203.0038 [pdf]
Evolution of the Donut Chain Theory of Space and Matter
The Donut Chain Theory of Space and Matter started during the late 1970’s as one person’s attempt to understand the journey taken by nature to create space and matter in the universe. The development focused on gaining a personal conceptual understanding of how such a journey might occur. Originally, the process was never intended to be definitive; nor, was it intended to produce numerical results of any significance. It was simply meant to provide a personally plausible understanding of space and matter.
[812] vixra:2203.0032 [pdf]
Chasing Oumuamua: an Apology for a Cyclic Gravity and Cosmology, Consistent with an Adaptation of General Relativity
Oumuamua was the first interstellar object observed to pass through the solar system. It did not follow the expected hyperbolic path, as if the pull of the Sun’s gravity was less than expected. Off-gassing normally present in comets was not observed. A modified gravity hypothesis — cyclic gravity and cosmology (CGC) — is proposed here to explain this motion. This hypothesis also would entail a greatly simplified and cyclic cosmology, potentially resolving the Hubble tension controversy.
[813] vixra:2203.0029 [pdf]
Quantum Mechanics Emerging from Complex Brownian Motions
The connection between the Schrödinger equation and Einstein's diffusion theory on the basis of Brownian motion of independent particles is well known. However, in contrast to diffusion theory, quantum mechanics has suffered controversial interpretations due to the counterintuitive concept of the wavefunction. Here, while we confirm there is no difference in the mathematical form of these two equations, we derive the complex version of displacement. Using the diffusion theory of particles in a medium, as simple as it is, we describe that quantum mechanics is just an elegant and subtle equation describing the probability of all the trajectories that a particle can take to propagate in time by a predictive wavefunction. Therefore, information on the position of particles through time in quantum theory is embedded in the wavefunction, which predicts the evolution of an ensemble of individual Brownian particles.
[814] vixra:2203.0025 [pdf]
Double-Slit and Aharonov-Bohm Experiments in Magnetic Field
We discuss the two-slit experiment and the Aharonov-Bohm (AB) experiment in a magnetic field. An electron moving in a magnetic field produces so-called synchrotron radiation. In other words, photons are emitted from the points of the electron trajectory, which means that the trajectory of the electron is visible in the synchrotron radiation spectrum. The extension of the discussion to cosmic rays moving in the magnetic field of the Saturn magnetosphere and its rings is mentioned; it is related to the Cassini probe. The solution of the problem in the framework of the hydrodynamical model of quantum mechanics and of nonlinear quantum mechanics is also mentioned.
[815] vixra:2203.0022 [pdf]
Upgrading of Entropy of the Universe
The dynamic space is structured by three fundamental elements, namely length, elementary electric charges (units) and forces. A spherical deformity of the space has occurred, which has created an equality of the peripheral and radial cohesive forces (Universal symmetry). Close to the Universe center this equality is breached, thus causing the Genesis of the primary form of matter and the Universal antigravity force, whereby Hubble's Law is proved. At the periphery of the Universe the dynamic formation of particles is turned back to the dynamic space and the vacuums of their cores end up in the vacuum-nonexistence, thus resulting in the upgrading of the entropy of the Universe. Actually, by the dissolution of the space deformations, the oriented forces (high entropy) are restored in the form of space cohesive forces (zero entropy). However, collisions of the charged particles take place onto the elastic membrane at the periphery of the Universe. The above membrane consequently oscillates and causes the acceleration of a residue of the charged particles towards the interior of the Universe and, since the charged particles arrive at the periphery of the Universe at the same centrifugal speed and are degraded by the same mechanism, they provoke a weak and constant Cosmic background radiation.
[816] vixra:2203.0015 [pdf]
Planck Plasma and the Debye Length
The Debye length plays a central role in plasma physics and also for semiconductors. We are investigating what the Debye length would be for a hypothetical plasma consisting of Planck mass particles; in other words, what we could coin: Planck plasma. This, we think, could be of interest as the Planck scale is assumed to play a central role in quantum gravity theory and potentially also quantum gravity computers.
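The quantity being rescaled is the standard Debye length, λ_D = sqrt(ε0 k_B T / (n q²)); a sketch with SI constants follows (the chosen temperature, density and elementary charge are illustrative assumptions; the paper's hypothetical Planck plasma would substitute Planck-scale values):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
K_B = 1.380649e-23        # Boltzmann constant, J/K
Q_E = 1.602176634e-19     # elementary charge, C

def debye_length(temperature_k: float, density_m3: float,
                 charge_c: float = Q_E) -> float:
    """Debye screening length lambda_D = sqrt(eps0 * kB * T / (n * q^2))."""
    return math.sqrt(EPS0 * K_B * temperature_k / (density_m3 * charge_c**2))

# A typical laboratory plasma (T ~ 1e4 K, n ~ 1e16 m^-3) screens over
# tens of micrometres; a hypothetical Planck plasma would replace the
# charge and thermal scale with Planck-scale quantities.
print(debye_length(1e4, 1e16))
```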
[817] vixra:2203.0014 [pdf]
Properties of a Possible Unification Algebra
An algebra providing a possible basis for the standard model is presented. The algebra is generated by combining the trigintaduonion Cayley-Dickson algebra with the complexified space-time Clifford algebra. Subalgebras are assigned to represent multivectors for transverse coordinates. When a requirement for isotropy with respect to spatial coordinates is applied to those subalgebras, the structure generated forms a pattern matching that of the fermions and bosons of the standard model.
[818] vixra:2203.0011 [pdf]
A Dictionary of Plant Sciences by Michael Allaby and the Graphical Law
We study A Dictionary of Plant Sciences, the fourth edition, by Michael Allaby from the Oxford University Press. We draw the natural logarithm of the normalised number of entries starting with a letter vs the natural logarithm of the normalised rank of that letter. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.02$), i.e. the Bethe-Peierls curve in the presence of four nearest neighbours and a small external magnetic field, $\beta H= 0.02$. $\beta$ is $\frac{1}{k_{B}T}$ where T is temperature and $k_{B}$ is the tiny Boltzmann constant. This is also the case with A Dictionary of Psychology by A. M. Colman from the Oxford University Press, which we have studied before. It appears that the branch of Plant Sciences is tantalizingly close to the branch of Psychology, internally.
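The plotting procedure described, normalised log count against normalised log rank, can be sketched as follows (our own reconstruction of the normalisation, dividing counts by the largest count and ranks by the number of letters; the sample counts are invented):

```python
import math

def graphical_law_points(letter_counts: dict[str, int]) -> list[tuple[float, float]]:
    """Sort letters by entry count (rank 1 = most entries) and return
    (ln(rank / max_rank), ln(count / max_count)) pairs for plotting."""
    counts = sorted(letter_counts.values(), reverse=True)
    n_max, k_max = counts[0], len(counts)
    return [(math.log(rank / k_max), math.log(n / n_max))
            for rank, n in enumerate(counts, start=1)]

# Toy data: entry counts per initial letter of a hypothetical dictionary.
sample = {"a": 120, "b": 80, "c": 150, "d": 60}
pts = graphical_law_points(sample)

# The top-ranked letter gives ln(n/n_max) = 0; the last rank gives ln(k/k_max) = 0.
print(pts[0][1] == 0.0 and pts[-1][0] == 0.0)
```

The resulting points would then be compared against the Bethe-Peierls magnetisation curve.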
[819] vixra:2203.0009 [pdf]
Proof of Fermat's Last Theorem (Using 6 Methods)
The Pythagorean theorem is perhaps the best known theorem in the vast world of mathematics. A simple relation of square numbers, which encapsulates all the glory of mathematical science, is also justifiably the most popular yet sublime theorem in mathematical science. The starting point was Diophantus' 20th problem (Book VI of Diophantus' Arithmetica), which for Fermat is for n = 4 and consists in the question whether there are right triangles whose sides can be measured as integers and whose surface can be a square. This problem was solved negatively by Fermat in the 17th century, who used the wonderful method (ipse dixit Fermat) of infinite descent. The difficulty of solving Fermat's equation was first circumvented by Wiles and R. Taylor in late 1994 ([1],[2],[3],[4]) and published in Taylor and Wiles (1995) and Wiles (1995). We present the proof of Fermat's last theorem and other accompanying theorems in 4 different independent ways. For each of the methods we consider, we use the Pythagorean theorem as a basic principle and also the fact that the proof of the first degree Pythagorean triad is absolutely elementary and useful. The proof of Fermat's last theorem marks the end of a mathematical era; however, the urgent need for a more educational proof seems to be necessary for undergraduates and students in general. Euler's method and Wiles' proof is still a method that does not exclude other equivalent methods. The principle, of course, is the Pythagorean theorem and the Pythagorean triads, which form the basis of all proofs and are also the main way of proving the Pythagorean theorem in an understandable way. Other forms of proofs we will do will show the dependence of the variables on each other. A proof of Fermat's theorem without the dependence of the variables cannot be correct and will therefore give undefined and inconclusive results.
It is, therefore, possible to prove Fermat's last theorem more simply and equivalently than the equation itself, without monomorphisms. "If one cannot explain something simply so that the last student can understand it, it is not called an intelligible proof and of course he has not understood it himself." R. Feynman, Nobel Prize in Physics, 1965.
[820] vixra:2203.0004 [pdf]
Literature Review of Recent Advancements in Hypergraph Learning as it Relates to Optimizer
Hypergraphs are a generalization of graphs in which an edge can join any number of vertices; in an ordinary graph, by contrast, an edge connects exactly two vertices. The applications of hypergraphs can range from analogical explanations, such as social networks, to hard generalities in the case of collaborative game theory, where they are known as simple games. The more abstract applications include localized and global optimization of radial functions in computational geometry, and the optimizers generated could also be used to solve linear scheduling problems. The theoretical approach developed under these categories can be used in embedding, clustering and classification, which can also be solved through the application of spectral hypergraph clustering.
[821] vixra:2203.0001 [pdf]
One Century since Bergman, Szego and Bochner on Reproducing Kernels
In this note, we wrote the preface for the first volume of the International Journal of Reproducing Kernels (The Roman Science Publications and Distributions (RSPD): https://romanpub.com/ijrk.php). Incidentally, this year marks one century since the origin of reproducing kernels in Berlin. Some detailed information on the origin and on the global situation of the theory of reproducing kernels, together with the contents of the first volume, is introduced.
[822] vixra:2202.0171 [pdf]
Comparison of Instrumentally Measured Temperature with Other Instrumentally Measured or Observed Geophysical Quantities.
In this review, we demonstrate a striking similarity between instrumentally measured temperature, the speed of the magnetic North Pole as a proxy for the changes in the Earth's magnetic field, seismic activity, and UFO sightings as a proxy for energy transfer between near-Earth space and the Earth's atmosphere. New research (some as recent as 2021) points towards the Van Allen Belts as the main contributor to global warming.
[823] vixra:2202.0170 [pdf]
Evolution of the Universe in an Infinite Space
This hypothesis considers the current universe to be a result of evolution in an infinite data space. The laws and properties of the universe are explained in terms of their function as evolutionary products. There is evidence for this hypothesis in the form of error correcting codes (see section 2.4).
[824] vixra:2202.0168 [pdf]
Chlorine Dioxide: Does it Contribute to Human Health? a Brief Review
Chlorine dioxide, ClO2, a non-patentable substance, is a molecule composed of two of the most disinfectant elements found in nature, chlorine and oxygen, both of them electronegative. As early as 1850, ClO2 has been used in the oxidation of water and, since 1944, in the treatment of waste water and the bleaching of cellulose. Similarly, oxygen, in the form of hydrogen peroxide, is used to disinfect ambulances, hospital rooms and medical equipment, among other applications. Recently, the Global Health and Life Coalition (GHLC) has reported favourable results in the treatment of COVID-19 using ClO2 under a parameterized protocol designed by scientist members of this organization. Other research works carried out in different parts of the world sustain the hypothesis that, as a relatively stable radical and a strong oxidant regardless of the pH of its surroundings, ClO2 and its application in an area as sensitive as human health presents itself as an alternative worth studying further.
[825] vixra:2202.0165 [pdf]
The Expansion of Spacetime
In this paper, the physical universe is modelled as an expanding Minkowski space, and this obviates the need for dark energy to be included in the cosmological model. The observed accelerated expansion in the current epoch can be understood purely on the basis of a mass-dominated universe, where deceleration due to gravity is more than compensated for by expansion of the time dimension. In the epoch prior to this, when a linear expansion of the scale factor occurred, the universe was radiation-dominated, and in the very early exponentially expanding universe, cosmic inflation can be attributed to an expanding ensemble of non-interacting particles. This is very different behaviour from that deduced from the currently accepted cosmological model.
[826] vixra:2202.0162 [pdf]
Hypergraph Deployment with Self-abrasive Deep Neural Networks and CSGANS
The objective of this study is to develop a definitive meta-analysis of recent developments in the application of hypergraph theory to deep learning and, more widely, to machine learning. The applications of this particular technique may range from simple classification tuning to more advanced abstract GANs in the field of regenerative graphical systems and computer vision in general. In our experiments, we use a novel random walk procedure and show that our model achieves and, in most cases, surpasses state-of-the-art performance on benchmark data sets. Additionally, we display our classification performance as compared to traditional statistical techniques, ML algorithms, and classical and new deep learning algorithms.
[827] vixra:2202.0160 [pdf]
On the Non-Linear Refractive Index-Curvature Relation
The refractive index-curvature relation is formulated using the second rank tensor of Ricci curvature as a consequence of a scalar refractive index. A scalar refractive index describes (an isotropic) linear optics. In (an isotropic) non-linear optics, this scalar refractive index is decomposed into a contravariant fourth rank tensor of non-linear refractive index and a covariant fourth rank tensor of susceptibility. In topological space, both a contravariant fourth rank tensor of non-linear refractive index and a covariant fourth rank tensor of susceptibility, are related to the Euler-Poincare characteristic, a topological invariant.
[828] vixra:2202.0157 [pdf]
Ekagi-Dutch-English-Indonesian Dictionary by J. Steltenpool and the Onsager's Solution
We consult the Ekagi-Dutch-English-Indonesian Dictionary by J. Steltenpool. Here we count all the Ekagi head words initiating with a letter. We draw the natural logarithm of the number of words, normalised, starting with a letter vs the natural logarithm of the rank of the letter. We find that the words underlie a magnetisation curve. The magnetisation curve, i.e. the graph of the reduced magnetisation vs the reduced temperature, is the exact Onsager solution of the two-dimensional Ising model in the absence of an external magnetic field.
[829] vixra:2202.0151 [pdf]
Pati-Salam GUT from Grassmann Number Factorization in SU(2) Supergauge Theories
This paper will propose a new construction of the SU(4)×SU(2)×SU(2) Pati-Salam gauge symmetry. It is based on a particular construction of supersymmetric theory where the vector multiplet is in the adjoint representation of the SU(2) group. A factorization of the Grassmann numbers from the commutator of vector multiplets will give new non-trivial terms which will correspond to an SU(4) gauge theory.
[830] vixra:2202.0149 [pdf]
Proof of Beal's Conjecture
The difference between the Beal equation and the Fermat equation lies in the different exponents of the variables and the method of solving it. As we will show, for the proof of the Beal equation to be complete, Fermat's theorem must hold. There are only 10 known solutions, and all of them appear with an exponent of 2. This very fact is proved here using a uniform method. Therefore, Beal's conjecture is true under the above conditions, because it accepts that there is no solution when all exponent values are greater than 2; the truth of this is proved in Theorem 6, based on the results of Theorem 5. The primary purpose of solving the equation is to see what happens when solving the equation $a^x + b^y = c^z$, i.e. for Pythagorean triples of degree 1. This is the generator of the theorems and programs that follow.
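As a concrete companion to the claim that no solution exists when all exponents exceed 2 unless the bases share a common factor, here is a small brute-force search. This is our own sketch, not the paper's uniform method; the function name and search bounds are hypothetical choices.

```python
# Search a^x + b^y = c^z with all exponents >= 3 in a modest range; every
# solution found shares a common factor, consistent with Beal's conjecture.
from math import gcd

def beal_search(max_base=30, max_exp=6):
    """All tuples (a, x, b, y, c, z) with a <= b, exponents in 3..max_exp,
    bases in 1..max_base, satisfying a^x + b^y = c^z."""
    powers = {}  # value -> list of (base, exponent) producing it
    for c in range(1, max_base + 1):
        for z in range(3, max_exp + 1):
            powers.setdefault(c ** z, []).append((c, z))
    sols = []
    for a in range(1, max_base + 1):
        for x in range(3, max_exp + 1):
            for b in range(a, max_base + 1):
                for y in range(3, max_exp + 1):
                    for c, z in powers.get(a ** x + b ** y, []):
                        sols.append((a, x, b, y, c, z))
    return sols

sols = beal_search()  # contains e.g. 2^3 + 2^3 = 2^4 and 3^3 + 6^3 = 3^5
```

Checking `gcd(gcd(a, b), c) > 1` for every tuple returned is the empirical content of the conjecture in this range.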
[831] vixra:2202.0146 [pdf]
1D and 2D Global Strong Solutions of Navier Stokes Existence and Uniqueness
Consider the Navier-Stokes equations for a one-dimensional or two-dimensional compressible viscous fluid. It is a well-known fact that there is a strong solution locally in time when the initial data are smooth and the initial density is bounded below by a positive constant. In this article, under the same hypothesis, I show that the density remains uniformly bounded below in time by a positive constant, and therefore a strong solution exists globally in time. In addition, most existing results are obtained with a positive viscosity coefficient, but the current results hold even if the viscosity coefficient vanishes with the density. Finally, I prove that this solution is unique in a class of weak solutions that satisfy the usual entropy inequalities. The key point of this work is the new entropy-like inequalities that Bresch and Desjardins introduced for the shallow water system of equations. This inequality gives the density additional regularity (assuming such regularity exists initially).
[832] vixra:2202.0144 [pdf]
Conservation of Mass-Energy and Reinterpretation of the Einstein Field Equations
Energy is everywhere. Energy propagates by waves and light, and has no mass. So light is a wave. Einstein introduced a particle called the photon to explain the photoelectric effect, and it is said that light causes the photoelectric effect. Therefore, the conclusion so far is that ``Light is both a particle and a wave''. Energy is an independent entity with a physical quantity and is quantized according to Planck's law. Energy can be measured in terms of temperature, and can also be expressed in terms of mass-energy according to the mass-energy equivalence principle. Energy can raise the temperature of matter, or it can provide energy for matter to move. They are related to each other but act independently. All matter has potential energy and thermal energy separately inside. The internal potential energy $(E_p)$ of matter does not interact with thermal energy or external kinetic energy $(E_k)$. Particles in the microscopic world can either emit or absorb energy, or they can release energy through mass deficits that release parts of matter. It can be seen that Einstein's field equation is an equation to which the law of conservation of mass-energy applies. Therefore, from Einstein's field equations, we can derive the matter-dominated universe and the energy-dominated universe, respectively.
[833] vixra:2202.0138 [pdf]
Diophantine Physics
Viewing Kepler's laws as Diophantine non-local equations introduces the action quantum and the Diophantine Coherence Theorem, which generalizes the method of Arthur Haas that anticipated the Bohr radius. This leads to a space quantum breaking the Planck wall by a factor 10^{61} and the associated Holographic Cosmos, identified as the source of the background radiation. An Electricity-Gravitation symmetry, connected with the Combinatorial Hierarchy, defines the steady-state Universe with invariant Hubble radius 13.81 Glyr, corresponding to 70.79 (km/s)/Mpc, a value anticipated since 1997 by the Three Minutes Formula and confirmed by the Eddington Number, the Kotov period and the recent Carnegie-Chicago Hubble Program. This specifies G, compatible with the BIPM measurements, and definitively confirms the Anthropic Principle.
[834] vixra:2202.0132 [pdf]
On the Refractive Index-Curvature Relation
In a two-dimensional space, a refractive index-curvature relation is formulated using the second rank tensor of Ricci curvature. A scalar refractive index describes isotropic linear optics. In a fibre bundle geometry, a scalar refractive index is related to an Abelian (linear) curvature form. The Gauss-Bonnet-Chern theorem is formulated using a scalar refractive index. Because the Euler-Poincare characteristic is a topological invariant, a scalar refractive index is also a topological invariant.
[835] vixra:2202.0130 [pdf]
A Dictionary of World History by Edmund Wright and the Graphical Law
We study A Dictionary of World History, third edition, by Edmund Wright from Oxford University Press. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4, $\beta H=0.01$), i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours in the presence of a little external magnetic field, $\beta H=0.01$. $\beta$ is $\frac{1}{k_{B}T}$ where T is temperature and $k_{B}$ is the tiny Boltzmann constant. This is also the case with the Dictionary of Science we have studied before. It appears that the branch of Science, plausibly, is close to World History, internally.
[836] vixra:2202.0125 [pdf]
On the Quantification of Relativistic Trajectories
Solving the geodesic equation on a relativistic manifold is possible numerically step by step. This process can be transposed into a quantisation. We study here the effect of this quantisation on the Schwarzschild spacetime, more precisely in the Kruskal-Szekeres map.
[837] vixra:2202.0117 [pdf]
Note on the Plane Oblique Mercator Representation
In this paper about the oblique Mercator representation, we present the calculation of the geographical coordinates $(\Phi,\Lambda)$, images of the coordinates $(\varphi,\lambda)$ of a point on the sphere. We have added some exercises.
[838] vixra:2202.0115 [pdf]
The Heaven's Palaces Above Mount Sumeru: Beyond Desire Realm (须弥山上"诸天宫殿":欲界天外)
This paper continues the interpretation of "The Palace of the Heavens" in the Buddhist scriptures begun in the paper "The Heaven's Palaces above Mount Sumeru: Desire Realm". First of all, through the modern interpretation of the sutras and with the benefit of modern science, we are surprised to find that the sutras accurately describe the distance doubling relationship between the semi-major axes of the orbits of some trans-Neptunian objects in the solar system with perihelion greater than 50 AU and semi-major axis greater than or equal to 80 AU. Secondly, based on the actual semi-major axis data of the celestial bodies "2014 ST373", "2012 VP113" and "2003 VB12 Sedna" and the distance doubling relationship described by the Buddhist sutras, we can fit the semi-major axes of nine trans-Neptunian celestial bodies described by the sutras. Among them, the known celestial bodies "2013 SY99" and "Leleākūhonua (2015 TG387)" served as a test set and did not participate in the numerical fitting; the errors between their predicted values and actual observed values were only 0.07% and 12.22%, which preliminarily verifies the relevant descriptions of the Buddhist sutras. In addition, the sutras indicate the existence of eight unknown celestial bodies, four of which have predicted semi-major axis values of 2,024 AU, 4,047 AU, 8,095 AU and 16,189 AU, respectively. Some of these values are close to the simulation values calculated by scientists, namely 2,000 AU, 7,850 AU and 15,000 AU, which further verifies the relevant descriptions of the distance doubling relation in the Buddhist scriptures and also confirms the relevant interpretations of the paper "The Heaven's Palaces above Mount Sumeru: Desire Realm". Finally, these contents are not only far beyond the cognitive level of ancient people thousands of years ago, but even far beyond the exploration level of modern astronomical science. So there is an incredible era transcendence, which is extremely shocking again!
[839] vixra:2202.0109 [pdf]
Extending Lasenby’s Embedding of Octonions in Space-Time Algebra Cl(1,3), to All Three and Four Dimensional Clifford Geometric Algebras Cl(p,q), N = P + Q = 3,4
We study the embedding of octonions in the Clifford geometric algebra for spacetime, STA Cl(1,3), as suggested by Anthony Lasenby at AGACSE 2021. As far as possible, we extend the approach to similar octonion embeddings for all three- and four-dimensional Clifford geometric algebras Cl(p,q), n = p + q = 3, 4. Noticeably, the lack of a quaternionic subalgebra in Cl(2,1) seems to prevent the construction of an octonion embedding in this case, and necessitates a special approach in Cl(2,2). As examples, we present for Cl(3,0) the non-associativity of the octonionic product in terms of multivector grade parts with cyclic symmetry, show how octonion products and involutions can be combined to make the opposite transition from octonions to the Clifford geometric algebra Cl(3,0), and show how octonionic multiplication can be represented with (complex) biquaternions or the Pauli matrix algebra.
[840] vixra:2202.0107 [pdf]
Exact Expansions
In this paper, we continue the development of multivariate expansivity theory. We introduce and study the notion of an exact expansion and explore some applications.
[841] vixra:2202.0106 [pdf]
Bayesian Network and Information Theory
In this paper, we will express the BIC score as a function of the Bayesian network's entropy. We will then use this BIC score to learn a Bayesian network from an example data frame.
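The relation alluded to above can be illustrated with a minimal sketch (ours, with hypothetical function names and toy data, not the paper's code): for discrete data the maximized log-likelihood of a Bayesian network equals $-N\sum_i H(X_i \mid \mathrm{Pa}_i)$ with empirical conditional entropies, and BIC subtracts a $\frac{k}{2}\ln N$ penalty on the $k$ free parameters.

```python
# BIC of a discrete Bayesian network written through empirical entropies.
import math
from collections import Counter

def conditional_entropy(rows, child, parents):
    """Empirical H(child | parents) in nats, from a list of dict records."""
    n = len(rows)
    joint = Counter(tuple(r[v] for v in parents + [child]) for r in rows)
    marg = Counter(tuple(r[v] for v in parents) for r in rows)
    return -sum(c / n * math.log(c / marg[key[:-1]]) for key, c in joint.items())

def bic_score(rows, structure, cardinality):
    """structure: {node: [parents]}; cardinality: {node: number of states}."""
    n = len(rows)
    loglik = -n * sum(conditional_entropy(rows, x, ps)
                      for x, ps in structure.items())
    k = sum((cardinality[x] - 1) * math.prod(cardinality[p] for p in ps)
            for x, ps in structure.items())
    return loglik - 0.5 * k * math.log(n)

# Toy data frame: two fair binary variables, network A -> B.
rows = [{'A': 0, 'B': 0}, {'A': 0, 'B': 1}, {'A': 1, 'B': 0}, {'A': 1, 'B': 1}]
score = bic_score(rows, {'A': [], 'B': ['A']}, {'A': 2, 'B': 2})  # -11*ln(2)
```

Structure learning then amounts to comparing `bic_score` across candidate parent sets.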
[842] vixra:2202.0099 [pdf]
New Applications of Clifford’s Geometric Algebra
The new applications of Clifford's geometric algebra surveyed in this paper include kinematics and robotics, computer graphics and animation, neural networks and pattern recognition, signal and image processing, applications of versors and orthogonal transformations, spinors and matrices, applied geometric calculus, physics, geometric algebra software and implementations, applications to discrete mathematics and topology, geometry and geographic information systems, encryption, and the representation of higher order curves and surfaces.
[843] vixra:2202.0094 [pdf]
Folium of Descartes and Division by Zero Calculus -  An Open Question
In this note, in the folium of Descartes, with the division by zero calculus we will see some interesting results at the point at infinity, together with an interesting geometrical property. We will propose an open question.
[844] vixra:2202.0088 [pdf]
The Analysis of Siyu Bian,yi Wang, Zun Wang and Mian Zhu Applied to the Natario-Broeck Spacetime: a Very Interesting Approach Towards a More Realistic Interstellar Warp Drive
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, one of the major drawbacks that affects both warp drive spacetimes is the collisions with hazardous interstellar matter (asteroids, comets, interstellar dust etc) that will unavoidably occur when a ship travels at superluminal speeds across interstellar space. The problem of collisions between a warp drive spaceship moving at superluminal velocity and the potentially dangerous particles from the interstellar medium $IM$ is not new. It was first noticed in $1999$ in the work of Chad Clark, Will Hiscock and Shane Larson. Later on, in $2010$, it appeared again in the work of Carlos Barcelo, Stefano Finazzi and Stefano Liberati. In $2012$ the same problem of collisions against hazardous $IM$ particles would appear in the work of Brendan McMonigal, Geraint Lewis and Philip O'Byrne. Some years ago, in $1999$, Chris Van Den Broeck appeared with a very interesting idea. Broeck proposed a warp bubble with a large internal radius able to accommodate a ship inside, while having a submicroscopic outer radius and a submicroscopic external contact surface, in order to better avoid the collisions against the interstellar matter. The Broeck spacetime distortion has the shape of a bottle with $200$ meters of inner diameter, able to accommodate a spaceship inside the bottle, but the bottleneck possesses a very small outer radius of only $10^{-15}$ meters, $100$ billion times smaller than a millimeter, therefore reducing the probability of collisions against large objects in interstellar space. Recently a very interesting work appeared. It covers the analysis of Siyu Bian, Yi Wang, Zun Wang and Mian Zhu applied to the Alcubierre warp drive spacetime.
But the most important fact is that their analysis also applies to the Natario warp drive spacetime. In this work we applied the analysis of Siyu Bian, Yi Wang, Zun Wang and Mian Zhu to the Natario-Broeck warp drive spacetime and we arrived at the following conclusion: the analysis of Siyu Bian, Yi Wang, Zun Wang and Mian Zhu proves definitively that the Natario-Broeck warp drive spacetime is the best candidate for a realistic interstellar space travel.
[845] vixra:2202.0086 [pdf]
Note on Laborde Plane Representation Used In Madagascar
In this first paper, we present the plane representation of Laborde applied to Madagascar. We come back to the formulas and the mathematical details from the source document, namely fascicle n°4 of the "Treatise on Projections of Geographical Maps, for the Use of Cartographers and Geodesists", published in 1926 by L. Driencourt and J. Laborde. We have used modern language for certain old expressions cited in the source document.
[846] vixra:2202.0079 [pdf]
Improving Multi Expression Programming: an Ascending Trail from Sea-level Even-3-parity Problem to Alpine Even-18-Parity Problem
Multi Expression Programming is a Genetic Programming variant that uses a linear representation of individuals. A unique feature of Multi Expression Programming is its ability of storing multiple solutions of a problem in a single chromosome. In this paper, we propose and use several techniques for improving the search performed by Multi Expression Programming. Some of the most important improvements are Automatically Defined Functions and Sub-Symbolic node representation. Several experiments with Multi Expression Programming are performed in this paper. Numerical results show that Multi Expression Programming performs very well for the considered test problems.
[847] vixra:2202.0076 [pdf]
The New Notation for Hyperoperation of a Sequence
For a sequence $a_1, a_2, \ldots, a_n$, we define the exponent, tetration and pentation of a sequence $a_n$ as $\overset{n}{\underset{k = 1}{\textrm{E}}} (a_k) = a_1[3]a_2[3]\cdots[3]a_n$, $\overset{n}{\underset{k = 1}{\textrm{T}}} (a_k) = a_1[4]a_2[4]\cdots[4]a_n$, $\overset{n}{\underset{k = 1}{\mathrm{\Phi}}} (a_k) = a_1[5]a_2[5]\cdots[5]a_n$. Also, we define the $i$-th hyperoperation of a sequence $a_n$ as $\overset{n}{\underset{k = 1}{\textrm{H}_i}} (a_k) = a_1[i]a_2[i]\cdots[i]a_n$.
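A direct computational reading of the notation above, as a sketch of ours: `hyper(a, i, b)` evaluates $a[i]b$ recursively, and `hyper_fold` chains a sequence under $[i]$. Since hyperoperations with $i \ge 3$ are non-associative and the note does not fix a grouping, the right-to-left evaluation order below is our assumption.

```python
# a[i]b for nonnegative integers: i=1 addition, 2 multiplication,
# 3 exponentiation, 4 tetration, 5 pentation, ...
def hyper(a, i, b):
    if i == 1:
        return a + b
    if b == 0:
        return 0 if i == 2 else 1  # a*0 = 0; a^0 = 1; a[i]0 = 1 for i >= 3
    return hyper(a, i - 1, hyper(a, i, b - 1))

def hyper_fold(seq, i):
    """a1[i]a2[i]...[i]an, grouped right to left (our assumed convention)."""
    result = seq[-1]
    for a in reversed(seq[:-1]):
        result = hyper(a, i, result)
    return result
```

For example, `hyper_fold([2, 3, 2], 3)` computes $2^{(3^2)} = 512$, the operator written $\overset{3}{\underset{k=1}{\textrm{E}}}(a_k)$ above; tetration and pentation of a sequence follow by passing `i = 4` or `i = 5`.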
[848] vixra:2202.0066 [pdf]
LIGO's Spiral Binary Black Holes Failed to Merge
The frequency distribution and variation law of the GW150914 waveform, one of a large number of ancient and distant binary-merger gravitational waves claimed by LIGO, are deeply studied, and the recurrence relation of the frequency of the GW150914 waveform, with macro quantization significance, is accurately fitted. Firstly, the characteristic equation for effectively correcting the amplitude time of the waveform is proposed, the normative conditions of the numerical analysis method of the minimum solution of the characteristic Diophantine equations are determined, the new correction value of the amplitude time of the GW150914 waveform is given, and then the quantized recursive equation characterizing the frequency distribution of the signal is obtained. Secondly, the complex but clear data processing procedure for drawing the com quantum theory waveform is introduced, and the standard waveform of the GW150914 waveform is drawn. It is pointed out that the drawing of the LIGO numerical relativistic waveform is opaque, and the conclusion is vague and lacks due scientific analysis. Thirdly, the missing characteristic frequency in the GW150914 waveform is confirmed, and then the reason why the Hanford waveform and the Livingston waveform have no corresponding band of the characteristic frequency is analyzed. It is proved that LIGO's GW150914 spiral binary black holes failed to merge successfully, which shows that no matter how powerful the research team is, it is difficult to create astronomical events without any flaws. Finally, an experimental proposal to effectively test the confidence of a laser interferometer gravitational wave detector by detecting the gravitational wave of a simulated spiral binary system is proposed.
PS: LIGO made up enough lies that the gravitational wave of the merger of spiral binaries was detected, because they naively believed that as long as they teased the readers and announced that the binaries in all gravitational wave events were merged, there would be no corresponding gravitational wave in the future, and it would never be possible for anyone to discover the truth of the so-called gravitational wave of spiral binaries. However, there is no perfect lie in the world. Theoretical physicists only need to master the correct theoretical knowledge of gravity and understand the precise motion law of binary stars, and they can find many fatal loopholes in LIGO's lies. What is introduced here is just one of the loopholes in LIGO's lie of detecting the gravitational wave of the merger of spiral black holes: if the story of spiral double black holes is true, it must radiate a peak corresponding to a specific frequency. LIGO is obviously very ignorant about the basic knowledge of these gravitational theories, so it is unable to carry out the necessary calculations. This unfortunate ignorance has become one of the huge loopholes in the lie of LIGO's spiral binary gravitational wave. In fact, readers can also expose the lie of the LIGO gravitational wave from other aspects, and finally find out what kind of laboratory signal LIGO's so-called gravitational wave signal is simulated with. The evidence exposing LIGO's lies will become more wonderful one by one. However, LIGO and its cooperative institutions are likely to use the precise inferences of our paper to fabricate spikes of a specific frequency and further release false gravitational waves.
Therefore, we have to keep the other wonderful evidence secret for the time being, prepare for a Jedi counterattack, and finally conclude for LIGO: there are many lies in the scientific community, and LIGO's lies, although very cunning, are the most ignorant; they can only deceive those who are not real theoretical physics scholars, especially the news media.
[849] vixra:2202.0060 [pdf]
On the Number of Integral Points Between a K Dimensional Sphere and a Grid
Using the method of compression, we show that the number of integral points in the region bounded by the $2r\times 2r \times \cdots \times 2r~(k~\text{times})$ grid containing the sphere of radius $r$ and a sphere of radius $r$ satisfies the lower bound \begin{align} \mathcal{N}_{r,k} \gg r^{k-\delta}\times \frac{1}{\sqrt{k}}\nonumber \end{align} for some small $\delta>0$.
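For intuition about the quantity being bounded, a brute-force count for small $r$ and $k$ can be written directly; this is a toy check of ours, unrelated to the compression method of the paper, and the function name is hypothetical.

```python
# Count the integer points lying between the k-dimensional sphere of
# radius r and its bounding (2r x ... x 2r) grid.
from itertools import product

def points_between_sphere_and_grid(r, k):
    """Number of integer points x in [-r, r]^k with |x| > r."""
    rng = range(-r, r + 1)
    return sum(1 for x in product(rng, repeat=k)
               if sum(c * c for c in x) > r * r)

# In 2D with r = 10: the grid holds 21^2 = 441 points and the closed disc
# holds 317 (the Gauss circle count), leaving 124 points in between.
n = points_between_sphere_and_grid(10, 2)
```

The exhaustive count is exponential in $k$, which is precisely why an analytic lower bound such as $\mathcal{N}_{r,k} \gg r^{k-\delta}/\sqrt{k}$ is the interesting statement.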
[850] vixra:2202.0057 [pdf]
The Heaven’s Palaces Above Mount Sumeru: Desire Realm (须弥山上“诸天宫殿”:欲界)
Through the modern interpretation of the sutras and the benefit from modern science, we are surprised to find that the Sutras have accurate modern scientific descriptions of the existence of the eight planets in the solar system, the Asteroid belt, Ceres, and the distance doubling relation among some of the semi-major axes for the interplanetary orbits. Among them, the Buddha's description of the distribution characteristics of the asteroid belt, the existence of Ceres in the asteroid belt and the existence of a liquid layer inside Ceres are consistent with the results of modern scientific exploration. All the above content are far beyond our imagination, and there is an incredible era transcendence which is shocking again.
[851] vixra:2202.0054 [pdf]
Inadequacy of Classical Logic in Classical Harmonic Oscillator and the Principle of Superposition
In the course of the development of modern science, inadequacy of classical logic and Eastern philosophy have generally been associated only with quantum mechanics in particular, notably by Schroedinger, Finkelstein and Zeilinger among others. Our motive is to showcase a deviation from this prototypical association. So, we consider the equation of motion of a classical harmonic oscillator, and demonstrate how our habit of writing the general solution, by applying the principle of superposition, cannot be explained by remaining within the bounds of classical logic. The law of identity gets violated. The law of non-contradiction and the law of excluded middle fail to hold strictly throughout the whole process of reasoning, consequently leading to a decision problem where we cannot decide whether these two `laws' hold or not. We discuss how we, by habit, apply our intuition to write down the general solution. Such intuitive steps of reasoning, if formalized in terms of propositions, result in a manifestation of the inadequacy of classical logic. In view of our discussion, we conclude that the middle way ({\it Mulamadhyamakakarika}), a feature of Eastern philosophy, forms the basis of human reasoning. The essence of the middle way can be realized through self-inquiry ({\it Atmavichar}), another crucial feature of Eastern philosophy, which is exemplified by our exposition of the concerned problem. From the Western point of view, our work showcases an example of Hilbert's axiomatic approach to dealing with the principle of superposition in the context of the classical harmonic oscillator. In the process, it becomes a manifestation of Brouwer's views concerning the role of intuition in human reasoning and the inadequacy of classical logic, which were very much influenced by, if not founded upon, Eastern philosophy.
[852] vixra:2202.0051 [pdf]
Progress in the Composite View of the Newton Gravitational Constant and Its Link to the Planck Scale
The Newtonian gravitational constant G plays a central role in gravitational theory. Researchers have, since at least the 1980's, tried to see if the Newton gravitational constant can be expressed through or replaced with more fundamental units, such as the Planck units. However, already in 1987 it was pointed out that this leads to a circular problem, namely that one must know G to find the Planck units, and that it is therefore of little or no use to express G through the Planck units. This view has been repeated in the literature in recent years, and is the view held by the physics community. However, we will claim that the circular problem was solved a few years ago. In addition, when one expresses the mass from the Compton wavelength formula, this leads to the result that the three universal constants G, h and c can be replaced with only lp and c to predict observable gravitational phenomena. This paper will review the history as well as recent progress in the composite view of the gravitational constant.
[853] vixra:2202.0044 [pdf]
Paraphrasing Magritte’s Observation
Contrast Sensitivity of the human visual system can be explained from certain low-level vision tasks (like retinal noise and optical blur removal), but not from others (like chromatic adaptation or pure reconstruction after simple bottlenecks). This conclusion still holds even under substantial change in stimulus statistics, for instance when considering cartoon-like images as opposed to natural images (Li, Gomez-Villa, Bertalmio, & Malo, 2022). In this note we present a method to generate original cartoon-like images compatible with the statistical training used in (Li et al., 2022). Following the classical observation in (Magritte, 1929), the stimuli generated by the proposed method certainly are not what they represent: Ceci n'est pas une pipe. The clear distinction between representation (the stimuli generated by the proposed method) and reality (the actual object) avoids potential problems for the use of the generated stimuli in academic, non-profit publications.
[854] vixra:2202.0037 [pdf]
Proof of Fermat's Last Theorem by means of Elementary Probability Theory
In this work, we introduce the concept of Fermat’s Urn, an urn containing three types of marbles and holding a peculiar constraint: the probability to get at least one marble of a given type (while performing multiple independent drawings) is equal to the probability not to get any marble of another type. Further, we discuss a list of implicit hypotheses related to Fermat's Equation, which would allow us to interpret this equation exactly as the mentioned constraint in Fermat's Urn. Then, we study the properties of this constraint in relation with the capability to distinguish the types of marbles within the urn, namely in case of the event ''to get at least one marble of each type''. Finally, on the basis of a simple theorem related to this event, we prove that Fermat's Equation and Fermat's Urn may share those properties only if we perform at most two drawings from the urn. This result is then reflected in the solution of Fermat's Equation.
[855] vixra:2202.0030 [pdf]
A Dictionary of the Kachin Language by Rev. o. Hanson and the Graphical Law
We study A Dictionary of the Kachin Language by Rev. O. Hanson, 1954 printing. We draw the natural logarithm of the number of the Kachin words, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised (unnormalised). We find that the Kachin words underlie a magnetisation curve of a Spin-Glass in the presence of little external magnetic field. Moreover, the naturalness number of the Kachin language as seen through this dictionary is nine by sixteen.
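As a sketch of the counting step behind the "graphical law" method (a toy word list, not the Kachin dictionary data): words are grouped by first letter, the letters are ranked by count, and the normalised ln-ln pairs that the method plots are formed:

```python
import math
from collections import Counter

# Illustrative sketch on toy data (not the Kachin dictionary): count words
# by first letter, rank letters by count, then form the normalised
# ln(count) vs ln(rank) pairs used in the graphical-law plots.
words = ["apple", "ant", "bee", "bear", "bat", "cat", "cow", "dog"]
counts = Counter(w[0] for w in words)
ranked = sorted(counts.values(), reverse=True)  # counts by rank 1, 2, ...

k_max, n_max = len(ranked), ranked[0]
pairs = [(math.log(k / k_max), math.log(n / n_max))
         for k, n in enumerate(ranked, start=1)]
for x, y in pairs:
    print(f"ln(k/k_max) = {x:+.3f}   ln(n/n_max) = {y:+.3f}")
```

The top-ranked letter gives the point (ln(1/k_max), 0) and the last-ranked letter gives (0, ln(n_min/n_max)), so the curve is pinned at both axes, as in the normalised plots the abstract describes.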
[856] vixra:2202.0028 [pdf]
Penrose Suggestion as to Pre-Planck-Era Black Holes Showing up in Present Universe Data Sets Discussed, with a Possible Candidate as to GW Radiation Which May Provide Initial CMBR Data
What we are doing is three-fold. First, we examine the gist of the Penrose suggestion as to signals from a prior universe showing up in the CMBR; that is, this shows up as data in the CMBR. Second, we give a suggestion as to how super massive black holes of a prior universe cycle could be broken up by pre-big-bang conditions, with, say, millions of pre-Planck black holes coming out of a breakup of prior-universe black holes. Third, we utilize a discussion of Bose–Einstein condensates of gravitons composing the early universe black holes. The BEC formulation gives a number N of gravitons, linked to entropy, per black hole, which could lead to contributions to the alleged CMBR perturbations identified by Penrose et al.
[857] vixra:2202.0026 [pdf]
A Formula for the Function π(x) to Count the Number of Primes Exactly if 25 ≤ X ≤ 1572 with Python Code to Test it v. 4.0
This paper shows a very elementary way of counting the number of primes below a given number with total accuracy. It is the function π(x) for 25 ≤ x ≤ 1572.
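The paper's formula itself is not reproduced here; as a baseline, an exact sieve-based π(x) against which any such formula over 25 ≤ x ≤ 1572 can be tested:

```python
def prime_pi(x: int) -> int:
    """Exact prime-counting function via a sieve of Eratosthenes."""
    if x < 2:
        return 0
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(x**0.5) + 1):
        if is_prime[p]:
            is_prime[p*p::p] = [False] * len(is_prime[p*p::p])
    return sum(is_prime)

# Known reference values inside the paper's range:
print(prime_pi(25), prime_pi(100), prime_pi(1000))  # 9 25 168
```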
[858] vixra:2202.0024 [pdf]
Experimental Test of the Equivalence Principle: Result of Studying Free Fall of a Metal Disk and a Helium Balloon in a Vacuum (Low Vacuum)
The equivalence principle states that gravitational mass and inertial mass are two equivalent quantities, that in a gravitational field all bodies fall at the same rate during free fall in a vacuum regardless of their mass and composition. In the past, the free fall of bodies has been studied multiple times and the equivalence principle has always been confirmed so far. However, mainly solid bodies and liquids were used as test bodies in the experiments. In this experiment, in addition to a pure solid body, namely a metal disk, a solid body in hollow form filled with gas, specifically a helium balloon, has been studied during free fall in a vacuum (low vacuum). The analysis of the measured data shows a clear deviation of the measured values from the expected nominal values according to Galileo's law of falling bodies during free fall of the helium balloon and thus a violation of the equivalence principle.
[859] vixra:2202.0016 [pdf]
Relations of Deterministics and Associated Stochastics in the Sense of an Ensemble Theory Lead to Many Solutions in Theoretical Physics
With the method of establishing a clear connection between deterministics and associated stochastics in terms of an ensemble theory, Maxwell's equations are theoretically derived and a geometrodynamics of collective turbulent motions is developed. This in turn leads to a unification of the Maxwell and gravitational fields as well as to an explanation of the emergence of photons.
[860] vixra:2202.0009 [pdf]
On the General Gauss Circle Problem
Using the method of compression we show that the number of integral points in a $k$ dimensional sphere of radius $r>0$ is \begin{align} N_k(r)\gg \sqrt{k} \times r^{k-1+o(1)}.\nonumber \end{align}
[861] vixra:2202.0006 [pdf]
On a Variant of the Gauss Circle Problem
Using the method of compression we show that the number of integral points in the region bounded by the $2r\times 2r$ grid containing the circle of radius $r$ and a circle of radius $r$ satisfies the lower bound \begin{align} \mathcal{N}_r \gg r^{2-\delta}\nonumber \end{align}for some small $\delta>0$.
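As an illustrative aside (not from the paper), the classical 2D lattice-point count that both Gauss-circle notes build on can be brute-forced and compared with the area πr²:

```python
import math

def lattice_points_in_circle(r: int) -> int:
    """Brute-force count of integer points (x, y) with x^2 + y^2 <= r^2."""
    return sum(1 for x in range(-r, r + 1)
                 for y in range(-r, r + 1)
                 if x*x + y*y <= r*r)

for r in (1, 2, 5):
    n = lattice_points_in_circle(r)
    print(r, n, math.pi * r * r)  # the count tracks the area pi*r^2
```

For r = 1, 2, 5 this gives 5, 13, 81 points against areas of roughly 3.14, 12.57, 78.54, matching the classical N(r) = πr² + O(r) estimate that the compression-method bounds refine.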
[862] vixra:2202.0004 [pdf]
A New Representation for Dirac $\delta$-function
A polynomial power series is constructed for the one-sided step function using a modified Taylor series, whose derivative results in a new representation for Dirac $\delta$-function.
[863] vixra:2202.0002 [pdf]
The Trajectory Nonstability of Charges in LHC Due to Radiation Loss
The quasi-classical behavior of a charged particle moving in a magnetic field is derived by the WKB approximation and wave-packet method from the Klein-Gordon equation with the Schwinger radiative term. The lifetime of the wave-packet state is calculated for a constant magnetic field. The finite lifetime of the trajectory is the proof of the nonstationary motion of charges moving in a magnetic field.
[864] vixra:2202.0001 [pdf]
Characterizing Spectral Properties of Bridge Graphs
Bridge graphs are a special type of graph constructed by connecting identical connected graphs with path graphs. We discuss different types of bridge graphs $B_{n\times l}^{m\times k}$ in this paper, in particular complete-type bridge graphs, star-type bridge graphs, and full binary tree bridge graphs. We also bound the second eigenvalue of the graph Laplacian of these graphs using methods from Spectral Graph Theory. In general, we prove that for general bridge graphs $B_{n\times l}^2$, the second eigenvalue of the graph Laplacian lies between $0$ and $2$, inclusive. At the end, we discuss future work on infinite bridge graphs, creating definitions and finding the related theorems to support it.
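As a small numerical illustration (a single example, not the paper's proof): two triangles joined by one bridge edge, with the second-smallest Laplacian eigenvalue landing in the stated interval $[0, 2]$:

```python
import numpy as np

# Illustrative example (not the paper's general argument): two triangles
# joined by a single bridge edge. The second-smallest eigenvalue of the
# graph Laplacian (the algebraic connectivity) should lie in [0, 2].
edges = [(0, 1), (1, 2), (0, 2),   # first triangle
         (3, 4), (4, 5), (3, 5),   # second triangle
         (2, 3)]                   # the bridge edge
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A     # graph Laplacian L = D - A
eigs = np.sort(np.linalg.eigvalsh(L))
print(eigs[1])                     # algebraic connectivity, in (0, 2]
```

Because the bridge edge is a cut edge, the algebraic connectivity is small (at most the vertex connectivity, here 1), consistent with the paper's $[0, 2]$ bound.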
[865] vixra:2201.0218 [pdf]
A Paradox of “Adjacent” Real Points and Beyond
We reveal adjacent real points in the real set using a concise logical argument. This raises a paradox, since the real set is believed to exist and to be complete. We prove that each element in a totally ordered set has adjacent element(s); there is no densely ordered set. Furthermore, since the natural numbers can also be densely ordered under a certain ordering, the set of natural numbers, which is involved in every infinite set in ZFC set theory, does not itself exist.
[866] vixra:2201.0217 [pdf]
Quadruplet Sums of Quark-Lepton Masses
Adding the charm quark to the Koide triplet forms a quadruplet that approximates 2/5. The precision of this result is accurate to O(10^-5). We find that the charm mass sits at a minimum of a general quadruplet curve. Using this calculated charm mass and the heavy leptons which are directly measured, we predict the mass of the up, down, strange, and bottom quarks. Determining mass in this way avoids the inconsistency of mixing the running mass with the pole mass for the sums of these quark masses and serves as a prediction for more accurate techniques.
[867] vixra:2201.0215 [pdf]
The Gaps Between Primes
It is proved that: (i) for any positive integer d, there are infinitely many prime gaps of size 2d; (ii) every even integer greater than 2 is the sum of two prime numbers. Our method, based on analysing the distribution density of pseudo-primes in a specific set, transforms both statements into an upper-bound problem for the maximum gaps between overlapping pseudo-primes; the two are essentially the same problem.
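A numerical check of small instances of both statements (the second taken in its usual Goldbach form for even integers; this is evidence for small cases, not a proof):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, b in enumerate(sieve) if b]

ps = primes_up_to(10_000)
pset = set(ps)
gaps = {b - a for a, b in zip(ps, ps[1:])}

# Small even gaps 2d all occur already below 10^4 ...
print(sorted(g for g in gaps if g <= 10))  # [1, 2, 4, 6, 8, 10]
# ... and every even n in (2, 1000] is a sum of two primes.
goldbach_ok = all(any(n - p in pset for p in ps if p <= n // 2)
                  for n in range(4, 1001, 2))
print(goldbach_ok)  # True
```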
[868] vixra:2201.0205 [pdf]
The Concise Gojri-English Dictionary by Dr. Rafeeq Anjum and the Graphical Law
We study the Concise Gojri-English Dictionary by Dr. Rafeeq Anjum. We draw the natural logarithm of the number of words, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0$, in the absence of external magnetic field, H. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature and $k_{B}$ is the tiny Boltzmann constant. Moreover, the naturalness number of the Gojri language as seen through this dictionary is one.
[869] vixra:2201.0204 [pdf]
The Local Product and Local Product Space
In this note we introduce the notion of the local product on a sheet and associated space. As an application we prove under some special conditions the following inequalities \begin{align} 2\pi \frac{|\log(\langle \vec{a},\vec{b}\rangle)|}{(||\vec{a}||^{4s+4}+||\vec{b}||^{4s+4})|\langle \vec{a},\vec{b}\rangle|}\bigg |\int \limits_{|a_n|}^{|b_n|} \int \limits_{|a_{n-1}|}^{|b_{n-1}|}\cdots \int \limits_{|a_1|}^{|b_1|}\sqrt[4s+3]{\sum \limits_{i=1}^{n}x^{4s+3}_i}dx_1dx_2\cdots dx_n\bigg|\nonumber \\ \leq \bigg|\int \limits_{|a_n|}^{|b_n|} \int \limits_{|a_{n-1}|}^{|b_{n-1}|}\cdots \int \limits_{|a_1|}^{|b_1|}\mathbf{e}\bigg(-i\frac{\sqrt[4s+3]{\sum \limits_{j=1}^{n}x^{4s+3}_j}}{||\vec{a}||^{4s+4}+||\vec{b}||^{4s+4}}\bigg)dx_1dx_2\cdots dx_n\bigg|\nonumber \end{align} and \begin{align} \bigg|\int \limits_{|a_n|}^{|b_n|} \int \limits_{|a_{n-1}|}^{|b_{n-1}|}\cdots \int \limits_{|a_1|}^{|b_1|}\mathbf{e}\bigg(i\frac{\sqrt[4s+3]{\sum \limits_{j=1}^{n}x^{4s+3}_j}}{||\vec{a}||^{4s+4}+||\vec{b}||^{4s+4}}\bigg)dx_1dx_2\cdots dx_n\bigg|\nonumber \\ \leq 2\pi \frac{|\langle \vec{a},\vec{b}\rangle|\times |\log(\langle \vec{a},\vec{b}\rangle)|}{(||\vec{a}||^{4s+4}+||\vec{b}||^{4s+4})}\bigg |\int \limits_{|a_n|}^{|b_n|} \int \limits_{|a_{n-1}|}^{|b_{n-1}|}\cdots \int \limits_{|a_1|}^{|b_1|}\sqrt[4s+3]{\sum \limits_{i=1}^{n}x^{4s+3}_i}dx_1dx_2\cdots dx_n\bigg|\nonumber \end{align} and \begin{align} \bigg |\int \limits_{|a_n|}^{|b_n|} \int \limits_{|a_{n-1}|}^{|b_{n-1}|}\cdots \int \limits_{|a_1|}^{|b_1|}\sqrt[4s]{\sum \limits_{i=1}^{n}x^{4s}_i}dx_1dx_2\cdots dx_n\bigg|\nonumber \\ \leq \frac{|\langle \vec{a},\vec{b}\rangle|}{2\pi |\log(\langle \vec{a},\vec{b}\rangle)|}\times (||\vec{a}||^{4s+1}+||\vec{b}||^{4s+1}) \times \bigg|\prod_{i=1}^{n}|b_i|-|a_i|\bigg|\nonumber \end{align}for all $s\in \mathbb{N}$, where $\langle,\rangle$ denotes the inner product and where $\mathbf{e}(q)=e^{2\pi iq}$.
[870] vixra:2201.0194 [pdf]
Another Values of Barnes Function and Formulas
In this paper, I study values of the Barnes G-function such as G(k/8) and G(k/12), with the Wallis product as an application. I also write several formulas, so that elementary values can be evaluated.
[871] vixra:2201.0192 [pdf]
A Note on the Understanding of Quantum Mechanics
Quantum Mechanics is understood by generalizing models for cause-effect from functions, e.g. Differential Equations, to graphs and, via linearization, to linear operators. This also leads from classical logic to quantum logic.
[872] vixra:2201.0188 [pdf]
Preliminary Concept of General Intelligent Network (Gin) for Brain-Like Intelligence
A preliminary concept of AGI for brain-like intelligence is presented in this paper. The solution has two main aspects. Firstly, we combine information entropy and a generative network (GAN-like) model to propose a paradigm called the General Intelligent Network (GIN). In the GIN network, the original multimodal information can be encoded as low-information-entropy hidden state representations (HPPs), which can be reverse-parsed by the contextually relevant generative network into observable information. Secondly, we propose a generalized machine learning operating system (GML system), which includes an observable processor (AOP), an HPP storage system, and a multimodal implicit sensing/execution network. Our code will be released at https://github.com/ggsonic/GIN
[873] vixra:2201.0186 [pdf]
A Probabilistic Proof for the Syracuse Conjecture
We prove the veracity of the Syracuse conjecture by establishing that, starting from an arbitrary positive integer different from $1$ and $4$, the Syracuse process never returns to any positive integer already reached, and we conclude using a probabilistic approach.
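The claimed non-return property is easy to test numerically (a finite check on small starting values, not the paper's probabilistic argument):

```python
def syracuse_reaches_one(n: int) -> bool:
    """Iterate the Collatz/Syracuse map, verifying no value repeats
    before the trajectory reaches 1 (a repeat would mean a cycle)."""
    seen = set()
    while n != 1:
        if n in seen:
            return False
        seen.add(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True

print(all(syracuse_reaches_one(n) for n in range(2, 10_000)))  # True
```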
[874] vixra:2201.0185 [pdf]
Peacocks and the Zeta Distributions
We prove in this short paper that the stochastic process defined by: $$Y_{t} := \frac{X_{t+1}}{\mathbb{E}\left[ X_{t+1}\right]},\; t\geq a > 1,$$ is an increasing process for the convex order, where $ X_{t}$ a random variable taking values in $\mathbb{N}$ with probability $\mathbb{P}(X_{t}= n) = \frac{n^{-t}}{\zeta(t)}$ and $\zeta(t) = \sum \limits_{k=1}^{+\infty} \frac{1}{k^{t}}, \;\; \forall t> 1$.
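A numerical sketch of the distribution involved (truncated series, not the paper's convex-order proof): the mean of $X_t$ is $\zeta(t-1)/\zeta(t)$, so $Y_t = X_{t+1}/\mathbb{E}[X_{t+1}]$ is normalised to mean 1 by construction:

```python
# Numerical sketch with truncated sums (not the paper's argument): for the
# zeta distribution P(X_t = n) = n^(-t) / zeta(t), the mean equals
# zeta(t-1) / zeta(t).
N = 100_000  # truncation point for the series

def zeta(t: float) -> float:
    return sum(k**-t for k in range(1, N + 1))

t = 3.0
mean = sum(n * n**-t for n in range(1, N + 1)) / zeta(t)
print(mean, zeta(t - 1) / zeta(t))  # both ~ 1.3684 for t = 3
```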
[875] vixra:2201.0178 [pdf]
On Maximal Acceleration, Strings with Dynamical Tension, and Rindler Worldsheets
Starting with a different action and following a different procedure than the construction of strings with dynamical tensions described by Guendelman [1], a variational procedure of our action leads to a coupled nonlinear system of D + 4 partial differential equations for the D string coordinates X and the quartet of scalar fields including the dilaton and the tension T field. Trivial solutions to this system of complicated equations lead to a constant tension and to the standard string equations of motion. One of the most relevant features of our findings is that the Weyl invariance of the traditional Polyakov string is traded for the invariance under area-preserving diffeomorphisms. The final section is devoted to the physics of maximal proper forces (acceleration), minimal length within the context of Born's Reciprocal Relativity theory [6] and to the Rindler world sheet description of accelerated open and closed strings from a very different approach and perspective than the one undertaken by [7].
[876] vixra:2201.0174 [pdf]
The Discretization of the Full Potential Equation
The discretization process of the full potential equation (FPE), both in the quasi-linear and in the conservation form, is addressed. This work introduces the first stage toward the development of a fast and efficient FPE solver based on the algebraic multigrid (AMG) method. The mathematical difficulties of the problem are associated with the fact that the governing equation changes its type from elliptic (subsonic flow) to hyperbolic (supersonic flow). A pointwise relaxation method, when applied directly to the upwind discrete operator in the supersonic flow regime, is unstable. Resolving this difficulty is the main achievement of this work. A stable pointwise direction-independent relaxation was developed for the supersonic and subsonic flow regimes. This stable relaxation is obtained by post-multiplying the original operator by a certain simple first-order downwind operator. This new operator is designed in such a way that the pointwise relaxation applied to the product operator becomes stable. The discretization of the FPE in the conservation form is based on the body-fitted structured grid approach. In addition, the 2D stable operator in the supersonic flow regime was extended to the 3D case. We present a 3D pointwise relaxation procedure that is stable both in the subsonic and supersonic flow regimes. This was verified by the Von Neumann stability analysis.
[877] vixra:2201.0173 [pdf]
A Full Potential Equation Solver Based on the Algebraic Multigrid Method: Elementary Applications
This article reports the development of an efficient and robust full potential equation (FPE) solver for transonic flow problems, based on the algebraic multigrid (AMG) method. The AMG method solves algebraic systems based on multigrid principles, but in a way that is independent of the problem's geometry. The mathematical difficulties of the problem are associated with the fact that the governing equation changes its type from elliptic (subsonic flow) to hyperbolic (supersonic flow). The flow solver is based on the body-fitted structured grid approach in complex geometries. We demonstrate the AMG performance on various model problems with different flow speeds from subsonic to transonic conditions. The computational method was demonstrated to be capable of predicting the shock formation and achieving residual reduction of roughly an order of magnitude per cycle, both for elliptic and hyperbolic problems.
[878] vixra:2201.0172 [pdf]
Detection of the Continuous Gravitational Wave of HM Cancri
HM Cancri is expected to be one of the brightest sources of gravitational waves in our galaxy. Despite its known frequency, the radiation could not be detected so far. A novel technique can compensate for phase modulation and detect this GW in the records of superconducting gravimeters. This new observational window will allow a deeper understanding of the enigmatic stellar system.
[879] vixra:2201.0171 [pdf]
Nachweis Der Kontinuierlichen Gravitationswelle Von HM Cancri (Detection of the Continuous Gravitational Wave of HM Cancri)
HM Cancri is expected to be one of the brightest sources of gravitational waves in our galaxy. Despite its known frequency, the radiation could not be detected so far for lack of suitable sensors. A new technique can compensate for the phase modulation and detect this CGW in the records of superconducting gravimeters beyond doubt. This novel observational window will allow a deeper understanding of the enigmatic stellar system.
[880] vixra:2201.0170 [pdf]
The Extremal Nature of Membrane Newton-Cartan Formulations with Exotic Supergravity Theories
We construct a non-relativistic limit of eleven and ten-dimensional supergravity theories from the point of view of the fundamental symmetries, the higher-dimensional effective action, and the equations of motion. This fundamental limit can only be realized in a supersymmetric way provided we impose by hand a set of geometric constraints, invariant under all the symmetries of the non-relativistic theory, that define a so-called Dilatation-invariant Superstring Newton-Cartan geometry and Membrane Newton-Cartan expansion. In order to obtain a finite fundamental limit, the field strength of the eleven-dimensional four-form is required to obey a transverse self-duality constraint, ultimately due to the presence of the Chern-Simons term in eleven dimensions. The present research considers a non-relativistic fundamental limit of the bosonic sector of eleven-dimensional supergravity, leading to a theory based on a Covariant Membrane Newton-Cartan Supergeometry. We further show that the Membrane Newton-Cartan theory can be embedded in the U-duality symmetric formulation of exceptional field theory, demonstrating that it shares the same exceptional Lie algebraic symmetries as the relativistic supergravity, and providing an alternative derivation of the extra Poisson equation.
[881] vixra:2201.0169 [pdf]
The Penguin Dictionary of Economics and the Graphical Law
We study the Penguin Dictionary of Economics by Graham Bannock and R. E. Baxter, the eighth edition. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0$, in the absence of external magnetic field, H. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature and $k_{B}$ is the tiny Boltzmann constant. This is the case with the Oxford Dictionary of Economics by J. Black, N. Hashimzade and G. Myles, which we have studied before.
[882] vixra:2201.0167 [pdf]
Contradiction Tolerance of Kirchhoff’s Diffraction Theory
Complex numbers are basic to exact science. When a flaw exists in complex numbers, conceptual difficulties will arise for many subfields concerning wave mechanics. Kirchhoff’s scalar diffraction theory of optics is already considered inconsistent; nevertheless it is successful in experiment. In our study we add the complex number inconsistency to Kirchhoff diffraction and see what that does to the experimental value of the Kirchhoff diffraction theory. There are no a priori reasons to include or exclude the obtained inconsistent phase angle. Assuming that the inconsistent phase angle is excluded in nature, we were able to establish the theoretical possibility to observe a substantial diffraction despite a weak intensity point source and small wavelength.
[883] vixra:2201.0157 [pdf]
Consequences of Planck Constant for Relativity
It is known that the existence of Planck length and time contradicts the Lorentz-FitzGerald length contraction and time dilation of special relativity. After showing that the solution of this paradox leaves the spacetime transformations undetermined, it is shown that determining the transformations necessitates a new fundamental equation that governs the local amount of spacetime contraction/dilation.
[884] vixra:2201.0155 [pdf]
Image Deblurring: a Class of Matrices Approximating Toeplitz Matrices
We deal with the image deblurring problem. We assume that the blur mask has large dimensions. To restore the images, we propose a GNC-type technique, in which a convex approximation of the energy function is first minimized. The computational cost of the GNC algorithm depends strongly on the cost of this first minimization. So, we propose approximating the Toeplitz symmetric matrices in the blur operator by means of suitable matrices. Such matrices are chosen in a class of matrices which can be expressed as a direct sum of a circulant and a reverse circulant matrix.
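Background on why circulant blocks are attractive here (a standard fact, not the paper's specific direct-sum construction): a circulant matrix is diagonalised by the DFT, so its matrix-vector product is a circular convolution computable in O(n log n) via the FFT:

```python
import numpy as np

# Standard fact (not the paper's algorithm): for a circulant matrix
# C[i, j] = c[(i - j) % n], the product C @ x is the circular convolution
# of c and x, so it can be computed with FFTs instead of a dense matvec.
n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)
x = rng.standard_normal(n)

C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
direct = C @ x
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
print(np.allclose(direct, via_fft))  # True
```

This cheap diagonalisation is what makes circulant-type approximations of Toeplitz blur operators attractive inside an iterative minimization.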
[885] vixra:2201.0153 [pdf]
Simple Definitions of the Division by Zero and the Division by Zero Calculus: $[a^x/\log A]_{a=1}= X + 1/2$
In this note, we will state the definitions of the division by zero and the division by zero calculus for popular use, for the sake of their generality and their great applications to the mathematical sciences and the universe, containing our basic ideas. In particular, we consider the value of the function $f(x,a)/\log a$ at $a=1$.
[886] vixra:2201.0148 [pdf]
Gravity and Speed of Light
One of the postulates of the Special Theory of Relativity is that the speed of light is constant in all inertial reference frames regardless of the motion of the observer or source. This postulate also applies to the General Theory of Relativity. There are many experiments whose results are consistent with the assumption that gravity does not affect the speed of light. Notwithstanding all this, we may argue that it has not been formally proved. One way to prove its correctness is to replace it with the opposite axiom, the axiom of a variable speed of light. If there is at least one experiment whose result would possibly be in contradiction with this axiom, then it must be rejected and the axiom of a constant speed of light must be accepted.
[887] vixra:2201.0147 [pdf]
On Alzofon Experiments of Gravity Control
Physicist Frederic Alzofon provided the first effective theory of gravity that goes beyond a static model, like those of Newton and Einstein. The goal was to explain how it is possible to control gravity, as hinted by indirect evidence collected from external sources. The 1994 experiments confirmed this possibility. Recently, a theory of gravity based on the Standard Model was also provided by the author, in an independent line of research. It sets an explicit foundation for Alzofon's theory.
[888] vixra:2201.0146 [pdf]
Graviton Regarded as the Goldstone Boson of Symmetry Breaking $\mathrm{so}(4) / \mathrm{so}(3)$
This paper introduces the construction of spontaneous symmetry breaking and calculates the effective Lagrangian of the Goldstone boson for the symmetry breaking $\mathrm{SO}(4)/\mathrm{SO}(3)$. Based on this result, we regard the graviton as the Goldstone boson of the symmetry breaking $\mathrm{SO}(4)/\mathrm{SO}(3)$.
[889] vixra:2201.0143 [pdf]
Geometric Qubits: Leptons, Quarks and Gravitons
I present an axiomatically constructed model for an underlying description of particles and their interactions, in particular the fermions and gravitation. Using set axioms as a guide, qubits are the fundamental building blocks. It is proposed that the existence of fundamental laws of physics is precluded and only random events occur at the fundamental level. The uncertainty relations and the complex state vectors of quantum theory are a consequence of a Gibbs measure on random variances. This leads to a simple resolution of the measurement problem in quantum mechanics. Quarks and Leptons in 3 generations are Fock states of 4d spaces and their calculated electric charges agree with observations. In addition spin 2 massless gravitons are a 4d Fock state and is the maximum spin state for these 4d Fock states. All particles are geometric, and the dynamics of particles and Space-Time are governed by CAR algebra. One consequence of the model is the cosmological constant being a result of the modification of momentum in curved space.
[890] vixra:2201.0141 [pdf]
A Blind Source Separation Technique for Document Restoration Based on Edge Estimation
In this paper we study a Blind Source Separation (BSS) problem, and in particular we deal with document restoration. We consider the classical linear model. To this aim, we analyze the derivatives of the images instead of the intensity levels. Thus, we can establish a non-overlapping constraint on document sources. Moreover, we impose that the rows of the mixture matrices of the sources sum to 1, in order to keep the lightness of the estimated sources equal to that of the data. Here we give a technique which uses the symmetric factorization, whose goodness is tested by the experimental results.
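A toy sketch of the linear mixing model with the row-sum constraint (illustrative only; the paper's derivative-based estimation of the mixing matrix is not reproduced here):

```python
import numpy as np

# Toy sketch of the linear BSS model (not the paper's algorithm): two
# "document" sources mixed by a matrix whose rows sum to 1, which keeps
# the lightness of the mixtures equal to that of the sources.
rng = np.random.default_rng(1)
sources = rng.uniform(0.0, 1.0, size=(2, 16))  # two flattened source images
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])                     # rows sum to 1
mixtures = A @ sources

print(np.allclose(A.sum(axis=1), 1.0))         # True: row-sum constraint
recovered = np.linalg.inv(A) @ mixtures        # exact recovery when A is known
print(np.allclose(recovered, sources))         # True
```

The BSS problem proper is to estimate `A` from the mixtures alone; the row-sum constraint removes one degree of freedom per row in that estimation.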
[891] vixra:2201.0139 [pdf]
A New Look at Black Holes via Thermal Dimensions and the Complex Coordinates/Temperature Vectors Correspondence
It is shown how the crucial {\bf active diffs} symmetry of General Relativity allows one to shift the radial location $ r = 2 G M $ of the horizon associated with the Schwarzschild metric to the $ r = 0^+$ location of a $diffeomorphic$ metric. In doing so, one ends up with a spacetime void surrounding the singularity at $ r = 0$. In order to explore the ``interior" region of this void we introduce complex radial coordinates whose imaginary components have a direct link to the inverse Hawking temperature, and which furnish a path that provides access to the interior region. In addition, we show that the black hole entropy $ { A \over 4 } $ (in Planck units) is equal to the $area$ of a rectangular strip in the $complex$ radial-coordinate plane associated to this path. The gist of the physical interpretation behind this construction is that there is an emergence of thermal dimensions which unfolds as one plunges into the interior void region via the use of complex coordinates, and whose imaginary components capture the span of the thermal dimensions. The filling of the void leads to an $emergent$ internal/thermal dimension via the imaginary part $ \beta_r$ of the complex radial variable $ {\bf r } = r + i \beta_r$.
[892] vixra:2201.0137 [pdf]
Proposed Water Electrolysis Experiment May Refute Mass-Energy Equivalence of E=mc2
This paper is a continuation of a previous paper of the author which explains how a chemical analysis of the ratio by weight of O-16 and H-1 in plain water (the presence of different isotopes of O and H would not affect the experiment) could decide whether the hypothesis of mass-energy equivalence based on E=mc2 is verified or refuted; a refutation would mean a full revival of the classical law of conservation of mass without any need for mass-energy equivalence considerations. The proposed experiment is by electrolysis of water as an aqueous solution of potassium sulfate. Oxygen produced at the anode is trapped while the hydrogen produced at the cathode is allowed to escape freely. With three weighings on an analytical balance in vacuum, the ratio O/H could be determined with a high degree of accuracy. The mass-energy equivalence principle accepted in present day physics may be said to be the foundational assumption of present day physics. If it fails, then current high energy physics would collapse. This includes the Standard Model of particle physics widely promulgated by CERN and much of modern physics. The irony is that mass-energy equivalence and the equation E=mc2 have never been experimentally verified. This has been explained in detail in the author’s other paper.
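For reference, the classical stoichiometric O/H mass ratio the experiment would compare against can be computed from tabulated atomic masses (back-of-envelope arithmetic, not the paper's data or its claimed deviation):

```python
# Back-of-envelope arithmetic with tabulated atomic masses (not the
# paper's data): the classical O/H mass ratio in water made of O-16 and H-1.
m_O16 = 15.994915  # atomic mass of O-16, in u
m_H1 = 1.007825    # atomic mass of H-1, in u

ratio = m_O16 / (2 * m_H1)  # one O atom per two H atoms in H2O
print(f"O/H mass ratio = {ratio:.4f}")  # ~7.935
```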
[893] vixra:2201.0136 [pdf]
A Dictionary of Critical Theory by Ian Buchanan and the Graphical Law
We study A Dictionary of Critical Theory by Ian Buchanan from Oxford University Press. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0$, in the absence of external magnetic field, H. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature and $k_{B}$ is the tiny Boltzmann constant. This is the case with the two Dictionaries of Mathematics we have studied before. We surmise that the branch of Mathematics, plausibly, is dual to the Critical Theory.
[894] vixra:2201.0117 [pdf]
Proof of a Combinatorial Identity
In this paper we present an interesting identity involving combinatorial symbols and prove it as a theorem. The theorem was a discovery from the time when I was studying Calculus at USAC/CUNOC University in Quetzaltenango, Guatemala, around the year 2004.
[895] vixra:2201.0112 [pdf]
A Note on Mass and Gravity
The principle of equivalence implies that the inertial mass equals the gravitational mass. Gravity is understood in terms of the quark model, amended by Platonic symmetry. This allows us to comment on the origin of inertial mass and how it can be controlled when controlling gravity.
[896] vixra:2201.0106 [pdf]
Scale-Invariant Conformal Waves
Investigating conformal metrics on (pseudo-) Riemannian spaces, a ‘scale-invariant’ choice for the Lagrange density leads to homogeneous d’Alembert equations which allow for source-free wave phenomena in any number of dimensions. This suggests applying a scale-invariant action principle rather than the Hilbert-Einstein action to general relativity to also find general, non-conformal solutions.
[897] vixra:2201.0104 [pdf]
Resolving the Singularity by Looking at the Dot and Demonstrating the Undecidability of the Continuum Hypothesis
Einstein's theory of general relativity, which Newton's theory of gravity is a part of, is fraught with the problem of singularity that has been established as a theorem by Hawking and Penrose, the latter being awarded the Nobel Prize in recent years. The crucial {\it hypothesis} that forms the basis of both Einstein's and Newton's theories of gravity is that bodies with unequal magnitudes of masses fall with the same acceleration under the gravity of a source object. Since the validity of the Einstein's equations is one of the assumptions based on which Hawking and Penrose have proved the theorem, the above hypothesis is implicitly one of the founding pillars of the singularity theorem. In this work, I demonstrate how one can possibly write a non-singular theory of gravity which manifests that the above mentioned hypothesis is only valid in an approximate sense in the ``large distance'' scenario. To mention a specific instance, under the gravity of the earth, a $5$ kg and a $500$ kg object fall with accelerations which differ by approximately $113.148\times 10^{-32}$ meter/sec$^2$, and the more massive object falls with less acceleration. Further, I demonstrate why the concept of gravitational field is not definable in the ``small distance'' regime, which automatically justifies why the Einstein's and Newton's theories fail to provide any ``small distance'' analysis. In course of writing down this theory, I demonstrate why the continuum hypothesis, as spelled out by Goedel, is undecidable.
The theory has several aspects which provide the following realizations: (i) Descartes' self-skepticism concerning exact representation of numbers by drawing lines; (ii) Born's wish of taking into account ``natural uncertainty in all observations'' while describing ``a physical situation'' by means of ``real numbers''; (iii) Klein's vision of having ``a fusion of arithmetic and geometry'' where ``a point is replaced by a small spot''; (iv) Goedel's assertion about ``non-standard analysis, in some version'' being ``the analysis of the future''. A major drawback of this work is that it can easily appear to the authorities of modern science as too simple to believe in. This is, firstly, due to the origin of the motivations being rooted in the truthfulness of the language in which physics is written and, secondly, due to the lucidity of the calculations involved. However, at the same time, this work can also appear as a fresh and non-standard approach to doing physics from its roots, where the problem of singularity is not even there to begin with. The credibility of this work depends largely on whether the reader is willing to adopt the second mindset.
[898] vixra:2201.0101 [pdf]
On the Simple Identity (1/(x-1)) + (1/(x-2)) = (2x-3)/((x-1)(x-2)) and the Expression that G(z,a) + Log|z-a| is Harmonic Around z=a from the Viewpoint of the Division by Zero Calculus
In this note, we will refer to the simple identity $(1/(x-1)) + (1/(x-2)) = (2x-3)/((x-1)(x-2))$ and the expression that $g(z,a) + \log |z - a|$ is harmonic around $z=a$ from the viewpoint of the division by zero calculus; these are very popular expressions in elementary mathematics. With these simple and very popular expressions, we would like to show clearly the importance of the division by zero calculus for a general audience in a self-contained way.
[899] vixra:2201.0099 [pdf]
The Area Induced by Circles of Partition and Applications
In this paper we continue with the development of the circles of partitions by introducing the notion of the area induced by circles of partitions and explore some applications.
[900] vixra:2201.0094 [pdf]
Cardiovascular Disease Diagnosis using Deep Neural Networks
Cardiovascular disease causes 25% of deaths in America (Heart Disease Facts). Specifically, misdiagnosis of cardiovascular disease results in 11,000 American deaths annually, emphasizing the increasing need for Artificial Intelligence to improve diagnosis. The goal of our research was to determine the probability that a given patient has Cardiovascular Disease using 11 easily accessible objective, examination, and subjective features from a data set of 70,000 people. To do this, we compared various Machine Learning and Deep Learning models. Exploratory Data Analysis (EDA) identified that blood pressure, cholesterol, and age were most correlated with an elevated risk of contracting heart disease. Principal Component Analysis (PCA) was employed to visualize the 11-D data onto a 2-D plane, and distinct aggregations in the data motivated the inference of specific cardiovascular conditions beyond the binary labels in the data set. To diagnose patients, several Machine Learning and Deep Learning models were trained using the data and compared using the metrics Binary Accuracy and F1 Score. The initial Deep Learning model was a Shallow Neural Network with 1 hidden layer consisting of 8 hidden units. Further improvements, such as adding 5 hidden layers with 8 hidden units each and employing Mini-Batch Gradient Descent, Adam Optimization, and He initialization, were successful in decreasing train times. These models were coded without the utilization of Deep Learning Frameworks such as TensorFlow. The final model, which achieved a Binary Accuracy of 74.2% and an F1 Score of 0.73, consisted of 6 hidden layers, each with 128 hidden units, and was built using the highly optimized Keras library. While current industrial models require hundreds of comprehensive features, this final model requires only basic inputs, allowing versatile applications in rural locations and third-world countries.
Furthermore, the model can forecast demand for medical equipment, improve diagnosis procedures, and provide detailed personalized health statistics.
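The abstract mentions models coded without deep-learning frameworks, using He initialization and mini-batch gradient descent. A minimal NumPy sketch of those ingredients on toy data follows; the layer size, learning rate, and synthetic data are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    # He initialization: weights drawn with variance 2/fan_in, suited to ReLU
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def forward(X, W1, b1, W2, b2):
    H = np.maximum(0.0, X @ W1 + b1)           # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))   # sigmoid output probability
    return H, p

def train(X, y, hidden=8, lr=0.1, epochs=200, batch=32):
    n, d = X.shape
    W1, b1 = he_init(d, hidden), np.zeros(hidden)
    W2, b2 = he_init(hidden, 1), np.zeros(1)
    for _ in range(epochs):
        idx = rng.permutation(n)               # shuffle for mini-batch GD
        for start in range(0, n, batch):
            sl = idx[start:start + batch]
            Xb, yb = X[sl], y[sl].reshape(-1, 1)
            H, p = forward(Xb, W1, b1, W2, b2)
            dz2 = (p - yb) / len(sl)           # grad of BCE wrt output logits
            dW2, db2 = H.T @ dz2, dz2.sum(0)
            dz1 = (dz2 @ W2.T) * (H > 0)       # backprop through ReLU
            dW1, db1 = Xb.T @ dz1, dz1.sum(0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# toy stand-in for the 11-feature data: label 1 when the feature sum is positive
X = rng.normal(size=(400, 11))
y = (X.sum(axis=1) > 0).astype(float)
params = train(X, y)
_, p = forward(X, *params)
acc = ((p.ravel() > 0.5) == y).mean()
```

This is only a sketch of the training loop's shape; the paper's final model uses Keras with 6 hidden layers of 128 units.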
[901] vixra:2201.0092 [pdf]
Gravitation as a Secondary Effect of Electromagnetic Interaction
So much vain effort has been invested in the unification of gravity and quantum physics that it no longer seems fallacious to regard the way pursued so far as a dead end. Therefore, I resume an old approach and start from the precondition that gravitation can be understood as a secondary effect of electromagnetic interaction. The unification of the forces, thus, is a prerequisite. Based on that, the four classical tests of General Relativity Theory, including the shift of Mercury's perihelion, can be reproduced. Mach's principle harmonically fits into the presented model. The covariance principle is renounced.
[902] vixra:2201.0083 [pdf]
A Note on Constructions of Quantum-Field Operators
We show some fine points in constructions of field operators in quantum field theory (QFT). They are related to the old discussions on interpretations of the negative-energy solutions of relativistic equations. It is easy to check that both algebraic equations $\det(\hat{p} - m) = 0$ and $\det(\hat{p} + m) = 0$ for $u$- and $v$- 4-spinors have solutions with $p_0 = \pm E_p = \pm\sqrt{\mathbf{p}^2 + m^2}$. The same is true for higher-spin equations. Meanwhile, every book considers the equality $p_0 = E_p$ for both $u$- and $v$- spinors of the $(1/2, 0) \oplus (0, 1/2)$ representation only, thus applying the Dirac-Feynman-Stueckelberg procedure for elimination of the negative-energy solutions. The recent Ziino works (and, independently, the articles of several others) show that the Fock space can be doubled. We re-consider this possibility on the quantum field level for both $s = 1/2$ and higher spin particles. Keywords: QFT, Dark Matter, Dirac Equation.
[903] vixra:2201.0082 [pdf]
Prolegomena to Any Kinematics of Quantum Gravity
It is known that the existence of the Planck length and time is in contradiction with fundamental results of special relativity, i.e. length contraction and time dilation. In a previous attempt, I approached the problem from a blindly formal perspective, but Nature can be more subtle than what formal reasoning can achieve. Although that attempt always had little value for myself, neither I nor anyone else has yet come up with a completely satisfactory solution, leaving my previous attempt the only work that at least obtains some transformations. In this note I sketch the outlines of another approach that is completely satisfactory, but much more difficult to work out. Although I am not able to get any final result, I write this work merely to share the raw idea; maybe someone can build on it.
[904] vixra:2201.0080 [pdf]
Discussion of Cosmological Acceleration and Dark Energy
The title of this workshop is: "What comes beyond standard models?". Standard models are based on Poincare invariant quantum theory. However, as shown in Dyson's famous paper "Missed Opportunities" and in my publications, such a theory is a special degenerate case of de Sitter invariant quantum theory. I argue that the phenomenon of cosmological acceleration has a natural explanation as a consequence of quantum de Sitter symmetry in the semiclassical approximation. The explanation is based only on universally recognized results of physics and does not involve models and/or assumptions whose validity has not been unambiguously proved yet (e.g., dark energy and quintessence). I also explain that the cosmological constant problem, and the problem of why the cosmological constant has the value it does, do not arise.
[905] vixra:2201.0076 [pdf]
Effect Structure and Thermodynamics Formulation of Demand-side Economics
We propose the concept of equation of state (EoS) effect structure in the form of diagrams and rules. This concept helps justify the EoS status of an empirical relation. We apply the concept to a closed system of consumers and are able to formulate its EoS. According to the new concept, EoS are classified into three classes. The manifold space of the thermodynamics formulation of demand-side economics is identified. Formal analogies between thermodynamics and the economics of a consumers' system are made. New quantities such as total wealth, generalized utility and generalized consumer surplus are defined. The microeconomic concept of consumer surplus is criticized and replaced with generalized consumer surplus. Smith's law of demand is included in our new paradigm as a specific case resembling an isothermal process. The absolute zero temperature state resembles the nirvana state in Buddhist philosophy. Econometric modelling of the consumers' EoS is proposed at last.
[906] vixra:2201.0073 [pdf]
The Subtle Curse of Creative-Social Creatures, the Truth Behind an Inevitable Mankind Invention: Christianity
Do Christians understand Christianity, or do they have faith? Can you destroy a religion by replacing understanding with faith? Using overwhelming objective arguments, we claim to have decoded and "unearthed" Christianity (and probably, if they exist, Christian-like religions too), introducing the authentic version of Christianity and explaining many of its fundamental aspects. We even proved the compatibility between Christianity (creationist) and Darwinism, and proposed a shockingly eerie hypothesis for the question: "Why would God allow undeserved suffering?". The philosophy of life, with its objective arguments, was hiding under our noses, disguised as something else. Like gravity is only an illusion (according to Einstein's General Theory of Relativity), and like borders are social constructs, and like fiat money (which has no intrinsic value) is a social construct, is corruption also only an illusion, and a social construct? We suggest that the answer is yes: corruption is (sometimes, partly) a (curable) social construct that stems from misunderstandings, psychological defects (created by evolution), incentives, conflicts of interest, and lack of trust, a social construct supported by inheritable things (sins) such as war and antisocial systems. Occasionally we propose ideas to help combat and prevent both corruption and the inheritance of sin. For many years, I thought that I was an agnostic atheist, but now I know that the reason why I was agnostic is because my God was literally the truth. If you exist, then that means other people like you also exist. Remember: freedom of expression and the truth are the most valuable things; however, the authorities might disagree with our version of freedom of expression.
This article has missing information, because "art is never finished, only abandoned" (Leonardo da Vinci), experts make mistakes, and the author is brainwashed, so please do not believe everything written in this article just because you like or believe some or most things you found here (see cognitive biases: the halo effect, confirmation bias, frequency bias, and potentially others)! However, if you are not hated or quizzaciously ridiculed for the things you say, then you are not a good philosopher.
[907] vixra:2201.0069 [pdf]
The Concise Oxford Dictionary of Politics and the Graphical Law
We study the Concise Oxford Dictionary of Politics, third edition, edited by Iain Mclean and Alistair Mcmillan. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0$, in the absence of external magnetic field, H. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature and $k_{B}$ is the tiny Boltzmann constant.
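The plotting procedure these "graphical law" abstracts describe (normalised log of entry counts against normalised log of letter rank) can be sketched as follows. The exact normalisation is not spelled out in the abstract, so dividing each logarithm by its maximum value is an assumption of this sketch.

```python
import numpy as np

def graphical_law_points(entry_counts):
    """Return (ln(rank), ln(count)) pairs, each normalised by its maximum,
    for letters ranked by the number of dictionary entries they start.
    Assumes more than one letter and a maximum count above 1."""
    counts = sorted(entry_counts.values(), reverse=True)
    kmax, cmax = len(counts), counts[0]
    return [(np.log(rank) / np.log(kmax), np.log(c) / np.log(cmax))
            for rank, c in enumerate(counts, start=1)]
```

The resulting points would then be compared against a Bethe-Peierls magnetisation curve, which is outside the scope of this sketch.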
[908] vixra:2201.0050 [pdf]
Blind Source Separation in Document Restoration: an Interference Level Estimation
We deal with the problem of blind separation of the components, in particular for documents corrupted by bleed-through and show-through. So, we analyze a regularization technique, which estimates the original sources, the interference levels and the blur operators. We treat the estimate of the interference levels, given the original sources and the blur operators. In particular, we investigate several GNC-type algorithms for minimizing the energy function. In the experimental results, we find which algorithm gives more precise estimates of the interference levels.
[909] vixra:2201.0048 [pdf]
Generalized Cannonball Problem
The cannonball problem asks which numbers are both square and square pyramidal. In this paper I consider the cannonball problem for other $r$-regular polygons. I carried out a computer search and found a total of $858$ solutions for polygons $3\le r\le10^5$. By using elliptic curves I also found that there are no solutions for $r=5$ (pentagon), $r=7$ (heptagon), and $r=9$ (enneagon).
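For concreteness, the search described can be sketched in a few lines, assuming the standard closed forms for $r$-gonal numbers, $P_r(n) = ((r-2)n^2 - (r-4)n)/2$, and $r$-gonal pyramidal numbers, $n(n+1)((r-2)n - (r-5))/6$. The $r=4$ case recovers the classical cannonball solution 4900.

```python
def polygonal(r, n):
    # n-th r-gonal number
    return ((r - 2) * n * n - (r - 4) * n) // 2

def pyramidal(r, n):
    # n-th r-gonal pyramidal number (sum of the first n r-gonal numbers)
    return n * (n + 1) * ((r - 2) * n - (r - 5)) // 6

def cannonball_solutions(r, limit):
    """Numbers up to `limit` that are both r-gonal and r-gonal pyramidal."""
    polys, n = set(), 1
    while polygonal(r, n) <= limit:
        polys.add(polygonal(r, n)); n += 1
    sols, n = [], 1
    while pyramidal(r, n) <= limit:
        if pyramidal(r, n) in polys:
            sols.append(pyramidal(r, n))
        n += 1
    return sols
```

A full search up to $r = 10^5$, as in the paper, would use the same membership test with a larger bound.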
[910] vixra:2201.0046 [pdf]
The Penguin Dictionary of Sociology and the Graphical Law
We study the Penguin Dictionary of Sociology by Nicholas Abercrombie, Stephen Hill and Bryan S. Turner. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0$, in the absence of external magnetic field, H. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature and $k_{B}$ is the tiny Boltzmann constant. This is also the case with the Oxford Dictionary of Sociology by J. Scott and G. Marshall, which we have studied before.
[911] vixra:2201.0040 [pdf]
Nuclei Are Energy Stores of Universe
The first geometric deformation (Universal) of the isotropic space has as a consequence the second one (local) as space holes (bubbles of empty space), the primary neutron close to the Universe center. When the neutron is found in an environment of stronger cohesive pressure, it becomes unstable and is cleaved (beta decay), producing a proton by the detachment of negative electrical units. These negative units form an electron, while on the remaining proton cortex the positive units outmatch. Moreover, the nuclei have been structured through the inverse electric field of the proton and the electric entity of the macroscopically neutral neutron. It will be proved that the nuclei contain energy, which is equivalent to the structural energy of their nucleons, their kinetic energy and the dynamic energy of their fields. Consequently, energy is concentrated in the nucleons, which are distributed in the nucleus over huge distances, likewise in the case of atoms and planetary systems. So, matter is thin in all its scales.
[912] vixra:2201.0036 [pdf]
Gravity Control and Cold Fusion
In this short note, Gravity Control is related to Cold Fusion. Recent articles explained the quantum origin of gravity, derived from finite (Platonic) gauge groups. As a byproduct, the gravitational potential can be controlled in a way similar to temperature, via dynamic nuclear orientation of spins. It is surprising that another consequence is the possibility of reorienting the spins to allow for weaker electrostatic repulsion in nuclei, with obvious applications to cold fusion.
[913] vixra:2201.0028 [pdf]
The Quantum Motion of the Nerve with an Interstitial Defect
We consider the nerve as an elastic string, the left end of which is fixed at the origin of the coordinate system, the right end is fixed at point l, and a mass m is fixed between the ends of the string. We determine the classical and the quantum vibration of such a system. The quantum motion is obtained by the author's so-called non-conventional oscillator quantization method. The proposed model can also be related, in modified form, to the problem of the Moessbauer effect, the recoilless nuclear resonance fluorescence, which is the resonant and recoil-free emission and absorption of gamma radiation by atomic nuclei bound in a solid (Moessbauer, 1958). It is not excluded that our oscillator quantization of the string can be extended to generate a new approach to the string theory of matter and the physiology of nerves.
[914] vixra:2201.0027 [pdf]
Analysing the Time Period of Vela Pulsar
In this project, we have implemented our basic understanding of Pulsar Astronomy to calculate the time period of the Vela pulsar. Our choice of pulsar rests on the fact that it is the brightest object in the high-energy gamma-ray sky. The simple data set, consisting only of voltage signals, makes our preliminary attempt as accurate as possible. The observations were made at 326.5 MHz through a cylindrically paraboloid telescope at Ooty. A higher frequency creates a much lower delay in the arrival time of pulses and makes our calculations even more accurate. Being an already widely studied celestial body, it gives us the opportunity to compare our findings and make necessary modifications.
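As a toy illustration of extracting a pulse period from a voltage time series (not the authors' actual pipeline), one can locate the autocorrelation peak of the trace. The sampling rate and synthetic pulse train below are invented for the demonstration; the period is chosen near Vela's well-known ~89 ms.

```python
import numpy as np

def estimate_period(signal, dt, min_p, max_p):
    """Estimate the pulse period as the lag of the autocorrelation peak
    restricted to the window [min_p, max_p] (in seconds)."""
    x = signal - signal.mean()                       # remove the DC level
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags 0, 1, 2, ...
    lo, hi = int(min_p / dt), int(max_p / dt)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag * dt

# synthetic voltage trace: a spike every 89 samples at 1 ms sampling (~89 ms)
dt = 0.001
sig = np.zeros(2000)
sig[::89] = 1.0
period = estimate_period(sig, dt, 0.05, 0.15)
```

Real pulsar data would additionally need dedispersion and folding, which this sketch omits.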
[915] vixra:2201.0024 [pdf]
Analyse der Messungen von D. C. Miller in Cleveland 1927–1929 <br> Analysis of D. C. Miller's Measurements in Cleveland 1927–1929
Dayton C. Miller machte nach den Experimenten auf dem Mount Wilson weitere Experimente mit dem selben Interferometer. Diese Daten werden analysiert, um die Ergebnisse einer Analyse der Daten vom Mount Wilson zu bestätigen. Eine Bestätigung wird nicht gefunden, es gibt aber Hinweise auf das erwartete theoretische Signal. <p> After the experiments on Mount Wilson, Dayton C. Miller carried out further experiments with the same interferometer. These data are analysed to confirm the results of an analysis of the Mount Wilson data. Confirmation is not found, but there is evidence of the expected theoretical signal.
[916] vixra:2201.0023 [pdf]
Conservation of Energy and Particle Moving Towards a Mass
We consider a zero rest-mass classical particle moving from infinity towards a point mass along a fixed line containing the mass. We show that a theory of gravitation whose only dimensional constants are $c$ and $G$ does not satisfy conservation of energy.
[917] vixra:2201.0022 [pdf]
Analysis of D. C. Miller's Measurements in Cleveland 1927–1929 (English Version)
After the experiments on Mount Wilson, Dayton C. Miller carried out further experiments with the same interferometer. These data are analysed to confirm the results of an analysis of the Mount Wilson data. Confirmation is not found, but there is evidence of the expected theoretical signal.
[918] vixra:2201.0013 [pdf]
The Dynamics of D-branes with Dirac-Born-Infeld and Chern-Simons/Wess-Zumino Actions
We have explained, and shown by featured stringy examples, why a D-brane in superstring theory, when treated as a fundamental dynamical object, can be described by a map from an Azumaya/matrix manifold with a fundamental module with a connection to the target spacetime. In this sequel, we construct a non-Abelian Dirac-Born-Infeld action functional for such pairs when the target spacetime is equipped with a background (dilaton, metric, B)-field from closed strings. We next develop a technical tool needed to study variations of this action and apply it to derive its first variation with respect to the map. The equations of motion that govern the dynamics of D-branes then follow. We introduce a new standard action for D-branes that is to D-branes as the Polyakov action is to fundamental strings. This ‘standard action’ is abstractly a non-Abelian gauged sigma model, based on maps from an Azumaya/matrix manifold with a fundamental module with a connection, enhanced by the dilaton term, the gauge-theory term, and the Chern-Simons/Wess-Zumino term that couples to the Ramond-Ramond field. In a special situation, this new theory merges the theory of harmonic maps and a gauge theory, with a nilpotent-type fuzzy extension. A complete action for a D-brane world-volume must also include the Chern-Simons/Wess-Zumino term that governs how the D-brane world-volume couples with the Ramond-Ramond fields. The current notes lay down a foundation toward the dynamics of D-branes along the line of this research project.
[919] vixra:2112.0161 [pdf]
The Diagonalization Method and Brocard's Problem
In this paper we introduce and develop the method of diagonalization of functions $f:\mathbb{N}\longrightarrow \mathbb{R}$. We apply this method to a class of problems requiring one to determine whether an equation of the form $f(n)+k=m^2$ has a finite number of solutions $n\in \mathbb{N}$ for a fixed $k\in \mathbb{N}$.
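The classical instance $f(n) = n!$ with $k = 1$ is Brocard's problem, mentioned in the title. A direct finite search for small solutions (a sketch unrelated to the diagonalization machinery itself) looks like this:

```python
import math

def brocard_solutions(k, n_max):
    """Return (n, m) pairs with n! + k = m^2 for 1 <= n <= n_max,
    i.e. the f(n) = n! case of the equation f(n) + k = m^2."""
    sols, fact = [], 1
    for n in range(1, n_max + 1):
        fact *= n                      # running factorial n!
        m = math.isqrt(fact + k)       # integer square root
        if m * m == fact + k:
            sols.append((n, m))
    return sols
```

For k = 1 this recovers the three known Brown-number solutions n = 4, 5, 7; whether any others exist is the open problem.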
[920] vixra:2112.0157 [pdf]
Graph Thinness, a Lower Bound and Complexity
The thinness of a simple graph G = (V,E) is the smallest integer k for which there exist a total order (V, <) and a partition of V into k classes (V_1,...,V_k) such that, for all u, v, w in V with u<v<w, if u and v belong to the same class and {u,w} is in E, then {v,w} is in E. We prove that (1) there are $n$-vertex graphs of thinness $n-o(n)$, which answers a question of Bonomo-Braberman, Gonzalez, Oliveira, Sampaio, and Szwarcfiter, and (2) the computation of thinness is NP-hard, which solves a long-standing open problem posed by Mannino and Oriolo.
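The defining condition of thinness can be checked mechanically for a candidate order and partition. A brute-force witness checker (a sketch of the definition, not the paper's algorithm):

```python
def is_thinness_witness(order, classes, edges):
    """Check: for u < v < w in the given order, if u and v share a class
    and {u, w} is an edge, then {v, w} must also be an edge."""
    cls = {v: k for k, part in enumerate(classes) for v in part}
    E = {frozenset(e) for e in edges}
    n = len(order)
    for i in range(n):
        for j in range(i + 1, n):
            u, v = order[i], order[j]
            if cls[u] != cls[v]:
                continue
            for k in range(j + 1, n):
                w = order[k]
                if frozenset((u, w)) in E and frozenset((v, w)) not in E:
                    return False
    return True
```

For example, a path admits a single-class witness (thinness 1), while the 4-cycle fails with one class but succeeds with two, matching its thinness of 2.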
[921] vixra:2112.0156 [pdf]
Condensate of Spacetime Quanta
Assuming quanta of spacetime to be spin-2 particles, after developing a statistical theory that has no reference to Boltzmann constant, it is argued that below a certain critical pressure of vacuum, quanta of spacetime form a condensate. The possibility of explanation of Sonoluminescence as a quantum-gravitational effect is also envisaged.
[922] vixra:2112.0155 [pdf]
Comparison of Various Models for Stock Prediction
Due to the high volatility during the COVID-19 pandemic, interest in stock investment has intensified. It is also said that attention is shifting back from the cryptocurrency market to the domestic stock market. In this situation, we looked at which model could more accurately predict the closing price.
[923] vixra:2112.0153 [pdf]
Singly and Doubly Even Multiples of 6 and Statistical Biases in the Distribution of Primes
Computer experiments show that singly even multiples of 6 surrounded by prime pairs exhibit a larger ratio of nonsquarefree to squarefree multiples than generic singly even multiples of 6, a bias of ca 10.6% measured against the expected value. The same bias occurs for isolated primes next to singly even multiples of 6; here the deviation from the expected value is ca 3.3% of this value. The expected value of the ratio of singly even to doubly even nonsquarefree multiples of 6 also differs from values found experimentally for prime pairs centered on such multiples or isolated primes next to them. For pairs, this ratio exceeds its unbiased value by ca 6.2%, for isolated primes by ca 2.0%. The values cited are for the first 10^10 primes, the largest range we investigated. This paper broadens our recent study of a newly found bias in the distribution of primes by examining singly and doubly even multiples of 6. In particular, it shows that for primes centered on or next to singly even multiples of 6, the statistical biases in question are more pronounced than in the general case studied by us before.
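The counts underlying such experiments can be reproduced at small scale. A sketch using simple trial division (far below the 10^10-prime range of the paper):

```python
def is_prime(n):
    # trial division, adequate only for small demonstrations
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def pair_center_counts(limit):
    """Count singly even multiples of 6 (6k with k odd, i.e. 6, 18, 30, ...)
    below `limit` that sit between a prime pair, split by squarefree-ness."""
    sf = nsf = 0
    m = 6
    while m < limit:
        if is_prime(m - 1) and is_prime(m + 1):
            if is_squarefree(m):
                sf += 1
            else:
                nsf += 1
        m += 12          # next singly even multiple of 6
    return sf, nsf
```

The paper's bias statistics compare such ratios against the expected densities over much longer ranges.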
[924] vixra:2112.0151 [pdf]
Discrete Markov Random Field Relaxation
This paper gives a technique to approximate (relax) a discrete Markov Random Field (MRF) using convex programming. This approximated MRF can be used to approximate NP problems. This also proves that NP is not equal to P, because the MRF convex program and the approximate MRF convex program are not the same after the removal of some product terms.
[925] vixra:2112.0147 [pdf]
Superluminal Motion and Causality from a Laboratory Perspective
There are two different approaches to superluminal communication around a closed loop, with one leg of the loop purportedly leading into the past. One scheme employs direct signals between a receiver in motion relative to a transmitter; this is called Method I in this paper. In the other, designated Method II, moving observers "hand off" information between momentarily adjacent observers in relative motion passing each other. It is shown that the correct application of superluminal physics in the former method clearly precludes causality violation, but the situation is more subtle in the latter approach. An analysis of what would be observed in a physics laboratory, compared to what is inferred from a Minkowski diagram, attests that causality violation does not occur in either method. Thus causality is not violated by superluminal communication.
[926] vixra:2112.0145 [pdf]
Riemann’s Last Theorem
The central idea of this article is to introduce and prove a special form of the zeta function as a proof of Riemann’s last theorem. The newly proposed zeta function contains two sub-functions, namely f1(b,s) and f2(b,s). The unique property of zeta(s)=f1(b,s)-f2(b,s) is that, as b tends toward infinity, the equality zeta(s)=zeta(1-s) is transformed into an exponential expression for the zeros of the zeta function. At the limiting point, we simply deduce that the exponential equality is satisfied if and only if real(s)=1/2. Consequently, we conclude that the zeta function cannot be zero unless real(s)=1/2, hence proving Riemann’s last theorem.
[927] vixra:2112.0143 [pdf]
Meaning of the Speed of Light in the FLRW Universe
The ΛCDM model is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of most cosmological observations. This model is based on the assumption of the cosmological principle, which states that the universe looks the same from all positions in space at a particular time and that all directions in space at any point are equivalent. One can define the surface of simultaneity of the local Lorentz frame (LLF) with a global proper time in the Friedmann-Lemaître-Robertson-Walker (FLRW) universe. The Lorentz invariance is locally exact along the worldline and well defined on the hypersurface at a given time $t_k$. However, it is meaningless to argue the validity of the local Lorentz invariance along the geodesic in the manifold. We show that the speed of light on each hypersurface (LLF) is constant, but its value should be a function of the scale factor (i.e., cosmological redshift) to define the null interval consistently. This implies a speed of light varying as a function of cosmic time. Also, the entropy of the Universe should be conserved to preserve the homogeneity and isotropy of the Universe. This adiabatic expansion condition induces the cosmic evolution of other physical constants, including the Planck constant. We conclude that the conventional assumption of a constant speed of light in the FLRW universe should be abandoned to obtain a consistent and accurate interpretation of cosmological measurements.
[928] vixra:2112.0135 [pdf]
Directed Dependency Graph Obtained from a Correlation Matrix by the Highest Successive Conditionings Method
In this paper we propose a directed dependency graph obtained from a correlation matrix. This graph includes probabilistic causal sub-models for each node, modeled by conditioning percentages. The directed dependency graph is obtained using the highest successive conditionings method, with a conditioning percentage value to be exceeded.
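Without the details of the highest successive conditionings method, a thresholded construction can still illustrate the general idea of turning a correlation matrix into a directed graph. The parent-selection heuristic below (the node with the larger total absolute correlation points to the other) is purely an assumption of this sketch, not the paper's rule.

```python
import numpy as np

def dependency_edges(corr, names, threshold=0.6):
    """Directed edges (parent, child, |corr|) for every pair whose absolute
    correlation meets the threshold; the 'parent' is chosen heuristically
    as the variable with the larger total absolute correlation."""
    corr = np.asarray(corr, dtype=float)
    strength = np.abs(corr).sum(axis=1)
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                p, c = (i, j) if strength[i] >= strength[j] else (j, i)
                edges.append((names[p], names[c], float(abs(corr[i, j]))))
    return edges
```

In the paper, the edge directions and percentages come instead from successive conditionings on the correlation structure.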
[929] vixra:2112.0130 [pdf]
The SP Challenge: that the SP System is More Promising as a Foundation for the Development of Human-Level Broad ai Than Any Alternative
The "SP Challenge" is the deliberately provocative theme of this paper: that the "SP System" (SPS), meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model", is more promising as a foundation for the development of human-level broad AI, aka 'artificial general intelligence' (AGI), than any alternative. In that connection, the main strengths of the SPS are: 1) The adoption of a top-down, breadth-first research strategy with wide scope; 2) Recognition of the importance of information compression (IC) in human learning, perception, and cognition -- and, correspondingly, a central role for IC in the SPS; 3) The working hypothesis that all kinds of IC may be understood in terms of the matching and unification of patterns (ICMUP); 4) A resolution of the apparent paradox that IC may achieve decompression as well as compression; 5) The powerful concept of SP-multiple-alignment, a generalisation of six other variants of ICMUP; 6) The clear potential of the SPS to solve 19 problems in AI research; 7) Strengths and potential of the SPS in modelling several aspects of intelligence, including several kinds of probabilistic reasoning, versatility in the representation and processing of AI-related knowledge, and the seamless integration of diverse aspects of intelligence, and diverse kinds of knowledge, in any combination; 8) Several other potential benefits and applications of the SPS; 9) In "SP-Neural", abstract concepts in the SPS may be mapped into putative structures expressed in terms of neurons and their interconnections and intercommunications; 10) The concept of ICMUP provides an entirely novel perspective on the foundations of mathematics; 11) How to make generalisations from data, including the correction of over- and under-generalisations, and how to reduce or eliminate errors in data. There is discussion of how the SPS compares with some other potential candidates for the SP Challenge, and there is an outline of possible future directions for the research.
[930] vixra:2112.0126 [pdf]
Pcarst: a Method of Weakening Conflict Evidence Based on Principal Component Analysis and Relatively Similar Transformation
How to deal with conflict is a significant issue in Dempster-Shafer evidence theory (DST). Under the Dempster combination rule, conflicts produce counter-intuitive phenomena. Therefore, many effective conflict handling methods have been presented. This paper proposes a new framework for reducing conflict based on principal component analysis and relatively similar transformation (PCARST), which can better reduce the impact of conflicting evidence on the results and yields more reasonable results than existing methods. The main characteristic features of the basic probability assignments (BPAs) are maintained while the conflicting evidence is treated as a noise signal to be weakened. A numerical example is used to illustrate the effectiveness of the proposed method. Results show that a higher belief degree of the correct proposition is obtained compared with previous methods.
[931] vixra:2112.0124 [pdf]
On the Calculation of the Ripple Voltage in Half-Wave Rectifier Circuits
In this article we propose a computational algorithm written in Matlab, as well as a mathematical formula, to calculate the magnitude of the ripple voltage encountered in half-wave rectifying circuits. After rectification of a symmetric AC harmonic voltage signal, this ripple voltage remains, frequently measurable, superimposed on the resulting DC voltage signal. Both the algorithm and the derived formula enable us to calculate the magnitude of the ripple voltage to a precision better than 1 part in $10^{6}$, i.e. 1 ppm. We conclude this article by comparing the accuracy of the proposed algorithm with simulated findings. The technique discussed here for calculating the magnitude of the remaining ripple voltage can easily be generalized and extended to the calculation of the magnitude of the ripple voltage encountered in full-wave rectifying circuits.
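A simple time-stepping sketch (in Python rather than the authors' Matlab, and far below their 1 ppm accuracy) shows how the ripple magnitude arises from the charge/discharge cycle: an ideal diode charges the capacitor whenever the rectified input exceeds it, and otherwise the capacitor discharges through the load resistor.

```python
import math

def half_wave_ripple(Vp, f, R, C, periods=3, steps_per_period=100_000):
    """Peak-to-trough ripple of an ideal half-wave rectifier with RC load,
    measured over the final simulated period (after settling)."""
    T = 1.0 / f
    dt = T / steps_per_period
    tau = R * C
    v_cap, v_min, v_max, t = 0.0, Vp, 0.0, 0.0
    for _ in range(periods * steps_per_period):
        t += dt
        v_in = max(0.0, Vp * math.sin(2 * math.pi * f * t))
        if v_in >= v_cap:
            v_cap = v_in                    # diode conducts: cap tracks input
        else:
            v_cap *= math.exp(-dt / tau)    # diode blocks: RC discharge
        if t > (periods - 1) * T:           # measure ripple on the last period
            v_min = min(v_min, v_cap)
            v_max = max(v_max, v_cap)
    return v_max - v_min

# 10 V peak at 50 Hz with R = 1 kOhm, C = 100 uF; the crude textbook
# estimate Vp/(fRC) gives 2 V, and the simulation lands somewhat below it
r = half_wave_ripple(10.0, 50.0, 1000.0, 100e-6)
```

The paper's algorithm instead solves for the exact intersection of the discharge exponential with the next sine rise, which is what yields ppm-level precision.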
[932] vixra:2112.0114 [pdf]
Two-Point Momentless: Space and Time in the Context of Quantum Entanglement
In this paper, a cause for the lack of unification between quantum field theory and general relativity is identified. It is shown that the description of spacetime in current theoretical models assumes energy to be unchanging over small units of time at the quantum scale. As a means of counteraction, a condition is established for quantum gravity theories and the concept of 'two-point momentless' is presented.
[933] vixra:2112.0113 [pdf]
Un Percorso Nel Formalismo Lagrangiano (a Path Through the Lagrangian Formalism)
Il manuale intende offrire un percorso sul formalismo lagrangiano dei sistemi olonomi cercando il giusto equilibrio tra l'approccio fisico-applicativo ed il contesto astratto dell'ambiente geometrico. Agli ordinari argomenti collegati ai sistemi olonomi si aggiunge uno studio sui potenziali generalizzati, l'applicazione ai sistemi non inerziali ed un breve percorso sui sistemi anolonomi. <p> The handbook aims to offer a path through the Lagrangian formalism of holonomic systems, seeking the right balance between the physical-application approach and the abstract context of the geometric environment. In addition to the ordinary topics related to holonomic systems, there is a study of generalized potentials, the application to non-inertial systems and a short path through nonholonomic systems.
[934] vixra:2112.0110 [pdf]
Un Percorso Nel Formalismo Hamiltoniano (a Path Through the Hamiltonian Formalism)
Partendo dai classici argomenti delle equazioni canoniche di Hamilton (ottenute tramite la trasformazione di Legendre) e dei campi vettoriali hamiltoniani, il manuale intende proseguire il percorso sul formalismo hamiltoniano presentando l'approccio variazionale collegato all'integrale di Hilbert e i campi di Weierstrass. In questo modo si ottiene l'invariante integrale di Poincaré-Cartan che caratterizza i sistemi hamiltoniani e si ha accesso alla teoria delle trasformazioni canoniche e delle funzioni generatrici. Si conclude presentando l'equazione di Hamilton-Jacobi e accennando alla definizione di sistema integrabile. <p> Starting from the classic topics of Hamilton's canonical equations (obtained through the Legendre transformation) and Hamiltonian vector fields, the manual continues the path through the Hamiltonian formalism by presenting the variational approach linked to Hilbert's integral and the Weierstrass fields. In this way the Poincaré-Cartan integral invariant that characterizes Hamiltonian systems is obtained, giving access to the theory of canonical transformations and generating functions. We conclude by presenting the Hamilton-Jacobi equation and mentioning the definition of an integrable system.
[935] vixra:2112.0108 [pdf]
An Essential History of Euclidean Geometry
In this note, we briefly review the long history of Euclidean geometry and point out the essential development it receives from the new discovery of division by zero and the division by zero calculus. Through the work of Hiroshi Okumura, we will be able to see an important and great new world of Euclidean geometry.
[936] vixra:2112.0097 [pdf]
Phish: A Novel Hyper-Optimizable Activation Function
Deep-learning models are trained using backpropagation. The activation function within hidden layers is a critical component in minimizing loss in deep neural networks. The Rectified Linear Unit (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have been shown to yield better results than ReLU under specific circumstances. Phish is a novel activation function proposed here. It is a composite function defined as f(x) = x TanH(GELU(x)), with no discontinuities apparent in the differentiated graph on the domain observed. Generalized networks were constructed using different activation functions, with SoftMax as the output function. Using images from the MNIST and CIFAR-10 databanks, these networks were trained to minimize sparse categorical cross-entropy. A large-scale cross-validation was simulated using stochastic Markov chains to account for the law of large numbers for the probability values. Statistical tests support the research hypothesis that Phish could outperform other activation functions in classification. Future experiments would involve testing Phish in unsupervised learning algorithms and comparing it to more activation functions.
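As a quick illustration (our sketch, not the paper's code), the composite can be written directly from its definition, using the exact erf-based form of GELU:

```python
import math

def gelu(x):
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def phish(x):
    # Phish activation: f(x) = x * tanh(GELU(x)).
    return x * math.tanh(gelu(x))
```

Like Swish and Mish, the function is smooth, passes through the origin, and grows nearly linearly for large positive inputs.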
[937] vixra:2112.0096 [pdf]
The Information Paradox of Black Holes May not Exist
In this paper, through numerical simulation of the process in which a particle falls into a Schwarzschild black hole, and analysis of the causal relationships between events on the event horizon, it is concluded that if the black hole itself dies by evaporation, the event of an infalling object crossing the event horizon never happens, because that crossing always lies in the causal future of the black hole's death by evaporation. Since the object never really falls into the black hole, the "information paradox of black holes" is itself no longer a problem.
[938] vixra:2112.0095 [pdf]
TripleRE: Knowledge Graph Embeddings via Triple Relation Vectors
Knowledge representation is a classic problem in knowledge graphs. Distance-based models have made great progress. The most significant recent developments in this direction are RotatE[1] and PairRE[2], which express relationships as projections of nodes, whereas the TransX series of models (TransE[3], TransH[4], TransR[5]) expresses relationships as translations of nodes. To date, the combination of projection and translation has received scant attention in the research literature. Hence, we propose TripleRE, a method that models relationships by both projections and translations. Compared with other knowledge representation models, we achieve the best results on the ogbl-wikikg2 dataset.
[939] vixra:2112.0090 [pdf]
Orbit Precession in Classical Mechanics
As is well known, precessing ellipses appear as solutions of the equations of the general theory of relativity, while it is generally accepted that classical mechanics admits only the following orbit equations: circles, ellipses, parabolas and hyperbolas. However, precessing ellipses also appear in classical mechanics. Orbital precession is observed not only in the motion of the planets of the Solar System; precession of the periastron is also observed in close binary systems whose components have evolved into pulsars. In such systems, the masses of the components – neutron stars – are of the same order of magnitude. Consequently, they move in similar orbits around the center of mass, and these orbits are uniformly precessing ellipses. We write down the equation of such an orbit and derive from it an expression for the force of attraction acting between the bodies. It turns out that, in addition to the Newtonian force, which is inversely proportional to the square of the distance between the bodies, the expression for the force contains a term inversely proportional to the cube of the distance.
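The derivation sketched above is standard and can be reconstructed (our sketch, not quoted from the paper) from the Binet equation $F(u) = -mh^{2}u^{2}\left(\frac{d^{2}u}{d\varphi^{2}}+u\right)$ with $u = 1/r$. For a uniformly precessing ellipse $u(\varphi) = \frac{1+e\cos k\varphi}{p}$ one finds \begin{align} \frac{d^{2}u}{d\varphi^{2}}+u = \frac{k^{2}}{p}+(1-k^{2})\,u, \qquad F = -\frac{mh^{2}k^{2}}{pr^{2}}-\frac{mh^{2}(1-k^{2})}{r^{3}},\nonumber \end{align} i.e. an inverse-square Newtonian term plus an inverse-cube correction that vanishes for a non-precessing ellipse ($k=1$).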
[940] vixra:2112.0088 [pdf]
Knowledge Graph Based Query Processing by Matching Words to Entities Using Wikipedia
Thirty years of search-engine development have never set aside the fundamental problem: improving the relevance of page rankings for a given query. As NLP is now widely used, this paper discusses a data abstraction via knowledge graphs (KG) for NLP models that can be applied to relating keywords to entities with higher probability.
[941] vixra:2112.0084 [pdf]
Light-speed Acceleration Radius
Introduced and discussed is what is termed the light-speed acceleration radius. This is the radius of a spherical gravitational object at which an object (particle) at rest will accelerate to the speed of light in the Planck time. Because the Planck time is likely the shortest possible time interval and the speed of light is the maximum possible speed, this is the radius for the maximum gravitational field as measured by gravitational acceleration. This radius differs from the Schwarzschild radius except for so-called micro black holes.
[942] vixra:2112.0083 [pdf]
Theoretical Ratio of the Gravitational Force to the Electromagnetic Force Between Two Electrons
<p> This paper develops the theoretical ratio of the gravitational force to the electromagnetic force between two electrons. I refer to this ratio as the <i>ggee</i> ratio. The <i>ggee</i><sub>theory</sub> ratio equals the product of a factor multiplied by <i>alpha</i><sup>2</sup>.</p> <p> The factor portion of the calculation comes from an unusual source. An underlying model posits a metaphysical structure of space and of the electron. A rational number solution to the geometry of the model leads directly to the factor. This solution emerges completely independently from the <i>ggee</i> ratio it produces. The rational factor seems to be an exact solution.</p> <p> The precision of <i>ggee</i> normally depends on the precision of G. The precision of the <i>ggee</i> developed here depends on the precision of <i>alpha</i><sup>2</sup>. Using CODATA values for 2018, <i>ggee</i><sub>codata</sub> = 2.400610(54)E-43 and <i>ggee</i><sub>theory</sub> = 2.40071068266(72)E-43. The theoretical value is 1.85 sigma greater than the CODATA-derived value.</p> <p> The gravitational constant, G, has a history of disparate value ranges. A deviation of 1.85 sigma may fall into an acceptable range more so than would normally be the case.</p>
[943] vixra:2112.0081 [pdf]
On the Last Numbers of Positive Integers
In this note, we are interested in the last digits of positive integers; for example, the last digit of 20211206 is 6. Typically, we note that for any positive integer $a$, the last digits of $a^5$ and $a$ are the same.
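The stated property follows from Fermat's little theorem applied modulo 2 and modulo 5, and is easy to check numerically (an illustrative sketch, not the author's code):

```python
def same_last_digit(a):
    # True when a^5 and a end in the same decimal digit,
    # i.e. a^5 ≡ a (mod 10).
    return pow(a, 5, 10) == a % 10
```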
[944] vixra:2112.0068 [pdf]
Measuring the One-Way Speed of Light
In this article, we present a method for measuring the one-way speed of light, for both flat (Euclidean) and curved (non-Euclidean) space, with subtle differences in approach. So far, there is no accepted method for measuring the one-way speed of light; there is one for the two-way speed of light, where, in essence, only a mirror is used to reflect light back to where it began its journey, under the assumption that the speed of light is equal in all directions (which, according to Einstein, need not be true).
[945] vixra:2112.0067 [pdf]
Tentatives For Obtaining The Proof of The Riemann Hypothesis
This report presents a collection of attempts to obtain a final proof of the Riemann Hypothesis. The last paper in the report has been submitted to a mathematical journal for review.
[946] vixra:2112.0062 [pdf]
The New Tunisian Triangulation
Tunisian geodesy has seen a multitude of geodetic systems giving different coordinates. On the occasion of the unification of these systems into a new system called "The New Tunisian Triangulation (NTT)", we have written these notes for the technical assistants of the OTC, to prepare them for the use of the new system and, above all, for the question of switching from the existing systems to the new NTT system.
[947] vixra:2112.0061 [pdf]
On the General Erd\h{o}s-Tur\'{a}n Additive Base Conjecture
In this paper we introduce a multivariate version of circles of partition introduced and studied in \cite{CoP}. As an application we prove a weaker general version of the Erd\H{o}s-Tur\'{a}n additive base conjecture. The actual Erd\H{o}s-Tur\'{a}n additive base conjecture follows from this general version as a consequence.
[948] vixra:2112.0054 [pdf]
Oxford Concise Dictionary of Mathematics, Penguin Dictionary of Mathematics and the Graphical Law
We study the Oxford Concise Dictionary of Mathematics by C. Clapham and J. Nicholson and the Penguin Dictionary of Mathematics by D. Nelson, separately. For each dictionary we plot the natural logarithm of the number of entries beginning with a given letter, normalised, against the natural logarithm of the rank of that letter, normalised. We conclude that both dictionaries can be characterised by BP(4,$\beta H=0$), i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours in the absence of an external magnetic field H. Here $\beta=\frac{1}{k_{B}T}$, where T is the temperature and $k_{B}$ is the Boltzmann constant.
[949] vixra:2112.0043 [pdf]
The Unification of the Tunisian Terrestrial Geodetic Networks and the Establishment of a New Plane Representation Explanatory Statement
This report is a new version of the 2008 report. It is a statement of the reasons for unifying the Tunisian terrestrial geodetic systems to meet the needs of cartographic and topographic work and of the establishment of geographic and cadastral information systems.
[950] vixra:2112.0041 [pdf]
An Easy Proof of the Triangle Inequality
High school and undergraduate algebra and calculus textbooks don't provide a fast and easy proof of the triangle inequality. Here is a proof that seems relatively easy. It does require a little bit of logic, but that can be a plus.
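A proof of the kind the note describes (a standard argument, reproduced here for illustration) takes only two steps for real numbers: since $ab\leq |a||b|$, \begin{align} |a+b|^{2}=a^{2}+2ab+b^{2}\leq |a|^{2}+2|a||b|+|b|^{2}=(|a|+|b|)^{2},\nonumber \end{align} and taking square roots of both (nonnegative) sides gives $|a+b|\leq |a|+|b|$.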
[951] vixra:2112.0038 [pdf]
A Stringy Model of Pointlike Particles
A previous supersymmetric preon scenario for visible matter particles is extended to the dark sector. In addition, the scenario is reformulated as a Double Field Theory (DFT) with four extra dimensions, to avoid a singular Big Bang in cosmology. T-duality and doubled local Lorentz symmetry of the model are genuine stringy properties. It is proposed that DFT preons may be an approximate pointlike projection of string theory.
[952] vixra:2112.0027 [pdf]
The Binary Goldbach Conjecture Via the Notion of Signature
In this paper we prove the binary Goldbach conjecture. By exploiting the language of circles of partition, we show that for all sufficiently large $n\in 2\mathbb{N}$ \begin{align} \# \left \{p+q=n|~p,q\in \mathbb{P}\right \}>0.\nonumber \end{align}This proves that every sufficiently large even number can be written as the sum of two prime numbers.
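The claim concerns sufficiently large even numbers, but small cases are easy to verify by brute force (an illustrative check, unrelated to the paper's circle-of-partition method):

```python
def primes_sieve(n):
    # Boolean sieve of Eratosthenes up to n inclusive.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return sieve

def goldbach_pairs(n, sieve):
    # All unordered prime pairs (p, q) with p + q = n and p <= q.
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if sieve[p] and sieve[n - p]]
```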
[953] vixra:2112.0023 [pdf]
Relativity in Function Spaces and the Need for Fractional Exterior Calculus
We look at Lorentz transformations from the perspective of functional analysis and show that the theory of functional analysis has so far neglected a critical point by not taking into consideration the inputs of functions when measuring distances in function spaces.
[954] vixra:2112.0022 [pdf]
A New Relation Between Lerch's $\Phi$ and the Hurwitz Zeta
A new relation between the Lerch's transcendent, $\Phi$, and the Hurwitz zeta, $\zeta(k,b)$, at the positive integers is introduced. It is derived simply by inverting the relation presented in the precursor paper with one of two approaches (its generating function or the binomial theorem). This enables one to go from Lerch as a function of Hurwitz zetas (of different orders), to Hurwitz as a function of Lerches. A special case of this new functional equation is a relation between the Riemann's zeta function and the polylogarithm.
[955] vixra:2112.0018 [pdf]
From Classical to the Quantum Motion of Strings
We consider some problems concerning classical and quantum strings which can have deep physical meaning. We show that there is an asymmetry of action and reaction in the motion of a string with the left end fixed and the right end in periodic motion, and we derive the quantum internal motion of this system. The quantization of the string with an interstitial massive defect is performed. The classical motion of a uniformly accelerated string is considered, and its relation to the Bell spaceship paradox, involving the Lorentz contraction, is discussed. Since the acceleration of the string can evidently be caused by gravity, we show that such acceleration causes a different internal motion of the string. Gravity can be described by a string medium in the Newton model of gravity; we show that in the string model of gravity the motions of the planets and the Moon oscillate along the classical trajectories. In the string model of hadrons, the quarks are tied together by a gluon tube, which can be approximated by a tube of vanishing width, i.e. by a string. We apply a delta-function force to the left side of the string and calculate the propagation of the pulse in the system.
[956] vixra:2112.0013 [pdf]
Minimum with Inequality Constraint Applied to Increasing Cubic, Logistic and Gompertz or Convex Quartic and Biexponential Regressions
We present a method of minimizing an objective function subject to an inequality constraint. It enables us to minimize the sum of squares of deviations in linear regression under inequality restrictions. We demonstrate how to calculate the coefficients of a cubic function under the restriction that it is increasing, and we also mention how to fit a convex quartic polynomial. We use such results in interpolation as a method for calculating starting values for iterative fitting of some specific functions, such as the four-parameter logistic, positive bi-exponential, or Gompertz functions. Curvature-driven interpolation enables such calculations where solutions to the interpolation equations might otherwise not exist or not be unique. We also present examples to illustrate how the method works and compare our approach with that of Zhang (2020).
[957] vixra:2112.0010 [pdf]
COM Quantum Laws of the LIGO Signal
Academic circles over-publicize the difficult process by which LIGO explores and extracts the signal according to a predetermined target. This not only exposes how such so-called scientific experiments resemble a secret children's game, but also draws readers' attention far away from the important question of how to use scientific methods to test whether the LIGO signal is the gravitational wave generated by a spiraling binary merger, so that they blindly believe science-fiction news. What exact law should gravitational waves obey? Since LIGO gives the so-called observation data of the GW150914 signal, an irrefutable scientific conclusion can be obtained by analyzing the exact law obeyed by the GW150914 signal and comparing it with the exact law of gravitational waves. In fact, the GW150914 signal does not follow the relativistic Blanchet frequency equation of gravitational waves that LIGO likes to cite (see the paper "Relativistic Equation Failure for LIGO Signals"). It has a unique law and seems to be a signal originating on Earth, although further analysis shows some specific differences from such terrestrial signals. The comprehensive conclusion from multiple perspectives is that it is very likely that the key operators of LIGO secretly extracted data from the motion of a simulation device to confuse the public and thereby forged the GW150914 gravitational wave. This paper accurately fits the COM quantum law obeyed by the GW150914 signal frequency. However, almost all famous mainstream academic journals unanimously refuse to publish papers accurately analyzing the precise law of the LIGO signal, and continue to publish more science-fiction stories without experimental data analysis, further maintaining the lie.
The author now offers a reward of 1 million yuan to scholars who rigorously deduce the exact co-quantization law of the GW150914 signal in theory rather than guessing by hypothesis. People who pursue truth all over the world should unite to prevent the further spread of mainstream academic corruption that ignores academic morality, seeks only fame and wealth, cooperates in fraud and stifles truth. This reward is valid until the author publishes the core principles of COM quantum theory and is limited to the author's lifetime.
[958] vixra:2112.0009 [pdf]
Higher Multiplicative Series
In the Fibonacci series, we have two numbers, and by adding them we get a series consisting of even and odd numbers that continues to infinity; we can find the nth term by Binet's formula. Here we instead consider multiplying the terms: starting from the first two terms (a, b), we continue with multiplication where the Fibonacci series uses addition. As a result we get very large integers from approximately the 7th term, which is natural, since each term is the product of the two preceding ones. The first two terms remain a and b, but from the third term onward each term can be written as a power of these integers, where the powers follow the Fibonacci series; in this way we can also find the nth term of the multiplicative series. The first two terms must be kept in the given order throughout the calculation of the whole series; altering the order violates the rule of the restricted term. There are thus two notions of multiplicative series, restricted and non-restricted: if the series is generated from (a, b) as given, it is restricted; if (a, b) is given but the series for (b, a) is asked for, it is non-restricted. We consider four possible criteria for pairing the variables (a, b), and obtain the series as well as the value of the nth term for all possible solutions.
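The rule described, with exponents following the Fibonacci numbers, can be checked with a short sketch (illustrative, not the author's code):

```python
def multiplicative_series(a, b, n):
    # t1 = a, t2 = b, and t_k = t_{k-1} * t_{k-2} thereafter.
    terms = [a, b]
    while len(terms) < n:
        terms.append(terms[-1] * terms[-2])
    return terms

def nth_term_closed_form(a, b, k):
    # For k >= 3, t_k = a^F(k-2) * b^F(k-1), where F(1) = F(2) = 1
    # is the Fibonacci sequence.
    f = [0, 1, 1]
    while len(f) <= k:
        f.append(f[-1] + f[-2])
    return a ** f[k - 2] * b ** f[k - 1]
```

For (a, b) = (2, 3) the series runs 2, 3, 6, 18, 108, ..., and the closed form reproduces each term from the third onward.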
[959] vixra:2112.0001 [pdf]
Our Collapsing Friedman Universe
In 1907 using special relativity, Albert Einstein proved that vacuum permittivity, ε, changes in accelerating coordinate reference frames. ε is the scalar in Maxwell’s equations that determines the speed of light and the strength of electrical fields. In 1952, Møller confirmed Einstein’s discovery by proving that ε is a function of the curvature of static spacetimes. In 1994, Sumner proved that ε changes with the curvature of Friedmann spacetime. Photon energies are proportional to ε, but the energies of photons emitted by atoms are proportional to ε^2. This difference reverses the interpretation of Hubble redshifts. Hubble redshifts only result when a Friedmann universe is collapsing. This is confirmed by the Pantheon redshift data fit of 1048 supernovas with a negative Hubble constant Ho =-72.10±0.75 km s−1 Mpc−1 and a deceleration parameter 1/2 < q_o < 0.51. The velocity of light in Friedmann geometry is inversely proportional to the radius of the universe. The velocity of light was infinite at the Big Bang and decreased to zero at maximum size when the universe began to collapse. The velocity of light is now accelerating towards infinity. Its current value is c. Collapse will be complete in 9.05 billion years. The current age of the universe is estimated to be 1.54 x 10^4 billion years.
[960] vixra:2111.0172 [pdf]
New Evolutionary Computation Models and their Applications to Machine Learning
Automatic Programming is one of the most important areas of computer science research today. Hardware speed and capability have increased exponentially, but software is years behind. The demand for software has also increased significantly, but it is still written the old-fashioned way: by humans. There are multiple problems when the work is done by humans: cost, time, quality. It is costly to pay humans, it is hard to keep them satisfied for a long time, it takes a lot of time to teach and train them, and the quality of their output is in most cases low (in software, mostly due to bugs). The real advances in human civilization appeared during the industrial revolutions. Before the first revolution, most people worked in agriculture; today, very few percent of people work in this field. A similar revolution must appear in the computer programming field, otherwise we will have as many people working in this field as we had in the past working in agriculture. How do people know how to write computer programs? Very simply: by learning. Can we do the same for software? Can we make software learn how to write software? It seems that this is possible (to some degree), and the term is called Machine Learning; it was first coined in 1959 by the first person who made a computer perform a serious learning task, namely Arthur Samuel. However, things are not as easy as in humans (well, truth be said, for some humans it is impossible to learn how to write software). So far we do not have software that can learn perfectly to write software. We have some particular cases where some programs do better than humans, but the examples are sporadic at best. Learning from experience is difficult for computer programs. Instead of trying to simulate how humans teach humans how to write computer programs, we can simulate nature.
[961] vixra:2111.0169 [pdf]
Evolving Evolutionary Algorithms using Multi Expression Programming
Finding the optimal parameter setting (i.e. the optimal population size, the optimal mutation probability, the optimal evolutionary model, etc.) for an Evolutionary Algorithm (EA) is a difficult task. Instead of evolving only the parameters of the algorithm, we evolve an entire EA capable of solving a particular problem. For this purpose the Multi Expression Programming (MEP) technique is used; each MEP chromosome encodes multiple EAs. A non-generational EA for function optimization is evolved in this paper. Numerical experiments show the effectiveness of this approach.
[962] vixra:2111.0168 [pdf]
The Tower Function and Applications
In this paper we study an extension of the Euler totient function to the rationals and explore some applications. In particular, we show that \begin{align} \# \{\frac{m}{n}\leq \frac{a}{b}~|~m\leq a,~n\leq b,~\gcd(m,a)=\gcd(n,b)=1,~\gcd(n,a)>1\nonumber \\~\vee~\gcd(m,b)>1~\vee ~\gcd(m,n)>1\}=\sum \limits_{\substack{\frac{m}{n}\leq \frac{a}{b}\\mn\leq ab\\m>a,n\leq b~\vee~m\leq a,n>b~\vee~\gcd(m,n)>1\\ \gcd(mn,ab)=1}}1\nonumber \end{align} provided $\gcd(a,b)=1$.
[963] vixra:2111.0167 [pdf]
Seeking the Analytic Quaternion
By combining the complex analytic Cauchy-Riemann derivative with the Cayley-Dickson construction of a quaternion, possible formulations of a quaternion derivative are explored with the goal of finding an analytic quaternion derivative having conjugate symmetry. Two such analytic derivatives can be found. Although no example is presented, it is suggested that this finding may have significance in areas of quantum mechanics where quaternions are fundamental, especially regarding the enigmatic phenomenon of complementarity, where a quantum process seems to present two essential aspects.
[964] vixra:2111.0161 [pdf]
ANN Synthesis and Optimization of Electronically Scanned Coupled Planar Periodic and Aperiodic Antenna Arrays Modeled by the MoM-GEC Approach
This paper proposes a new formulation that relies on the method of moments combined with the generalized equivalent circuit (MoM-GEC) to study a beamforming application for coupled periodic and quasi-periodic planar antenna arrays. Numerous voltage designs are used to show the adequacy and reliability of the proposed approach. The radiators are viewed as planar dipoles, and consequently mutual coupling effects are considered. The proposed array shows a noticeable improvement over existing structures in terms of size, 3-D scanning, directivity, SLL reduction, and HPBW. The results verify that multilayer feed-forward neural networks are robust and can handle complex antenna problems; moreover, an artificial neural network (ANN) can quickly produce optimization and synthesis results by generalizing with an early-stopping method. A significant gain in running time and memory consumption is obtained by employing this technique for improving generalization. Simulations are carried out using MATLAB, and several simulation examples validate this work.
[965] vixra:2111.0154 [pdf]
Zero Represents Impossibility From the Viewpoint of Division by Zero
In this note, by using an elementary property of reproducing kernels, we will show that zero represents impossibility from the viewpoint of the division by zero.
[966] vixra:2111.0150 [pdf]
Bayesian Inference Via Generalized Thermodynamic Integration
The idea of using a path of tempered posterior distributions has been widely applied in the literature for the computation of marginal likelihoods (a.k.a. Bayesian evidence). Thermodynamic integration, path sampling and annealed importance sampling are well-known examples of algorithms belonging to this family of methods. In this work, we introduce a generalized thermodynamic integration (GTI) scheme which is able to perform complete Bayesian inference, i.e., GTI can approximate generic posterior expectations (not only the marginal likelihood). Several scenarios for the application of GTI are discussed, and different numerical simulations are provided.
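GTI itself is beyond a quick sketch, but the classical thermodynamic-integration identity it generalizes, $\log Z = \int_0^1 \mathbb{E}_{\beta}[\log \ell(\theta)]\, d\beta$ over the tempered path $p_\beta(\theta) \propto p(\theta)\,\ell(\theta)^\beta$, can be illustrated on a conjugate Gaussian model where the tempered posteriors are available in closed form (an illustrative example of ours, not from the paper):

```python
import math

def tempered_moments(y, beta):
    # Tempered posterior p_beta ∝ N(theta; 0, 1) * N(y; theta, 1)^beta
    # is Gaussian by conjugacy, with this mean and variance.
    var = 1.0 / (1.0 + beta)
    mean = beta * y * var
    return mean, var

def expected_loglik(y, beta):
    # E[log N(y; theta, 1)] under the tempered posterior p_beta.
    mean, var = tempered_moments(y, beta)
    return -0.5 * math.log(2.0 * math.pi) - 0.5 * (var + (y - mean) ** 2)

def thermodynamic_integration(y, n_grid=2001):
    # log Z = integral over beta in [0, 1], via the trapezoidal rule.
    h = 1.0 / (n_grid - 1)
    vals = [expected_loglik(y, i * h) for i in range(n_grid)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
```

With prior N(0, 1) and likelihood N(y; theta, 1), the exact evidence is N(y; 0, 2), so the quadrature can be checked against $-\tfrac{1}{2}\log(4\pi) - y^2/4$.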
[967] vixra:2111.0148 [pdf]
Looping and Divergence in the Collatz Conjecture
In this paper, we investigate the possible scenarios in which a number does not satisfy the Collatz Conjecture. Specifically, we examine numbers which may have a looping Collatz reduction sequence as well as numbers which may lead to a diverging Collatz reduction sequence. In order to investigate these, we look at the parity of the numbers in a general Collatz reduction sequence. Further, we examine cases in which these parity cycles repeat themselves infinitely in the reduction sequence. Through the research conducted in the paper, we formulate a necessary condition for looping in the Collatz Conjecture. We also prove that if a number has a diverging reduction sequence, then it must generate an infinite non-repeating parity cycle.
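The parity sequences the paper analyzes can be generated directly (a small sketch, not the authors' code):

```python
def collatz_parity(n):
    # Parities (0 = even, 1 = odd) along the Collatz trajectory of n,
    # down to and including the terminal 1.
    parities = []
    while n != 1:
        parities.append(n % 2)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    parities.append(1)  # 1 is odd
    return parities
```

For example, the trajectory 6, 3, 10, 5, 16, 8, 4, 2, 1 yields the parity cycle 0, 1, 0, 1, 0, 0, 0, 0, 1.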
[968] vixra:2111.0145 [pdf]
Effective Sample Size Approximations as Entropy Measures
In this work, we analyze alternative effective sample size (ESS) measures for importance sampling algorithms. More specifically, we study a family of ESS approximations introduced in [11]. We show that all the ESS functions included in this family (called the Huggins-Roy family) satisfy all the required theoretical conditions introduced in [17]. We also highlight the relationship of this family with the Rényi entropy. By numerical simulations, we study the performance of different ESS approximations, also introducing an optimal linear combination of the most promising ESS indices introduced in the literature. Moreover, we obtain the best ESS approximation within the Huggins-Roy family, which provides an almost perfect match with the theoretical ESS values.
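The link to Rényi entropy mentioned in the abstract suggests ESS measures of the form $\mathrm{ESS}_\alpha = \exp(H_\alpha(\bar{w})) = (\sum_i \bar{w}_i^\alpha)^{1/(1-\alpha)}$, of which the familiar $1/\sum_i \bar{w}_i^2$ is the $\alpha = 2$ member. A small sketch (our reading of that connection, not the paper's code):

```python
def ess_renyi(weights, alpha):
    # Exponential of the Rényi entropy of the normalized weights:
    # ESS_alpha = (sum of wbar_i^alpha)^(1/(1-alpha)), for alpha != 1.
    # (Assumed form of the family; alpha = 2 recovers 1/sum(wbar^2).)
    total = sum(weights)
    wbar = [w / total for w in weights]
    return sum(w ** alpha for w in wbar) ** (1.0 / (1.0 - alpha))
```

Any member of this family returns N for uniform weights and 1 for a degenerate weight vector, as an ESS measure should.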
[969] vixra:2111.0143 [pdf]
Operator Evolution Equations of Angular Motion Law
Quantum mechanics, based on the Planck hypothesis and the statistical interpretation of the wave function, has achieved great success in describing the discrete laws of micro motion. However, the ideas of quantum mechanics have not been successfully used to describe the discrete laws of macro motion, and the causality implied in the Planck hypothesis and the scope of application of the basic principles of quantum mechanics have not been clarified. In this paper, we first introduce the angular motion law and its applications; as a supplement to classical mechanics it may seem of no special significance, but it plays an irreplaceable role in testing whether the core mathematical procedure of quantum mechanics, the operator-evolution wave equation, satisfies the unitary principle. Then, the operator-evolution wave equations corresponding to the angular motion law are discussed, and the necessity of a generalized optimization of the differential equations is illustrated in the form of ordinary differential equations. Finally, a real wave equation, superior to the Schr\"{o}dinger equation in physical meaning but not necessarily the ultimate answer, is briefly introduced. The implicit conclusion is that the Hamiltonian cannot be the only possible choice for constructing the wave equation in quantum mechanics, and that there is no causal relationship between the operator-evolution wave equation and the quantized energy of a bound-state system, which indicates that whether the essence of quantum mechanics can be completely revealed is the key to unifying the macro and micro quantized theories.
[970] vixra:2111.0139 [pdf]
Unified Quantum Gravity Field Equation Describing the Universe from the Smallest to the Cosmological Scales
This paper introduces a new quantum gravity field equation derived from collision space-time. It shows how changes in energy (collision-space) are linked to changes in matter (collision-time). This field equation can be written in several different forms. Gravity, at the deepest level, is linked to the change in gravitational energy over the Planck time. In our view, this is linked to the collision between two indivisible particles, a collision with a duration of the Planck time. We also show how an equation of the universe, recently derived from relativistic Newtonian theory, can be derived in a new way from the quantum gravity field equation presented in this paper. This equation gives a new explanation for the cosmological redshift that does not seem to be related to expanding space or the big bang hypothesis. Likewise, the approximately 13.9 billion years of the Hubble time do not seem to be related to the age of the universe at all, but to the collision time of the mass in the universe.
[971] vixra:2111.0132 [pdf]
Proof of Riemann Hypothesis (3)
This paper is an attempt to prove the Riemann hypothesis by the following process. 1. We construct $(N+1)/2$ infinite series from one equation that gives the analytic continuation of $\zeta(s)$ and two expressions ($1/2+a+bi$, $1/2-a-bi$) for the non-trivial zero points of $\zeta(s)$, where $N = 1, 3, 5, 7, \ldots$ 2. From the above infinite series we find, by letting $N \to \infty$, that $a$ cannot have any value but zero. 3. Therefore the non-trivial zero points of $\zeta(s)$ must be $1/2 \pm bi$.
[972] vixra:2111.0130 [pdf]
An Exact Formula for the Prime Counting Function
This paper discusses a few main topics in Number Theory, such as the M\"{o}bius function and its generalization, leading up to the derivation of a neat power series for the prime counting function, $\pi(x)$. Among its main findings, we can cite the extremely useful inversion formula for Dirichlet series (given $F_a(s)$, we know $a(n)$, which may provide evidence for the Riemann hypothesis, and enabled the creation of a formula for $\pi(x)$ in the first place), and the realization that sums of divisors and the M\"{o}bius function are particular cases of a more general concept. One of its conclusions is that it's unnecessary to resort to the zeros of the analytic continuation of the zeta function to obtain $\pi(x)$.
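As a baseline illustration of the ingredients the abstract names (our own sketch; the paper's power series for $\pi(x)$ and its Dirichlet-series inversion formula are not reproduced here), the Möbius function and the prime counting function can be computed directly:

```python
def mobius_sieve(n):
    # Compute the Moebius function mu(k) for 1 <= k <= n with a linear sieve.
    mu = [1] * (n + 1)
    primes = []
    is_comp = [False] * (n + 1)
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0   # squared prime factor
                break
            mu[i * p] = -mu[i]  # one more distinct prime factor
    return mu

def prime_pi(x):
    # Count primes <= x by a plain Eratosthenes sieve.
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

assert prime_pi(100) == 25
mu = mobius_sieve(30)
assert [mu[k] for k in (1, 2, 4, 6, 30)] == [1, -1, 0, 1, -1]
```

These brute-force values are what any closed-form or series expression for $\pi(x)$ and generalized Möbius sums must reproduce.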
[973] vixra:2111.0129 [pdf]
Lerch's $\phi$ and the Polylogarithm at the Positive Integers
We review the closed-forms of the partial Fourier sums associated with $HP_k(n)$ and create an asymptotic expression for $HP(n)$ as a way to obtain formulae for the full Fourier series (if $b$ is such that $|b|<1$, we get a surprising pattern, $HP(n) \sim H(n)-\sum_{k\ge 2}(-1)^k\zeta(k)b^{k-1}$). Finally, we use the found Fourier series formulae to obtain the values of the Lerch transcendent function, $\Phi(e^m,k,b)$, and by extension the polylogarithm, $\mathrm{Li}_{k}(e^{m})$, at the positive integers $k$.
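A direct-series sanity check for the two target functions (our own illustration; the paper's Fourier-series formulae are not reproduced), using the defining sums $\Phi(z,s,a)=\sum_{n\ge 0} z^n/(n+a)^s$ and $\mathrm{Li}_s(z)=z\,\Phi(z,s,1)$, valid for $|z|<1$:

```python
import math

def lerch_phi(z, s, a, terms=200):
    # Direct series Phi(z, s, a) = sum_{n>=0} z^n / (n + a)^s, for |z| < 1.
    return sum(z ** n / (n + a) ** s for n in range(terms))

def polylog(s, z, terms=200):
    # Li_s(z) = z * Phi(z, s, 1) = sum_{n>=1} z^n / n^s.
    return z * lerch_phi(z, s, 1, terms)

# Known closed form: Li_2(1/2) = pi^2/12 - ln(2)^2 / 2.
assert abs(polylog(2, 0.5) - (math.pi ** 2 / 12 - math.log(2) ** 2 / 2)) < 1e-12
```

Any formula for $\Phi(e^m,k,b)$ at positive integer $k$ (with $e^m$ inside the unit disk) can be checked against this truncated series.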
[974] vixra:2111.0128 [pdf]
Lerch's $\phi$ and the Polylogarithm at the Negative Integers
At the negative integers, there is a simple relation between the Lerch $\Phi$ function and the polylogarithm. The literature has a formula for the polylogarithm at the negative integers, which utilizes the Stirling numbers of the second kind. Starting from that formula, we can deduce a simple closed formula for the Lerch $\Phi$ function at the negative integers, where the Stirling numbers are not needed. Leveraging that finding, we also produce alternative formulae for the $k$-th derivatives of the cotangent and cosecant (ditto, tangent and secant), as simple functions of the negative polylogarithm and Lerch $\Phi$, respectively, which is evidence of the importance of these functions (they are less exotic than they seem). Lastly, we present a new formula for the Hurwitz zeta function at the positive integers using this novelty.
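The literature formula the abstract starts from can be sketched as follows (a minimal check, assuming the standard form $\mathrm{Li}_{-n}(z)=\sum_{k=0}^{n}k!\,S(n+1,k+1)\left(\frac{z}{1-z}\right)^{k+1}$ with $S$ the Stirling numbers of the second kind; this is not the paper's Stirling-free formula):

```python
from fractions import Fraction
from math import factorial

def stirling2(n, k):
    # Stirling numbers of the second kind via the standard recurrence.
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def polylog_neg(n, z):
    # Li_{-n}(z) = sum_{k=0}^{n} k! * S(n+1, k+1) * (z/(1-z))^{k+1}
    t = Fraction(z) / (1 - Fraction(z))
    return sum(factorial(k) * stirling2(n + 1, k + 1) * t ** (k + 1)
               for k in range(n + 1))

z = Fraction(1, 3)
assert polylog_neg(1, z) == z / (1 - z) ** 2            # Li_{-1}(z) = z/(1-z)^2
assert polylog_neg(2, z) == z * (1 + z) / (1 - z) ** 3  # Li_{-2}(z)
```

A Stirling-free closed formula, as derived in the paper, would have to agree with these exact rational values for every negative integer order.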
[975] vixra:2111.0127 [pdf]
A Reformulation of the Riemann Hypothesis
We present some novelties on the Riemann zeta function. Using the analytic continuation we created for the polylogarithm, $\mathrm{Li}_{k}(e^{m})$, we extend the zeta function from $\Re(k)>1$ to the complex half-plane, $\Re(k)>0$, by means of the Dirichlet eta function. More strikingly, we offer a reformulation of the Riemann hypothesis through a zeta's cousin, $\varphi(k)$, a pole-free function defined on the entire complex plane whose non-trivial zeros coincide with those of the zeta function.
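The eta-based extension mentioned here is the classical identity $\zeta(s)=\eta(s)/(1-2^{1-s})$, where $\eta(s)=\sum_{n\ge 1}(-1)^{n+1}n^{-s}$ converges for $\Re(s)>0$. A quick numerical check on the real axis (our own sketch, unrelated to the paper's $\varphi(k)$):

```python
import math

def zeta_via_eta(s, terms=100000):
    # Dirichlet eta: eta(s) = sum_{n>=1} (-1)^(n+1) / n^s, convergent for Re(s) > 0.
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    # Extension of zeta beyond the Dirichlet-series region via
    # zeta(s) = eta(s) / (1 - 2^(1 - s)).
    return eta / (1 - 2 ** (1 - s))

assert abs(zeta_via_eta(2) - math.pi ** 2 / 6) < 1e-5       # zeta(2) = pi^2/6
assert abs(zeta_via_eta(3) - 1.2020569031595943) < 1e-9     # Apery's constant
```

For complex $s$ with $0<\Re(s)<1$ the same series applies but converges slowly; convergence-acceleration schemes are then used in practice.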
[976] vixra:2111.0124 [pdf]
PhD Thesis of Youcef Naas
The third chapter concerns the case $L^p(0,1;X)$. More precisely, we are interested in the abstract second-order differential equation of elliptic type (1) with mixed boundary conditions (4), where $A$ is a closed linear operator on a complex Banach space $X$ and $u_0$, $u_1$ are given elements of $X$. Here $f \in L^p(0,1;X)$, $1 < p < \infty$, and $X$ has the geometric property known as UMD. We assume that $A$ is a BIP operator and show that (1)-(4) admits a unique strict solution under certain natural hypotheses of ellipticity on the operator and regularity on the data; we then give an explicit representation of the strict solution. The representation formula for the solution is obtained by two methods, the first based on the Dunford functional calculus and the second on Krein's method [27]; the uniqueness of the representation is proved. In this chapter, we take a new approach to problem (1)-(4) using Mikhlin's theorem. In this part we use Fourier multiplier techniques and Mikhlin's theory to bound the pure imaginary powers of operators. The fourth chapter illustrates our abstract theory with some concrete examples of applications to PDEs in the setting of the spaces $L^p$ and $C^\alpha$.
[977] vixra:2111.0119 [pdf]
Modified Equations for Pressure and Temperature of Ideal Gas
The universal unitary principle of logic testing is used to test the mathematical reasoning behind the pressure equation of an ideal gas, and a negative conclusion is reached. The study finds that the classical molecular kinetic theory establishes a physical model of the uniform motion of a molecule under the action of an equivalent constant force, which violates the principles of mechanics, and that the classical equations for the pressure and temperature of an ideal gas derived from such a model are therefore incorrect. Here we set up a variety of physical models of molecular interaction in accordance with the principles of mechanics, and consistently derive the modified equation for ideal gas pressure. It is proved that the pressure of an ideal gas is equal to the molecular energy per unit volume, and that the thermodynamic temperature of an ideal gas is equal to the quotient of the molecular average kinetic energy and the Boltzmann constant. The inferences of these different models accord with the unitary principle. Furthermore, the problem of the definite solution of the gas molecular velocity distribution function satisfying the light-speed limit condition is proposed. Finally, an experimental suggestion to verify the corrected gas temperature equation is given.
[978] vixra:2111.0117 [pdf]
Relativistic Equation Failure for LIGO Signals
Signal waves of monotonically increasing frequency detected by LIGO are universally considered to be gravitational waves from spiraling binary stars, and the general theory of relativity is thus widely considered to have been confirmed by these experiments. Here we present a universal method for signal-wave spectrum analysis and report the conclusions of numerical calculation and image analysis of the GW150914 signal wave. First, the numerical results for the frequency change rate of the GW150914 signal obey a quantization law that must be described accurately by integers, and there is an irreconcilable difference between these results and the general-relativistic frequency equation for gravitational waves. Second, substituting the frequency and frequency change rate of the GW150914 signal into the general-relativistic frequency equation for gravitational waves yields a system of non-linear equations for the mass of the wave source, and the computer image solution shows that this system has no solution for the GW150914 signal. Third, the chirp mass of the wave source calculated from different frequencies and change rates of the numerical-relativity waveform of the GW150914 signal is not unique, and the numerical-relativity waveform actually deviates too far from the original waveform. Other LIGO signal waveforms do not show obvious characteristics of the gravitational frequency variation of spiraling binary stars and lack precise data, so they cannot be used for numerical analysis and image solution. Therefore, the LIGO signals represented by GW150914 do not support the relativistic gravitational-wave frequency equation. Whether gravitational-wave signals from spiraling binaries that may be detected in the future follow the same quantization law is a question that only numerical analysis of detailed observational data can answer accurately.
[979] vixra:2111.0116 [pdf]
A Refractive Index of a Kink in Curved Space
The relation between the refractive index and curvature is formulated using the Riemann-Christoffel curvature tensor. As a consequence of the fourth-rank nature of the Riemann-Christoffel curvature tensor, we find that the refractive index should be a second-rank tensor. The second-rank refractive index tensor describes linear optics; it follows naturally that the Riemann-Christoffel curvature tensor is related to linear optics. In the case of non-linear optics, the refractive index is a sixth-rank tensor if the susceptibility is a fourth-rank tensor. The Riemann-Christoffel curvature tensor can be formulated in non-linear optics, but with a reduction term. The relation between the (linear and non-linear) refractive index and a (linear and non-linear) mass in curved space is formulated. Related to the Riemann-Christoffel curvature tensor, we formulate "the (linear and non-linear) generalized Einstein field equations". The Sine-Gordon model in curved space is presented, where the Lagrangian is the total energy. This total energy is the mass of a kink (anti-kink) associated with a topological charge (a winding number). We formulate the relation between the (linear and non-linear) refractive index of the kink (anti-kink) and the topological charge, i.e. the winding number. Deflection of light is discussed briefly, and the (linear and non-linear) angle of light deflection is formulated in relation to the mass (the topological charge, the winding number) of the kink (anti-kink).
[980] vixra:2111.0107 [pdf]
The Morbid Equation of Quantum Numbers
The quantum model of orbital penetration by the valence electron of alkali metal elements, which have a uniquely stable structure, is investigated. The electric field outside the atomic kernel is usually expressed by the Coulomb field of the point-charge model, and the composite electric field inside the atomic kernel can be taken as equivalent to the electric field inside a sphere with uniform charge distribution, or to other electric fields without a divergence point. The exact solutions of the two Schrodinger equations, for the bound state in the Coulomb field outside the atom and for the bound state in the equivalent field inside the atom, determine two different quantized energy formulas. Here we show that the atomic kernel surface is the only common zero-potential surface that can be selected. When orbital penetration occurs, the law of conservation of energy requires that the energy-level formulas of the two bound states have corresponding quantum numbers that make them equal. As it turns out, the resulting quantum-number equation has no solution, indicating that the two quantum states of the valence electron are incompatible. This irreconcilable contradiction shows that the quantized energy of quantum mechanics cannot absolutely satisfy the law of conservation of energy.
[981] vixra:2111.0105 [pdf]
The Center of Small World: Mount Sumeru (Meru)
Through a modern interpretation of the Sutras, aided by modern science, we are surprised to find that the Sutras contain accurate modern scientific descriptions of the phenomena of polar day and polar night on the Earth, the Tropic of Cancer, lunar phase changes, the causes of lunar eclipses, and so on. The Buddha's extremely precise numerical descriptions of the layered structure, layer heights, and layer extents of the Earth's ionosphere are far beyond our imagination, and show an incredible, extremely shocking transcendence of their era. Keywords: polar day and polar night, Tropic of Cancer, lunar phases, lunar eclipse, layered structure of the Earth's ionosphere, era transcendence
[982] vixra:2111.0103 [pdf]
Models that Link and Suggest Data About Elementary Particles, Dark Matter, and the Cosmos
We suggest progress regarding the following six physics opportunities. List all elementary particles. Describe dark matter. Explain ratios of dark matter to ordinary matter. Explain eras in the history of the universe. Link properties of objects. Interrelate physics models. We use models based on Diophantine equations.
[983] vixra:2111.0099 [pdf]
Analysis and Research on Superradiant Stability of Kerr-Sen Black Hole
Kerr-Sen black holes have stretchon parameters and hidden conformal symmetries, and their superradiant stability and steady-state resonances are worth further study; this is the research motivation of this paper. Building on earlier work, a new variable $y$ is introduced here to extend its results. We show that when $\sqrt{2a^2}/r^2_+ < \omega < m\varOmega_H + q\varPhi_H$, the Kerr-Sen black hole is superradiantly stable, similar to the superradiance result for the Kerr-Newman black hole.
[984] vixra:2111.0096 [pdf]
The Planck Constant and its Relation to the Compton Frequency
The Planck constant is considered one of the most important universal constants of physics, but its physical nature has still not been fully understood. Further investigation and new perspectives on this quantity should therefore be of interest. We demonstrate that the Planck constant can be directly linked to the Compton frequency of a one-kilogram mass. This further implies that the Planck constant is related to the quantization of matter, not only of energy. We also show that this frequency, when expressed in relation to the kilogram, depends on the observation time. This new interpretation of the Planck constant could be an important step towards a more in-depth understanding of its physical nature, and potentially towards explaining the origin of the mass gap and the rest mass of a photon.
[985] vixra:2111.0095 [pdf]
Application of Multipoints Summation Method to Nonlinear Differential Equations
I suggest a new approximate approach, the Multipoints Summation method, for solving non-linear differential equations analytically. The method connects several local asymptotic series. I present applications of the method to two examples of non-linear differential equations: a saddle-node bifurcation and the non-linear differential equation of the pendulum. Explicit approximate solutions expressed in terms of elementary functions are obtained from an analysis of phase space. This approach may also be applied to other non-linear differential equations.
[986] vixra:2111.0094 [pdf]
A Revisit to Lemoine's Conjecture
In this paper we prove Lemoine's conjecture. By exploiting the language of circles of partition, we show that for all sufficiently large $n\in 2\mathbb{N}+1$ \begin{align} \# \left \{p+2q=n~|~p,q\in \mathbb{P}\right \}>0.\nonumber \end{align} This proves that every sufficiently large odd number can be written as the sum of a prime and the double of a prime.
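A brute-force check of the statement for small odd numbers (our own sketch; the paper's circles-of-partition machinery is not reproduced):

```python
def is_prime(n):
    # Trial-division primality test, adequate for small n.
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def lemoine_representations(n):
    # Count representations n = p + 2q with p, q prime (Lemoine's conjecture
    # asserts at least one exists for every odd n > 5).
    return sum(1 for q in range(2, (n - 1) // 2 + 1)
               if is_prime(q) and is_prime(n - 2 * q))

# Every odd n in this range has at least one representation.
assert all(lemoine_representations(n) > 0 for n in range(7, 501, 2))
assert lemoine_representations(7) >= 1   # 7 = 3 + 2*2
```

Such exhaustive checks cover the small cases that the "sufficiently large" asymptotic argument leaves open.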
[987] vixra:2111.0092 [pdf]
On the Relativity of the Speed of Light
Einstein's assumption that the speed of light is constant is a fundamental principle of modern physics with great influence. However, the nature of the principle of the constancy of the speed of light is rarely described in detail in the relevant literature, which leads to a deep misunderstanding among some readers of special relativity. Here we introduce the unitary principle, which has wide application prospects in testing the logical self-consistency of mathematics, natural science, and social science. Based on this, we propose the complete space-time transformation, which includes the Lorentz transformation; clarify the definition of the relative velocity of light and the conclusion that the relative velocity of light is variable; and further prove that the variable relative speed of light is compatible with Einstein's constant speed of light. The specific conclusion is that the propagation speed of light in vacuum relative to the observer's inertial reference frame is always the constant c, but the propagation speed of light relative to any other inertial reference frame in relative motion with respect to the observer is not equal to the constant c; observed in any inertial frame of reference, the relative velocity of light rays propagating in the same direction in vacuum is 0, while that of light rays propagating in opposite directions is 2c. The essence of Einstein's constant speed of light is that the speed of light in an isolated reference frame is constant, while the relative speed of light in vacuum is variable. The assumption of the constant speed of light in an isolated frame of reference and the inference of the variable relative speed of light can be derived from each other.
[988] vixra:2111.0085 [pdf]
Where is Mount Sumeru (Meru)?
This paper gives a solution to the specific location of "Mount Sumeru (Meru)" in thousands of years of Buddhist myths and legends, one which conforms to the internal descriptive logic of the sutras (sutras that can verify each other). First, in contrast to the traditional Theravada Buddhist view, held for thousands of years, that "Mount Sumeru (Meru)" is located in the Himalayas, this paper comprehensively argues that the central location of "Mount Sumeru (Meru)" is related to the Earth's south magnetic pole. Based on this, we also determine the specific location of the legendary "Four Continents". Second, this paper finds that there is a unique phenomenon of "one body, multiple sides" in the descriptions of the sutras, that is, there exist multiple world representations of the same thing. Finally, thanks to modern scientific research, we are surprised to find that the Buddha's knowledge of the heat distribution in the atmosphere near the Earth's south magnetic pole, and of the existence of local high temperatures in the atmosphere, is far beyond our imagination and shows an incredible, amazing transcendence of its era. Keywords: Mount Sumeru (Meru), Four Continents, one body multiple sides (multi-world representations), era transcendence
[989] vixra:2111.0074 [pdf]
Relativity in Function Spaces
After proposing the Principle of Minimum Gravitational Potential, in pursuit of the explanation behind the correction to Newton's gravitational potential that accounts for Mercury's orbit, we find all the higher-order corrections and show that the consequences of the existence of the speed of light for gravity are not yet fully explored.
[990] vixra:2111.0072 [pdf]
Fractional Distance: The Topology of the Real Number Line with Applications to the Riemann Hypothesis
Recent analysis has uncovered a broad swath of rarely considered real numbers called real numbers in the neighborhood of infinity. Here, we extend the catalog of the rudimentary analytical properties of all real numbers by defining a set of fractional distance functions on the real number line and studying their behavior. The main results are (1) to prove with modest axioms that some real numbers are greater than any natural number, (2) to develop a technique for taking a limit at infinity via the ordinary Cauchy definition reliant on the classical epsilon-delta formalism, and (3) to demonstrate an infinite number of non-trivial zeros of the Riemann zeta function in the neighborhood of infinity. We define numbers in the neighborhood of infinity with a Cartesian product of Cauchy equivalence classes of rationals. We axiomatize the arithmetic of such numbers, prove the operations are well-defined, and then make comparisons to the similar axioms of a complete ordered field. After developing the many underlying foundations, we present a basis for a topology.
[991] vixra:2111.0070 [pdf]
A New Method for the Cubic Polynomial Equation
I present a method to solve the general cubic polynomial equation based on six years of research that started back in 1985 when, in the fifth grade, I first learned of Bhaskara's formula for the quadratic equation. I was fascinated by Bhaskara's formula and naively thought I could replicate his method for the third-degree equation, but only succeeded in 1990, after countless failed attempts. The solution involves a simple transformation to form a cube which, by chance, happens to reduce the degree of the equation from three to two (which seems to be the case for all polynomial equations that admit solutions by means of radicals). I also talk about my experiences trying to communicate these results to mathematicians, both at home and abroad.
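The author's own transformation is not given in the abstract; as a generic sketch, the classical depressed-cubic route (which likewise reduces the problem to a quadratic, in $u^3$) can be coded as follows. All names here are our own illustration:

```python
import cmath

def solve_cubic(a, b, c, d):
    # Roots of a*x^3 + b*x^2 + c*x + d = 0 via the substitution x = t - b/(3a),
    # which yields the depressed cubic t^3 + p*t + q = 0, then Cardano's formula.
    p = (3 * a * c - b ** 2) / (3 * a ** 2)
    q = (2 * b ** 3 - 9 * a * b * c + 27 * a ** 2 * d) / (27 * a ** 3)
    disc = (q / 2) ** 2 + (p / 3) ** 3          # discriminant of the quadratic in u^3
    u = (-q / 2 + cmath.sqrt(disc)) ** (1 / 3)
    if abs(u) < 1e-12:                          # degenerate branch when u = 0
        u = (-q / 2 - cmath.sqrt(disc)) ** (1 / 3)
    omega = complex(-0.5, 3 ** 0.5 / 2)         # primitive cube root of unity
    roots = []
    for k in range(3):
        uk = u * omega ** k
        t = uk - p / (3 * uk) if abs(uk) > 1e-12 else 0.0
        roots.append(t - b / (3 * a))
    return roots

# x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
roots = sorted(solve_cubic(1, -6, 11, -6), key=lambda z: z.real)
assert all(abs(r - e) < 1e-6 for r, e in zip(roots, [1, 2, 3]))
```

Working over the complex numbers throughout sidesteps the casus irreducibilis, where three real roots pass through complex intermediates.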
[992] vixra:2111.0069 [pdf]
A Modified Belief Functions Distance Measure for Orderable Set
This paper proposes a new method of measuring the distance between conflicting ordered sets, quantifying the similarity between focal elements as well as their size. The method can effectively measure the conflict between belief functions on an ordered set without saturating when the focal elements do not overlap. It is proven that the method satisfies the properties of a distance. Examples from engineering budgeting and sensors show that the distance can effectively measure the conflict between ordered sets; comparison with existing methods shows that the proposed distance reflects the information of ordered sets more comprehensively, and that the resulting conflict metric between ordered sets is more robust and accurate.
[993] vixra:2111.0067 [pdf]
A New Proposition of Fibonacci Number
C. A. Church and Marjorie Bicknell gave an exponential generating function for the Fibonacci numbers in 1973. In this paper, I give some results on Fibonacci identities.
[994] vixra:2111.0065 [pdf]
Robotic Autonomy: A Survey
Robotic autonomy is key to the expansion of robotic applications. This paper reviews the success of robotic autonomy in industrial applications, as well as the requirements and challenges of expanding robotic autonomy to applications in need of it, such as education, medical service, and home service. Through these discussions, the paper draws the conclusion that robotic intelligence is the bottleneck for the broad application of robotic technology.
[995] vixra:2111.0061 [pdf]
Quantum Field Theory Models and the Generating Function Technique
Quantum Field Theory (QFT) is a well-accepted set of theories used in particle physics that involves Lagrangian mechanics. One can generate a rich variety of Hamiltonian equation systems from the Lagrangians of QFT to describe simultaneous or confounding processes occurring in particle physics. Unfortunately, the equation systems associated with QFT are relatively hard to solve. This paper shows that the generating function technique (GFT) can be used to solve these equation systems directly while also producing renormalization results. The latter are needed to display the consistency of the solutions and equation systems. Ultimately, an astute scientist in QFT can claim that GFT is a valuable tool in the field of particle physics.
[996] vixra:2111.0057 [pdf]
Hybrid Learning Aided Technology-Rich Instructional Tools - A Case Study: Community College of Qatar
Educational institutions have an essential role in promoting the teaching and learning process within universities, colleges, and communities. Due to the recent COVID-19 pandemic, many educational institutions adopted hybrid learning (HL), a combination of classic and online learning. It integrates the advantages of both and is a fundamental factor in ensuring continued learning. Technological innovations such as HL are changing the teaching process and how students, lecturers, and administrators interact. Based on this, the Community College of Qatar (CCQ) focused on researching the structures and elements related to the adoption of HL. Thus, the goal of this research paper is to reveal the impact of HL on the learning process at CCQ, and the effective didactic tools required for a successful HL program. Our research questions for assessing and evaluating the learning process at CCQ are as follows: a) Is HL a learning strategy that would best suit the students? b) What didactic tools are needed in the HL program at CCQ? c) Will the students meet the learning objectives if an HL program is adopted? A quantitative method was used in this study, and a questionnaire was designed for the survey to measure the opinions of students, instructors, and administrators about the HL program. The results show that the majority of students, instructors, and administrators had a positive attitude toward HL, but some had negative views and experienced challenges. The results were analyzed and discussed to better utilize HL to meet the growing demands of the community.
[997] vixra:2111.0054 [pdf]
Affine Connection Representation of Gauge Fields
There are two ways to unify the gravitational field and gauge fields. One is to represent the gravitational field as a principal bundle connection, and the other is to represent gauge fields as an affine connection. Poincar\'{e} gauge theory and metric-affine gauge theory adopt the first approach; this paper adopts the second. In this approach: (i) Gauge fields and the gravitational field can both be represented by an affine connection; they can be described by a unified spatial frame. (ii) Time can be regarded as the total metric with respect to all dimensions of the internal coordinate space and the external coordinate space. On-shell evolution can be regarded as the gradient direction, and quantum theory can be regarded as a geometric theory of the distribution of gradient directions. Hence, gauge theory, gravitational theory, and quantum theory all reflect intrinsic geometric properties of the manifold. (iii) Coupling constants, chiral asymmetry, PMNS mixing, and CKM mixing arise spontaneously as geometric properties in the affine connection representation, so they need not be regarded as direct postulates in the Lagrangian anymore. (iv) The unification theory of gauge fields represented by an affine connection can avoid the problem of a proton decaying into a lepton that arises in theories such as $SU(5)$. (v) There exists a geometric interpretation of the color confinement of quarks. In the affine connection representation, we obtain better interpretations of the above physical properties; therefore, representing gauge fields by an affine connection is probably a necessary step towards the ultimate theory of physics.
[998] vixra:2111.0053 [pdf]
Quantum Cosmology: Cosmology Linked to the Planck Scale
As we have recently shown, the Planck length can be found independently of G and h, despite its common physical notion. This has enabled us to make a series of cosmological predictions based on only two constants: the Planck length and the speed of light. The present paper further explores the link between the Planck scale and large-scale structures of the Universe. We look at both the Friedmann cosmology and the recently proposed Haug cosmology from this new perspective.
[999] vixra:2111.0048 [pdf]
The 2019 Convention, Quantum Gravity and the Definition of Kilogram
There appears to be a lack of general consensus between the BIPM and the quantum gravity community regarding the definition of the kilogram, in light of the 2019 convention, concerning the role of the gravitational constant $G$ as a defining constant. Unless a decision is reached, not only does the task of the BIPM to ``ensure worldwide unification of measurements'' remain unfulfilled, but the proposals of experimental tests of quantum gravity also remain devoid of any scientific value.
[1000] vixra:2111.0039 [pdf]
Rigorous Proof for Riemann Hypothesis Obtained by Adopting Algebra-Geometry Approach in Geometric Langlands Program
The 1859 Riemann hypothesis conjectures that all nontrivial zeros of the Riemann zeta function are located uniquely on the sigma = 1/2 critical line. Derived from the Dirichlet eta function [a proxy for the Riemann zeta function] are, in chronological order, the simplified Dirichlet eta function and the Dirichlet Sigma-Power Law. Computed zeroes of the former occur uniquely at sigma = 1/2, resulting in the total summation of the fractional exponent (-sigma), which appears twice in this function, being the integer -1. Computed pseudo-zeroes of the latter occur uniquely at sigma = 1/2, resulting in the total summation of the fractional exponent (1 - sigma), which appears twice in this law, being the integer 1. All nontrivial zeros are, respectively, obtained directly and indirectly as the one specific type of zeroes and pseudo-zeroes only when sigma = 1/2. Thus it is proved (using an equation-type proof) that the Riemann hypothesis is true, whereby this function and law rigidly comply with the Principle of Maximum Density for Integer Number Solutions. The geometrical-mathematical [unified] approach used in our proof is equivalent to the algebra-geometry [unified] approach of the geometric Langlands program formalized by Professor Peter Scholze and Professor Laurent Fargues. A succinct treatise on proofs of Polignac's and the twin prime conjectures (using algorithm-type proofs) is also outlined in this anniversary research paper.
[1001] vixra:2111.0029 [pdf]
The "Quantum Game Show": a Very Simple Explanation of Bell's Theorem in Quantum Mechanics
In this article we give a very simple presentation of Bell's inequality by comparing it to a ``quantum game show'', followed by a simple description of Aspect's 1985 experiment with entangled photons, which confirms the quantum-mechanical violation of the inequality. The entire article is non-technical and requires no mathematical background other than high school mathematics and an understanding of basic concepts in probability. The physics involved in Aspect's experiment is also explained.
[1002] vixra:2111.0027 [pdf]
Scientific Value of the Quantum Tests of Equivalence Principle in Light of Hilbert's Sixth Problem
In his sixth problem, Hilbert called for an axiomatic approach to theoretical physics with the aim of achieving precision and rigour in scientific reasoning, in which the logic and language (semantics) of physics play the pivotal role. It is from such a point of view that we investigate the scientific value of modern experiments that perform quantum tests of the equivalence principle. The determination of the Planck constant involves the use of the acceleration due to gravity of the earth (g), which results in the force on a test mass. The equivalence between the inertial mass and the gravitational mass of a test object is assumed in the process of logically defining g from the relevant hypotheses of physics. Consequently, if the Planck constant is used as input in any experiment (or in the associated theory that founds such an experiment) designed to test the equivalence between inertial and gravitational mass, then this is equivalent to establishing a scientific truth by implicitly assuming it, i.e. a tautology. There are several notable examples that plague the frontiers of current scientific research and claim to make quantum tests of the equivalence principle. We question the scientific value of such experiments from Hilbert's axiomatic point of view. This work adds to the recently reported semantic obstacle in any axiomatic attempt to put "quantum" and "gravity" together, albeit with an experimental tint.
[1003] vixra:2111.0026 [pdf]
Cauchy's Logico-Linguistic Slip, the Heisenberg Uncertainty Principle and a Semantic Dilemma Concerning ``Quantum Gravity''
The importance of language in physics has gained emphasis in recent times, on the one hand through Hilbert's views concerning formalism and intuition applied to outer inquiry, and on the other hand through Brouwer's point of view concerning intuition applied to inner inquiry or, as I call it, self-inquiry. It is to demonstrate the essence of such investigations, especially self-inquiry (inward intuition), that I find it compelling to report that a careful analysis of Cauchy's statements for the definition of the derivative, as applied in physics, unveils the connection to the Heisenberg uncertainty principle as a condition for the failure of classical mechanics. Such logico-linguistic, or semantically driven, self-inquiry of physics can provide new insights to physicists in the pursuit of truth and reality, for example, in the context of the Schroedinger equation. I point out an explicit dilemma that plagues the semantics of physics, as far as general relativity and quantum mechanics are concerned, which needs to be taken into account during any attempt to pen down a theory of ``quantum gravity''.
[1004] vixra:2111.0024 [pdf]
Tornado in Guatambu, Santa Catarina, Southern Brazil, Late Winter 2021 (Case Study)
The objective of this work is to analyze whether or not tornadoes occurred in the city of Guatambu, state of Santa Catarina (SC), southern Brazil, late on the night of September 13 and at dawn on September 14, 2021. The official agencies of the region issued alerts of the probable occurrence of tornadoes and strong storms in the area between northeastern Argentina, Uruguay and Rio Grande do Sul. A tornado, a rotating column of air that extends from a cloud to the ground, is the most violent windstorm on earth. The analysis of satellite maps indicated the occurrence of storms, with probable formation of tornadoes in the municipality of Guatambu, thus confirming reports from residents, official bodies such as the Civil Defense of Santa Catarina, and the state’s meteorological system. It is likely that a tornado formed in the municipality of Guatambu between 01:20 UTC and 02:10 UTC on September 14, 2021.
[1005] vixra:2111.0019 [pdf]
Margenau's Reduction of the Wave Packet
Margenau wanted to see reduction of the wave packet in terms of the Schrödinger equation. Here we will look at it in terms of non-locality.
[1006] vixra:2111.0015 [pdf]
A New Algorithm based on Extent Bit-array for Computing Formal Concepts
The emergence of Formal Concept Analysis (FCA) as a data analysis technique has increased the need for developing algorithms which can compute formal concepts quickly. The current efficient algorithms for FCA are variants of the Close-By-One (CbO) algorithm, such as In-Close2, In-Close3 and In-Close4, which are all based on horizontal storage of contexts. In this paper, based on the algorithm In-Close4, a new algorithm based on the vertical storage of contexts, called In-Close5, is proposed, which can significantly reduce both the time complexity and the space complexity of In-Close4. Technically, the new algorithm stores both the context and the extent of a concept as vertical bit-arrays, while in the In-Close4 algorithm the context is stored only as a horizontal bit-array, which makes finding the intersection of two extent sets very slow. Experimental results demonstrate that the proposed algorithm is much more effective than In-Close4, and it also has a broader scope of applicability in computing formal concepts, as it can solve problems that cannot be solved by the In-Close4 algorithm.
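The speed-up described in the abstract comes from representing extents as bit-arrays, so that intersecting two extents becomes a single bitwise operation. The following is a minimal sketch of that idea only, not the In-Close5 algorithm itself; the toy context and function names are illustrative assumptions.

```python
# Vertical (column-wise) context storage: each attribute's extent is one
# bit-array (here a Python int), so intersecting two extents is a single
# bitwise AND instead of a scan over object rows.

def column_bitsets(context):
    """context[i][j] == 1 iff object i has attribute j.
    Returns one bitset per attribute; bit i is set iff object i has it."""
    n_attrs = len(context[0])
    cols = [0] * n_attrs
    for i, row in enumerate(context):
        for j, bit in enumerate(row):
            if bit:
                cols[j] |= 1 << i
    return cols

def extent_of(cols, attrs, n_objects):
    """Extent of an attribute set = intersection of the attribute columns."""
    ext = (1 << n_objects) - 1  # start with all objects
    for j in attrs:
        ext &= cols[j]
    return ext

# Toy formal context: 4 objects x 3 attributes.
ctx = [[1, 1, 0],
       [1, 0, 1],
       [1, 1, 1],
       [0, 1, 1]]
cols = column_bitsets(ctx)
e = extent_of(cols, [0, 1], 4)      # objects having attributes 0 and 1
members = [i for i in range(4) if e >> i & 1]
print(members)  # objects 0 and 2
```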
[1007] vixra:2111.0014 [pdf]
Granule Description based on Compound Concepts
Concise granule descriptions for definable granules and approaching descriptions for indefinable granules are challenging and important issues in granular computing. The concept with only common attributes has been intensively studied. To investigate granules with some special needs, we propose a novel type of compound concept in this paper, i.e., the common-and-necessary concept. Based on the definitions of concept-forming operations, logical formulas are derived for each of the following types of concepts: formal concept, object-induced three-way concept, object-oriented concept and common-and-necessary concept. Furthermore, by utilizing the logical relationships among the various concepts, we derive concise and unified equivalent conditions for definable granules and approaching descriptions for indefinable granules for all four kinds of concepts.
[1008] vixra:2111.0010 [pdf]
Observation of Oscillation Symmetry in Nuclei Excited State Masses and Widths
A systematic study of hadron masses and widths shows regular oscillations which can be fitted by a simple cosine function. This oscillation symmetry is observed by studying the differences between adjacent masses of each nucleon family plotted versus the corresponding mean masses. It is also observed in the widths of excited levels, when plotted versus the corresponding masses. We observe the same distribution of periods versus the atomic number A between the nuclear mass data and the periods describing the atomic energy levels of several neutral atoms. The nuclear level width data are analysed in a way similar to that done for the masses. The distributions of the mass data between some different body families are compared.
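As a hedged illustration of fitting such an oscillation, the sketch below fits a simple cosine y = a·cos(x/b) to (mean value, adjacent difference) pairs by a coarse grid search on synthetic data; the paper's actual fit function and fitting procedure may differ.

```python
import math

def adjacent_diffs(masses):
    """Return (mean, difference) pairs for successive sorted masses."""
    m = sorted(masses)
    return [((m[i] + m[i + 1]) / 2, m[i + 1] - m[i]) for i in range(len(m) - 1)]

def fit_cosine(points, amps, periods):
    """Grid search for (a, b) minimizing squared error of a*cos(x/b)."""
    best = None
    for a in amps:
        for b in periods:
            err = sum((y - a * math.cos(x / b)) ** 2 for x, y in points)
            if best is None or err < best[0]:
                best = (err, a, b)
    return best[1], best[2]

# Synthetic check: data generated from 2*cos(x/3) is recovered exactly,
# because the true (a, b) lies on the search grid.
pts = [(x, 2 * math.cos(x / 3)) for x in range(1, 20)]
a, b = fit_cosine(pts, amps=[1, 2, 3], periods=[2, 3, 4])
print(a, b)  # 2 3
```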
[1009] vixra:2111.0008 [pdf]
Do the Exoplanet Properties Verify the Oscillation Symmetry ?
The oscillation symmetry is extended to exoplanets. A systematic study is done on a wide selection of data. The following properties, when known, are studied in order to check their agreement with the oscillation symmetry: masses, periods, radii and distances. It is shown that the data indeed oscillate. The parameters used to fit the data are discussed. It is shown that the same shape describes the oscillations of very different mass objects.
[1010] vixra:2110.0180 [pdf]
On Factorization of Multivectors in Cl(3,0), Cl(1,2) and Cl(0,3), by Exponentials and Idempotents
In this paper we consider general multivector elements of Clifford algebras Cl(3,0), Cl(1,2) and Cl(0,3), and look for possibilities to factorize multivectors into products of blades, idempotents and exponentials, where the exponents are frequently blades of grades zero (scalar) to n (pseudoscalar).
[1011] vixra:2110.0178 [pdf]
On the Prime Distribution
In this paper, an estimation formula for the number of primes in a given interval is obtained by using the prime distribution property. For any prime pair $p>5$ and $q>5$, we construct a sequence of disjoint infinite sets $A_1, A_2, \ldots, A_i, \ldots$, such that the number of prime pairs ($p_i$ and $q_i$, $p_i-q_i = p-q$) in $A_i$ increases gradually, where $i>0$. So the twin prime conjecture is true. We also prove that for any even integer $m>2700$, there exist more than 10 prime pairs $(p,q)$ such that $p+q=m$. Thus the Goldbach conjecture is true.
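The two counting notions the abstract relies on (prime pairs with a fixed difference, and Goldbach partitions of an even number) can be checked empirically for small ranges. This is a brute-force illustration only, not the paper's estimation formula:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def prime_pairs_with_gap(limit, gap=2):
    """Pairs (q, q+gap), both prime, with q+gap <= limit."""
    return [(q, q + gap) for q in range(2, limit - gap + 1)
            if is_prime(q) and is_prime(q + gap)]

def goldbach_partitions(m):
    """Pairs (q, p) with q <= p, q + p == m, both prime."""
    return [(q, m - q) for q in range(2, m // 2 + 1)
            if is_prime(q) and is_prime(m - q)]

print(len(prime_pairs_with_gap(100)))  # 8 twin-prime pairs up to 100
print(len(goldbach_partitions(100)))   # 6 Goldbach partitions of 100
```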
[1012] vixra:2110.0176 [pdf]
Classical Equations of an Electron from the Majestic Dirac System
The equivalent system of equations corresponding to the Dirac equation is derived and the WKB approximation of this system is found. Similarly, the WKB approximation for the equivalent system of equations corresponding to the squared Dirac equation is found, and it is proved that the Lorentz equation and the Bargmann-Michel-Telegdi equations follow from the new Dirac-Pardy system. The new tensor equation with the sigma matrix is derived for verification by suitable laboratories.
[1013] vixra:2110.0157 [pdf]
A Note on the Muon's Anomalous Magnetic Dipole Moment
We consider the computation of the muon's anomalous magnetic moment within the theoretical framework proposed in [1], in which field theory is only an approximation of a more fundamental description of the physical world. We discuss how the hadron contribution to the electromagnetic coupling strength is larger than in the Standard Model, while the other contributions remain unchanged. The extra amount precisely fills the gap between theoretical estimate and experimental value.
[1014] vixra:2110.0151 [pdf]
Foundations for Strip Adjustment of Airborne Laserscanning Data with Conformal Geometric Algebra
Typically, airborne laserscanning includes a laser mounted on an airplane or drone (its pulsed beam direction can scan in the flight direction and perpendicular to it), an inertial positioning system of gyroscopes, and a global navigation satellite system. The data, relative orientation and relative distance of these three systems are combined in computing strips of ground surface point locations in an earth-fixed coordinate system. Finally, all laserscanning strips are combined via iterative closest point methods into an interactive three-dimensional terrain map. In this work we describe the mathematical framework for how to use the iterative closest point method for the adjustment of the airborne laserscanning data strips in the framework of conformal geometric algebra.
[1015] vixra:2110.0148 [pdf]
Is the Photon's Superluminal Motion Possible?
In this article the concept of ''tachy-photons'' is introduced. The tachy-photons are photons emitted by an accelerating light source. The tachy-photons can travel faster than the speed of light, but their average speed is equal to the speed of light. Using the trajectories of tachy-photons, the apparent motion of an accelerating light source is calculated. This apparent motion of the light source is dramatically different from its actual motion.
[1016] vixra:2110.0146 [pdf]
Oscillations in Hypothalamic-Pituitary-Adrenal Axis
A structured model of the HPA axis that includes the glucocorticoid receptor (GR) is considered. The model includes nonlinear dynamics of pituitary GR synthesis. The nonlinear effect arises from the fact that GR homodimerizes after cortisol activation and induces its own synthesis in the pituitary. This homodimerization makes possible two stable steady states (low and high) and one unstable state. The model also includes a delay on stress. It is shown that the interplay between the trajectories of the dynamical system produced by the unstable manifold and the value of the delay time τ produces slowly oscillating asymptotic periodic oscillations of cortisol with a period greater than 2τ. It is shown that such oscillations exist only in an interval τ1 < τ < τ2, where exact formulas for τ1 and τ2 have been obtained. Such oscillations arise when the initial values of stress are larger than some threshold.
[1017] vixra:2110.0143 [pdf]
A Logico-Linguistic Inquiry Into the Foundations of Physics: Part I
Physical dimensions like ``mass'', ``length'', ``charge'', represented by the symbols $[M], [L], [Q]$, are {\it not numbers}, but used as {\it numbers} to perform dimensional analysis in particular, and to write the equations of physics in general, by the physicist. The law of excluded middle falls short of explaining the contradictory meanings of the same symbols. The statements like ``$m\to 0$'', ``$r\to 0$'', ``$q\to 0$'', used by the physicist, are inconsistent on dimensional grounds because ``$ m$'', ``$r$'', ``$q$'' represent {\it quantities} with physical dimensions of $[M], [L], [Q]$ respectively and ``$0$'' represents just a number -- devoid of physical dimension. Consequently, the involvement of the statement ``$\lim_{q\to 0}$, where $q$ is the test charge'' in the definition of electric field, leads to either circular reasoning or a contradiction regarding the experimental verification of the smallest charge in the Millikan-Fletcher oil drop experiment. Considering such issues as problematic, by choice, I make an inquiry regarding the basic language in terms of which physics is written, with an aim of exploring how truthfully the verbal statements can be converted to the corresponding physico-mathematical expressions, where ``physico-mathematical'' signifies the involvement of physical dimensions. Such investigation necessitates an explanation by demonstration of ``self inquiry'', ``middle way'', ``dependent origination'', ``emptiness/relational existence'', which are certain terms that signify the basic tenets of Buddhism. In light of such demonstration I explain my view of ``definition''; the relations among quantity, physical dimension and number; meaninglessness of ``zero quantity'' and the associated logico-linguistic fallacy; difference between unit and unity. 
Considering the importance of the notion of electric field in physics, I present a critical analysis of the definitions of electric field due to Maxwell and Jackson, along with the physico-mathematical conversions of the verbal statements. The analysis of Jackson's definition points towards an expression of the electric field as an infinite series due to the associated ``limiting process'' of the test charge. However, it brings out the necessity of a postulate regarding the existence of charges, which nevertheless follows from the definition of quantity. Consequently, I explain the notion of {\it undecidable charges} that act as the middle way to resolve the contradiction regarding the Millikan-Fletcher oil drop experiment. In passing, I provide a logico-linguistic analysis, in physico-mathematical terms, of two verbal statements of Maxwell in relation to his definition of electric field, which suggests Maxwell's conception of dependent origination of distance and charge (i.e. $[L]\equiv[Q]$) and that of emptiness in the context of relative vacuum (in contrast to modern absolute vacuum). This work is an appeal for the dissociation of the categorical disciplines of logic and physics and on the large, a fruitful merger of Eastern philosophy and Western science. Nevertheless, it remains open to how the reader relates to this work, which is the essence of emptiness.
[1018] vixra:2110.0142 [pdf]
Logic, Philosophy and Physics: a Critical Commentary on the Dilemma of Categories
I provide a critical commentary regarding the attitude of the logician and the philosopher towards the physicist and physics. The commentary is intended to showcase how a general change in attitude towards making scientific inquiries can be beneficial for science as a whole. However, such a change can come at the cost of looking beyond the categories of the disciplines of logic, philosophy and physics. It is through self-inquiry that such a change is possible, along with the realization of the essence of the middle that is otherwise excluded by choice. The logician, who generally holds a reverential attitude towards the physicist, can then actively contribute to the betterment of physics by improving the language through which the physicist expresses his experience. The philosopher, who otherwise chooses to follow the advancement of physics and gets stuck in the trap of sophistication of language, can then be of guidance to the physicist on intellectual grounds by having the physicist's experience himself. In course of this commentary, I provide a glimpse of how a truthful conversion of verbal statements to physico-mathematical expressions unravels the hitherto unrealized connection between Heisenberg uncertainty relation and Cauchy's definition of derivative that is used in physics. The commentary can be an essential reading if the reader is willing to look beyond the categories of logic, philosophy and physics by being `nobody'.
[1019] vixra:2110.0138 [pdf]
Enhancing the Weakening of the Conflict Evidence Using Similarity Matrix and Dispersion of Similarities in Dempster-Shafer Evidence Theory
The classic Dempster combination rule may produce illogical results when combining highly conflicting evidence. How to deal with highly conflicting evidence and obtain a reasonable result is critical. Modifying the evidence according to the importance of each piece of evidence (e.g., via a similarity matrix) is one significant strategy. However, the dispersion of evidence similarity is rarely taken into consideration, although it is also an important feature for distinguishing conflicting evidence from normal evidence. In this paper, a new method based on the similarity matrix and the dispersion of evidence similarity is proposed to evaluate the importance of evidence in Dempster-Shafer theory (DST). The proposed method helps to weaken the influence of conflicting evidence. Robustness of the proposed method is verified through sensitivity analysis of changes in the degree of conflict and the amount of credible evidence in DST. Some numerical examples are used to show the effectiveness of the proposed method.
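The "illogical results" of the classic rule on highly conflicting evidence can be reproduced with Zadeh's standard example. Below is a minimal sketch of Dempster's combination rule only; the proposed similarity/dispersion weighting is not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for (b, w1), (c, w2) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    # Normalize by 1 - K, where K is the conflict mass.
    return {a: w / (1.0 - conflict) for a, w in combined.items()}, conflict

# Zadeh's classic highly-conflicting example: both sources nearly rule out C,
# yet the unmodified rule assigns C all the mass -- the illogical outcome
# that evidence-modification strategies are designed to avoid.
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m1 = {A: 0.99, C: 0.01}
m2 = {B: 0.99, C: 0.01}
m, k = dempster_combine(m1, m2)
print(round(m[C], 6), round(k, 4))  # ≈ 1.0 and ≈ 0.9999
```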
[1020] vixra:2110.0121 [pdf]
Problem of Identity and Quadratic Equation
Given “ab = 0”, considering the arithmetic truth “0.0 = 0” we conclude that one possibility is “both a = 0 and b = 0”. Consequently, the roots of a quadratic equation are mutually inclusive. Therefore, the concerned variable can acquire multiple identities in the same process of reasoning or, at the same time. The law of identity gets violated, which we call the problem of identity. In current practice such a step of reasoning is ignored by choice, resulting in the subsequent denial of “0.0 = 0”. Here, we deal with the problem of identity without making such a choice of ignorance. We demonstrate that the concept “identity of a variable” is meaningful only in a given context and does not have any significance in isolation other than the symbol, that symbolizes the variable, itself. We demonstrate visually how we actually realize multiple identities of a variable at the same time, in practice, in the context of a given quadratic equation. In this work we lay the foundations, based on which we intend to bring forth some hitherto unattended facets of reasoning that concern two basic differential equations which are pivotal to the literature of physics.
[1021] vixra:2110.0120 [pdf]
On Odd Perfect Numbers
In this note, we introduce the notion of the disc induced by an arithmetic function and apply this notion to the odd perfect number problem. We show that under a certain special local condition an odd perfect number exists by exploiting this concept.
[1022] vixra:2110.0114 [pdf]
Algorithm for Finding the q^k-th Root of a [q prime, a ≡ x^(q^k) (mod p)]
We have created a handy tool that allows you to calculate the q^k-th root of a easily and quickly. However, the calculation may require a primitive root; if it does and you do not know the primitive root, please use the Tonelli-Shanks algorithm.
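For tiny moduli, the defining congruence can be checked by exhaustive search, which makes the problem statement concrete; the paper's algorithm (and Tonelli-Shanks for the square-root case) exists precisely to avoid this brute force.

```python
def qk_roots(a, q, k, p):
    """All x in [0, p) with x**(q**k) ≡ a (mod p), by exhaustive search.
    Only practical for tiny p; efficient algorithms avoid this scan."""
    e = q ** k
    return [x for x in range(p) if pow(x, e, p) == a % p]

# Example: cube roots (q=3, k=1) of 1 modulo 7, i.e. x^3 ≡ 1 (mod 7).
print(qk_roots(1, 3, 1, 7))  # [1, 2, 4]
```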
[1023] vixra:2110.0111 [pdf]
On a New Rule of Approximating Area under the Curve
I provide a new technique for approximating the area under a curve, using the Newton-Raphson method. I also provide a formula that helps us approximate any definite integral, or find the area under the curve, under certain conditions. The relative error of this formula is very small, which makes it even more interesting.
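The abstract does not state the new quadrature formula, so it is not reproduced here; the sketch below only shows the Newton-Raphson iteration it builds on, applied to a standard root-finding example.

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    """Standard Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2).
r = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(round(r, 10))  # 1.4142135624
```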
[1024] vixra:2110.0110 [pdf]
A Course of Mathematical Cartography For Engineers
This monograph presents a course of mathematical cartography for engineers including essentially the following elements: - the definitions of characteristic terms, - the types of plane cartographic representations or "projections", - some known examples, - and a set of problems and exercises for the reader.
[1025] vixra:2110.0105 [pdf]
Connections Between Hadronic Masses on the One Hand and Between Fundamental Particle Masses on the Other Hand
The oscillation symmetry is used to study the connections between masses and widths of a selection of the following states studied separately: mesons, baryons, nuclei, and hypernuclei. It is also applied to study the connection between leptonic, quark and boson masses and widths. With the exception of M$\approx$0 mass particles, all the fundamental particle masses are fitted by a single distribution inside the oscillation symmetry.
[1026] vixra:2110.0091 [pdf]
The Chessboard Puzzle
We introduce compact subsets in the plane and in R^3, which we call Polyorthogon and Polycuboid, respectively. We ask whether we can represent these sets by congruent bricks or mirrored bricks.
[1027] vixra:2110.0086 [pdf]
Application of the Oscillation Symmetry to the Electromagnetic Interactions of Some Particles and Nuclei
The oscillation symmetry is first applied to electromagnetic interactions of particles and nuclei. It is shown that the differences between successive masses plotted versus their mean values, and the electromagnetic decay widths $\Gamma_{ee}$ of $0^{-}(1^{--})$ $b\bar b$ and $c\bar c$ mesons plotted versus their masses, agree with this symmetry. Then it is shown that the variation of the energy differences between different levels of several nuclei from $^{8}$Be to $^{20}$Ne, corresponding to given electric or magnetic transitions, also displays oscillating behaviour. The electromagnetic widths of the electric and magnetic transitions between excited levels of these nuclei, plotted versus the corresponding energy differences, also agree with this property. The oscillation periods themselves describe an oscillation, the same for E1, M1, and E2 transitions; this is also the case for the multiplicative factor $\beta$ used, and for the ratios between these parameters. The oscillation symmetry is then applied to atomic energy levels of several neutral atoms from hydrogen up to phosphorus. The data exhibit nice oscillations when plotted in the same way as described before.
[1028] vixra:2110.0085 [pdf]
AniVid: A Novel Anime Video Dataset with Applications in Animation
Automating steps of the animation production process using AI-based tools would ease the workload of Japanese animators. Although there have been recent advances in the automatic animation of still images, the majority of these models have been trained on human data and thus are tailored to images of humans. In this work, I propose a semi-automatic and scalable assembling pipeline to create a large-scale dataset containing clips of anime characters’ faces. Using this assembling strategy, I create AniVid, a novel anime video dataset consisting of 34,221 video clips. I then use a transfer learning approach to train a first order motion model (FOMM) on a portion of AniVid, which effectively animates still images of anime characters. Extensive experiments and quantitative results show that FOMM trained on AniVid outperforms other trained versions of FOMM when evaluated on my test set of anime videos.
[1029] vixra:2110.0083 [pdf]
Quantum Gravitation and Inertia
Newton's Law of Universal Gravitation provides the basis for calculating the attraction force between two bodies, which is called the "gravitational force" \cite{Newton gravitation}. This Law uses the "mass" of bodies. Einstein's General Relativity Theory proposes to calculate this gravitational force by using the curvature of space-time. This space-time curvature is supposedly due to the same "mass" \cite{Einstein}. Stephen Hawking, in his book A Brief History of Time \cite{Hawkings}, supposes that the graviton particles of quantum mechanics are the intermediaries that "give mass" to bodies. However, there is no explanation of the nature of the gravitons or of how their interaction with bodies could "give them mass". This paper presents a new way of explaining how "mass" can be given to bodies. The starting point is an idea proposed in 1690 by Nicolas Fatio de Duillier, revisited here with new hypotheses and then further developed with the use of Bohmian quantum mechanics. It is shown, by means of reasoning and equations reflecting this reasoning, that the gravitational force between two bodies comes from the interaction between the revisited Nicolas Fatio's aether and the atomic nuclei of matter. It is also shown that the "mass" of a body is not a real entity, but an emerging phenomenon. This idea has already been suggested by Erik Verlinde in another context \cite{Verlinde}. Here, the emergence of "mass" is given by the interaction of the aether particles with the atomic nuclei of matter. The interesting point of Nicolas Fatio's theory is that it is able to explain not only the origin of the gravitational force, but also the origin of the inertial force. The origin of inertia comes from an induction phenomenon between Nicolas Fatio's aether and the atomic nuclei of matter. This paper uses Nicolas Fatio's own word for his medium, aether, to describe gravitation and inertia.
It has nothing to do with the Lorentz or Maxwell luminiferous aether, which has been disproved by the scientific community after the Michelson and Morley experiment.
[1030] vixra:2110.0062 [pdf]
Refutation of the Illusions of General Relativity using Maxwell Gravity
By using the gravitomagnetic effect and the special relativity theory, it is possible to accurately compute the gravitational redshift, the perihelion precession of Mercury, and the refraction of light by the sun, which were the initial proofs supporting the general relativity theory. So, it shows that the basis of the general relativity theory does not exist. In addition, the above corrections to the Lorentz force are presented as the gravitomagnetic and electromagnetic effects and the effect of the special relativistic Thomas–Wigner rotation.
[1031] vixra:2110.0059 [pdf]
Oscillation Symmetry Applied to Several Astrophysical Data. Attempt to Predict Some Properties of the Putative Ninth and Tenth New Solar Planets
The existence of opposite forces acting on astrophysical bodies implies that their properties should obey the oscillation symmetry. The oscillation symmetry is applied to several astrophysical properties: Nebula radii and magnitudes; Local Group galaxy masses, luminosities, and diameters; comet radii, orbital periods, and eccentricities; black hole masses, orbital periods, and distances from earth; star masses, magnitudes, and distances from the Sun. This symmetry is used to predict some still unknown astronomical properties, namely the properties of two possible additional solar planets. Using the predicted possible masses of these planets, the method allows one to predict their possible densities, rotation durations, revolution periods, orbital speeds, and eccentricities.
[1032] vixra:2110.0055 [pdf]
Benchmarking of Lightweight Deep Learning Architectures for Skin Cancer Classification using ISIC 2017 Dataset
Skin cancer is one of the deadly types of cancer and is common in the world. Recently, there has been a huge jump in the rate of people getting skin cancer. For this reason, the number of studies on skin cancer classification with deep learning is increasing day by day. For the growth of work in this area, the International Skin Imaging Collaboration (ISIC) organization was established, and it created an open dataset archive. In this study, images were taken from the ISIC 2017 Challenge. The skin cancer images were preprocessed and augmented. Then these images were trained with a transfer learning and fine-tuning approach, and deep learning models were created in this way. Three different mobile deep learning models and three different batch size values were chosen for each, giving a total of 9 models. Among these models, the NASNetMobile model with batch size 16 achieved the best result. The accuracy value of this model is 82.00%, the precision value is 81.77% and the F1 score is 0.8038. Our approach is to benchmark mobile deep learning models, which have few parameters, and compare the results of the models.
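The reported metrics (accuracy, precision, F1) all follow from confusion-matrix counts. A minimal sketch with illustrative counts, not the paper's actual confusion matrix:

```python
def precision_recall_f1(tp, fp, fn):
    """Classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)                         # of predicted positives, how many are right
    recall = tp / (tp + fn)                            # of actual positives, how many are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Illustrative counts only (hypothetical, not from the ISIC 2017 study).
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
print(p, r, round(f1, 4))  # 0.8 0.8 0.8
```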
[1033] vixra:2110.0038 [pdf]
Theory of Errors For The Technicians - Notions of The Least Squares Method
In this booklet, we give elements of the theory of errors and notions of the least squares method for technicians working in the fields of topography, geodesy and geomatics.
[1034] vixra:2110.0036 [pdf]
Directed Dependency Graph Obtained from a Continuous Data Matrix by the Highest Successive Conditionings Method.
In this paper, we propose a directed dependency graph learned from a continuous data matrix in order to extract the hidden oriented dependencies from this matrix. To each node of the dependency graph, we assign a random variable as well as a conditioning percentage linking parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we choose the one given by the highest successive conditionings method.
[1035] vixra:2110.0035 [pdf]
Index Type Hand Symbol(2)
We apply the principle of the index-type keyboard to hand signs. Because it is very easy to learn, not only people without hearing but also people with hearing can make full use of it.
[1036] vixra:2110.0033 [pdf]
Proof of Goldbach's Conjecture and Twin Prime Conjecture
In this paper, we prove the Twin Prime Conjecture and Goldbach’s Conjecture. We do this in three stages in turn: an application of the principle of mathematical induction, a proof of the Twin Prime Conjecture, and a proof of Goldbach’s Conjecture. These three proofs are interconnected, so they help prove each other. The proofs of the Twin Prime Conjecture and Goldbach’s Conjecture proceed by the application of the principle of mathematical induction, and the Twin Prime Conjecture is based on Goldbach’s Conjecture. So we obtain the result that the Twin Prime and Goldbach’s Conjectures are true. The reason we could obtain this result is that we use the twin primes’ characteristic that their difference is 2 and apply it with the application of the principle of mathematical induction. If this is proved in this way, it implies that the problem can be proved by a new method of proof.
[1037] vixra:2110.0019 [pdf]
Comment on: An Explanation of Dayton Miller’s Anomalous “Ether Drift” Result (English Version)
Thomas J. Roberts published an article in 2006 in which he claims, that Dayton C. Miller did not measure a real signal in his experiments. In this paper, the methods used are examined and it is shown that all claims are false.
[1038] vixra:2110.0013 [pdf]
Reinterpretation of Length Contraction Derivation from Lorentz Transformation and Derivation of Logical Relativistic Length
Roy Weinstein’s length contraction derivation process has been acknowledged by several researchers, from Gamow to Einstein. However, there are several problems in his derivation, and this study looked into them in detail. I have confirmed that if the problems found in Weinstein’s derivation process are removed, the length contraction equation is not derived; rather, the length expansion equation is derived. I also note that some experimental facts support length expansion.
[1039] vixra:2110.0012 [pdf]
Kommentar zu: An Explanation of Dayton Miller’s Anomalous “Ether Drift” Result <br>Comment on: An Explanation of Dayton Miller’s Anomalous “Ether Drift” Result
Thomas J. Roberts veröffentlichte 2006 einen Artikel, in dem er behauptet,daß Dayton C. Miller in seinen Experimenten kein echtes Signal gemessen hat. In dieser Arbeit werden die verwendeten Methoden untersucht und es wird gezeigt, daß alle Behauptungen falsch sind. <p> Thomas J. Roberts published an article in 2006 in which he claims, that Dayton C. Miller did not measure a real signal in his experiments. In this paper, the methods used are examined and it is shown that all claims are false.
[1040] vixra:2110.0010 [pdf]
Some Relatively High Inconsistencies in the Official Apollo Missions Data and an Alternative Scenario in Historical Context
The aim of the following article is not to cast doubt on the successful American manned lunar landings, since the 12 Saturn V rockets involved in the official Apollo missions had more than enough delta-v to achieve that goal, whatever the small precise mission details. The aim of the following article is to propose an alternative scenario to the official Apollo missions data, since the cold war, the deterrence strategy, military secrecy, the propaganda war, the ideological war, and the pressure and stress of the space race competition could have greatly affected the released official Apollo missions data. For example, only decades later did we learn that Yuri Gagarin had not landed inside his atmospheric re-entry capsule but with an individual parachute. To achieve that aim, we simulate or calculate what we can, look at what the easier practical solutions at that time were, and check the consistency of the official Apollo missions data.
[1041] vixra:2110.0009 [pdf]
Structures and Dynamics of Lamed Schur Flows with Vorticity but no Swirls
We consider the nontrivial existence, dynamics and indications of flows in which all eigenvalues of the velocity gradients are real, thus `lone', \textit{i.e.}, without forming the complex conjugate pairs associated with swirls. A generic prototype is the `lone Schur flow (LSF)', whose velocity gradient tensor is uniformly of Schur form but free of complex eigenvalues. A (partial) integral-differential equation governing such an LSF is established, and a semi-analytical algorithm is accordingly designed for computation. Simulated evolutions of example LSFs in 2- and 3-spaces show rich dynamics and vortical structures, but no obvious swirls (nor even homoclinic loops in whatever distorted forms) could be found. We discovered the flux loop scenario and an anisotropic analogue of incompressible turbulence at or close to the critical dimension $D_c = 4/3$ decimated from 2-space.
[1042] vixra:2110.0008 [pdf]
The Application of Mean Curvature Flow into Cosmology
In this paper, I apply a mathematical construction to physics. Mean curvature flow is a good candidate for describing the curvature of our universe; therefore, I interpret the Willmore energy as dark energy and the Hawking mass as dark matter. I also found that there was space-time before the Big Bang. Moreover, I derive new Friedmann equations by adding the Willmore energy to the Einstein-Hilbert action, to describe the evolution of our universe. This study also sheds new light on the fine-tuning problem.
[1043] vixra:2110.0004 [pdf]
On the Method of Dynamical Balls
In this paper we introduce and develop the notion of dynamical systems induced by a fixed $a\in \mathbb{N}$ and their associated induced dynamical balls. We develop tools for studying problems that require determining the convergence of certain sequences generated by iterating on a fixed integer.
[1044] vixra:2109.0213 [pdf]
Nonlinear Maxwell Equations
Based on the analysis of biquaternionic quadratic forms of the field, it is shown that Maxwell's equations arise as a consequence of the principle of conservation of the energy-momentum flux of the field in space-time. It turns out that this principle presupposes the existence of more general nonlinear field equations. The classical linear Maxwell equations are embedded in the new nonlinear equations in a special way and are a particular case of them. It is shown that, in a number of important cases, the nonlinear equations, in contrast to the linear ones, admit solutions with a swirling energy flux. The solutions of the equations obtained here make possible a wave description of charged particles within nonlinear classical electrodynamics. Particular attention is paid to the problem of separating the field into the "own" field of a charged particle and the "external" field with respect to it. Both the classical Maxwell equations themselves and the equations of motion of a charge under the Lorentz force follow from the nonlinear field equations. This solves the problem of finding nonlinear field equations that include interaction. Within this approach, the particle charge is electromagnetic (complex-valued), periodically passing through various linear combinations of electric and magnetic charges, from purely electric to purely magnetic. In real processes it is not the particle charge itself that plays a role, but its phase relationship with other charges and fields.
[1045] vixra:2109.0212 [pdf]
Infrared Spectroscopy of the Two Esters from 2,3,4,5-Tetrahydro-Oxepine Derivatives, New Nano Molecules
The work focused on computing the infrared spectra (IRS) of two esters derived from 2,3,4,5-tetrahydro-oxepine, here called C1 and C2. The IRS were obtained via ab initio Restricted Hartree-Fock computational methods: optimization of the molecular structure via UFF, followed by PM3, RHF/EPR-II and RHF/STO-6G, thus obtaining a stable structure at STP. The compositions obtained are C: 81.7%; H: 7.1%; N: 3.4%; O: 7.8%, formula weight 411.53536 g/mol and molecular formula C28H29NO2 for C1, and C: 70.6%; H: 7.4%; N: 10.3%; O: 11.7%, formula weight 544.68439 g/mol and molecular formula C32H40N4O4 for C2. The highest vibrational absorbance peaks are found at 1793.58, 1867.14 and 1956.39 cm-1 for C1, and at 1368.99, 1409.43 and 1790.47 cm-1 for C2. Limitations: our study has so far been limited to computational simulation via quantum mechanics (QM), an applied theory. Our results and calculations are compatible with QM theory.
[1046] vixra:2109.0208 [pdf]
The History of The Astronomical Campaign of Laplace Geodetic Points
In this note, we describe the history of the astronomical observations and of the measurements of the bases of the eight triangles of the Primordial Tunisian Geodetic Network, carried out in 1982 as part of the modernization of Tunisian geodesy undertaken by the Office of Topography and Cartography (OTC).
[1047] vixra:2109.0202 [pdf]
Oxford Dictionary of Media and Communication and the Graphical Law
We study the Oxford Dictionary of Media and Communication by Daniel Chandler and Rod Munday. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0$), i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours and $\beta H=0$, the absence of an external magnetic field H. $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature and $k_{B}$ is the Boltzmann constant.
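The plotting recipe used throughout these graphical-law abstracts (normalised log count vs normalised log rank) can be sketched in a few lines. The letter counts below are invented purely for illustration and are not taken from the dictionary:

```python
import math

# Hypothetical letter counts for a toy dictionary -- invented numbers,
# standing in for "number of entries starting with each letter".
counts = {"a": 520, "b": 340, "c": 610, "d": 290, "e": 180,
          "f": 250, "g": 200, "h": 150, "i": 120, "m": 430}

# Rank the letters by count (rank 1 = most common) and normalise both axes
# by their maxima, as the abstracts describe.
ranked = sorted(counts.values(), reverse=True)
n_max, k_max = ranked[0], len(ranked)

# (x, y) = (ln(rank/k_max), ln(count/n_max)); fitting such points against a
# Bethe-Peierls magnetisation curve is the papers' "graphical law" step.
points = [(math.log(k / k_max), math.log(n / n_max))
          for k, n in enumerate(ranked, start=1)]

for x, y in points:
    print(f"{x:+.3f}  {y:+.3f}")
```

The fit of these points to a reduced-magnetisation curve is done graphically in the papers; this sketch only produces the normalised log-log points.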
[1048] vixra:2109.0197 [pdf]
Sums of P-Sequences
In this article, we obtain closed expressions for odd and even sums, the sum of the first n numbers, and the sum of squares of the first n numbers of the "exponent" p-sequence whose "seeds" are (0,1,...,p-1).
[1049] vixra:2109.0192 [pdf]
Golden Ratios and Golden Angles
In a p-sequence, every term is the sum of p previous terms given p initial values called seeds. It is an extension of the Fibonacci sequence. In this article, we investigate the p-golden ratio of p-sequences. We express a positive integer power of the p-golden ratio as a polynomial of degree p-1, and obtain values of golden angles for different p-golden ratios. We also consider further generalizations of the golden ratio.
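As an illustration of the limiting-ratio idea, the sketch below finds the p-golden ratio numerically and evaluates a golden-angle analogue. The formula 360(1 - 1/ratio) degrees reproduces the classical ~137.5° for p = 2; applying it to general p is an assumption of this sketch, not necessarily the article's definition:

```python
def p_golden_ratio(p, tol=1e-12):
    """Largest real root of x**p = x**(p-1) + ... + x + 1, by bisection.
    For p = 2 this is the classical golden ratio (1 + sqrt(5)) / 2."""
    f = lambda x: x ** p - sum(x ** k for k in range(p))
    lo, hi = 1.0, 2.0                  # the root lies in (1, 2) for p >= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def golden_angle_deg(ratio):
    """Split the full turn in the proportion 1 : ratio and return the
    smaller arc, 360*(1 - 1/ratio) degrees; the classical golden ratio
    gives the familiar ~137.5 degrees."""
    return 360.0 * (1.0 - 1.0 / ratio)

for p in (2, 3, 4):
    g = p_golden_ratio(p)
    print(p, round(g, 6), round(golden_angle_deg(g), 3))
```

For p = 2 this prints the golden ratio 1.618034 and the golden angle ~137.508°; larger p give larger ratios and correspondingly larger angles.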
[1050] vixra:2109.0185 [pdf]
Fibonacci Sequence, Golden Ratio and Generalized Additive Sequences
In this article, we recall the Fibonacci sequence, the golden ratio, their properties and applications, and some early generalizations of the golden ratio. The Fibonacci sequence is a 2-sequence because it is generated by the sum of two previous terms. As a natural extension of this, we introduce several typical p-sequences where every term is the sum of p-previous terms given p initial values called seeds. In particular, we introduce the notion of 1-sequence. We then discuss generating functions and limiting ratio values of p-sequences. Furthermore, inspired by Fibonacci's rabbit pair problem, we consider a general problem whose particular cases lead to nontrivial additive sequences.
[1051] vixra:2109.0179 [pdf]
Quantum Theory of Gravity: A New Formulation of the Gupta-Feynman Based Quantum Field Theory of Einstein Gravity II
In this manuscript we construct the Quantum Field Theory (QFT) of Einstein's Gravity (EG), based on developments previously made by Suraj N. Gupta and Richard P. Feynman, using a new and more general mathematical theory based on Ultrahyperfunctions \cite{ss}. Ultrahyperfunctions (UHF) are the generalization and extension to the complex plane of Schwartz tempered distributions. This manuscript is an {\bf application} to Einstein's Gravity of the mathematical theory developed by Bollini et al \cite{br1, br2, br3, br4} and continued for more than 25 years by one of the authors of this paper. A simplified version of these results was given in \cite{pr2} and, based on them (restricted to Lorentz invariant distributions), a QFT of EG \cite{pr1} was obtained. We quantize EG using the {\bf most general quantization approach}, the Schwinger-Feynman variational principle \cite{vis}, which is more appropriate and rigorous than the popular functional integral method (FIM). FIM is not applicable here because our Lagrangian contains derivative couplings. We use the Einstein Lagrangian as obtained by Gupta \cite{g1,g2,g3}, but we add a new constraint to the theory. Thus the problem of the lack of unitarity of the $S$ matrix that appears in the procedures of Gupta and Feynman is solved. Furthermore, we considerably simplify the handling of constraints, eliminating the need to appeal to ghosts to guarantee the unitarity of the theory. Our theory is obviously non-renormalizable. However, this inconvenience is overcome by resorting to the theory developed by Bollini et al. \cite{br1,br2,br3,br4,pr2}, which is based on the thesis of Alexander Grothendieck \cite{gro} and on the theory of Ultrahyperfunctions of Jose Sebastiao e Silva \cite{ss}. Building on these papers, a complete theory able to quantize non-renormalizable Field Theories (FT) has been constructed over 25 years. Because we use a Gupta-Feynman based EG Lagrangian together with the new mathematical theory, we avoid, as already mentioned, the use of ghosts and obtain a unitary QFT of EG.
[1052] vixra:2109.0178 [pdf]
Optimality in Noisy Importance Sampling
Many applications in signal processing and machine learning require the study of probability density functions (pdfs) that can only be accessed through noisy evaluations. In this work, we analyze the noisy importance sampling (IS), i.e., IS working with noisy evaluations of the target density. We present the general framework and derive optimal proposal densities for noisy IS estimators. The optimal proposals incorporate the information of the variance of the noisy realizations, proposing points in regions where the noise power is higher. We also compare the use of the optimal proposals with previous optimality approaches considered in a noisy IS framework.
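A toy sketch of noisy importance sampling in the spirit of this abstract (not the paper's optimal-proposal construction): the target density is only available through unbiased noisy evaluations, and the self-normalised estimator remains consistent. All densities and parameter values are invented for illustration:

```python
import math
import random

random.seed(0)

# Target: standard normal density p(x); we estimate E_p[x^2] = 1 exactly.
def p(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

Q_DENSITY = 0.1          # proposal q = Uniform(-5, 5), density 1/10

def noisy_is_estimate(n, noise_std):
    """Self-normalised importance sampling where every evaluation of the
    target p(x) is corrupted by multiplicative noise of mean 1, so the
    noisy weight is an unbiased estimate of the true weight p(x)/q(x)."""
    num = den = 0.0
    for _ in range(n):
        x = random.uniform(-5.0, 5.0)
        noisy_p = p(x) * (1.0 + random.gauss(0.0, noise_std))
        w = noisy_p / Q_DENSITY
        num += w * x * x
        den += w
    return num / den

est = noisy_is_estimate(200_000, noise_std=0.5)
print(est)   # close to the true value 1.0
```

The noise inflates the estimator's variance without destroying consistency, which is why the paper's optimal proposals place extra mass where the noise power is higher.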
[1053] vixra:2109.0171 [pdf]
Bell's Theorem Refuted: Einstein and Locality Prevail
In our terms, this is Bell's 1964 theorem: 'No local hidden-variable theory can reproduce exactly the quantum mechanical predictions.' Against this, and bound by what Bell takes to be Einstein's definition of locality, we refute Bell's theorem and reveal his error. We show that Einstein was right: the physical world is local; and we advance Einstein's quest to make quantum mechanics intelligible in a classical way. With respect to understanding, and taking mathematics to be the best logic, the author is as close as an email: eprb@me.com
[1054] vixra:2109.0166 [pdf]
Proof of the Riemann Hypothesis
In this article we will prove the problem equivalent to the Riemann Hypothesis developed by Luis-Báez in the article ``A sequential Riesz-like criterion for the Riemann hypothesis''.
[1055] vixra:2109.0164 [pdf]
The Graphical Law Behind the NTC's Hebrew and English Dictionary by Arie Comey and Naomi Tsur
We study the NTC's Hebrew and English Dictionary, 2000 edition (The Most Practical and Easy-to-Use Dictionary of Modern Hebrew and English), by Arie Comey and Naomi Tsur. We draw the natural logarithm of the number of the Hebrew words, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We find that the NTC's Hebrew words underlie a magnetisation curve of a Spin-Glass in the presence of little external magnetic field. We obtain one third as the naturalness number of the Hebrew as seen through this dictionary.
[1056] vixra:2109.0162 [pdf]
Weak-Measurement Induced Quantum Discord and Monogamy of X States
Weak-measurement induced quantum discord or super quantum discord (SQD) is a generalization of the normal quantum discord and is defined as the difference between quantum mutual information and classical correlation obtained by weak measurements in a given quantum system. This correlation is an information-theoretic measure and is, in general, different from entanglement-separability measures such as entanglement. Super quantum discord may be nonzero even for certain separable states. So far, SQD has been calculated explicitly only for a limited set of two-qubit quantum states and expressions for more general quantum states are not known. In this article, we derive explicit expressions for SQD for X states, a seven real-parameter family of two-qubit states and investigate its monogamy properties. The monogamy behaviour of SQD depends on the measurement strength. The formalism can be easily extended to N-qubit X states.
[1057] vixra:2109.0161 [pdf]
Is The Riemann Hypothesis True? Yes, It Is. v(4)
In 1859, Georg Friedrich Bernhard Riemann announced the following conjecture, called the Riemann Hypothesis: the nontrivial roots (zeros) $s=\sigma+it$ of the zeta function, defined by $$\zeta(s) = \sum_{n=1}^{+\infty}\frac{1}{n^s}, \quad \mbox{for} \quad \Re(s)>1,$$ have real part $\sigma=\frac{1}{2}$. We give the proof that $\sigma=\frac{1}{2}$ using an equivalent statement of the Riemann Hypothesis concerning the Dirichlet $\eta$ function.
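The equivalence being invoked rests on the identity $\eta(s) = (1 - 2^{1-s})\,\zeta(s)$ relating the Dirichlet eta function to zeta. A quick numerical sanity check of that identity (not, of course, of the claimed proof) might look like:

```python
import math

# Truncated Dirichlet series for zeta and eta (valid for Re(s) > 1).
def zeta(s, terms=100_000):
    return sum(n ** -s for n in range(1, terms + 1))

def eta(s, terms=100_000):
    return sum((-1) ** (n - 1) * n ** -s for n in range(1, terms + 1))

# Check eta(s) = (1 - 2**(1 - s)) * zeta(s) at s = 2, where both sides
# should be close to pi**2 / 12.
s = 2.0
print(eta(s), (1.0 - 2.0 ** (1.0 - s)) * zeta(s), math.pi ** 2 / 12.0)
```

The alternating eta series converges far faster than the zeta series, which is precisely why eta-based criteria extend arguments into the critical strip.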
[1058] vixra:2109.0159 [pdf]
Quantum Theory of Gravity: A New Formulation of the Gupta-feynman Based Quantum Field Theory of Einstein Gravity
In this manuscript we construct the Quantum Field Theory (QFT) of Einstein's Gravity (EG), based on developments previously made by Suraj N. Gupta and Richard P. Feynman, using a new and more general mathematical theory based on Ultrahyperfunctions \cite{ss}. Ultrahyperfunctions (UHF) are the generalization and extension to the complex plane of Schwartz tempered distributions. This manuscript is an {\bf application} to Einstein's Gravity of the mathematical theory developed by Bollini et al \cite{br1, br2, br3, br4} and continued for more than 25 years by one of the authors of this paper. A simplified version of these results was given in \cite{pr2} and, based on them (restricted to Lorentz invariant distributions), a QFT of EG \cite{pr1} was obtained. We quantize EG using the {\bf most general quantization approach}, the Schwinger-Feynman variational principle \cite{vis}, which is more appropriate and rigorous than the popular functional integral method (FIM). FIM is not applicable here because our Lagrangian contains derivative couplings. We use the Einstein Lagrangian as obtained by Gupta \cite{g1,g2,g3}, but we add a new constraint to the theory. Thus the problem of the lack of unitarity of the $S$ matrix that appears in the procedures of Gupta and Feynman is solved. Furthermore, we considerably simplify the handling of constraints, eliminating the need to appeal to ghosts to guarantee the unitarity of the theory. Our theory is obviously non-renormalizable. However, this inconvenience is overcome by resorting to the theory developed by Bollini et al. \cite{br1,br2,br3,br4,pr2}, which is based on the thesis of Alexander Grothendieck \cite{gro} and on the theory of Ultrahyperfunctions of Jose Sebastiao e Silva \cite{ss}. Building on these papers, a complete theory able to quantize non-renormalizable Field Theories (FT) has been constructed over 25 years. Because we use a Gupta-Feynman based EG Lagrangian together with the new mathematical theory, we avoid, as already mentioned, the use of ghosts and obtain a unitary QFT of EG.
[1059] vixra:2109.0158 [pdf]
Signal Transmission in the Schwarzschild Metric: an Analogy with Special Relativity
From the perspective of a distant observer, a free-falling body in the Schwarzschild metric would require an infinite time to reach the Schwarzschild radius, whereas a comoving observer would measure just a finite interval of proper time along that path. This paradoxical situation is commonly interpreted by dismissing the distant observer's perspective as a simple "artifact" due to the enormous delay of the light signals emitted by the free-falling body during its fall, which is "already" completed. This interpretation of relativistic mechanics is intrinsically inconsistent, as shown in this article. We propose an alternative elucidation based on the analogy between the asymptotic trajectory of a free-falling body approaching the event horizon of a Schwarzschild black hole and an accelerated body whose speed tends exponentially and asymptotically to the speed of light in Special Relativity.
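The quantitative core of the paradox can be checked numerically: for radial free fall from rest at $r_0$ in the Schwarzschild metric (geometric units), $d\tau/dr = -1/\sqrt{r_s/r - r_s/r_0}$ stays integrable down to the horizon, while $dt/dr = -E/[(1 - r_s/r)\sqrt{r_s/r - r_s/r_0}]$ diverges there. A rough midpoint-rule sketch (step counts and radii are illustrative):

```python
import math

rs = 1.0                      # Schwarzschild radius (geometric units G = c = 1)
r0 = 10.0 * rs                # the body starts from rest at r0
E = math.sqrt(1.0 - rs / r0)  # conserved energy per unit rest mass

def fall_times(r_end, steps=100_000):
    """Midpoint-rule integration of proper time tau and coordinate time t
    for radial free fall from rest at r0 down to r_end > rs."""
    tau = t = 0.0
    dr = (r0 - r_end) / steps
    for i in range(steps):
        r = r0 - (i + 0.5) * dr
        v = math.sqrt(rs / r - rs / r0)          # |dr/dtau|
        tau += dr / v
        t += dr * E / ((1.0 - rs / r) * v)       # diverges as r_end -> rs
    return tau, t

for delta in (1e-1, 1e-2, 1e-3):
    tau, t = fall_times(rs * (1.0 + delta))
    print(f"r_end = rs*(1+{delta:g}):  tau = {tau:7.3f},  t = {t:8.3f}")
```

As the end radius approaches $r_s$, the proper time plateaus while the coordinate time grows roughly logarithmically in the remaining distance, which is the asymptotic behaviour the abstract compares to exponential approach to the speed of light.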
[1060] vixra:2109.0146 [pdf]
Reckoning Dimensions
In this article, we seek an alternative avenue--in contrast to the conventional hypercube approach--to reckon physical or abstract dimensions from an information perspective alone. After briefly reviewing ``bit'' and ``quantum of information--it'', we propose a scheme to perceive higher dimensions using bits and concentric spherical shells that are intrinsically entangled.
[1061] vixra:2109.0145 [pdf]
Matter, Consciousness, and Causality--Space, Time, Measurement, and more
In this article, we refine the elements of physics. We consider [primordial] matter and consciousness as eternal and as the causes of the creation of the universe via causality. We regard causality as the fundamental and ecumenical principle of the universe. Furthermore, we define space and time in terms of cause and effect, and revisit other important notions in physics.
[1062] vixra:2109.0142 [pdf]
Hypothesis of a Violation of Lorentz Invariance in the Aether Theory and Confirmation by the Experiments of D. C. Miller (English Version)
It is hypothesized that the refractive index of moving gases in their rest frame becomes anisotropic. Therefore interferometers with air in the light path should be able to measure a phase shift. The theoretical signal is derived from Lorentz's aether theory. The hypothesis is tested against historical data from Dayton C. Miller's experiments on Mount Wilson in 1925–1926. A suitable signal is found in selected data, confirming the aether theory. Using curve fitting, the speed v and the apex, in equatorial coordinates (α, δ), of the motion of the solar system in the aether were determined. The smallest deviation of the theory from the data results with the parameters v = (326 ± 17) km/s, α = (11.0 ± 0.2) h, δ = (-11 ± 5)°.
[1063] vixra:2109.0141 [pdf]
Langenscheidt Taschenwörterbuch Deutsch-Englisch / Englisch-Deutsch, Völlige Neubearbeitung and the Graphical Law
We study the Langenscheidt Taschenwörterbuch Deutsch-Englisch / Englisch-Deutsch, Völlige Neubearbeitung dictionary, 2014 edition. We draw the natural logarithm of the number of German words, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We find that the words underlie a magnetisation curve of a Spin-Glass in the presence of a little external magnetic field. We notice that there is no qualitative change compared to Langenscheidt's German-English English-German Dictionary, 1970 edition.
[1064] vixra:2109.0133 [pdf]
Adding Boundary Terms to Anderson Localized Hamiltonians Leads to Unbounded Growth of Entanglement
It is well known that in Anderson localized systems, starting from a random product state the entanglement entropy remains bounded at all times. However, we show that adding a single boundary term to an otherwise Anderson localized Hamiltonian leads to unbounded growth of entanglement. Our results imply that Anderson localization is not a local property. One cannot conclude that a subsystem has Anderson localized behavior without looking at the whole system, as a term that is arbitrarily far from the subsystem can affect the dynamics of the subsystem in such a way that the features of Anderson localization are lost.
[1065] vixra:2109.0129 [pdf]
Basic Mathematical Reminders For Assistants and Technical Agents
In this booklet, we provide the mathematical foundations necessary to follow the training courses in geodesy and topography. It is a reminder of the main formulas and knowledge in mathematics for assistants and technical agents.
[1066] vixra:2109.0126 [pdf]
Analysis of Psychological Factors in Cases of Love Killing
This article analyzes the nature, influencing factors, and countermeasures of love killings, proposing that love killings are extreme killings arising from emotional conflicts and entanglements. The emotional disputes involved are divided into four types: breakup disputes, love rival disputes, emotional infidelity, and courtship rejection. The author explores the psychological factors related to the occurrence of love killings: social rejection, frustration, stress, self-esteem, parental rearing styles, and psychological flexibility. Based on these factors, the author draws on Fromm's views on love to explain what mature love is, and proposes that, to avoid more love killings, individuals should improve their psychological flexibility and know how to identify and stay away from dangerous lovers in time.
[1067] vixra:2109.0124 [pdf]
A Proposed Solution to Problems in Learning the Knowledge Needed by Self-Driving Vehicles
Three problems in learning knowledge for self-driving vehicles are: how a finite sample of information about driving, N, can yield an ability to deal with the infinity of possible driving situations; the problem of generalising from N without over- or under-generalisation; and how to weed out errors in N. A theory developed with computer models to explain a child’s learning of his or her first language, now incorporated in the SP System, suggests: compress N as much as possible by a process that creates a grammar, G, and an encoding of N in terms of G called E. Then discard E which contains all or most of the errors in N, and retain G which solves the first two problems.
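The "compress N into a grammar G plus an encoding E" idea can be illustrated with a toy byte-pair-style grammar inducer; this is only a cartoon of grammar-based compression, not the SP System's actual mechanism:

```python
def build_grammar(tokens, max_rules=10):
    """Toy byte-pair-style grammar induction: repeatedly replace the most
    frequent adjacent pair with a fresh rule symbol. This merely cartoons
    the idea of compressing data N into a grammar G plus an encoding E;
    the SP System itself works very differently."""
    grammar = {}
    tokens = list(tokens)
    for rule_id in range(max_rules):
        pairs = {}
        for a, b in zip(tokens, tokens[1:]):
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
        if not pairs:
            break
        best, count = max(pairs.items(), key=lambda kv: kv[1])
        if count < 2:
            break                            # nothing repeats: stop
        rule = f"R{rule_id}"
        grammar[rule] = best
        out, i = [], 0
        while i < len(tokens):               # rewrite N using the new rule
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                out.append(rule)
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return grammar, tokens                   # (G, encoding E of N)

G, E = build_grammar("abababab")
print(G, E)   # G captures the repeated structure; E is what gets discarded
```

In the paper's terms, the recurring structure ends up in G while the residual, error-prone specifics end up in E, which is then thrown away.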
[1068] vixra:2109.0121 [pdf]
The Riemann Hypothesis and Tachyonic Off-Shell String Scattering Amplitudes
The study of the ${\bf 4}$-tachyon off-shell string scattering amplitude $ A_4 (s, t, u) $, based on Witten's open string field theory, reveals the existence of a continuum of poles in the $s$-channel, corresponding to a continuum of complex spins $ J $. The latter spins $ J$ belong to the Regge trajectories in the $ t, u$ channels which are defined by $ - J (t) = - 1 - { 1\over 2 } t = \beta (t)= { 1\over 2 } + i \lambda $; $ - J (u) = - 1 - { 1\over 2 } u = \gamma (u) = { 1\over 2 } - i \lambda $, with $ \lambda = real$. These values of $ \beta ( t ), \gamma (u) $ given by ${ 1\over 2 } \pm i \lambda $, respectively, coincide precisely with the location of the critical line of nontrivial Riemann zeta zeros $ \zeta (z_n = { 1\over 2 } \pm i \lambda_n) = 0$. We proceed to prove that if there were nontrivial zeta zeros (violating the Riemann Hypothesis) outside the critical line $ Real~ z = 1/2 $ (but inside the critical strip), these putative zeros $don't$ correspond to any $poles$ of the ${\bf 4}$-tachyon off-shell string scattering amplitude $ A_4 ( s, t , u ) $. One of the most salient features of these results is the $collinearity$ of the ${\bf 4}$ off-shell tachyons. We may speculate that this spatial $collinearity$ is actually reflected in the $collinearity$ of the poles of the string amplitude, lying in the critical line: $ \beta = \gamma^* = { 1\over 2 } + i \lambda$, where the nontrivial zeta zeros are located. We finalize with some concluding remarks on continuous spins, non-commutative geometry and other relevant topics.
[1069] vixra:2109.0119 [pdf]
An Arabic Dictionary: "Al-Mujam al-W\'{a}fi" Or, "Adhunik Arabi-Bangla Abhidhan" and the Onsager's Solution
We consult an Arabic dictionary: "al-Mujam al-w\'{a}fi" or, "adhunik arabi-bangla abhidhan" by Dr. M. Fazlur Rahman. We draw the natural logarithm of the number of words, normalised, starting with a letter vs the natural logarithm of the rank of the letter. We find that the words underlie a magnetisation curve. The magnetisation curve i.e. the graph of reduced magnetisation vs reduced temperature is the exact Onsager solution of two dimensional Ising model in the absence of external magnetic field.
[1070] vixra:2109.0113 [pdf]
Unishox a Hybrid Encoder for Short Unicode Strings
Unishox is a hybrid encoding technique with which short unicode strings can be compressed using context-aware pre-mapped codes and delta coding, resulting in surprisingly good compression ratios.
[1071] vixra:2109.0111 [pdf]
Gravitational Monopole
I prove the existence of a new exact solution of the Einstein field equations for a massless gravitoelectromagnetic monopole in the linear approximation for a weak gravitational field.
[1072] vixra:2109.0080 [pdf]
The French, Larousse Dictionnaire De Poche and the Graphical law
We study the Larousse Dictionnaire De Poche, Fran\c{c}ais Anglais / Anglais Fran\c{c}ais. We draw the natural logarithm of the number of French words, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H$=0), i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours in the absence of an external magnetic field H. $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature and $k_{B}$ is the Boltzmann constant. We infer that the French, the Italian and the Spanish come in the same language group. Moreover, we put two dictionaries of philosophy and science on the same platform with that of the French. We surmise that philosophy owes more to the French speakers than science does.
[1073] vixra:2109.0072 [pdf]
Orthogonality of Two Lines and Division by Zero Calculus
In this paper, we give a pleasant representation of the orthogonality of two lines by means of the division by zero calculus. Two lines with gradients $m$ and $M$ are orthogonal if $mM = -1$; so states our common sense. However, note that for the typical case of the $x, y$ axes, the statement is not valid. Even for high school students, the new results and ideas may come as a pleasant surprise.
[1074] vixra:2109.0071 [pdf]
Revised Collatz Graph Explains Predictability
The Collatz Conjecture remains an intriguing problem. Expanding on an altered formula first presented in “Bits of Complexity”, this paper explores the concept that “3N + Least-Significant-Bit” of a number allows the complete separation of the step of “dividing by two on an even number” within the Collatz Conjecture. This alternate formula replaces the original Collatz on a one-for-one basis. The breadth and depth of graphs of the resulting path of numbers resolving with this new formula illustrate its fractal nature. Lastly, we explore the predictability of this data, and how the ultimate goal of reaching one prevented previous work from finding the key to understanding 3N+1.
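The paper's exact formula is not reproduced in this abstract; under the assumption that "3N + Least-Significant-Bit" means $3N + (N \bmod 2)$ with all halvings collected as a separate step, a sketch of the resulting trajectory looks like:

```python
def collatz_variant_trajectory(n):
    """Iterate m = 3*n + (n % 2), then strip all factors of two as a
    separate, explicitly counted step. Reading '3N + Least-Significant-Bit'
    this way is an assumption of this sketch, not the paper's exact formula.
    Returns a list of (value visited, halvings absorbed) pairs."""
    steps = []
    while n != 1:
        m = 3 * n + (n % 2)      # for odd n this is the familiar 3n + 1
        halvings = 0
        while m % 2 == 0:        # the separated "divide by two" step
            m //= 2
            halvings += 1
        steps.append((n, halvings))
        n = m
    return steps

print(collatz_variant_trajectory(7))
# [(7, 1), (11, 1), (17, 2), (13, 3), (5, 4)] -- ends at 1
```

Separating the halving step like this exposes the run lengths of divisions by two, which is the kind of structure whose graphs the paper describes as fractal.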
[1075] vixra:2109.0052 [pdf]
Pinhole Cameras and Division by Zero Calculus
From the elementary example of pinhole cameras, the essential idea of the division by zero calculus can be seen and, at the same time, some significant consequences for rational mappings are indicated, together with the basic interrelation between zero and infinity. A strong discontinuity property at infinity may be regarded as a very interesting feature.
[1076] vixra:2109.0038 [pdf]
The Free Fall of Photon in Gravity
We calculate the acceleration of a point mass and of a photon by gravity. We discuss the motion of a photon accelerated by gravity and arrive at substantial statements concerning the physical meaning of the photon.
[1077] vixra:2109.0035 [pdf]
Sedeonic Generalization of Hydrodynamic Model of Vortex Plasma
The noncommutative algebra of space-time sedeons is used for the generalization of the system of nonlinear self-consistent equations in hydrodynamic two-fluid model of vortex plasma. This system describes both longitudinal flows as well as the rotation and twisting of vortex tubes taking into account internal electric and magnetic fields generated by fluctuations of plasma parameters. As an illustration we apply the proposed equations for the description of sound waves in electron-ion and electron-positron plasmas.
[1078] vixra:2109.0023 [pdf]
Gupta-Feynman Based Quantum Theory of Gravity and the Compressed Space
In this work we develop the quantum theory of gravity in gravitationally compressed space. The equivalence of spatial compression to the Lorentz contraction of special relativity, supported by the relative gravitational red-shift using the black-hole clock, leads to the brane potential and gives the minimum length at which the extra dimensions become dominant, comparable to the Schwarzschild radius. For the Planck mass, the minimum length is almost the Planck length. When quantizing the theory, we find that the quanta responsible for the evolution of time, the graviton for luminous matter and the axion for dark matter, have the property that in compressed gravitational space their naked and dressed propagators are equal, coinciding with the corresponding naked propagators.
[1079] vixra:2109.0021 [pdf]
On Sums of Product of Powers of Palindromic Sequence and Arithmetic Progression
In this paper, we combine a real or complex palindromic sequence with an arithmetic sequence to produce sums of products of powers of palindromic-arithmetic sequences. As a result, we generate new expressions for the Franel numbers as well as the first Strehl identity.
[1080] vixra:2109.0013 [pdf]
A Possible Logical Constraint on the Validity of General Relativity for Strong Gravitational Fields
The axiomatic foundation of Einstein's theory of General Relativity is discussed from an epistemological point of view, yielding a possible logical restriction on its range of validity regarding the strength of gravitational fields. To be precise, the validity of the geodesic equations of motion derived from the Einstein Equivalence Principle is examined in view of Einstein's thought experiment, in which an observer situated in a closed elevator cannot distinguish between acceleration and gravitation.
[1081] vixra:2108.0167 [pdf]
Elliptic Equations of Heat Transfer and Diffusion in Solids
We propose a modified phenomenological equation for heat and impurity fluxes in solids by analogy with the Cattaneo-Vernotte concept. It leads to second-order elliptic equations describing the evolution of temperature and impurity profiles with a finite rate of propagation. The transfer peculiarities in the framework of the parabolic and elliptic equations are compared and discussed.
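For contrast with the proposed elliptic equations, the classical Cattaneo-Vernotte (telegraph) equation $\tau T_{tt} + T_t = \alpha T_{xx}$ taken as the starting point already exhibits the finite propagation speed $c=\sqrt{\alpha/\tau}$. A minimal explicit finite-difference sketch (all parameter values illustrative, and not the paper's elliptic modification) shows a pulse confined to $|x| \lesssim ct$:

```python
# Explicit finite-difference solver for the Cattaneo-Vernotte (telegraph)
# equation tau*T_tt + T_t = alpha*T_xx, which replaces the heat equation's
# infinite propagation speed with the finite speed c = sqrt(alpha/tau).
alpha, tau = 1.0, 1.0          # c = sqrt(alpha/tau) = 1
nx, dx, dt = 101, 0.1, 0.05    # c*dt/dx = 0.5 < 1 for stability
steps = 40                     # final time t = 2.0

T_prev = [0.0] * nx
T_prev[nx // 2] = 1.0          # initial heat pulse at the centre
T = T_prev[:]                  # zero initial time derivative

a = tau / dt ** 2 + 1.0 / (2.0 * dt)
b = tau / dt ** 2 - 1.0 / (2.0 * dt)
for _ in range(steps):
    T_next = [0.0] * nx        # fixed zero boundaries
    for i in range(1, nx - 1):
        lap = (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx ** 2
        T_next[i] = (alpha * lap + 2.0 * (tau / dt ** 2) * T[i] - b * T_prev[i]) / a
    T_prev, T = T, T_next

# After t = 2 the disturbance stays near |x - x_centre| <= c*t.
print(max(T), T[0], T[-1])
```

With nearest-neighbour coupling the numerical domain of dependence also grows one cell per step, so the far ends of the grid remain exactly zero after 40 steps.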
[1082] vixra:2108.0165 [pdf]
Differential Coefficients at Corners and Division by Zero Calculus
For a $C^1$ function $y=f(x)$, except at an isolated point $x=a$ where $f^\prime(a-0)$ and $f^\prime(a+0)$ exist, we introduce a natural differential coefficient at the singular point $x=a$. Surprisingly, this differential coefficient is given by the division by zero calculus, and it gives the gradient of the natural tangential line of the function $y=f(x)$ at the point $x=a$.
[1083] vixra:2108.0163 [pdf]
1H NMR Spectroscopy of the New Xalapa Molecule
Proton nuclear magnetic resonance (1H NMR) is the application of nuclear magnetic resonance spectroscopy to the 1H nuclei within the molecules of a substance, in order to determine the structure of those molecules. This work focused on determining the 1H NMR spectrum of the molecule here called Xalapa, in homage to the city of Xalapa, the capital of the Mexican state of Veracruz and the name of the surrounding municipality. The 1H NMR spectrum was obtained via ab initio Restricted Hartree-Fock computational methods: optimization of the molecular structure via UFF, followed by PM3, RHF/EPR-II and RHF/STO-6G, thus obtaining a stable structure at STP, with NMR computed via the GIAO (Gauge-Independent Atomic Orbital) method. The IUPAC name of the molecule was obtained; its composition is C: 81.7%; H: 7.1%; N: 3.4%; O: 7.8%, with formula weight 411.53536 g/mol and molecular formula C28H29NO2. Limitations: our study has so far been restricted to computational simulation via quantum mechanics (QM). Our results and calculations are compatible with QM, but their physical verification depends on laboratory biochemical experiments.
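As a quick sanity check of the reported composition, the formula weight of C28H29NO2 can be recomputed from standard atomic weights (the exact constants used in the paper are an assumption and may differ in later decimal places):

```python
# Recompute the formula weight and mass composition of C28H29NO2 using
# standard atomic weights (assumed values).
WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
FORMULA = {"C": 28, "H": 29, "N": 1, "O": 2}

mw = sum(WEIGHTS[el] * n for el, n in FORMULA.items())
composition = {el: round(100 * WEIGHTS[el] * n / mw, 1) for el, n in FORMULA.items()}
print(round(mw, 3))   # ~411.545 g/mol, close to the reported 411.53536
print(composition)    # {'C': 81.7, 'H': 7.1, 'N': 3.4, 'O': 7.8}
```

The mass percentages reproduce the values quoted in the abstract.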
[1084] vixra:2108.0160 [pdf]
Experimental Investigation of an Unusual Induction Effect and Its Interpretation as a Necessary Consequence of Weber Electrodynamics
The magnetic force acts exclusively perpendicular to the direction of motion of a test charge, whereas the electric force does not depend on the velocity of the charge. This article provides experimental evidence that, in addition to these two forces, there is a third electromagnetic force that (i) is proportional to the velocity of the test charge and (ii) acts parallel to the direction of motion rather than perpendicular. This force cannot be explained by the Maxwell equations and the Lorentz force, since it is mathematically incompatible with this framework. However, this force is compatible with Weber electrodynamics and Ampère's original force law, as this older form of electrodynamics not only predicts the existence of such a force but also makes it possible to accurately calculate the strength of this force.
[1085] vixra:2108.0159 [pdf]
Inadequacies of Sommerfeld's Front Velocity Definition
Current practice defines the front velocity of a signal as the limit of the phase velocity at infinitely high frequency. However, the present article provides evidence that the propagation velocities of signal fronts for input signals of nonzero temporal duration result from the phase velocities in the low-frequency range. In conclusion, although the impulse response propagates at the so-called front velocity, this is shown not to be true for the step response, and this is shown not to represent a contradiction.
[1086] vixra:2108.0156 [pdf]
The Effect of Artificial Amalgamates on Identifying Pathogenesis
The purpose of this research was to define acceleration in diagnostic procedures for airborne diseases. Airborne pathogenicity can be troublesome to diagnose due to intrinsic variation and overlapping symptoms; Coronavirus testing was an instance of a flawed diagnostic biomarker. The levels of the independent variable (IV) were vanilla, sparse, and dense amalgamates formed from multilayer perceptrons and image processing algorithms. The dependent variable (DV) was the classification accuracy. It was hypothesized that if a dense amalgamate is trained to identify Coronavirus, its accuracy would be the highest. The amalgamates were trained to analyze the morphological patches within radiologist-verified medical imaging retrieved from online databanks. Using cross-validation simulations augmented with machine learning, the DV was consulted for each amalgamate. Self-calculated t-tests supported the research hypothesis, with the dense amalgamate achieving an 85.37% correct classification rate; the null hypothesis was rejected. Flaws within the databanks were possible sources of error. A new multivariate algorithm invented here performed better than the IV: it identified Coronavirus and other airborne diseases with 96% to 99% accuracy. The model was also adept at identifying the heterogeneity and malignancy of lung cancer, as well as differentiating viral and bacterial infections. Future modifications would involve extending the algorithm to diseases in other anatomical structures, such as osteopenia/osteoporosis in the vertebral column.
[1087] vixra:2108.0148 [pdf]
Two-Dimensional Fourier Transformations and Double Mordell Integrals II
Several Fourier transforms of functions of two variables are calculated. They enable one to calculate integrals that contain trigonometric and hyperbolic functions and also evaluate certain double Mordell integrals in closed form.
[1088] vixra:2108.0143 [pdf]
How Quantum Mechanics Could be Hidden Within the Spacetime
The article shows how Quantum Mechanics and General Relativity could be unified within an Einstein-Cartan spacetime which can be both compressed and torqued. The compression is responsible, as is well known, for astronomical gravity, whereas the torsion accounts for microscopic quantum effects, as the Compton wavelength appearing in the Kerr metric strongly suggests. This could lead to a better understanding of both theories and to promising applications such as macroscopic quantum and gravitational science and engineering, based for instance on tremendous angular momenta.
[1089] vixra:2108.0124 [pdf]
Wormholes Do Not Exist, They are Mathematical Artifacts from an Incomplete Gravitational Theory (?)
The Schwarzschild solution of the Einstein field equation leads to a solution that has been interpreted as wormholes. Many have been skeptical about this interpretation, though many researchers have also been positive about it. We show that wormholes are not mathematically allowed in the spherical metric of a newly released unified quantum gravity theory known as collision space-time [1–3]. We therefore have reason to believe that wormholes in general relativity theory are nothing more than a mathematical artifact of an incomplete theory, but we are naturally open to discussion on this point. That wormholes likely do not exist falls nicely into line with a series of other intuitive predictions from collision space-time where general relativity theory falls short, such as matching the full spectrum of the Planck scale for micro black “holes”.
[1090] vixra:2108.0111 [pdf]
A Novel Solution for the General Diffusion
The Fisher-KPP equation is a reaction-diffusion equation originally proposed by Fisher to represent allele propagation in a host population, and by Kolmogorov for more general applications. A novel method for solving nonlinear partial differential equations is applied to produce a unique, approximate solution for the Fisher-KPP equation. Analysis proves the solution is counterintuitive: although it still satisfies the maximum principle, its time dependence collapses for all time greater than zero; the solution is therefore highly irregular and not smooth, invalidating the traveling-wave approximation so often employed.
[1091] vixra:2108.0103 [pdf]
Deflection of Light by Kink Mass
A deflection angle-mass relation is given in which we replace the mass with the mass of the kink (anti-kink). The mass of the kink (anti-kink), in turn, can be replaced with the topological charge and winding number. Because mass is related to the refractive index, the deflection angle can be formulated in terms of the decomposed form of the refractive index.
[1092] vixra:2108.0101 [pdf]
Swahili, a Lingua Franca, Swahili-English Dictionary by C. W. Rechenbach and the Graphical Law
We study the Swahili-English Dictionary by Charles W. Rechenbach. We plot the natural logarithm of the normalised number of words beginning with each letter against the natural logarithm of the normalised rank of that letter. We find that the dictionary is classified by BP(4,$\beta H$=0.04), where $\beta$ is $\frac{1}{k_{B}T}$, T is the temperature and $k_{B}$ is the Boltzmann constant. BP(4,$\beta H$=0.04) is the magnetisation curve of the Bethe-Peierls approximation of the Ising model with four nearest neighbours in the presence of a small external magnetic field H such that $\beta H$ equals 0.04. Moreover, under successive normalisations the words of the dictionary go over to the Onsager solution.
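The plotted quantities described in this and the other graphical-law abstracts can be sketched directly. The function below groups words by initial letter and returns normalised ln(count) against normalised ln(rank); the papers' exact normalisation conventions are an assumption.

```python
import math
from collections import Counter

def graphical_law_points(words):
    """Normalised ln(number of words) vs normalised ln(rank), grouping words
    by initial letter (a sketch; the normalisation convention is assumed)."""
    counts = Counter(w[0].lower() for w in words if w)
    ordered = sorted(counts.values(), reverse=True)
    cmax, rmax = ordered[0], len(ordered)
    return [(math.log(r) / math.log(rmax), math.log(c) / math.log(cmax))
            for r, c in enumerate(ordered, start=1)]

pts = graphical_law_points(["apple", "ant", "aloe", "bee", "cat", "cow"])
print(pts)  # highest-count letter maps to (0.0, 1.0), last rank to (1.0, 0.0)
```

With this normalisation the first-ranked letter sits at (0, 1) and the last at (1, 0), which is the range over which the magnetisation-curve fits are made.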
[1093] vixra:2108.0098 [pdf]
Refractive Index and Mass of Kink in Curved Space
We propose that mass-energy is a topological charge and that the topological charge is a winding number. The refractive index-mass relation, using the decomposed form of the refractive index, is shown, where we replace the mass with the topological charge and winding number.
[1094] vixra:2108.0095 [pdf]
A New Interpolation Approach and Corresponding Instance-Based Learning
Starting from the problem of finding the approximate value of a function, this paper introduces a measure of the approximation degree between two numerical values, proposes the concepts of “strict approximation” and “strict approximation region”, derives the corresponding one-dimensional interpolation methods and formulas, and then presents a calculation model called the “sum-times-difference formula” for high-dimensional interpolation, thus developing a new interpolation approach: ADB interpolation. ADB interpolation is applied to the interpolation of actual functions with satisfactory results. In both principle and effect, the approach is novel, and it has the advantages of simple calculation, stable accuracy, and ease of parallel processing; it is well suited to high-dimensional interpolation and easily extended to the interpolation of vector-valued functions. Applying the approach to instance-based learning yields a new instance-based learning method: learning using ADB interpolation. This learning method has a definite mathematical basis, implicit distance weights, avoidance of misclassification, high efficiency, a wide range of applications, and interpretability. In principle, it is a kind of learning by analogy, which can complement deep learning (a form of inductive learning); for some problems, in big-data and cloud-computing environments, the two can even reach equal results by different approaches. Learning using ADB interpolation can thus also be regarded as a kind of “wide learning” dual to deep learning.
[1095] vixra:2108.0091 [pdf]
Asymmetry in the Real Number Line and a Proof that \pi + e is an Irrational Number
The set of all Real numbers, R, contains the set of all Rational numbers, Q, each the ratio of two Integers with nonzero denominator. All other Real numbers are contained in the set of Irrational numbers, R\Q. These two subsets comprising all of the Real numbers are known to have distinct cardinalities of differing magnitudes of infinity [2]. When a consecutive ordering of all Rational numbers is established, whereby any unique Rational number can be shown to be disconnected from all other Rational numbers [3], a theorem regarding asymmetry on the Real number line is established. This theorem simplifies the requirements needed to prove that the sum of two known Irrational numbers is Rational or Irrational.
[1096] vixra:2108.0083 [pdf]
Coherent and Cat States of Open and Closed Strings
The covariant quantization and light-cone quantization formalisms are followed to construct the coherent states of both open and closed bosonic strings. We make systematic and straightforward use of the original definition of coherent states of harmonic oscillators to establish the coherent states and their corresponding cat states. We analyze the statistics of these states by explicitly calculating the Mandel parameter and obtain interesting results about the nature of the distribution of the states. A tachyonic state with imaginary mass and positive norm is obtained.
[1097] vixra:2108.0082 [pdf]
Supercoherent States of the Open NS World Sheet Superstring
The supercoherent states of the RNS string are constructed using the covariant quantization and, analogously, the light-cone quantization formalisms. Keeping intact the original definition of coherent states of harmonic oscillators, we extend the bosonic annihilation operator into superspace by including the fermionic contribution to the oscillator modes, and thus construct the supercoherent states of the supersymmetric harmonic oscillator. We analyse the statistics of these states by explicitly calculating the Mandel parameter and obtain interesting results about the nature of the distribution of the states.
[1098] vixra:2108.0078 [pdf]
An Upper Bound for the Erd\h{o}s Unit Distance Problem in the Plane
In this paper, using the method of compression, we prove a stronger upper bound for the Erd\H{o}s unit distance problem in the plane by showing that \begin{align} \# \bigg\{||\vec{x_j}-\vec{x_t}||:\vec{x}_t, \vec{x_j}\in \mathbb{E}\subset \mathbb{R}^2,~||\vec{x_j}-\vec{x_t}||=1,~1\leq t,j \leq n\bigg\}\ll_2 n^{1+o(1)}.\nonumber \end{align}
[1099] vixra:2108.0077 [pdf]
Riemann Hypothesis Proof Using an Equivalent Criterion of Balazard, Saias and Yor
In this manuscript we denote the unit disc by $\mathbb{D}=\{z\in \mathbb{C} \mid |z|<1\}$ and the half-plane by $\mathbb{P}=\{s\in\mathbb{C}\mid \Re(s)>\frac{1}{2}\}$. We write $\mathbb{R}_{\geq 0}=\{x\in \mathbb{R}\mid x\geq 0\}$ and $\mathbb{R}_{\geq 1}=\{x\in \mathbb{R}\mid x\geq 1\}$. Taking the non-negative real axis as a branch cut, we define a map from the slit unit disc to the slit plane, $s:\mathbb{D}\setminus \mathbb{R}_{\geq 0}\to \mathbb{P}\setminus\mathbb{R}_{\geq 1}$, by $s(z)=\frac{1}{1-\sqrt{z}}$, which is proved to be one-to-one and onto. Next, we define a function $f(z)=(s-1)\zeta(s)$ where $s=s(z)$; both $s(z)$ and $f(z)$ are proved to be analytic in $\mathbb{D}\setminus \mathbb{R}_{\geq 0}$. We then prove that $s=s(z)$ is a conformal map and show that $f$ is continuous at $0$. Applying Cauchy's residue theorem to a keyhole contour and using Lebesgue's dominated convergence theorem together with the Schwarz reflection principle, we prove that $$\int_{-\infty}^\infty \frac{\log|\zeta(\frac{1}{2}+it)|}{\frac{1}{4}+t^2}dt=0.$$ This settles the Riemann Hypothesis, because this relation is an equivalent version of the Riemann Hypothesis, as proved by Balazard, Saias and Yor [1].
[1100] vixra:2108.0066 [pdf]
Is Complex Number Theory Free from Contradiction?
With simple basic mathematics it is possible to demonstrate a conflicting result in complex number theory using Euler's identity, simple trigonometry and de Moivre's formula for n=2.
[1101] vixra:2108.0065 [pdf]
Redundant Primes In Lemoine’s Conjecture
Lemoine’s conjecture (LC), still unsolved, states that every odd integer ≥ 7 can be expressed as the sum of a prime and an even semiprime. But do we need all primes to satisfy this conjecture? This work is devoted to the selection of must-have primes and the formulation of a stronger version of LC with a reduced set of primes.
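The statement being reduced can be checked directly by brute force: an odd n ≥ 7 is written as n = p + 2q with p and q prime (2q is the even semiprime). A minimal sketch, not the paper's prime-selection procedure:

```python
def is_prime(n):
    # trial-division primality test, adequate for small n
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def lemoine_partition(n):
    """Return (p, q) with n = p + 2q, p and q prime, for odd n >= 7,
    or None if no such pair exists (brute force)."""
    for q in range(2, n // 2 + 1):
        if is_prime(q) and is_prime(n - 2 * q):
            return (n - 2 * q, q)
    return None

print(lemoine_partition(7))  # (3, 2), i.e. 7 = 3 + 2*2
print(all(lemoine_partition(n) is not None for n in range(7, 1001, 2)))  # True
```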
[1102] vixra:2108.0064 [pdf]
On Fast Search of First Confirmation Of Goldbach’s Strong Conjecture
Goldbach's strong conjecture states that every even integer n>2 can be expressed as the sum of two prime numbers (a Goldbach partition of n). The hypothesis remains open and is confirmed experimentally for ever larger n. This work studies different approaches to finding the first confirmation of the conjecture for a given n, in order to select the most effective confirmation method.
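One plausible search order among those such a study might compare is smallest-prime-first: for each even n, try primes p in increasing order until n − p is also prime. The abstract does not spell out the paper's actual strategies, so this is only an illustrative sketch.

```python
def sieve(limit):
    # simple sieve of Eratosthenes returning the list of primes up to limit
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return [i for i, f in enumerate(flags) if f]

PRIMES = sieve(10_000)
PRIME_SET = set(PRIMES)

def first_partition(n):
    """First Goldbach partition found by the smallest-prime-first strategy."""
    for p in PRIMES:
        if 2 * p > n:
            break
        if (n - p) in PRIME_SET:
            return (p, n - p)
    return None

print(first_partition(100))  # (3, 97)
print(all(first_partition(n) is not None for n in range(4, 2001, 2)))  # True
```

Counting how many candidate primes are tried before the first hit, for each strategy, is the kind of effectiveness measure the abstract alludes to.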
[1103] vixra:2108.0060 [pdf]
A New Solvable Quintic Equation of the Bring-Jerrard Form x^5 + ax + b = 0
In a previous paper, we gave one more irreducible equation of the shape x^5 + ax^2 + b = 0 which is solvable. In this paper, we give an irreducible equation of the shape x^5 + ax + b = 0 which is also solvable, contrary to some available arguments.
[1104] vixra:2108.0058 [pdf]
On 6k ± 1 Primes in Goldbach Strong Conjecture
Goldbach's strong conjecture, still unsolved, states that every even integer n>2 can be expressed as the sum of two prime numbers (a Goldbach partition of n). Every prime p>3 can be expressed as 6k ± 1. This work is devoted to studies of the 6k ± 1 primes in Goldbach partitions, and to an enhanced Goldbach strong conjecture in which the lesser members of twin prime pairs, of the form 6k − 1, are used as a baseline.
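The 6k ± 1 classification used here is elementary and can be sketched directly (a minimal illustration, not the paper's full experiment):

```python
def six_k_class(p):
    """Classify a prime p > 3 by its form: -1 for 6k - 1, +1 for 6k + 1.
    Every prime greater than 3 falls into exactly one of these classes,
    since the residues 0, 2, 3, 4 mod 6 are all composite-forcing."""
    r = p % 6
    if r == 5:
        return -1
    if r == 1:
        return +1
    raise ValueError("p must be a prime greater than 3")

print([(p, six_k_class(p)) for p in (5, 7, 11, 13, 17, 19)])
# [(5, -1), (7, 1), (11, -1), (13, 1), (17, -1), (19, 1)]
```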
[1105] vixra:2108.0057 [pdf]
Redundant Primes In Goldbach Partitions
The Goldbach Strong Conjecture (GSC), still unsolved, states that every even integer n>2 can be expressed as the sum of two prime numbers (a Goldbach partition of n). But do we need all primes to satisfy this conjecture? This work is devoted to the selection of must-have primes and the formulation of a stronger version of GSC with a reduced set of primes.
[1106] vixra:2108.0056 [pdf]
Goldbach Strong Conjecture Verification Using Prime Numbers
Goldbach's strong conjecture, still unsolved, states that every even integer n>2 can be expressed as the sum of two prime numbers (a Goldbach partition of n). We can also formulate it from the opposite perspective: from the set of prime numbers, pick any two primes and sum them; in this way every even number n>2 can be built. This work is devoted to studies of sums of two prime numbers.
[1107] vixra:2108.0055 [pdf]
Studies on Twin Primes in Goldbach Partitions of Even Numbers
Goldbach's strong conjecture states that every even integer n>2 can be expressed as the sum of two prime numbers (a Goldbach partition of n). This work is devoted to studies of the twin primes present in Goldbach partitions. Based on the executed experiments, the original Goldbach conjecture has been extended to the statement that every even integer n>4 can be expressed as the sum of a twin prime and a prime.
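The extended statement can be tested for small n by brute force, taking "twin prime" to mean a member of a twin-prime pair (this reading of the abstract is an assumption):

```python
def is_prime(n):
    # trial-division primality test, adequate for small n
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def has_twin_plus_prime(n):
    """Check for even n > 4: n = t + p with p prime and t a twin prime,
    i.e. a member of a twin-prime pair (brute-force sketch)."""
    for t in range(3, n - 1):
        if is_prime(t) and (is_prime(t - 2) or is_prime(t + 2)):
            if is_prime(n - t):
                return True
    return False

print(all(has_twin_plus_prime(n) for n in range(6, 501, 2)))  # True
```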
[1108] vixra:2108.0050 [pdf]
From Einstein Theory to Spin 2 Gravity
We consider here Fock's simple derivation of the Einstein equations. Then we follow the path from spin-1 fields to spin-2 fields for massive and massless particles and derive the gravity equations from this base. In conclusion, we discuss the principle of equivalence in classical Einstein theory and in Schwinger's spin-2 gravity.
[1109] vixra:2108.0040 [pdf]
Essential Dutch Dictionary by G. Quist and D. Strik, the Graphical Law Classification
We find that the Essential Dutch Dictionary by G. Quist and D. Strik is classified by BP(4,$\beta H$=0.10), where $\beta$ is $\frac{1}{k_{B}T}$, T is the temperature and $k_{B}$ is the Boltzmann constant. BP(4,$\beta H$=0.10) is the magnetisation curve of the Bethe-Peierls approximation of the Ising model with four nearest neighbours in the presence of a small external magnetic field H such that $\beta H$ equals 0.10.
[1110] vixra:2108.0029 [pdf]
Information Theory Applied to Bayesian Network for Learning Continuous Data Matrix
In this paper, we propose a learning algorithm for continuous data matrices based on entropy absorption in a Bayesian network. The method consists in giving up a little likelihood, compared to the chain rule's best likelihood, in order to get a good picture of the higher-order conditionings taking place between the Bayesian network's nodes. We present the known results from information theory, the multidimensional Gaussian probability, and the AIC and BIC scores for learning a continuous data matrix from a Bayesian network, and we illustrate the entropy absorption algorithm, using the Kullback-Leibler divergence, with an example of a continuous data matrix.
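The Kullback-Leibler divergence invoked here has a closed form for Gaussians. A one-dimensional sketch (the paper works with multidimensional Gaussians, so this is only the simplest instance of the quantity):

```python
import math

def kl_gaussian(mu1, s1, mu2, s2):
    """Closed-form KL(N(mu1, s1^2) || N(mu2, s2^2)) for univariate Gaussians:
    log(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2 s2^2) - 1/2."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

print(kl_gaussian(0.0, 1.0, 0.0, 1.0))  # 0.0 for identical distributions
print(kl_gaussian(0.0, 1.0, 1.0, 1.0))  # 0.5
```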
[1111] vixra:2108.0024 [pdf]
On the Shortest Addition Chain of Numbers of Special Forms
In this paper we study the shortest addition chains of numbers of special forms. We obtain the crude inequality $$\iota(2^n-1)\leq n+1+G(n)$$ for some function $G:\mathbb{N}\longrightarrow \mathbb{N}$. In particular we obtain the weaker inequality $$\iota(2^n-1)\leq n+1+\left \lfloor \frac{n-2}{2}\right \rfloor$$ where $\iota(n)$ is the length of the shortest addition chain producing $n$.
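The stated bound can be checked for small n by exhaustive search. The sketch below uses iterative deepening over star chains (each step adds the last element to some earlier one); star chains are known to be optimal for targets this small, so the computed lengths equal $\iota(2^n-1)$.

```python
def star_dfs(chain, target, depth):
    # depth-limited search over star chains: extend the chain by last + a
    last = chain[-1]
    if last == target:
        return True
    if depth == 0 or last << depth < target:  # even repeated doubling falls short
        return False
    for a in reversed(chain):
        nxt = last + a
        if nxt <= target and star_dfs(chain + [nxt], target, depth - 1):
            return True
    return False

def iota(target):
    """Shortest addition-chain length, by iterative deepening over star
    chains (optimal for all targets below 12509)."""
    depth = 0
    while not star_dfs([1], target, depth):
        depth += 1
    return depth

for n in range(2, 6):
    assert iota(2 ** n - 1) <= n + 1 + (n - 2) // 2
print([iota(2 ** n - 1) for n in range(2, 6)])  # [2, 4, 5, 7]
```

For n = 3 and n = 5 the bound is attained with equality.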
[1112] vixra:2108.0015 [pdf]
Bivector Algebra (Бивекторная алгебра)
In this paper, we study the algebra of nonzero-measure iquaternions, with their principal subalgebra given by complex-valued three-dimensional vectors, which in turn subdivide into monovectors and bivectors. Properties of complex vectors analogous to the parallelism and orthogonality of ordinary real vectors are investigated. Vector structures that are cyclic with respect to the product are found, and a theorem on the identity of a vector cycle and an oriented basis is proved. As we show, the bases of the complex vector space, as in the real case, split into two orientations that cannot be carried into each other by continuous transformations. A comparison of the properties of bivectors and null vectors under unitary transformations, and of their cyclic structures, lets us speak of an unambiguous correspondence of these algebras to charged particles and light. Thus, an algebraic substantiation is given for the vector character of the electromagnetic field, which is key for physics.
[1113] vixra:2108.0006 [pdf]
On the Geometrical Optics and the Atiyah-Singer Index Theorem
We assume that the curvature in the Atiyah-Singer index theorem is related to the Riemann-Christoffel curvature tensor, where the Riemann-Christoffel curvature tensor is decomposed into an unrestricted electric (scalar) potential part and a restricted magnetic (vector) potential part. This decomposition is a consequence of the existence of magnetic symmetry for the gauge potential in geometrical optics.
[1114] vixra:2107.0179 [pdf]
Langenscheidt's German-English English-German Dictionary and the Graphical law
We study Langenscheidt's German-English English-German Dictionary. We plot the natural logarithm of the normalised number of German-language entries beginning with each letter against the natural logarithm of the normalised rank of that letter. We find that the words follow the magnetisation curve of a spin glass in the presence of a small external magnetic field. Moreover, we compare the German language with the Basque and Romanian languages with respect to spin-glass magnetisation.
[1115] vixra:2107.0178 [pdf]
On Spin-Charge Separation
Recently, we have demonstrated that the Dirac equation can be cast into a form involving higher-order spinors. We have shown that the transformed Dirac equation splits into two equations, describing charged spin $0$ and (massless) spin $\frac{1}{2}$ particles. We apply this result to the problem of spin-charge separation.
[1116] vixra:2107.0177 [pdf]
An Equation Relating Planck Length, Planck's Constant and the Golden Ratio
Planck's constant, the Planck length and the golden ratio can all be related by a simple equation. It is not yet clear whether this is simply a mathematical coincidence or something with a deeper fundamental meaning. A few ideas that might be physical in nature are suggested.
[1117] vixra:2107.0174 [pdf]
A Problem on Sum of Powers of Binomial Coefficients
In this paper, we present a problem concerning the sum of powers of binomial coefficients. We prove two special cases of the problem using some simple identities involving binomial coefficients, and list another two cases without proof.
[1118] vixra:2107.0173 [pdf]
Interpretation of Some Nuclear Phenomena
The stability of the nickel nucleus Ni-68 compared to Ni-60 and the instability of light nuclei with many neutrons are interpreted. Also interpreted are the elongation of heavy nuclei and the mechanism that acts as a catalyst for the nuclear fission of uranium U-235.
[1119] vixra:2107.0171 [pdf]
A Dictionary of Modern Italian, the Graphical law and Dictionary of Law and Administration, 2000, National Law Development Foundation
We study A Dictionary of Modern Italian by John Purves. We plot the natural logarithm of the normalised number of entries beginning with each letter against the natural logarithm of the normalised rank of that letter. We conclude that the dictionary can be characterised by BP(4,$\beta H$=0), i.e. the magnetisation curve of the Bethe-Peierls approximation of the Ising model with four nearest neighbours in the absence of an external magnetic field, where H is the external magnetic field, $\beta$ is $\frac{1}{k_{B}T}$, T is the temperature and $k_{B}$ is the Boltzmann constant. Moreover, we compare the Italian language with other languages: Spanish, Basque and Romanian. On top of that, we compare A Dictionary of Modern Italian with the Dictionary of Law and Administration, 2000, by the National Law Development Foundation. We find a tantalizing similarity between modern Italian and the jargon of law and administration.
[1120] vixra:2107.0165 [pdf]
On Completeness of One Analytical Solution in Electrodynamics
An analytical solution of the equation $\dif\,{}^\ast\dif\alpha=0$, where the 1-form $\alpha$ stands for the ``vector potential'' of the electromagnetic field of a uniformly accelerated charge, presented in the work \cite{jgp}, was obtained in an incomplete coordinate system. The incompleteness of the system used gives rise to doubts about the correctness of the solution, because of the possible presence of extra sources of the field beyond the chart covered by the coordinates. A rigorous criterion for the existence or non-existence of extra sources of this sort is proposed and applied to the solution. As a result, it is found that no extra sources beyond the chart exist and hence the solution properly describes the field of a uniformly accelerated charge. However, this fact discloses another discrepancy in the foundations of the field theory.
[1121] vixra:2107.0152 [pdf]
Saturn Hexagon - A Telltale of Quantum Gravity
The quantized model of Newtonian gravity indicates that the quantum effects of gravity become apparent when particles of sufficiently small mass orbit a gravitating body. In particular, the stable orbital paths of particles in such conditions are shown to be polygons. The stable circular path of classical mechanics emerges when the side counts of these polygons increase to infinity, as the quantum effects of gravity vanish due to the excessive mass of the orbiting particles. In this article, it is hypothesized that the particle mass in Saturn's North Pole jet stream is such that the quantum effects of gravity have become apparent. The hexagonal shape of Saturn's jet stream is therefore used to constrain the mass of its cloud particles to 7.4e-20 kg. This in turn constrains the dimensions of the ammonia ice crystals in the clouds to less than 100 nanometers. The theory also indicates that polygons of different side counts are feasible at other latitudes, should the local particle mass permit the quantum effects of gravity to become visible. This aspect of the theory is consistent with the presence of faint but still visible edges of some polygons at lower latitudes.
[1122] vixra:2107.0150 [pdf]
Delayed-Choice Quantum Erasure Experiment: A Causal Explanation Using Wave-Particle Non-Duality
According to the recently proposed ``wave-particle non-dualistic interpretation of quantum mechanics", the physical nature of Schrodinger's wave function is an `instantaneous resonant spatial mode' in which a quantum flies, akin to a test particle moving along a geodesic in the curved space-time of the general theory of relativity. By making use of this physical nature, a causal explanation is provided for the delayed-choice quantum erasure experiment.
[1123] vixra:2107.0144 [pdf]
Index Type Hand Symbol
We apply the principle of the index-type keyboard to hand signs. Because it is very easy to learn, not only deaf people but also hearing people can make full use of it.
[1124] vixra:2107.0139 [pdf]
Is the Gravity-matter System Time-reversible?
Presented are logical arguments for Dark Matter. You are free not to get enlightened about that fact. But please pay respect to the new dispositions of Dark Matter and the research methods in this note.
[1125] vixra:2107.0137 [pdf]
Acceptable Facts Point to Validity of Riemann Hypothesis
In this short note, I provide a proof for the Riemann Hypothesis. You are free not to get enlightened about that fact. But please pay respect to the new dispositions of the Riemann Hypothesis and the research methods in this note. I start with Dr. Zhu, who was the first to show me that, instead of the known 40%, the maximum percentage of the zeroes of the Riemann zeta function belongs to the 1/2 critical line.
[1126] vixra:2107.0136 [pdf]
An Algebraic Treatment of Congruences in Number Theory
In this article we will examine the behavior of certain free abelian subgroups of the multiplicative group of the positive rationals and their relationship with the group of units of integers modulo $n$.
[1127] vixra:2107.0135 [pdf]
On the Lehmer's Totient Problem on Number Fields
Lehmer's totient problem asks if there exists a composite number $d$ such that its totient divides $d-1$. In this article we generalize Lehmer's totient problem to algebraic number fields. We introduce the notion of a Lehmer number: Lehmer numbers are the natural numbers which obey Lehmer's problem in the ring of algebraic integers of a number field.
[1128] vixra:2107.0134 [pdf]
A Geometric Variational Problem on a Periodic Domain
A diblock copolymer melt is a soft material characterized by fluid-like disorder on the molecular scale and a high degree of order at a longer length scale. A molecule in a diblock copolymer is a linear sub-chain of A-monomers grafted covalently to another sub-chain of B-monomers. The Ohta-Kawasaki density functional theory of diblock copolymers gives rise to a nonlocal free boundary problem. We work on a periodic lattice in C generated by two complex numbers and assume periodic boundary conditions. In this thesis we find two stationary sets of the energy functional of the problem. The first set is a perturbation of a round disk in C; more specifically, we perturb a round disk in polar coordinates, with the radius of the perturbed disk sufficiently small. We also minimize the energy of this stationary set with respect to the shape and size of the lattice. Additionally, we show that for every K >= 2, K ∈ N, there exists a stationary set of the free energy functional that is the union of K disjoint perturbed disks in C. We then assume K = 2 and deal with the problem of locating the centers of these two perturbed disks. We show that the centers of these disks are close to a global minimum of the Green’s function of the problem. We minimize the Green’s function of the problem for some special cases of lattice structures: the hexagonal lattice and a family of rectangular lattices.
[1129] vixra:2107.0132 [pdf]
Geometrisation of Electromagnetism
After generalising the equivalence principle and introducing gravito-electric and gravito-magnetic transformations, we show that a metric which has the same form as the Kerr metric correctly describes the electrodynamics of a charged singularity.
[1130] vixra:2107.0125 [pdf]
A Simple Cellular Automaton Model of Visible and Dark Matter
We apply 6D phase space analysis and the deterministic Cellular Automaton Theory to propose a simple deterministic model for fundamental matter. Quantum mechanics, the Standard Model and supersymmetry are found to emerge from cell level. Composite states of three cells are manifested as quarks, leptons and the dark sector. Brief comments on unification, cosmology, and black hole issues are made.
[1131] vixra:2107.0124 [pdf]
Breaking Free from the Stability-Plasticity Dilemma with Incremental Domain Inference on Sequential Data
We make the case for identifying the input domain prior to running downstream models and propose an architecture that opens the door to lifelong learning systems that forget at a decreasing rate as the tasks grow in complexity. Our model accurately identifies domains and is compatible with other continual learning algorithms, provided they benefit from knowing the current domain beforehand.
[1132] vixra:2107.0123 [pdf]
Big Bang's Quantum Problem
The early twentieth century produced the beginnings of relativity, quantum mechanics, and the big bang, but then went off the rails, like much of the world, in the early 1930s. The rest of the world recovered, but quantum mechanics did not. Physics was weighed down with a continuum geometry that did not allow quantum mechanics and relativity to be united. Then came 30 years of cold fusion experiments that could not be explained. To get things back on track we will dispense with the creation myth of this New Age physics that Edwin Hubble's work produced, the big bang. There is an intimate connection between cold fusion and the improbability of any great bang emanating from a point. The underlying problem was the suppression of the development of both quantum mechanics and relativity.
[1133] vixra:2107.0122 [pdf]
Open Science with Respect to Artificial Intelligence
Artificial Intelligence is one of those fields in computer science that is currently being extensively studied. In this paper, the author attempts to summarise the current state of research in the field with respect to openness to the general community, and has found a profound lack of opportunity to contribute to the field as a novice, and a near monopoly of effective research by large industries while production environments continue to largely remain safe from such influences.
[1134] vixra:2107.0121 [pdf]
A Progress on the Binary Goldbach Conjecture
In this paper we develop the method of the circle of partitions and associated statistics. As an application we prove conditionally the binary Goldbach conjecture. We develop a series of steps towards proving the binary Goldbach conjecture in full. We end the paper by proving the binary Goldbach conjecture for all even numbers, exploiting the strategies outlined.
[1135] vixra:2107.0112 [pdf]
A Topological Approach to the Twin Prime and De Polignac Conjectures
We introduce a topology on the set of natural numbers via a subbase of open sets. With this topology, we obtain an irreducible (hyperconnected) space with no generic points. This fact allows proving that the cofinite intersections of subbasic open sets are always empty, which implies the validity of the Twin Prime Conjecture. On the other hand, the existence of strictly increasing chains of subbasic open sets shows that the Polignac Conjecture is false for an infinity of cases.
[1136] vixra:2107.0109 [pdf]
Geometry of the Ellipse and the Ellipsoid
This chapter concerns the geometry of the ellipse and the ellipsoid of revolution. We give the formulas for the 2D plane coordinates and the 3D Cartesian coordinates $(X,Y,Z)$ as functions of the geodetic coordinates $(\varphi,\lambda,h)$. A section is devoted to the geodesic lines of the ellipsoid of revolution. We give the proofs of the differential equations of the geodesic lines and their integration.
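As context for the coordinate formulas mentioned in the abstract, here is a minimal sketch of the standard geodetic-to-Cartesian conversion for an ellipsoid of revolution; the WGS84 defaults are illustrative assumptions, not values taken from the chapter.

```python
import math

def geodetic_to_cartesian(phi, lam, h, a=6378137.0, f=1/298.257223563):
    # (phi, lam, h) = geodetic latitude, longitude (radians), ellipsoidal height
    # a, f = semi-major axis and flattening (WGS84 values as illustrative defaults)
    e2 = f * (2 - f)                               # first eccentricity squared
    N = a / math.sqrt(1 - e2 * math.sin(phi)**2)   # prime vertical radius of curvature
    X = (N + h) * math.cos(phi) * math.cos(lam)
    Y = (N + h) * math.cos(phi) * math.sin(lam)
    Z = (N * (1 - e2) + h) * math.sin(phi)
    return X, Y, Z
```

On the equator this reduces to $(a, 0, 0)$, and at the pole $Z$ equals the semi-minor axis $b = a\sqrt{1-e^2}$, two quick sanity checks on the formulas.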
[1137] vixra:2107.0106 [pdf]
Calculation of The Integrals of The Geodesic Lines of The Torus
In this second paper about the geodesic lines of the torus, we calculate in detail the integrals giving the length $s=s(\varphi)$ and the longitude $\lambda=\lambda(\varphi)$ of a point on the geodesic lines of the torus.
[1138] vixra:2107.0105 [pdf]
An Identity Involving Tribonacci Numbers
In this paper, we present an identity involving Tribonacci Numbers. We will prove this identity by extending the number of variables of Candido's identity to three.
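For context, Candido's two-variable identity states $(x^2+y^2+(x+y)^2)^2 = 2\,(x^4+y^4+(x+y)^4)$, which applied to consecutive Fibonacci numbers gives the classical Fibonacci instance. A quick numerical check of that base identity (the paper's three-variable Tribonacci extension is not reproduced here):

```python
def candido_holds(x, y):
    # Candido's identity: (x^2 + y^2 + (x+y)^2)^2 == 2(x^4 + y^4 + (x+y)^4)
    z = x + y
    return (x*x + y*y + z*z)**2 == 2 * (x**4 + y**4 + z**4)

def fibonacci(k):
    # iterative Fibonacci: F(0)=0, F(1)=1
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a
```

Since $F_{n+2} = F_n + F_{n+1}$, `candido_holds(fibonacci(n), fibonacci(n+1))` is true for every $n$; the identity is in fact a polynomial one, so it holds for arbitrary integers.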
[1139] vixra:2107.0101 [pdf]
Extra-Spatial Basis of Spatial World. Principles of Panory
A description of the matter and the world structure based on the action-duration change is proposed. The concept of place and spatial relations for its harmonics are introduced. The world emergence from extra-spatial noise and its development to the over-noisy spatial structure are studied. The environment hidden behind the seemingly empty space has a huge density and is the cause of electric and gravity fields. There is an endless chain of interconnected and controlled worlds. An explanation of the particle structure was proposed. Modern physical theories may be derived from this representation.
[1140] vixra:2107.0092 [pdf]
Black Holes and Cosmology
I have written two papers arguing that the usual description of a black hole is misleading, and since the behaviour of space/time outside the black hole is not affected, that might seem to be at least one paper too many. The second paper looked at what meaning one might ascribe to the present tense when describing distant objects, and in particular, black holes, and to the implication of time running slow near massive bodies. In this paper I point out some possible cosmological implications.
[1141] vixra:2107.0089 [pdf]
Einstein-Rosen Proposition in 1935 Revisited
I remark the existence of circumstances which are compatible with coincidence between (i) the Bowen solutions for the York Lichnerowicz equations associated with the initial data problem in Einstein’s theory of gravitation and (ii) the decompositions proposed by the TEQ for deformed angular momentum. This discovery suggests that we are living at the surface of some Lambda surface and that this surface is surrounding a Bowen-York-Lichnerowicz like black hole (BYLBH), alias a void.
[1142] vixra:2107.0074 [pdf]
On a Modular Property of Tetration
This paper generalizes Problem 3 of the 2019 PROMYS exam, which asks one to show that the last 10 digits (in base 10) of the $n$-th tetration of 3 are independent of $n$ if $n>10$. The generalization shows that, given any positive integers $a$ and $b$ satisfying certain conditions, the last $n$ digits (in base $b$) of the $m$-th tetration of $a$ are independent of $m$ if $m>n$. We use numerical patterns as a guide towards the solution and explore an additional numerical pattern which shows a relation between decimal expansions and multiplicative inverses of powers of 3 modulo powers of 10.
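A hedged sketch of the PROMYS instance (base $b=10$, $a=3$): the last 10 digits of the tower stabilize once its height exceeds 10. The recursion below relies on the fact that 3 stays coprime to every modulus in the totient chain of $10^{10}$, so Euler's theorem applies exactly at each level; for a general base $a$ one needs the lifted form of Euler's theorem instead. Function names are mine, not from the paper.

```python
def euler_phi(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tet_mod(a, height, m):
    # a tetrated `height` times, reduced mod m.
    # Adding phi to the exponent is harmless here because a = 3 is
    # coprime to every modulus in the totient chain of 10^n
    # (all of the form 2^i * 5^j), so a^phi == 1 (mod m).
    if m == 1:
        return 0
    if height == 0:
        return 1 % m
    phi = euler_phi(m)
    exponent = tet_mod(a, height - 1, phi)
    return pow(a, exponent + phi, m)
```

With this helper, `tet_mod(3, 11, 10**10)` and `tet_mod(3, 12, 10**10)` agree, matching the claim that the last 10 digits are independent of the height once it exceeds 10.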
[1143] vixra:2107.0071 [pdf]
Self-Consistent EM field
This article attempts to unify the four fundamental forces within Maxwell's equations, the only experimentally established theory. A self-consistent Maxwell equation, with the e-current coming from the matter current, is proposed and solved for electrons and for the structures of particles and atomic nuclei. The static properties and decays are derived, all consistent with experimental data. The equation of general relativity purely with the electromagnetic field is discussed as the basis of this theory. Finally, the elementary consistency between this theory and QED and the weak theory is discussed.
[1144] vixra:2107.0064 [pdf]
Semicircles in the Arbelos with Overhang and Division by Zero
We consider special semicircles, whose endpoints lie on a circle, for a generalized arbelos called the arbelos with overhang considered in [4] with division by zero.
[1145] vixra:2107.0059 [pdf]
On Prime Numbers in Linear Form
A lower bound is given for the number of primes in a special linear form less than N, under the assumption of the weakened Elliott-Halberstam conjecture.
[1146] vixra:2107.0053 [pdf]
On the Elementary Function y=|x| and Division by Zero Calculus
In this paper, we will consider the elementary function $y=|x|$ from the viewpoint of the basic relations of the normal solutions (Uchida's hyper exponential functions) of ordinary differential equations and the division by zero calculus. In particular, $y^\prime(0) =0$ in our sense and this function will show the fundamental identity with the natural sense $$ \frac{0}{0} =0 $$ with the sense $$ \frac{1}{0} =0 $$ that may be considered as $0$ as the inversion of $0$ through the Uchida's hyper exponential function.
[1147] vixra:2107.0049 [pdf]
How Hard is the Tensor Rank?
We build a combinatorial technique to solve several long-standing problems in linear algebra, with a particular focus on the algorithmic complexity of matrix completion and tensor decomposition problems. For all appropriate integral domains R, we show the polynomial-time equivalence of the problem of the solvability of a system of polynomial equations over R to
• the minimum rank matrix completion problem (in particular, we answer a question asked by Buss, Frandsen, Shallit in 1999),
• the determination of matrix rigidity (we answer a question posed by Mahajan, Sarma in 2010 by showing the undecidability over Z, and we solve recent problems of Ramya corresponding to Q and R),
• the computation of tensor rank (we answer a question asked by Gonzalez, Ja'Ja' in 1980 on the undecidability over Z, and, additionally, the special case with R = Q solves a problem posed by Blaser in 2014),
• the computation of the symmetric rank of a symmetric tensor, whose algorithmic complexity remained open despite an extensive discussion in several foundational papers; in particular, we prove the NP-hardness conjecture proposed by Hillar, Lim in 2013.
In addition, we solve two problems on fractional minimal ranks of incomplete matrices recently raised by Grossmann, Woerdeman, and we answer, in a strong form, a recent question of Babai, Kivva on the dependence of the solution to the matrix rigidity problem on the choice of the target field.
[1148] vixra:2107.0047 [pdf]
Preset Boundary Conditions and the Possibility of Making Time Crystals
Using the quaternion algebraic tools widely used at the end of the 19th century, we deduce a novel theory of space-time unity that can enhance the theories of special relativity and general relativity. When the preset boundary condition (the ratio of the temperatures of the two systems) is a complex number, the entropy can be given the ring structure of an algebraic system, and since the entropy and the time dimension point in the same direction, it is then possible to construct a time crystal.
[1149] vixra:2107.0046 [pdf]
One Fundamental Theorem Concerning Infinite Countable Sets
For A an infinite countable set containing infinitely many distinct natural integers and B an infinite countable set containing infinitely many distinct natural integers such that ∀n ∈ A, n ∈ B and ∀m ∈ B, m ∈ A, we demonstrate that it is possible that A≠B by exposing infinitely many counter-examples in which, for each counter-example, A and B are respectively two sample spaces of two probability spaces having different probabilities for similar events. We thus prove that the axiom of extensionality is false for infinite countable sets.
[1150] vixra:2107.0029 [pdf]
Consideration of Electron-Positron Pair Annihilation by Thermal Oscillations and an Inelastic Collision
In this paper, we discuss the phenomenon that, in the photons generated after electron-positron annihilation, the sources of thermal potential energy that make up the electron and positron are equally divided. As a result, the photon contains one thermal point each from the electron and the positron, and a picture of a single system emerges. This annihilation can be predicted to occur at the point where the two domains intersect if the electron and positron phases are properly aligned on the Riemann surface. Using the model in which the interior of the electron radiates thermophores, the electron-positron annihilation can be likened to an inelastic collision observed from opposite directions in time. In addition, we consider that the oscillation caused by thermal radiation inside the electron cancels out that of the positron, causing the electron and positron to lose mass and transform into a photon.
[1151] vixra:2107.0023 [pdf]
Real Schur Flows
The problem of a flow with its velocity gradient being of \textit{real Schur form} uniformly in a cyclic box is formulated for numerical simulation, and a semi-analytic algorithm is developed from the precise structures. Computations starting from two-component-two-dimensional-coupled-with-one-component-three-dimensional initial velocity fields of the Taylor-Green and Arnold-Beltrami-Childress fashions are carried out, and some discussions related to turbulence are offered for the multi-scale eddies, which nonetheless present precise order and symmetry. Many color pictures of the patterns of these completely new flows are presented, both general and specific.
[1152] vixra:2107.0005 [pdf]
The Free Fall of the String
We consider the motion of a string in free fall in gravity. The solutions are not identical with those of a string accelerated kinetically by acceleration $a$. We therefore distinguish between the non-inertial field and the gravity field, and we discuss the principle of equivalence. In conclusion, we suggest dropping charged objects from the very high tower Burj Khalifa in order to reach a crucial verdict on the principle of equivalence.
[1153] vixra:2107.0004 [pdf]
Quantum Description of Newtonian Gravity
The smooth Newtonian model of gravity is quantized using the results obtained from the combined theory of Special Relativity (SR) and Quantum Mechanics (QM). The resulting quantum model of gravity, unlike the classical Newtonian model, predicts that there exists an upper limit to the distance between a given pair of masses, called the action distance, beyond which they become gravitationally unbound. Equivalently, at any given radial distance from a large gravitating body, there exists a minimum mass below which a particle would not gravitationally bind to the gravitating body. The attractable mass limit of a gravitating body is determined by equating the action distance with the surface radius of the body. Moreover, the quantum model of gravity indicates that the escape velocity from a large gravitating body is a function of the mass of the escaping particle as well. This quantum effect of gravity becomes significant if the mass of the escaping particles, such as the gas molecules in the exosphere of a planet, is comparable to the attractable mass limit of the planet. The significant discrepancy observed in the escape rates of CH_4 and N_2 species from Pluto's exosphere is used to constrain the reference mass of the combined SR-QM theory to m = 3.2E-45 kg. The latter is thought to be the physical cut-off limit for massless particles. An Earth-bound experiment is also proposed to test the predictions of the combined SR-QM theory and determine the reference mass with higher accuracy.
[1154] vixra:2107.0003 [pdf]
Transmission of a Single-Photon Through a Polarizing Filter: an Analysis Using Wave-Particle Non-Duality
The inner product, $\langle\psi|\psi\rangle$, between a state vector $|\psi\rangle$ and its dual $\langle\psi|$ is thoroughly analyzed using the recently developed `wave-particle non-dualistic interpretation of quantum mechanics'; here, $|\psi\rangle$ is a solution of the Schrodinger wave equation. Using this analysis, ``questions about what decides whether a photon is to go through or not and how it changes its direction of polarization when it does go through a polarizing filter'' - a statement by Prof. Dirac - is unambiguously explained.
[1155] vixra:2106.0174 [pdf]
Enomoto's Problem in Wasan Geometry
We consider Enomoto's problem involving a chain of circles touching two parallel lines and three circles with collinear centers. Generalizing the problem, we unexpectedly get a generalization of a property of the power of a point with respect to a circle.
[1156] vixra:2106.0173 [pdf]
Geometry and Division by Zero Calculus
We demonstrate several results in plane geometry derived from division by zero and division by zero calculus. The results show that the two new concepts open an entirely new world of mathematics.
[1157] vixra:2106.0167 [pdf]
Combined Theory of Special Relativity and Quantum Mechanics
Lorentz transformation plays a key role in Special Relativity by relating the spacetime distance between events being observed in a pair of inertial frames of reference. Depending on the relative velocity of the inertial frames, the magnitude of the Lorentz transformation varies between the limits 0 and 1. The upper limit 1 represents a case where the pair of inertial frames of reference are stationary relative to each other. The lower limit 0 represents the other extreme case where the relative velocity of the frames is at the speed of light c. Similar numerical limits, on the other hand, appear in Quantum Mechanics but in the context of the summation of the probability density distribution of a particle over a region of space. The upper limit 1 represents a case where the probability of finding a particle in a region of space is certain. The lower limit 0 represents the opposite case where the probability of finding a particle in a region of space is negligible. The range of the limits being between 0 and 1 in both theories is not a numerical coincidence. In this paper, a combined theory is introduced which relates the Lorentz transformation of Special Relativity to the wavefunction of Quantum Mechanics. The combined theory offers a new insight into physical reality. For instance, it is found that the inherent quantum uncertainties in the spacetime coordinate of a quantum particle in vacuum constitute a timelike four-vector whose length A is invariant. It is also found that local acceleration, like velocity itself, has an upper limit, such that no physical object can undergo a local acceleration higher than $c^2/A$. The latter, in turn, constrains the mass of the smallest possible black hole, called the Unit Black Hole (UBH), to $Ac^2/4G$ and its event horizon diameter to the invariant A. The diameter of the event horizon, the mass and the Hawking temperature of more massive black holes are subsequently quantized starting from those of the UBH.
[1158] vixra:2106.0165 [pdf]
On the General no-Three-in-Line Problem
In this paper we show that the number of points that can be placed in the grid $n\times n\times \cdots \times n~(d~\text{times})=n^d$ for all $d\in \mathbb{N}$ with $d\geq 2$ such that no three points are collinear satisfies the lower bound \begin{align}\gg n^{d-1}\sqrt[2d]{d}.\nonumber\end{align} This extends the result of the no-three-in-line problem to all dimensions $d\geq 3$.
[1159] vixra:2106.0159 [pdf]
The Fog Covering Cantor's Paradise: Some Paradoxes on Infinity and Continuum
We challenge Georg Cantor's theory about infinity. By attacking the concept of “countable/uncountable” and diagonal argument, we reveal the uncertainty, which is obscured by the lack of clarity. The problem arises from the basic understandings of infinity and continuum. We perform many thought experiments to refute current standard views. The results support the opinion that no potential infinity leads to an actual infinity, nor is there any continuum composed of indivisibles statically, nor is Cantor's theory consistent in itself.
[1160] vixra:2106.0158 [pdf]
A Quantitative Version of the Erd\h{o}s-Anning Theorem
Let $\mathcal{R}\subset \mathbb{R}^n$ be an infinite set of collinear points and $\mathcal{S}\subset \mathcal{R}$ be an arbitrary and finite set with $\mathcal{S}\subset \mathbb{N}^n$. Then the number of points with mutual integer distances on the shortest line containing points in $\mathcal{S}$ satisfies the lower bound \begin{align} \gg_n \sqrt{n}|\mathcal{S}\bigcap \mathcal{B}_{\frac{1}{2}\mathcal{G}\circ \mathbb{V}_1[\vec{x}]}[\vec{x}]|\sum \limits_{\substack{k\leq \mathrm{max}_{\vec{x}\in \mathcal{S}\cap \mathcal{B}_{\frac{1}{2}\mathcal{G}\circ \mathbb{V}_1[\vec{x}]}[\vec{x}]}\mathcal{G}\circ \mathbb{V}_1[\vec{x}]\\k\in \mathbb{N}\\k>1}}\frac{1}{k},\nonumber \end{align}where $\mathcal{G}\circ \mathbb{V}_1[\vec{x}]$ is the compression gap of the compression induced on $\vec{x}$. This proves that there are infinitely many collinear points with mutual integer distances on any line in $\mathbb{R}^n$ and generalizes the well-known Erd\H{o}s-Anning Theorem in the plane $\mathbb{R}^2$.
[1161] vixra:2106.0144 [pdf]
Wave Packets of Relaxation Type in Boundary Problems of Quantum Mechanics
An initial-boundary value problem for the linear Schrödinger equation with nonlinear functional boundary conditions is considered. It is shown that the attractor of the problem contains periodic piecewise constant functions on the complex plane with finitely many points of discontinuity on a period. The method of reduction of the problem to a system of integro-difference equations has been applied. Applications to optical resonators with feedback have been considered. The elements of the attractor can be interpreted as white and black solitons in nonlinear optics.
[1162] vixra:2106.0132 [pdf]
Lower Bound for Arbitrarily Aligned Minimal Enclosing Rectangle
We determine the lower bound for the arbitrarily aligned perimeter and area Minimal Enclosing Rectangle (MER) problems to be $\Omega(n \log n)$ by reduction from a problem with this known lower bound.
[1163] vixra:2106.0108 [pdf]
Division by Zero Calculus in Figures - Our New Space Since Euclid -
We will show in this paper, in a self-contained way, that our basic idea of space has been wrong since Euclid, simply and clearly, by using many simple and interesting figures.
[1164] vixra:2106.0101 [pdf]
Three Dimensional Space-Time Gravitational Metric, 3 Space + 3 Time Dimensions
We have recently suggested a new quantum gravity theory that can be unified with quantum mechanics. We have coined this theory collision space-time. This new theory seems to be fully consistent with a 3-dimensional space-time, that is, three space dimensions and three time dimensions, so some would call it six-dimensional. However, we have shown that collision-time and collision-length (space) are just two different sides of the same "coin" (space-time), so it is more intuitive to think of them as 3-dimensional space-time. In previous papers, we have not laid out a geometric coordinate system for our theory that also considers gravity, but we will do that here. We point out that Einstein's negative attitude towards relativistic mass may introduce a weakness in the foundation of general relativity theory. When a relativistic mass is incorporated in the theory, this mass also seems to indicate that one needs to move to three-dimensional space-time. Then, for example, our new theory fully matches all the properties of the Planck scale in relation to the mathematical properties of micro black holes, not only mathematically but also logically, something we demonstrate clearly is not the case for general relativity theory. Our new metric has many benefits as an alternative to the Schwarzschild metric and general relativity theory. It seems to be more consistent with the Planck units than the Schwarzschild metric. Most importantly, it seems to be fully consistent with a new quantum gravity theory that unifies gravity with quantum mechanics.
[1165] vixra:2106.0088 [pdf]
The General Aspects of Heterotic Superstring/F-Theory Duality Correspondence of Moduli Superspaces
In the present article we aim to broaden the consideration of background geometry of manifolds/bundles arising in heterotic compactifications with an aim towards extending the validity and understanding of heterotic/F-theory duality. In particular, we will focus on elliptically fibered Calabi-Yau geometries arising in heterotic theories in the context of the so-called Fourier Mukai transforms of vector bundles on elliptically fibered manifolds. The duality between the Heterotic and F-theory is a powerful tool in gaining more insights into F-theory description of low-energy chiral multiplets. We propose a generalization of heterotic/F-theory duality and in order to complete the translation, the dictionary of the heterotic/F-theory duality has to be refined in some aspects. The precise map of spectral surface and complex structure moduli is obtained, and with the map, we find that divisors specifying the line bundles correspond precisely to codimension singularities in F-theory.
[1166] vixra:2106.0087 [pdf]
The Extremal Higher Dimensional Constructions of Fundamental Brane–Antibrane Systems
We calculate the extremal higher dimensional effective actions of fundamental brane-antibrane systems elegantly presented in the theoretical framework of advanced membrane theory constructions. Detailed study of brane-antibrane systems reveals that when the brane separation is smaller than the superstring length scale, the spectrum of this system has different tachyonic modes and interaction regimes in the moduli superspace. The higher dimensional effective actions should then include these modes because they are the most important ones which rule the extremal dynamics of the fundamental brane systems. In this regard, it has been shown that an effective action of Born-Infeld type proposed in the current literature can capture many properties of the decay of non-BPS Dp-branes in superstring and membrane theory. The effective actions of brane-antibrane systems in Types IIA and IIB superstring theories should be given by some extension of the DBI action and the WZ terms which include the tachyon field configurations. The DBI part may be given by the projection of the effective action of two non-BPS Dp-branes in Type IIB theory. We are interested in this paper in the appearance of the tachyon, the gauge field and the RR field in these extremal higher dimensional actions. Using the consistency of the present constructions, we have also found the first higher derivative corrections to the exceptional part of the extremal effective actions with brane-antibrane systems.
[1167] vixra:2106.0085 [pdf]
On the State of Convergence of the Flint Hills Series
In this paper we study the convergence of the Flint Hills series of the form \begin{align} \sum \limits_{n=1}^{\infty}\frac{1}{(\sin^2n) n^3}\nonumber \end{align}via a certain method. The method works essentially by erecting certain pillars sufficiently close to the terms in the series and evaluating the series at those spots. This allows us to relate the convergence and the divergence of the series to other series that are somewhat tractable. In particular we show that the convergence of the Flint Hills series relies very heavily on the condition that for any small $\epsilon>0$ \begin{align} \bigg|\sum \limits_{i=0}^{\frac{n+1}{2}}\sum \limits_{j=0}^{i}(-1)^{i-j}\binom{n}{2i+1} \binom{i}{j}\bigg|^{2s} \leq |(\sin^2n)|n^{2s+2-\epsilon}\nonumber \end{align}for some $s\in \mathbb{N}$.
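Whether the Flint Hills series converges is a well-known open question (its convergence would constrain the irrationality measure of $\pi$). A purely exploratory sketch of its partial sums in floating point, which of course decides nothing about convergence; the function name is mine:

```python
import math

def flint_hills_partial_sum(n_max):
    # partial sum of 1 / (sin^2(n) * n^3) up to n_max; exploratory only,
    # since the large spikes occur near good rational approximations of pi
    total = 0.0
    for n in range(1, n_max + 1):
        total += 1.0 / (math.sin(n) ** 2 * n ** 3)
    return total
```

The partial sums grow in occasional jumps (e.g. near $n = 355 \approx 113\pi$, where $\sin n$ is tiny), which is precisely why naive numerics cannot settle the question.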
[1168] vixra:2106.0083 [pdf]
Granular: "Stochastic Space-Time and Quantum Theory"
In an earlier paper, a stochastic model had been presented for the Planck-scale nature of space-time. From it, many features of quantum mechanics and relativity were derived. But as mathematical points have no extent, the stochastic manifold cannot be tessellated with points (if the points are independently mobile) and so a granular model is required. As grains have orientations as well as positions, spinors (or quaternions) are required to describe them, resulting in phenomena as described by the Dirac equation. We treat both space and time stochastically and thus require a new interpretation of time to prevent an object being in multiple places at the same time. As the grains do have a definite volume, a mechanism is required to create and annihilate grains (without leaving gaps in space-time) as the universe, or parts thereof, expands or contracts. Making the time coordinate complex provides a mechanism. From geometric considerations alone, both the General Relativity field equations (the master equations of Relativity) and the Schrödinger equation (the master equation of quantum mechanics) are produced. Finally, to preserve the constancy of the volume element even internal to a mass, we propose a rolled-up fifth dimension which is non-zero only in the presence of mass or energy.
[1169] vixra:2106.0082 [pdf]
Further Insights into Thermal Relativity Theory and Black Hole Thermodynamics
We continue to explore the consequences of Thermal Relativity Theory for the physics of black holes. The thermal analog of Lorentz transformations in the $tangent$ space of the thermodynamic manifold are studied in connection to the Hawking evaporation of Schwarzschild black holes, and one finds that there is $no$ bound to the thermal analog of proper accelerations despite the maximal bound on the thermal analog of velocity given by the Planck temperature. The proper entropic infinitesimal interval corresponding to the Kerr-Newman black hole involves a $ 3 \times 3 $ non-Hessian metric with diagonal and off-diagonal terms of the form $ ( d{\bf s} )^2 = g_{ ab } ( M, Q, J ) d Z^a dZ^b$, where $ Z^a = M, Q, J $ are the mass, charge and angular momentum, respectively. Black holes in asymptotically Anti de Sitter (de Sitter) spacetimes are more subtle to study, since the mass turns out to be related to the $enthalpy$ rather than the internal energy. We conclude with some remarks about the thermal-relativistic analog of proper force, the need to extend our analysis of Gibbs-Boltzmann entropy to the case of Rényi and Tsallis entropies, and to complexify spacetime.
[1170] vixra:2106.0076 [pdf]
Riemann Hypothesis Proof Using the Balazard, Saias and Yor Criterion
In this manuscript, we define a conformal map from the unit disc onto the half-plane. We then define the function $f(z) = (s-1)\zeta(s)$. We prove that $f$ belongs to the Hardy space $H^{1/3}(\mathbb{D})$. We apply Jensen's formula, noting that the measure associated with the singular inner factor of $f$ is zero. Finally, we obtain $$\int_{-\infty}^{\infty}\frac{\log|\zeta(\tfrac{1}{2}+it)|}{\tfrac{1}{4}+t^2}\,dt=0.$$
[1171] vixra:2106.0064 [pdf]
Stellar Distance and Velocity (III)
The use of parallax angles is one of the standard methods for determining stellar distance. The problem that arises in using this method is how to measure that angle. In order for the measurement to be correct, the object we are observing would have to be stationary in relation to the Sun, which is generally not true. One way to overcome this problem is to observe the object from two different places at the same time. This would be technically possible but will probably never be realized. Another way to determine the distance is given in [1]. Under certain assumptions, this is a mathematically completely correct method. After the publication of the third Gaia catalog [2], we are now able to test the proposed method using real data. Unfortunately, for the majority of stars it is not possible to obtain the distance directly, but with the help of some additional measurements we would be able to determine the distance of such stars indirectly.
[1172] vixra:2106.0056 [pdf]
Alzofon-Ionescu Theory of Gravity
Gravity is not a fundamental force. Alzofon's Thermodynamic Gravity Theory is derived from the Qubit Model, an upgrade of the Quark Model within the Standard Model. Alzofon's experiment is discussed: pros and cons. At the level of the Standard Model, gravity is a result of the structure of the electric charges of quarks in nucleons, subject to Platonic symmetry and lack of parity invariance, related to CP-violation, due to the dihedral group as the Quantum Mirror Symmetry group. The AGNUE experiment performed at Hathaway Research International is briefly explained; it is designed to test Alzofon's Theory. A glimpse of gravity control and of what inertial mass is are presented. Further R&D will be funded under the upcoming Kickstarter Gravity.
[1173] vixra:2106.0049 [pdf]
Interactive STEM Curriculum: Technological Tools and Programming Interface
The most important drawback of teaching mathematical equations to middle school children is the lack of practical examples and interactive tools that make concepts easier to grasp. In parallel, computer programming has become increasingly important in the current era. Integrating programming languages into the STEM curriculum in the early stages of students' education would expose them to these concepts at a much earlier age. Teaching STEM concepts using interactive learning tools would help students visualize the concepts in a more intuitive way. Traditional ways of teaching linear algebra concepts such as linear equations, quadratic equations, and their associated graphs are not sufficient to reach students deeply. However, with the use of technology and the right tools (stepper motor and drone), we can make the curriculum fun and interactive, link it to real-world applications of these concepts, and engage students deeply in the curriculum.
[1174] vixra:2106.0047 [pdf]
Design and Analysis of a Multiband Fractal Antenna for Applications in Cognitive Radio Technologies
Rapid development in wireless communication systems and an increase in the number of users of wireless devices are bound to result in spectrum shortage in the near future. The concept of Cognitive Radio is envisaged as a paradigm of new methodologies for achieving a performance-enhanced radio communication system through efficient utilization of the available spectrum. Research on antenna design is very critical for the implementation of cognitive radio. A special antenna is required in cognitive radio for sensing and communication purposes. This paper investigates the use of multiband fractal antennas for spectrum sensing applications in cognitive radio units. The performance of a new fractal antenna design which generates four bands of operation in the range of 900-4000 MHz has also been studied. Through a thorough discussion of its return loss and radiation plots, as well as other parameters such as gain and radiation efficiency, it is shown that it is a promising antenna for future cognitive radio systems.
[1175] vixra:2106.0046 [pdf]
Triple Band Antenna Design for Bluetooth, WLAN and WiMAX Applications
A novel and compact tri-band planar antenna for 2.4/5.2/5.8-GHz wireless local area network (WLAN), 2.3/3.5/5.5-GHz Worldwide Interoperability for Microwave Access (WiMAX) and Bluetooth applications is proposed and studied in this paper. The antenna comprises an L-shaped element which is coupled with a ground-shorted parasitic resonator to generate three resonant modes for tri-band operation. The L-shaped element, placed on top of the substrate, is fed by a 50 Ω microstrip feed line and is responsible for the generation of a wide band at 5.5 GHz. The parasitic resonator is placed on the other side of the substrate and is directly connected to the ground plane. The presence of the parasitic resonator gives rise to two additional resonant bands at 2.3 GHz and 3.5 GHz. Thus, together the two elements generate three resonant bands to cover the WLAN, WiMAX and Bluetooth bands of operation. A thorough parametric study has been performed on the antenna and it has been found that the three bands can be tuned by varying certain dimensions of the antenna. Hence, the same design can be used for frequencies in adjacent bands as well with minor changes in its dimensions. Important antenna parameters such as return loss, radiation pattern and peak gains in the operating bands have been studied in detail to prove that the proposed design is a promising candidate for the aforementioned wireless technologies.
[1176] vixra:2106.0040 [pdf]
Vudoku - A Visual Sudoku Solver
It is no secret that AI is an upcoming titan. Even though people are stunned to hear that AI has been here for around a century, due to the advancement in computational methods and resources, today AI peaks like never before. As a tiny glimpse into the field of digit recognition, this project aims to understand the underlying cogs and wheels on which neural networks spin. This paper tries to elucidate a project which solves a Sudoku puzzle drawn and written by hand. The paraphernalia for the project includes the programming language Python 3; the libraries OpenCV, NumPy and Keras; and the MNIST handwritten digit database as the dataset. Digit recognition is a classical problem which will introduce neurons, neural networks, connections, hidden layers, weights, biases, activation functions like sigmoid, back-propagation and other related topics as well. The algorithm employed in the project to solve the Sudoku is also explored in this paper.
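The puzzle-solving stage mentioned at the end of the abstract (separate from the digit-recognition stage, whose details are not given here) can be illustrated with a standard backtracking solver. This is a generic sketch, not the project's actual Vudoku code; the function names are illustrative:

```python
def valid(grid, r, c, v):
    """Check that placing v at (r, c) violates no row, column or 3x3 box."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Solve a 9x9 Sudoku in place by backtracking; 0 marks an empty cell."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next candidate
                return False  # no digit fits here: backtrack
    return True  # no empty cells left: solved
```

Backtracking is exponential in the worst case but solves typical hand-written puzzles essentially instantly, which is why it is a common choice for the solving stage of projects like this.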
[1177] vixra:2106.0039 [pdf]
Analytic Expansions and an Application to Function Theory
In this paper we introduce and study the notion of singularity, the kernel and analytic expansions. We provide an application to the existence of singularities of solutions to certain polynomial equations.
[1178] vixra:2106.0033 [pdf]
Locally Accurate Matrix Product Approximation to Thermal States
In one-dimensional quantum systems with short-range interactions, a set of leading numerical methods is based on matrix product states, whose bond dimension determines the amount of computational resources required by these methods. We prove that a thermal state at constant inverse temperature $\beta$ has a matrix product representation with bond dimension $e^{\tilde O(\sqrt{\beta\log(1/\epsilon)})}$ such that all local properties are approximated to accuracy $\epsilon$. This justifies the common practice of using a constant bond dimension in the numerical simulation of thermal properties.
[1179] vixra:2106.0019 [pdf]
Magnetic Symmetry, Curvature and Gauss-Bonnet-Chern Theorem
We reformulate the Gauss-Bonnet-Chern theorem in relation to the magnetic symmetry of geometrical optics. If the Euler-Poincare characteristic is a topological invariant, should the unrestricted electric potential of the $U(1)$ gauge potential be a topological invariant?
[1180] vixra:2106.0009 [pdf]
Proofs of Three Conjectures in Number Theory : Beal's Conjecture, Riemann Hypothesis and The $ABC$ Conjecture
This monograph presents proofs of three important conjectures in the field of Number Theory: Beal's conjecture, the Riemann Hypothesis, and the $abc$ conjecture. We give all the proofs in detail.
[1181] vixra:2106.0004 [pdf]
Two Extreme Cases of Polarization Direction Alignment, One of Starlight and the Other of Radio Qsos
Starlight and radio waves from QSOs share the ability to be polarized. For many regions of the Milky Way the alignment of the polarization directions of starlight is evident. However, it is useful to have a numerical alignment function that can be used to judge the significance of the correlations. The Hub Test provides such a function. Surveying the Galaxy with data from two catalogs of polarized starlight, Heiles 2000 and Berdyugin 2014, reveals an unusually well-aligned region which is then studied in more detail. Applied to a catalog of polarized radio QSOs, Pelgrims 2014 which is in part derived from Jackson 2007, a survey reveals the most significantly aligned region, which is studied further. Stars and QSOs have contrasting characteristics in terms of distance, degree of polarization, and strength of the alignment. The two most significantly aligned samples of starlight and radio QSOs are analyzed here. The alignment of the starlight sample outperforms all other portions of the Galaxy at the scale of the survey, about ten degrees, while the QSO sample has its polarization directions focusing down on a point extremely close to the QSOs themselves on the sky.
[1182] vixra:2106.0003 [pdf]
The Uniformly Accelerated String and the Bell Paradox
We consider a string of length l, the left end and the right end of which are non-relativistically and then relativistically accelerated with constant acceleration a. We calculate the motion of the string with no intercalation of the Fitzgerald contraction of the string. We also consider the Bell spaceship paradox. The Bell paradox and our problem are related to the Lorentz contraction in the Cherenkov effect (Pardy, 1997) realized by the carbon dumbbell moving in the LHC or ILC (Pardy, 2008). The Lorentz contraction and the Langevin twin paradox (Pardy, 1969) are interpreted as the Fock measurement procedure (Fock, 1964).
[1183] vixra:2105.0181 [pdf]
Prove NP Not Equal P Using Markov Random Field and Boolean Algebra Simplification
In this paper, we proved that Non-deterministic Polynomial time complexity (NP) is not equal to Polynomial time complexity (P). We developed the Boolean algebra that infers the solution of two variables of a Non-deterministic Polynomial computation time Markov Random Field. We showed that no matter how we simplified the Boolean algebra, it can never run in Polynomial computation time (NP not equal to P). We also developed a proof that every Polynomial computation time multi-layer Boolean algebra can be transformed into another Polynomial computation time multi-layer Boolean algebra in which there are only 'Not' operations in the first layer. So in the process of simplifying the Boolean algebra, we only need to consider factorization operations that assume only 'Not' operations in the first layer. We also developed Polynomial computation time Boolean algebras for the Markov Random Field chain and for the 2-SAT problem represented in Markov Random Field form, to give examples of Polynomial computation time Markov Random Fields.
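The abstract's Markov Random Field and Boolean-algebra construction is not reproduced here. As background for its claim that 2-SAT admits a polynomial-time solution, the following sketch shows the standard implication-graph method (Aspvall-Plass-Tarjan), which decides 2-SAT in linear time; this is a different, textbook technique, not the paper's MRF formulation:

```python
def two_sat(n, clauses):
    """Decide satisfiability of a 2-CNF over variables 1..n.
    clauses: list of (a, b) literal pairs; a negative int means negation.
    Each clause (a or b) yields implications (not a -> b) and (not b -> a);
    the formula is satisfiable iff no variable shares a strongly connected
    component with its own negation."""
    N = 2 * n
    idx = lambda lit: 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)
    neg = lambda i: i ^ 1
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for a, b in clauses:
        adj[neg(idx(a))].append(idx(b)); radj[idx(b)].append(neg(idx(a)))
        adj[neg(idx(b))].append(idx(a)); radj[idx(a)].append(neg(idx(b)))
    # Kosaraju's SCC algorithm, iterative to avoid recursion limits.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)  # post-order finish
    comp = [-1] * N
    c = 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n))
```

The whole procedure is O(variables + clauses), which is the classical polynomial-time benchmark any alternative 2-SAT formulation is measured against.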
[1184] vixra:2105.0180 [pdf]
A New Inequality for the Riemann Hypothesis
Many research results on the Riemann hypothesis have been published. In this paper, we first find a new inequality for the Riemann hypothesis on the basis of the well-known Robin theorem. Next, we introduce error terms suitable to Mertens' formula and Chebyshev's function, and obtain their estimates. With such estimates and primorial numbers, we finally prove that the new inequality holds unconditionally.
[1185] vixra:2105.0176 [pdf]
Gesture Classification using Machine Learning with Advanced Boosting Methods
In this paper, a detailed study on gesture classification using a dataset from Kaggle, together with optimization of the dataset, is presented. The machine learning algorithms SGD, kNN, SVM, MLP, Gaussian Naive Bayes, Random Forest, LightGBM, XGBoost, and CatBoost are used to conduct the research. The results are compared with each other to conclude which models perform best in gesture classification. Except for the Gaussian Naive Bayes classifier, all methods resulted in high accuracy.
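The paper's Kaggle dataset and trained models are not available here. As a minimal, self-contained illustration of one of the listed methods, the following is a from-scratch k-nearest-neighbours classifier on toy data; it is a sketch of the general technique, not the paper's pipeline:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbours classifier: for each test point,
    take a majority vote over the labels of the k closest training points."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]      # labels of k nearest points
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])     # majority vote
    return np.array(preds)
```

In practice one would use a library implementation (e.g. scikit-learn's) with distance indexing, but the voting logic above is all a kNN gesture classifier fundamentally does.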
[1186] vixra:2105.0163 [pdf]
Linear and Non-Linear Refractive Indices in Riemannian and Topological Spaces
The relation between the refractive index and curved space is formulated using the Riemann-Christoffel curvature tensor. As a consequence of the fourth rank Riemann-Christoffel curvature tensor, the refractive index should be a second rank tensor. The second rank tensor of the refractive index describes linear optics. In the case of non-linear optics, if the susceptibility is a fourth rank tensor, then the refractive index is a sixth rank tensor. In a topological space, the linear and non-linear refractive indices are related to the Euler-Poincare characteristic. Because the Euler-Poincare characteristic is a topological invariant, the linear and non-linear refractive indices are also topological invariants.
[1187] vixra:2105.0157 [pdf]
Teaching, Learning and AI
Teaching and Learning occur concomitantly, with various weights, in any interaction between two systems. In this article we will explore some general aspects, in order to better understand how to plug-in Mathematica, as a mathematical software, to a Math college course, like Calculus III. The role of formal languages, especially adaptive grammars, is emphasized, as the “other side” of the approach focusing on automata.
[1188] vixra:2105.0148 [pdf]
Solitons in Cellular Neural Networks
The two-dimensional autonomous cellular neural networks (CNNs) having one layer or two layers of memristor coupling can exhibit many interesting nonlinear waves and bifurcation phenomena. In this paper, we study the nonlinear waves (solitons) in one-dimensional CNN difference equations. From our computer simulations, we found that the CNN difference equations can exhibit many interesting behaviors. The most remarkable thing is that the first-order linear CNN difference equation can exhibit a train of solitary waves if the initial condition is given by the unit step function. Furthermore, the second-order linear CNN difference equation can exhibit soliton-like behavior if the initial condition is given by a pulse wave. That is, the solitary waves pass through one another and emerge from the collision. Furthermore, the solution exhibits area-preserving behavior, and it returns exactly to its initial state (the recurrence of the initial state). In the case of the nonlinear CNN difference equations, we observed the following interesting behaviors. In the Korteweg-de Vries CNN difference equation, the three-dimensional plot of the interaction of the solitary waves looks like a chicken cockscomb. In the Toda lattice CNN difference equation, a train of solitary waves with a negative amplitude interacts with a train of solitary waves with a positive amplitude, and they emerge from the collisions. Furthermore, after a certain period of time, the solution breaks down. In the Sine-Gordon CNN difference equation, the solution moves at constant speed and emerges from the collision. Furthermore, the solution returns to a state which is roughly similar to the initial state. In the memristor CNN difference equations, the three-dimensional plots of solitary waves exhibit more complicated (chaotic or distorted) behavior.
[1189] vixra:2105.0147 [pdf]
A Continuous Gravitational Wave at 404.3 μHz
Continuous gravitational waves are identified by their signatures: The amplitude is constant and the frequency increases slowly. In addition, the Doppler effect as a consequence of the Earth's orbit causes a characteristic phase modulation at 31.69 nHz. The long-term records from eleven superconducting gravimeters in Europe contain a set of spectral lines at 404.3 μHz that meet all three criteria and are likely generated by a gravitational wave. The determined values of frequency deviation and the time of maximum frequency deviation allow the calculation of the ecliptic coordinates of the source of the CGW.
[1190] vixra:2105.0146 [pdf]
A Relational Analysis of Quantum Symmetry
Carlo Rovelli's "relational interpretation" of quantum mechanics tells us that our understanding of quantum states is limited to their interactions with other quantum states. This implies that we have no understanding of the symmetry properties of a state vector except when considered in relation to at least one other state vector. The SU(3)xSU(2)xU(1) symmetry of the Standard Model is derived from their interactions and therefore cannot necessarily be applied to a single state vector. Steven Weinberg showed that mixed density matrices can have symmetries that are not available to state vectors. We explore the symmetry properties of finite symmetry groups for mixed density matrices. Just as mixed density matrices can define mixed states that require multiple state vectors, we find that the symmetries of mixed density matrices define "mixed symmetries" that are similar in structure to the symmetry of the Standard Model. We explore the point group symmetries and show one that gives the Standard Model symmetry. With this symmetry, we can generalize the Pauli spin matrices to a set that has irreducible representations matching the Standard Model plus a dark matter candidate.
[1191] vixra:2105.0143 [pdf]
Backtesting Investigation of Effect of the Optimized S&P 500 Portfolio Diversification with L2 Regularization
The article presents problems related to long-only optimized (with the help of the Markowitz model) portfolio diversification with L2 regularization. Backtesting for stocks selected from a subset of the S&P 500 index over the 2002-2019 interval, with Markowitz portfolio optimization under L2 regularization, was performed. The expected return was varied from 0.1 to 0.25, and the regularization parameter was varied in the range 0.05-0.15. The results of the backtesting are shown. The geometric average of the annual return over 2002-2019 was about 0.13-0.14 across the whole parameter range. It turns out that the mean annual portfolio return for this period hardly depends on the expected return and the regularization parameter when they vary in the aforementioned ranges, but the minima and maxima of the annual portfolio return, i.e. its variance, show a more pronounced dependence on these parameters of the optimized portfolio. The backtesting data seem to indicate the heteroscedasticity of the stock market.
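The abstract does not specify the optimizer used. A hedged sketch of long-only Markowitz optimization with L2 regularization is given below, using projected gradient descent onto the simplex; note one simplification relative to the paper: the expected-return target is folded into the objective as a trade-off weight `lam` rather than imposed as the hard constraint the backtest presumably uses. All names and parameter values are illustrative:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}
    (long-only, fully invested portfolio)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.max(np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0])
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def markowitz_l2(mu, Sigma, lam=1.0, gamma=0.1, steps=2000, lr=0.01):
    """Projected-gradient sketch of long-only Markowitz with L2 regularization:
    minimize w' Sigma w - lam * mu'w + gamma * ||w||^2 over the simplex."""
    w = np.full(len(mu), 1.0 / len(mu))           # start equally weighted
    for _ in range(steps):
        grad = 2 * Sigma @ w - lam * mu + 2 * gamma * w
        w = project_simplex(w - lr * grad)        # gradient step, then project
    return w
```

The L2 term `gamma * ||w||^2` is what the paper varies in the range 0.05-0.15; it penalizes concentrated portfolios and so encourages diversification.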
[1192] vixra:2105.0137 [pdf]
One Cannot Observe with a Detector the Impact of the Zero Cross-Section Dark Matter Particle: It is Invisible Matter
The indirect detection of Dark Matter is through gravitational anomalies in the cosmos, e.g. flat rotation curves in galaxies. The leading journals explain the lack of direct detection by the very small impact cross-section of the Dark Matter compounds. I argue that in the case of Particle Dark Matter the cross-section is infinitely small, so it can never be directly detected. In such a case I would use the term "Dark Matter of Virtual Particles". A representative of it is the hypothetical sterile neutrino. I am not limiting my research to the Particle Dark Matter model.
[1193] vixra:2105.0135 [pdf]
A Continuum Universe Interpretation of Quantum Mechanics
The measurement problem in quantum mechanics has been a cause of much puzzlement over the years. The very idea of having two different versions of reality for the same system has been a cause for much debate. Often quantum mechanics textbooks follow the ‘shut up and calculate’ paradigm. This denies the opportunity for the common student to understand the consequence of one of the most elegant and beautiful aspects of science. The state of the art textbooks give a purely algebraic, perfunctory and monotonous approach where the real consequence of the system is not fully appreciated. A good reason for this is the considerable deviation of the quantum mechanical process from the commonsensical idea of truth, reality and reason. We tend to look at the world in a materialistic, deterministic, causal and objectivistic way. We tend not to accept a world of contradictions. A quantum measurement is essentially an amalgamation of contradictions, mystery and duality. It encompasses an implicit dependence on subjectivity and contradicts causality as we know it. We look at the world as in the present. But a quantum mechanical measurement is a prediction of the future influenced by the observer or the measurer. This offers a philosophical and pedagogical conundrum. It poses a challenge not just to how our perception of the world might change, but also to how to make it compatible with the other successful theories of physics. The most common textbook interpretation of quantum mechanics has been the Copenhagen Interpretation, which suggests the ‘collapse of the wave function’ as the mechanism of transition between the dual descriptions. But a more bizarre yet elegant theory, extending quantum formalism to the classical domain, called the Many Worlds Interpretation, has been catching up very quickly; it stands for the split of the universe when we make a quantum mechanical measurement.
Consequently, reality as we know it is redefined as a universal wave function which is a superposition of several outcomes; this is incompatible with many of the successful concepts of physics and has several problems, like the correct idea of probability or basis. We adopt the idea of the universal wave function, but instead suggest a continuum interpretation of quantum mechanics where the universal wave function represents the entire singular universe, with atoms or electrons conceptually a continuous part of it rather than distinct separate entities. Such an interpretation could be compatible with other continuum theories like the superfluid vacuum theory or the Higgs field.
[1194] vixra:2105.0134 [pdf]
A Non-linear Generalisation of Quantum Mechanics
A new definition for quantum-mechanical momentum is proposed which yields novel nonlinear generalisations of Schroedinger and Klein-Gordon equations. It is thence argued that the superposition and uncertainty principles as they stand cannot have general validity.
[1195] vixra:2105.0121 [pdf]
The String with the Interstitial Massive Point
We consider a string whose left end is fixed at the origin of the coordinate system, whose right end is fixed at point l, and with a mass m interstitial between the ends of the string. We determine the vibration of such a system. The proposed model can also be related to the problem of the Moessbauer effect, or recoilless nuclear resonance fluorescence: the resonant and recoil-free emission and absorption of gamma radiation by atomic nuclei bound in a solid (Moessbauer, 1958).
[1196] vixra:2105.0120 [pdf]
Quantum Partial Automorphisms of Finite Graphs
The partial automorphisms of a graph $X$ having $N$ vertices are the bijections $\sigma:I\to J$ with $I,J\subset\{1,\ldots,N\}$ which leave invariant the edges. These bijections form a semigroup $\widetilde{G}(X)$, which contains the automorphism group $G(X)$. We discuss here the quantum analogue of this construction, with a definition and basic theory for the quantum semigroup of quantum partial automorphisms $\widetilde{G}^+(X)$, which contains both $G(X)$, and the quantum automorphism group $G^+(X)$. We comment as well on the case $N=\infty$, which is of particular interest, due to the fact that $\widetilde{G}^+(X)$ is well-defined, while its subgroup $G^+(X)$, not necessarily, at least with the currently known methods.
[1197] vixra:2105.0119 [pdf]
A Statistical Approach to Two-Particle Bell Tests
Extensive experimental tests of the Bell inequality have been conducted over time and the test results are viewed as a testimony to quantum mechanics. In considering the close tie between quantum mechanics and statistical theory, this paper identifies the mistake in previous statistical explanation and uses an elegant statistical approach to derive general formulas for two-particle Bell tests, without invoking any wavefunctions. The results show that, for the special case where the spins/polarizations are in the same, opposite, or perpendicular directions, the general formulas derived in this paper convert to quantum predictions, which are confirmed by numerous experiments. The paper also investigates the linkages between the statistical and quantum predictions and finds that vector decomposition and probability law are at the heart of both approaches. Based on this finding, the paper explains statistically why the local hidden variable theory fails the Bell tests. The paper has important implications for quantum computing, quantum theory in general, and the role of randomism and realism in physics.
[1198] vixra:2105.0118 [pdf]
An Extension of the Erd\h{o}s-Tur\'{a}n Additive Base Conjecture Via Generalized Circles of Partition
In this paper we continue the development of the method of Circles of Partition by introducing the notion of generalized circles of partition. This is an extension program of the notion of circle of partition developed in our first paper \cite{CoP}. As an application we prove an analogue of the Erd\H{o}s-Tur\'{a}n additive base conjecture.
[1199] vixra:2105.0107 [pdf]
Embedding Cycles Within Adjacency Matrices to Represent Rational Generating Functions
This paper explores populating adjacency matrices with connected cycles whose final outputs represent the coefficients of rational generating functions (RGFs). An RGF takes the form of: $p(x)/q(x) + r(x)$. The denominator, $q(x)$, takes the form of: Constant $\cdot (1-c_1x^{x_1})(1-c_2x^{x_2})... (1-c_nx^{x_n})$ where the $c_i$ are complex numbers and where factors can possibly have multiplicities greater than one. It is well known that a closed form solution exists for computing coefficients of RGFs. Also, one can write the linear recurrence relation associated with every RGF into a matrix format. Using matrices, one can compute coefficients for an RGF, such as Molien series for finite groups, in logarithmic time. What has not yet been shown (or is not yet commonly discussed) is that one can conceptualize an RGF as a system of connected cycles within an overarching adjacency matrix. For example, a single cycle of length two would have vertex A connect to vertex B which itself connects back to vertex A with a directed arrow of weight $c_i$. In this conceptualization, each coefficient of an RGF can be reproduced by taking a suitable adjacency matrix to an integer power. Nothing essential is lost by taking this perspective. Due to the self-similar nature of the matrix, we devise an algorithm that can calculate coefficients of RGFs in constant time. Using memoization, a technique for caching intermediate results, calculating coefficients of RGFs can also be done in logarithmic time. One observation is that, depending on the situation (i.e. what $q(x)$ is), there may be a computational benefit to taking the cyclical perspective. For example, for certain $q(x)$, the traditional matrix has cells containing positive and negative values whereas the cyclical approach has cells containing only positive values. 
The computational benefit is probably irrelevant for computers; however, it may be important for restrictive systems, such as biological systems / neural networks that may have a tight operating envelope. We make a final observation that each cyclical matrix representation can be thought of as a graph which is an epsilon away from being strongly connected. Studying the behavior of these matrices may yield insight into the behavior of a broader class of functions. In essence, the study of sequences modeled by RGFs can be converted to the study of connected cyclical graphs that model the RGFs, or vice versa.
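The matrix-power idea in the abstract above can be made concrete: the coefficients of an RGF satisfy a linear recurrence, and the recurrence's companion matrix (an adjacency matrix of the kind discussed) raised to an integer power reproduces any coefficient, with binary exponentiation giving the logarithmic-time computation mentioned. A minimal sketch, using Fibonacci ($q(x) = 1 - x - x^2$) as the worked example; the function names are illustrative, not the paper's:

```python
def mat_mul(A, B):
    """Multiply two square matrices of Python ints."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, e):
    """M**e by binary exponentiation: O(log e) matrix multiplications."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def rgf_coefficient(recurrence, initial, n):
    """n-th coefficient of a sequence satisfying the linear recurrence
    a_n = sum(recurrence[i] * a_{n-1-i}), computed via a power of the
    companion (adjacency) matrix of the recurrence's cycle structure."""
    k = len(recurrence)
    if n < k:
        return initial[n]
    # Companion matrix: top row holds the recurrence, the rest shifts state.
    C = [list(recurrence)] + [[int(j == i) for j in range(k)]
                              for i in range(k - 1)]
    P = mat_pow(C, n - k + 1)
    state = initial[::-1]  # [a_{k-1}, ..., a_0]
    return sum(P[0][j] * state[j] for j in range(k))
```

With memoization of the matrix powers (e.g. caching `C**(2^i)`), repeated coefficient queries amortize toward the constant-time regime the abstract alludes to.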
[1200] vixra:2105.0105 [pdf]
Could the Diameter of The So-Called Inflated Universe Actually be the Hubble Circumference?
This short paper points out that the so-called diameter of the inflated universe, approximately Θ ≈ 8.8 × 10^26 m, is very close to or perhaps even identical to what we can call the Hubble circumference: Θ ≈ 2πR = 2πc/H_0; at a Hubble constant of 66 (km/s)/Mpc these values are identical. The question is whether these facts are a pure coincidence or whether the diameter of the so-called inflated universe truly could be directly linked to the Hubble circumference. Further, we discuss some possible implications for suggested minimum-acceleration models that, in this interpretation, seem to fit galaxy rotations well without relying on dark matter. In particular, the "recently" introduced quantized inertia model seems robust in its predictions under this perspective, within the uncertainty we can find in the various measurements of the Hubble constant.
[1201] vixra:2105.0101 [pdf]
Neutrons Synod as a Cause of Nuclear γ-Radiation
The interpretation of γ-radiation at nuclear decay is based on the structure of the nuclei, that is, on two fundamental phenomena: first, on the inverse electric field of the proton, and second, on the electric entity of the macroscopically neutral neutron, which behaves, at the nuclear scale, as a positively charged particle. The γ-radiation at alpha decay (e.g. in radium Ra-226) can occur due to the neutrons synod (session), which reduces the negativity of the nuclear field and attenuates the binding of a He-4 nucleus (alpha particle), which then exits the parent nucleus either with the whole energy and no γ-radiation, or with less energy but with γ-radiation. Also, a beta decay β− can occur due to the neutrons synod in the nucleus (e.g. in boron B-12), so that the emitted electron exits the parent nucleus either with the whole energy and no γ-radiation, or with less energy but with γ-radiation. These strange phenomena will be explained.
[1202] vixra:2105.0095 [pdf]
Biochemistry Provides Inspiration for a New Kind of AI
This article is about the origin, development, and benefits of the "SP System" (SPS), which means the "SP Theory of Intelligence" and its realisation in the "SP Computer Model" (SPCM). The SPS is radically different from deep neural networks (DNNs), with many advantages compared with DNNs. As will be described, the SPS provides a promising foundation for the development of human-like broad AI. The SPS was inspired in part by: evidence for the importance of information compression in human learning, perception, and cognition; and the concept of 'multiple sequence alignment' in biochemistry. The latter concept led to the development of the powerful concept of SP-multiple-alignment, a concept which is largely responsible for the intelligence-related versatility of the SPS. The main advantages of the SPS are: 1) The clear potential of the SPS to solve 19 problems in AI research; 2) Versatility of the SPS in aspects of intelligence, including unsupervised learning and several forms of reasoning; 3) Versatility of the SPS in the representation and processing of knowledge; 4) Seamless integration of diverse aspects of intelligence and diverse forms of knowledge, in any combination, a kind of integration that appears to be necessary in any artificial system that aspires to the fluidity and adaptability of the human mind; 5) Several other potential benefits and applications of the SPS. It is envisaged that the SPCM will provide the basis for the development of a first version of the SP Machine, with high levels of parallel processing and a user-friendly user interface. All software in the SP Machine would be open-source so that clones of the SP Machine may be created anywhere by individuals or groups, to facilitate further research and development of the SP System.
[1203] vixra:2105.0090 [pdf]
What Gravity Is: Beyond Newton, Einstein Etc
Gravity is not a fundamental force; in a nutshell, it is the result of a non-commutative interaction of the "electric" (i.e. Coulomb-type) fractional charges of the proton and neutron, U(1)-neutral when compensated by the electronic cloud. This is no longer true at the SU(2) Electroweak level, once spherical symmetry is broken to a finite Platonic group of symmetry within it. The fine splitting of energy levels due to the SU(2) structure of the electric charge can be controlled using a MASER to invert the population and orient the nuclei the right way to reduce and turn off Gravity.
[1204] vixra:2105.0072 [pdf]
A Novel Compact Tri-Band Antenna Design for WiMAX, WLAN and Bluetooth Applications
A novel and compact tri-band planar antenna for 2.4/5.2/5.8-GHz wireless local area network (WLAN), 2.3/3.5/5.5-GHz Worldwide Interoperability for Microwave Access (WiMAX) and Bluetooth applications is proposed and studied in this paper. The antenna comprises an L-shaped element which is coupled with a ground-shorted parasitic resonator to generate three resonant modes for tri-band operation. The L-shaped element, placed on top of the substrate, is fed by a 50 Ω microstrip feed line and is responsible for the generation of a wide band at 5.5 GHz. The parasitic resonator is placed on the other side of the substrate and is directly connected to the ground plane. The presence of the parasitic resonator gives rise to two additional resonant bands at 2.3 GHz and 3.5 GHz. Thus, together the two elements generate three resonant bands to cover the WLAN, WiMAX and Bluetooth bands of operation. A thorough parametric study has been performed on the antenna and it has been found that the three bands can be tuned by varying certain dimensions of the antenna. Hence, the same design can be used for frequencies in adjacent bands as well with minor changes in its dimensions. Important antenna parameters such as return loss, radiation pattern and peak gains in the operating bands have been studied in detail to prove that the proposed design is a promising candidate for the aforementioned wireless technologies.
[1205] vixra:2105.0065 [pdf]
A Rotor Problem from Professor Miroslav Josipovic
We present two Geometric-Algebra (GA) solutions to a vector-rotation problem posed by Professor Miroslav Josipovic. We follow the sort of solution process that might be useful to students. First, we review concepts from GA and classical geometry that may prove useful. Then, we formulate and carry-out two solution strategies. After testing the resulting solutions, we propose an extension to the original problem.
[1206] vixra:2105.0049 [pdf]
Integral Forms for the Quantum He Hamiltonian Approximation of Chemical Bonds in RNA and Protein Scaling Comparisons
In this paper a mathematical method, originating from studies of nonlinear partial differential equations, is applied to the He approximation of outer-electron chemical bonds. The results can be used in the study of large molecules like RNA and proteins. We follow a pairwise atom-by-atom coordinate approximation. Coordinates can be obtained from crystallography or electron microscopy. The present paper solely presents the proof of concept of the existence of an algorithm. It is expected that such an algorithm can be employed in studies of larger molecules.
[1207] vixra:2105.0047 [pdf]
Is the Many Worlds Interpretation of Quantum Mechanics Consistent?
Duality in quantum mechanical wave functions is manifest through the famous measurement problem. There have been several interpretations to explain this duality, but none has seen full consensus among physicists. The Copenhagen interpretation, arguably the most widely accepted, invokes the 'collapse' of the wave function (or state vector reduction) during measurement and does not attribute a physical reality to the wave function. Moreover, the idea of measurement having a role in defining reality shakes the very foundation of classical physics. On the other hand, the Many Worlds Interpretation proposed by Everett is a very brave attempt to attribute physical significance to the wave function. Though mathematically sound and elegant, 'the splitting of the universe' in the Many Worlds Interpretation completely redefines reality as we know it. We test Everett's original thought experiment in the presence of a super observer and for sequential measurements as well. We observe that the no-cloning theorem helps the Many Worlds Interpretation, yet it does not provide a consistent picture for sequential measurements, unlike the Copenhagen Interpretation.
[1208] vixra:2105.0033 [pdf]
Generalized Quantum Evidence Theory on Interference Effect
In this paper, classical evidence theory (CET) is generalized to the quantum framework of Hilbert space in an open world, called generalized quantum evidence theory (GQET). Unlike classical GET, interference effects are involved in GQET. In particular, when a GQBBA turns into a classical GBBA, the interference effects disappear, so that the GQB and GQP functions of GQET degenerate to the classical GBel and GPl functions of classical GET, respectively.
[1209] vixra:2105.0023 [pdf]
A Mnemonic for One-Letter Elements in the Periodic Table of Elements
A mnemonic for one-letter elements in the periodic table of elements is shown. A mapping of these letters to keyboard letters is shown. This is also a basis for mnemonics for remembering the other chemical elements. At the same time, this is also a mnemonic for remembering some letters of the keyboard.
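For reference (the mnemonic itself is the paper's and is not reproduced here), the fourteen one-letter element symbols and their QWERTY keyboard rows can be listed directly; a minimal sketch, assuming standard IUPAC symbols:

```python
# The 14 elements with one-letter symbols (standard IUPAC symbols).
ONE_LETTER = {
    "H": "hydrogen", "B": "boron", "C": "carbon", "N": "nitrogen",
    "O": "oxygen", "F": "fluorine", "P": "phosphorus", "S": "sulfur",
    "K": "potassium", "V": "vanadium", "Y": "yttrium", "I": "iodine",
    "W": "tungsten", "U": "uranium",
}

# Every one-letter symbol is a letter on a standard QWERTY keyboard, which is
# what makes a keyboard-based mnemonic possible in the first place.
QWERTY_ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]
row_of = {ch: i for i, row in enumerate(QWERTY_ROWS) for ch in row}

print(len(ONE_LETTER))                            # 14
print(sorted(set(row_of[s] for s in ONE_LETTER)))  # [0, 1, 2]
```

The symbols spread over all three keyboard rows, so any keyboard mapping must span the whole layout.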
[1210] vixra:2105.0002 [pdf]
Sur Le Problème Des Trois Corps Et Les Equations De La Dynamique <br> Chapitre 1 - Nouvelle Edition Numérique
[Note: This is Henri Poincaré's paper edited by Abdelmajid Ben Hadj Salem] This article is a digital version of the first chapter of the long paper of Henri Poincaré, "The Three-Body Problem and the Equations of Dynamics", published by the celebrated journal \textit{Acta Mathematica} (Vol.13, n$^{\circ}1-2$, 1889), founded in 1882 by the Swedish mathematician Gösta Mittag-Leffler, who was its Editor-in-Chief. The new version keeps the original text with some minimal changes and adds a bibliography summarizing all the references cited in the article.
[1211] vixra:2104.0189 [pdf]
Field Theory of Temperature
Motivated by the well-known contradiction between special relativity and the heat equation, a wave equation for the temperature scalar field is presented that also resolves the old issue of the (Lorentz) transformation of temperature and entropy. As an inductive consequence it is proposed that single particles possess entropy.
[1212] vixra:2104.0188 [pdf]
Magnetic Symmetry of Geometrical Optics
We show that there exists a magnetic monopole in the $U(1)$ geometrical optics as a consequence of the magnetic symmetry in a $(4+d)$-dimensional unified space where the magnetic symmetry is a consequence of the extra internal symmetry. This magnetic symmetry restricts the gauge potential. The restricted (decomposed) gauge potential is made of the scalar potential as the unrestricted electric part and the vector potential as the restricted magnetic part. We also show that the refractive indices can be formulated in relation to the decomposed gauge potential. We treat the curvature in the curvature-refractive index relation of the $U(1)$ geometrical optics as an Abelian curvature form in the fibre bundle.
[1213] vixra:2104.0177 [pdf]
The Generalized Bargmann-Michel-Telegdi Equation for the Fermilab Muon Experiment
The influence of bremsstrahlung on the spin motion of the muon is expressed by an equation which is the analogue and generalization of the Bargmann-Michel-Telegdi equation. A new constant is involved in this equation. This constant can be immediately determined by experimental measurement of the muon spin motion at FERMILAB, or it follows from the classical limit of the generalized SM electrodynamics with radiative corrections (Pardy, 2008; 2009).
[1214] vixra:2104.0172 [pdf]
How to Find the Surplus Root (Prime Number) in Power Surplus of Prime Numbers
There are already various formulas for calculating power remainders and remainder roots. Based on these, I have created a simple and quick way to calculate them. However, there is no theoretical proof.
[1215] vixra:2104.0166 [pdf]
Anomalous Tracer Diffusion in Hard-Sphere Suspensions
Coupled equations describing diffusion and cross-diffusion of tracer particles in hard-sphere suspensions are derived and solved numerically. In concentrated systems with strong excluded volume and viscous interactions the tracer motion is subdiffusive. Cross diffusion generates transient perturbations to the host-particle matrix, which affect the motion of the tracer particles leading to nonlinear mean squared displacements. Above a critical host-matrix concentration the tracers experience clustering and uphill diffusion, moving in opposition to their own concentration gradient. A linear stability analysis indicates that cross diffusion can lead to unstable concentration fluctuations in the suspension. The instability is a potential mechanism for the appearance of dynamic and structural heterogeneity in suspensions near the glass transition.
[1216] vixra:2104.0156 [pdf]
Prospects of a Unified Field Theory Including Gravity
The generic relativistic version of a particle-field theory, with non-isotropic sources, includes a Gravity force perturbation of the Coulombian force, with the usual magnetic force resulting from Lorentz transformations. The quark model of the Standard Model, with the fractional charge structure of nucleons enveloped by electronic clouds, mandates such non-isotropic charges. Dynamic Nuclear Orientation (DNO), via electronic spin and LS-coupling, makes it possible to invert the population of low-energy gravitational attraction states and achieve Gravity Control. The 1994 scientific experiment of Dr. Frederick Alzofon has confirmed that Gravity Control can be achieved via DNO. Other researchers have contributed in the same general direction of unifying Electromagnetism and Gravity, supporting the non-isotropic charge concept, including Paul LaViolette, author of Subquantum Kinetics.
[1217] vixra:2104.0152 [pdf]
Is Our World an Intelligent Simulation?
Elon Musk seems to believe that our world is an intelligent simulation; that part of our world is simulated (part A), and part is not (part B): it is like augmented reality, made by highly advanced beings. I argue that part B is a galaxy, but part A is the Dark Matter surrounding that galaxy.
[1218] vixra:2104.0151 [pdf]
Natural Boundaries
My understanding of modern physical discoveries does not modify the existing equations. Only the values become bounded, to avoid infinities and singularities: "the sand a boundary for the sea, an everlasting barrier it cannot cross. The waves may roll, but they cannot prevail; they may roar, but they cannot cross it." Jeremiah 5:22 NIV.
[1219] vixra:2104.0145 [pdf]
On the Negation Intensity of a Basic Probability Assignment (Bpa)
How to obtain negation knowledge is a crucial topic, especially in the field of artificial intelligence. Limited work has been done on the negation of a basic probability assignment (BPA), although negation itself has been studied in depth throughout the literature. However, the aspect of the intensity level of negation enforcement has not yet been investigated. Moreover, let us note that the main characteristic of intelligent systems is precisely the flexibility to represent knowledge according to each situation. In general, researchers have a tendency to express the need for cognitive range in the negation. Thus, it would seem very useful to find a wide range of negations under intensity levels in a BPA. Based on these ideas, this paper first proposes a new approach to finding a BPA negation and gives a domain of intensity in which the negation is executed, which is called the negation space. Then, we investigate a number of desirable properties and explore their correlation with entropy. Numerical examples show the characteristics of the proposed negation solution. Finally, we validate the efficiency of the proposed method from the point of view of the Dempster-Shafer belief structure.
[1220] vixra:2104.0144 [pdf]
Limiting Fluctuations in Quantum Gravity to Diffeomorphisms
Within the background field formalism of quantum gravity, I show that if the quantum fluctuations are limited to diffeomorphic transformations, all the quantum corrections vanish on shell and the effective action is equivalent to the classical action. I also show that this choice of fields renders the path integral independent of the on-shell condition for the background field, and therefore incorporates a form of background independence. The proposed approach may provide insight into the development of a finite and background independent description of quantum gravity.
[1221] vixra:2104.0136 [pdf]
Quaternion Space-Time and Matter
In this work, we use the key assumption that division algebras play a key role in the description and prediction of natural phenomena. Consequently, we use the division algebra of quaternions to describe four-dimensional space-time intervals. Then, we demonstrate that the quaternion space-time together with the finite speed of signal propagation allow for a simple, intuitive understanding of the space-time interval transformation during arbitrary motion between a signal source and observer. We derive a quaternion form of the Lorentz time dilation and suggest that its real scalar norm is the traditional form of the Lorentz transformation, representing experimental measurements of the space-time interval. Thus, the new quaternion theory is inseparable from the experimental process. We determine that the space-time interval in the observer reference frame is given by a conjugate quaternion expression, which is essential for a proper definition of the quaternion gradient operator. Then, we apply the quaternion gradient to an arbitrary quaternion potential function, which leads to unified expressions of force fields. The second quaternion differentiation results in the unified Maxwell equations. Finally, we apply the resulting unified formalism to electromagnetic and gravitational interactions and show that the new expressions are similar to the traditional equations, with the novel terms related to scalar fields and velocity-dependent components. Furthermore, we obtain two types of force fields and four types of matter density expressions, which require further theoretical and experimental study. Therefore, the new mathematical framework based on quaternion algebra and quaternion calculus may serve as the foundation for a unified theory of space-time and matter, leading to a useful enhancement of the traditional theories of special and general relativity.
[1222] vixra:2104.0123 [pdf]
Oligopoly Games for Use in the Classroom and Laboratory
To illustrate that a Nash Equilibrium results from a flawed attempt to solve a game, this article studies two extensions of the classic oligopoly model of Cournot. The common cost function is quadratic, and the (still linear) inverse demand functions allow for differentiated goods. The industry has a maximal-profit set that is characterised by a constant profit-output ratio, independent of the number of firms and the slopes of the marginal-cost function and inverse demand functions. The maximal-profit set and the Pareto optimal set—which is the model’s solution—have a number of points in common, but in general do not coincide. The choice of parameters is discussed and six model variants are analysed numerically; in five of them, the incentives for merger according to noncooperative game theory are at odds with the rationale of economics. Some comments are made on the use of the model in experimental economics.
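For readers wanting a baseline, the symmetric Cournot-Nash equilibrium with linear inverse demand and quadratic cost can be sketched in a few lines (the parameters here are illustrative, not one of the paper's six variants):

```python
# n symmetric firms, inverse demand P(Q) = a - b*Q, quadratic cost
# C(q) = c*q + (d/2)*q^2.  Firm i's first-order condition is
#   a - b*Q - b*q_i - c - d*q_i = 0,
# so the symmetric Cournot-Nash quantity is q* = (a - c) / (b*(n + 1) + d).

def cournot_symmetric_q(a, b, c, d, n):
    return (a - c) / (b * (n + 1) + d)

def best_response(a, b, c, d, q_others_total):
    # argmax over q of (a - b*(q_others_total + q))*q - c*q - (d/2)*q^2
    return (a - c - b * q_others_total) / (2 * b + d)

a, b, c, d, n = 100.0, 1.0, 10.0, 0.5, 3
q = cournot_symmetric_q(a, b, c, d, n)
# Fixed-point check: q* is a best response to the other n-1 firms playing q*.
assert abs(best_response(a, b, c, d, (n - 1) * q) - q) < 1e-12
print(round(q, 4))  # 20.0
```

The fixed-point check is exactly the "flawed attempt to solve the game" the abstract refers to: each firm optimizes against the others' fixed quantities rather than jointly.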
[1223] vixra:2104.0111 [pdf]
A Novel Conflict Management Considering the Optimal Discounting Weights Using the BWM Method in Dempster-Shafer Evidence Theory
Dempster-Shafer evidence theory (DST) is an effective tool for data fusion. In this theory, how to handle conflicts between evidences is still a significant and open issue. In this paper, the best-worst method (BWM) is extended to conflict management in DST. Firstly, a way to determine the best and worst basic probability assignment (BPA) is proposed. Secondly, a novel strategy for determining the optimal weights of BPA using the BWM method is developed. Compared to traditional measure-based conflict management methods, the proposed method has three better performances: (1) A consistency ratio is considered for BPA to check the reliability of the comparisons, producing more reliable results. (2) The final fusion result has less uncertainty, which is more conducive to improving the performance of decision making. (3) The number of BPA comparisons performed during operation (in conflict management) is reduced (especially matrix-based). A practical application in motor rotor fault diagnosis is used to illustrate the effectiveness and practicability of the proposed methodology.
[1224] vixra:2104.0100 [pdf]
Balance Sheet and Seniority Constraints on the Repayment Value of Claims
The problem is addressed of how (different types of) funding transactions may affect the repayment value of (credit or equity) claims; to this purpose a novel proof for the existence and uniqueness of the payment vector, which does not make (explicit) use of the fixed point theorem and allows for the presence of claims with different seniorities (i.e. credit and equity claims), is proposed. Different components of the overall displacement (the reduction of repayment value), related to i) seniority structure, ii) network of bilateral exposures and iii) imbalances between external loss and external capital, are calculated by sequentially relaxing different constraints in the mixed linear program used for calculating overall displacement. The possibility that more credit may reduce overall displacement (due to a borrowing-from-Peter-to-pay-Paul effect) and more equity capital may on the contrary increase overall displacement (due to its role in the transmission of financial displacement) is exemplified, along with the possible negative dependence of relative displacement (the ratio between overall displacement and total claims) on total claims.
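For comparison with the paper's linear-programming approach, the textbook clearing-vector computation it departs from (the Eisenberg-Noe fixed-point iteration, single seniority) can be sketched as follows; the network and numbers are invented for illustration:

```python
import numpy as np

# L[i, j] is the nominal liability of bank i to bank j; e is external assets.
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
e = np.array([0.4, 0.5, 0.2])

pbar = L.sum(axis=1)                     # total nominal obligations per bank
# Relative liabilities matrix (rows sum to 1 where obligations exist).
Pi = np.divide(L, pbar[:, None], out=np.zeros_like(L), where=pbar[:, None] > 0)

# Iterate p -> min(pbar, e + Pi^T p), starting from full payment; the map is
# monotone, so this converges to the greatest clearing payment vector.
p = pbar.copy()
for _ in range(1000):
    p_new = np.minimum(pbar, e + Pi.T @ p)
    if np.max(np.abs(p_new - p)) < 1e-12:
        break
    p = p_new

assert np.all(p <= pbar + 1e-12)         # no bank pays more than it owes
print(np.round(p, 4))
```

In this toy network bank 0 defaults (it pays 2.4 of its 3.0 obligation) while the other two banks pay in full, which is exactly the kind of displacement the abstract decomposes.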
[1225] vixra:2104.0097 [pdf]
The Geometry of the Discrete Act
At the heart of physics is the representation of movement. But what movement is and how we are given to represent it is a metaphysical question. This article attacks the metaphysics underlying current physics from its origins and points to a radically different one as true. On this new basis a new geometry is built, the Geometry of the Discrete Act, more primitive than the Euclidean geometry that actually arises from it, and therefore a new physics. Indeed, it allows the purification and unification of all current theories and opens the way to a total understanding of nature within the limits of knowledge.
[1226] vixra:2104.0091 [pdf]
Applying "Ab Initio" Hartree-Fock Methods to Exobiological Nanomolecules
The core of the work is based on the replacement of carbon atoms by silicon atoms in the four standard bases of DNA: A, C, G and T (adenine, cytosine, guanine, thymine), determining, with minimal computational methods via "ab initio" Hartree-Fock methods, the infrared spectrum and its peak absorbance frequencies. The option for simple replacement of carbon by silicon is due to the peculiar characteristics shared by both. Atomic interactions under non-carbon conditions were studied, with only hydrogen, silicon, nitrogen and oxygen atoms, at STP (Standard Temperature and Pressure), for the four standard bases of DNA (A, C, G and T), thus obtaining by quantum chemistry four new compounds, named here ASi, CSi, GSi and TSi. Computational calculations admit the possibility of the formation of such molecules, their existence being possible via quantum chemistry. Calculations were obtained with the "ab initio" Unrestricted and Restricted Hartree-Fock methods (UHF and RHF) in the basis sets Effective Core Potential (ECP) minimal basis, UHF CEP-31G (ECP split valence), UHF CEP-121G (ECP triple-split basis), CC-pVTZ (correlation-consistent valence-only triple-zeta basis sets) and 6-311G**(3df,3pd) (Gaussian-function quadruple-zeta basis sets).
[1227] vixra:2104.0085 [pdf]
The Collision Time of the Observable Universe is 13.8 Billion Years per Planck time: A New Understanding of the Cosmos based on Collision Space-Time
The escape velocity derived from general relativity coincides with the Newtonian one. However, the Newtonian escape velocity can only be a good approximation when v ≪ c is sufficient to break free of the gravitational field of a massive body, as it ignores higher-order terms of the relativistic kinetic energy Taylor series expansion. Consequently, it does not work for a gravitational body with a radius at which v is close to c, such as a black hole. To address this problem, we revisit the concept of relativistic mass, abandoned by Einstein, and derive what we call a full relativistic escape velocity. This approach leads to a new escape radius where ve = c, equal to half of the Schwarzschild radius. Further, we show that one can derive the Friedmann equation for a critical universe from the escape velocity formula of general relativity theory. We also derive a new equation for a flat universe based on our full relativistic escape velocity formula. Our alternative to the Friedmann formula predicts exactly twice the mass density in our (critical) universe as the Friedmann equation after it is calibrated to the observed cosmological redshift. Our full relativistic escape velocity formula also appears more consistent with the uniqueness of the Planck mass (particle) than the general relativity theory: whereas the general relativity theory predicts an escape velocity above c for the Planck mass at a radius equal to the Planck length, our model predicts an escape velocity c in this case.
[1228] vixra:2104.0080 [pdf]
Another Look at "Faulhaber and Bernoulli"
Let "Faulhaber's formula" refer to an expression for the sum of powers of integers written with terms of n(n+1)/2. Initially, the author used Faulhaber's formula to explain why odd Bernoulli numbers are equal to zero. Next, Cereceda gave alternate proofs of that result and then proved the converse, if odd Bernoulli numbers are equal to zero then we can derive Faulhaber's formula. Here, the original author will give a new proof of the converse.
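As a quick numerical companion to the claims above (a sketch, not the paper's proof), one can check that odd Bernoulli numbers vanish and that a Faulhaber instance is expressible in the triangular number t = n(n+1)/2:

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers from the standard recurrence
#   sum_{j=0}^{m} C(m+1, j) B_j = 0  for m >= 1, with B_0 = 1
# (this convention gives B_1 = -1/2).
def bernoulli(n):
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

B = bernoulli(12)
assert all(B[k] == 0 for k in range(3, 13, 2))   # B_3 = B_5 = ... = B_11 = 0

# A Faulhaber instance in terms of t = n(n+1)/2:
#   1^3 + 2^3 + ... + n^3 = t^2.
n = 10
t = n * (n + 1) // 2
assert sum(k**3 for k in range(1, n + 1)) == t**2
print(B[2], B[12])  # 1/6 -691/2730
```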
[1229] vixra:2104.0076 [pdf]
On the Gap Sequence and Gilbreath's Conjecture
Motivated by Gilbreath's conjecture, we develop the notion of the gap sequence induced by any sequence of numbers. We introduce the notion of the path and associated circuits induced by an originator and study the conjecture via the notion of the trace and length of a path.
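A minimal check of the object behind the conjecture (illustrative only, not the paper's path/circuit machinery): iterating absolute differences on the primes should always yield rows beginning with 1.

```python
# Inline Sieve of Eratosthenes (no external libraries).
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, ok in enumerate(sieve) if ok]

row = primes_up_to(1000)
for _ in range(50):                     # 50 iterations of the gap operator
    row = [abs(b - a) for a, b in zip(row, row[1:])]
    assert row[0] == 1                  # Gilbreath: the leading entry stays 1
print("ok")
```

This is exactly the "gap sequence induced by a sequence" in miniature, with the primes as originator.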
[1230] vixra:2104.0075 [pdf]
Thermodynamic and Vortic Fine Structures of Real Schur Flows
A two-component-two-dimensional coupled with one-component-three-dimensional (2C2Dcw1C3D) flow may also be called a real Schur flow (RSF), as its velocity gradient is uniformly of real Schur form. The thermodynamic and ‘vortic’ fine structures of 2C2Dcw1C3D flows are exposed and, in particular, the Lie invariances of the decomposed vorticity 2-forms of RSFs in d-dimensional Euclidean space E^d for any integer d ≥ 3 are also proved. The two Helmholtz theorems for the complementary components of vorticity found recently in 3-space RSFs are not coincidental, but underlain by a general decomposition theorem, thus essential. Many Lie-invariant fine results, such as those of the combinations of the entropic and vortic quantities, including the invariances of the decomposed Ertel potential vorticity 3-forms (and their multiplications by any integer powers of entropy), then follow.
[1231] vixra:2104.0072 [pdf]
Irrationality Proofs: From e to Zeta(n>=2)
We develop definitions and a theory for convergent series that have terms of the form $1/a_j$ where $a_j$ is an integer greater than one and the series convergence point is less than one. These series have terms with denominators that can be used as number bases. The series for $e-2$ and $z_n=\zeta(n)-1$ are of this type. Further, both series yield number bases that can represent all possible rational convergence points as single digits. As partials for these series are rational numbers, all partials can be given as single decimals using some $a_j$ as a base. In the case of $e-2$, the last term of a partial yields such a base and partials form systems of nesting inequalities yielding a proof of the irrationality of $e-2$. Using limits in an unusual way we are able to give a second proof for the irrationality of $e-2$. A third proof validates the second using Dedekind cuts. In the case of $z_n$, using the $z_2$ case we determine that such systems of nesting inequalities are not formed, but we discover partials require bases greater than the denominator of their last term. We prove this property for the general $z_n$ case and, using the unusual limit style proof mentioned, prove $z_n$ is irrational. We once again validate the proof using Dedekind cuts. Finally, we are able to give what we consider a satisfying proof showing why both $e-2$ and $z_n$ are irrational.
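The base property described above can be illustrated numerically (a sketch, not part of the proof): each partial of $e-2=\sum_{k\ge 2} 1/k!$ is a fraction $N/m!$ with $0 < N < m!$, i.e. a single "digit" in the base given by the denominator of its last term.

```python
from fractions import Fraction
from math import factorial

for m in range(2, 15):
    partial = sum(Fraction(1, factorial(k)) for k in range(2, m + 1))
    N = partial * factorial(m)
    assert N.denominator == 1        # m! clears every denominator in the partial
    assert 0 < N < factorial(m)      # the partial is below 1: one "digit" base m!
print("ok")
```

The second assertion is just the statement that the partials stay below the convergence point e - 2 < 1, which is what allows the single-digit representation.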
[1232] vixra:2104.0062 [pdf]
Gentle Beukers's Proofs that Zeta(2,3) are Irrational
Although Beukers's proofs that Zeta(2) and Zeta(3) are irrational are at the level of advanced calculus, they are condensed. This article slows down the development and adds examples of the techniques used. In so doing it is hoped that more people might enjoy these mathematical results. We focus on the easier of the two, Zeta(2).
[1233] vixra:2104.0061 [pdf]
Heuristics for Memorizing Trigonometric Identities
Trigonometric identities are hard to memorize. Frequently a plus or minus is the rub. We give various heuristics that help refine guesses at an identity and, with a little work, get it correct. Heuristics, for us, are plausible arguments using graphs, consistency (with other identities), test points, and transformations. We also specify the utility of each identity in the context of advanced mathematics -- calculus -- with the hope that meaning adds memorable credence.
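The test-point heuristic can be sketched in a few lines (illustrative, not taken from the paper): to decide between candidate signs in sin(a+b) = sin a cos b ± cos a sin b, evaluate both candidates at a point where the answer is known.

```python
from math import sin, cos, pi, isclose

# Test point a = b = pi/4, where sin(a + b) = sin(pi/2) = 1 is known exactly.
a = b = pi / 4
plus  = sin(a) * cos(b) + cos(a) * sin(b)
minus = sin(a) * cos(b) - cos(a) * sin(b)

assert isclose(plus, sin(a + b))        # the correct sign survives the test
assert not isclose(minus, sin(a + b))   # the wrong sign is eliminated (it gives 0 here)
print(round(plus, 6), round(minus, 6))  # 1.0 0.0
```

One well-chosen test point is usually enough to pin down a doubtful plus or minus.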
[1234] vixra:2104.0059 [pdf]
The Greatest Mistakes of Modern Cosmology
Currently we have two theories describing the world: general theory of relativity describing the macroworld and quantum mechanics describing the microworld. Both theories are drastically different and do not match each other. In this article, I will try to show the greatest mistakes of modern cosmology and possible solutions that bring both theories together.
[1235] vixra:2104.0056 [pdf]
On the Fundamental Role of Massless Form of Matter in Physics: Quantum Gravity
In the paper, with the help of various models, the thesis on the fundamental nature of the field form of matter in physics is considered. In the first chapter a model of special relativity is constructed, on the basis of which the priority of the massless form of matter is revealed. In the second chapter, a field model of inert and heavy mass is constructed and on this basis the mechanism of inertia and gravity of weighty bodies is revealed. In the third chapter, the example of geons shows the fundamental nature of a massless form of matter on the Planck scale. The three-dimensionality of the observable space is substantiated. In the fourth chapter, we consider a variant of solving the problem of singularities in general relativity using the example of multidimensional spaces. The last chapter examines the author's approach to quantum gravity, and establishes the basic equation of quantum gravity. The conclusions do not contradict the main thesis of the paper on the fundamental nature of the massless form of matter.
[1236] vixra:2104.0051 [pdf]
A Noncommutative Spacetime Realization of Quantum Black Holes, Regge Trajectories and Holography
It is shown that the radial spectrum associated with a fuzzy sphere in a $noncommutative$ phase space characterized by the Yang algebra, leads $exactly$ to a Regge-like spectrum $G M^2_l = l = 1, 2,3, \ldots $, for $all$ positive values of $l$, and which is consistent with the extremal quantum Kerr black hole solution that occurs when the outer and inner horizon radius coincide $ r_+ = r_- = G M$. The condition $ G M_l^2 = l $ is tantamount to the mass-angular momentum relation $ M^2_l = l M_p^2 $ implying that the (extremal) horizon area is quantized in multiples of the minimal Planck area. Another important feature is the holographic nature of these results that are based in recasting the Yang algebra associated with an $8D$ noncommuting phase space, involving $ {\bf x}_\mu, {\bf p}_\nu, \mu, \nu = 0,1,2,3$, in terms of the $undeformed$ realizations of the Lorentz algebra generators $ J_{AB}$ corresponding to a $6D$-spacetime, and associated to a $12D$-phase-space with coordinates $ X_A, P_A; A = 0,1,2, \ldots, 5$. We hope that the findings in this work, relating the Regge-like spectrum $ l = G M^2 $ and the quantized area of black hole horizons in Planck bits, via the Yang algebra in Noncommutative phase spaces, will help us elucidate some of the impending issues pertaining to the black hole information paradox and the role that string theory and quantum information will play in its resolution.
[1237] vixra:2104.0040 [pdf]
Hypothesis of a Violation of Lorentz Invariance in the Aether Theory and Confirmation by the Experiments of D. C. Miller
It is hypothesized that the refractive index of moving gases becomes anisotropic in their rest frame. Therefore interferometers with air in the light path should be able to measure a phase shift. The theoretical signal is derived from Lorentz's aether theory. The hypothesis is tested against historical data from Dayton C. Miller's experiments on Mount Wilson in 1925-1926. A suitable signal is found in selected data, confirming the aether theory. Using curve fitting, the velocity v and the apex, in equatorial coordinates (α, δ), of the motion of the solar system in the aether were determined. The smallest deviation of the theory from the data results with the parameters v = (326 ± 17) km/s, α = (11.0 ± 0.2) h, δ = (-11 ± 5)°.
[1238] vixra:2104.0024 [pdf]
On Propagation of Light-Ray and Sagnac Effect in Kerr-Newman-Nut Spacetime
The paper explores light-ray propagation and the Sagnac effect in Kerr-Newman-NUT spacetime. The spacetime curvature structure of these solutions has been analyzed, showing that the Kerr-Newman-NUT spacetime is one of the exact analytical solutions for rotating regular black holes. The area of the horizon and the ergosphere of the black hole have been explicitly derived. The electromagnetic features of the Kerr-Newman-NUT black hole have also been discussed. The effect of the NUT parameter on the photon capture cross-section of the black hole, the so-called black hole shadow, has been analyzed. Finally, the Sagnac effect in the Kerr-Newman-NUT spacetime has been explicitly discussed.
[1239] vixra:2104.0022 [pdf]
Aerosol Transport by Turbulent Continua
The stochastic transport equations, derived rigorously under the condition of continuum fluctuations in the framework of an ensemble theory, both in differential and integral form, are then verified by establishing an unambiguous connection between this stochastics and the associated deterministics.
[1240] vixra:2104.0020 [pdf]
Mathematical Modelling of COVID-19 and Solving Riemann Hypothesis, Polignac's and Twin Prime Conjectures Using Novel Fic-Fac Ratio With Manifestations of Chaos-Fractal Phenomena
COVID-19 originated in Wuhan, China in December 2019. Declared a pandemic by the World Health Organization on March 11, 2020, COVID-19 has resulted in unprecedented negative global impacts on health and the economy. International cooperation is required to combat this "Incompletely Predictable" pandemic. With manifestations of Chaos-Fractal phenomena, we mathematically model COVID-19 and solve [unconnected] open problems in Number theory using our versatile Fic-Fac Ratio. Computed as Information-based complexity, our innovative Information-complexity conservation constitutes a unique all-purpose analytic tool associated with Mathematics for Incompletely Predictable problems. These problems are literally "complex systems" containing well-defined Incompletely Predictable entities such as nontrivial zeros and two types of Gram points in the Riemann zeta function (or its proxy Dirichlet eta function) together with prime and composite numbers from the Sieve of Eratosthenes. Correct and complete mathematical arguments are given for a first key step of converting this function into its continuous-format version, and a second key step of using our unique Dimension (2x - N) system instead of this Sieve. The primary spin-off from the first key step is a proof of the Riemann hypothesis (and an explanation of the closely related two types of Gram points); from the second key step, proofs of Polignac's and Twin prime conjectures.
[1241] vixra:2104.0014 [pdf]
Neutrino Masses
Abstract: It is possible to predict the neutrino masses within existing knowledge. The elastic weak-current neutrino-nucleon interactions $(\nu_i, n) \to (l, n')$, e.g. $(\nu_e, n) \to (e, p)$, are considered, and the vector $|m_{\nu_e}, m_{\nu_\mu}, m_{\nu_\tau}\rangle = |1.35\times 10^{-8},\, 5.81\times 10^{-4},\, 1.64\times 10^{-1}\rangle$ eV/c$^{2}$ of the increasing neutrino masses is predicted.
[1242] vixra:2104.0011 [pdf]
From the Euclidean to the Hyperbolic Space in Particle Physics
The Euler tetrahedron volume formula is used to define the dimensionality of space and the Euclidean straight line and area. The double disk model (DDM) is used to define trajectories on metrical surfaces. The relations of generalized Lobachevskii geometry are derived and related to the Einstein equations. Spherical and pseudospherical geometry, Riemann geometry, Lobachevskii and generalized Lobachevskii geometry, the Poincare model, the Beltrami model, gravity as a deformation of space, and the Schwinger theory of gravity are considered.
[1243] vixra:2104.0008 [pdf]
Solvable Form of the Polynomial Equation x^n + a_(n-1)x^(n-1) + ... + a_1x + a_0 = 0 (n = 2k + 1)
It is known that there is no solution in radicals to the general polynomial equation of degree five or higher with arbitrary coefficients. In this article, we give a form of polynomial equations of odd degree that can be solved in radicals. From there, we derive some solvable equations with one or more zero coefficients, especially quintic and septic equations.
[1244] vixra:2104.0004 [pdf]
Black Hole Is Hole Which Is Black
We present evidence that the black hole is a hole. Namely, right behind the black surface (the event horizon, for a non-rotating black hole), there is no space and no time: no spacetime, just as prior to the Big Bang. The first composite image of a black hole from the Event Horizon Telescope is further evidence for this, with a resulting correction to the reported mass.
[1245] vixra:2103.0200 [pdf]
Truly Two-Dimensional Black Holes Under Dimensional Transitions of Spacetime
A sufficiently massive star at the end of its life will inevitably collapse into a black hole as more deconfined degrees of freedom make the core ever softer. One possible way to avoid the singularity in the end is a dimensional phase transition of spacetime. Indeed, the black hole interior, two-dimensional in nature, can be described well as a perfect fluid of free massless Majorana fermions and gauge bosons under a 2-d supersymmetric mirror model with a new understanding of emergent gravity from the dimensional evolution of spacetime. In particular, the 2-d conformal invariance of the black hole gives rise to the desired consistent results for the interior microphysics and structures, including its temperature, density, and entropy.
[1246] vixra:2103.0199 [pdf]
Fermat's Last Theorem: A Simple Proof
This paper provides a simple proof of Fermat's Last Theorem via elementary algebraic analysis of a level that would have been extant in Fermat's day, the mid-seventeenth century. The proof is effected by transforming Fermat's equation to an nth-order polynomial, which is solved for five cases, revealing a pattern that enables an extrapolation to the general case.
[1247] vixra:2103.0194 [pdf]
Uwb-GCN: Accelerating Graph Convolutional Networks Through Runtime Workload Rebalancing
In this paper, we propose an architecture design called Ultra-Workload-Balanced-GCN (UWB-GCN) to accelerate graph convolutional network inference. To tackle the major performance bottleneck of workload imbalance, we propose two techniques: dynamic local sharing and dynamic remote switching, both of which rely on hardware flexibility to achieve performance auto-tuning with negligible area or delay overhead. Specifically, UWB-GCN is able to effectively profile the sparse graph pattern while continuously adjusting the workload distribution among parallel processing elements (PEs). After converging, the ideal configuration is reused for the remaining iterations. To the best of our knowledge, this is the first accelerator design targeted at GCNs and the first work that auto-tunes workload balance in an accelerator at runtime through hardware, rather than software, approaches. Our methods can achieve near-ideal workload balance in processing sparse matrices. Experimental results show that UWB-GCN can finish the inference of the Nell graph (66K vertices, 266K edges) in 8.1ms, corresponding to speedups of 199x, 16x, and 7.5x, respectively, over the CPU, the GPU, and the baseline GCN design without workload rebalancing.
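The workload-balancing problem the abstract describes can be illustrated in software. The sketch below is a hypothetical software analogue, not the paper's hardware mechanism: it compares a static, contiguous split of matrix rows across PEs against a greedy least-loaded (longest-processing-time-first) assignment, using per-row nonzero counts as the load measure.

```python
import heapq

def static_split(nnz_per_row, num_pes):
    """Assign contiguous, equally sized row blocks to PEs (no rebalancing)."""
    loads = [0] * num_pes
    rows_per_pe = (len(nnz_per_row) + num_pes - 1) // num_pes
    for i, nnz in enumerate(nnz_per_row):
        loads[min(i // rows_per_pe, num_pes - 1)] += nnz
    return loads

def balanced_split(nnz_per_row, num_pes):
    """Greedy rebalancing: give each row (heaviest first) to the least-loaded PE."""
    heap = [(0, pe) for pe in range(num_pes)]
    heapq.heapify(heap)
    for nnz in sorted(nnz_per_row, reverse=True):
        load, pe = heapq.heappop(heap)
        heapq.heappush(heap, (load + nnz, pe))
    return [load for load, _ in heap]

# A skewed sparsity pattern: a few very dense rows, many sparse ones.
nnz = [400, 350, 8, 5, 7, 6, 4, 9, 3, 5, 6, 7]
print(max(static_split(nnz, 4)), max(balanced_split(nnz, 4)))  # -> 758 400
```

With the skewed pattern above, the bottleneck PE's load drops from 758 to 400 nonzeros, illustrating why rebalancing dominates end-to-end latency for power-law graphs.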
[1248] vixra:2103.0186 [pdf]
Cantor's Paradox
This paper discusses Cantor's paradox of the set of all cardinals, and proves that in Cantor's set theory every set of cardinality C originates at least 2^C inconsistent infinite sets.
[1249] vixra:2103.0185 [pdf]
Hierarchical Relationship Alignment Metric Learning
Most existing metric learning methods focus on learning a similarity or distance measure relying on similar and dissimilar relations between sample pairs. However, pairs of samples cannot be simply identified as similar or dissimilar in many real-world applications, e.g., multi-label learning and label distribution learning. To this end, the relation alignment metric learning (RAML) framework was proposed to handle the metric learning problem in those scenarios. But RAML learns a linear metric, which cannot model complex datasets. Combining deep learning with the RAML framework, we propose a hierarchical relationship alignment metric learning model, HRAML, which uses the concept of relationship alignment to model metric learning problems under multiple learning tasks, and makes full use of the consistency between the sample-pair relationship in the feature space and the sample-pair relationship in the label space. We further organize several experiments, divided by learning task, and verify the better performance of HRAML against many popular methods and the RAML framework.
[1250] vixra:2103.0184 [pdf]
Representation Learning by Ranking Under Multiple Tasks
In recent years, representation learning has become the research focus of the machine learning community. Large-scale pre-trained neural networks have become the first step toward realizing general intelligence. The key to the success of neural networks lies in their abstract representation capabilities for data. Several learning fields are actually discussing how to learn representations, yet a unified perspective is lacking. We convert the representation learning problem under multiple tasks into a ranking problem. Taking the ranking problem as a unified perspective, representation learning under different tasks is solved by optimizing the approximate NDCG loss. Experiments under different learning tasks, such as classification, retrieval, multi-label learning, regression, and self-supervised learning, prove the superiority of the approximate NDCG loss. Further, under the self-supervised learning task, the training data are transformed by a data augmentation method to improve the performance of the approximate NDCG loss, which proves that the approximate NDCG loss can make full use of the information in the unsupervised training data.
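A common way to make NDCG differentiable is to replace hard ranks with sigmoid-smoothed ranks. The abstract does not spell out its exact loss, so the following is an assumed, minimal variant of an approximate NDCG loss in plain Python; the temperature parameter and the small lists are illustrative.

```python
import math

def approx_ranks(scores, temperature=0.1):
    """Smooth rank estimate: rank_i ~ 1 + sum_j sigmoid((s_j - s_i)/T)."""
    ranks = []
    for i, si in enumerate(scores):
        r = 1.0
        for j, sj in enumerate(scores):
            if i != j:
                r += 1.0 / (1.0 + math.exp(-(sj - si) / temperature))
        ranks.append(r)
    return ranks

def approx_ndcg_loss(scores, relevances, temperature=0.1):
    """Negative approximate NDCG (lower is better when scores agree with relevance)."""
    ranks = approx_ranks(scores, temperature)
    dcg = sum((2 ** rel - 1) / math.log2(1 + r) for rel, r in zip(relevances, ranks))
    ideal = sum((2 ** rel - 1) / math.log2(2 + i)
                for i, rel in enumerate(sorted(relevances, reverse=True)))
    return -dcg / ideal

rel = [3, 2, 1, 0]
good = approx_ndcg_loss([0.9, 0.6, 0.3, 0.1], rel)  # scores agree with relevance
bad = approx_ndcg_loss([0.1, 0.3, 0.6, 0.9], rel)   # scores reversed
print(good < bad)
```

Because the smoothed ranks are differentiable in the scores, the same quantity can serve as a training loss for a representation network under any task that can be phrased as ranking.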
[1251] vixra:2103.0183 [pdf]
First Principles of Consistent Physics
For a consistent picture of fundamental physics and cosmology, three first principles are proposed as the foundations: the quantum variational principle that provides the formalism, the consistent observation principle that sets physical constraints and symmetries, and the spacetime inflation principle that determines the physical contents (particle fields and interactions). Under these three principles, a series of supersymmetric mirror models are constructed to study various phases of the universe at different spacetime dimensions and the dynamics between the phases. In particular, mirror symmetry, as the orientation symmetry of the underlying geometry, plays a critical role in the new framework.
[1252] vixra:2103.0180 [pdf]
Going to the Root of Quantum Measurement Problem
We cannot think of measurement devices as if they were a part of nature. Why? Because doing so violates the definition of nature, producing the tautology: nature is what measures nature.
[1253] vixra:2103.0175 [pdf]
Webster's Universal Spanish-English Dictionary, the Graphical law and A Dictionary of Geography of Oxford University Press
We study Webster's Universal Spanish-English Dictionary by Geddes and Grosset. We plot the natural logarithm of the number of entries beginning with a letter, normalised, against the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H$=0), i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours in the absence of an external magnetic field. Here H is the external magnetic field, $\beta$ is $\frac{1}{k_{B}T}$, T is the temperature, and $k_{B}$ is the Boltzmann constant. Moreover, we compare the Spanish language with two other languages, Basque and Romanian, respectively. On top of that, we compare the Spanish-English Dictionary with A Dictionary of Geography of Oxford University Press by Susan Mayhew and find a tantalizing similarity between Spanish and the jargon of Geography.
[1254] vixra:2103.0174 [pdf]
Explaining Representation by Mutual Information
Science is used to discover the laws of the world. Machine learning can be used to discover the laws of data. In recent years, there has been more and more research about interpretability in the machine learning community. We hope that machine learning methods are safe and interpretable, and that they can help us find meaningful patterns in data. In this paper, we focus on the interpretability of deep representations. We propose an interpretable method of representation based on mutual information, which summarizes the interpretation of a representation into three types of information between the input data and the representation. We further propose the MI-LR module, which can be inserted into a model to estimate the amount of information, to explain the model's representation. Finally, we verify the method through the visualization of the prototype network.
[1255] vixra:2103.0173 [pdf]
Datasailr an R Package for Row by Row Data Processing, Using Datasailr Script
Data processing and data cleaning are essential steps before applying statistical or machine learning procedures. R provides a flexible way of processing data using vectors. R packages also provide other ways of manipulating data, such as using SQL and using chained functions. I present yet another way to process data in a row-by-row manner using a data-manipulation-oriented script, the DataSailr script. This article introduces the datasailr package and shows the potential benefits of using a domain-specific language for data processing.
[1256] vixra:2103.0160 [pdf]
The Inverse Tangent and Cotangent Functions, their Addition Formulas and their Values on their Branch Cuts
The principal inverse tangent and cotangent functions for complex arguments can be defined as formulas involving principal natural logarithms, but these are not odd on the imaginary axis, which they must be according to their definitions as inverse functions. These formulas are therefore modified in such a way that they become odd on the imaginary axis, by choosing the other branch on the lower branch cut, and the corresponding addition formulas for complex and real arguments are derived. With these addition formulas their values on their branch cuts are determined, confirming these modified formulas. Some new formulas for the (hyperbolic) inverse tangent and cotangent functions for complex arguments and some new addition formulas for these functions for real arguments are derived. Some new formulas for the inverse sine and cosine functions and their connections with the inverse tangent and cotangent functions for complex arguments are provided, and from these some new addition formulas for the inverse sine and cosine functions for real arguments are derived. Some duplication and bisection formulas for the inverse tangent, cotangent, sine and cosine functions are derived.
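The non-oddness on the imaginary axis that motivates the paper can be checked numerically. Assuming the standard principal-logarithm formula arctan z = (1/(2i)) Log((1 + iz)/(1 - iz)) with Im Log in (-pi, pi], the sketch below shows that on the branch cut (pure imaginary z with |z| > 1) the formula fails to satisfy arctan(-z) = -arctan(z):

```python
import cmath

def arctan_principal(z):
    """Principal arctan via the principal logarithm:
    arctan z = (1/(2i)) Log((1 + iz)/(1 - iz)), Im Log in (-pi, pi]."""
    w = (1 + 1j * z) / (1 - 1j * z)
    if w.imag == 0:
        w = complex(w.real, 0.0)  # force +0 so Log takes the upper edge on its cut
    return (1 / 2j) * cmath.log(w)

# On the branch cut (imaginary axis with |Im z| > 1) the formula is not odd:
z = 2j
a = arctan_principal(z)
b = -arctan_principal(-z)
print(a, b)  # real parts are +pi/2 and -pi/2: arctan(z) != -arctan(-z) here
```

Both values share the imaginary part (ln 3)/2, but the real parts differ by pi, which is exactly the defect the paper's modified formulas (choosing the other branch on the lower cut) are designed to remove.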
[1257] vixra:2103.0158 [pdf]
The Collatz Conjecture and the Quantum Mechanical Harmonic Oscillator
By establishing a dictionary between the QM harmonic oscillator and the Collatz process, we reveal very important clues as to why the Collatz conjecture most likely is true. The dictionary requires expanding any integer $ n $ into a binary basis (bits) $ n = \sum a_{nl} 2^l $ ($l$ ranges from $ 0 $ to $ N - 1$), which allows one to find the correspondence between every integer $ n $ and the state $ | \Psi_n \rangle $, obtained by a superposition of bit states $ | l \rangle $, which are related to the energy eigenstates of the QM harmonic oscillator. In doing so, one can then construct the one-to-one correspondence between the Collatz iterations of numbers $ n \rightarrow { n \over 2 }$ ($n$ even); $ n \rightarrow 3 n + 1$ ($n$ odd) and the operators $ {\bf L}_{ { n \over 2} }; { \bf L}_{ 3 n + 1 } $, which map $ \Psi_n $ to $ \Psi_{ { n \over 2 } }$, or to $ \Psi_{ 3 n + 1 } $, respectively, and which are constructed explicitly in terms of the creation $ {\bf a}^\dagger$, annihilation $ {\bf a }$, and unit operator $ { \bf 1 } $ of the QM harmonic oscillator. A rigorous analysis reveals that the Collatz conjecture is most likely true if the composition of a chain of $ {\bf L}_{ { n \over 2} }; { \bf L}_{ 3 n + 1 } $ operators (written as $ L_*$ in condensed notation) leads to the null-eigenfunction conditions $ ( {\bf L_* L_* \ldots L_* } - {\cal P } ) \Psi_n = 0 $, where $ {\cal P} $ is the operator that $projects$ any state $ \Psi_n $ into the ground state $ \Psi_1 \equiv | 0 \rangle $ representing the zero bit state $ | 0 \rangle$ (since $2^0 = 1$). In essence, one has a realization of the integer/state correspondence typical of QM, such that the Collatz paths from $ n $ to $ 1$ are encoded in terms of quantum transitions among the states $ \Psi_n$, leading effectively to an overall downward cascade to $ \Psi_1$.
The QM oscillator approach explains naturally why the Collatz conjecture fails for negative integers because there are no states below the ground state.
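The Collatz iteration itself, and its failure for negative integers, is easy to check directly. The sketch below is an independent numerical illustration (not the operator formalism): it follows the map n -> n/2 (n even), n -> 3n + 1 (n odd), showing that 27 descends to 1 while -1 falls into the cycle -1 -> -2 -> -1.

```python
def collatz_path(n, max_steps=10_000):
    """Iterate n -> n/2 (even), n -> 3n+1 (odd) until 1 is reached,
    a value repeats (a cycle), or max_steps is exceeded."""
    seen, path = set(), [n]
    while n != 1 and n not in seen and len(path) < max_steps:
        seen.add(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        path.append(n)
    return path

p27 = collatz_path(27)
print(len(p27))         # 27 reaches 1 after 111 steps (112 entries)
print(collatz_path(-1)) # -> [-1, -2, -1]: a cycle, never reaching 1
```

The negative-integer cycle mirrors the paper's observation that the downward cascade has no analogue below the ground state.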
[1258] vixra:2103.0151 [pdf]
Effect of Particle Trapping on Frost Heaving Soils
A model of freezing soils is developed that accounts for the dependence of the frost heave rate on particle trapping. At sufficiently low cooling rates the soil experiences primary frost heave with a single growing ice lens that rejects all soil particles. At higher cooling rates ice lenses start to engulf the largest soil particles and the rate of segregation heave is reduced. At the highest freezing rates all particles are engulfed by the ice and the pore water freezes in situ. A new kinetic expression for the segregation potential of the soil is obtained that accounts for particle trapping. Using this expression a simple transient frost heave model is developed and compared with experimental data.
[1259] vixra:2103.0150 [pdf]
On the Infinitude of Sophie Germain Primes
In this paper we obtain the estimate \begin{align} \# \left \{p\leq x~|~2p+1,p\in \mathbb{P}\right \}\geq (1+o(1))\frac{\mathcal{D}}{(2+2\log 2)}\frac{x}{\log^2x}\nonumber \end{align}where $\mathbb{P}$ is the set of all prime numbers and $\mathcal{D}\geq 1$. This proves that there are infinitely many primes $p\in \mathbb{P}$ such that $2p+1\in \mathbb{P}$ is also prime.
[1260] vixra:2103.0149 [pdf]
Classical Doppler Shift Explains the Michelson-Morley Null Result
Here we review Michelson-Morley’s original analysis of their interferometer experiment and discuss its use of optical distance. We derive a formula for transverse Doppler shift from geometric considerations, apply this to the Michelson-Morley interferometer, and present a phase analysis for the experiment. Furthermore, we present an equation for Doppler shift at a general angle and use this to derive the null phase shift result for round-trip interferometer paths at any arbitrary angle. We do not dispute the validity of the null result, nor the prediction of an arrival-time difference for the transverse and longitudinal arms of the interferometer; rather, we challenge the implicit assumption that an arrival time difference will necessarily result in an observable fringe shift.
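The arrival-time difference between the two arms that the abstract refers to is the standard classical (ether-frame) second-order result. The sketch below uses assumed, illustrative parameters (an effective optical arm length of 11 m and Earth's orbital speed of 30 km/s), not figures from the paper:

```python
import math

def arm_round_trip_times(L, v, c=299_792_458.0):
    """Classical ether-frame round-trip times for the longitudinal and
    transverse arms of a Michelson interferometer with arm length L,
    moving at speed v through the hypothetical medium."""
    beta2 = (v / c) ** 2
    t_long = (2 * L / c) / (1 - beta2)            # along the motion
    t_trans = (2 * L / c) / math.sqrt(1 - beta2)  # across the motion
    return t_long, t_trans

t_long, t_trans = arm_round_trip_times(11.0, 30_000.0)
dt = t_long - t_trans
print(dt)  # ~3.7e-16 s, second order in v/c: approximately (L/c) * (v/c)^2
```

The predicted difference is second order in v/c, which is why the classical expectation was a small but nominally observable fringe shift; the paper's point is to separate this arrival-time prediction from the observability of the fringe shift itself.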
[1261] vixra:2103.0148 [pdf]
New Ordinal Relative Fuzzy Entropy
In real life, occurrences of a series of things are supposed to come in an order. Therefore, it is necessary to regard sequence as a crucial factor in managing different kinds of things in a fuzzy environment. However, few related studies have provided a reasonable solution to this demand. Therefore, how to measure the degree of uncertainty of ordinal fuzzy sets is still an open issue. To address this issue, a novel ordinal relative fuzzy entropy is proposed in this paper, taking the order of propositions into consideration in measuring the level of uncertainty in a fuzzy environment. Compared with previously proposed entropies, the effects on degrees of fuzzy uncertainty brought by the order of sequential propositions are embodied in the values measured using the method proposed in this article. Moreover, some numerical examples are offered to verify the correctness and validity of the proposed entropy.
[1262] vixra:2103.0147 [pdf]
The Equation of Life
This study will first define the "equation of life" via the principle of least action. Then the paper will show how this "equation of life" can be used to derive smaller equations, involving transcription and translation, for [computer] modeling and simulation of a cell. The conclusion will provide a terse description of its uses in the realm of Systems Biology. CORRECTION: the second terms on the right side of both equations 2.4 and 2.5 should be preceded by a minus sign, not a plus symbol. Also, the last term on the right side of equation 2.5, the oligo Lagrangian, should also be preceded by a minus sign, not a plus symbol.
[1263] vixra:2103.0136 [pdf]
Numbers of Goldbach Conjecture Occurence in Every Even Numbers
This paper proposes a proof of the Goldbach conjecture by using a function such that the number of occurrences of conjecture solutions in any even number can be estimated. The function is sketched after the Sieve of Eratosthenes under a modulo term, such that the function fulfils a prime sub-condition in closed intervals.
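The "number of occurrences of conjecture solutions" in an even number can be computed directly for small cases. The following is an independent brute-force sketch, not the paper's sieve-based function:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_count(n):
    """Number of unordered prime pairs (p, q), p <= q, with p + q = n (n even, n > 2)."""
    assert n % 2 == 0 and n > 2
    return sum(1 for p in range(2, n // 2 + 1) if is_prime(p) and is_prime(n - p))

print(goldbach_count(100))  # -> 6: (3,97), (11,89), (17,83), (29,71), (41,59), (47,53)
```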
[1264] vixra:2103.0134 [pdf]
Interpretation of Nuclear Decay
The interpretation of nuclear decay is based on the structure of the nuclei, that is, on two fundamental phenomena: the inverse electric field of the proton and the electric entity of the macroscopically neutral neutron. At the nuclear scale the neutron behaves as a positively charged particle due to the negative surface charge q=−0.685e, which creates an inverse electric field of positive potential (internal field) as a cloud of positive electrical units. Outside the nucleus the neutron decays with a half-life of 12 min. However, the inverse electric field of the nucleus is considered to be the refuge of the neutron's salvation. A neutron's decay β− can occur due to the above electrical entity of neutrons and due to their synod (session) in the nucleus, with which the negativity of the nucleus decreases due to the positive hill created by the neutrons' synod. A proton's decay β+ can occur when the produced proton of a beta decay β− is immersed in a very negative potential of the nuclear field. Also, a neutrons' synod can cause potential imbalance, increasing the negativity of the field's region from which the neutrons were removed, resulting in a beta decay β+. It is noted that the exit of the positron e+ takes place through the canyon created by the neutrons' synod. Alpha decay occurs in radioactive nuclei, for example in the nucleus uranium-235, which emits an α particle of energy 4 MeV that is called to jump over the 27 MeV potential barrier of uranium-235. The above nuclear decay procedures act as a balance for the potential of the nucleus. This is an excellent compensatory mechanism for maintaining the stability of the nuclei.
[1265] vixra:2103.0132 [pdf]
An Hypothesis for Mass Dependence on Radial Distance, with Novel Cosmological Implications for the Early Universe as Well as Dark Matter
Working from first principles of special relativity and using a version of the equivalence principle, a hypothesis that an object's mass is a function of its distance to other objects is presented. Further, it is posited that there is a correlation between this mass variation and gravitational time dilation in general relativity. Additionally, due to the Schwarzschild metric, the mass variation also distorts the spatial components of the metric, which contributes to gravitational lensing. Hence this approach could be used to explain the unseen mass increase inferred from gravitational lensing in current dark matter exploration. This leads naturally to the hypothesis of rapid clumping in the early universe to produce the 'cosmic web' and the dark matter halos necessary for galaxy formation. Certain electrically neutral MACHOs are suggested. These might include possible boson stars or primordial black holes. The radio emission spectrum would also be expected to be higher than average for various dark matter regions. We also expect that the theory is consistent with the observation that not all galaxies exhibit dark matter, since not all galaxy clumping originated from the cosmic web. We propose mathematically how the theory can fit naturally with Einstein's field equations. Finally, we propose a simple principle for terrestrial measurement of the theory.
[1266] vixra:2103.0099 [pdf]
Chordal Bipartite Graphs Are Rank Determined
A partial matrix A is a rectangular array with entries in F ∪ {∗}, where F is the ground field, and ∗ is a placeholder symbol for entries which are not specified. The minimum rank mr(A) is the smallest value of the ranks of all matrices obtained from A by replacing the ∗ symbols with arbitrary elements in F. For any bipartite graph G with vertices (U, V), one defines the set M(G) of partial matrices in which the row indexes are in U, the column indexes are in V, and the (u, v) entry is specified if and only if u, v are adjacent in G. We prove that, if G is chordal bipartite, then the minimum rank of any matrix in M(G) is determined by the ranks of its fully specified submatrices. This result was conjectured by Cohen, Johnson, Rodman, Woerdeman in 1989.
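A tiny numerical illustration of the statement, on an assumed 2x2 example of my own over the reals: the bipartite graph on rows {u1, u2} and columns {v1, v2} with the single non-edge (u2, v2) is chordal bipartite, every fully specified submatrix of the associated partial matrix has rank 1, and a brute-force search over completions of the one * entry confirms that the minimum rank is also 1.

```python
import numpy as np

# Partial matrix for the graph with non-edge (u2, v2); None marks the * entry:
partial = [[1.0, 2.0],
           [2.0, None]]

def min_rank_grid(partial, grid):
    """Brute-force the minimum rank over a grid of values for the single * entry."""
    best = None
    for x in grid:
        filled = np.array([[x if v is None else v for v in row] for row in partial])
        r = np.linalg.matrix_rank(filled)
        best = r if best is None else min(best, r)
    return best

# All fully specified submatrices (single rows, columns, entries) have rank 1,
# and the minimum rank over completions is also 1, attained at * = 4:
mr = min_rank_grid(partial, np.linspace(-5.0, 5.0, 101))
print(mr)  # -> 1
```

A grid search stands in here for the exact algebraic minimization; it suffices because the rank can only drop on a measure-zero set that the grid happens to hit (at * = 4 the matrix is the rank-one outer product (1,2)^T(1,2)).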
[1267] vixra:2103.0093 [pdf]
Stabilized Quantum Field Theory
An analysis of the action of elementary charges on the vacuum leads to a resolution of divergence issues in QFT without mass and charge renormalization. For an irreducible self-interaction amplitude $\Omega$, infinite field actions split the vacuum into positive and negative self-energy components such that its net mass-energy remains zero for free particles. For each particle mass in a loop, two dressed mass states including vacuum energy are constructed for fermion and boson self-energy processes. For electroweak interactions, the stabilized amplitude $\hat{\Omega} = \Omega - \bar{\Omega}$ includes a correction for a vacuum energy deficit within a point-like, near-field region, where $\bar{\Omega}$ is given by an average of $\Omega$ over dressed mass levels. For QCD, strong interactions redistribute vacuum energy so that there is an energy surplus in the near-field with a corresponding deficit in the confinement region, resulting in a sign reversal of $\hat{\Omega}$ relative to QED and asymptotic freedom. Stabilized amplitudes agree with renormalization for radiative corrections in Abelian and non-Abelian gauge theories. Renormalization is only required in standard QFT because it neglects near-field vacuum energy changes in violation of energy conservation.
[1268] vixra:2103.0092 [pdf]
An Idea of Fermat for the Stop and Division by Zero Calculus
In this note we will consider an idea of Fermat for the stop in connection with the division by zero calculus. Here, in particular, we will see some mysterious logic on the stop in connection with the concepts of differential and differential coefficient.
[1269] vixra:2103.0085 [pdf]
A Simple Criteria of Prime Numbers
In this short note, we will propose a simple criterion for prime numbers, and our method seems to be practical. Our idea has some connection with the famous Goldbach conjecture.
[1270] vixra:2103.0053 [pdf]
The Theory of Cells
In this paper we introduce and develop the notion of universes, induced communities, and cells with their corresponding spots. We study the concepts of the density and the mass of communities, the concentration of spots in a typical cell, connectedness, and the rotation of communities. In each case we establish the connections that exist among these notions. We also formulate the celebrated union-closed sets conjecture in the language of the density of spots and the mass of a typical community.
[1271] vixra:2103.0051 [pdf]
Natural Units' Collision Space-Time: Maximum Simplified Theory that Fits Observations
We have recently [1, 2] shown a possible method to unify gravity and quantum mechanics in a simple way that we have called collision space-time. Here, we demonstrate a special version of our theory when we set l_p = 1 and c = 1. Mass, energy, Compton momentum, and the Schwarzschild radius are then all identical, and simply a collision frequency. A frequency below one is not observable and can be interpreted as a frequency quantum probability. One could easily make the mistake of thinking this is simply setting G = hbar = c = 1 (the Planck natural unit system); however, this would be inaccurate, as we do not need either G or hbar in our system, even when not setting them to one. Furthermore, we can find the Planck length totally independently of G and hbar, for any standardised length unit chosen. Setting c = 1 simply means one links space and time through the speed of light, and setting l_p = 1 means one selects the Planck length as the fundamental length unit; the Planck length, we have argued, is the diameter of an indivisible particle. One of the beauties of our theory is that, in the output of many formulas we obtain from our theory, the integer part represents real observations (collisions) and the fractions represent quantum probabilities. Therefore, we could say there is also almost a unification between numbers and physics, not only a unification of gravity and quantum mechanics.
[1272] vixra:2103.0039 [pdf]
History of the Division by Zero Calculus
Today is the 7th birthday of the division by zero calculus, as stated in detail in Announcement 456 (2018.10.15) of the Institute of Reproducing Kernels, and the book was published recently. We recall briefly a history of the division by zero calculus. Division by zero has a long and mysterious history since the origins of mathematics by Euclid and Brahmagupta. We will see that these topics are very important in mathematics; however, they involved serious problems, namely the point at infinity and the division by zero, respectively.
[1273] vixra:2103.0029 [pdf]
A Theorem on the Number of Distinct Eigenvalues
A theorem on the number of distinct eigenvalues of diagonalizable matrices is obtained. Some applications related to matrices with simple eigenvalues, triangular defective matrices, adjacency matrices and graphs are discussed. Other ideas and examples are provided.
[1274] vixra:2103.0026 [pdf]
Bound State
Relativistic solutions of the bound state problem for the hydrogen atom and one electron ions using the uncorrected Coulomb potential and comparing those results with ones using the correct physical potential reveals that relativity’s gamma in the quantum bound state takes on values less than one. This also explains the physical origin of the Bose-Einstein and Fermi-Dirac statistics for bound state particles.
[1275] vixra:2103.0018 [pdf]
On the Computation of the Principal Constants $d_{2}$ and $d_{3}$ Used to Construct Control Limits for Control Charts Applied in Statistical Process Control
In this communication a short and straightforward algorithm, written in Octave (version 6.1.0 (2020-11-26))/Matlab (version '9.9.0.1538559 (R2020b) Update 3'), is proposed for brute-force computation of the principal constants $d_{2}$ and $d_{3}$ used to calculate control limits for various types of variables control charts encountered in statistical process control (SPC).
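The brute-force idea translates directly: d2 and d3 are, respectively, the mean and the standard deviation of the range of n independent standard-normal observations. The communication's code is in Octave/Matlab; the sketch below is an equivalent Monte Carlo estimate in Python, where the subgroup count and seed are illustrative choices.

```python
import numpy as np

def d2_d3(n, subgroups=200_000, seed=0):
    """Monte Carlo estimates of the SPC constants d2 and d3: the mean and
    standard deviation of the range of n standard-normal draws."""
    rng = np.random.default_rng(seed)
    samples = rng.standard_normal((subgroups, n))
    ranges = samples.max(axis=1) - samples.min(axis=1)
    return ranges.mean(), ranges.std()

d2, d3 = d2_d3(5)
print(round(d2, 3), round(d3, 3))  # tabulated values for n = 5: d2 = 2.326, d3 = 0.864
```

With 200,000 subgroups the standard error is on the order of 0.002, so the estimates agree with the published control-chart tables to about two decimal places.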
[1276] vixra:2103.0015 [pdf]
Algorithmic Simulations of Hydrocortisone-Induced Degeneration
The Elk River, Gulf Wars, and Atlantic oxybenzone instances proved that chemical contamination is a serious threat. HC is a common corticosteroid used to treat skin lesions that has not undergone such testing. This research utilizes real plant cultivation, mathematical modeling, machine learning, and custom Python3 programming to determine the effect of HC on plant life. Metadata significant at a 0.01 level under 48 degrees of freedom was gathered by cultivating real Raphanus sativus plants treated with various levels of HC-contaminated water; the t-test was verified using Java8 computer code. The simulation consisted of three independent algorithms. Because the metadata was very sparse, the first algorithm was dedicated to using modified bootstrapping to iteratively generate synthetic data points with less entropy. The second algorithm was a deep learning substructure incorporating concepts of linear regression and neural networks; the backpropagation used the MSE loss function and the ADAM optimizer, which has aspects of the previously engineered AdaGrad and RMSProp models. The third algorithm measured the effect of HC on a terrain based on aquatic content. The simulations used an HC contaminator based on the Gulf Wars event and five semi-aquatic terrains in North America, which were the contaminator and terrestrial parameters, respectively. The plant cultivation and software simulations supported the research hypothesis that the uncontaminated plants would be the healthiest. Future research could use convolutional neural networks to achieve a higher R-squared value and real-time procedural terrain generation to improve the simulations.
[1277] vixra:2103.0008 [pdf]
Compressed Particle Methods for Expensive Models with Application in Astronomy and Remote Sensing
In many inference problems, the evaluation of complex and costly models is often required. In this context, Bayesian methods have become very popular in several fields over the last years, in order to obtain parameter inversion, model selection or uncertainty quantification. Bayesian inference requires the approximation of complicated integrals involving (often costly) posterior distributions. Generally, this approximation is obtained by means of Monte Carlo (MC) methods. In order to reduce the computational cost of the corresponding technique, surrogate models (also called emulators) are often employed. Another alternative approach is the so-called Approximate Bayesian Computation (ABC) scheme. ABC does not require the evaluation of the costly model but the ability to simulate artificial data according to that model. Moreover, in ABC, the choice of a suitable distance between real and artificial data is also required. In this work, we introduce a novel approach where the expensive model is evaluated only in some well-chosen samples. The selection of these nodes is based on the so-called compressed Monte Carlo (CMC) scheme. We provide theoretical results supporting the novel algorithms and give empirical evidence of the performance of the proposed method in several numerical experiments. Two of them are real-world applications in astronomy and satellite remote sensing.
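The ABC scheme mentioned in the abstract can be sketched in a few lines: draw parameters from the prior, simulate artificial data under the model, and keep the draws whose simulated data fall within eps of the observation under a chosen distance. The toy model below (flat prior on a Gaussian mean, sample-mean summary statistic) is an illustrative assumption, not the paper's astronomy or remote-sensing application:

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, distance, eps, trials=20_000):
    """Approximate Bayesian Computation by rejection: keep prior draws whose
    simulated data lie within eps of the observed data (no likelihood needed)."""
    accepted = []
    for _ in range(trials):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(0)
observed = 2.0  # toy observed sample mean
posterior = abc_rejection(
    observed,
    prior_sample=lambda: random.uniform(-5, 5),  # flat prior on the mean
    simulate=lambda mu: statistics.fmean(random.gauss(mu, 1.0) for _ in range(20)),
    distance=lambda a, b: abs(a - b),
    eps=0.2,
)
print(len(posterior), statistics.fmean(posterior))  # accepted draws concentrate near 2.0
```

Each accepted draw costs a full model simulation, which is exactly the expense that the compressed Monte Carlo node selection proposed in the paper aims to reduce.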
[1278] vixra:2103.0006 [pdf]
Towards an Einsteinian Quantum Mechanics
A rational approach to understanding quantum mechanics is presented in which one is able to account for the observation that two spinning particles, irrespective of their space-time separation, can be correlated in EPR-like experiments.
[1279] vixra:2103.0002 [pdf]
Photon Induced Low Energy Nuclear Reactions
We propose a new mechanism for inducing low energy nuclear reactions (LENRs). The process is initiated by a perturbation which we assume is caused by an external photon. The initial two-body nuclear state absorbs the photon and forms an intermediate state which makes a transition into the final nuclear state with emission of a light particle, taken in the present paper to be a photon. We need to sum over all energies of the intermediate state. Since the energy of this state is unconstrained, we get contributions from very high energies for which the barrier penetration factor is not too small. The contribution from such high energy states is typically suppressed by the large energy denominators and by its matrix element with the initial state. Furthermore, the process is of higher order in perturbation theory compared with the standard fusion process. However, these factors are relatively mild compared to the strong suppression due to the barrier penetration factor at low energies. By considering a specific reaction, we find that its cross section is higher than that of the standard process by a factor of 10^41 or more. This enhancement makes LENRs observable in the laboratory even at relatively low energies. Hence we argue that LENRs are possible, and we provide a theoretical setup which may explain some of the experimental claims in this field.
[1280] vixra:2102.0172 [pdf]
Modeling that Matches, Augments, and Unites Data About Physics Properties, Elementary Particles, Cosmology, and Astrophysics
This essay shows modeling that - across four facets of physics - matches and predicts data. The facets are elementary particles, properties of elementary particles and other objects, cosmology, and astrophysics. Regarding elementary particles, our modeling matches all known particles and suggests new particles. New particles include zero-charge quark-like particles, a graviton, an inflaton, and other elementary particles. Some models split gravitational fields in ways similar to the splitting of electromagnetic fields into electric fields and magnetic fields. Regarding properties, our modeling suggests a new property - isomer. An isomer is a near copy of a set of most elementary particles. Our modeling includes a parameter that catalogs charge, mass, spin, and other properties. Regarding cosmology and astrophysics, the elementary particles and the new property seem to explain dark matter. Most dark matter has bases in five new isomers of the Standard Model elementary particles. More than eighty percent of dark matter is cold dark matter. Some dark matter has similarities to ordinary matter. Regarding cosmology, our modeling points to a basis for the size of recent increases in the rate of expansion of the universe. Our modeling suggests five eras in the evolution of the universe. Two eras would precede inflation. Regarding astrophysics, our modeling explains ratios of dark matter to ordinary matter. One ratio pertains to densities of the universe. Some ratios pertain to galaxy clusters. Some ratios pertain to galaxies. One ratio pertains to depletion of cosmic microwave background radiation. The modeling seems to offer insight about galaxy formation. That our work seems to explain cosmology data and astrophysics data might confirm some of our work regarding properties and elementary particles. Our modeling has roots in discrete mathematics. Our modeling unites itself and widely-used physics modeling.
[1281] vixra:2102.0163 [pdf]
A Generalization of Vajda's Identity for Fibonacci and Lucas Numbers
In this paper, we present two identities involving Fibonacci numbers and Lucas numbers. The first identity generalizes Vajda's identity, which in turn generalizes Catalan's identity, while the second identity is a corresponding result involving Lucas numbers. Binet's formulas for generating the nth term of Fibonacci numbers and Lucas numbers will be used in proving the identities.
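The abstract does not reproduce the identities themselves; as a sketch, the classical Vajda identity the paper generalizes, $F_{n+i}F_{n+j} - F_nF_{n+i+j} = (-1)^nF_iF_j$, can be checked numerically (the iterative Fibonacci helper is my own illustration, not taken from the paper):

```python
def fib(n):
    # iterative Fibonacci with F(0) = 0, F(1) = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def vajda_holds(n, i, j):
    # Vajda: F(n+i)F(n+j) - F(n)F(n+i+j) = (-1)^n F(i)F(j)
    lhs = fib(n + i) * fib(n + j) - fib(n) * fib(n + i + j)
    rhs = (-1) ** n * fib(i) * fib(j)
    return lhs == rhs

# exhaustive check over a small grid of shifts
assert all(vajda_holds(n, i, j)
           for n in range(1, 15) for i in range(10) for j in range(10))
```

Catalan's identity, which Vajda's generalizes, corresponds to the choice j = -i.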
[1282] vixra:2102.0161 [pdf]
A Proof of Lemoine's Conjecture by Circles of Partition
In this paper we use a new method to study problems in additive number theory. We leverage this method to prove the Lemoine conjecture, a closely related problem to the binary Goldbach conjecture. In particular, we show by using the notion of circles of partition that for all odd numbers $n\geq 9$ holds \begin{align*} n=p+2q\mbox{ for not necessarily different primes }p,q. \end{align*}
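The statement $n = p + 2q$ is easy to exercise by brute force; a small sketch (the bound $10^4$ and the sieve are illustrative, not from the paper) finds a witness pair for every odd $n \geq 9$ in range:

```python
def primes_upto(n):
    # simple sieve of Eratosthenes; returns the prime list and the boolean sieve
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p], sieve

def lemoine_witness(n, primes, sieve):
    # find primes p, q with n = p + 2q (p and q need not be distinct)
    for q in primes:
        if 2 * q >= n:
            break
        p = n - 2 * q
        if sieve[p]:
            return p, q
    return None

primes, sieve = primes_upto(10000)
assert all(lemoine_witness(n, primes, sieve) is not None
           for n in range(9, 10000, 2))
```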
[1283] vixra:2102.0157 [pdf]
Effects of Political Choices in the Greek Mortality Rate
In this short article an attempt is made to look more closely at the mortality rate in Greece during the current century. An excess is found from 2011 until today that is not consistent with a statistical fluctuation.
[1284] vixra:2102.0140 [pdf]
When G and M are Understood from a Deeper Perspective it Looks Like the Newtonian Field Equation Contains Time Dynamics
An argument often used to show that the Newtonian speed of gravity is infinite is that the Newtonian field equation (rooted in the Poisson equation) has no time derivative of the gravitational potential. However, since the Newton gravitational constant G has not been well understood until recently, and since mass in the gravity formula has only been understood at the surface level, we will demonstrate that, viewed from a deeper perspective, there is likely a concealed time derivative of the gravitational potential in the Newtonian field equation. This supports our recent claim that the Newtonian speed of gravity is consistent with gravity moving at the speed of light, not by assumption but by calibration, and in a way that does not conflict with the equations one can derive from Newtonian theory.
[1285] vixra:2102.0138 [pdf]
Asymptotic Analysis of the SIR Model: Applications to COVID-19 Modelling
The SIR (Susceptible-Infected-Removed) model can be very useful in modelling epidemic outbreaks. The present paper derives the parametric solution of the model in terms of quadratures. The paper demonstrates a simple analytical asymptotic solution for the I-variable, which is valid on the entire real line. Moreover, the solution can be used successfully for parametric estimation either in stand-alone mode or as a preliminary step in the parametric estimation using numerical inversion of the parametric solution. The approach is applied to the ongoing coronavirus disease 2019 (COVID-19) pandemic in three European countries -- Belgium, Italy and Sweden.
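The paper works with an analytic parametric solution; as a rough numerical companion, the standard SIR equations $\dot S = -\beta SI$, $\dot I = \beta SI - \gamma I$, $\dot R = \gamma I$ can be integrated directly (the parameter values below are illustrative only, not estimates from the paper):

```python
def sir(beta, gamma, s0, i0, days, dt=0.01):
    # forward-Euler integration of the normalized SIR model:
    #   dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  R = 1 - S - I
    s, i = s0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    return s, i, 1.0 - s - i, peak

# illustrative parameters giving R0 = beta/gamma = 2.5
s, i, r, peak = sir(beta=0.5, gamma=0.2, s0=0.999, i0=0.001, days=365)
assert i < 1e-3          # epidemic has died out
assert 0.05 < s < 0.2    # final susceptible fraction near the final-size solution
assert 0.15 < peak < 0.3 # peak prevalence for R0 = 2.5
```

The final susceptible fraction agrees with the classical final-size relation $s_\infty = s_0 e^{-R_0(1-s_\infty)}$, which for $R_0 = 2.5$ gives $s_\infty \approx 0.11$.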
[1286] vixra:2102.0136 [pdf]
Division by Zero Calculus and Hyper Exponential Functions by K. Uchida
In this paper, we consider the basic relations between the normal solutions (hyper exponential functions by K. Uchida) of ordinary differential equations and the division by zero calculus. In particular, through the concept of division by zero calculus, we extend the concept of Uchida's hyper exponential functions by considering equations and solutions admitting singularities. Surprisingly enough, by this extension, any analytic function with any singularities may be considered as one of Uchida's hyper exponential functions. Here, we consider very concrete examples as prototypes.
[1287] vixra:2102.0130 [pdf]
Brown Effect: The Experimental Proof
The article describes an experiment for detecting the Brown effect (also known as the Biefeld–Brown effect). The essence of the effect is that a capacitor under high voltage develops a force. The paper provides measurements and calculations related to the conducted experiment, as well as an explanation of the origin of the force as induced by aether.
[1288] vixra:2102.0127 [pdf]
Unexplored Ways the Atomic Nucleus May be Bound
The first theories of atomic nuclear cohesion entailed electric forces binding together protons with a few electrons in the nucleus. The 1932 discovery of neutrons destroyed that line of thinking. The evidence suggested a new fundamental force of nature characterized by operation on both protons and electrically-neutral neutrons, with a very short range, and overpowering strength. Presented herein are novel and non-obvious structures that show these characteristics could nevertheless be manifestations of the electrical force. Protons and neutrons are now known to each securely contain fractional charges of both signs. If two oppositely-charged fractional charges in neighboring nucleons can get within 5% of a nucleon radius, Coulomb's law predicts they will form an electrical bond strong enough to explain nuclear cohesion. Ironically, such electrical bonds would be characterized by the very phenomena that were thought to rule out the electrical force: participation of neutrons, nucleon-contact distances, and more powerful than overall proton repulsion. Such bonding also predicts saturation at three bonds per nucleon, particularly stable 4-nucleon rings, limited 3D structures of nucleons, and more. If fractional charges had been known in 1932, scientists would have adapted their theories of an electrically-bound nucleus before assuming that they had discovered a new fundamental force of nature.
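The abstract's key quantitative claim, that opposite fractional charges at 5% of a nucleon radius would bind with nuclear-scale energy, can be sanity-checked with Coulomb's law; the charge pairing and the proton charge radius below are my assumptions for illustration, not values from the paper:

```python
KE2 = 1.44             # Coulomb constant times e^2, in MeV*fm
NUCLEON_RADIUS = 0.84  # approximate proton charge radius in fm

def coulomb_energy_mev(q1, q2, r_fm):
    # potential energy of two point charges (in units of e) at separation r
    return KE2 * q1 * q2 / r_fm

# a down-quark charge (-1/3 e) next to an up-quark charge (+2/3 e)
# at 5% of a nucleon radius
r = 0.05 * NUCLEON_RADIUS
bond = coulomb_energy_mev(-1 / 3, 2 / 3, r)
# the magnitude is several MeV, the scale of nuclear binding energy per nucleon
assert -10 < bond < -5
```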
[1289] vixra:2102.0121 [pdf]
Infinity Put to the Test: Towards a Discrete Revolution in the Mathematics of the XXI Century
From different areas of mathematics, such as set theory, geometry, transfinite arithmetic and supertask theory, this book develops more than forty arguments about the inconsistency of the hypothesis of the actual infinity in contemporary mathematics: the hypothesis according to which uncompletable lists, such as the list of the natural numbers, exist as completed lists. The inconsistency of this hypothesis would have an enormous impact on physics, forcing us to exchange the continuum of spacetime for a discrete model, with indivisible units (atoms) of space and time. The discrete model would be a great simplification of physical theories, including relativity and quantum mechanics. It would also provide a solution to the old problem of change, posed by the pre-Socratic philosophers twenty-seven centuries ago.
[1290] vixra:2102.0114 [pdf]
Rotation Without Imaginary Numbers, Transcendental Functions, or Infinite Sums
Quaterns are introduced as a new measure of rotation. Rotation in quaterns has the advantage that only simple algebra is required to convert back and forth between rectangular and polar coordinates that use quaterns as the angle measure. All analogue trigonometric functions also become algebraic when angles are expressed in quaterns. This paper will show how quatern measure can easily be used to approximate trigonometric functions in the first quadrant without recourse to technology, infinite sums, imaginary numbers, or transcendental functions. Using technology, these approximations can be applied to all four quadrants to any degree of accuracy. This will also be shown by approximating u to any degree of accuracy desired without reference to any traditional angle measure at all.
[1291] vixra:2102.0106 [pdf]
The Thermal Photoeffect with the Debye and the Wigner Crystal
We define the photoelectric effect with the specific heat term replacing the work function. The photon propagator involving the radiative correction is also considered. We consider the Debye specific heat for the 3D crystal medium, the specific heat for the 2D medium, and the specific heat for the Wigner crystal.
[1292] vixra:2102.0096 [pdf]
"Relativistic Ring" Simulation: New Approach Resolves Apparent Paradoxes
Recent analysis of the "Relativistic Ring" problem revealed that its angular momentum features a "paradox" maximum at circumferential velocity $\approx 0.24$c, declining to near zero with increasing velocity. This apparent "paradox" can be resolved by a new approach based on simulated high external forces up to the "Weak Energy Limit" in a low-velocity regime $0<v<<c$. Simulations comprise a "Relativistic Rod" in uniform translational motion subjected to a pair of mutually opposed external forces, and a pressurized "Relativistic Ring". If the external forces simulate centrifugal force on a "Relativistic Rod", its canonical momentum features a maximum at velocity $\hat{v} = \sqrt{\frac{2}{3}}$ c, analogous to the maximum canonical angular momentum of a "Relativistic Ring". Remarkably, the rotational velocity of a pressurized "Relativistic Ring" can be modulated by variation of pressure at constant canonical angular momentum.
[1293] vixra:2102.0094 [pdf]
The kth Power Expectile Estimation and Testing
This paper develops the theory of the kth power expectile estimation and considers the relevant hypothesis tests for coefficients of linear regression models. We prove that the asymptotic covariance matrix of the kth power expectile regression converges to that of quantile regression as k converges to one, and hence provide a moment estimator of the asymptotic matrix of quantile regression. The kth power expectile regression is then utilized to test for homoskedasticity and conditional symmetry of the data. Detailed comparisons of the local power among the kth power expectile regression tests, the quantile regression test, and the expectile regression test are provided. When the underlying distribution is not standard normal, results show that the optimal k is often larger than 1 and smaller than 2, which suggests that the general kth power expectile regression is necessary.
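For a scalar location (rather than the paper's full regression setting), the kth power expectile can be computed from the first-order condition of the asymmetric kth power loss; the bisection sketch below is my own illustration, not the paper's estimator:

```python
def kth_expectile(ys, tau, k, tol=1e-10):
    # solves the first-order condition of the asymmetric k-th power loss:
    #   tau * sum((y - m)_+^(k-1)) = (1 - tau) * sum((m - y)_+^(k-1))
    # g(m) below is decreasing in m, so bisection applies;
    # k = 1 recovers the tau-quantile, k = 2 the tau-expectile.
    def g(m):
        pos = sum((y - m) ** (k - 1) for y in ys if y > m)
        neg = sum((m - y) ** (k - 1) for y in ys if y < m)
        return tau * pos - (1 - tau) * neg

    lo, hi = min(ys), max(ys)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ys = [1.0, 2.0, 3.0, 4.0, 10.0]
# for k = 2 and tau = 0.5 the expectile is the sample mean
assert abs(kth_expectile(ys, 0.5, 2) - 4.0) < 1e-6
```

Larger tau shifts the solution toward the upper tail, mirroring the role of the quantile level.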
[1294] vixra:2102.0078 [pdf]
A Proof of the Union-Close Set Conjecture
In this paper we introduce the notion of a universe, induced communities and cells with their corresponding spots. Using this language we formulate and prove the union-closed sets conjecture by showing that for any finite universe $\mathbb{U}$ and any induced community $\mathcal{M}_{\mathbb{U}}$ there exists some spot $a\in \mathbb{U}$ such that the density \begin{align} \mathcal{D}_{\mathcal{M}_{\mathbb{U}}}(a)\geq \frac{1}{2}.\nonumber \end{align}
[1295] vixra:2102.0076 [pdf]
Majorana Fermions in Self-Consistent Effective Hamiltonian Theory
A Majorana fermion solution is obtained from the self-consistent effective Hamiltonian theory. The ground state is conjectured to be a non-empty vacuum with two fermions, one of each type. The first type is the original charged fermion and the second type is the chiral chargeless Majorana fermion. The Majorana fermion is like a shadow of the first fermion cast by the non-empty vacuum.
[1296] vixra:2102.0067 [pdf]
Spin Coherent States, Bell States, Entanglement, Husimi Distribution, Uncertainty Relation, Bell Inequality and Bell Matrix
We study spin coherent states, Bell states, entanglement, Husimi distributions, uncertainty relation, Bell inequality. The distance between these states is also derived. The Bell matrix, spin coherent states and Bell states are also investigated.
[1297] vixra:2102.0065 [pdf]
Effective Hard-Sphere Model of Diffusion in Aqueous Polymer Solutions
An effective hard-sphere model of the diffusion and cross-diffusion of salt in unentangled polymer solutions is developed. Given the viscosity, sedimentation coefficient and osmotic pressure of the polymer, the model predicts the diffusion and cross-diffusion coefficients as functions of the polymer concentration and molecular weight. The results are compared with experimental data on NaCl diffusion in aqueous polyethylene glycol solutions, showing good agreement at polymer molecular weights up to 400\,g/mol. At higher molecular weights the model becomes less accurate, likely because of the effects of entanglement. The tracer Fickian diffusivity can be written in the form of a Stokes-Einstein equation containing the solution viscosity. For NaCl diffusion in polyethylene glycol solutions, the Stokes-Einstein equation breaks down as the polymer size increases. Using Batchelor's viscous correction factor to determine an effective viscosity experienced by the salt ions within the polymer matrix leads to much closer agreement with experiment.
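For reference, the Stokes-Einstein relation mentioned in the abstract is $D = k_B T / (6\pi\eta r)$; a minimal sketch with illustrative values (the abstract itself gives no numbers, so the ion radius and solvent viscosity here are assumptions):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein(temp_k, viscosity_pa_s, radius_m):
    # Fickian tracer diffusivity of a sphere in a continuum solvent
    return KB * temp_k / (6 * math.pi * viscosity_pa_s * radius_m)

# illustrative: an ion of hydrodynamic radius ~0.2 nm in water at 25 C
d = stokes_einstein(298.15, 0.89e-3, 0.2e-9)
# order of magnitude ~1e-9 m^2/s, typical for small ions in water
assert 0.5e-9 < d < 2e-9
```

The abstract's effective-viscosity correction amounts to replacing `viscosity_pa_s` by a local value felt by the ion within the polymer matrix.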
[1298] vixra:2102.0064 [pdf]
Affirmative Resolve of the Riemann Hypothesis
The Riemann Hypothesis has been an unsolved conjecture for more than 160 years. It is the last unproven conjecture in "Ueber die Anzahl der Primzahlen unter einer gegebenen Grosse" (B. Riemann). The statement is that the real part of the non-trivial zeros of the Riemann zeta function is 1/2. Famous and difficult, the conjecture has resisted many mathematicians over the years. In this paper, I address a proposition about the Mobius function equivalent to the Riemann Hypothesis. First, a non-trivial formula for the Mobius function is proved in Theorem 1 and Theorem 2. In Theorem 4, I obtain an upper bound for the sum of the Mobius function (for its relation to the Riemann Hypothesis, see Theorem 4).
[1299] vixra:2102.0058 [pdf]
Compressible Helical Turbulence: Fastened-Structure Geometry and Statistics
Reduction of flow compressibility with the corresponding ideally invariant helicities, universally for various fluid models of neutral and ionized gases, can be argued statistically and associated with the geometrical scenario in the Taylor-Proudman theorem and its analogues. A `chiral base field', rooted in the generic intrinsic local structure, as well as an `equivalence principle' is explained and used to bridge the single-structure mechanics and the helical statistics. The electric field fluctuations may similarly be depressed by the (self-)helicities of the two-fluid plasma model, with the geometry lying in the relation between the electric and density fields in a Maxwell equation.
[1300] vixra:2102.0045 [pdf]
Modeling that Matches and Augments Data About Physics Properties, Elementary Particles, Astrophysics, and Cosmology
This essay suggests advances regarding the following challenges. Describe elementary particles that people have yet to find. Describe dark matter. Explain cosmology and astrophysics data that people have yet to explain. Correlate physics properties with each other. Correlate properties of elementary particles with each other. Show united modeling that leads to the advances.
[1301] vixra:2102.0044 [pdf]
Solution to the Riemann Hypothesis from Geometric Analysis of Component Series Functions in the Functional Equation of Zeta
This paper presents a new approach towards the Riemann Hypothesis. On iterative expansion of the integration term in the functional equation of the Riemann zeta function we get a sum of two series functions. At the 'non-trivial' zeros of the zeta function, the value of the series is zero. Thus, the Riemann Hypothesis is false if that happens for an 's' off the line $\Re(s) = 1/2$ (the critical line). This series has two components, f(s) and f(1 − s). For the hypothesis to be false, one component must be the additive inverse of the other. From geometric analysis of the spiral geometry representing the component series functions f(s) and f(1 − s) on the complex plane, we find by contradiction that they cannot be each other's additive inverse for any s off the critical line, thus proving the truth of the hypothesis.
[1302] vixra:2102.0040 [pdf]
On a Field Theory of Angular Frequency Resulting from the Spin of Particles
Promoting the maxim followed in viXra:2101.0094 and viXra:2102.0029 to the Principle of Force-Property Correspondence, we propose that any spinning particle creates a field in spacetime given by the equation $\square\omega=-\frac{4\pi c}{\hbar}\sigma$ where $\omega$ is the frequency of rotation of the particle, and $\sigma$ spin density per unit volume of space.
[1303] vixra:2102.0033 [pdf]
Derivation of General Doppler Effect Equations (II)
In the manuscript [1] we derived the general Doppler effect equations. In order to prove the correctness of the equations, it remains to define an adequate coordinate system. We have argued that such a coordinate system cannot be chosen arbitrarily but is determined by the direction between the receiver at the time when the signal is received and the sender at the time when the signal is emitted. In this manuscript, several experiments have been proposed to prove the existence of such a coordinate system. In addition, we will determine the velocities at which the sender and receiver of the signal move and the distance between them.
[1304] vixra:2102.0029 [pdf]
On the Inherent Dynamics of the Fabric of Spacetime
Motivated by E=tc^5/G where t is time, we shall propose that spacetime itself can have a matter-like behaviour, being a source for a new fundamental field, which turns out to have the dimensions of force and power. Accordingly a rough explanation for the expansion of the universe is presented.
[1305] vixra:2102.0028 [pdf]
HRIDAI: A Tale of Two Categories of ECGs
This work presents a geometric study of computational disease tagging of ECGs. Using ideas like the Earth mover's distance (EMD) and the Euclidean distance, it clusters category 1 and category −1 ECGs into two clusters, computes their averages, and then predicts the category of 100 test ECGs, i.e., whether they belong to category 1 or category −1. We report an 80% success rate using the Euclidean distance, at the cost of intense computational investment, and 69% success using EMD. We suggest further ways to augment and enhance this automated classification scheme using bio-markers like Troponin isoforms, CKMB and BNP. Future directions include the study of larger sets of ECGs from diverse populations, collected from a heterogeneous mix of patients with different CVD conditions. Further, we advocate the robustness of this programmatic approach compared to deep-learning schemes, which are susceptible to dynamic instabilities. This work is part of our ongoing framework, the Heart Regulated Intelligent Decision Assisted Information (HRIDAI) system.
[1306] vixra:2101.0171 [pdf]
The Binary Goldbach Conjecture and Circles of Partition
In this paper we use a new method to study problems in additive number theory (see \cite{CoP}). With the notion of a circle of partition as a set of points whose weights are natural numbers of a particular subset under an additive condition, we are almost able to prove the binary Goldbach conjecture.
[1307] vixra:2101.0168 [pdf]
Recent Trends in Named Entity Recognition (NER)
The availability of large amounts of computer-readable textual data, and of hardware that can process the data, has shifted the focus of knowledge projects towards deep learning architecture. Natural Language Processing, particularly the task of Named Entity Recognition, is no exception. The bulk of the learning methods that have produced state-of-the-art results have changed the deep learning model, the training method used, the training data itself or the encoding of the output of the NER system. In this paper, we review significant learning methods that have been employed for NER in the recent past and how they evolved from the earlier linear learning methods. We also cover the progress of related tasks that are upstream or downstream of NER, e.g. sequence tagging, entity linking, etc., wherever the processes in question have also improved NER results.
[1308] vixra:2101.0167 [pdf]
A Universality Theorem for Nonnegative Matrix Factorizations
Let A be a nonnegative matrix, that is, a matrix with nonnegative real entries. A nonnegative factorization of size k is a representation of A as a sum of k nonnegative rank-one matrices. The space of all such factorizations is a bounded semialgebraic set, and we prove that spaces arising in this way are universal. More precisely, we show that every bounded semialgebraic set U is rationally equivalent to the set of nonnegative size-k factorizations of some matrix A, up to a permutation of the matrices in the factorization. Our construction is effective, and we can compute a pair (A, k) in polynomial time from a given description of U as a system of polynomial inequalities with coefficients in Q. This result gives a complete description of the algorithmic complexity of several important problems, including nonnegative matrix factorization, completely positive rank and the nested polytope problem, and it also leads to a complete resolution of the problem of Cohen and Rothblum on nonnegative factorizations over different ordered fields.
[1309] vixra:2101.0166 [pdf]
Note On The Tolerance of The Closure of The Angles of A Triangle
In this note, we give the expression of the tolerance of the closure of the horizontal angles of a plane triangle and its numerical estimation for an equilateral triangle.
[1310] vixra:2101.0151 [pdf]
The Destruction of the Covid-19 by the Magnetron Free Electron Laser
We determine the power spectrum generated by a system of N electrons moving coherently in the electromagnetic field of the planar magnetron. We argue that for large N and high intensity of the electric field, the radiation power of such a magnetron laser (MAGFEL) can be used in the physical, chemical, biological and medical sciences, and especially for the destruction of COVID-19. The application of this new electron laser as a photoelectron spectroscopy facility in solid-state physics and chemistry is evident.
[1311] vixra:2101.0129 [pdf]
An Introduction to the Relearning Cycle Index $i_{ReL}$ and the Average Split Function $\bar{f}_{ReL}$ of the Source Energy Exploitation by and for the Neural Networks.
We define the relearning cycle index $i_{ReL}$ and the average split function $\bar{f}_{ReL}$ of the source energy exploitation by and for the neural networks. We propose an optimized learning strategy which depends on a fixed relearning cycle index $i_{ReL}$ and a fixed average split function $\bar{f}_{ReL}$. In practice, this theory may explain why communist politics in Russia and in China faced strong difficulties in the 20th century and why the private-company politics of Western countries faced critical difficulties at the beginning of the 21st century. We conclude with some critical hints for the future relearning cycles of the source energy exploitation by and for the neural networks.
[1312] vixra:2101.0117 [pdf]
Approximate Formula For zeta Function ζ(s) and L Function L(s) S=re
I created approximate calculation formulas for the zeta function and the L function. The ranges are 1 < x < 2 and x >= 2; the L function is treated only for x >= 2. In both cases, the accuracy increases as the starting point moves away from 2. There is no mathematical proof.
[1313] vixra:2101.0115 [pdf]
CNN Based Common Approach to Handwritten Character Recognition of Multiple Scripts
There are many scripts in the world, several of which are used by hundreds of millions of people. Handwritten character recognition studies of several of these scripts are found in the literature. Different hand-crafted feature sets have been used in these recognition studies. However, the convolutional neural network (CNN) has recently been used as an efficient unsupervised feature vector extractor. Although such a network can be used as a unified framework for both feature extraction and classification, it is more efficient as a feature extractor than as a classifier. In the present study, we performed a certain amount of training of a 5-layer CNN for a moderately large class character recognition problem. We used this CNN, trained for a larger class recognition problem, for feature extraction on samples of several smaller class recognition problems. In each case, a distinct Support Vector Machine (SVM) was used as the corresponding classifier. In particular, the CNN of the present study is trained using samples of a standard 50-class Bangla basic character database, and features have been extracted for 5 different 10-class numeral recognition problems of English, Devanagari, Bangla, Telugu and Oriya, each of which is an official Indian script. Recognition accuracies are comparable with the state of the art.
[1314] vixra:2101.0111 [pdf]
Toward Advances in Medicine and Interstellar Travel
The motion in a Black Hole spacetime is studied. Several new results are found, in particular about the nature of Dark Matter and Dark Energy. The energy aspect of matter in curved spacetime is explained. It becomes understandable why underground detectors for particles of Dark Matter have caught absolutely nothing in so many years of work. Usually, particles have a fairly strong effect on our world, while such small corpuscles as neutrinos have the weakest effect on ordinary matter. I give convincing arguments that Dark Matter acts so weakly on our world that its direct-contact action is equal to zero. That is why Dark Matter passes through the devices built for its capture completely without noticing them, entirely without friction with these devices. Such Dark Matter is representative of the INVISIBLE world, i.e. the detectors trying to detect it locally are "blind"; they see nothing.
[1315] vixra:2101.0104 [pdf]
An Introduction to the Super-Normal-Irreducible-Irrational Numbers and the Axiom at Third-Order of Logic.
We define the super-normal-irreducible-irrational numbers from some irreducible-irrational numbers and with the help of the $n$-irreducible sequents (see my previous articles). Instead of taking some integer part of the irreducible-irrational number (or of its inverse), we add a super-normal-irreducible formula which gives the position of the first digit breaking some super-normal number definition. From $84$ irreducible-irrational numbers, we deduce from the axiom at second-order of logic that they are all super-normal numbers as well. Moreover, with random digits, the probability that the super-normal-irreducible formula holds for all $84$ is about $9.0\times 10^{-10}$, taking into account that some irreducible-irrational numbers are only different functions of the same irreducible-irrational number. From this large coincidence, we introduce the axiom at third-order of logic, which states that every irreducible-irrational number is a super-normal number as well. From that new axiom at third-order of logic, we deduce the non-existence of an exotic $4$-sphere. Finally, we conclude about the finitude of the total number of $n$-irreducible sequents.
[1316] vixra:2101.0098 [pdf]
Axiomatic Particle Theory
An axiomatic proposal for an underlying description of particles and their interactions. The existence of fundamental laws of physics is precluded, and only random events exist at the fundamental level. Quarks and leptons in 3 families are found, along with spin-2 massless gravitation. In the chiral sector, the 3 chiral neutrinos can mix; there is no charged lepton mixing, while quark mixing is allowed. The standard model groups U(1), SU(2) and SU(3) act on vacuum states to generate particles.
[1317] vixra:2101.0093 [pdf]
Equiprobability for Any Non Null Natural Integer of Having Either an Odd or Even Number of Prime Factor(s) Counted with Multiplicity.
Redefining the set of all non-null natural integers $\mathbb{N}^*$ as the union of infinitely many disjoint sets, we prove the equiprobability for any integer of each said set to have either an odd or an even number of prime factor(s) counted with multiplicity. The equiprobability thus established on $\mathbb{N}^*$ allows us to use the standard normal distribution to establish that $\lim_{N\to+\infty} L(N)/\sqrt{N}=0$, where $L(N)$ is the summatory Liouville function. Recalling the Dirichlet series for the Liouville function, we deduce that $\zeta(2s)/\zeta(s)$, $s = \sigma + it$, is analytic for $\sigma > 1/2$, where $\zeta(s)$ is the Riemann zeta function. Consequently the veracity of the Riemann hypothesis is established.
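The summatory Liouville function $L(N) = \sum_{n \le N} \lambda(n)$ with $\lambda(n) = (-1)^{\Omega(n)}$ is easy to tabulate; this sketch (the cutoff $10^5$ is illustrative) uses a smallest-prime-factor sieve and shows $|L(N)|$ staying below $\sqrt{N}$ in this range:

```python
def liouville_summatory(nmax):
    # lam[n] = (-1)^Omega(n), computed via a smallest-prime-factor sieve
    spf = list(range(nmax + 1))
    for p in range(2, int(nmax ** 0.5) + 1):
        if spf[p] == p:  # p is prime
            for m in range(p * p, nmax + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (nmax + 1)
    lam[1] = 1
    for n in range(2, nmax + 1):
        # dividing out one smallest prime factor flips the sign
        lam[n] = -lam[n // spf[n]]
    sums = [0] * (nmax + 1)
    for n in range(1, nmax + 1):
        sums[n] = sums[n - 1] + lam[n]
    return sums

sums = liouville_summatory(100000)
assert abs(sums[100000]) < 100000 ** 0.5
```

This is only a finite-range observation, of course, and not evidence for the limit itself.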
[1318] vixra:2101.0091 [pdf]
Multivariate Expansivity Theory
In this paper we launch an extension program for single variable expansivity theory. We study this notion under tuples of polynomials belonging to the ring $\mathbb{R}[x_1,x_2,\ldots,x_n]$. As an application we show that \begin{align}\mathrm{min}\{\mathrm{max}\{\mathrm{Ind}_{f_k}(x_{\sigma(i)})\}_{k=1}^{s}+1\}_{i=1}^{l}&<\frac{1}{l}\sum \limits_{i=1}^{l}\mathrm{max}\{\mathrm{Ind}_{f_k}(x_{\sigma(i)})\}_{k=1}^{s}+2+\mathcal{J}\nonumber \end{align}where $\mathcal{J}:=\mathcal{J}(l)\geq 0$ and $\mathrm{Ind}_{f_k}(x_j)$ is the largest power of $x_j$~($1\leq j\leq n$) in the polynomial $f_k\in \mathbb{R}[x_1,x_2,\ldots,x_n]$.
[1319] vixra:2101.0079 [pdf]
The Simple Condition of Fermat Wiles Theorem Mainly Led by Combinatorics
This paper gives a simple necessary condition for the Fermat-Wiles Theorem, mainly by providing a method to analyze natural numbers and the formula X^n + Y^n = Z^n logically and geometrically; the approach is positioned in combinatorial design theory. The condition is gcd(X, E)^n = X − E ∧ gcd(Y, E)^n = Y − E when ¬(n | XY), or gcd(X, E)^n/n = X − E ∧ gcd(Y, E)^n = Y − E when n | X ∧ ¬(n | Y). Here E denotes E = X + Y − Z, n is a prime number greater than or equal to 2, and X, Y, Z are coprime.
[1320] vixra:2101.0074 [pdf]
Energy - Spacetime Equivalence
In this essay the equivalence between mass/energy and spacetime is postulated, so that matter, energy and spacetime can be transformed into one another. This has major implications in the physics of Black Holes and in cosmology. It is argued that no central singularity arises inside a Black Hole, no remnant is left once it has evaporated and unitarity is conserved by means of gravitational radiation. The possible origin of the cosmological constant is briefly discussed.
[1321] vixra:2101.0060 [pdf]
Octonionic Strings, Branes and Three Fermion Generations
Actions for strings and $p$-branes moving in octonionic-spacetime backgrounds and endowed with octonionic-valued metrics are constructed. An extensive study of the bosonic octonionic string moving in flat backgrounds, and its quantization, is presented. A thorough discussion follows pertaining whether or not the analysis leading to the $ D = 26$ critical dimension of the ordinary bosonic string is valid in the octonionic case. A remarkable numerical coincidence is found (without invoking supersymmetry) in that the total number of (real) degrees of freedom of $3$ fermion generations (involving massless Weyl fermions in $ 4D$) is $ 16 \times 4 \times 3 = 192$, and which matches the number of $ 8 \times 24 = 192$ real dimensions (degrees of freedom) corresponding to the $24$ transverse octonionic dimensions associated with the octonionic-worldsheet of a bosonic octonionic-string moving in $D = 26$ octonionic dimensions.
[1322] vixra:2101.0054 [pdf]
Kurumi: A New Liquid Crystal
This work computationally characterizes a single-layer bioinorganic membrane built from the nano-molecule Kurumi, C13H20BeLi2SeSi / C13H19BeLi2SeSi, whose systematic name is 3-lithio-3-(6-{3-selena 8-beryllatricyclo [3.2.1.02,⁴]oct-6-en-2-yl}hexyl)-1-sila-2-lithacyclopropane. The work is based on a 1 ns molecular dynamics (MD) simulation using the CHARMM22 force field with a time step of 0.001 ps. The calculations indicate that when the dynamics is performed with a single layer, the final arrangement tends to form a single-layer micellar structure; when the dynamics is carried out with several layers, however, the system behaves as a lyotropic nematic liquid crystal. Kurumi features a predominantly polar-apolar-polar structure. A limitation is that our study has so far been restricted to computational simulation via quantum mechanics and molecular mechanics (QM/MM), an applied theory. Our results and calculations are consistent with QM/MM theory, but their experimental verification depends on advanced techniques for laboratory synthesis and biochemical characterization. Going beyond imagination, the most innovative and challenging proposal of the work is the construction of a structure compatible with the formation of a "new DNA", now based on the Kurumi molecule.
[1323] vixra:2101.0049 [pdf]
Filter Exhaustiveness and Filter Limit Theorems for K-Triangular Lattice Group-Valued Set Functions
We give some limit theorems for sequences of lattice group-valued k-triangular set functions, in the setting of filter convergence, and some results about their equivalence. We use the tool of filter exhaustiveness to get uniform (s)-boundedness, uniform continuity and uniform regularity of a suitable subsequence of the given sequence, whose indexes belong to the involved filter. Furthermore we pose some open problems.
[1324] vixra:2101.0045 [pdf]
Self-Consistent Hydrodynamic Model of Vortex Plasma
We propose the system of self-consistent equations for vortex plasma in the framework of hydrodynamic two-fluid model. These equations describe both longitudinal flows and the rotation and twisting of vortex tubes taking into account internal electric and magnetic fields generated by fluctuations of plasma parameters. The main peculiarities of the proposed equations are illustrated with the analysis of electron and ion sound waves.
[1325] vixra:2101.0040 [pdf]
On the Minimal Uncompletable Word Problem for Unambiguous Automata
This paper deals with finite (possibly not complete) unambiguous automata, not necessarily deterministic. In this setting, we investigate the problem of the minimal length of the uncompletable word. This problem is associated with the well-known conjecture formulated by A. Restivo. We introduce the concept of relatively maximal row for a suitable set of matrices, and show the existence of a relatively maximal row of length of quadratic order with respect to the number of states of the treated automaton. We give some estimates of the maximal length of the minimal uncompletable word in connection with the number of states of the involved automaton and the length of a suitable relatively maximal but not maximal word, provided that it exists. In the general case, we establish an estimate of the length of the minimal uncompletable word in terms of the number of states of the studied automaton, the length of a suitable relatively maximal word and the minimal length of the uncompletable word of the automaton formed by all associated maximal rows.
[1326] vixra:2101.0033 [pdf]
Evaluation and Implementation of Proven Real-Time Image Processing Features for Use in Web Browsers
We explored the requirements for proven features for real-time use in web browsers, adopting a linear SVM-based face detection model as a test case to evaluate each descriptor with appropriate parameters. After checking multiple feature extraction algorithms, we decided to study the following four descriptors: Histogram of Oriented Gradients, Canny edge detection, Local Binary Pattern, and Dense DAISY. These four descriptors are used in various computer vision tasks and offer a wide range of options. We then investigated the influence of different parameters as well as dimension reduction on each descriptor's computational time and its ability to be processed in real time. We also evaluated the influence of such changes on the accuracy of each model.
[1327] vixra:2101.0032 [pdf]
Evaluation of ML Methods for Online Use in the Browser
Machine learning continues to be an increasingly integral component of our lives, whether we are applying the techniques to research or business problems. Machine learning models ought to give accurate predictions in order to create real value for a given organization. At the same time, training machine learning models by running algorithms in the browser has gradually become a trend. As the closest link to users on the Internet, the web front-end can also create a better experience for our users through AI capabilities. This article focuses on how to evaluate machine learning algorithms and deploy machine learning models in the browser. We use the "Cars", "MNIST" and "Cifar-10" datasets to test LeNet, AlexNet, GoogLeNet and ResNet models in the browser. We also test emerging lightweight models such as MobileNet. By trying, comparing, and comprehensively evaluating regression and classification tasks, we summarize some methods, models and experiences well suited to machine learning in the browser.
[1328] vixra:2101.0031 [pdf]
Improving Browser Watermarking with Eye Tracking
The problem of authenticating online users can be divided into two general sub-problems: confirming that two different web users are the same person, and confirming that two different web users are not the same person. The easiest and most accessible fingerprinting method used by online services is browser fingerprinting. Browser fingerprinting distinguishes between user devices using information about the browser and the system, which most browsers provide to any website, even when cookies are turned off. This data is usually unique enough even for identical devices, since it heavily depends on how the user uses the device. In this work, browser fingerprinting is improved with information acquired from an eye tracker.
[1329] vixra:2101.0030 [pdf]
Creation and Evaluation of a Web-Browser-Based Framework for Data Acquisition in Studies
This thesis deals with the creation and evaluation of a framework for conducting studies and collecting their data, for example gaze-position prediction or emotion recognition. The eye tracker is based on the webgazer.js library from Brown University, and the emotion recognition relies on a front-end emotion-recognition library built on the Google Shape Detection API. Both libraries were implemented so that they can be used for studies. In addition to these data, information such as age and gender is also collected, though only on a voluntary basis, in order to obtain more meaningful results. The framework is then tested and evaluated to determine whether it works as well in practice as in theory, since these are completely client-side applications. Normally, such applications require additional hardware to function; the libraries used here, however, manage without it, require only a working webcam, and consist purely of JavaScript code. The question is therefore whether a purely client-side application performs comparably well to other applications in these areas and whether the data can be collected reliably.
[1330] vixra:2101.0029 [pdf]
Creation and Evaluation of a Web-Platform-Based Data Management and Evaluation Software for Studies
Due to the increasing use and flexibility of the Internet, web applications have become established in many application areas. Especially in empirical research, e.g., in machine learning or human-computer interaction, web applications are increasingly integrated into study operations in order to manage and evaluate large datasets quickly and flexibly. For this purpose, it is important to design data management software so that high-quality and efficient storage of the collected study data is guaranteed. The goal of this thesis is the development and evaluation of a web-platform-based data management and evaluation software for eye-tracking and emotion-detection studies. To this end, a web platform based on the client-server model was created that collects participant data and then manages it in a database. Building on this, the software is evaluated and assessed in this thesis with respect to various requirements.
[1331] vixra:2101.0018 [pdf]
Cosmological Constant as a Finite Temperature Effect
The cosmological constant problem is examined by taking an Einstein--scalar with a Higgs-type potential and scrutinizing the infrared structure induced by finite temperature effects. A variant optimal perturbation theory is implemented in the recently proposed quantum-gravitational framework. The optimized renormalized mass, i.e., the renormalized mass determined by the variant optimal perturbation theory, of the scalar field turns out to be on the order of the temperature. This shifts the cosmological constant problem to compatibility of the consequent perturbative analysis. The compatibility is guaranteed essentially by renormalization group invariance of physical quantities. We point out the resummation behind the invariance.
[1332] vixra:2101.0016 [pdf]
The C² Gravitational Potential Limit
The relationship between gravitational potentials, black holes and the squared light speed c² is examined in this paper, as well as the implications of the presented findings for the universal gravitational constant G and general relativity theory. It is common knowledge that the velocity limit in our universe is defined by the light speed c; as shown in this work, c² plays a similar role for the gravitational potential, since c²/G is linked to the mass density of black holes and our local Hubble sphere. Furthermore, it is demonstrated that the rift between cosmology and quantum physics can possibly be reconciled by acknowledging the physical meaning of the Planck units, which proposedly define the characteristics of a quantized space-time, including its gravitational impedance. This notion is also supported by the presented logarithmic relationships between the cosmological scale and the quantum scale. Moreover, the presented findings uncover a previously unknown physical meaning for the constituents of Dirac's mysterious large number hypothesis.
[1333] vixra:2012.0224 [pdf]
Quantum Algorithm of Dempster Combination Rule
The Dempster combination rule is widely used in many applications such as information fusion and decision making. However, its computational complexity increases exponentially with the size of the frame of discernment. To address this issue, we propose a quantum algorithm for the Dempster combination rule based on quantum theory. The algorithm not only realizes most of the functions of the Dempster combination rule, but also effectively reduces its computational complexity on future quantum computers. We also carried out a simulation experiment on the IBM quantum cloud platform, and the experimental results showed that the algorithm is reasonable.
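For reference, the classical (non-quantum) Dempster combination rule that the abstract's algorithm accelerates can be sketched as follows; representing focal elements as frozensets in a dict is our own choice, not the paper's:

```python
def dempster_combine(m1, m2):
    """Combine two basic mass assignments with Dempster's rule.

    m1, m2: dicts mapping focal elements (frozensets) to masses summing to 1.
    Returns the normalized combined mass assignment.
    """
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

The double loop over focal elements is the source of the blow-up the abstract mentions: a frame of discernment of size n admits up to 2^n focal elements.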
[1334] vixra:2012.0219 [pdf]
Proving Zeta (n>=2) Is Irrational Using Decimal Sets
We prove that the partial sums of Zeta(n) − 1 = z_n are not given by any single decimal in a number base given by a denominator of their terms. We call these sets of single decimals decimal sets. This result, applied to all partial sums, shows that the partials are excluded from an ever greater number of rational, possible convergence points, the elements of these decimal sets. The limit of the partials is z_n, and the limit of the exclusions leaves only irrational numbers. Thus z_n is proven to be irrational.
[1335] vixra:2012.0216 [pdf]
A Lower Limit $\Delta H_{vap}^Z$ for the Latent Heat of Vaporization $\Delta H_{vap}$ with Respect to the Pressure and the Volume Change of the Phase Transition
We derive a lower limit $\Delta H_{vap}^Z$ for the latent heat of vaporization $\Delta H_{vap}$ with respect to the pressure and the volume change of the phase transition, from the study of a heat engine using liquid-gas as the working fluid with an infinitesimal variation of the temperature $\delta T$ and an infinitesimal variation of the pressure $\delta P$, in the vanishing limit of the massive flow rate $Q_m$. We calculate the latent heat index $h^Z = \Delta H_{vap}^Z/\Delta H_{vap}$ of a few gases at a few different pressures $P$. Finally, we consider the latent heat index limit $h^Z_{cr}$ as the temperature $T$ approaches the critical temperature $T_{cr}$.
[1336] vixra:2012.0212 [pdf]
Using Decimal to Prove e is Irrational
Using the terms of e − 2 as decimal bases, we calculate partial sums and form open intervals for the tails of the partials. These intervals exclude all possible rational convergence points and thus show that e − 2, and hence e, is irrational.
[1337] vixra:2012.0211 [pdf]
EsoCipher: An Instance of Hybrid Cryptography
This paper proposes a whole new concept in the field of cryptography: EsoCiphers. Short for esoteric cipher, an EsoCipher is an algorithm that can be understood only by the few who know its backend architecture. The idea behind this concept is derived from esoteric programming languages, but EsoCiphers will have practical uses in the future as more research is done on the topic. Although the algorithm is quite simple to understand, the complexity of the output will prove difficult to brute-force if a secret is embedded in it. It uses a hybrid cryptography-based technique, which combines ASCII, binary, octal, and ROT13 ciphers. An implementation and a similarity index are provided to show that it can be implemented practically.
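The abstract names the ingredients (ASCII, binary, octal, ROT13) but not how they are chained, so the following is only a plausible sketch of such a hybrid encoder; the ROT13-then-alternating-binary/octal pipeline is our guess, not the paper's actual scheme:

```python
import codecs

def esocipher_encode(text):
    """Toy hybrid cipher: apply ROT13, then emit each character's ASCII
    code point alternately as 8-bit binary and 3-digit octal.
    (Illustrative only; the paper's real chaining may differ.)"""
    rotated = codecs.encode(text, "rot_13")
    chunks = []
    for i, ch in enumerate(rotated):
        if i % 2 == 0:
            chunks.append(format(ord(ch), "08b"))  # binary stage
        else:
            chunks.append(format(ord(ch), "03o"))  # octal stage
    return " ".join(chunks)
```

For instance, esocipher_encode("ab") first rotates to "no", then yields "01101110 157": the mixed radices are what give the output its superficially opaque, "esoteric" look.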
[1338] vixra:2012.0206 [pdf]
Geoid And Systems of Heights
In this paper, we give the different definitions of heights and the correction terms to take into consideration. An example is presented of how to correct a line of spirit levelling by introducing gravity observations.
[1339] vixra:2012.0200 [pdf]
Exact Solution of Some Non-Autonomous Nonlinear ODEs of Second Order
This paper presents the exact integrability analysis of two classes of non-autonomous nonlinear differential equations of second order. It has been possible to recover some equations of general relativity from the first class of equations and consequently to compute their solutions in a straightforward fashion. The second class is shown to include the Emden-Fowler equation, and its integrability analysis, performed with the first integral theory developed by Monsia et al. [16], allowed us to compute the exact solution of some subclasses of Emden-Fowler equations.
[1340] vixra:2012.0198 [pdf]
Abridgment of Cycles in GCS
A shortened English version of new concepts that appeared this year in three French papers about cycles in Generalized Collatz Subsequences (GCS), plus a few new developments. Reduced and compact subsequences – Shape vector and shape rank – Triplet operator as a powerful tool to compose linear functions – Monoid of transition functions between elements of compact sequences – Diophantine equation p^m x − r^d y − q = 0 related to each monoid element – Shape class as general solution of this Diophantine equation – Specific cyclic solution – Universal rotation function on q parameters – Condition for transferring the cyclic property to numbers – Cardinality of equation classes – Cycle occurrence probability in classes – Layers of linked algebraic cycles.
[1341] vixra:2012.0192 [pdf]
Why did Nature need Quantum Mechanics at all?
This is an instrumental interpretation of Quantum Mechanics. Why is this new interpretation needed? Because all known interpretations only describe how Quantum Mechanics works, so that one can apply the equations, but do not answer the question: "Why did nature need Quantum Mechanics at all?"
[1342] vixra:2012.0169 [pdf]
Electromagnetism and Gravity Field Equations Are Unified and Described by a New E(4,0) Tensor for Total Energy Representing the New Conformal Energy Tensor T(4,0) and the Stress-Energy Tensor T(2,0)
Electromagnetism and gravity field equations are unified and described by a new E(4,0) tensor for total energy representing the new conformal energy tensor T(4,0) and the stress-energy tensor T(2,0). In four dimensions, the electromagnetic field tensor is defined as a differential 2-form F that constructs the electromagnetic stress-energy tensor as a combination of F and the Hodge dual of F. In Einstein's field equations this role is played by the Weyl tensor C: the conformal curvature tensor is the only part of the curvature that exists in free space and governs the propagation of gravitational waves, so the conformal energy tensor can be defined as a combination of C and the Hodge dual of C. The Hodge-dual definitions of the electromagnetic tensor and the Weyl tensor lead to the electromagnetic field tensor being embedded in the Weyl tensor, unifying electromagnetism and gravity; both tensors are related to the conformal energy tensor.
[1343] vixra:2012.0164 [pdf]
Primorials and a Formula for Odd Abundant Numbers
We conjecture that a formula that represents a difference between two primorials of different parities generates only odd abundant numbers for its arguments greater than 3. Using PARI/GP, we verified this conjecture for the arguments up to $4*10^4$. We also discuss another formula that generates only odd abundant numbers in an arithmetic progression and explain its origin in the context of the distribution of odd abundant numbers in general.
[1344] vixra:2012.0163 [pdf]
Smallest Numbers Whose Number of Divisors Is a Perfect Number
We present a formula for the smallest possible numbers whose number of divisors is the $n$-th perfect number. The formula, which produces an integer sequence $a(n)$, involves the $n$-th Mersenne prime, appearing both in an exponent of a power of 2 and in the product of consecutive odd primes (the odd primorial). While smallest in some sense, these numbers are among the largest one can run into through an exercise in elementary number theory.
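As a brute-force cross-check on the smallest cases (feasible only for the tiny perfect numbers 6 and 28; beyond that a closed formula like the paper's is needed), one can search directly:

```python
def num_divisors(n):
    """Count the divisors of n from its prime factorization."""
    count, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        count *= e + 1
        d += 1
    if n > 1:  # one leftover prime factor
        count *= 2
    return count

def smallest_with_divisor_count(k):
    """Smallest n with exactly k divisors, by brute force."""
    n = 1
    while num_divisors(n) != k:
        n += 1
    return n
```

This gives 12 for the first perfect number 6 and 960 = 2^6 · 3 · 5 for the second, 28 — a power of 2 times a product of consecutive odd primes, consistent with the shape of formula the abstract describes.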
[1345] vixra:2012.0154 [pdf]
The Concept of Particle-Antiparticle and the Baryon Asymmetry of the Universe
Following the results of our publications, in the first part of this letter, we explain that quantum theory based on finite mathematics (FQT) is more general (fundamental) than standard quantum theory based on Poincare invariance. Standard concept of particle-antiparticle is not universal because it arises as a result of symmetry breaking from FQT to standard quantum theory based on Poincare or standard anti-de Sitter symmetries. In FQT one irreducible representation of the symmetry algebra describes a particle and its antiparticle simultaneously, and there are no conservation laws of electric charge and baryon quantum number. Poincare and standard anti-de Sitter symmetry are good approximations at the present stage of the universe but in the early stages they cannot take place. Therefore, the statement that in such stages the numbers of baryons and antibaryons were the same, does not have a physical meaning, and the problem of baryon asymmetry of the universe does not arise. Analogously, the numbers of positive and negative electric charges at the present stage of the universe should not be the same, i.e., the total electric charge of the universe should not be zero.
[1346] vixra:2012.0145 [pdf]
Preserving Absolute Simultaneity with the Lorentz Transformation
In this work it is shown how absolute simultaneity of spatially distinct events can be established by means of a general criterion based on isotropically propagating signals, and how it can be consistently preserved also when operating with Lorentz-like coordinate transformations between moving frames. The specific invariance properties of these transformations of coordinates are discussed, leading to a different interpretation of the physical meaning of the transformed variables with respect to their prevailing interpretation when associated with the Lorentz transformation. On this basis, the emission hypothesis of W. Ritz is then applied to justify the outcomes of the Fizeau experiment, thanks to the introduction of an additional hypothesis regarding the influence of turbulence on the refractive index of the moving fluid. Finally, a test case to investigate the validity of either the Galilean or the relativistic velocity composition rule is presented. Such a test relies on the aberration of the light coming from celestial objects due to the motion of the observer, and on the analysis of the results obtained by applying the two different formulas to process the data of the observed positions, as measured in the moving frame, in order to determine the actual un-aberrated location of the source.
[1347] vixra:2012.0144 [pdf]
Spacetime Curvature Caused by an Inhomogeneous Distribution of Matter Unveiled by a Pattern of Fringes
Einstein's general relativity is a theory of the nature of time, space and gravity in which gravity is a curvature of space and time that results from the presence of matter or energy. Spacetime curvature caused by an inhomogeneous distribution of matter is unveiled by a discontinuous pattern of fringes instead of the expected continuous pattern.
[1348] vixra:2012.0142 [pdf]
Predicting Year of Plantation with Hyperspectral and Lidar Data
This paper introduces a methodology for predicting the year of plantation (YOP) from remote sensing data. The application has important implications in forestry management and inventorying. We exploit hyperspectral and LiDAR data in combination with state-of-the-art machine learning classifiers. In particular, we present a complete processing chain to extract spectral, textural and morphological features from both sensor data sources. Features are then combined and fed to a Gaussian Process Classifier (GPC) trained to predict YOP in a forest area in North Carolina (US). The GPC algorithm provides accurate YOP estimates, reports spatially explicit maps and associated confidence maps, and provides sensible feature rankings.
[1349] vixra:2012.0141 [pdf]
Passive Millimeter Wave Image Classification with Large Scale Gaussian Processes
Passive Millimeter Wave Images (PMMWIs) are being increasingly used to identify and localize objects concealed under clothing. Taking into account the quality of these images and the unknown position, shape, and size of the hidden objects, large data sets are required to build successful classification/detection systems. Kernel methods, in particular Gaussian Processes (GPs), are sound, flexible, and popular techniques to address supervised learning problems. Unfortunately, their computational cost is known to be prohibitive for large scale applications. In this work, we present a novel approach to PMMWI classification based on the use of Gaussian Processes for large data sets. The proposed methodology relies on linear approximations to kernel functions through random Fourier features. Model hyperparameters are learned within a variational Bayes inference scheme. Our proposal is well suited for real-time applications, since its computational cost at training and test times is much lower than the original GP formulation. The proposed approach is tested on a unique, large, and real PMMWI database containing a broad variety of sizes, types, and locations of hidden objects.
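The core trick in this abstract — replacing exact kernel evaluations with an inner product of random Fourier features — can be sketched in a few lines. The Gaussian kernel exp(−γ‖x−y‖²) and all parameter names below are our illustrative choices, not the paper's exact setup:

```python
import numpy as np

def rff_map(X, n_features, gamma, rng):
    """Random Fourier feature map z(.) such that z(x) @ z(y) approximates
    the Gaussian kernel exp(-gamma * ||x - y||^2) (Rahimi-Recht style)."""
    d = X.shape[1]
    # The spectral density of this kernel is Gaussian with std sqrt(2*gamma)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

A linear model (or a variational Bayes scheme, as in the paper) trained on z(X) then scales linearly in the number of samples, instead of the cubic cost of exact GP inference — which is what makes the large-database PMMWI setting tractable.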
[1350] vixra:2012.0135 [pdf]
Solution of an Open Problem Concerning the Augmented Zagreb Index and Chromatic Number of Graphs
Let $G$ be a graph containing no component isomorphic to the path graph of order $2$. Denote by $d_w$ the degree of a vertex $w$ in $G$. The augmented Zagreb index ($AZI$) of $G$ is the sum of the quantities $(d_ud_v/(d_u+d_v-2))^3$ over all edges $uv$ of $G$. Denote by $\mathcal{G}(n,\chi)$ the class of all connected graphs of a fixed order $n$ and with a fixed chromatic number $\chi$, where $n\ge5$ and $3\le \chi \le n-1$. The problem of finding graph(s) attaining the maximum $AZI$ in the class $\mathcal{G}(n,\chi)$ has been solved recently in [F. Li, Q. Ye, H. Broersma, R. Ye, MATCH Commun. Math. Comput. Chem. 85 (2021) 257--274] for the case when $n$ is a multiple of $\chi$. The present paper gives the solution of the aforementioned problem not only for the remaining case (that is, when $n$ is not a multiple of $\chi$) but also for the case considered in the aforesaid paper.
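The edge sum defining the AZI is straightforward to compute for a concrete graph; this helper (our illustration, taking an edge list) follows the definition in the abstract:

```python
def augmented_zagreb_index(edges):
    """AZI(G) = sum over edges uv of (d_u * d_v / (d_u + d_v - 2))^3.

    Assumes no component isomorphic to the path P2, so that
    d_u + d_v - 2 > 0 on every edge."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum((deg[u] * deg[v] / (deg[u] + deg[v] - 2)) ** 3
               for u, v in edges)
```

For the triangle K3, every vertex has degree 2, so each edge contributes (4/2)^3 = 8 and AZI = 24.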
[1351] vixra:2012.0130 [pdf]
Proving Basic Theorems about Chords and Segments via High-School-Level Geometric Algebra
We prove the Intersecting-Chords Theorem as a corollary to a relationship, derived via Geometric Algebra, about the product of the lengths of two segments of a single chord. We derive a similar theorem about the product of the lengths of a secant and a chord.
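The invariance the theorem asserts — the product of the two segment lengths of a chord through a fixed interior point does not depend on the chord's direction — is easy to check numerically. This is a plain coordinate sketch of ours, not the Geometric Algebra derivation used in the paper:

```python
import math

def segment_product(p, r, angle):
    """Product |PA| * |PB| of the two segments cut on a chord through the
    interior point p = (px, py) of a circle of radius r centered at the
    origin, where `angle` gives the chord's direction."""
    px, py = p
    dx, dy = math.cos(angle), math.sin(angle)
    # Points p + t*(dx, dy) on the circle satisfy t^2 + 2*b*t + c = 0:
    b = px * dx + py * dy
    c = px * px + py * py - r * r
    disc = math.sqrt(b * b - c)  # real for an interior point (c < 0)
    t1, t2 = -b + disc, -b - disc
    return abs(t1) * abs(t2)
```

For p = (0.3, 0.4) in the unit circle the product equals r² − |p|² = 0.75 (the power of the point, up to sign) for every angle.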
[1352] vixra:2012.0129 [pdf]
Using Geometric Algebra: A High-School-Level Demonstration of the Constant-Angle Theorem
Euclid proved (Elements, Book III, Propositions 20 and 21) that an angle inscribed in a circle is half as big as the central angle that subtends the same arc. We present a high-school-level version of Hestenes' GA-based proof [1] of that same theorem. We conclude with comments on the need for learners of GA to learn classical geometry as well.
[1353] vixra:2012.0128 [pdf]
Simple Close Curve Magnetization and Application to Bellman's Lost in the Forest Problem
In this paper we introduce and develop the notion of simple close curve magnetization. We provide an application to Bellman's lost in the forest problem assuming special geometric conditions between the hiker and the boundary of the forest.
[1354] vixra:2012.0119 [pdf]
A Polynomial Pattern for Primes Based on Nested Residual Regressions
The pattern of the primes is one of the most fundamental mysteries of mathematics. This paper introduces a core polynomial model for primes based on nested residual regressions. Residual nestedness reveals increasing polynomial intertwining and shows scale invariance, or at least strong self-similarity up to at least p = 15,485,863. Accuracy of prediction decreases as the prediction range increases, conversely, the increase in the number of models helps refine predictions holistically.
[1355] vixra:2012.0108 [pdf]
Tachyons for Interstellar Communication
Concerns that tachyons, which have imaginary mass, may violate causality have been discussed in the context of two distinct embodiments for constructing a message loop. One employs transmitters in motion relative to receivers, while the other has transmitters and receivers at rest with respect to each other, with messages passed between moving observers using electromagnetic signals. The latter (Method II) is of interest only to those who seek to disprove the existence of faster-than-light phenomena by constructing hypothetical thought experiments based solely upon kinematics that purportedly violate causality, often by specious means. The former (Method I), on the other hand, is based upon the wider foundation of both kinematics and dynamics, and sound analysis proves that causality is not violated. For Method I, the relative speed between transmitter and receiver limits the propagation speed according to u = c^2/v, where u is the maximum possible propagation speed and v is the relative speed between transmitter and receiver. This paper discusses this paradigm for communicating between outposts in different star systems. Techniques are discussed for increasing the propagation speed beyond that limited by the relative motion between Earth and a planetary base in orbit around a distant star.
[1356] vixra:2012.0106 [pdf]
Is the ABC Conjecture True?
In this paper, we consider the abc conjecture. In the first part, we give the proof of the conjecture c < rad^{1.63}(abc), which constitutes the key to resolving the abc conjecture. The proof of the abc conjecture is given in the second part of the paper: supposing that the abc conjecture is false, we arrive at a contradiction.
[1357] vixra:2012.0105 [pdf]
Stirling Numbers Via Combinatorial Sums
In this paper, we derive a formula for combinatorial sums of the type $\sum_{r=0}^n r^k {n\choose r}$ for $k \in \mathbb{N}$. The formula is conveniently expressed as a linear combination of terms involving the falling factorial. The coefficients in this linear expression satisfy a recurrence relation, which is identical to that of the Stirling numbers of the first and second kind.
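One well-known identity of exactly this shape expresses the sum through Stirling numbers of the second kind and falling factorials: sum_{r=0}^n r^k C(n,r) = sum_{j=1}^k S(k,j) n^(j) 2^(n-j). The snippet below is our brute-force verification of that classical identity (using the second kind only; the paper's own coefficients may be related but are not reproduced here):

```python
from math import comb

def stirling2(k, j):
    """Stirling numbers of the second kind via the standard recurrence."""
    if k == j:
        return 1
    if j == 0 or j > k:
        return 0
    return j * stirling2(k - 1, j) + stirling2(k - 1, j - 1)

def falling(n, j):
    """Falling factorial n^(j) = n * (n-1) * ... * (n-j+1)."""
    out = 1
    for i in range(j):
        out *= n - i
    return out

def lhs(n, k):
    """Direct evaluation of sum_{r=0}^n r^k * C(n, r)."""
    return sum(r ** k * comb(n, r) for r in range(n + 1))

def rhs(n, k):
    """Stirling/falling-factorial form of the same sum."""
    return sum(stirling2(k, j) * falling(n, j) * 2 ** (n - j)
               for j in range(1, k + 1))
```

For example, both sides give 6 for n = 2, k = 2: the sum 0 + 1·2 + 4·1 equals S(2,1)·2·2 + S(2,2)·2·1.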
[1358] vixra:2012.0096 [pdf]
Full Relativistic Compton Edge
The original Compton wavelength published by Arthur Compton [1] in 1923 was not fully relativistic, but it has been extended to a fully relativistic Compton wavelength by Haug [2]. This means the standard Compton edge is also not fully relativistic; here we extend the Compton edge to be fully relativistic.
[1359] vixra:2012.0093 [pdf]
On the Conformal Energy Tensor Defined as a Combination of Weyl Tensor and the Hodge Dual of Weyl Tensor
In four dimensions with the Minkowski metric, the electromagnetic field tensor is defined as a differential 2-form F that constructs the electromagnetic stress-energy tensor as a combination of F and the Hodge dual of F. In Einstein's field equations this role is played by the Weyl tensor C: the conformal curvature tensor is the only part of the curvature that exists in free space and governs the propagation of gravitational waves, so the conformal energy tensor can be defined as a combination of C and the Hodge dual of C.
[1360] vixra:2012.0089 [pdf]
Rethinking the Foundation of Physics and Its Relation to Quantum Gravity and Quantum Probabilities: Unification of Gravity and Quantum Mechanics
In this paper we will show that standard physics to a large degree consists of derivatives of a deeper reality. This means standard physics is both overly complex and also incomplete. Modern physics has typically started from working with first understanding the surface of the world, that is typically the macroscopic world, and then forming theories about the atomic and subatomic world. And we did not have much of a choice, as the subatomic world is very hard to observe directly, if not impossible to observe directly at the deepest level. Despite the enormous success of modern physics, it is therefore no big surprise that we at some point have possibly taken a step in the wrong direction. We will claim that one such step came when one thought that the de Broglie wavelength represented a real matter wavelength. We will claim that the Compton wavelength is the real matter wavelength. Based on such a view we will see that many equations in modern physics are only derivatives of much simpler relations. Second, we will claim that in today’s physics one uses two different mass definitions, one mass definition that is complete or at least more complete, embedded in gravity equations without being aware of it, as it is concealed in GM, and the standard, but incomplete, kg mass definition in non-gravitational physics. First, when this is understood, and one uses the more complete mass definition that is embedded in gravity physics, not only in gravity physics, but in all of physics, then one has a chance to unify gravity and quantum mechanics. Our new theory shows that most physical phenomena when observed over a very short timescale are probabilistic for masses smaller than a Planck mass and dominated by determinism at or above Planck mass size. Our findings have many implications. 
For example, we show that the Heisenberg uncertainty principle is rooted in a foundation not valid for rest-mass particles, so the Heisenberg uncertainty principle can say nothing about rest-masses. When re-formulated based on a foundation compatible with a new momentum that is also compatible with rest-masses, we obtain a re-defined Heisenberg principle that seems to become a certainty principle in the special case of a Planck mass particle. Furthermore, we show that the Planck mass particle is linked to gravity and that we can easily detect the Planck scale from gravity observations.
[1361] vixra:2012.0086 [pdf]
Proof of the ABC Conjecture
In this short note, I prove the abc conjecture. You are free not to get enlightened about that fact. But please pay respect to new dispositions of the abc conjecture and research methods in this note.
[1362] vixra:2012.0060 [pdf]
Study of Systematic Errors in the Combination of Doppler Data and Classical Terrestrial Observations
This paper concerns the study of systematic errors in the combination of Doppler data and classical terrestrial observations in the adjustment of geodetic networks. This study is taken from the thesis presented in October 1986 for obtaining the Civil Geographic Engineer diploma from the National School of Geographic Sciences (ENSG / IGN France).
[1363] vixra:2012.0058 [pdf]
Detecting Insincere Questions from Text: A Transfer Learning Approach.
The internet today has become an unrivalled source of information, where people converse on content-based websites such as Quora, Reddit, StackOverflow and Twitter, asking questions and sharing knowledge with the world. A major problem arising with such websites is the proliferation of toxic comments or instances of insincerity, wherein users, instead of maintaining a sincere motive, indulge in spreading toxic and divisive content. The straightforward course of action in confronting this situation is detecting such content beforehand and preventing it from subsisting online. In recent times, Transfer Learning in Natural Language Processing has seen unprecedented growth. Today, with the existence of transformers and various state-of-the-art innovations, tremendous progress has been made in various NLP domains. The introduction of BERT caused quite a stir in the NLP community: when published, BERT dominated performance benchmarks and thereby inspired many other authors to experiment with it and publish similar models. This led to the development of a whole BERT family, each member specialized for a different task. In this paper we address the Insincere Questions Classification problem by fine-tuning four cutting-edge models, viz. BERT, RoBERTa, DistilBERT and ALBERT.
[1364] vixra:2012.0056 [pdf]
The 2N Conjecture on Spectrally Arbitrary Sign Patterns Is False
A sign pattern is a matrix with entries in {+, −, 0}. An n × n sign pattern S is spectrally arbitrary if, for any monic polynomial f of degree n with real coefficients, one can replace the + and − signs in S with real numbers of the corresponding signs so that the resulting matrix has characteristic polynomial f. This paper refutes a long-standing conjecture by constructing an n × n spectrally arbitrary sign pattern with fewer than 2n nonzero entries.
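For context, a numerical sketch of the underlying idea that a fixed zero pattern can realize any monic characteristic polynomial: the classical companion matrix, with at most 2n − 1 nonzero entries in fixed positions, does exactly this. (This illustrates the notion only; it is not the paper's sub-2n construction, and the companion pattern is not itself spectrally arbitrary as a sign pattern, since its first-row signs must vary with the target polynomial.)

```python
import numpy as np

# Companion matrix for the monic polynomial f(x) = x^3 - 2x^2 + 3x - 5:
# first row holds the negated non-leading coefficients, ones on the subdiagonal.
target = [1.0, -2.0, 3.0, -5.0]        # coefficients of f, leading first
n = len(target) - 1
C = np.zeros((n, n))
C[0, :] = [-a for a in target[1:]]     # first row: [2, -3, 5]
C[1:, :-1] = np.eye(n - 1)             # subdiagonal ones
print(np.poly(C))                      # characteristic polynomial of C
```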
[1365] vixra:2012.0054 [pdf]
What Atheist Steven Hawking Has Discovered at All, if Black Holes Do Not Evaporate?
This is a severe scientific criticism of Hawking's life work. One of the issues is the following: the evaporation of black holes is impossible in the first place because of Stephen Hawking's own statement, in his original paper, that the Schwarzschild black hole does not evaporate. We have no other spherically symmetric black hole that has an event horizon and nevertheless shrinks.
[1366] vixra:2012.0048 [pdf]
Randomized RX for Target Detection
This work tackles the target detection problem through the well-known global RX method. The RX method models the clutter as a multivariate Gaussian distribution, and has been extended to nonlinear distributions using kernel methods. While the kernel RX can cope with complex clutters, it requires a considerable amount of computational resources as the number of clutter pixels grows. Here we propose random Fourier features to approximate the Gaussian kernel in kernel RX; consequently, our development keeps the accuracy of the nonlinearity while reducing the computational cost, which is now controlled by a hyperparameter. Results on both synthetic and real-world image target detection problems show the space and time efficiency of the proposed method, which also provides high detection performance.
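A minimal sketch of the random Fourier feature idea used here (Rahimi–Recht-style features; the dimensions, bandwidth and seed below are arbitrary illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 3, 5000, 1.0   # input dim, number of features, kernel bandwidth

def gaussian_kernel(x, y, sigma):
    # exact Gaussian (RBF) kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

# Random Fourier feature map: k(x, y) ~ z(x) . z(y)
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = z(x) @ z(y)
exact = gaussian_kernel(x, y, sigma)
print(approx, exact)  # the two values are close for large D
```

The point is that the D-dimensional explicit features replace kernel evaluations against every clutter pixel, so the cost is controlled by D rather than by the clutter size.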
[1367] vixra:2012.0044 [pdf]
On a Linnik Theorem in Theory of Errors
In this note, we give a proof of a theorem of Linnik concerning the theory of errors, stated without proof in his book "Least squares method and the mathematical bases of the statistical theory of the treatment of observations".
[1368] vixra:2012.0039 [pdf]
Proof of Riemann Hypothesis
This paper attempts to prove the Riemann hypothesis by the following process. 1. We create an infinite number of infinite series from one equation that gives ζ(s) analytic continuation to Re(s) > 0, together with two formulas (1/2 + a + bi, 1/2 − a − bi) which express the zero points of ζ(s). 2. We find from the above infinite number of infinite series that a cannot have any value but zero. Therefore the zero points of ζ(s) must be 1/2 ± bi.
[1369] vixra:2012.0038 [pdf]
Automatic Emulator and Optimized Look-up Table Generation for Radiative Transfer Models
This paper introduces an automatic methodology to construct emulators for costly radiative transfer models (RTMs). The proposed method is sequential and adaptive, and it is based on the notion of an acquisition function: instead of optimizing the unknown underlying RTM function, we aim to achieve accurate approximations of it. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of Gaussian processes (GPs) with the careful design of an acquisition function that favors sampling in low-density regions and flatness of the interpolation function. We illustrate the capabilities of the method in toy examples and in the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
[1370] vixra:2012.0035 [pdf]
Group Metropolis Sampling
Monte Carlo (MC) methods are widely used for Bayesian inference and optimization in statistics, signal processing and machine learning. Two well-known classes of MC methods are Importance Sampling (IS) techniques and Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce the Group Importance Sampling (GIS) framework, where different sets of weighted samples are properly summarized with one summary particle and one summary weight. GIS facilitates the design of novel efficient MC techniques. For instance, we present the Group Metropolis Sampling (GMS) algorithm, which produces a Markov chain of sets of weighted samples. GMS generally outperforms other multiple-try schemes, as shown by means of numerical simulations.
[1371] vixra:2012.0034 [pdf]
Joint Gaussian Processes for Inverse Modeling
Solving inverse problems is central in geosciences and remote sensing. Very often a mechanistic physical model of the system exists that solves the forward problem. Inverting the implied radiative transfer model (RTM) equations numerically implies, however, challenging and computationally demanding problems. Statistical models tackle the inverse problem and predict the biophysical parameter of interest from radiance data, exploiting either in situ data or simulated data from an RTM. We introduce a novel nonlinear and nonparametric statistical inversion model which incorporates both real observations and RTM-simulated data. The proposed Joint Gaussian Process (JGP) provides a solid framework for exploiting the regularities between the two types of data, in order to perform inverse modeling. Advantages of the JGP method over competing strategies are shown on both a simple toy example and in leaf area index (LAI) retrieval from Landsat data combined with simulated data generated by the PROSAIL model.
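A bare-bones GP regression sketch in the spirit of the statistical inversion described (a plain RBF-kernel GP on synthetic data; the kernel, length-scale and data are illustrative assumptions, and the JGP's joint treatment of real and RTM-simulated data is not reproduced here):

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # squared-exponential kernel matrix between row-vector sets A and B
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / ell**2)

X = np.linspace(0.0, 5.0, 20)[:, None]   # stand-in "radiance" inputs
y = np.sin(X[:, 0])                      # stand-in "biophysical parameter"
K = rbf(X, X) + 1e-8 * np.eye(len(X))    # jitter for numerical stability
alpha = np.linalg.solve(K, y)

Xs = np.array([[2.5]])
mean = rbf(Xs, X) @ alpha                # GP posterior mean at a new input
print(mean[0])                           # close to sin(2.5)
```

In a JGP-like setting, the training pairs would mix in situ observations with RTM simulations; here a single synthetic source is used purely to show the regression machinery.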
[1372] vixra:2012.0033 [pdf]
Distributed Particle Metropolis-Hastings Schemes
We introduce a Particle Metropolis-Hastings algorithm driven by several parallel particle filters. The communication with the central node requires the transmission of only a set of weighted samples, one per filter. Furthermore, the marginal version of the previous scheme, called the Distributed Particle Marginal Metropolis-Hastings (DPMMH) method, is also presented. DPMMH can be used for making inference on both dynamic and static variables of interest. Ergodicity is guaranteed, and numerical simulations show the advantages of the novel schemes.
[1373] vixra:2012.0026 [pdf]
Too Many Trailing Zeros?
In this paper I discuss Question 8 from the Chalkdust 2019 Christmas card. In particular I investigate the ratio of the number of trailing zeros of a factorial to its number of digits.
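The two quantities in the ratio are easy to compute: Legendre's formula gives the number of trailing zeros of n! (the multiplicity of the factor 5), and the digit count can be taken by brute force. The specific answer to the Christmas-card question is not reproduced here; this is just the machinery:

```python
import math

def trailing_zeros(n):
    # Legendre's formula: multiplicity of 5 in n! (factors of 2 are plentiful)
    z, p = 0, 5
    while p <= n:
        z += n // p
        p *= 5
    return z

def digit_count(n):
    return len(str(math.factorial(n)))

n = 100
# 100! has 24 trailing zeros and 158 digits
print(trailing_zeros(n), digit_count(n), trailing_zeros(n) / digit_count(n))
```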
[1374] vixra:2012.0022 [pdf]
A Kitchen Sink Measurement of g
In this paper I look at the experimental determination of the acceleration due to gravity \(g\) and the flow rate from my kitchen tap as suggested in a comment to a puzzle in "The Chicken From Minsk" (TCFM) using only equipment already on hand. This differs from the method proposed in TCFM in that it uses a method of taking the measurements that is practical rather than seemingly dismissing the problem of taking measurements with a wave of the hand.
[1375] vixra:2012.0015 [pdf]
On Ultra-High-Speed Interstellar Travel
The possibility of ultra-high-speed interstellar travel is considered. The role of the Devil's advocate is played, purely for the sake of interest -- it is considered that ultra-high-speed interstellar travel is impossible.
[1376] vixra:2012.0011 [pdf]
Learning Drivers of Climate-Induced Human Migrations with Gaussian Processes
In the current context of climate change, extreme heat waves, droughts and floods are not only impacting the biosphere and atmosphere but the anthroposphere too. Human populations are forcibly displaced; such people are now referred to as climate-induced migrants. In this work we investigate which climate and structural factors forced major human displacements in the presence of floods and storms during the years 2017-2019. We built, curated and harmonized a database of meteorological and remote sensing indicators, along with structural factors, for 27 developing countries world-wide. We show how Gaussian Processes can be used to learn which variables explain the impact of floods and storms in a context of forced displacements, and to develop models that reproduce migration flows. Our results at regional, global and disaster-specific scales show the importance of structural factors in determining the magnitude of displacements. The study may have societal, political and economic implications.
[1377] vixra:2012.0010 [pdf]
La Courbure des Transformées Planes Conformes des Géodésiques (The Curvature of the Conformal Plane Images of Geodesics)
In this paper, we give the formula for the curvature of the image, under a conformal map projection, of geodesic curves of the ellipsoid taken as the model of the Earth. As an example, we consider the UTM map projection and give the corresponding expression of the curvature.
[1378] vixra:2012.0006 [pdf]
The Law of Conservation of Time and Its Applications
Time is a complex category not only in philosophy but also in mathematics and physics. In one thought about time, the author accidentally discovered a new way to explain and solve problems related to time dilation, such as the problem of the muon reaching the earth's surface from a height of 10 km although the muon's lifespan is only 2.2 microseconds, or explaining the Michelson-Morley experiment using the new method. In addition, the author also proves that the speed of light in vacuum is the maximum speed in the universe, and discovers a redshift effect occurring even when there is no increase in distance between objects. To do this, the author has built two axioms based on the discontinuity in the motion of objects and drawn two consequences, along with the law of conservation of time.
[1379] vixra:2012.0002 [pdf]
Operational Definition of Electric Charge and Derivation of Coulomb’s Law
The paper focuses on the part of Coulomb's Law that is just a definition and provides one possible mechanism for operationally defining electric charge based on the concept of force (action at a distance). A derivation of Coulomb's law from the definition is then presented, and the signs of the charges are defined. Finally, the paper concludes with a discussion of the conservation of charge.
[1380] vixra:2011.0213 [pdf]
Right-handed and Left-handed Circularly Polarized Light Derived From Projection Operators in Clifford Algebras, Stokes Parameters and Mueller Matrix in Four Dimensions
The hermitian polarization matrix is written, using the Pauli matrices and the identity matrix as a basis, with four real coefficients, the four Stokes vector parameters; the interaction of light with matter is described as the modification of these 4 parameters by a 4x4 matrix, the Mueller matrix. The Pauli matrices form a Clifford algebra, and the projection operators R and L define the two Stokes vectors for right-handed and left-handed circularly polarized light. In four dimensions, the Minkowski metric ηµν = diag(+1, −1, −1, −1) leads to the Clifford algebra C(1,3) of the Dirac matrices; the 16 Dirac matrices form a basis for the polarization matrix, now with 16 Stokes parameters, and the interaction of light with matter is described by a 16x16 Mueller matrix. The projection operators R and L in this algebra define the right-handed and left-handed circularly polarized light.
[1381] vixra:2011.0212 [pdf]
Assuming C Less Then Rad2 (Abc), a New Proof of the Abc Conjecture
In this paper, we consider the abc conjecture. Assuming that c < rad^2(abc) is true, we give a new proof of the abc conjecture by contradiction of its definition, first for \epsilon \geq 1, then for \epsilon \in ]0,1[.
[1382] vixra:2011.0205 [pdf]
On the Physical Implications of Spacetime, Entropy, and Matter-energy Conversion
Quantum gravity is the most profound outstanding question in fundamental physics. How do we describe spacetime itself quantum mechanically? In this article we present a novel approach called “geometrodynamics,” which uses the interconnections between space, time, and mechanical entropy. In particular we will show how quantum scattering processes indicate that Lorentz symmetry must be broken, in a way manifested physically through transformation of energy into mass that can no longer be accelerated. Throughout we apply our theoretical ideas to specific physical situations.
[1383] vixra:2011.0202 [pdf]
On Linear Ordinary Differential Equations of Second Order and Their General Solutions
We have worked out a new geometric approach to linear ordinary differential equations of second order which makes it possible to obtain general solutions to an infinite number of equations of this sort. No new families of special functions and accompanying theories are needed; the solutions are composed straightforwardly. In this work we present a number of particular cases of equations with their general solutions. The solutions are divided into four groups, in the same way one encounters in any book on special functions.
[1384] vixra:2011.0199 [pdf]
Acknowledgment of Non-linearity or How to Solve Several Conjectures
Several famous conjectures from Number Theory are studied. I derive a new equivalent formulation of Goldbach's strong conjecture and present an independent conjecture with some evidence for it.
[1385] vixra:2011.0198 [pdf]
Exceptions from Robin's Inequality
In this short but rigorous research note I study Robin's Inequality. The number of possible violations of this inequality turns out to be finite. As the finiteness includes zero, I am able to convince you that there are no such violations.
[1386] vixra:2011.0197 [pdf]
Hawking Radiation by Neutron Stars
There is a critical issue with Hawking Radiation: the ``Trans-Planckian problem'', dwelling on the fact that the laws of gravitation are unknown at short distances [Adam D. Helfer, ``Do black holes radiate?'', Rep. Progr. Phys. 66 (6), 943--1008 (2003)]. In this short note, I demonstrate that there is no Hawking Radiation from either the static neutron star or the collapsing star (the latter gradually becomes a Black Hole); therefore, one has no Information Loss Paradox [Sabine Hossenfelder (2020) ``The Black Hole Information Loss Problem is Unsolved. And Unsolvable'', https://youtu.be/mqLM3JYUByM].
[1387] vixra:2011.0185 [pdf]
On Negative-Energy 4-Spinors and Masses in the Dirac Equation
Both the algebraic equations $Det (\hat p - m) =0$ and $Det (\hat p + m) =0$ for $u-$ and $v-$ 4-spinors have solutions with $p_0= \pm E_p =\pm \sqrt{{\bf p}^2 +m^2}$. The same is true for higher-spin equations (which may even have more complicated dispersion relations). Meanwhile, every textbook considers the equality $p_0=E_p$ for both $u-$ and $v-$ spinors of the $(1/2,0)\oplus (0,1/2)$ representation only, thus applying the Dirac-Feynman-Stueckelberg procedure for the elimination of negative-energy solutions. The recent works of Ziino (and, independently, articles by several other authors) show that the Fock space can be doubled. We re-consider this possibility on the quantum-field level for both $s=1/2$ and higher-spin particles. Some parts have also been presented at the XIII DGFM SMF Workshop, Nov. 4-8, 2019, Leon, Gto., M\'exico.
[1388] vixra:2011.0182 [pdf]
A Note on Lp-Convergence and Almost Everywhere Convergence
It is a classical but relatively less well-known result that, for every given measure space and every given $1 \leq p \leq +\infty$, every sequence in $L^{p}$ that converges in $L^{p}$ has a subsequence converging almost everywhere. The typical proof is a byproduct of proving the completeness of $L^{p}$ spaces, and hence is not necessarily ``application-friendly''. We give a simple, perhaps more ``accessible'' proof of this result for all finite measure spaces.
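For orientation, the classical ``fast subsequence'' argument for finite $p$ (the note's streamlined proof may be organized differently) can be sketched as:

```latex
% Pick a subsequence along which the L^p errors are summable:
\|f_{n_k}-f\|_p \le 2^{-k}.
% By the Chebyshev--Markov inequality, for each fixed \varepsilon > 0,
\mu\bigl(|f_{n_k}-f| > \varepsilon\bigr)
  \le \varepsilon^{-p}\,\|f_{n_k}-f\|_p^p
  \le \varepsilon^{-p}\,2^{-kp},
% which is summable in k, so Borel--Cantelli gives
\mu\Bigl(\limsup_{k}\,\{|f_{n_k}-f|>\varepsilon\}\Bigr)=0.
% Letting \varepsilon run over 1/m, m \in \mathbb{N}, yields
f_{n_k} \longrightarrow f \quad \text{almost everywhere}
% (the case p = +\infty is immediate, since L^\infty-convergence
% is uniform convergence off a null set).
```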
[1389] vixra:2011.0181 [pdf]
New Principles of Differential Equations Ⅳ
In previous papers, we proposed several new methods to obtain general solutions or analytical solutions of some nonlinear partial differential equations. In this paper, we will continue to propose a new effective method to obtain general solutions of certain nonlinear partial differential equations for the first time, such as nonlinear wave equation, nonlinear heat equation, nonlinear Schrödinger equation, etc.
[1390] vixra:2011.0179 [pdf]
On the Belief Coulomb Force
Conflict management is a key issue in Dempster-Shafer evidence theory (DST) and has been the focus of many related researchers. However, there has been little discussion of whether the evidence should be fused at all. In this paper, in the frame of DST, inspired by the belief universal gravitation [1], we propose a concept of belief Coulomb force (BCF) to address whether or not the evidence should be fused. It discusses the elimination of conflicts in the information fusion process from the perspective of electricity, which may provide a new idea for handling conflicting evidence. An application shows that, using the proposed BCF, conflict management is handled better than by previous methods.
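For background, Dempster's classical combination rule, whose behavior under high conflict motivates proposals like the BCF, can be sketched as follows (using Zadeh's well-known conflict example; the BCF itself is not implemented here):

```python
from itertools import product

def dempster(m1, m2):
    # Dempster's rule of combination over mass functions whose focal
    # elements are frozensets; K is the total conflicting mass.
    combined, K = {}, 0.0
    for (A, w1), (B, w2) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            K += w1 * w2
    return {A: w / (1.0 - K) for A, w in combined.items()}, K

# Zadeh's example: two almost totally conflicting sources ...
m1 = {frozenset('A'): 0.99, frozenset('B'): 0.01}
m2 = {frozenset('C'): 0.99, frozenset('B'): 0.01}
m, K = dempster(m1, m2)
print(m, K)  # ... yield full belief in B despite conflict K = 0.9999
```

This counterintuitive outcome under near-total conflict is exactly the kind of case where one may ask, as the paper does, whether the evidence should be fused at all.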
[1391] vixra:2011.0165 [pdf]
A New Solvable Quintic Equation of the Shape X^5 + aX^2 + b = 0
So far, five solvable quintics of the shape x^5 + ax^4 + b = 0 have been known in all. We have found one more. In this paper, we give that equation and its solutions.
[1392] vixra:2011.0164 [pdf]
A Theoretical Approach to Complex Systems Analysis: Simple Non-Directed Graphs as Homogenous, Morphological Models
Recent advances have begun to blur the lines between theoretical mathematics and applied mathematics. Oftentimes, in a variety of fields, concepts from not only applied mathematics but theoretical mathematics have been employed to great effect. As more and more researchers come to utilize, deploy, and develop both abstract and concrete mathematical models (both theoretical and applied), the demand for highly generalizable, accessible, and versatile mathematical models has increased drastically (Rosen, 2011). Specifically in the case of Complex Systems and the accompanying field of Complex Systems Analysis, this phenomenon has had profound effects. As researchers, academics, and scholars from these fields turn to mathematical models to assist in their scientific inquiries (specifically, concepts and ideas taken from various subsets of graph theory), the limitations of our current mathematical frameworks becomes increasingly apparent. To remedy this, we present the Chang Graph, a simple graph defined by an n-sided regular polygon surrounding a 2n-sided regular polygon. Various properties and applications of this graph are discussed, and further research is proposed for the study of this mathematical model.
[1393] vixra:2011.0163 [pdf]
A General Definition of Means and Corresponding Inequalities
This paper proves inequalities among generalised f-means and provides formal conditions which a function of several inputs must satisfy in order to be a `meaningful' mean. The inequalities we prove are generalisations of classical inequalities including the Jensen inequality and the inequality among the Quadratic and Pythagorean means. We also show that it is possible to have meaningful means which do not fall into the general category of f-means.
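The classical chain behind such inequalities can be checked numerically with power means, i.e. f-means with f(x) = x^p (the paper's more general ``meaningful'' means are not captured by this sketch):

```python
def power_mean(xs, p):
    # f-mean with f(x) = x**p; p = 0 is the limiting (geometric) case
    n = len(xs)
    if p == 0:
        prod = 1.0
        for x in xs:
            prod *= x
        return prod ** (1.0 / n)
    return (sum(x**p for x in xs) / n) ** (1.0 / p)

xs = [1.0, 2.0, 4.0, 8.0]
hm, gm, am, qm = (power_mean(xs, p) for p in (-1, 0, 1, 2))
print(hm, gm, am, qm)  # harmonic <= geometric <= arithmetic <= quadratic
```

The monotonicity in p is the power-mean inequality, of which the quadratic-arithmetic and AM-GM inequalities mentioned in the abstract are special cases.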
[1394] vixra:2011.0148 [pdf]
Nonsingular Start to Inflationary Expansion, Calculating the Cosmological Constant, and a Minimal Time Step, for Our Present Universe Using Klauder Enhanced Quantization for DE
We start with an elementary example of a nonsingular configuration using a trivial solution to massive gravity yielding a nonzero initial time step. Then we present a history of the cosmological constant issue from Einstein’s introduction—which did not work because his static universe solution to the Ricci Scalar problem and GR was unstable—to the radius of the universe being proportional to the inverse square root of the cosmological constant. We use two spacetime first integrals to isolate a nonperturbative cosmological constant solution at the surface of the start of expansion of the universe. A phenomenological solution to the cosmological constant involves scaling the radius of the present universe. Our idea is to instead solve the cosmological constant at the surface of the initial spacetime bubble, using the initially derived time step, ∆t, as input for the cosmological constant. This was done in a Zeldovich4 section for dark energy; solving the initial value of the cosmological constant supports one of the models of DE and why the universe reaccelerated a billion years ago. We depend on Katherine Freese’s Zeldovich4 talk of dark stars, which form supermassive black holes, to consume initially created DM, and Abhay Ashtekar’s nonsingular start to the universe as part of a solution to low l values in CMBR data. We conclude with a reference to a multiverse generalization of Penrose’s cyclic conformal cosmology as input to the initial nonsingular spacetime bubble.
[1395] vixra:2011.0147 [pdf]
Nuclear Fission of Uranium-238 by Electromagnetic Acceleration of Neutrons
This paper discusses the nuclear fission of Uranium-235 and Uranium-238, and further discusses how Uranium-238 can be used as fission fuel for power generation.
[1396] vixra:2011.0138 [pdf]
Modular Logarithm Unequal
The main idea of this article is simply calculating integer functions in a modulus. The algebra of the integers in a modulus is studied in a completely new style. By a careful construction, a result is proven that two finite numbers have unequal logarithms in a corresponding modulus, and this result is applied to solving a kind of high-degree diophantine equation.
[1397] vixra:2011.0131 [pdf]
Topological Stationarity and Precompactness of Probability Measures
We prove the precompactness of a collection of Borel probability measures over an arbitrary metric space precisely under a new legitimate notion, which we term \textit{topological stationarity}, regulating the sequential behavior of Borel probability measures directly in terms of the open sets. Thus the important direct part of Prokhorov's theorem, which permeates the weak convergence theory, admits a new version with the original and sole assumption --- tightness --- replaced by topological stationarity. Since, as will be justified, our new condition is not vacuous and is logically independent of tightness, our result deepens the understanding of the connection between precompactness of Borel probability measures and metric topologies.
[1398] vixra:2011.0129 [pdf]
An Attempt to Decrypt Pages 269-271 of Jonathan Safran Foer's "Extremely Loud & Incredibly Close"
In this paper we attempt to decrypt the sequence of digits given by Jonathan Safran Foer in his novel "Extremely Loud & Incredibly Close". We create directed acyclic graphs that a human can follow to find potential solutions. Representations of these graphs are displayed in this paper. The Python code used to produce them is also provided, in the appendix.
[1399] vixra:2011.0126 [pdf]
A Quantum Model of Linear Optical Devices for Quantum Computing
Extending an existing quantum-optic model for beamsplitters, a comprehensive model is developed from the first principles of quantum physics to describe photons traversing linear optical devices with an arbitrary number of ports, and even circuits of devices, which are essential components of photonic quantum computing gates and circuits. The model derives the quantum operators and states of the photons at the egress ports of a device, gate or circuit from those at the ingress ports. As an application and validation, it is used to model the experiment that discovered the Hong-Ou-Mandel (HOM) effect and to explain the effect without the notion of quantum interference. The experiment is not only a landmark in the research of quantum optics but also important to photonic quantum computing design.
[1400] vixra:2011.0124 [pdf]
Discussion of Foundation of Mathematics and Quantum Theory
Following the results of our recently published book (F. Lev, Finite mathematics as the foundation of classical mathematics and quantum theory. With application to gravity and particle theory. Springer (2020)), we discuss different aspects of classical and finite mathematics and explain why finite mathematics based on a finite ring of characteristic p is more general (fundamental) than classical mathematics: the former does not have foundational problems, and the latter is a special degenerate case of the former in the formal limit p→∞. In particular, quantum theory based on a finite ring of characteristic p is more general than standard quantum theory because the latter is a special degenerate case of the former in the formal limit p→∞ .
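A toy illustration of the claimed limit (my own example, not from the book): for quantities far smaller than the characteristic p, arithmetic in the finite ring Z/pZ is indistinguishable from ordinary integer arithmetic, and discrepancies appear only near the scale of p.

```python
p = 2**61 - 1  # a large (Mersenne prime) characteristic

def fadd(a, b):
    return (a + b) % p

def fmul(a, b):
    return (a * b) % p

# "Ordinary" arithmetic is recovered for numbers much smaller than p ...
small_ok = fadd(12345, 67890) == 12345 + 67890 and \
           fmul(12345, 67890) == 12345 * 67890
# ... but not near the characteristic, where the ring wraps around.
wraps = fadd(p - 1, 2) != (p - 1) + 2
print(small_ok, wraps)  # True True
```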
[1401] vixra:2011.0123 [pdf]
Computation of Cup I-Product and Steenrod Operations on the Classifying Space of Finite Groups
The aim of this paper is to give a computational treatment to compute the cup iproduct and Steerod operations on cohomology rings of as many groups as possible. We find a new method that could be faster than the methods of Rusin, Vo, and Guillot. There are some available approaches for computing Steenrod operations on these cohomology rings. The computation of all Steenrod squares on the Mod 2 cohomology of all groups of order dividing 32 and all but 58 groups of order 64; partial information on Steenrod square is obtained for all but two groups of order 64. For groups of order 32 this paper completes the partial results due to Rusin, Thanh Tung Vo and Guillot.
[1402] vixra:2011.0101 [pdf]
New Principles of Differential Equations Ⅲ
Using the new method proposed in this paper, in theory, it is possible to obtain general or exact solutions of an infinite number of ordinary differential equations and partial differential equations. These equations can be linear or nonlinear. We enumerate some typical cases and use the new method to prove that some equations do not have certain forms of solutions.
[1403] vixra:2011.0090 [pdf]
An Untold Story of Brownian Motion
Although the concept of Brownian motion or Wiener process is quite popular, proving its existence via construction is a relatively deep piece of work and would not be stressed outside mathematics. Taking the existence of Brownian motion in $C([0,1], \mathbb{R})$ ``for granted'' and following an existing implicit thread, we intend to present an explicit, simple treatment of the existence of Brownian motion in the space $C([0, +\infty[, \mathbb{R})$ of all continuous real-valued functions on the ray $[0, +\infty[$ with moderate technical intensity. In between the developments, some informative little results are proved.
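For intuition only (a Donsker-style scaling sketch, not the construction discussed in the paper): a simple random walk rescaled by the square root of the number of steps already exhibits the Var W(1) = 1 behavior of Brownian motion at time 1.

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps, n_paths = 1_000, 20_000

# W_n(1) = S_n / sqrt(n) for a +/-1 simple random walk S
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
W1 = steps.sum(axis=1) / np.sqrt(n_steps)

print(W1.mean(), W1.var())  # approximately 0 and 1
```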
[1404] vixra:2011.0083 [pdf]
Theoretical Study of a Spherically Symmetric Exact Solution Without Event Horizon and Its Gravity Loss
To provide solutions for the unresolved theoretical questions of black holes, such as the presence of an event horizon, we propose a new spherically symmetric exact solution (which we call the Ryskmit (R) solution). The R solution can be obtained by applying Kruskal-Szekeres coordinates (referred to hereinafter as Kruskal coordinates) to the Schwarzschild solution. The R solution has no singularities other than the origin of coordinates and no "event horizon"; therefore, a black hole from which information could not be extracted from the outside need not be considered. Far from the origin, this solution is approximately equal to the Schwarzschild solution. Another characteristic of this solution is that the gravity reaches its maximum at the Schwarzschild radius, and at half of this radius it transits to Minkowski space, in which gravity does not exist. This means that the gravity gradually decreases with distance from the Schwarzschild radius. Based on the law of conservation of energy, we deduced a result that explains the production of sufficient kinetic energy for gamma-ray bursts. Furthermore, the metric of this solution is remarkably similar to the Reissner–Nordström metric, and the presence and absence of an electrical charge lead to two different masses at the scale of Planck units where the two solutions match. This is an important relationship for answering questions about dark matter. As described above, this exact solution could be a useful basic equation that sheds light not only on astrophysics, but also on particle theory and the unified field theory.
[1405] vixra:2011.0082 [pdf]
Proof That Newton Gravity Moves at the Speed of Light and Not Instantaneously (Infinite Speed) as Thought!
In this paper, we will prove, based on reasoning as well as mathematical evidence and experimental observations, why Newton gravity moves at the speed of light and is not instantaneous as previously thought. The misunderstanding that Newton gravity is instantaneous has constrained our progress in understanding gravity to its full extent. We will show that all of Newton's gravitational phenomena contain the Planck length and the speed of gravity; this speed of gravity is identical to the speed of light. A series of gravitational phenomena that are considered non-Newtonian and most often explained by the theory of general relativity actually contain no information about the speed of gravity. However, all observable gravitational phenomena can be predicted from the Planck length and the speed of gravity alone, and we can easily extract both of them from gravitational phenomena with no knowledge of any physics constants. In addition, we can also measure the speed of light from electromagnetic phenomena and then extract the Planck length from any of Newton's gravitational phenomena with no knowledge of G or h.
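The link between the Planck length and Newtonian gravity invoked above runs through the standard definition $l_p = \sqrt{\hbar G / c^3}$, which inverts to $G = l_p^2 c^3 / \hbar$. A quick numerical check with CODATA values (an illustration of that algebraic relation only, not the paper's constant-free extraction, since it presupposes knowledge of $\hbar$):

```python
# CODATA recommended values (SI units).
l_p   = 1.616255e-35     # Planck length, m
c     = 299792458.0      # speed of light, m/s
h_bar = 1.054571817e-34  # reduced Planck constant, J*s

# Inverting l_p = sqrt(h_bar * G / c^3) gives G = l_p^2 * c^3 / h_bar.
G = l_p ** 2 * c ** 3 / h_bar
# G comes out near 6.674e-11 m^3 kg^-1 s^-2, the Newtonian constant.
```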
[1406] vixra:2011.0078 [pdf]
Young’s Double-Slit and Wheeler’s Delayed-Choice Experiments at a Single-Quantum Level: Wave-Particle Non-Duality
A new `wave-particle non-dualistic interpretation at a single-quantum level' is presented by showing the physical nature of Schrödinger's wave function as an `instantaneous resonant spatial mode' to which a particle's motion is confined. The initial phase associated with a state vector is identified as related to a particular position eigenstate of the particle; hence, the equality of quantum mechanical time to classical time is obtained, and this equality automatically explains the emergence of the classical world from the underlying quantum world. A derivation of the Born rule as a limiting case of the relative frequency of detection is provided for the first time, which automatically resolves the measurement problem. The Born rule derivation is also supplemented with a geometrical interpretation. `What's really going on' in Young's double-slit and Wheeler's delayed-choice experiments is explained at a single-quantum level. Finally, an interference experiment is proposed to verify the correctness of the non-dualistic interpretation.
[1407] vixra:2011.0076 [pdf]
Tachyons from a Laboratory Perspective
Since the first part of the twentieth century, it has been maintained that faster-than-light motion could produce time travel into the past, with its accompanying causality-violating paradoxes. However, there are two different approaches to tachyon communication around a loop: one employs a "hand-off" between momentarily-adjacent observers in relative motion passing each other, while the other applies direct tachyon communication between moving observers who are not adjacent. Tachyon physics in the latter method clearly precludes causality violation, but the former approach is more subtle. An analysis of what would be observed in a physics laboratory, rather than what is inferred from a Minkowski diagram, attests that causality violation does not occur in the hand-off method either. Thus it is demonstrated that tachyons do not violate causality.
[1408] vixra:2011.0073 [pdf]
Bell's Theorem Refuted via True Local Realism
Bell's theorem has been described as the most profound discovery of science; one of the few essential discoveries of 20th Century physics; indecipherable to non-mathematicians. However, taking elementary analysis to be an adequate logic here, we refute Bell's theorem, correct his inequality and identify his error. Further, we do this under the principle of true local realism, the union of true locality (or relativistic causality: no influence propagates superluminally) and true realism (or non-naive realism: some existents change interactively). We thus lay the foundation for a more complete physical theory: one in line with Einstein's ideas and Bell's hopes. Let's see.
[1409] vixra:2011.0069 [pdf]
Turkmen-English Dictionary and the Graphical law
We study the Turkmen-English Dictionary by Jonathan Garrett, Greg Lastowka, Kimberly Naahielua and Meena Pallipamu. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H$=0.04) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H$=0.04. H is external magnetic field, $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature and $k_{B}$ is the Boltzmann constant.
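The normalisation described above can be sketched in a few lines of Python; the letter counts below are hypothetical stand-ins for a dictionary's entry counts per starting letter:

```python
import math

def graphical_law_points(counts):
    # Sort entry counts in decreasing order, then normalise both axes:
    # (ln(rank) / ln(max rank), ln(count) / ln(max count)),
    # the log-log normalisation used in the "graphical law" plots.
    ordered = sorted(counts.values(), reverse=True)
    n, top = len(ordered), ordered[0]
    return [(math.log(rank) / math.log(n), math.log(c) / math.log(top))
            for rank, c in enumerate(ordered, start=1)]

# Hypothetical entry counts per starting letter:
points = graphical_law_points({"a": 512, "b": 256, "c": 64, "d": 8, "e": 2})
```

The first point is always (0, 1) (top-ranked letter) and the last point has abscissa 1, so every curve is pinned to the unit square, which is what makes comparison against a magnetisation curve possible.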
[1410] vixra:2011.0066 [pdf]
Sakata Model Revisited: Hadrons, Nuclei and Scattering
Modern theories of strong interactions suggest that baryon-antibaryon forces can be strongly attractive, and manifestations of ``baryonium'' states have been seen in experiments. In light of these new data, we attempt to revisit the Fermi-Yang-Sakata idea that mesons and baryons are bound states of few fundamental ``sakatons'' identified with $p, n, \Lambda$, and $\Lambda_c$ particles. We optimized parameters of inter-sakaton potentials and calculated meson and baryon mass spectra in fair agreement with experiment. Moreover, the same set of potentials allows us to reproduce approximately elastic scattering cross sections of baryons and binding energies of light nuclei and hypernuclei. This suggests that the Sakata model could be a promising organizing principle in particle and nuclear physics. This principle may also coexist with the modern quark model, where both valence and sea quark contributions to the hadron structure are allowed.
[1411] vixra:2011.0052 [pdf]
Another Topological Proof for Equivalent Characterizations of Continuity
To prove the equivalence between the $\epsilon$-$\delta$ characterization and the topological characterization of the continuity of maps acting between metric spaces, there are two typical approaches in, respectively, analysis and topology. We provide another proof that would be pedagogically informative, resembling the typical proof method --- the principle of appropriate sets --- associated with sigma-algebras.
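For reference, the equivalence being proved can be stated as follows (the standard formulation; the paper's principle-of-appropriate-sets route is not reproduced here):

```latex
f\colon (X,d_X)\to (Y,d_Y) \text{ is continuous}
\iff \forall x\in X\ \forall \epsilon>0\ \exists \delta>0:\
      d_X(x,x')<\delta \implies d_Y(f(x),f(x'))<\epsilon
\iff f^{-1}(V) \text{ is open in } X \text{ for every open } V\subseteq Y.
```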
[1412] vixra:2011.0044 [pdf]
How Likely Is It for Countably Many Almost Sure Events to Occur Simultaneously?
Given a countable collection of almost sure events, the event that at least one of them occurs is ``evidently'' almost sure. It is, however, not so trivial to assert that the event that every event of the collection occurs is almost sure. Measure theory furnishes a simple, definite, and affirmative answer to the question stated in the title. This useful proposition seems to rarely, if ever, appear in teaching materials on measure-theoretic probability; our proof in particular would help beginning students of probability theory get a feeling for almost sure events.
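The standard one-line argument, which may or may not be the proof route taken in the paper: if each $A_n$ is almost sure, its complement is a null set, and countable subadditivity gives

```latex
P\Bigl(\bigcap_{n} A_n\Bigr)
  = 1 - P\Bigl(\bigcup_{n} A_n^{c}\Bigr)
  \ge 1 - \sum_{n} P\bigl(A_n^{c}\bigr)
  = 1 - 0
  = 1 .
```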
[1413] vixra:2011.0043 [pdf]
Very Elementary Proof of Invariance of Domain for the Real Line
That every Euclidean subset homeomorphic to the ambient Euclidean space is open, a version of invariance of domain, is a relatively deep result whose typical proof is far from elementary. When it comes to the real line, this version of invariance of domain admits a simple proof that depends precisely on some elementary results of ``common sense''. It seems a pity that an elementary proof of this version for the real line is not well-documented in the related literature, even as an exercise, and it certainly deserves a space. Apart from the main purpose, as we develop the ideas we also present some pedagogically enlightening remarks, which may or may not be well-documented.
[1414] vixra:2011.0034 [pdf]
Many-Body Fermions and Riemann Hypothesis
We study the algebraic structure of the eigenvalues of a Hamiltonian that corresponds to a many-body fermionic system. As the Hamiltonian is quadratic in fermion creation and/or annihilation operators, the system is exactly integrable and the complete single fermion excitation energy spectrum is constructed using the non-interacting fermions that are eigenstates of the quadratic matrix related to the system Hamiltonian. Connection to the Riemann Hypothesis is discussed.
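The "exactly integrable" structure relied on above can be made concrete: once a quadratic Hamiltonian is diagonalised, every many-body eigenvalue is a subset sum of single-fermion excitation energies. A small illustrative sketch with toy energies (not the paper's spectrum):

```python
from itertools import combinations

def many_body_spectrum(single_energies):
    # For a free-fermion (quadratic) Hamiltonian, each many-body
    # eigenvalue is the sum over an occupied subset of the
    # single-fermion excitation energies.
    levels = []
    for k in range(len(single_energies) + 1):
        for occupied in combinations(single_energies, k):
            levels.append(sum(occupied))
    return sorted(levels)

# Toy single-fermion energies chosen so all 2^3 many-body levels differ:
spectrum = many_body_spectrum([1.0, 2.0, 4.0])
```

Three single-fermion modes yield 2^3 = 8 many-body levels, here 0 through 7.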
[1415] vixra:2011.0032 [pdf]
Upper-order Nuclei Consist of H-2, H-3, He-3, He-4 and n
There is no nucleus with more than two neighboring protons, because the presence of a third proton creates an increased negative potential that exceeds their stability potential, causing a cleaving (beta decay β+) of this third proton. These two protons lie next to each other and, owing to their opposite magnetic moments, create a column of magnetic field, while another magnetic column is created by the rotating neutrons. Thus, the first phase of nuclear structure ends at He-4. Protons are immobile, while neutrons rotate around them. However, how is a second He-4 nucleus added? Apparently by sharing a common axis with the first He-4. But why is beryllium Be-8, with its two superimposed He-4 nuclei, unstable? We will prove that column construction is based on the stability of carbon C-12 and oxygen O-16, which consist of three and four superimposed He-4 nuclei, respectively. Consequently, the structure of nuclei begins with the so-called lower-order nuclei, such as deuterium, tritium and helium He-3, which evolve into helium He-4 and then into the first upper-order nucleus, oxygen, which has four He-4 nuclei in a column of strong negative electric field.
[1416] vixra:2011.0030 [pdf]
Distribution of Integrals of Wiener Paths
With a new proof approach, we show that the normal distribution with mean zero and variance $1/3$ is the distribution of the integral $\int_{[0,1]}W_{t}\,\mathrm{d}t$ of the sample paths of the Wiener process $W$ in $C([0,1], \R)$.
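The stated law is easy to corroborate numerically. A Monte Carlo sketch (illustrative only; step count, path count, and seed are arbitrary) estimates the variance of Riemann sums of simulated Wiener paths:

```python
import random

random.seed(1)

def path_integral(n_steps=500):
    # Right-endpoint Riemann-sum approximation of the integral of W_t
    # over [0, 1] along one simulated Wiener path.
    dt = 1.0 / n_steps
    w, total = 0.0, 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, dt ** 0.5)
        total += w * dt
    return total

samples = [path_integral() for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# mean should be near 0 and var near 1/3, in line with the N(0, 1/3) law.
```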
[1417] vixra:2011.0029 [pdf]
Growth Order of Standardized Distribution Functions
Denote by $\CDF^{0,1}(\R)$ the class of all (cumulative) distribution functions on $\R$ with zero mean and unit variance; if $F \in \CDF^{0,1}(\R)$, we are interested in the asymptotic behavior of the function sequence $(x \mapsto nF(x/\sqrt{n}))_{n \in \N}$. We show that $\inf_{F \in \CDF^{0,1}(\R)}\liminf_{n \to \infty}nF(x/\sqrt{n}) \geq \Phi(x)$ for all $x \in \R$, which in particular appears to be the first result regarding the growth order of an arbitrary standardized distribution function on $\R$ near the origin.
[1418] vixra:2011.0027 [pdf]
Comments on "Analytical Features of the Sir Model and Their Applications to Covid-19"
In their article, Kudryashov et al. (2021) try to establish the analytical solution of the SIR epidemiological model. One of the equations given there is wrong, which invalidates the presented solution derived from it. The objective of the present letter is to indicate this error and to present the correct analytical solution of the SIR epidemiological model.
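For readers checking such solutions numerically, the SIR system is straightforward to integrate directly; a minimal forward-Euler sketch (the rates and step size below are arbitrary illustrations, not values from either paper):

```python
def sir_step(s, i, r, beta, gamma, dt):
    # One forward-Euler step of the SIR model:
    #   S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I.
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

s, i, r = 0.99, 0.01, 0.0
for _ in range(10000):          # integrate to t = 100
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=0.01)
conserved = s + i + r           # S + I + R is conserved (equals 1 here)
```

The conservation of S + I + R gives a quick sanity check against any claimed analytical solution.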
[1419] vixra:2011.0026 [pdf]
Indirect Polarization Alignment with Points on the Sky, the Hub Test
The alignment of transverse vectors on the sky, such as the polarization directions of electromagnetic radiation from astronomical sources, can be an interesting property of the sources themselves or of the intervening medium between source and detector. For many regions of the Milky Way, the alignment of the polarization directions of starlight is evident. However evident visually, it is useful to have a numerical alignment function for judging the significance of the correlations. The test described here evaluates the tendency of aligned directions to focus on points in the sky, as well as correlations in their avoidance of points in the sky. The formulas needed to conduct the test are derived, and two illustrative examples are provided. In one sample from the Milky Way, the polarization directions of starlight are seen to converge far from the sample; for another sample, a set of quasars with polarized radio emission, the convergence occurs close to the sample.
[1420] vixra:2011.0015 [pdf]
Probability and Stochastic Analysis in Reproducing Kernels and Division by Zero Calculus
Professor Rolin Zhang kindly invited the author to The 6th Int'l Conference on Probability and Stochastic Analysis (ICPSA 2021), January 5-7, 2021 in Sanya, China as a Keynote speaker, and so we will state the basic interrelations between reproducing kernels and division by zero from the viewpoint of the conference topics. The connections between reproducing kernels and Probability and Stochastic Analysis are already fundamental and well-known, and so we will mainly refer to the basic relations with our new division by zero: $1/0=0/0=z/0=\tan(\pi/2) =\log 0 =0, [(z^n)/n]_{n=0} = \log z$, $[e^{(1/z)}]_{z=0} = 1$.
[1421] vixra:2011.0014 [pdf]
Cantor Diagonal Argument
This paper proves a result on the decimal expansion of the rational numbers in the open rational interval (0, 1). The result is then used to discuss a reordering of the rows of a table T assumed to contain all rational numbers within (0, 1), in such a way that the diagonal of the reordered table could be a rational number from which different rational antidiagonals (elements of (0, 1) that cannot be in T) could be defined. If that were the case, then for the same reason as in Cantor's diagonal argument, the open rational interval (0, 1) would be non-denumerable, and we would have a contradiction in set theory, because Cantor also proved that the set of rational numbers is denumerable.
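The digit-flipping construction at the heart of any diagonal argument is mechanical. A sketch of the classical construction on a finite toy table of decimal digit rows (the paper's rational-table variant is not reproduced; flipping between 5 and 6 avoids digits 0 and 9 and hence the 0.4999... = 0.5000... ambiguity):

```python
def antidiagonal(table):
    # Produce a digit sequence that differs from row i in its i-th digit.
    # Flipping between 5 and 6 keeps the digits away from 0 and 9, so the
    # resulting decimal expansion is unambiguous.
    return [6 if row[i] == 5 else 5 for i, row in enumerate(table)]

# A hypothetical 4-row table of decimal digit sequences:
table = [
    [1, 4, 1, 5],
    [2, 7, 1, 8],
    [3, 3, 5, 3],
    [1, 6, 1, 8],
]
anti = antidiagonal(table)   # differs from every row at its diagonal digit
```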
[1422] vixra:2011.0013 [pdf]
Hilbert Machine
Inspired by the emblematic Hilbert Hotel, the Hilbert machine is a conceptual super-machine whose functioning questions the consistency of the actual-infinity hypothesis subsumed into the Axiom of Infinity.
[1423] vixra:2011.0012 [pdf]
Zeno Dichotomies
This chapter introduces a formalized version of Zeno’s Dichotomy in its two variants (here referred to as Dichotomy I and II) based on the discreteness and separation of ω-order (Dichotomy I) and of ω∗-order (Dichotomy II) defined below in this section. Each of these formalized versions leads to a contradiction pointing to the inconsistency of the hypothesis of the actual infinity.
[1424] vixra:2011.0011 [pdf]
Khasi English Dictionary and the Graphical law
We study the Khasi English Dictionary by U Nissor Singh. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H$=0), i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H$=0, i.e. H=0. Here $\beta$ is $\frac{1}{k_{B}T}$, where T is temperature, H is the external magnetic field and $k_{B}$ is the Boltzmann constant.
[1425] vixra:2011.0005 [pdf]
Some New Type Laurent Expansions and Division by Zero Calculus; Spectral Theory
In this paper we introduce a very interesting property of the Laurent expansion in connection with the division by zero calculus and Euclid geometry by H. Okumura. The content may be related to analytic motion of figures. We will refer to some similar problems in the spectral theory of closed operators.
[1426] vixra:2010.0239 [pdf]
Can Both the Many Worlds and the Copenhagen Interpretation be True?
The Many Worlds Interpretation (MWI) was proposed by H. Everett. It sounded like the most senseless stuff to the scientists of his time, but today it is one of the leading interpretations of Quantum Mechanics (QM). I argue that this interpretation does not conflict with the conventional interpretation. Namely, an observer whose mind does not depart from Our Universe must use the Copenhagen Interpretation (CI).
[1427] vixra:2010.0228 [pdf]
Division by Zero Calculus and Euclidean Geometry - Revolution in Euclidean Geometry
In this paper, we will discuss Euclidean geometry from the viewpoint of the division by zero calculus, with typical examples. Where is the point at infinity? The point seems vague in Euclidean geometry in a sense. Certainly we can see the point at infinity with the classical Riemann sphere. However, by the division by zero and the division by zero calculus, we found that the Riemann sphere is not suitable; rather, D\"aumler's horn torus model is suitable, as it shows the coincidence of the zero point and the point at infinity. Therefore, Euclidean geometry is extended globally to the point at infinity. This will give a great revolution of Euclidean geometry. The impacts are wide, and we therefore show their essence with several typical examples.
[1428] vixra:2010.0227 [pdf]
A Journey to the Pierce-Birkhoff Conjecture
This paper initializes the study of the Pierce-Birkhoff conjecture. We start by introducing the notion of the area and volume induced by a multivariate expansion and develop some inequalities for our next studies. In particular we obtain the inequality \begin{align} \sum \limits_{\substack{i,j\in [1,n]\\a_{i_{\sigma(s)}}<a_{j_{\sigma(s)}}\\s\in [1,l]\\v\neq i,j\\v\in [1,n] }}\bigg | \bigg |\vec{a}_{i} \diamond \vec{a}_{j}\diamond \cdots \diamond \vec{a}_v\bigg |\bigg |\sum \limits_{k=1}^{n}\int \limits_{a_{i_{\sigma(l)}}}^{a_{j_{\sigma(l)}}}\int \limits_{a_{i_{\sigma(l-1)}}}^{a_{j_{\sigma(l-1)}}}\cdots \int \limits_{a_{i_{\sigma(1)}}}^{a_{j_{\sigma(1)}}}g_kdx_{\sigma(1)}dx_{\sigma(2)}\cdots dx_{\sigma(l)}\nonumber\\ \leq 2C\times \binom{n}{2}\times \sqrt{n}\times \nonumber \\ \times \int \limits_{a_{i_{\sigma(l)}}}^{a_{j_{\sigma(l)}}}\int \limits_{a_{i_{\sigma(l-1)}}}^{a_{j_{\sigma(l-1)}}}\cdots \int \limits_{a_{i_{\sigma(1)}}}^{a_{j_{\sigma(1)}}}\sqrt{\bigg(\sum \limits_{k=1}^{n}(\mathrm{max}(g_k))^2\bigg)}dx_{\sigma(1)}dx_{\sigma(2)}\cdots dx_{\sigma(l)}\nonumber \end{align}for some constant $C>0$, where $\sigma:\{1,2,\ldots,l\}\longrightarrow \{1,2,\ldots,l\}$ is a permutation for $g_k\in \mathbb{R}[x_1,x_2,\ldots,x_l]$ and $\vec{a}_{i} \diamond \vec{a}_{j}\diamond \cdots \diamond \vec{a}_k \diamond \vec{a}_{v}$ is the cross product of any of the $n-1$ fixed spots in $\mathbb{R}^{l}$ including the spots $\vec{a}_i,\vec{a}_j$.
[1429] vixra:2010.0225 [pdf]
FUSIONET: A Scalable Framework for Image Classification
Convolutional Neural Networks (CNNs) have become state-of-the-art methods for image classification in recent times. CNNs have proven very productive in identifying objects and human faces, and in powering machine vision in robots as well as self-driving cars. At this point, they perform better than human subjects on a large number of image datasets. A large portion of these datasets depends on the idea of solid classes. Hence, image classification has become an exciting and appealing domain in Artificial Intelligence (AI) research. In this paper, we propose a unique framework, FUSIONET, to aid in image classification. Our proposal combines two novel models in parallel: MainNET, a 3 x 3 architecture, and AuxNET, a 1 x 1 architecture. Subsequently, the feature maps extracted from this combination are fed as input features to a downstream classifier for classification of the images in question. FUSIONET has been trained, tested, and evaluated on real-world datasets, achieving state-of-the-art results on the popular CINIC-10 dataset.
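A pure-Python sketch can make the parallel-branch idea concrete: one 3 x 3 convolution and one 1 x 1 convolution run over the same input, and their feature maps go to a downstream classifier. The fusion step here (simply returning both maps) is a hypothetical simplification, not the paper's exact architecture:

```python
def conv2d(image, kernel):
    # 'Valid' 2-D cross-correlation of a single-channel image.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(kernel[a][b] * image[y + a][x + b]
                 for a in range(kh) for b in range(kw))
             for x in range(out_w)] for y in range(out_h)]

def fusionet_features(image, k3, k1):
    # Run a MainNET-style 3x3 branch and an AuxNET-style 1x1 branch in
    # parallel; a real model would fuse the two maps and classify them.
    return conv2d(image, k3), conv2d(image, k1)

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # 3x3 kernel (diagonal filter)
k1 = [[2]]                               # 1x1 kernel (per-pixel scaling)
main_map, aux_map = fusionet_features(image, k3, k1)
```

Note the shapes differ under 'valid' padding: the 3x3 branch shrinks the map while the 1x1 branch preserves it, which is one reason real fusion layers need an alignment step.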
[1430] vixra:2010.0224 [pdf]
The Extraordinary Mathematical Properties of the Fine Structure Constant by Its Relation to the Monster Group and Some Mathematical Constants
As R. P. Feynman said, the fine structure constant is the probability that an electron emits or absorbs a photon. In this work several equations are shown, the majority of an empirical, heuristic character, by which the precise value of the fine-structure constant (its inverse) is obtained. Far from being pure numerology, we think that they are not accidental. We base this claim on the fact that in all of them there appear, repetitively, either mathematical constants such as Pi, e, etc., and/or quantum corrections due to the masses of the leptons with electric charge and the masses of the W and Z bosons (and their entropies). This rules out their casual character. That there are so many possible equations, and that they are related to very relevant aspects of mathematics such as the monster group and others, allows us to demonstrate the extraordinary mathematical properties of this dimensionless constant.
[1431] vixra:2010.0222 [pdf]
Assuming C Less Than Rad2(abc) Implies the ABC Conjecture Is True
In this paper about the $abc$ conjecture, assuming that the condition $c<rad^2(abc)$ holds and that the constant $K(\epsilon)$ is a smooth function having a derivative for $\epsilon \in ]0,1[$, we give a proof of the $abc$ conjecture.
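The hypothesis $c<rad^2(abc)$ is easy to probe computationally. A sketch with a trial-division radical, checked on the classic triple $1+8=9$ (an abc-hit, since $rad(abc)<c$, yet still inside the paper's hypothesis):

```python
def rad(n):
    # Radical of n: the product of its distinct prime factors,
    # found by trial division.
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

# The triple 1 + 8 = 9 (i.e. 1 + 2^3 = 3^2) has rad(abc) = rad(72) = 6,
# so rad(abc) < c = 9, yet c < rad(abc)^2 = 36 still holds.
a, b, c = 1, 8, 9
hypothesis_holds = c < rad(a * b * c) ** 2
```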
[1432] vixra:2010.0219 [pdf]
Comment on "Quantum Correlations Are Weaved by the Spinors of the Euclidean Primitives"
I point out some obviously fatal mathematical errors in the recent paper published in Royal Society Open Science and entitled "Quantum correlations are weaved by the spinors of the Euclidean primitives" by Joy Christian, director of the "Einstein Center for Local Realistic Physics" in Oxford, UK. Submitted to RSOS at the invitation of the editors. This is version 3, 29 October 2020; minor errors corrected and logos blanked.
[1433] vixra:2010.0217 [pdf]
Exact Self-Consistent Effective Hamiltonian Theory
We propose a general variational fermionic many-body wavefunction that generates an effective Hamiltonian in quadratic form, which can then be exactly solved. The theory can be constructed within the density functional theory framework, and a self-consistent scheme is proposed for solving the exact density functional theory. We apply the theory to a structurally disordered system and to symmetric and asymmetric Hubbard dimers and the corresponding lattice models; the single-fermion excitation spectra show a persistent gap due to a pairing condensate induced by fermionic entanglement. For the disordered system, the density of states at the edge of the gap diverges in the thermodynamic limit, suggesting a topologically ordered phase, and a sharp resonance is predicted since the gap does not depend on the temperature of the system. For the symmetric Hubbard model, the gap in both the half-filled and the doped case suggests that the quantum phase transition between the AFM and SC phases is a continuous phase transition.
[1434] vixra:2010.0214 [pdf]
The Wave Equations for the Lorentz Transformations
We prove here, by a rigorous mathematical procedure, that so-called Lorentzian time in the special theory of relativity is defined by the wave equation, where the wave of time is a form of matter and not the Bergson physiological process in S and S′.
[1435] vixra:2010.0208 [pdf]
Error in Derivation of Compton Scattering Formula
Arthur H. Compton published his light photon scattering theory in 1922. He derived a remarkably elegant formula which now bears his name, the Compton scattering formula: λ' − λ = λ_c(1 − cos φ). It was derived basically from energy and momentum conservation considerations for the collision of x-ray or gamma-ray photons with electrons within the atoms of light elements. Due possibly to the sterling reputation of Compton as a physicist, his theory was readily accepted. But there is a critical flaw in the derivation of the Compton formula that should render the formula dubious. In the derivation, Compton assumed the scattering electron to be initially at rest. The original experiment of Compton used carbon graphite as the scattering target. The ionization energy of carbon is about 11.3 eV, and this is also the kinetic energy of the least bound electrons in the carbon atom. For a scattering angle of 10°, the energy lost by the x-ray photon, which ended up as the recoil energy of the scattered electron, was around 9.04 eV. This shows that the initial kinetic energy of the scattering electron is not insignificant and should not be ignored. This unjustified assumption in the derivation makes the generality of the Compton scattering formula dubious.
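The 9.04 eV figure can be reproduced from the formula itself, assuming the 0.0711 nm molybdenum K-alpha line of Compton's original experiment (an assumption; the abstract does not state the wavelength used):

```python
import math

LAMBDA_C = 2.42631e-3   # electron Compton wavelength, nm
HC = 1239.84            # h*c, eV*nm

def photon_energy_loss(lam_nm, phi_deg):
    # Compton shift: lambda' - lambda = lambda_c * (1 - cos(phi)).
    # The photon's energy loss equals the electron's recoil energy.
    shift = LAMBDA_C * (1.0 - math.cos(math.radians(phi_deg)))
    return HC / lam_nm - HC / (lam_nm + shift)

loss = photon_energy_loss(0.0711, 10.0)  # ~9 eV, the figure in question
```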
[1436] vixra:2010.0199 [pdf]
Fibonacci-zeta Infinite Series Associated with the Polygamma Functions
We derive new infinite series involving Fibonacci numbers and Riemann zeta numbers. The calculations are facilitated by evaluating linear combinations of polygamma functions of the same order at certain arguments.
[1437] vixra:2010.0194 [pdf]
Physics and the Problem of Change
Physics, the science of change, has managed to discover and explain a large number of qualitative and quantitative aspects of a large number of natural changes, but change itself remains unexplained since we first faced it, over twenty-seven centuries ago. This paper proves, in terms of transfinite arithmetic, that change is inconsistent within the infinitist framework of the spacetime continuum, where all solutions have been tried until now. It then proposes a consistent solution within the finitist framework of discrete spacetimes of cellular-automaton-like models, proving that the factor converting between continuous and discrete spacetimes has the algebraic form of the relativistic factor of the Lorentz transformation, which could be reinterpreted as an operator translating between a consistent discrete reality and an inconsistent continuous reality.
[1438] vixra:2010.0193 [pdf]
The Waring Rank of the 3 x 3 Determinant
Let $f$ be a homogeneous polynomial of degree $d$ with coefficients in $\mathbb{C}$. The Waring rank of $f$ is the smallest integer $r$ such that $f$ is a sum of $r$ powers of linear forms. We show that the Waring rank of the polynomial $x_1 y_2 z_3 - x_1 y_3 z_2 + x_2 y_3 z_1 - x_2 y_1 z_3 + x_3 y_1 z_2 - x_3 y_2 z_1$ is at least 18, which matches the known upper bound.
[1439] vixra:2010.0189 [pdf]
Isometric Admissibility for Bounded Subrings
Let $\tilde{H}$ be a right-irreducible, contra-characteristic, pairwise commutative manifold. In [23, 36, 15], the authors address the compactness of algebraic topoi under the additional assumption that $-\tilde{\zeta}(y) > \rho''(0, \ldots, B_A(\bar{G}))$. We show that $\Phi > -1$. The work in [35, 21] did not consider the associative case. In [17], the authors address the smoothness of real, Turing, sub-continuously D-Gödel random variables under the additional assumption that $O > \|T\|$.
[1440] vixra:2010.0187 [pdf]
Towards Science Unification Through Number Theory
Number Theory comes back as the heart of unified Science, in a Computing Cosmos using the bases 2, 3, 5, 7, whose two symmetric combinations explain the main lepton mass ratios. The corresponding Holic Principle induces a symmetry between the Newton and Planck constants, which confirms the Permanent Sweeping Holography Bang Cosmology, with invariant baryon density 3/10, the dark baryons being a dephased matter-antimatter oscillation. This implies the DNA bi-codon mean isotopic mass, confirming to 0.1 ppm the electron-based Topological Axis, whose terminal boson is the base-2 c-observable Universe in the base-3 Cosmos. The physical parameters involve the Euler idoneal numbers and the special Fermat primes of Wieferich (base 2) and Mirimanoff (base 3). The prime numbers and crystallographic symmetries are related to the 4-fold structure of the DNA bi-codon. The forgotten Eddington proton-tau symmetry is rehabilitated, renewing the supersymmetry quest. This excludes the concepts of Multiverse, Continuum, Infinity, Locality and Zero-mass Particle, leading to stringent predictions in Cosmology, Particle Physics and Biology.
[1441] vixra:2010.0178 [pdf]
Determinism, Quantum Mechanics and Asymmetric Visible Matter
The focus of this note is the formation of a matter-antimatter asymmetric universe without antimatter in the first place. To avoid the problems of the best-known published preon models, we apply the deterministic Hilbert space methods of 't Hooft's theory to preons. Inflation starts in an ultra-dense graviton phase predominating in the very early universe, producing supersymmetric preons, axion-like particles and torsion in spacetime. All standard model and dark sector fermions are created as spectators from the preons during early inflation. The dark sector particles remain spectators all the way beyond reheating, while the visible sector particles couple to the inflaton. Before reheating is reached, supersymmetry is broken to the minimal supersymmetric standard model by gravitational mediation from the preon sector. Consequently, asymmetric visible matter, symmetric dark matter and dark energy are produced, and much later nucleons and light nuclei are formed. The deterministic preon-level structure is necessary for the mechanism, which creates the asymmetric standard model visible matter directly from C-symmetric preons, without a notable amount of antimatter and without the Sakharov conditions.
[1442] vixra:2010.0170 [pdf]
A Wave Representation for Massless Neutrino Oscillations: The Weak
There are solutions of the Klein-Gordon equation for the massless neutrino that produce massless neutrino oscillation of flavor. These solutions serve as a counterexample to the Pontecorvo-Maki-Nakagawa-Sakata theory of neutrino flavor oscillation, which implies neutrinos must have mass, contrary to the standard model. We show that the wave function for the massless antineutrino in an inverse beta decay (IBD) is a superposition of two independent solutions of the Klein-Gordon equation. One solution represents the latent incident wave upon an IBD; the other represents the latent reflected wave from the IBD. This superposition renders a compound modulated wave function with regard to amplitude and phase modulation. This compound modulation is shown to facilitate neutrino oscillation that may be massless and, therefore, consistent with the standard model. Beyond the massless counterexample, the weak interaction is shown to transmute the wave function during an IBD by changing the amounts of the latent incident and latent reflected wave functions allocated to the superposition.
[1443] vixra:2010.0166 [pdf]
The Gaussian Law of Gravitation under Collision Space-Time
In this short note, we present the Gaussian law of gravitation, based on the concept that mass is collision-time; see our paper Collision Space-Time [1].
[1444] vixra:2010.0163 [pdf]
Biquaternion Based Construction of the Weyl- and Dirac Matrices and Their Lorentz Transformation Operators
The necessity of Lorentz transforming the Dirac matrices is an ongoing issue with contradicting opinions. The Lorentz transformation of Dirac spinors is clear, but for the Dirac adjoint, the combination of a spinor and the `time-like' zeroth gamma matrix, the situation is fussy again. In the Feynman slash objects, the gamma matrix four-vector connects to the dynamic four-vectors without really becoming one itself. The Feynman slash objects exist in 4-D Minkowski space-time on the one hand; the gamma matrices are often taken as inert objects, like the Minkowski metric itself, on the other hand. In short, a slumbering confusion exists in RQM's roots. In this paper, first a Pauli-level biquaternion environment equivalent to Minkowski space-time is presented. Then the Weyl-Dirac environment is produced as a PT doubling of the biquaternion Pauli environment. It is this production process from basic elements that provides some clarification regarding the mentioned RQM foundational fussiness.
[1445] vixra:2010.0162 [pdf]
Comparative Statics for Oligopoly: Flawless Mathematics Applied to a Faulty Result
The assumption that each player conditions on the endogenous actions of his rivals when the players are unable to cooperate is inconsistent with the assumption of rational, optimising behavior.
[1446] vixra:2010.0159 [pdf]
Thomson Lamp
The argument of the Thomson lamp and Benacerraf's critique are reexamined from the perspective of the ω-order legitimated by the hypothesis of the actual infinity subsumed into the Axiom of Infinity. The conclusions point to the inconsistency of that hypothesis.
[1447] vixra:2010.0158 [pdf]
Double Relativity: An Inconsistent Reflection of Light
This paper discusses the role of Lorentz transformation in two inconsistent changes in the velocity of a photon freely moving through a standard fluid each time it is reflected by a mirror inside the fluid, being the fluid at rest in its container and the container observed at rest and at uniform relative motion.
[1448] vixra:2010.0154 [pdf]
Modeling That Predicts Elementary Particles and Explains Data about Dark Matter, Early Galaxies, and the Cosmos
We try to solve three decades-old physics challenges. List all elementary particles. Describe dark matter. Describe mechanisms that govern the rate of expansion of the universe. We propose new modeling. The modeling uses extensions to harmonic oscillator mathematics. The modeling points to all known elementary particles. The modeling suggests new particles. Based on those results, we do the following. We explain observed ratios of dark matter amounts to ordinary matter amounts. We suggest details about galaxy formation. We suggest details about inflation. We suggest aspects regarding changes in the rate of expansion of the universe. We interrelate the masses of some elementary particles. We interrelate the strengths of electromagnetism and gravity. Our work seems to offer new insight regarding applications of harmonic oscillator mathematics. Our work seems to offer new insight regarding three branches of physics. The branches are elementary particles, astrophysics, and cosmology.
[1449] vixra:2010.0152 [pdf]
Finitist Results Concerning Physics
This paper uses transfinite ordinals to prove that the distance between any two given points and the interval of time between any two given instants can only be finite, and that, under certain conditions, the number of events between any two events is always finite. It also proves a contradiction involving the actual infinity hypothesis on which the spacetime continuum is grounded. The alternative of a discrete spacetime is then considered, and the consideration leads, via a digital Pythagorean theorem, to the conclusion that the factor for converting between continuous and digital geometries is the relativistic Lorentz factor if length is replaced with the product of speed and time in an isotropic space. These finitist results suggest it is worth considering the possibility of a digital interpretation of special relativity.
[1450] vixra:2010.0147 [pdf]
A Genetic Algorithm and Discriminant Analysis Based Outlier Detector
Fisher Discriminant Analysis (FDA), also known as Linear Discriminant Analysis (LDA), is a simple yet highly effective classification tool for a wide variety of datasets and settings. In this paper, we propose to leverage the discriminative potency of FDA for an unsupervised outlier detection algorithm. Unsupervised anomaly detection has been a topic of high interest in the literature due to its numerous practical applications and the fuzzy nature of subjective interpretations of success; it is therefore important to have different types of algorithms which can deliver distinct perspectives. The proposed method selects the subset of outlier points based on the maximization of the LDA distance between the class of outliers and the class of non-outliers via a genetic algorithm.
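A minimal sketch of this selection scheme (our own illustration, not the authors' exact algorithm: we assume a fixed outlier-set size `k`, a single swap mutation, and elitist selection; `fisher_score` and `ga_outliers` are illustrative names):

```python
import numpy as np

def fisher_score(X, mask, eps=1e-9):
    """Fisher/LDA separation between candidate outliers (mask) and the rest."""
    X0, X1 = X[~mask], X[mask]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter, regularized so it is always invertible.
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    Sw += eps * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, mu1 - mu0)          # LDA direction
    return float((w @ (mu1 - mu0)) ** 2 / (w @ Sw @ w + eps))

def ga_outliers(X, k, pop=30, gens=300, seed=0):
    """Evolve k-element outlier masks that maximize the Fisher separation."""
    rng = np.random.default_rng(seed)
    n = len(X)

    def random_mask():
        m = np.zeros(n, dtype=bool)
        m[rng.choice(n, size=k, replace=False)] = True
        return m

    population = [random_mask() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda m: fisher_score(X, m), reverse=True)
        survivors = population[:pop // 2]       # elitist selection
        children = []
        for parent in survivors:
            child = parent.copy()
            # Mutation: swap one selected point with one unselected point.
            child[rng.choice(np.flatnonzero(child))] = False
            child[rng.choice(np.flatnonzero(~child))] = True
            children.append(child)
        population = survivors + children
    return max(population, key=lambda m: fisher_score(X, m))

# Demo: 60 inliers around the origin plus 5 planted outliers near (8, 8).
demo_rng = np.random.default_rng(1)
X = np.vstack([demo_rng.normal(0.0, 1.0, (60, 2)),
               demo_rng.normal(8.0, 0.5, (5, 2))])
found = ga_outliers(X, k=5)
```

With the well-separated planted configuration above, the evolved mask concentrates on the injected points; elitism guarantees the best mask never degrades, and the swap mutation supplies the improving moves.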
[1451] vixra:2010.0146 [pdf]
Super-Almost Countable Fields for a Functor
Let τ be an almost elliptic morphism. Recent interest in finitely right-free monoids has centered on constructing pseudo-analytically maximal groups. We show that every continuously negative system is degenerate, uncountable and invariant. In this setting, the ability to construct almost surely closed, almost surely hyper-empty homeomorphisms is essential. It would be interesting to apply the techniques of [11] to natural functors.
[1452] vixra:2010.0132 [pdf]
Proving Unproved Euclidean Propositions on a New Foundational Basis
This article introduces a new foundation for Euclidean geometry more productive than other classical and modern alternatives. Some well-known classical propositions that were proved to be unprovable on the basis of other foundations of Euclidean geometry can now be proved within the new foundational framework. Ten axioms, 28 definitions and 40 corollaries are the key elements of the new formal basis. The axioms are totally new, except Axiom 5 (a light form of Euclid's Postulate 1) and Axiom 8 (an extended version of Euclid's Postulate 3). The definitions include productive definitions of concepts so far primitive, or formally unproductive, such as straight line, angle or plane. The new foundation allows us to prove, among other results, the following axiomatic statements: Euclid's First Postulate, Euclid's Second Postulate, Hilbert's Axioms I.5, II.1, II.2, II.3, II.4 and IV.6, Euclid's Postulate 4, Posidonius-Geminus' Axiom, Proclus' Axiom, Cataldi's Axiom, Tacquet's Axiom 11, Khayyam's Axiom, Playfair's Axiom, and an extended version of Euclid's Fifth Postulate.
[1453] vixra:2010.0130 [pdf]
A Wrong Argument in a Seminal Physics Paper
This article examines a wrong argument in a seminal physics paper published 114 years ago: Einstein's paper on the electrodynamics of moving objects. Although the argument in question is not determinant for the main conclusions of the paper (thanks to the prevalence of the Lorentz transformation also deduced in the same paper), it contains several basic and significant errors that have remained undetected to this day. Those errors are consequences of a misinterpretation of the Principle of Relativity, surprisingly the same misinterpretation one can find in some naive critiques of special relativity. Considering the relevance of the paper and of its author, and the importance of the errors, there is no alternative but to make them public.
[1454] vixra:2010.0115 [pdf]
"Phase-Tube" Structure Associated with Quantum State Vector and the Born Rule
According to the non-dualistic interpretation of quantum mechanics, the initial/global/overall phase associated with a quantum state vector is related to a particular eigenstate of an observable. This phase gives rise to a "tube"-like geometrical structure associated with the state vector, and the tube branches into several smaller tubes. The total number of smaller tubes is equal to the total number of eigenstates of the observable; each branch is associated with a particular eigenstate. The cross-sectional area of the initial tube is equal to the sum of the cross-sectional areas of all tubes, resulting in the Born rule and also in the conservation of probability in quantum mechanics.
[1455] vixra:2010.0108 [pdf]
Argumenting the Validity of Riemann Hypothesis
There are tens of self-proclaimed proofs of the Riemann Hypothesis and only 2 or 4 disproofs of it in arXiv. To this status quo I am adding my very short and clear results, even without explicit mention of prime numbers. One of my breakthroughs uses the peer-reviewed achievement of Dr. Solé and Dr. Zhu, published just 4 years ago in a serious mathematical journal, INTEGERS.
[1456] vixra:2010.0105 [pdf]
Fibonacci Series from Power Series
We show how every power series gives rise to a Fibonacci series and a companion series involving Lucas numbers. For illustrative purposes, Fibonacci series arising from trigonometric functions, inverse trigonometric functions, the gamma function and the digamma function are derived. Infinite series involving Fibonacci and Bernoulli numbers and Fibonacci and Euler numbers are also obtained.
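The mechanism presumably underlying such results is Binet's formula, F_n = (φ^n − ψ^n)/√5 and L_n = φ^n + ψ^n with φ = (1+√5)/2 and ψ = (1−√5)/2, which turns Σ a_n F_n into (f(φ) − f(ψ))/√5 for f(x) = Σ a_n x^n. A numerical sanity check of this identity for f(x) = e^x (our own illustration, not taken from the paper):

```python
import math

SQRT5 = math.sqrt(5.0)
PHI, PSI = (1 + SQRT5) / 2, (1 - SQRT5) / 2   # roots of x^2 = x + 1

def fib(n):
    """n-th Fibonacci number, F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    """n-th Lucas number, L_0 = 2, L_1 = 1."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# With f(x) = e^x:  sum F_n/n! = (e^phi - e^psi)/sqrt(5),
#                   sum L_n/n! =  e^phi + e^psi.
fib_series = sum(fib(n) / math.factorial(n) for n in range(40))
lucas_series = sum(lucas(n) / math.factorial(n) for n in range(40))
```

Forty terms suffice here because F_n/n! decays factorially, so the truncation error is far below the tolerance checked.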
[1457] vixra:2010.0101 [pdf]
The Two Relativistic Rydberg Formulas of Suto and Haug: Further Comments
In a recent paper, we [1] discussed that Suto [2] has pointed out an interesting relativistic extension of Rydberg’s formula. In that paper, we had slightly misunderstood Suto’s approach, something we will comment on further here. The relativistic Suto formula is actually derived from a theory where the standard relativistic momentum relation is changed. The relativistic Rydberg formula we presented and mistakenly thought was the same as Suto’s formula is, on the other hand, derived to be fully consistent with the standard relativistic momentum relation. Here we will point out the differences between the formulas and correct some errors in our previous paper. The paper should give deeper and better intuition about the Rydberg formula and what it represents.
[1458] vixra:2010.0067 [pdf]
Predictive Mathematical Models of the Covid-19 Pandemic in ODE/SDE Framework
This article proposes a viral diffusion model (like the Covid-19 pandemic) in the ordinary differential equations (ODE) and stochastic differential equations (SDE) framework. The classic models based on the logistic map are analyzed, and then a noise term is introduced that models the behavior of the so-called deniers. This model fairly faithfully reproduces the current Italian situation. We then move on to local analysis, arriving at an equation of continuity for the density of the number of infected in an assigned region. We therefore prove a theorem according to which classical logistics is the most catastrophic of predictions. In a realistic scenario, it is necessary to take into account the inevitable fluctuations in the aforementioned density. This implies a fragmentation of the initial cluster (generated by "patient zero") into N disjoint sub-clusters. For very large N, statistical analysis suggests the use of the two-point correlation function (and, more generally, n-point functions). In principle, an estimate of this function makes it possible to determine the evolution of the pandemic. The distribution of the sub-clusters could be fractal, exactly as happens for the distribution of galaxies starting from a homogeneous and isotropic primordial universe with random fluctuations in matter density. This is not surprising, since, due to scale invariance, fractals have a low "computational cost". The idea that pandemics are cyclical processes, that is, that they occur with a given periodicity, would therefore remain corroborated.
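The deterministic backbone of such models is the logistic ODE dx/dt = r x (1 − x/K); the "most catastrophic" baseline corresponds to integrating it without the noise term. A minimal forward-Euler sketch (our own illustration; parameter values are arbitrary):

```python
import math

def logistic_euler(r, K, x0, dt, steps):
    """Forward-Euler integration of dx/dt = r * x * (1 - x/K)."""
    x = x0
    path = [x]
    for _ in range(steps):
        x += dt * r * x * (1 - x / K)
        path.append(x)
    return path

# Illustrative parameters: growth rate r, carrying capacity K, seed x0.
r, K, x0, dt, steps = 0.2, 1e6, 10.0, 0.1, 2000
path = logistic_euler(r, K, x0, dt, steps)

# Closed-form solution for comparison: x(t) = K / (1 + (K/x0 - 1) e^{-rt}).
t_end = dt * steps
closed = K / (1 + (K / x0 - 1) * math.exp(-r * t_end))
```

With r·dt = 0.02 the scheme is stable and monotone: the trajectory rises along the familiar S-curve and saturates at the carrying capacity, matching the closed form at the final time.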
[1459] vixra:2010.0063 [pdf]
Vortex Cotes’s Spiral in an Extratropical Cyclone on the Southern Coast of Brazil
An “extratropical cyclone” is an atmospheric phenomenon that occurs when there is a very rapid drop in central atmospheric pressure. This phenomenon, with its characteristic of rapidly lowering the pressure in its interior, generates very intense winds and for this reason it is called an explosive cyclone, or “cyclone bomb” (CB). It was determined that the mathematical equation of the shape of the extratropical cyclone is a spiral called the "Cotes’s Spiral." In the case of the CB, which formed in the south of the Atlantic Ocean and passed through the south coast of Brazil in July 2020, it caused great damage in several cities in the State of Santa Catarina, Southern Brazil. With recorded gusts of 116 km/h, the atmospheric phenomenon hit southern Brazil on June 30, the beginning of winter 2020, causing destruction in its area of influence. In five hours the CB traveled a distance of 257.48 km (159.99 miles), at an average speed of 51.496 km/h (31.998 miles/h), 27.81 knots, and moved towards ENE, with a low pressure center of 986 mbar at 07:20 UTC, approximate location 35◦S 45◦W; 5 hours later, at 12:20 UTC, it had already grown and had a low pressure center of 972 mbar, approximate location 34◦S 42◦30’W. We also examined the temperatures of the clouds and the surface near the low pressure center of the CB. The temperature in the center of the CB is approximately 45◦C at 07:20 UTC, July 1, 2020. Five hours later, at 12:20 UTC, in the low pressure center of the CB, the temperature varies from 45◦C to -30◦C, indicating that the CB increases in size and further tapers its core, sucking a great amount of water vapor to high altitudes, where it condenses quickly.
[1460] vixra:2010.0060 [pdf]
Auto-Encoder Transposed Permutation Importance Outlier Detector
We propose an innovative, trivial yet effective unsupervised outlier detection algorithm called the Auto-Encoder Transposed Permutation Importance Outlier Detector (ATPI), which is based on the fusion of two machine learning concepts, autoencoders and permutation importance. As unsupervised anomaly detection is a subjective task, where the accuracy of results can vary with the demand, we believe this kind of novel framework has great potential in this field.
[1461] vixra:2010.0059 [pdf]
Structural Entropy of Daily Number of COVID-19 Related Fatalities
A recently proposed temporal-correlation-based network framework applied to financial markets, called Structural Entropy, has prompted us to utilize it as a means of analysis for COVID-19 fatalities across countries. Our observation that the volatility of fluctuations in the daily number of novel-coronavirus-related deaths resembles daily stock exchange returns suggests the applicability of this approach.
[1462] vixra:2010.0055 [pdf]
The Alternate Interpretation of the Quantum Theory Utilizing Indefinite Metric
In this paper, we propose an alternate interpretation of the quantum theory using objective physical reality that does not depend on the conventional probability interpretation. As typical physical phenomena for the probability interpretation, we consider the single-photon interference, single-electron interference, and EPR correlation experiments using photon polarization. For the calculation using the alternate interpretation, the minus sign derived from the covariant quantization of Maxwell's equations, which is associated with the scalar potential of the time-axis component of the four-vector, is taken as it is, as an inevitable request from the theory. In addition, the geometrical phase is incorporated, which can be recognized as a kind of scalar potential. We show that both the conventional and alternate interpretations derive identical calculation results for these single-photon, single-electron interference, and EPR correlation experiments. These alternate calculation processes describe that there is a kind of scalar potential in whole space-time, and when there is some geometrical arrangement in the space, the scalar potential forms an oscillatory field of the potential according to the arrangement. This reveals the objective physical reality that the single-photon, single-electron interference, and EPR correlation are generated by the movement of the photons and electrons in the oscillatory field with interference. In addition, we show that the oscillatory field formation of the scalar potential depending on the geometrical arrangement causes energy fluctuation in the space, which enables removal of the infinite zero-point energy and causes spontaneous symmetry breaking and the Casimir effect. By recognizing the electromagnetic field as a unitary U(1) gauge field and generalizing it to a special unitary SU(2) field, we also show that uncharged-particle (e.g. neutron) interferences are generated by the geometrical arrangement of the SU(2) gauge field or geometrical phase.
Furthermore, we discuss the origin of the scalar potential by distinguishing the space where the substance exists from the vacuum. Finally, by introducing the extended Lorentz gauge, we propose an alternate solution, without physical state and subsidiary condition, for the contradiction between the Lorentz gauge as an operator equation and the commutation relation in the covariant canonical quantization of Maxwell's equations with the conventional Lorentz gauge. This paper contains the compilation of the author's published papers [Morimoto1, Morimoto2] in addition to featured discussions such as the physical reality, uncharged-particle interference, geometrical phase, and the alternate proposal for the contradiction between the Lorentz gauge as an operator equation and the commutation relation.
[1463] vixra:2010.0050 [pdf]
Division by Zero Calculus and Laplace Transform
In this paper, we will discuss the Laplace transform from the viewpoint of the division by zero calculus with typical examples. The images of the Laplace transform are analytic functions on some half complex plane, and meanwhile, the division by zero calculus gives some values for isolated singular points of analytic functions. Then, how does the Laplace transform behave at the isolated singular points? For this basic question, we will be able to obtain a new concept for the Laplace integral.
[1464] vixra:2010.0035 [pdf]
On a Modular Property of Odd Numbers Under Tetration
The aim of this paper is to generalize problem 3 of the 2019 PROMYS exam, which asks one to show that the last 10 digits (in base 10) of t_n are the same for all n >= 10, where t_0 = 3 and t_(k+1) = 3^(t_k). The generalization shows that, given any positive odd integer p, t_m is congruent to t_n modulo (p^2+1)^n for all m >= n >= 1, where t_0 = p and t_(k+1) = p^(t_k).
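The stated congruence can be checked numerically for small cases. The sketch below (our own illustration; `phi` and `tower_mod` are illustrative helper names, not from the paper) evaluates the power tower modulo M with the reduction p^e ≡ p^(e mod φ(M) + φ(M)) (mod M), which is exact here because p is odd and hence coprime to every modulus arising in the recursion:

```python
def phi(m):
    """Euler's totient via trial-division factorization (fine for small m)."""
    result, x, d = m, m, 2
    while d * d <= x:
        if x % d == 0:
            while x % d == 0:
                x //= d
            result -= result // d
        d += 1
    if x > 1:
        result -= result // x
    return result

def tower_mod(p, height, m):
    """p^p^...^p (a tower of `height` copies of p) mod m."""
    if m == 1:
        return 0
    if height == 1:
        return p % m
    f = phi(m)
    # Reduce the exponent recursively; the +f offset is harmless when
    # gcd(p, m) = 1, which holds for odd p and the moduli used below.
    return pow(p, tower_mod(p, height - 1, f) + f, m)

# t_n in the abstract is a tower of height n+1, so the claim for the
# consecutive case m = n+1 reads: for M = (p^2+1)^n,
#   tower_mod(p, n+1, M) == tower_mod(p, n+2, M).
for p in (3, 5, 7):
    for n in (1, 2):
        M = (p * p + 1) ** n
        assert tower_mod(p, n + 1, M) == tower_mod(p, n + 2, M)
```

For p = 3 this reproduces the PROMYS behavior, since p^2 + 1 = 10 and the congruence fixes the last n base-10 digits.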
[1465] vixra:2010.0026 [pdf]
Philosophy of Mathematics and Division by Zero
From the viewpoint of the philosophy of mathematics, we would like to introduce our recent results on the division by zero that has a long and mysterious history.
[1466] vixra:2010.0020 [pdf]
Calculus of Astronomic Latitude and Longitude Determined by Astronomic Observations Called "Equal Height"
In this paper, we present the calculus of astronomic latitude and longitude determined by the astronomic observation method called "equal height". We also give the standard deviation of each unknown.
[1467] vixra:2009.0195 [pdf]
The Super-Generalised Fermat Equation Pa^x + Qb^y = Rc^z and Five Related Proofs
In this paper, we consider five proofs related to the super-generalised Fermat equation, Pa^x + Qb^y=Rc^z. All proofs depend on a new identity for a^x + b^y which can be expressed as a binomial sum to an indeterminate power, z. We begin with the Generalised Fermat Conjecture, for the case P,Q,R=1, also known as the Tijdeman-Zagier Conjecture and Beal Conjecture. We then show how the method applies to its famous corollary Fermat's Last Theorem, where x,y,z=n. We then return to the title equation, considered by Henri Darmon and Andrew Granville and extend the proof for the case P,Q,R>1 and x,y,z>2. Finally, we use the results to prove Catalan's Conjecture, and from this a weak proof that under certain conditions only one solution exists for equations of the form a^4-c^2=b^y.
[1468] vixra:2009.0194 [pdf]
RLC Circuits and Division by Zero Calculus
In this paper, we will discuss an RLC circuit with a missing capacitor from the viewpoint of the division by zero calculus, as a typical example.
[1469] vixra:2009.0173 [pdf]
Can a Video Game and Artificial Intelligence Assist in Selecting National Soccer Squads?
We have used the FIFA19 video game open dataset of soccer player attributes and the actual list of squads of national teams that participated in World Cup 2018, which almost coincides in time with the game's release date. Since numerous expert game developers have presumably spent a considerable amount of time assessing each individual player's attributes, we can develop and test data science and machine learning tools to select national soccer teams in an attempt to assist coaches. The work provides detailed explanatory data analysis and state-of-the-art machine learning and interpretability measures.
[1470] vixra:2009.0166 [pdf]
The Circle Embedding Method and Applications
In this paper we introduce and develop the circle embedding method. This method hinges essentially on a combinatorial structure which we choose to call circles of partition. We provide applications in the context of problems relating to deciding on the feasibility of partitioning numbers into certain classes of integers. In particular, our method allows us to partition any sufficiently large number $n\in\mathbb{N}$ into any set $\mathbb{H}$ with natural density greater than $\frac{1}{2}$. This possibility could herald unprecedented progress on classes of problems of a similar flavour. The paper finishes by giving a partial proof of the binary Goldbach conjecture.
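The density-greater-than-one-half claim has a simple pigeonhole intuition: if more than half of the integers in [1, n−1] lie in H, then some pair (a, n−a) must have both members in H. A toy feasibility check of such partitions (our own illustration; the paper's circles-of-partition machinery is not reproduced here), with H the positive integers not divisible by 3, a set of natural density 2/3 > 1/2:

```python
def partitions_into(n, in_H):
    """Unordered pairs (a, n - a) with both parts in H."""
    return [(a, n - a) for a in range(1, n // 2 + 1)
            if in_H(a) and in_H(n - a)]

# H = positive integers not divisible by 3 (density 2/3 > 1/2).
in_H = lambda k: k % 3 != 0

# Every n >= 2 admits such a partition: by residues mod 3,
# n ≡ 0 uses 1+2, n ≡ 1 uses 2+2, and n ≡ 2 uses 1+1.
ok = all(partitions_into(n, in_H) for n in range(2, 500))
```

The residue argument in the comment shows why this particular H works for every n ≥ 2, not merely for sufficiently large n as the general theorem requires.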
[1471] vixra:2009.0165 [pdf]
Combining Conflicting Evidences Based on Pearson Correlation Coefficient and Weighted Graph
Dempster-Shafer evidence theory (evidence theory) has been widely used for its strong performance in dealing with uncertainty. Based on evidence theory, researchers have presented different methods to combine evidences. Dempster's rule is the most well-known combination method, which has been applied in many fields. However, Dempster's rule may yield counter-intuitive results when evidences are in high conflict. To improve the performance of combining conflicting evidences, in this paper, we present a new evidence combination method based on the Pearson correlation coefficient and a weighted graph. The proposed method can correctly identify the target with high accuracy. Besides, the proposed method has better convergence performance compared with other combination methods. In addition, the weighted graph generated by the proposed method can directly represent the relation of different evidences, which can help researchers to determine the reliability of every evidence. Moreover, an experiment is expounded to show the efficiency of the proposed method, and the results are analyzed and discussed.
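The abstract does not spell out the algorithm, but a common pattern in this literature (Murphy-style weighted averaging) matches the description: weight each body of evidence by its mean Pearson correlation with the others, average the mass functions with those weights, then fuse the average with itself N−1 times using Dempster's rule. The sketch below is our own simplified reading, not the paper's exact method (the weighted-graph construction is omitted, and correlations are computed on singleton masses only):

```python
from functools import reduce

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def dempster(m1, m2):
    """Dempster's rule for mass functions keyed by frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

def combine_weighted(evidences):
    """Pearson-weighted average of the masses, then N-1 Dempster fusions."""
    frame = sorted({e for m in evidences for A in m for e in A})
    singles = [frozenset({e}) for e in frame]
    vecs = [[m.get(s, 0.0) for s in singles] for m in evidences]
    raw = [max(sum(pearson(vi, vj) for j, vj in enumerate(vecs) if j != i)
               / (len(vecs) - 1), 0.0)
           for i, vi in enumerate(vecs)]
    total = sum(raw)
    w = [r / total for r in raw] if total else [1.0 / len(vecs)] * len(vecs)
    avg = {}
    for wi, m in zip(w, evidences):
        for A, v in m.items():
            avg[A] = avg.get(A, 0.0) + wi * v
    return reduce(dempster, [avg] * len(evidences))

# Two evidences support hypothesis A; a third conflicts, backing B.
A, B, C = frozenset('A'), frozenset('B'), frozenset('C')
fused = combine_weighted([{A: 0.9, B: 0.05, C: 0.05},
                          {A: 0.8, B: 0.1, C: 0.1},
                          {A: 0.0, B: 0.9, C: 0.1}])
```

The conflicting third evidence correlates negatively with the other two, so its weight is clipped to zero and the fused mass concentrates on A, which is the behavior such weighting schemes aim for under Zadeh-style conflict.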
[1472] vixra:2009.0158 [pdf]
Names of Minor Planets and the Graphical law
We study the Dictionary of Minor Planet Names of L. D. Schmadel. We draw the natural logarithm of the number of names, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the names of the minor planets discovered up to 1900 and 1910 can be characterised by BP(4,$\beta H=0$) and the minor planets discovered up to 1920, 1930 and 1940 can be characterised by BP(4,$\beta H=0.02$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0.02$. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature, H is external magnetic field and $k_{B}$ is the Boltzmann constant.
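The plotted quantities can be produced mechanically: count entries by initial letter, rank the letters by frequency, and take normalised logarithms of both axes. A small sketch of this data-preparation step (our own illustration; the curve fitting to BP(4, $\beta H$) is not reproduced):

```python
import math
from collections import Counter

def graphical_law_points(names):
    """Normalised (ln rank, ln count) pairs for initial letters:
    rank letters by frequency and divide each axis by its maximum
    before taking logarithms, as in the 'graphical law' analyses."""
    counts = Counter(name[0].upper() for name in names if name)
    ordered = sorted(counts.values(), reverse=True)
    kmax, fmax = len(ordered), ordered[0]
    return [(math.log(k / kmax), math.log(f / fmax))
            for k, f in enumerate(ordered, start=1)]

# Demo with the first ten minor-planet names.
minor_planets = ["Ceres", "Pallas", "Juno", "Vesta", "Astraea",
                 "Hebe", "Iris", "Flora", "Metis", "Hygiea"]
pts = graphical_law_points(minor_planets)
```

Both coordinates are non-positive by construction, with the most frequent letter sitting at y = 0; the fitted magnetisation curve is then compared against these points.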
[1473] vixra:2009.0154 [pdf]
Exceptional Jordan Matrix Models, Octonionic $p$-Branes and Star Product Deformations
A brief review of the essentials behind the construction of a Chern-Simons-like brane action from the Large $N$ limit of Exceptional Jordan Matrix Models paves the way to the construction of actions for membranes and $p$-branes moving in octonionic-spacetime backgrounds endowed with octonionic-valued metrics. The main result is that the action of a membrane moving in spacetime backgrounds endowed with an octonionic-valued metric is not invariant under the usual diffeomorphisms of its world volume coordinates $ \sigma^a \rightarrow \sigma'^a (\sigma^b) $, but instead it is invariant under the rigid $ E_{ 6 ( - 26) } $ transformations which preserve the volume (cubic) form. The star-product deformations of octonionic $p$-branes follow. In particular, we focus on the octonionic membrane along with the phase space quantization methods developed by \cite{Szabo} within the context of Nonassociative Quantum Mechanics. We finalize with some concluding remarks on Double and Exceptional Field theories, Nonassociative Gravity and $A_\infty, L_\infty$ algebras.
[1474] vixra:2009.0152 [pdf]
Motzkin Islands: a 3-dimensional Embedding of Motzkin Paths
A Motzkin Path is a walk left-to-right starting at the horizontal axis, consisting of up, down or horizontal steps, never descending below the horizontal axis, and finishing at the horizontal axis. Interpret Motzkin Paths as vertical geologic cuts through mountain ranges with limited slopes. The natural embedding of these paths defines Motzkin Islands as sets of graphs labeled on vertices by non-negative integers (altitudes), a graph cycle defining a shoreline at zero altitude, and altitude differences along edges never larger than one. We address some of these islands with simple shapes on triangular and quadratic meshes.
[1475] vixra:2009.0149 [pdf]
Representations of the Division by Zero Calculus by Means of Mean Values
In this paper, we will give simple and pleasant introductions of the division by zero calculus by means of mean values that give an essence of the division by zero. In particular, we will introduce a new mean value for real valued functions in connection with the Sato hyperfunction theory.
[1476] vixra:2009.0144 [pdf]
Cotes’s Spiral Vortex in an Extratropical Cyclone Bomb in the South Atlantic Ocean
The characteristic shape of hurricanes, cyclones, and typhoons is a spiral. There are several types of spirals, and determining the characteristic equation of the spiral that the “cyclone bomb” (CB) fits into is the goal of this work. In mathematics, a spiral is a curve which emanates from a point, moving farther away as it revolves around the point. An “explosive extratropical cyclone” is an atmospheric phenomenon that occurs when there is a very rapid drop in central atmospheric pressure. This phenomenon, with its characteristic of rapidly lowering the pressure in its interior, generates very intense winds and for this reason it is called an explosive cyclone, or bomb cyclone. It was determined that the mathematical equation of the shape of the extratropical cyclone is a spiral called the “Cotes’s Spiral.” In the case of the CB, which formed in the south of the Atlantic Ocean and passed through the south coast of Brazil in July 2020, it caused great damage in several cities in the State of Santa Catarina. With recorded gusts of 116 km/h, the atmospheric phenomenon hit southern Brazil on June 30, the beginning of winter 2020, causing destruction in its area of influence. In five hours the CB traveled a distance of 257.48 km (159.99 miles), at an average speed of 51.496 km/h (31.998 miles/h), 27.81 knots, and moved towards ENE, with a low pressure center of 986 mbar at 07:20 UTC, approximate location 35◦S 45◦W; 5 hours later, at 12:20 UTC, it had already grown and had a low pressure center of 972 mbar, approximate location 34◦S 42◦30’W.
[1477] vixra:2009.0138 [pdf]
RGBSticks : A New Deep Learning Based Framework for Stock Market Analysis and Prediction
We present a novel, intuitive graphical representation for daily stock prices, which we refer to as RGBSticks, a variation of classical candlesticks. This representation allows the usage of complex deep learning based techniques, such as deep convolutional autoencoders and deep convolutional generative adversarial networks, to produce insightful visualizations of the market's past and future states.
[1478] vixra:2009.0135 [pdf]
A Joint Introduction to Gaussian Processes and Relevance Vector Machines with Connections to Kalman Filtering and Other Kernel Smoothers
The expressive power of Bayesian kernel-based methods has led them to become an important tool across many different facets of artificial intelligence, and useful to a plethora of modern application domains, providing both power and interpretability via uncertainty analysis. This article introduces and discusses two methods which straddle the areas of probabilistic Bayesian schemes and kernel methods for regression: Gaussian Processes and Relevance Vector Machines. Our focus is on developing a common framework with which to view these methods, via an intermediate method, a probabilistic version of the well-known kernel ridge regression, and on drawing connections among them, via dual formulations, and discussion of their application in the context of major tasks: regression, smoothing, interpolation, and filtering. Overall, we provide understanding of the mathematical concepts behind these models, and we summarize and discuss in depth different interpretations and highlight the relationship to other methods, such as linear kernel smoothers, Kalman filtering and Fourier approximations. Throughout, we provide numerous figures to promote understanding, and we make numerous recommendations to practitioners. Benefits and drawbacks of the different techniques are highlighted. To our knowledge, this is the most in-depth study of its kind to date focused on these two methods, and will be relevant to theoretical understanding and practitioners throughout the domains of data science, signal processing, machine learning, and artificial intelligence in general.
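The shared algebraic core of these methods is compact: the Gaussian Process posterior mean with noise variance σ² coincides with kernel ridge regression with ridge parameter σ². A minimal numpy sketch of that core (our own illustration; function names and parameter values are ours):

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel matrix k(a, b) = exp(-|a-b|^2 / (2 ls^2))."""
    d = A[:, None] - B[None, :]
    return np.exp(-d ** 2 / (2 * ls ** 2))

def krr_fit_predict(x, y, xq, noise=1e-2, ls=0.5):
    """Kernel ridge regression / GP posterior mean:
    f(xq) = k(xq, x) (K + noise * I)^{-1} y."""
    K = rbf(x, x, ls) + noise * np.eye(len(x))
    alpha = np.linalg.solve(K, y)
    return rbf(xq, x, ls) @ alpha

# Demo: recover sin(x) from noisy samples; query away from the edges.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(x) + 0.01 * rng.normal(size=40)
xq = np.linspace(0.2, 2 * np.pi - 0.2, 100)
pred = krr_fit_predict(x, y, xq)
```

The same solve of (K + σ²I)α = y appears in both views; the GP additionally carries a posterior covariance, which is where the two frameworks diverge.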
[1479] vixra:2009.0119 [pdf]
Compact $N$-quark Hadron Mass Dependence on $N^4$: A Classical Field Picture
We give a hypothesis on the mass spectrum of compact $N$-quark hadron states in a classical field picture, which indicates that there would be a mass dependence on about $N^4$. We call our model the “bag-tube oscillation model”, which can be seen as a kind of combination of the quark-bag model and the flux-tube model. The large decay widths due to large masses might be the reason why compact $N$-quark hadrons have not been observed so far.
[1480] vixra:2009.0117 [pdf]
A Review of Ages in Stellar Metamorphosis
Stellar Metamorphosis is the name given to a proposed alternative hypothesis for the origin and evolution of stars, planets, and all other celestial bodies. One of the most basic predictions of Stellar Metamorphosis is for the ages of celestial bodies. Since Stellar Metamorphosis rejects part or even all of established knowledge about astronomical bodies as erroneous, this review focuses on internal checks of the hypothesis only. A number of internal inconsistencies are found. Contradictions in age results of up to 6,140% are found in Stellar Metamorphosis papers. Contradictions in Stellar Metamorphosis age measurement methods are also found, averaging 26,000% across all methods and surveyed objects.
[1481] vixra:2009.0113 [pdf]
A New Robust Theory of Everything with Expected 10-100 ppb Anomalies in Some Gravitational Experiments Depending on the Neutron-Proton Ratio
We present a new robust theory of everything with expected 10-100 ppb anomalies in some gravitational experiments, depending on the total ghost hypercharges of the proton-electron pair and on the total ghost hypercharges of the neutron. Here, "robust" refers to the use of the concept of $n$-irreducible sequents and $n$-irreducible numbers to constrain the possible theories of everything to a single one with a maximized $n$-irreducible number. We try to write the definitions of the theory of everything in the most rigorous mathematical way and in a way compatible with every known experiment. A kind of Nobel Prize experiment is proposed for detecting an anomaly between the orbital parameters of identical metallic balls orbiting the Earth with different neutron-proton ratios. An anomaly between $10^{-8}$ and $10^{-12}$ is expected, and the current precision is $10^{-8.7}$.
[1482] vixra:2009.0109 [pdf]
Analytical Psychology, an Information Theoretic Approach
In the article "The introduction of the Einstein Model of a Solid, to Analytical Psychology", I documented that from an economic point of view, the psyche as defined in Analytical Psychology (AP) can be considered an abstract Einstein Solid (ES). In this article, I examine the case of an interacting ES (grand canonical ensemble) and the results of this interaction on its information content. I consider fluctuations in the number of the quantum harmonic oscillators (or Structures of the psyche in AP) and energy quanta (or Values in AP). For fluctuations, the microcanonical and grand canonical ensembles are considered equivalent. This means that we are allowed to examine the psyche both in isolation (as originally identified with the ES) and also as an open system accepting fluctuations in its core elements.
[1483] vixra:2009.0100 [pdf]
Mursi-English-Amharic Dictionary and the Graphical law
We study Mursi-English-Amharic Dictionary. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0.02$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0.02$. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature, H is external magnetic field and $k_{B}$ is the Boltzmann constant.
[1484] vixra:2009.0095 [pdf]
Uncertainty Principle and Superradiance
Hasegawa Yuji of the Vienna University of Technology, Masanao Ozawa of Nagoya University, and other scholars published empirical results against Heisenberg's uncertainty principle on January 15, 2012. They obtained a measurement result with a smaller error than the limit advocated by the Heisenberg uncertainty principle. This article follows the method I used to study superradiance and connects the uncertainty principle with the superradiance effect. I find that under the superradiance effect, the measurement limit of the uncertainty principle can be smaller.
[1485] vixra:2009.0068 [pdf]
New Formula for Prime Counting Function
This paper presents two functions, one for the prime counting function and one for its inverse (the function that returns the nth prime number as output), with high accuracy and good approximation, which, due to their significant features, are distinguished from other similar functions presented thus far. The presented function for the prime counting function is denoted by $\pi_m(x)$ and the presented function for the nth prime is denoted by $P_m(x)$ in this article.
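The abstract does not reproduce the formula for $\pi_m(x)$, so as a point of reference only, here is a short Python sketch computing the exact prime counting function with a sieve, together with the classical $x/\ln x$ approximation from the prime number theorem that any improved formula must beat.

```python
import math

def prime_pi(x):
    """Exact prime counting function pi(x) via the sieve of Eratosthenes."""
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            # Mark all multiples of p starting at p*p as composite.
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

def pnt_estimate(x):
    """First-order approximation x / ln(x) from the prime number theorem."""
    return x / math.log(x)
```

For example, `prime_pi(10**6)` returns 78498, while `pnt_estimate(10**6)` is about 72382, an error of roughly 8%; higher-accuracy approximations such as the one the paper proposes aim to shrink this gap.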
[1486] vixra:2009.0061 [pdf]
Transparency and Granularity in the SP Theory of Intelligence and Its Realisation in the SP Computer Model
This chapter describes how the SP System, meaning the SP Theory of Intelligence and its realisation as the SP Computer Model, may promote transparency and granularity in AI, and some other areas of application. The chapter describes how transparency in the workings and output of the SP Computer Model may be achieved via three routes: 1) the program provides a very full audit trail for such processes as recognition, reasoning, analysis of language, and so on, and there is also an explicit audit trail for the unsupervised learning of new knowledge; 2) knowledge from the system is likely to be granular and easy for people to understand; and 3) there are seven principles for the organisation of knowledge which are central in the workings of the SP System and also very familiar to people (e.g. chunking-with-codes, part-whole hierarchies, and class-inclusion hierarchies), and that kind of familiarity in the way knowledge is structured by the system is likely to be important in the interpretability, explainability, and transparency of that knowledge. Examples from the SP Computer Model are shown throughout the chapter.
[1487] vixra:2009.0058 [pdf]
The Advance of Planets’ Perihelion in Newtonian Theory Plus Gravitational and Rotational Time Dilation
It is shown through three different approaches that, contrary to a longstanding conviction more than 160 years old, the orbit of Mercury behaves as required by Newton's equations with very high precision if one correctly analyzes the situation in the framework of the two-body problem without neglecting the mass of Mercury. General relativity remains more precise than Newtonian physics, but the results in this paper show that the Newtonian framework is more powerful than researchers and astronomers have thought till now, at least for the case of Mercury. The Newtonian formula for the advance of planets' perihelia breaks down for the other planets. The predicted Newtonian result is indeed too strong for Venus and Earth. Therefore, it is also shown that corrections due to gravitational and rotational time dilation, in an intermediate framework which analyzes gravity between Newton and Einstein, solve the problem. By adding such corrections, a result consistent with that of general relativity is indeed obtained. Thus, the most important results of this paper are two: i) it is not correct that Newtonian theory cannot predict the anomalous rate of precession of the perihelion of planets' orbits; the real problem is instead that a pure Newtonian prediction is too strong; ii) perihelion precession can be achieved with the same precision as general relativity by extending Newtonian gravity through the inclusion of gravitational and rotational time dilation effects. This second result is in agreement with a couple of recent and interesting papers by Hansen, Hartong and Obers. Differently from such papers, in the present work the importance of rotational time dilation is also highlighted.
[1488] vixra:2009.0056 [pdf]
Garo to English School Dictionary and the Graphical law
We study a Garo to English School Dictionary. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0.02$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0.02$. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature, H is external magnetic field and $k_{B}$ is the Boltzmann constant.
[1489] vixra:2009.0052 [pdf]
A Mystery Circle Arising from Laurent Expansion
For a parametric equation of circles touching two externally touching circles, we consider its Laurent expansion around one of the singular points. Then we can find an equation of a notable circle and the equations of the external common tangents of the two circles from the coefficient of the Laurent expansion. However it is a mystery why we can find such things.
[1490] vixra:2009.0051 [pdf]
Mirror Images and Division by Zero Calculus
Very classical results on the mirror images of the centers of circles and balls should be the centers themselves, as typical results of the division by zero calculus. Because of their importance, we discuss them in a self-contained manner.
[1491] vixra:2009.0045 [pdf]
Quantum Permutations and Quantum Reflections
The permutation group $S_N$ has a quantum analogue $S_N^+$, which is infinite for $N\geq4$. We review the known facts regarding $S_N^+$, and its versions $S_F^+$, with $F$ being a finite quantum space. We then discuss the structure of the closed subgroups $G\subset S_N^+$ and $G\subset S_F^+$, with particular attention to the quantum reflection groups.
[1492] vixra:2009.0039 [pdf]
On the Number of Intersections of Tubes
In this article we will always assume that the number of $\delta$-tubes is $N = \delta^{1-n}.$ Moreover, we will assume that if any two $\delta$-tubes intersect, then they intersect in the unit ball. We will show that the number of their intersections of order $\mu$ is bounded by $\frac{|B(0,1+\delta)|}{|B(0,1)|}(1+ 2\delta)2^{n-1}|B_{n-1}(0,1)|\frac{N^{n/(n-1)}}{\mu}$. After making a dyadic decomposition and summing the orders together we will find that the number of tube intersections of $N$ tubes is bounded by $\frac{|B(0,1+N^{-1/(n-1)})|}{|B(0,1)|}2^{n-1}(1+ 2N^{-1/(n-1)})|B_{n-1}(0,1)|N^{n/(n-1)}.$ Moreover, we will prove a generalized lemma of C{\'o}rdoba, and we will prove that Kakeya sets have dimension greater than $n-1.$
[1493] vixra:2009.0018 [pdf]
More Problems in AI Research and How the SP System May Help to Solve Them (Technical Report)
This technical report, an adjunct to the paper "Problems in AI research ...", describes some problems in AI research and how the {\em SP System} (meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model") may help to solve them. It also contains a fairly detailed outline of the SP System. Most of the problems considered in this report are described by leading researchers in AI in interviews with science writer Martin Ford, and presented in his book "Architects of Intelligence". Problems and their potential solutions that are described in this report are: the need for more emphasis in research on the use of top-down strategies is met by the way SP has been developed entirely within a top-down framework; the risk of accidents with self-driving vehicles may be minimised via the theory of generalisation within the SP System; the need for strong compositionality in the structure of knowledge is met by processes within the SP Computer Model for unsupervised learning and the organisation of knowledge; although commonsense reasoning and commonsense knowledge are challenges for all theories of AI, the SP System has some promising features; the SP programme of research is one of very few working to establish the key importance of information compression in AI research; likewise, the SP programme of research is one of relatively few AI-related research programmes attaching much importance to the biological foundations of intelligence; the SP System lends weight to 'localist' (as compared with 'distributed') views of how knowledge is stored in the brain; compared with deep neural networks, the SP System offers much more scope for adaptation and the representation of knowledge; reasons are given for why the important subjects of motivations and emotions have not so far been considered in the SP programme of research.
Evidence in this report, and "Problems in AI research ...", suggests that ***the SP System provides a relatively promising foundation for the development of artificial general intelligence***.
[1494] vixra:2009.0014 [pdf]
Visayan-English Dictionary and the Graphical law
We study the Visayan-English Dictionary(Kapul\'{u}$\bar{n}$gan Binisay\'{a}-Ining\'{l}s). We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0.02$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0.02$. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature, H is external magnetic field and $k_{B}$ is the Boltzmann constant.
[1495] vixra:2009.0013 [pdf]
New Principles of Differential Equations II
This is the second part of the total paper. Three kinds of Z Transformations are used to obtain many laws for the general solutions of mth-order linear partial differential equations with n variables in the present thesis. Some general solutions of first-order linear partial differential equations, which cannot be obtained by using the characteristic equation method, can be solved by the Z Transformations. By comparison, we find that the general solutions of some first-order partial differential equations obtained by the characteristic equation method are not complete.
[1496] vixra:2009.0012 [pdf]
Problems in AI Research and How the SP System May Help to Solve Them
This paper describes problems in AI research and how the SP System (described in sources referenced in the paper) may help to solve them. Most of the problems considered in the paper are described by leading researchers in AI in interviews with science writer Martin Ford, and reported by him in his book "Architects of Intelligence". These problems, each with potential solutions via SP, are: the divide between symbolic and non-symbolic kinds of knowledge and processing, and how the SP System may bridge the divide; the tendency of deep neural networks (DNNs) to make large and unexpected errors in recognition, something that does not happen with the SP System; in most AI research, unsupervised learning is regarded as a challenge, but unsupervised learning is central in how SP learns; in other AI research, generalisation, with under- and over-generalisation is seen as a problem, but it is a problem that has a coherent solution in the SP System; learning usable knowledge from a single exposure or experience is widely regarded as a problem, but it is a problem that is already solved in the SP System; transfer learning (incorporating old knowledge in new) is seen as an unsolved problem, but it is bedrock in how the SP System learns; there is clear potential for the SP System to solve problems that are prevalent in most AI systems: learning that is slow and greedy for large volumes of data and large computational resources; the SP System provides solutions to problems of transparency in DNNs, where it is difficult to interpret stored knowledge and how it is processed; although there have been successes with DNNs in the processing of natural language, the SP System has strengths in the representation and processing of natural languages which appear to be more in accord with how people process natural language, and these strengths in the SP System are well-integrated with other strengths of the system in aspects of intelligence; by contrast with DNNs, SP has strengths and 
potential in human-like probabilistic reasoning, and these are well integrated with strengths in other aspects of intelligence; unlike most DNNs, the SP System eliminates the problem of catastrophic forgetting (where new learning wipes out old learning); the SP System provides much of the generality across several aspects of AI which is missing from much research in AI. The strengths and potential of the SP System in comparison with alternatives suggest that {\em the SP System provides a relatively promising foundation for the development of artificial general intelligence}.
[1497] vixra:2009.0010 [pdf]
Evidence of a Neutral Potential Surrounding the Earth
We examine the associated wave of the electron, and we highlight the problem with its relative velocity. The velocity of an electron is always measured relative to the laboratory, which gives the correct behaviour of the electron concerning the law of Louis de Broglie. But, to agree with this law, there must exist some interaction between the electron and the laboratory which allows the electron to modify its characteristics. The electron must therefore interact with a medium connected to the laboratory. Such a medium must be associated with the earth, following it in its path through the Universe. {\bf It follows that the relativity theories of A. Einstein are wrong.}
[1498] vixra:2009.0005 [pdf]
Liouville-Type Theorems Outside Small Sets
We prove that convex functions of finite order on the real line and subharmonic functions of finite order on finite dimensional real space, bounded from above outside of some set of zero relative Lebesgue density, are bounded from above everywhere. It follows that subharmonic functions of finite order on the complex plane, entire and plurisubharmonic functions of finite order, and convex or harmonic functions of finite order bounded from above outside some set of zero relative Lebesgue density are constant.
[1499] vixra:2008.0224 [pdf]
Complexity Arising from Life at the Edge of Chaos-Fractal: Riemann Hypothesis, Polignac's and Twin Prime Conjectures
We commence this paper by outlining the manifested Chaos and Fractal phenomena derived from our concocted idiom "Complexity arising from Life at the Edge of Chaos-Fractal". COVID-19 originated from Wuhan, China in December 2019. Declared by World Health Organization on March 11, 2020; COVID-19 pandemic has resulted in unprecedented negative global impacts on health and economy. With China and US playing crucial roles, international cooperation is required to combat this "Incompletely Predictable" pandemic. We mathematically model COVID-19 and solve [unconnected] below-mentioned open problems in Number theory using our versatile Fic-Fac Ratio. Computed as Information-based complexity, our innovative Information-complexity conservation constitutes a unique all-purpose analytic tool associated with Mathematics for Incompletely Predictable problems. These problems are literally "complex systems" containing well-defined Incompletely Predictable entities such as nontrivial zeros and two types of Gram points in Riemann zeta function (or its proxy Dirichlet eta function) together with prime and composite numbers from Sieve of Eratosthenes. Correct and complete mathematical arguments for first key step of converting this function into its continuous format version, and second key step of using our unique Dimension (2x-N) system instead of this Sieve result in primary spin-offs from first key step consisting of providing proof for Riemann hypothesis (and explaining closely related two types of Gram points), and second key step consisting of providing proofs for Polignac's and Twin prime conjectures.
[1500] vixra:2008.0216 [pdf]
The Quantum Pythagorean Fuzzy Evidence Theory Based on Negation in Quantum of Mass Function
Dempster-Shafer (D-S) evidence theory is an effective methodology for handling unknown and imprecise information, since it can assign probability to the power set. Quantum mass function (QM) is an extension of D-S evidence theory which combines quantum theory with D-S evidence theory and extends D-S evidence theory to the unit circle in the complex plane. It can be seen that QM has greater uncertainty in the framework of the complex plane. Recently, negation has been getting more and more attention, since it can analyze information from another point of view. Hence, this paper first proposes the negation of QM by using the subtraction of vectors in the unit circle, which degenerates into the negation proposed by Yager in standard probability theory and the negation proposed by Yin et al. in D-S evidence theory. The paper then proposes quantum Pythagorean fuzzy evidence theory (QPFET), which is the first work to consider QPFET from the point of view of negation.
[1501] vixra:2008.0199 [pdf]
Physics and Division by Zero Calculus
In order to show some power of the division by zero calculus we will give several simple applications to physics. Recall that Oliver Heaviside: {\it Mathematics is an experimental science, and definitions do not come first, but later on.}
[1502] vixra:2008.0188 [pdf]
Haga's Theorems in Paper Folding and Related Theorems in Wasan Geometry Part 2
We generalize problems in Wasan geometry which involve no folded figures but are related to Haga's fold in origamics. Using the tangent circles appearing in those problems with division by zero, we give a parametric representation of the generalized Haga's fold given in the first of these two-part papers.
[1503] vixra:2008.0184 [pdf]
The Haug Quantum Wave Equation Combined with Pauli Operator
In this short note we show the Haug-1 quantum wave equation in a form where we incorporate spin following the Pauli route. This leads to an equation similar to the Schrödinger-Pauli equation, but our equation is relativistic while the Schrödinger-Pauli equation is non-relativistic; our equation is also simpler in that, for example, time and space enter at the same order (first derivatives), while in the Schrödinger-Pauli equation time enters at first order and the spatial-spin dimensions at second order. Comments are welcome.
[1504] vixra:2008.0182 [pdf]
Une Note Sur La Conjecture ABC (A Note on the ABC Conjecture)
It is a paper by Gerhard Frey published in 2012: an introduction to the $abc$ conjecture and its subtleties and consequences for number theory. This is a digital version of the original paper.
[1505] vixra:2008.0177 [pdf]
A Conjecture On Some ds Periods On The Complex Plane
Here we propose a simple but very difficult open question, like Fermat's problem, on some $ds$ periods on the complex plane. This very elementary problem will create a new field on the complex plane.
[1506] vixra:2008.0163 [pdf]
Dynamics of Feed Forward Induced Interference Training
Perceptron model updating with back propagation has become routine in deep learning. A continuous feed-forward procedure is required in order for backward propagation to function properly. Doubting the underlying physical interpretation of transformer-based models such as GPT brought about by the routine explanation, a new method of training is proposed in order to keep the physics self-consistent. The GPT model is treated as a space-time diagram, and the worldlines of signals are traced, identifying the possible paths of signals in order for a self-attention event to occur. With a slight modification, self-attention can be viewed as an Ising-model interaction, which allows the goal to be formulated as the energy of the system. The target is treated as an external magnetic field inducing signals modeled as magnetic dipoles. A probability network is designed to pilot input signals travelling at constant speed through different routes. A rule for updating the probabilities is designed so as to form constructive interference at target locations, so that the instantaneous energy can be maximised. An experiment is conducted on a 4-class classification problem extracted from MNIST. The results exhibit interesting but expected behaviours which do not exist in a back-propagation-updated network, but look more like learning in a real human, especially in the few-shot scenario.
[1507] vixra:2008.0158 [pdf]
On Hydrostatic Approximation by R.I. Nigmatulin and L.F. Richardson's Equation
The theorem given in 'Equations of Hydro- and Thermodynamics of the Atmosphere when Inertial Forces Are Small in Comparison with Gravity' (2018) is wrong, since the passage to the limit from the system of Navier-Stokes equations to the system of equations of the (quasi-)hydrostatic approximation, as the vertical acceleration approaches zero, does not exist. The main consequence is that the scales given in the paper are not suitable for application of the hydrostatic approximation. The correct asymptotics should be given by the traditional hydrostatic parameter H/L, where H and L are the vertical and horizontal scales of motion. The scale analysis of L.F. Richardson's equation for the vertical velocity in the hydrostatic approximation is also incorrect. Keywords: hydrostatic approximation, Richardson's equation, quasi-static approximation, synoptic scales, mesoscales, micrometeorology, conservation equations, inertial forces.
[1508] vixra:2008.0137 [pdf]
Wholistic Mechanics: Classical Mechanics Extended from Light-Speed to Planck's Constant.
Against Bell (1964c), and via classical analysis (including the principle of relativity): we make quantum correlations intelligible by completing the quantum mechanical account in a classical way. We find that Bell neglects a logical correlation that refutes his theorem, while a commutation relation and some easy algebra refute his inequality. So: for Einstein -- and against Bell's naive local realism; with certainty -- relativistic causality prevails. In this way we arrive at wholistic mechanics (WM): i.e., classical mechanics extended from light-speed c to Planck's constant h. Importantly, for STEM students and teachers, and against popular opinion-pieces about quantum nonlocality, our results require no knowledge of quantum theory: for the quantum is here, to be found. Let's see.
[1509] vixra:2008.0129 [pdf]
A Gift Of God And Isaac Newton
Isaac Newton was a genius. His greatest achievement was undoubtedly his discovery of the laws of motion and the universal law of gravitation. They are as if "gifts of God" concerning the laws of the natural world, laws that are universal and valid for all times. Any attempt to provide alternatives to such laws could only end in failure.
[1510] vixra:2008.0122 [pdf]
Representing Sets with Minimal Space: Jamie’s Trit
The theoretical minimum storage needed to represent a set of size N drawn from a universe of size M is about N * (log_2(M/N) + 1.4427) bits, i.e. log_2(e) extra bits per element (assuming neither N nor M/N is very small). I review the technique of `quotienting' which is used to approach this minimum, and look at the actual memory costs achieved by practical designs. Instead of somehow implementing and exploiting 1.4427 bits of steering information, most practical schemes use two bits (or more). In the conclusion I mention a scheme to reduce the overhead cost from two bits to a single trit.
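The quoted minimum is log_2(C(M, N)) bits in total, which for N ≪ M works out to roughly log_2(M/N) + log_2(e) bits per element. A small Python check of that figure (the quotienting scheme itself is not reproduced here):

```python
import math

def bits_per_element(m, n):
    """Exact information-theoretic cost, in bits per element, of storing
    an n-element subset of an m-element universe: log2(C(m, n)) / n."""
    return math.log2(math.comb(m, n)) / n

def stirling_estimate(m, n):
    """The approximation quoted in the abstract: log2(m/n) + log2(e)."""
    return math.log2(m / n) + math.log2(math.e)
```

For M = 2^20 and N = 2^10, both the exact cost and the estimate come out near 11.44 bits per element, so a scheme spending a full two bits of steering information on top of log_2(M/N) is visibly wasteful, and a trit (log_2(3) ≈ 1.585 bits) gets much closer to the ≈1.44-bit target.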
[1511] vixra:2008.0118 [pdf]
The Need for Absolute Time - The World of Special Theory of Relativity -
Since the beginning of human history, the way of thinking about time has changed with the times. Various concepts of time existed for a time according to religions and philosophies, but in the late 17th century, Sir Isaac Newton's (1643-1727) concept of "absolute space and time" spread. Later, special relativity, proposed by Albert Einstein (1879-1955) at the beginning of the 20th century, made it clear that this was not the case. However, if one further step is added to the paradox that emerges in special relativity, we get a result that indicates the existence of absolute time. Therefore, I present the need to rethink the concept of "time" again.
[1512] vixra:2008.0112 [pdf]
Our New Relativistic Wave Equation and Hydrogen-Like Atoms
In this paper, we look further into one of the relativistic wave equations we have introduced recently. Our relativistic wave equation is a PDE that is rooted in the relativistic energy Compton momentum relation, rather than the standard energy momentum relation. They are two sides of the same coin, but the standard momentum is just a derivative of the Compton momentum, so this simplifies things considerably. Here the main focus is to rewrite our relativistic PDE wave equation in spherical polar coordinates, then by some separation of variables we end up with three ordinary differential equations (ODEs) for which we present possible solutions, and we also provide a table where we compare our ODEs with the ODEs one gets from the Schrödinger equation [1]. This approach is used to describe hydrogen-like atoms. We encourage other researchers to check our calculations and the predictions from our solutions and see how they fit compared to observations from hydrogen-like atoms.
[1513] vixra:2008.0106 [pdf]
Conservation Of Displacement From Isotropic Symmetry
The length of a rod in motion may be different from the length of another identical rod at rest. The difference in length between two rods is independent of the direction of the motion of the moving rod. The isotropic symmetry demands that the difference in length is conserved in any direction. All moving rods are of identical length as long as the speed of motion is identical. The center of two rods in anti-parallel motion will coincide. The ends will also coincide. Such end-to-end match exists in all reference frames. The length of a rod is independent of reference frame and motion.
[1514] vixra:2008.0105 [pdf]
The Emission of Photons by Plasma Fluctuations
The totally ionized charged collisionless plasma at finite temperature is considered. Using the statistical and Schwinger field methods we derive the production of photons from the plasma by the Cerenkov mechanism. We derive the spectral formula of photons emitted by the plasma fluctuations. The calculation can be extended to the photon propagator involving radiative corrections.
[1515] vixra:2008.0095 [pdf]
Einstein’s Theory Resulting from a Logical Mistake by Lorentz
It is generally held that the Lorentz transformation is superior to the Galilean transformation. However, this paper reveals that Lorentz's application of both transformations to derive the electromagnetic wave equation for two inertial reference frames is logically flawed. This logical mistake makes the superiority of the Lorentz transformation questionable. The paper also demonstrates that both the Lorentz and Galilean transformations can generate standard equations for electromagnetic waves, while neither can keep the transformed Maxwell's equations consistent. Since the relativity theory is based on the belief that the Lorentz transformation can keep the laws of physics in the same form for different reference frames, the findings of this paper have important implications: either the relativity theory is not one hundred percent correct, or our understanding of the theory needs updating in light of the new knowledge.
[1516] vixra:2008.0092 [pdf]
On Seat Allocation Problem with Multiple Merit Lists
In this note, we present a simpler algorithm for the joint seat allocation problem in the case of two or more merit lists. In the case of two lists (the current situation for Engineering seats in India), the running time of the algorithm is proportional to the sum of the running times of two separate (delinked) allocations. The algorithm is straightforward and natural and is not (at least directly) based on the deferred acceptance algorithm of Gale and Shapley. Each person can only move higher in his or her preference list. Thus, all steps of the algorithm can be made public. This will improve transparency and trust in the system.
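The abstract states the key invariant (each candidate only ever moves up their own preference list) without the algorithm's details. The following toy Python sketch is our own illustration of one mechanism with that invariant, not necessarily the paper's algorithm: repeatedly sweep each merit list in rank order, letting a candidate trade up to a strictly preferred program whenever a seat of that category is free. Termination is guaranteed because every change strictly improves some candidate's rank.

```python
def allocate(prefs, merit, capacity):
    """Toy upgrade-only seat allocation with multiple merit lists.
    prefs:    {candidate: [program, ...]}  preference order, best first.
    merit:    {category: [candidate, ...]} merit order, best first.
    capacity: {(program, category): int}   seats per program and category.
    Returns {candidate: (program, category)} for every seated candidate."""
    assign = {}            # candidate -> (program, category) currently held
    free = dict(capacity)  # remaining seats per (program, category)
    changed = True
    while changed:
        changed = False
        for cat, order in merit.items():
            for cand in order:
                cur = assign.get(cand)
                # Rank of the currently held program (worse than any real
                # rank if the candidate holds nothing yet).
                cur_rank = prefs[cand].index(cur[0]) if cur else len(prefs[cand])
                for rank, prog in enumerate(prefs[cand]):
                    if rank >= cur_rank:
                        break  # only strictly better programs are considered
                    if free.get((prog, cat), 0) > 0:
                        if cur:
                            free[cur] += 1  # release the old seat
                        free[(prog, cat)] -= 1
                        assign[cand] = (prog, cat)
                        changed = True
                        break
    return assign
```

Because seats are only ever released when their holder moves to a strictly preferred program, every step of a run can be published without revealing anything beyond the merit lists and preferences, which is the transparency property the note emphasizes.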
[1517] vixra:2008.0090 [pdf]
Only Gravity
Simplified toy theories abound in theoretical physics. These toy models are extremely useful. An example is N = 4 supersymmetric Yang–Mills theory. In this toy model alone, tens of thousands of papers have been published, some cited thousands of times. This essay proposes that physicists consider studying "N=4 General Relativity" as a toy model. This 'Only Gravity' toy model uses Einstein's field equations on their own in the hope that ignoring complicated interactions of gravity with other fields (electromagnetism, etc.) and physical theories (quantum mechanics, QFT, etc.) may paradoxically help us understand more about quantum gravity.
[1518] vixra:2008.0086 [pdf]
Analysis on the Non Linearity of Time
In this paper I have used the Schwarzschild solution of Einstein's field equations to find the change of proper time in a gravitational field with respect to the time measured by a gravitationally unaffected stationary clock. The equations tell us about the nature of temporal flow, or temporal velocity, for any body under gravitational effects. I have further treated the solution to find a rate of change of the temporal velocity, which signifies a temporal acceleration for any varying-mass body. In the second phase I have used Einstein's time dilation equation of special relativity to find the temporal flow of a particle moving with a certain speed, neglecting the gravitational effect of the particle. Further calculations on the nature of temporal velocity reveal a temporal acceleration for a particle with different velocities at different instants of time. I have also found equations relating to the nature of time in extreme cases of the universe, like black holes, and for particles like photons.
[1519] vixra:2008.0078 [pdf]
Modified Analog Gravity from De Broglie Matter Wave
We have used the concept of De Broglie's matter wave associated with particles to derive an inverse-square law of gravitation, like Newton's law of gravitation, at the Planck length with a slight modification. Obtaining the Newtonian form, we have further extrapolated it to formulate Einstein's field equation, again with a slight modification. The Schwarzschild solution is calculated from the modified field equation, which gives us the Schwarzschild radius of a miniature black hole. Using the solution, the entropy and the Hawking temperature are also modeled theoretically for the miniature black hole at the Planck scale.
[1520] vixra:2008.0077 [pdf]
Oxford Dictionary of Social Work and Social Care and the Graphical Law
We study the Oxford Dictionary Of Social Work and Social Care. We draw the natural logarithm of the number of entries, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised. We conclude that the Dictionary can be characterised by BP(4,$\beta H=0.01$) i.e. a magnetisation curve for the Bethe-Peierls approximation of the Ising model with four nearest neighbours with $\beta H=0.01$. $\beta$ is $\frac{1}{k_{B}T}$ where, T is temperature, H is external magnetic field and $k_{B}$ is the Boltzmann constant.
[1521] vixra:2008.0070 [pdf]
Enteric Bacteria, Simple Introduction with Numerical Optical Sectioning
Human gastrointestinal microbiota, also known as gut flora or gut microbiota, are the microorganisms (generally bacteria and archaea) that live in the digestive tracts of humans. Many non-human animals, including insects, are hosts to numerous microorganisms that reside in the gastrointestinal tract as well. The human gastrointestinal metagenome is the aggregate of all the genomes of the gut microbiota. The gut is one niche that human microbiota inhabit.
[1522] vixra:2008.0065 [pdf]
La Théorie des Erreurs (Theory of Errors)
This is a digital version of a manuscript of a course on the theory of errors given by Engineer-in-Chief Philippe Hottier in the 1980s at the French National School of Geographic Sciences. The course lays the foundations of the method of least squares for the case of linear models.
[1523] vixra:2008.0032 [pdf]
Dense Summary of New Developments in Quantum Mechanics
In this note, we present a table with a summary of recent developments in quantum mechanics. If you have not studied our papers carefully already [1–4], then this paper will not make much sense and the equations could easily be misunderstood, so we strongly encourage the reader to become familiar with that material first. We will possibly write a long paper on this topic later, but it may be helpful to present the essence of both old and new QM as we understand it today. This is useful both for tracing the history of ideas related to QM in mathematical form and for creating an opportunity to compare equations and analyze their similarities and differences.
[1524] vixra:2008.0024 [pdf]
Basic Simulation of S and R Processes in Stars and Formation of Heavy Elements
In this introductory project, we have studied the neutron-capture processes which are responsible for the abundances of elements heavier than iron in different phases of stellar evolution. Our main aim is to understand the distribution of isotopic abundances in our universe. First we start with a discussion of nucleosynthesis and its types; then we discuss the different types of neutron-capture processes. After that we provide results of a simulation carried out using the Monte Carlo method. Finally we conclude with the importance of these processes and the current research status.
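The kind of simulation described can be caricatured by a toy Monte Carlo in which, at each step, a nucleus either captures a neutron or beta-decays; the probabilities and the iron-56 starting point are my own illustration, not the paper's actual model:

```python
import random

def capture_chain(p_capture, steps, seed=1):
    """Toy Monte Carlo of a neutron-capture chain starting from iron-56:
    at each step the nucleus captures a neutron (A -> A+1) with probability
    p_capture, otherwise it beta-decays (Z -> Z+1, A unchanged).
    Returns the final (Z, A).  Probabilities are purely illustrative."""
    rng = random.Random(seed)
    Z, A = 26, 56
    for _ in range(steps):
        if rng.random() < p_capture:
            A += 1          # neutron capture
        else:
            Z += 1          # beta decay
    return Z, A

# s-process-like (capture and decay compete) vs r-process-like (rapid capture):
print(capture_chain(0.5, 100), capture_chain(0.95, 100))
```

A high capture probability drives the chain to very neutron-rich nuclei before decay, which is the qualitative distinction between the s and r processes.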
[1525] vixra:2008.0023 [pdf]
Aristotle's View on Physics and Philosophy
Aristotle, one of the renowned Greek philosophers, made a huge contribution to the foundations of symbolic logic and scientific thinking in Western philosophy, in addition to advancing the field of ``Metaphysics''. He was probably the first to take the theory of ``Virtue Ethics'' seriously. These contributions made him possibly the most important philosopher until the 18th century. This article gives a brief overview of his ideas on physics and philosophy.
[1526] vixra:2008.0021 [pdf]
Investigation of the Onset of Turbulence in Boundary Layers and the Implications for Solutions of the Navier-Stokes Equations (V4)
This paper investigates the onset of turbulence in incompressible viscous fluid flow over a flat plate by looking at the pressure gradients implied by the Blasius solution for laminar fluid flow and adjusting the predicted flow, leading to a mathematically predictable flow separation in the boundary layer and the onset of turbulence (including both transition and fully turbulent regions - both with and without the presence of a flat plate). It then considers the implications for potential analytic solutions to the Navier-Stokes Equations of the fact that it is possible to predict turbulence and a singularity for many flows (at any velocity).
[1527] vixra:2008.0019 [pdf]
Further Development on Collision Space-Time: Unified Quantum Gravity
We will here show that there is one more relativistic wave equation, rooted in the relativistic energy Compton-momentum relation, which should not be confused with the standard relativistic energy-momentum relation. The standard momentum is, from a quantum perspective, rooted in the de Broglie wave. The de Broglie wave is not mathematically defined for particles at rest and has strange properties, such as diverging to infinity when the particle is almost at rest. As mentioned in the previous paper, the de Broglie wave is likely just a mathematical derivative of the true matter wave, which we have good reasons to think is the Compton wave. A wave equation that satisfies the relativistic energy Compton-momentum relation will, in addition, automatically satisfy the standard relativistic energy-momentum relation, so there is no conflict between the two relations. The new one related to Compton is just the deeper reality, and this helps explain why it gives simpler and more elegant relativistic wave equations, which are likely easier to interpret in terms of physical reality.
[1528] vixra:2008.0010 [pdf]
Rethinking Mass, Energy, Momentum, Time, and Quantum Mechanics
In this paper, we briefly discuss the most common wave equations in quantum mechanics and some recent developments in wave mechanics. We also present two new quantum wave-mechanics equations based on the Compton momentum. We also question the idea that energy and mass are scalars, and we claim they are vectors instead. We have good reasons to think that the standard momentum is a mathematical derivative of the more fundamental Compton momentum. This will hopefully simplify interpretations of quantum mechanics significantly; our new relativistic wave equations look promising, but need further investigation into what they predict. This way of looking at quantum mechanics in a new light is not in conflict with existing equations; they are supplemental to the collection of existing wave equations. We prove mathematically that if one satisfies our new relativistic energy Compton-momentum relation, one also automatically satisfies the standard relativistic energy-momentum relation. They are two sides of the same coin, where the relations to the Compton wavelength likely represent the deeper reality, so we have reasons to think our new wave mechanics addresses a deeper level of understanding than the existing conception. We also look at our new wave equation in relation to hydrogen-like atoms; we follow the ``standard approach'' used for the Schrödinger equation of putting it in polar-coordinate form and, by change of variables, finding three ODEs and their solutions. We give a table summarizing our new ODEs and their solutions compared to the well-known solutions of the Schrödinger equation.
[1529] vixra:2008.0002 [pdf]
On Evaluation of an Improper Real Integral Involving a Logarithmic Function
In this paper we use the methods in complex analysis to evaluate an improper real integral involving the natural logarithmic function. Our presentation is somewhat unique because we use traditional notation in performing the calculations.
[1530] vixra:2007.0244 [pdf]
A Magic Formula & "New" Way to Solve Quadratic Equations.
In this paper we give the formula for the square roots of a complex number z = a+ib, which is very important and useful, especially when we cannot write the complex number a+ib in exponential or geometric form. We then propose a precise, direct and fast method to solve equations of the second degree in the general case (that is, with complex coefficients). We also propose an algorithm that makes the calculations easier and quickly solves all types of second-degree equations.
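A sketch of the kind of direct method described: the square root of $a+ib$ computed from its real and imaginary parts alone (no exponential or geometric form), then the usual quadratic formula with complex coefficients. The formulas are the standard textbook ones; the worked example is my own:

```python
import math

def complex_sqrt(z):
    """A square root of z = a+ib via the direct algebraic formula
    sqrt(z) = sqrt((|z|+a)/2) + i*sign(b)*sqrt((|z|-a)/2),
    avoiding the exponential/geometric form of z."""
    a, b = z.real, z.imag
    r = math.hypot(a, b)                          # |z|
    u = math.sqrt((r + a) / 2)
    v = math.copysign(math.sqrt((r - a) / 2), b)  # sign follows b
    return complex(u, v)

def solve_quadratic(a, b, c):
    """Both roots of a*z^2 + b*z + c = 0 with complex coefficients (a != 0)."""
    d = complex_sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# z^2 - (1+2i)z + 2i = 0 factors as (z - 1)(z - 2i):
print(solve_quadratic(1, -(1 + 2j), 2j))
```

The discriminant here is $-3-4i$, whose square root $1-2i$ comes straight from the algebraic formula, with no polar form needed.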
[1531] vixra:2007.0225 [pdf]
Note to Wiseman & Cavalcanti (2015): All Bell Inequalities Are False; the Principle of Relativity is True and the World is Relativistically Local
‘Experiments violating a Bell inequality [BI] thus leave [‘realists'] no option: the principle of relativity is false [sic]. The world is nonlocal [sic],' Wiseman & Cavalcanti (2015). But we show that even high-school math violates a BI; in fact, all BIs—being inadequate for experiments with highly correlated outcomes—are false. Moreover: under the principle of relativity, elementary math shows that wholistic mechanics (WM)—classical mechanics extended to include Planck's constant—also violates BIs. So, with 3 ways to violate a BI—experiment, high-school math, WM—true-realists find: Bellians are being rather silly, as Bell (1990) half-expected; quantum correlations are wholly explicable via WM in our relativistically local world; the principle of relativity is true, nonlocality false. Importantly: for STEM teachers, and against popular opinion-pieces about quantum nonlocality, our results require no knowledge of QM. Let's see.
[1532] vixra:2007.0219 [pdf]
Euler's Derivation of Rigid Body Equations
In this work, we give a modern version of Euler's derivation of the equations governing rigid-body rotations. This derivation helps in understanding rigid-body rotations.
[1533] vixra:2007.0218 [pdf]
Algebra of Discrete Symmetries in the Extended Poincare Group
We begin with a comprehensive review of the basics of the Lorentz and (extended) Poincaré groups and of O(3,2) and O(4,1). On the basis of the Gelfand-Tsetlin-Sokolik-Silagadze research~[1-3], we investigate the definitions of the discrete-symmetry operators both at the classical level and in the second-quantization scheme. We study the physical content within several bases: the light-front formulation, the helicity basis and the angular-momentum basis, on several practical examples. The conclusion is that we have ambiguities in the definitions of the corresponding operators P, C and T, which lead to different physical consequences.
[1534] vixra:2007.0215 [pdf]
Dynamics of R\'enyi Entanglement Entropy in Diffusive Qudit Systems
My previous work [arXiv:1902.00977] studied the dynamics of R\'enyi entanglement entropy $R_\alpha$ in local quantum circuits with charge conservation. Initializing the system in a random product state, it was proved that $R_\alpha$ with R\'enyi index $\alpha>1$ grows no faster than ``diffusively'' (up to a sublogarithmic correction) if charge transport is not faster than diffusive. The proof was given only for qubit or spin-$1/2$ systems. In this note, I extend the proof to qudit systems, i.e., spin systems with local dimension $d\ge2$.
[1535] vixra:2007.0211 [pdf]
The Cooper Pair Potential in the Thermal Bath
The Coulomb and Yukawa potentials of particles are derived from the Schwinger field theory at zero and finite temperature. At the same time, the correction to the Coulomb potential is determined for two charges interacting in the photon black-body sea via the photon-exchange mechanism. The running coupling constant is determined in dependence on the mean free path of photons in the photon sea. The relation to Cooper pairs and the phonon-phonon interaction at finite temperature is discussed.
[1536] vixra:2007.0209 [pdf]
Lunar Rock Classification Using Machine Learning
Lunar landings by esteemed space agencies around the world have yielded an abundance of new scientific data on the Moon, which has helped scientists study our closest neighbour and has thus provided evidence for understanding Earth's past and future. This paper addresses the HackerEarth challenge of classifying lunar rocks as small or large. Such tasks have historically been conducted by visual image inspection, reducing the scope, reliability and accuracy of the retrieval. The competition was to build a machine learning model to reduce the human effort of performing a monotonous task. We built a Support Vector Machine model, widely used in classification problems, fed with features extracted from the images in the dataset using OpenCV, and obtained an accuracy of 99.41%. Our source code and the dataset are available in the GitHub repository https://github.com/ArshitaKalra/Lunar-Rock-classification.
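The pipeline the abstract describes (image features fed to a classifier) can be sketched without external dependencies; here a grayscale-histogram feature and a nearest-centroid rule stand in for the paper's OpenCV features and SVM, purely as a dependency-free simplification of my own:

```python
# Toy stand-in for the paper's pipeline: extract a simple image feature,
# then classify a rock as "small" or "large" by nearest class centroid.

def histogram_feature(pixels, bins=4):
    """Normalized intensity histogram of a flat list of 0-255 pixel values."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def nearest_centroid(train, sample):
    """train maps label -> list of feature vectors; returns closest class."""
    def centroid(vecs):
        return [sum(c) / len(vecs) for c in zip(*vecs)]
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(train, key=lambda lbl: dist2(centroid(train[lbl]), sample))

# Invented toy data: "large" rocks image brighter than "small" ones.
train = {
    "large": [histogram_feature([200, 210, 190, 220] * 4)],
    "small": [histogram_feature([40, 60, 50, 30] * 4)],
}
print(nearest_centroid(train, histogram_feature([205, 195, 215, 185] * 4)))
```

In the real solution an SVM would replace the nearest-centroid step, and OpenCV descriptors would replace the raw histogram.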
[1537] vixra:2007.0206 [pdf]
Sedeonic Generalization of Navier-Stokes Equation
We present a generalization of the equations of hydrodynamics based on the noncommutative algebra of space-time sedeons. It is shown that for vortex-less flow the system of Euler and continuity equations is represented as a single non-linear sedeonic second-order wave equation for scalar and vector potentials, which is naturally generalized on viscous and vortex flows. As a result we obtained the closed system of four equations describing the diffusion damping of translational and vortex motions. The main peculiarities of the obtained equations are illustrated on the basis of the plane wave solutions describing the propagation of sound waves.
[1538] vixra:2007.0200 [pdf]
More Problems in AI Research and How the Sp System May Help to Solve Them
This paper, a companion to "Problems in AI research and how the SP System may help to solve them", describes problems in AI research and how the "SP System" (described in sources detailed in the paper) may help to solve them. Most of these problems are described by leading researchers in AI in interviews with science writer Martin Ford, and reported by him in his book "Architects of Intelligence". Problems and their potential solutions that are described in this paper are: the need to rebalance research towards top-down strategies; how to minimise the risk of accidents with self-driving vehicles; the need for strong compositionality in the structure of knowledge; the challenges of commonsense reasoning and commonsense knowledge; establishing the key importance of information compression in AI research; establishing the importance of biological validity in AI research; whether knowledge in the brain is represented in 'distributed' or 'localist' form; the limited scope for adaptation of deep neural networks; and reasons are given for why the important subjects of motivations and emotions have not so far been considered. The evidence in this paper and its companion paper suggests that ***the SP System provides a firmer foundation for the development of artificial general intelligence than any alternative***.
[1539] vixra:2007.0196 [pdf]
There Exist Infinitely Many Pairs of Primes (p, p+2n), Where 2n > 2 is a Fixed Distance Between p and p+2n
We will prove the following results: 1. there exist infinitely many twin primes; 2. there exist infinitely many cousin primes; 3. the cousin primes are equivalent to the twin primes in infinity.
[1540] vixra:2007.0190 [pdf]
Global Stability for a System of Parabolic Conservation Laws Arising from a Keller-Segel Type Chemotaxis Model
In this paper, we investigate the time-asymptotically nonlinear stability to the initial-boundary value problem for a coupled system in (p, q) of parabolic conservation laws derived from a Keller-Segel type repulsive model for chemotaxis with singular sensitivity and nonlinear production rate g(p) = p^γ, where γ > 1. The proofs are based on a basic energy method without any smallness assumption. We also show the zero chemical-diffusion limit (ε → 0) of solutions in the case p̄ = 0.
[1541] vixra:2007.0183 [pdf]
Frame of Reference Moving Along a Line in Quantum Mechanics
We consider a frame of reference moving along a line with respect to an inertial frame of reference. We expect that the rate of change of the total-energy expectation value does not depend on the order of transformations when the transformation is a composition of a translation and a transformation to a frame accelerating with a time-dependent acceleration with respect to an inertial frame of reference.
[1542] vixra:2007.0180 [pdf]
Time-periodic Solution to the Compressible Viscoelastic Flows in Periodic Domain
In this paper, we are concerned with the time-periodic solutions to the three-dimensional compressible viscoelastic flows with a time-periodic external force in a periodic domain. By using an approach of parabolic regularization, combined with topological degree theory, we show the existence and uniqueness of the time-periodic solution to the model under some smallness and symmetry assumptions on the external force.
[1543] vixra:2007.0178 [pdf]
Initial-boundary Value Problems for a System of Parabolic Conservation Laws Arising From a Keller-Segel Type Chemotaxis Model
In this paper, we investigate the time-asymptotically nonlinear stability to the initial-boundary value problem for a coupled system in (p, q) of parabolic conservation laws derived from a Keller-Segel type repulsive model for chemotaxis with singular sensitivity and nonlinear production rate g(p) = p^γ, where γ > 1. The proofs are based on a basic energy method without any smallness assumption.
[1544] vixra:2007.0175 [pdf]
The Action-Reaction Asymmetry in the String
We consider a string whose left end is fixed and whose right end is in periodic motion. We show that the law of action-reaction symmetry is broken during the string's motion.
[1545] vixra:2007.0160 [pdf]
On Jordan-Clifford Algebras, Three Fermion Generations with Higgs Fields and a $ SU(3) \times SU(2)_L \times SU(2)_R \times U(1) $ model
It is shown how the algebra $ {{\bf J } }_3 [ { \bf C \otimes O } ] \otimes {Cl(4, {\bf C}) } $ based on the tensor product of the complex Exceptional Jordan $ {{\bf J } }_3 [ { \bf C \otimes O } ]$, and the complex Clifford algebra $ Cl(4, {\bf C}) $, can describe all of the spinorial degrees of freedom of three generations of fermions in four-spacetime dimensions, and, in addition, to include the degrees of freedom of three sets of pairs of complex scalar Higgs-doublets $\{ {\bf H}^i_L, {\bf H}^i_R\}; i = 1,2,3$, and their conjugates. A close inspection of the fermion structure of each generation reveals that it fits naturally with the $ {\bf 16}$ complex-dimensional representation of the internal left/right symmetric gauge group $ G_{LR} = SU(3)_C \times SU(2)_L \times SU(2)_R \times U(1)$. It is reviewed how the latter group emerges from the intersection of $ SO (10) $ and $ SU(3) \times SU(3) \times SU(3) $ in $E_6$. In the concluding remarks we briefly discuss the role that the extra Higgs fields may have as dark matter candidates; the construction of Chern-Simons-like matrix cubic actions; hexaquarks and Clifford bundles over the complex-octonionic projective plane $ { \bf (C \otimes O) P^2 } $ whose isometry group is $ E_6$.
[1546] vixra:2007.0156 [pdf]
de Broglie Wavelength For the Proton at a Very Low Velocity
In modern physics, the de Broglie wavelength is considered to be the matter wave. However, the de Broglie wave has a series of strange properties. It is not mathematically defined for a rest-mass particle when v=0. One can claim that a particle never stands still and that the de Broglie wavelength only converges towards infinity as v converges to zero, but an infinite matter wavelength would also be strange. We have good reasons to think that the de Broglie wavelength is only a mathematical derivative of the true matter wavelength, which we believe is the Compton wavelength. Although noted briefly here, this has already been described by Haug \cite{Hau20UnifiedA} and is a topic for another article. What we focus on here is that the minimum observable velocity, over an observational window of one second, is the Planck length per second. At this velocity, the de Broglie wavelength of a proton has a length very close to the assumed radius of the observable universe. We think this is most likely a coincidence, particularly since one second is an arbitrarily chosen time unit, and not a fundamental time unit such as the Compton time or the Planck time. Still, we think this finding is worth mentioning and could be the basis for further discussion.
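The order of magnitude behind this observation is easy to check with $\lambda = h/(m_p v)$ and $v$ set to one Planck length per second (the constant values are standard approximations, chosen by me, not quoted from the paper):

```python
# de Broglie wavelength of a proton at the "minimum observable velocity"
# of one Planck length per second.
h = 6.626e-34          # Planck constant, J*s
m_p = 1.6726e-27       # proton mass, kg
v = 1.616e-35          # one Planck length per second, m/s

wavelength = h / (m_p * v)   # metres; a cosmological length scale (~1e28 m)
print(wavelength)
```

The result is of cosmological magnitude, which is the comparison with the observable universe that the abstract discusses.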
[1547] vixra:2007.0152 [pdf]
One Field for All Known Forces: Relativity as an Exclusively Speed Problem
Our present standard model is based on basic laws for forces which were introduced mathematically by matching equations to experimentally obtained curves. The basic laws are those of Coulomb, Ampère, Lorentz, Maxwell, gravitation, etc. The equations were not deduced mathematically from the interactions of one postulated field, resulting in the need to introduce, for each particular manifestation of force, a different field, namely the electric, magnetic, strong, weak and gravitational fields. In the present paper a model is presented in which each known force is the product of a particular interaction of one field, consisting of longitudinal and transversal angular momenta of a Fundamental Particle (FP). It shows that electrons and positrons neither attract nor repel each other when the distance between them tends to zero. This allows muons, tauons and hadrons to be represented as swarms of electrons and positrons called quarks. The paper then concentrates on relativity, showing that it is a speed problem and not a time-space problem, and that time and space are absolute variables. It also shows that photons are emitted at light speed from their source and move at speeds different from light speed relative to a moving reference system.
[1548] vixra:2007.0143 [pdf]
The Blockcard Protocol: It's the Proof-of-Thought That Counts
We identify a major security flaw in the modern gift-transaction protocol that allows malicious entities to send questionable metadata to insecure recipients. To address these weaknesses we introduce the Blockcard protocol, a novel variant of blockchain technology that uses an asymmetric proof-of-work CPU cost function over payload metadata to provide a cryptographically secure and efficient method of verifying that gift-givers thought enough about the recipient's payload, or lack thereof, for it to count. This has the advantage of making it computationally infeasible and socially awkward for adversarial gift-givers to double-spend, spoof, or precompute their celebratory thoughts.
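A hash-based proof-of-work over gift metadata, in the generic style of blockchain systems, might look like the following; this is my own toy sketch, not the paper's actual cost function or parameters:

```python
import hashlib

def prove_thought(metadata: bytes, difficulty: int = 12) -> int:
    """Search for a nonce so that sha256(metadata || nonce) falls below a
    target with `difficulty` leading zero bits -- an expensive loop that
    stands in for the paper's asymmetric proof-of-work over gift metadata."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(metadata + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_thought(metadata: bytes, nonce: int, difficulty: int = 12) -> bool:
    """Verification is a single hash -- cheap, unlike the search above."""
    digest = hashlib.sha256(metadata + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

nonce = prove_thought(b"Happy birthday, Alice!")
assert verify_thought(b"Happy birthday, Alice!", nonce)
```

The asymmetry (slow to produce, instant to verify) is what makes precomputed or thoughtless gifts detectable.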
[1549] vixra:2007.0133 [pdf]
Theory that Predicts Elementary Particles and Explains Data About Dark Matter, Early Galaxies, and the Cosmos
We try to solve three decades-old physics challenges. List all elementary particles. Describe dark matter. Describe mechanisms that govern the rate of expansion of the universe. We propose new theory. The theory uses an extension to harmonic oscillator mathematics. The theory points to all known elementary particles. The theory suggests new particles. Based on those results, we do the following. We explain ratios of dark matter amounts to ordinary matter amounts. We suggest details about galaxy formation. We suggest details about inflation. We suggest aspects regarding changes in the rate of expansion of the universe. We interrelate the masses of some elementary particles. We interrelate the strengths of electromagnetism and gravity. Our work seems to offer new insight regarding three branches of physics. The branches are elementary particles, astrophysics, and cosmology.
[1550] vixra:2007.0132 [pdf]
Conservation Of Time From Parity Symmetry
A parity symmetry can be established between any pair of identical clocks. The rest frames of each clock can also form a parity symmetry. The time of each rest frame is independent of the direction of the motion of each clock. The elapsed time is identical for both rest frames. Consequently, the elapsed time is conserved in all reference frames. Conservation of elapsed time is a property of parity symmetry.
[1551] vixra:2007.0128 [pdf]
A Remark on the Strong Goldbach Conjecture
Under the assumption that $\sum \limits_{n\leq N}\Upsilon(n)\Upsilon(N-n)>0$, we show that for every even number $N>6$ \begin{align} \sum \limits_{n\leq N}\Upsilon(n)\Upsilon(N-n)=(1+o(1))K\sum \limits_{p|N}\sum \limits_{\substack{n\leq N/p}}\Lambda_{0}(n)\Lambda_{0}(N/p-n)\nonumber \end{align} for some constant $K>0$, where $\Upsilon$ and $\Lambda_{0}$ denote the master and the truncated von Mangoldt function, respectively. Using this estimate, we relate the Goldbach problem to the problem of showing that for all $N>6$ ($N\neq 2p$), if $\sum \limits_{p|N}\sum \limits_{\substack{n\leq N/p}}\Lambda_{0}(n)\Lambda_{0}(N/p-n)>0$, then $\sum \limits_{\substack{n\leq N/p}}\Lambda_{0}(n)\Lambda_{0}(N/p-n)>0$ for each prime $p|N$.
[1552] vixra:2007.0127 [pdf]
Unbihexium Ubh-310/354 or Orion Nucleus-307?
The structure of nuclei begins with the so-called lower-order nuclei, such as deuterium, tritium and helium He-3, which evolve into the helium nucleus He-4 and then into the first upper-order oxygen nucleus O-16. The second upper-order calcium nucleus Ca is based on the fundamental natural phenomenon of mirror symmetry, by repetition of the first upper-order oxygen nucleus plus one half of it, i.e. a factor of 2.5. The same holds for the third upper-order tin nucleus Sn, which emerges from the second upper-order calcium nucleus according to the same mirror symmetry and the same factor of 2.5. Furthermore, the orion nucleus Or-307 is forecast, as a theoretical construction, by repetition of the third upper-order tin nucleus plus one half of it, connected as the fourth upper-order nucleus according to mirror symmetry. The atomic numbers Z of the above four upper-order nuclei are the so-called four magic numbers, i.e. Z1=8, Z2=8x2.5=20, Z3=20x2.5=50 and Z4=50x2.5=125. This is the simple and elegant structure model, according to which nuclei consist of fixed helium nuclei He-4 (plus deuterium, tritium and helium He-3, all evolving into helium He-4) and neutrons rotating around them. It is noted that the word orion comes from the Greek όριον, meaning the limit. Thus, the orion nucleus Or-307 is the limiting nucleus of Nature that cannot be further divided, due to the indivisible original deuterium. Additionally, the orion nucleus Or-307 corresponds to the hypothetical chemical element with atomic number Z=126 and placeholder symbol Ubh (Ubh-310 or Ubh-354), also known as element 126 or eka-plutonium.
[1553] vixra:2007.0121 [pdf]
Quasinilpotent Operators on Separable Hilbert Spaces Have Nontrivial Invariant Subspaces
The invariant subspace problem is a well known unsolved problem in functional analysis. While many partial results are known, the general case for complex, infinite-dimensional separable Hilbert spaces is still open. It has been shown that the problem can be reduced to the case of operators which are norm limits of nilpotents. One of the most important subcases is that of quasinilpotent operators, for which the problem has been extensively studied for many years. In this paper, we will prove that every quasinilpotent operator has a nontrivial invariant subspace. This will imply that all the operators for which the ISP has not yet been established are norm limits of operators having nontrivial invariant subspaces.
[1554] vixra:2007.0116 [pdf]
Riemann Hypothesis: New Criterion, Evidence, and One-Page Proof
There are tens of self-proclaimed proofs of the Riemann Hypothesis, and only 2 or 4 disproofs of it, in arXiv. I add to the status quo my very short and clear results, even without explicit mention of the prime numbers. One of my breakthroughs uses the peer-reviewed achievement of Dr. Sole and Dr. Zhu, published just 4 years ago in a serious mathematical journal, INTEGERS.
[1555] vixra:2007.0115 [pdf]
Proofs for Goldbach's, Twin Prime, and Polignac's Conjectures
I derive a new equivalent formulation of Goldbach's strong conjecture and present several proofs of Goldbach's strong conjecture and other conjectures. You are free not to be enlightened about these facts. But please pay respect to the new formulations of the conjectures and the research methods in this note.
[1556] vixra:2007.0113 [pdf]
A More Elegant Proof of Poincare Conjecture
Besides the proof of the mathematical conjecture, a new form for the three-dimensional Euclidean sphere is given. This sphere can be embedded into a pseudo-Euclidean metric, providing a new description of the Universe.
[1557] vixra:2007.0112 [pdf]
Gravity Law Without Universalism is Solving Many Tasks
My MOND proposal includes General Relativity as a special case, i.e. I retain the effects of General Relativity in many regions of spacetime. I argue that my proposal can describe Dark Matter as well if one understands the modification of gravity as a tensor field X^{\mu\nu}(t,x,y,z) in the Einstein equations, i.e. as an additional mathematical parameter filling the Universe without correspondence to new particles. Notably, there are many different fields in nature, e.g. the Higgs field, the inflaton field, and the temperature-distribution field T(t,x,y,z). My testable prediction that weakly interacting Dark Matter particles will never be found has held up to today; therefore, Popper's falsifiability criterion is satisfied, because the underground detectors could have reported a signal. On the other hand, a testable proof would be the indirect discovery of the sterile neutrino. As it does not interact with visible matter even weakly, it is an example of X^{\mu\nu}. This theory, applied to key problems of physics, provides satisfactory answers.
[1558] vixra:2007.0111 [pdf]
On the Coloring of the EFL Graph Using Colors Equal to the Size of a Maximal Clique
In this short note, we give a proof of the fact that the chromatic number of the EFL graph formed by adjoining k cliques, such that any two cliques share at most one vertex, is k.
[1559] vixra:2007.0110 [pdf]
A Semantic Question Answering in a Restricted Smart Factory Domain Attaching to Various Data Sources
Industrial manufacturing has become more interconnected through smart devices such as Internet-of-Things edge devices, tablets, manufacturing equipment, and smartphones. Smart factories have emerged and evolved with digital technologies and data science in manufacturing systems over the past few years. Smart factories produce complex data that enables digital manufacturing, smart supply-chain management and enhanced assembly-line control. Nowadays, smart factories produce a large amount of data that needs to be comprehensible to human operators and experts in decision making. However, linked data is still hard for human operators to understand and interpret; thus we need a system that translates linked data into natural language, or summarizes the volume of linked data by eliminating undesired results in the linked-data repository. In this study, we propose a semantic question-answering system for a restricted smart-factory domain, attached to various data sources. In the end, we perform a qualitative and quantitative evaluation of the semantic question answering, discuss our findings and conclude the main points with regard to our research questions.
[1560] vixra:2007.0106 [pdf]
Contradictions, Mathematical Science and Incompleteness
Do you believe that science is based on contradictions? Let me consider the common experience of seeing a dot of the pencil on a paper. If I call the dot ``zero length dimension'' or ``zero extension'', then certainly I have seen `nothing'. But, if I have seen `nothing', I wonder how I can refer to `nothing', let alone naming `nothing' as ``a point''. Therefore, the expression ``zero length dimension'' is a contradiction. Mathematical science, as of now, is based on this contradiction that results from the attitude of exactness, because exact ``zero'' is non-referable and inexpressible. Such attitude leads to incomplete statements like ``infinitesimal quantities'' which never mention ``with respect to'' what. Consequently, as I find, science becomes fraught with singularity. I avoid this contradiction, by accepting my inability to do exact science. Therefore, I consider the dot as of ``negligible length dimension''. It is a practical statement rather than a sacrosanct axiom. The practicality serves the purpose of drawing geometry, that becomes impossible if I decide or choose to look at the dot through a magnifying glass. It then answers a different practical question, namely, what the dot is made up of. Certainly, reality of the dot depends on how I choose or decide to observe it. This is the essence of ``relational existence''. On the contrary, modern mathematical science is founded upon belief of ``independent existence''(invariant). My belief in inexact mathematical science and relational existence needs the introduction of an undecidable length unit to do arithmetic and leads to non-singular gravity. Further, the quest for justification of my choice or decision leads to my incompleteness -- ``I''-- the undecidable premise beyond science, the expression of which is a (useful) contradiction in itself as ``I'' is inexpressible.
[1561] vixra:2007.0102 [pdf]
Bounds on the Range(s) of Prime Divisors of a Class of Baillie-PSW Pseudo-Primes
In the literature [1], Carmichael numbers that satisfy the additional constraints $(p+1) \mid (N+1)$ for every prime divisor $p \mid N$ are referred to as ``Williams' numbers''\footnote{More precisely, ``1-Williams numbers''; however, the distinctions between different types of Williams' numbers are not relevant in this document, and therefore we refer to 1-Williams numbers simply as Williams' numbers.}. In the renowned Pomerance recipe~\cite{pomerance1984there} to search for Baillie-PSW pseudoprimes, there are heuristic arguments suggesting that the number of Williams' numbers could be large (or even unlimited). Moreover, it is shown~\cite{pomerance1984there} that if a Williams' number is encountered during a search in accordance with all of the conditions in that recipe~\cite{pomerance1984there}, then it must also be a Baillie-PSW pseudoprime. We derive new analytic bounds on the prime divisors of a Williams' number.\\ Application of the bounds to Grantham's set of 2030 primes (see~\cite{grantham-620-list}) drastically reduces the search space from the impossible size $\approx 2^{2030}$ to less than a quarter billion cases (160,681,183 cases to be exact; see the appendix for details). We tested every single case in the reduced search space with Maple code. The result showed that there is \underline{NO Williams' number (and therefore NO Baillie-PSW pseudo-prime which is also a Williams' number)} in the entire space of subsets of the Grantham set. The results thus demonstrate that Williams' numbers either do not exist or are extremely rare. We believe the former, i.e., that no such composite (i.e., a Williams' number of this type) exists.
[1562] vixra:2007.0090 [pdf]
Proof of Goldbach Conjecture
In the letter sent by Goldbach to Euler in 1742 (Christian, 1742), he stated that it seems that every odd number greater than 2 can be expressed as the sum of three primes. As reformulated by Euler, an equivalent form of this conjecture, called the strong or binary Goldbach conjecture, states that every positive even integer greater than or equal to 4 can be expressed as the sum of two primes, which is sometimes called a Goldbach partition. Jorg (2000) and Matti (1993) have verified it up to $4 \cdot 10^{14}$. Chen (1973) has shown that all large enough even numbers are the sum of a prime and the product of at most two primes. The majority of mathematicians believe that Goldbach's conjecture is true, especially on statistical considerations. On this subject we give a proof of Goldbach's strong conjecture whose veracity is based on a clear and simple approach.
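The statement of the strong conjecture can be checked empirically; the minimal sketch below only illustrates what a Goldbach partition is and does not reproduce or validate the paper's proposed proof.

```python
# Empirical illustration of the strong Goldbach conjecture's statement.

def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_partition(n: int):
    """Return the first Goldbach partition (p, n - p) of an even n >= 4, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None  # no counterexample is known

# Every even number in a small range admits a partition:
assert all(goldbach_partition(n) for n in range(4, 1000, 2))
print(goldbach_partition(100))  # (3, 97)
```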
[1563] vixra:2007.0088 [pdf]
Limited Polynomials
In this paper we study a particular class of polynomials. We study the distribution of their zeros, including the zeros of their derivatives, as well as the interaction between the two. We prove a weak variant of the Sendov conjecture in the case where the zeros are real and of the same sign.
[1564] vixra:2007.0085 [pdf]
Microscopy Image Processing for the Human Eye
In vivo confocal microscopy allows scientists to better understand eye health and systemic diseases. Microneuromas could play a role; however, monitoring their growth from a mosaic of images is error prone and time consuming. We used automated image stitching as a solution, focusing on the accuracy and computational speed of three different feature detection algorithms: SIFT, SURF, and ORB. The results illustrated that SURF was computationally efficient with our data. Future work is to create a global solution that can replace the need for manual image stitching in this application.
[1565] vixra:2007.0084 [pdf]
Nonextensive Belief Entropy
The belief entropy, an extension of information entropy to Dempster-Shafer evidence theory, performs well in handling uncertain information. The Tsallis entropy is a nonextensive extension of information entropy. However, how to apply the idea of belief entropy to improve the Tsallis entropy is still an open issue. This paper proposes the nonextensive belief entropy (NBE), which combines belief entropy and Tsallis entropy. If the extensive constant of the proposed model equals 1, the NBE degenerates into the classical belief entropy. Furthermore, when the basic probability assignment degenerates into a probability distribution, the proposed entropy degenerates into the classical Tsallis entropy. Meanwhile, if the NBE focuses on a probability distribution and the extensive constant equals 1, the NBE equals the classical information entropy. Numerical examples are applied to prove the efficiency of the proposed entropy. The experimental results show that the proposed entropy combines the belief entropy and Tsallis entropy effectively and successfully.
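The abstract does not give the combined NBE formula, so the sketch below only computes its two ingredients as they are usually defined: the belief (Deng) entropy of a basic probability assignment and the Tsallis entropy of a probability distribution. The particular frame and masses are illustrative, not from the paper.

```python
# Ingredients of the proposed NBE: Deng (belief) entropy and Tsallis entropy.
from math import log2

def deng_entropy(bpa: dict) -> float:
    """Belief (Deng) entropy of a basic probability assignment.
    Keys are frozensets (focal elements); values are masses summing to 1."""
    return -sum(m * log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

def tsallis_entropy(p, q: float) -> float:
    """Nonextensive Tsallis entropy of a probability distribution (q != 1)."""
    return (1 - sum(pi ** q for pi in p)) / (q - 1)

# When every focal element is a singleton, Deng entropy reduces to Shannon
# entropy, mirroring the degeneracy cases stated in the abstract.
bpa = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(deng_entropy(bpa))                 # 1.0 (Shannon entropy of a fair coin)
print(tsallis_entropy([0.5, 0.5], q=2))  # 0.5
```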
[1566] vixra:2007.0063 [pdf]
Is It Still Worth the Cost to Teach Compiling in 2020? A Pedagogical Experience Through the Hortensias Compiler and Virtual Machine
With the disruption produced by extensive automation of automation due to advanced research in machine learning and auto machine learning, even in programming language translation, the main goal of this paper is to discuss the following question: "Is it still worth the cost to teach compiling in 2020?". Our paper defends the "yes" answer within software engineering majors. The paper also shares best practices from more than 15 years of teaching a compiling techniques course, and presents and evaluates this experience through Hortensias, a pedagogical compiling laboratory platform providing a language compiler and a virtual machine. Hortensias is a multilingual pedagogical platform for learning and teaching how to build compiler front and back ends. The Hortensias language offers the programmer the possibility to customise the compiler's associativity management, visualise the intermediate representations of compiled code, and customise the optimisation management and the error-message language for international student communities. Hortensias offers the beginner programmer the possibility to use a graphical user interface to program by clicking. The evaluation of Hortensias' compiling pedagogy was conducted through two surveys involving, on a voluntary basis, engineering students and alumni during one week. It targeted two null hypotheses: the first supposes that compiler teaching is becoming outdated with regard to current curricula evolution, and the second supposes that Hortensias-based compiling pedagogy has no impact on either understanding or implementing compilers and interpreters.
During fifteen years of teaching compiler engineering, Hortensias was a wonderful pedagogical experiment both for teaching and for learning: vulgarising abstract concepts becomes much easier for teachers, lectures follow a gamification-like approach, and students become efficient in delivering versions of their compiler software product at a fast pace.
[1567] vixra:2007.0061 [pdf]
The Waring Rank of the 3 x 3 Permanent
Let $f$ be a homogeneous polynomial of degree $d$ with coefficients in a field $F$ satisfying char $F = 0$ or char $F > d$. The Waring rank of $f$ is the smallest integer $r$ such that $f$ is a linear combination of $r$ powers of $F$-linear forms. We show that the Waring rank of the polynomial $x_1 y_2 z_3 + x_1 y_3 z_2 + x_2 y_1 z_3 + x_2 y_3 z_1 + x_3 y_1 z_2 + x_3 y_2 z_1$ is at least 16, which matches the known upper bound.
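The six-term polynomial in the abstract is the permanent of the 3 x 3 matrix with rows $(x_1,x_2,x_3)$, $(y_1,y_2,y_3)$, $(z_1,z_2,z_3)$; the sketch below checks this identity numerically on random integer inputs (the check is illustrative and says nothing about Waring rank).

```python
# Numeric check that the abstract's polynomial equals the 3x3 permanent.
from itertools import permutations
import random

def permanent(M):
    """Permanent of a square matrix: like the determinant but without signs."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += prod
    return total

def six_term_poly(x, y, z):
    """x1 y2 z3 + x1 y3 z2 + x2 y1 z3 + x2 y3 z1 + x3 y1 z2 + x3 y2 z1."""
    return (x[0]*y[1]*z[2] + x[0]*y[2]*z[1] + x[1]*y[0]*z[2]
            + x[1]*y[2]*z[0] + x[2]*y[0]*z[1] + x[2]*y[1]*z[0])

random.seed(0)
for _ in range(100):
    x, y, z = ([random.randint(-9, 9) for _ in range(3)] for _ in range(3))
    assert permanent([x, y, z]) == six_term_poly(x, y, z)
print("polynomial matches the 3x3 permanent on 100 random inputs")
```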
[1568] vixra:2007.0042 [pdf]
Some Relations Among Pythagorean Triples
Some relations among Pythagorean triples are established. The main tool is a fundamental characterization of Pythagorean triples through a cathetus, which makes it possible to determine the relationships between two Pythagorean triples with assigned catheti $a$ and $b$ and the Pythagorean triple with cathetus $a \cdot b$.
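The paper's own characterization is not reproduced in the abstract; the sketch below uses the textbook one: $a^2 = (c-b)(c+b)$, so each factorization $a^2 = d \cdot e$ with $d < e$ and $d, e$ of the same parity yields a triple with $b = (e-d)/2$, $c = (e+d)/2$.

```python
# All Pythagorean triples with a given cathetus a, via factoring a^2.
# (Standard characterization, not necessarily the paper's.)

def triples_with_cathetus(a: int):
    """All (a, b, c) with a^2 + b^2 = c^2 and b, c positive integers."""
    out = []
    sq = a * a
    for d in range(1, a):          # d < e forces d < a (and b > 0)
        if sq % d == 0:
            e = sq // d
            if (e - d) % 2 == 0:   # same parity -> integer b and c
                b, c = (e - d) // 2, (e + d) // 2
                out.append((a, b, c))
    return sorted(out)

print(triples_with_cathetus(12))
# [(12, 5, 13), (12, 9, 15), (12, 16, 20), (12, 35, 37)]
```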
[1569] vixra:2007.0036 [pdf]
Differential Quotients and Division by Zero
In this very short note, we present a pleasant relation between the basic idea of differential quotients $dy/dx$ of Leibniz and division by zero $1/0=0$. This gives a natural interpretation of the important result $\tan (\pi/2)=0$.
[1570] vixra:2007.0028 [pdf]
About Dark Matter and Gravitation
A close inspection of Zwicky's seminal papers on the dynamics of galaxy clusters reveals that the discrepancy discovered between the dynamical mass and the luminous mass of clusters was widely overestimated in 1933 as a consequence of several factors, among them the excessive value of the Hubble constant $H_0$, then believed to be about seven times higher than today's average estimate. Taking into account, in addition, our present knowledge of classical dark matter inside galaxies, the contradiction can be reduced by a large factor. To explain the rather small remaining discrepancy of the order of 5, instead of appealing to a hypothetical exotic dark matter, the possibility of an inhomogeneous gravity is suggested. This is consistent with the ``cosmic tapestry'' found in the eighties by De Lapparent and her co-authors, showing that the cosmos is highly inhomogeneous at large scale. A possible foundation for inhomogeneous gravitation is the universally discredited ancient theory of Fatio de Duillier and Lesage on pushing gravity, possibly revised to avoid the main criticisms which led to its oblivion. This model incidentally opens the window towards a completely non-standard representation of the cosmos, and more basically calls for fundamental investigation to find the origin of the large-scale inhomogeneity in the distribution of luminous matter.
[1571] vixra:2007.0020 [pdf]
Comment on "Perturbative Operator Approach to High-Precision Light-Pulse Atom Interferometry"
An anomaly of the Earth's gravitational field could increase the second-order gravity-gradient tensor by more than an order of magnitude and the third-order gravity-gradient tensor by more than 4 orders of magnitude. As a result, estimates of the systematic errors in atomic gravimetry considered in the articles [1, 2] should be proportionally enlarged.
[1572] vixra:2007.0009 [pdf]
Composition of Relativistic Gravitational Potential Energy
A relativistic composition of gravitational redshift can be implemented using the Volterra product integral. Using this composition as a model, expressions are developed for gravitational potential energy, escape velocity, and a metric. Each of these expressions alleviates a perceived defect in its conventional counterpart. Unlike current theory, relativistic gravitational potential energy would be limited to rest energy (Machian), escape velocity resulting from the composition would be limited to the speed of light, and the associated metric would be singularity-free. These ideal properties warrant investigation, at a foundational level, into relativistic compositions based on product integration.
[1573] vixra:2007.0003 [pdf]
Formulation and Validation of a First-Order Lagrangian
An unproven Lagrangian generates erroneous theory. An unknown Lagrangian can be validated with the Hamiltonian. An invalid Lagrangian can be formulated into a valid Lagrangian with a three-phase process based on the Euler-Lagrange equation and conservation laws. The formulation process modifies the Lagrangian with a system invariant. The process is superior to the popular trial-and-error approach.
[1574] vixra:2006.0277 [pdf]
The Explosion at the Center of Our Galaxy
At the center of the Milky Way, our black hole may have suddenly changed from supermassive to intermediate-mass status. In doing so, it would have emitted an enormous burst of electromagnetic radiation. Here, the total energy of that burst is calculated and compared with the Fermi bubble data.
[1575] vixra:2006.0270 [pdf]
Ordered Motions in the Universe
There is hardly any mention in the broad literature of our observed knowledge about the ordered global motions taking place in our Galaxy and beyond, which require powerful re-ejections, at their innermost centers, of the matter that steadily falls in from outside. These re-ejections can only be achieved by nuclear burning of the strongly compressed hydrogen inside their central accretion disks. SMBHs cannot do it.
[1576] vixra:2006.0263 [pdf]
Approximation of Harmonic Series
Background: the harmonic series is the partial sum of a harmonic progression. There have been multiple formulas to approximate the harmonic series, from Euler's formula to even a few in the 21st century. Mathematicians have concluded that the sum cannot be expressed in closed form; however, any approximation better than the previous ones is always needed. In this paper we discuss the flaws in Euler's formula for the approximation of the harmonic series and provide a better formula. We also use the infinite harmonic series to determine approximations of finite harmonic series using the Euler-Mascheroni constant. We also look at the Leibniz series for Pi and determine the correction factor that Leibniz discussed in his paper, which he found using Euler numbers. Each subsequent approximation we find in this paper is better than all previous ones. Different approximations for different types of harmonic series are calculated, each best fit for the given type of harmonic series. The correction factor for the Leibniz series might not provide any applied results, but it is a great way to ponder some other infinite harmonic series.
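The classical Euler approximation the paper starts from can be compared against the exact partial sums directly; the sketch below shows that baseline only (the paper's improved formulas are not reproduced here), with the Euler-Mascheroni constant hard-coded.

```python
# Exact partial harmonic sums versus Euler's approximation
# H_n ~ ln(n) + gamma + 1/(2n).
from math import log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def harmonic(n: int) -> float:
    """Exact partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def euler_approx(n: int) -> float:
    """Euler's classical approximation of H_n."""
    return log(n) + GAMMA + 1.0 / (2 * n)

for n in (10, 100, 1000):
    print(n, harmonic(n), euler_approx(n))
# The error of this approximation shrinks like 1/(12 n^2).
```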
[1577] vixra:2006.0255 [pdf]
Decays of the Hagen-Hurley Bosons: Possible Compositeness of the W Boson
We continue our study of the Hagen-Hurley equations describing spin 1 bosons. Recently, we have demonstrated that it is possible to describe the decay of a Hagen-Hurley boson into a lepton and a neutrino. However, it was necessary to assume that the spin of the boson is in the $0\oplus 1$ space. We have suggested that this Hagen-Hurley boson can be identified with the W boson mediating weak interactions. The mixed beta decays have been explained by a mechanism of spin 1 and spin 0 mixing of the virtual W boson. In this work, we study the top quark decay involving the real W boson. We substantiate the view that the real W boson is a mixture of spin 1 and spin 0 states.
[1578] vixra:2006.0254 [pdf]
Ulam Numbers Have Zero Density
In this paper we show that the natural density $\mathcal{D}[(U_m)]$ of Ulam numbers $(U_m)$ satisfies $\mathcal{D}[(U_m)]=0$. That is, we show that for $(U_m)\subset [1,k]$,
\begin{align}
\lim\limits_{k\longrightarrow \infty}\frac{\left|(U_m)\cap [1,k]\right|}{k}=0.\nonumber
\end{align}
[1579] vixra:2006.0236 [pdf]
8 Boolean Atoms Spanning the 256-Dimensional Entanglement-Probability Three-Set Algebra of the Two-Qutrit Hiesmayr-Loffler Magic Simplex of Bell States
We obtain formulas (bot. p. 12)--including $\frac{2}{121}$ and $\frac{4 \left(242 \sqrt{3} \pi -1311\right)}{9801}$--for the eight atoms (Fig.~\ref{fig:Venn}), summing to 1, which span a 256-dimensional three-set (P, S, PPT) entanglement-probability boolean algebra for the two-qutrit Hiesmayr-L{\"o}ffler states. PPT denotes positive partial transpose, while P and S provide the Li-Qiao necessary {\it and} sufficient conditions for entanglement. The constraints ensuring entanglement are $s> \frac{16}{9} \approx 1.7777$ and $p> \frac{2^{27}}{3^{18} \cdot 7^{15} \cdot13} \approx 5.61324 \cdot 10^{-15}$. Here, $s$ is the square of the sum (Ky Fan norm) of the eight singular values of the $8 \times 8$ correlation matrix in the Bloch representation, and $p$, the square of the product of the singular values. In the two-{\it ququart} Hiesmayr-L{\"o}ffler case, one constraint is $s>\frac{9}{4} \approx 2.25$, while $\frac{3^{24}}{2^{134}} \approx 1.2968528306 \cdot 10^{-29}$ is an upper bound on the appropriate $p$ value, with an entanglement probability $\approx 0.607698$. The $S$ constraints, in both cases, prove equivalent to the well-known CCNR/realignment criteria. Further, we detect and verify--using software of A. Mandilara--pseudo-one-copy undistillable (POCU) negative partial transposed two-qutrit states distributed over the surface of the separable states. Additionally, we study the {\it best separable approximation} problem within this two-qutrit setting, and obtain explicit decompositions of separable states into the sum of eleven product states. Numerous quantities of interest--including the eight atoms--were, first, estimated using a quasirandom procedure.
[1580] vixra:2006.0235 [pdf]
A Vector Interpretation of Quaternion Mass Function
A mass function vector is used to handle uncertainty. The quaternion numbers extend the real numbers. The mass function vector extends the mass function by combining it with a vector. In this paper, the mass function vector is extended by quaternion numbers and named the Quaternion Mass Function Vector (QMFV). The proposed QMFV has the advantage of dealing with uncertain information. When the quaternion numbers degenerate into real numbers, the QMFV degenerates into the quaternion mass function. In addition, if the probability of multi-element subsets of the frame of discernment is not assigned to the single-element subsets, the mass function vector degenerates into the mass function of classical evidence theory. When the quaternion numbers degenerate into real numbers, the combination rule of quaternion mass function vectors degenerates into the combination rule of mass function vectors. In the case when the probability of multi-element subsets of the frame of discernment is not assigned to the single-element subsets, the combination rule of mass function vectors degenerates into the generalized Dempster's rule of combination. Numerical examples are applied to prove the efficiency of the proposed model. The experimental results show that the proposed model applies quaternion theory to the mass function vector effectively and successfully.
[1581] vixra:2006.0218 [pdf]
Nonlinear Continuum Mechanics with Defects Resembles Electrodynamics - A Comeback of the Aether?
This article discusses the dynamics of an incompressible, isotropic elastic continuum. Starting from the Lorentz-invariant motion of defects in elastic continua (Frank 1949), MacCullagh's aether theory (1839) of an incompressible elastic solid is reconsidered. Since MacCullagh's theory, based on linear elasticity, cannot describe charges, particular attention is given to a topological defect that causes large deformations and therefore requires a nonlinear description. While such a twist disclination can take the role of a charge, the deformation field of a large number of these defects produces a microstructure of deformation related to a Cosserat continuum (1909). On this microgeometric level, a complete set of quantities can be defined that satisfies equations equivalent to Maxwell's.
[1582] vixra:2006.0214 [pdf]
Lingacom Muography
Lingacom Ltd. develops detectors for muography -- imaging using cosmic-ray muons, together with imaging algorithms and tools. We present selected simulation results from muon imaging of cargo containers, from a joint muon and X-ray imaging algorithm, and for ground surveys using borehole detectors. This follows a presentation in the ``Cosmic-ray muography'' meeting of the Royal Society.
[1583] vixra:2006.0210 [pdf]
Quaternion Mass Function
A mass function is used to handle uncertainty. The quaternion numbers extend the complex numbers. In this paper, the classical mass function is extended by quaternion numbers and named the Quaternion Mass Function (QMF). The proposed QMF has the advantage of dealing with uncertain information. When the quaternion numbers degenerate into complex numbers, the QMF degenerates into the complex mass function. In addition, if the complex mass function degenerates into real numbers, the QMF is the same as the mass function in classical evidence theory. In the case when the quaternion numbers degenerate into real numbers and the QMF focuses on the single-element subsets of the frame of discernment, the QMF is the same as the probability distribution in probability theory. A combination rule is also presented to combine two QMFs, which generalizes Dempster's rule. In the case when the quaternion mass function degenerates into real numbers and assigns mass only to single-element subsets, the proposed combination rule degenerates into Bayesian updating in probability theory. Numerical examples are applied to prove the efficiency of the proposed model. The experimental results show that the proposed model applies quaternion theory to the mass function effectively and successfully.
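The quaternion arithmetic underlying any such extension, together with the degeneracy chain the abstract describes (quaternion to complex to real), can be sketched with the Hamilton product; the QMF and its combination rule are defined in the paper and are not reproduced here.

```python
# Hamilton product of quaternions (a, b, c, d) ~ a + b*i + c*j + d*k.

def qmul(q1, q2):
    """Hamilton product of two quaternions given as 4-tuples."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# i * j = k, illustrating the noncommutative structure:
print(qmul((0, 1, 0, 0), (0, 0, 1, 0)))  # (0, 0, 0, 1)

# With zero j and k parts the product reduces to complex multiplication,
# matching the degeneracy chain quaternion -> complex -> real:
print(qmul((1, 2, 0, 0), (3, 4, 0, 0)))  # (-5, 10, 0, 0), i.e. (1+2i)(3+4i)
```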
[1584] vixra:2006.0206 [pdf]
Majorization in the Framework of 2-Convex Systems
We define a 2-convex system by the restrictions $x_{1} + x_{2} + \ldots + x_{n} = ns$, $e(x_{1}) + e(x_{2}) + \ldots + e(x_{n}) = nk$, $x_{1} \geq x_{2} \geq \ldots \geq x_{n}$ where $e:I \to \mathbb{R}$ is a strictly convex function. We study the variation intervals for $x_k$ and give a more general version of the Boyd-Hawkins inequalities. Next we define a majorization relation on $A_S$ by $x\preccurlyeq_p y$ $\Leftrightarrow$ $T_k(x) \leq T_k(y) \ \forall 1 \leq k \leq p-1$ and $B_k(x) \leq B_k(y) \ \forall p+2 \leq k \leq n$ (for fixed $1 \leq p \leq n-1$), where $T_k(x) = x_1 + \ldots + x_k$ and $B_k(x) = x_k + \ldots + x_n$. The following Karamata-type theorem is given: if $x, y \in A_S$ and $x\preccurlyeq_p y$, then $f(x_1) + f(x_2) + \ldots + f(x_n) \leq f(y_1) + f(y_2) + \ldots + f(y_n)$ for all $f:I \to \mathbb{R}$ 3-convex with respect to $e$. As a consequence, we get an extended version of the equal variable method of V. Cîrtoaje.
[1585] vixra:2006.0204 [pdf]
Cycles in Generalized Collatz Sequences
The generalized Collatz sequences are defined by $D(x) = x/r$ if $x \bmod r = 0$ and $T(x) = \lfloor px/r \rfloor$ otherwise. It has previously been shown (2002.0594) with $px + q$ sequences that numerical cycles are derived from algebraic cycles. The same is shown here in a richer framework where the number of elementary functions increases from 2 to $r$. Again, the beginning and the end of each sequence are connected by a diophantine equation, $p^m x - r^d y - q = 0$, where $m$ and $d$ are the respective numbers of multiplications and divisions. There are still always rotation cycles $(q_1 q_2 \ldots q_m)$, while derived cycles $(x_1 x_2 \ldots x_m)$ are present only when the $q_i / (r^d - p^m)$ are integers. The function $R$ outlined by $R^m(q) = q$ proves to be a powerful computational tool. In addition, the subsequences are numbered and one can easily find a subsequence from its rank $\rho$ in the base $r$.
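The map itself is easy to iterate; the sketch below implements the two elementary functions from the abstract and detects a numerical cycle by keeping a trajectory. The parameter choice $p=3$, $r=2$ and the starting values are illustrative, not taken from the paper.

```python
# One step of the generalized Collatz map and a simple cycle detector:
# D(x) = x // r when r | x, otherwise T(x) = floor(p * x / r).

def step(x: int, p: int, r: int) -> int:
    """One application of the generalized Collatz map."""
    return x // r if x % r == 0 else (p * x) // r

def find_cycle(x: int, p: int, r: int, max_iter: int = 10_000):
    """Iterate from x; return the cycle as a list once a value repeats."""
    seen, traj = {}, []
    for i in range(max_iter):
        if x in seen:
            return traj[seen[x]:]
        seen[x] = i
        traj.append(x)
        x = step(x, p, r)
    return None

print(find_cycle(5, 3, 2))  # [5, 7, 10]: 5 -> floor(15/2)=7 -> floor(21/2)=10 -> 5
```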
[1586] vixra:2006.0184 [pdf]
Sums of Powers of Fibonacci and Lucas Numbers and their Related Integer Sequences
In this paper we will look at sums of odd powers of Fibonacci and Lucas numbers of even indices. Our motivation will be conjectures, now theorems, which go back to Melham. Using the simple approach of telescoping sums we will be able to give new proofs of those results. Along the way we will establish inverse relationships for such sums and discover new integer sequences.
[1587] vixra:2006.0182 [pdf]
Invisible Decays of Neutral Hadrons
Invisible decays of neutral hadrons are evaluated as ordinary-mirror particle oscillations using the newly developed mirror matter model. Assuming equivalence of the $CP$ violation and mirror symmetry breaking scales for neutral kaon oscillations, rather precise values of the mirror matter model parameters are predicted for such ordinary-mirror particle oscillations. Not only do these parameter values satisfy the cosmological constraints, but they can also be used to precisely determine the oscillation or invisible decay rates of neutral hadrons. In particular, invisible decay branching fractions for relatively long-lived hadrons such as $K^0_L$, $K^0_S$, $\Lambda^0$, and $\Xi^0$ due to such oscillations are calculated to be $9.9\times 10^{-6}$, $1.8\times 10^{-6}$, $4.4\times 10^{-7}$, and $3.6\times 10^{-8}$, respectively. These significant invisible decays are readily detectable at existing accelerator facilities.
[1588] vixra:2006.0180 [pdf]
Universal Complexity in Action: Active Condensed Matter, Integral Medicine, Causal Economics and Sustainable Governance
We review the recently proposed universal concept of dynamic complexity and its new mathematics based on the unreduced interaction problem solution. We then consider its progress-bringing applications at various levels of complex world dynamics, including complex-dynamical nanometal physics and living condensed matter, unreduced nanobiosystem dynamics and the integral medicine concept, causally complete management of complex economical and social dynamics, and the ensuing concept of truly sustainable world governance.
[1589] vixra:2006.0150 [pdf]
A Solution to Einstein's Field Equations in Which the Lambda Discrepancy Is Resolved
The acceptance of the multiverse by prominent cosmologists opens the door to exploring alternative solutions to Einstein's field equations. This brief paper explores the mathematics of an alternative solution in which it is postulated that $d\tau$ physically behaves differently than it does in the FLRW cosmology (i.e., $d\tau=a(t)dt$ as opposed to the FLRW's $d\tau=dt$). The equations that are analogous to the Friedmann equations contain an additional $a^2$ term, and the equation that is analogous to the Friedmann acceleration equation has a change in sign. The age-based $a(t)=t/t_0$ solution to these equations results in an Einstein variable gravity model in which the cosmological constant ($\Lambda$) discrepancy is resolved. The analogs to the Friedmann equations, when evaluated between the Planck era and $t_0$, effectively reduce to the same expression as the calculation of the vacuum zero-point energy, i.e. $\Omega_{\Lambda_Z}=10^{120}$.
[1590] vixra:2006.0143 [pdf]
Structure Model of Uranium Nucleus U-235
The structure of the nuclei begins with the so-called lower-order nuclei, such as deuterium H-2, tritium H-3 and helium He-3, which evolve into the helium nucleus He-4 and then the first upper-order oxygen nucleus O-16, which has four helium nuclei He-4 in a column of strong negative electric field. Furthermore, the second upper-order calcium nucleus Ca is based on the fundamental natural phenomenon of mirror symmetry, repeating the first upper-order oxygen nucleus and its half, i.e. a factor of 2.5. The same holds for the third upper-order tin nucleus Sn, which emerged from the second upper-order calcium nucleus Ca, according to the same mirror symmetry and the same factor of 2.5. It is noted that the tin nucleus Sn further forms the basis for the structure of all heavy nuclei up to the radioactive uranium nucleus U-235. That is the simple and elegant structure model, according to which nuclei consist of fixed helium nuclei He-4 (plus deuterium, tritium and helium He-3, all evolving into helium He-4) and neutrons rotating around them.
[1591] vixra:2006.0141 [pdf]
How a New Mathematics May with Advantage be Applied in Science
This paper is about a proposed "New Mathematics" (NM) and its potential in science. The NM is proposed as an amalgamation of mathematics with the "SP System" -- meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model" -- both now and as they may be developed in the future. A key part of the structure and workings of the SP System and the proposed NM is the compression of information via the matching and unification of patterns. A preamble includes: brief notes about solipsism in science; an introduction to the SP System with pointers to where fuller information may be found; an outline of how the NM may be developed; and a discussion of several aspects of information compression. In the sections that follow: 1) A summary of some of the potential benefits of the NM in science, including: adding an AI dimension to mathematics; facilitating the integration of mathematics, logic, and computing; development of the NM as a "universal framework for the representation and processing of diverse kinds of knowledge" (UFK); a new perspective on statistics; and new concepts of proof, theorem, and so on. 2) A discussion of mathematical and non-mathematical means of representing and processing scientific knowledge. 3) A discussion of how the NM may help overcome the known problems with infinity in physics. 4) How the NM may help in modelling the quantum mechanics concepts of 'superposition' and 'qubits' via analogies with concepts in stochastic computational linguistics and ordinary mathematics. 5) Likewise, how the NM may prove useful in modelling the quantum mechanics concepts of 'nonlocality' and 'entanglement' via an analogy with the phenomenon of discontinuous dependencies in natural languages.
6) How the NM, with the SP System, provides alternative, and arguably more plausible, interpretations of such concepts as the 'Mathematical Universe Hypothesis' and the 'Many Worlds' interpretation of quantum mechanics, as described in Max Tegmark's book "Our Mathematical Universe". Two appendices include: A) a tentative 'tsunami' interpretation of the concept of 'wave-particle duality' in quantum mechanics; and B) a discussion of the possibility of interference fringes with real tsunamis.
[1592] vixra:2006.0137 [pdf]
Tangent Velocity Of Schwarzschild Geodesics
The Schwarzschild metric describes the geodesic of a massless point in a manifold created by a point mass at the origin. The metric forbids any mass on the geodesic. The speed of light along the geodesic is a function of its distance to the origin. The time is also a function of the distance. Consequently, light accelerates through the geodesic unless the geodesic follows a circular orbit around the origin, which Schwarzschild characterized with Kepler's law.
[1593] vixra:2006.0126 [pdf]
AIXI Responses to Newcomblike Problems
We provide a rigorous analysis of AIXI's behaviour in repeated Newcomblike settings. In this context, a Newcomblike problem is a setting where an agent is pitted against an environment that contains a perfect predictor, whose predictions are used to determine the environment's outputs. Since AIXI lacks good convergence properties, we chose to focus the analysis on determining whether an environment appears computable to AIXI, that is, whether it maps actions to observations in a way that a computable program could achieve. It is in this sense that, it turns out, AIXI can learn to one-box in *repeated* Opaque Newcomb, and to smoke in *repeated* Smoking Lesion, but may fail all other Newcomblike problems, because we found no way to reduce them to a computable form. However, we still suspect that AIXI can succeed in the repeated settings.
[1594] vixra:2006.0124 [pdf]
Disruptive Gravity: Gravitation as Quantizable Spacetime Bending Force
Gravity is the most problematic interaction of modern science. Our current understanding of gravitation as a spacetime curvature needs the introduction of both Dark Matter and Dark Energy accounting for 95\% of the energy of the universe. Questioning the very foundations of gravity might be the key to understanding it better since its description changed over time. Newton described it as a force, Einstein described it as a spacetime curvature and this paper shows how gravity can be described as a force able to bend spacetime instead. Based on a physical interpretation of the Schwarzschild metric, this approach yields the same predictions as General Relativity such as Mercury’s Perihelion Precession, Light Bending, Time Dilation and Gravitational Waves as well as predicted testable deviations from General Relativity. Applying this description of gravity to cosmology accounts for the accelerating expanding universe with no need for Dark Energy. Described as a spacetime bending force, gravity becomes quantizable as a force in a curved spacetime which is compatible with the Standard Model of particle physics. Therefore, one could adapt the Lagrangian of the Standard Model to this theory to achieve Quantum Gravity.
[1595] vixra:2006.0110 [pdf]
The Information Volume of Uncertain Information: (7) Information Quality
Information quality is a concept that can be used to measure the information of a probability distribution. Dempster-Shafer evidence theory can describe uncertain information more reasonably than probability theory, so proposing an information quality applicable to evidence theory is a research hot spot. Recently, Deng proposed the concept of information volume based on Deng entropy. It is worth noting that, compared with Deng entropy itself, the information volume of Deng entropy contains more information, so it may be more reasonable to use it to represent uncertain information. This article therefore proposes a new information quality based on the information volume of Deng entropy. In addition, when the basic probability assignment (BPA) degenerates into a probability distribution, the proposed information quality is consistent with the information quality proposed by Yager and Petry. Finally, several numerical examples illustrate the effectiveness of the new method.
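Several entries in this series build on Deng entropy. As a minimal sketch for orientation (the paper's information-volume and information-quality constructions are not reproduced here), Deng entropy of a basic probability assignment can be computed as:

```python
from math import log2

def deng_entropy(m):
    """Deng entropy of a mass function.

    m maps each focal element (a frozenset) to its mass; masses sum to 1.
    E_d(m) = -sum m(A) * log2(m(A) / (2^|A| - 1)).
    """
    return -sum(mass * log2(mass / (2 ** len(A) - 1))
                for A, mass in m.items() if mass > 0)

# For a BPA concentrated on singletons, Deng entropy reduces to Shannon entropy.
m_prob = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
m_vague = {frozenset({'a', 'b'}): 1.0}   # total ignorance on a 2-element frame
print(deng_entropy(m_prob))   # 1.0 (= Shannon entropy of a fair coin)
print(deng_entropy(m_vague))  # log2(3), about 1.585
```

The second example shows why Deng entropy is used for vague evidence: total ignorance on a two-element frame carries more "volume" than a fair coin, because the multi-element focal set is weighted by its 2^|A| - 1 non-empty subsets.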
[1596] vixra:2006.0105 [pdf]
First Steps of Vector Differential Calculus
This paper treats the fundamentals of the *vector differential calculus* part of *universal geometric calculus.* Geometric calculus simplifies and unifies the structure and notation of mathematics for all of science and engineering, and for technological applications. In order to make the treatment self-contained, I first compile all important *geometric algebra* relationships, which are necessary for vector differential calculus. Then *differentiation by vectors* is introduced and a host of major vector differential and vector derivative relationships is proven explicitly in a very elementary step by step approach. The paper is thus intended to serve as reference material, giving details, which are usually skipped in more advanced discussions of the subject matter.
[1597] vixra:2006.0104 [pdf]
Coexistence Positive and Negative-Energy States in the Dirac Equation with One Electron
Solving the Dirac equation yields the concept of a spinor. The Dirac equation has two solutions, one with positive-energy states and one with negative-energy states. Conventionally, these two states have been interpreted as corresponding to electrons and positrons. This paper instead interprets the positive and negative states of the Dirac equation as two spinor particles contained in one electron, and examines the validity of this interpretation. Adapting a previous study (the 0-sphere electron model) to the Dirac equation's positive and negative solutions gives a new interpretation of the negative-energy states: the positive and negative mass states correspond to the radiation and absorption of a thermal potential energy, respectively, and the positive and negative momentum states can be described in terms of the simple harmonic oscillation of the virtual photon's kinetic energy.
[1598] vixra:2006.0102 [pdf]
Generalization of Brick Wall Method in Kerr-Newman Black Hole for Entropy Calculation
Using the brick wall method, we calculate the entropy of the Kerr-Newman black hole and arrive at the already well-established result. During the calculation we derive a generalized equation for the FNSR and FSR modes of the free energy, which can be used to evaluate the free energy of any black hole. We also show the detailed steps of our calculation to clarify how it was carried out.
[1599] vixra:2006.0101 [pdf]
Entropy Calculation Using Brick Wall Method in Kerr-Newman Ads Black Hole
Using the brick wall method, we calculate the entropy of the Kerr-Newman AdS black hole and arrive at the already well-established result, employing the generalized equation for the FNSR and FSR modes of the free energy.
[1600] vixra:2006.0086 [pdf]
Towards Geographic, Demographic, and Climatic Hypotheses Exploration on Covid-19 Spread: An Open Source Johns Hopkins University Time Series Normalisation and Integration Layer
Epidemiologists, scientists, statisticians, historians, data engineers and data scientists are working on descriptive models and theories to explain the COVID-19 expansion phenomenon, or on building predictive analytics models for learning the apex of the COVID-19 confirmed-case, recovered-case, and death time series curves. In the CRISP-DM life cycle, 75% of the time is consumed by the data preparation phase alone, putting a lot of pressure and stress on the scientists and data scientists building machine learning models. This paper aims to reduce data preparation effort by presenting a detailed data preparation repository, with shell and Python scripts for formatting, normalising, and integrating Johns Hopkins University COVID-19 daily data, via three normalisation user stories applying data preparation at the lexical, syntactic-semantic, and pragmatic levels, and four integration user stories covering geographic, demographic, climatic, and distance-based similarity dimensions, among others. The paper and its related open source repository will help data engineers and data scientists aiming to deliver results in an agile analytics life cycle adapted to the critical COVID-19 context.
[1601] vixra:2006.0081 [pdf]
Revisiting and Extending Kepler's Laws
Starting from Kepler's laws we can derive not only Newton's force (F) balance equation but also the energy (E) conservation equation. We can derive E=K+P, where K=kinetic energy, P=potential energy=-GMm/|r|, and E=constant. Note that F=m(d^2r/dt^2)=mass*acceleration and F_g=-(GMm/|r|^3)r=Newton's law of gravity. We also get dP/dt=-F_g.(dr/dt), a vector dot product; here r is the position vector and |r| its magnitude. Thus dE/dt=[F-F_g].(dr/dt), which holds even when E is not constant. If E=constant then dE/dt=[F-F_g].(dr/dt)=0, which in general means F-F_g is perpendicular to dr/dt, not always m(d^2r/dt^2)-F_g=0 as Newton's universal law of gravity is stated. Hence Newton's equation encompasses only a small subset of all the phenomena covered by the equation dE/dt=0. The equation F=F_g, i.e. m(d^2r/dt^2)=-(GMm/|r|^3)r, in that form is not even applicable to all 2-body problems in 2D. In general, F=some component of F_g. Since F_g=-grad(P), we also get that, in general, F=m(d^2r/dt^2)=some component of -grad(P); thus assuming F=-grad(P) is not valid. Determining which component of F_g causes the body to accelerate is non-trivial: free-body diagrams are of limited use, and the principle of least action (Lagrangian calculations) employs energy terms but in a much more complicated manner. We can achieve better results directly using the energy conservation equation. Further, we extend the analysis to Lagrange-type 3-body periodic orbit solutions with equilateral configuration and show that the Lagrangian/Newtonian method gives some sporadic, apparently unstable solutions, whereas the energy method provides the entire set of stable elliptical orbit solutions, including non-equilateral configurations.
With the energy method we can also derive a condition that determines whether the 3 bodies end up in an orbit with one center of revolution (as in Lagrange-type periodic orbits) or with two centers of revolution (as in the Sun-Earth-Moon system). We also note that the term inertia, coined by Galileo to explain the height-conserving property of balls rolling down inclined planes, has to be properly interpreted as energy; that is, inertia = energy. Finally, we point to the need to replace Newton's laws of motion (and gravity) by the energy conservation principle, together with the principles of angular momentum or angular velocity conservation.
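The energy bookkeeping the entry starts from (E = K + P with P = -GMm/|r|, conserved along a Newtonian orbit) can be checked numerically. The sketch below is a plain velocity-Verlet two-body integration in hypothetical units (GM = m = 1); it illustrates the conservation law only, not the authors' alternative energy method:

```python
import numpy as np

# Hypothetical units: G*M = 1, m = 1.  Velocity Verlet keeps E = K + P
# nearly constant along the orbit, the conservation law the entry starts from.
GM, m, dt = 1.0, 1.0, 1e-3
r = np.array([1.0, 0.0])
v = np.array([0.0, 1.1])          # slightly non-circular bound orbit

def acc(r):
    return -GM * r / np.linalg.norm(r) ** 3   # F_g/m = -(GM/|r|^3) r

def energy(r, v):
    return 0.5 * m * (v @ v) - GM * m / np.linalg.norm(r)   # E = K + P

E0 = energy(r, v)
a = acc(r)
for _ in range(20000):            # integrate 20 time units
    r = r + v * dt + 0.5 * a * dt ** 2
    a_new = acc(r)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new
print(abs(energy(r, v) - E0))     # small drift: energy is nearly conserved
```

A symplectic integrator is used deliberately: with plain Euler stepping the energy drift would swamp the conservation being demonstrated.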
[1602] vixra:2006.0070 [pdf]
Evidence of a Neutral Potential Surrounding the Earth
We examine the wave associated with the electron, and we highlight the problem with its relative velocity. The velocity of an electron is always measured relative to the laboratory, which gives the correct behaviour of the electron with respect to the law of Louis de Broglie. But to agree with this law, there must exist some interaction between the electron and the laboratory that allows the electron to modify its characteristics. The electron must therefore interact with a medium connected to the laboratory. Such a medium must be associated with the Earth, following it on its path through the Universe. It follows that the relativity theories of A. Einstein are wrong.
[1603] vixra:2006.0064 [pdf]
The Information Volume of Uncertain Information: (6) Information Multifractal Dimension
How to measure uncertainty in the open world is a popular topic in recent study. Many entropy measures have been proposed to address this problem, but most have limitations. In this series of papers, a method for measuring the information volume of a mass function is presented. The fractal property of the maximum information volume is shown in this paper, which indicates the inherent physical meaning of Deng entropy from the perspective of statistics. The results show the multifractal property of this maximum information volume. Several experimental results are given to support this perspective.
[1604] vixra:2006.0062 [pdf]
The Information Volume of Uncertain Information: (4) Negation
Negation is an important operation on uncertain information. Based on the information volume of the mass function, a new negation of the basic probability assignment is presented. The results show that negating the mass function increases the information volume. The negation converges to the situation in which Deng entropy is maximal, namely the high-order Deng entropy. If the mass function degenerates into a probability distribution, its negation also achieves the maximum information volume, where Shannon entropy is maximal. Another interesting result is that the situation of maximum Deng entropy has the same information volume as the whole uncertainty environment.
[1605] vixra:2006.0061 [pdf]
The Information Volume of Uncertain Information: (5) Divergence Measure
Dempster-Shafer evidence theory is an extension of probability theory that can describe uncertain information more reasonably. Divergence measure is an important concept in probability theory, so proposing a reasonable divergence measure has always been a research hot spot in evidence theory. Recently, Deng proposed the concept of information volume based on Deng entropy. It is interesting to note that, compared with the uncertainty measure given by Deng entropy, the information volume of Deng entropy contains more information, so it may be more reasonable to use it to represent uncertain information. On this basis, we combine the characteristics of the non-specificity measurement of Deng entropy and propose a new divergence measure. The new divergence measure not only satisfies the axioms of a distance measure but also has some advantages that cannot be ignored. In addition, when the basic probability assignment (BPA) degenerates into a probability distribution, the new divergence measure agrees with the traditional Jensen-Shannon divergence; if the mass function is assigned as a probability distribution, the proposed divergence degenerates into the Kullback-Leibler divergence. Finally, some numerical examples illustrate the efficiency of the proposed divergence measure of information volume.
[1606] vixra:2006.0053 [pdf]
On The Infinity of Twin Primes and other K-tuples
The paper uses the structure and mathematics of prime generators to show that there are infinitely many twin primes, proving the Twin Prime Conjecture, as well as establishing the infinitude of other k-tuples of primes.
[1607] vixra:2006.0046 [pdf]
An Improved Lower Bound of Heilbronn's Triangle Problem
Using the method of compression we improve on the current lower bound of Heilbronn's triangle problem. In particular, letting $\Delta(s)$ denote the minimal area of a triangle induced by $s$ points in a unit disc, we have the lower bound\begin{align}\Delta(s)\gg \frac{\log s}{s\sqrt{s}}.\nonumber \end{align}
[1608] vixra:2006.0043 [pdf]
New Bounds for the Stochastic Knapsack Problem
The knapsack problem is a problem in combinatorial optimization that seeks to maximize an objective function subject to a weight constraint. We consider the stochastic variant of this problem in which the value vector $\mathbf{v}$ remains deterministic, but the weight vector $\mathbf{x}$ is an $n$-dimensional vector drawn uniformly at random from $[0, 1]^{n}$. We establish a sufficient condition under which the summation-bound condition is almost surely satisfied, and we discuss the implications of this result for the deterministic problem.
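The flavor of the almost-sure claim can be illustrated with a small Monte Carlo experiment. The budget W below is a hypothetical choice, not the paper's exact summation-bound condition: with i.i.d. Uniform[0,1] weights, sum(x) concentrates around n/2, so a budget exceeding n/2 by a few standard deviations is met with overwhelming probability:

```python
import random

# Monte Carlo illustration (hypothetical parameters, not the paper's exact
# condition): with weights x_i ~ Uniform[0,1] i.i.d., sum(x) concentrates
# around n/2, so a budget W = n/2 + c*sqrt(n) holds with high probability
# (Hoeffding gives P[sum(x) - n/2 > c*sqrt(n)] <= exp(-2*c^2)).
random.seed(0)
n, c, trials = 100, 3.0, 10000
W = n / 2 + c * n ** 0.5
hits = sum(sum(random.random() for _ in range(n)) <= W for _ in range(trials))
print(hits / trials)   # empirical satisfaction rate, close to 1
```

With c = 3 the Hoeffding bound is exp(-18), so essentially every trial satisfies the budget; as n grows with W scaled the same way, the failure probability vanishes, which is the "almost surely" behaviour the abstract refers to.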
[1609] vixra:2006.0040 [pdf]
Accounting for the Impact of Media Coverage on Polling
This paper examines the feedback cycle between news ratings and electoral polling, and offers an algorithmic fix for the problem. The cycle hinges on overexposure of a candidate, which familiarizes their name to otherwise apathetic voters; the algorithm therefore weighs down exposure on a logarithmic scale, so that only increasingly important news passes through as coverage of a candidate inflates. This problem is a symptom of a deeper issue; the solution proposes to patch it for the present, as well as to offer insight into the machinations of the issue and thereby aid its understanding.
[1610] vixra:2006.0037 [pdf]
The Information Volume of Uncertain Information: (2) Fuzzy Membership Function
In fuzzy set theory, the fuzzy membership function (MF) describes the membership degree of elements in the universe of discourse. Deng entropy is an important tool for measuring the uncertainty of an uncertain set and has been widely applied in many fields. In this paper, we first propose a method to measure the uncertainty of a fuzzy MF based on Deng entropy. Next, we define the information volume of the fuzzy MF: by continuously separating the BPA of each element whose cardinality is larger than $1$ until convergence, the information volume of the fuzzy set can be calculated. When the hesitancy degree of a fuzzy MF is $0$, the information volume of the fuzzy membership function is identical to the Shannon entropy. Several examples and figures are expounded to illustrate the proposed method and definition.
[1611] vixra:2006.0035 [pdf]
The Information Volume of Uncertain Information: (3) Information Fractal Dimension
How to measure uncertainty in the open world is a popular topic in recent study. Many entropy measures have been proposed to address this problem, but most have limitations. In this series of papers, a method for measuring the information volume of a mass function is presented. The fractal property of the maximum information volume is shown in this paper, which indicates the inherent physical meaning of Deng entropy from the perspective of statistics. The results show a linear relationship between the maximum information volume and the probability scale. Several experimental results are given to support this perspective.
[1612] vixra:2006.0028 [pdf]
The Information Volume of Uncertain Information: (1) Mass Function
Given a probability distribution, its corresponding information volume is the Shannon entropy. However, how to determine the information volume of a given mass function is still an open issue. Based on Deng entropy, the information volume of a mass function is presented in this paper. Given a mass function, the corresponding information volume is larger than its uncertainty as measured by Deng entropy. The so-called Deng distribution is defined as the BPA condition of the maximum Deng entropy. The information volume of the Deng distribution is called the maximum information volume, which is larger than the maximum Deng entropy. In addition, both the total-uncertainty case and the Deng distribution have the same information volume, namely the maximum information volume. Some numerical examples illustrate the efficiency of the proposed information volume of a mass function.
[1613] vixra:2006.0016 [pdf]
Generalized Sedeonic Equations of Hydrodynamics
We discuss a generalization of the equations of hydrodynamics based on space-time algebra of sedeons. It is shown that the fluid dynamics can be described by sedeonic second-order wave equation for scalar and vector potentials. The generalized sedeonic Navier-Stokes equations for a viscous fluid and vortex flows are also discussed. The main peculiarities of the proposed approach are illustrated on the equations describing the propagation of sound waves.
[1614] vixra:2006.0014 [pdf]
Conditio Sine Qua Non
Aims: Different processes or events which are objectively given and real are equally one of the foundations of human life (necessary conditions) too. However, a generally accepted, logically consistent (bio)-mathematical description of these natural processes is still not in sight. Methods: Discrete random variables are analysed. Results: The mathematical formula of the necessary condition is developed. The impact of study design on the results of a study is considered. Conclusion: Study data can be analysed for necessary conditions.
[1615] vixra:2006.0009 [pdf]
Conservation Property of Standing Wave
The standing wave exists in a microwave resonator if the length of the resonator cavity is equal to multiple half-wavelengths of microwave. The stationary interference of standing wave will travel in another inertial reference frame. The vibrating pattern of the standing wave is conserved. The existence of nodes in all reference frames requires the wavelength of the microwave to be conserved in all inertial reference frames. The angular frequency of microwave is different in every reference frame. Hence, the apparent velocity of the microwave depends on the choice of reference frame while the elapsed time remains invariant in all reference frames.
[1616] vixra:2006.0002 [pdf]
An Artificial Intelligence-Enabled Multimedia Tool for Rapid Screening of Cervical Cancer
Cervical cancer is a major public health challenge. Further mitigation of cervical cancer can greatly benefit from the development of innovative and disruptive technologies for its rapid screening and early detection. The primary objective of this study is to contribute to this aim through large-scale screening by developing Artificial Intelligence-enabled intelligent systems, as they can support human cancer experts in making more precise and timely diagnoses. Our current study is focused on the development of a robust and interactive algorithm for the analysis of colposcope-derived images and a diagnostic tool/scale, namely OM - The Onco-Meter. This tool was trained and tested on 300 Indian subjects/patients, yielding 77% accuracy with a sensitivity of 83.56% and a specificity of 59.25%. OM - The Onco-Meter is capable of classifying cervigrams into cervical dysplasia, carcinoma in situ (CIS) and invasive cancer (IC). The programming language R has been used to implement and compute earth mover's distances (EMD) to computationally characterize the different disease labels associated with cervical cancer. Deployment of automated tools will facilitate early diagnosis in a noninvasive manner, leading to timely clinical intervention for cervical cancer patients upon detection at a Primary Health Care (PHC) centre. The tool developed in this study will aid clinicians in designing timely intervention strategies aimed at improving the clinical prognosis of patients.
[1617] vixra:2005.0292 [pdf]
The Ritva Blockchain: Enabling Confidential Transactions at Scale
The distributed ledger technology has been widely hailed as a breakthrough technology. It has realised a great number of application scenarios and improved the workflow of many domains. Nonetheless, there remain a few major concerns in adopting and deploying the distributed ledger technology at scale. In this white paper, we tackle two of them, namely throughput scalability and confidentiality protection for transactions. We learn from the existing body of research and build a scale-out blockchain platform, called RVChain, that champions privacy. RVChain takes advantage of trusted execution environments to offer confidentiality protection for transactions, and scales the throughput of the network in proportion to the number of network participants by supporting parallel shadow chains.
[1618] vixra:2005.0282 [pdf]
A New Criterion for Riemann Hypothesis or a True Proof?
There are tens of self-proclaimed proofs of the Riemann Hypothesis, and only 2 or 4 disproofs of it, in arXiv. I am adding to the status quo my very short and clear evidence, which uses the peer-reviewed achievement of Dr. Sole and Dr. Zhu, published just 4 years ago in a serious mathematical journal, INTEGERS.
[1619] vixra:2005.0279 [pdf]
Quantum Field Theory with Fourth-order Differential Equations for Scalar Fields
We introduce a new class of Higgs-type complex-valued scalar fields $U$ with Feynman propagator $1/p^4$ and consider the matching to the traditional fields with propagator $1/p^2$ from the viewpoint of effective potentials at tree level. With some particular postulations on convergence and causality, there is a wealth of potential forms generated by the fields $U$, such as the linear, logarithmic, and Coulomb potentials, which might serve as sources of effects such as confinement, dark energy, dark matter, electromagnetism and gravitation. Moreover, in some limit cases we obtain several deductions: a nonlinear Klein-Gordon equation, a linear QED, a mass spectrum with generation structure, and a seesaw mechanism on gauge symmetry and flavor symmetry. The propagator $1/p^4$ would also provide a possible way to construct a renormalizable theory of gravitation and to solve non-perturbative problems.
[1620] vixra:2005.0271 [pdf]
A Formula for the Number of (n − 2)-Gaps in Digital N-Objects
We provide a formula that expresses the number of (n − 2)-gaps of a generic digital n-object. The formula has the advantage of involving only a few simple intrinsic parameters of the object; it is obtained by a combinatorial technique based on incidence structure and on the notion of free cells. This approach seems suitable as a model for automatic computation, and it also allows us to find expressions for the maximum number of i-cells that bound, or are bounded by, a fixed j-cell.
[1621] vixra:2005.0269 [pdf]
Maximality Methods in Commutative Set Theory
Let f = w be arbitrary. Every student is aware that Kolmogorov's criterion applies. We show that S ≤ |ρR,C|. J. Sasaki [34] improved upon the results of R. Thomas by deriving subsets. The goal of the present paper is to compute irreducible, generic random variables.
[1622] vixra:2005.0267 [pdf]
Improved Estimate for the Prime Counting Function
Using some simple combinatorial arguments, we establish some new estimates for the prime counting function and its allied functions. In particular we show that \begin{align}\pi(x)=\Theta(x)+O\bigg(\frac{1}{\log x}\bigg), \nonumber \end{align}where \begin{align}\Theta(x)=\frac{\theta(x)}{\log x}+\frac{x}{2\log x}-\frac{1}{4}-\frac{\log 2}{\log x}\sum \limits_{\substack{n\leq x\\\Omega(n)=k\\k\geq 2\\2\not| n}} \frac{\log (\frac{x}{n})}{\log 2}.\nonumber \end{align}This is an improvement to the estimate \begin{align}\pi(x)=\frac{\theta(x)}{\log x}+O\bigg(\frac{x}{\log^2 x}\bigg)\nonumber \end{align}found in the literature.
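The leading-order relationship between pi(x) and theta(x)/log x that such estimates refine can be observed directly. A small sketch using only the standard definitions (it does not implement the paper's corrected Theta(x)):

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    is_p = bytearray([1]) * (n + 1)
    is_p[:2] = b'\x00\x00'
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = bytearray(len(is_p[i * i::i]))
    return [i for i in range(2, n + 1) if is_p[i]]

x = 10 ** 5
ps = primes_up_to(x)
pi_x = len(ps)                      # prime counting function pi(x); pi(10^5) = 9592
theta_x = sum(log(p) for p in ps)   # Chebyshev function theta(x)
# theta(x)/log x underestimates pi(x) by O(x/log^2 x), the error term in the
# literature estimate quoted above.
print(pi_x, theta_x / log(x))
```

Since every prime p ≤ x contributes log p / log x ≤ 1 to theta(x)/log x, that quantity is always below pi(x); the gap at x = 10^5 is on the order of x/log^2 x, consistent with the classical error term.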
[1623] vixra:2005.0266 [pdf]
Single Valued Neutrosophic Filters
In this paper we give a comprehensive presentation of the notions of filter base, filter and ultrafilter on single valued neutrosophic set and we investigate some of their properties and relationships. More precisely, we discuss properties related to filter completion, the image of neutrosophic filter base by a neutrosophic induced mapping and the infimum and supremum of two neutrosophic filter bases.
[1624] vixra:2005.0261 [pdf]
Theory that Predicts and Explains Data About Elementary Particles, Dark Matter, Early Galaxies, and the Cosmos
We develop and apply new physics theory. The theory suggests specific unfound elementary particles. The theory suggests specific constituents of dark matter. We apply those results. We explain ratios of dark matter amounts to ordinary matter amounts. We suggest details about galaxy formation. We suggest details about inflation. We suggest aspects regarding changes in the rate of expansion of the universe. The theory points to relationships between masses of elementary particles. We show a relationship between the strength of electromagnetism and the strength of gravity. The mathematics basis for matching known and suggesting new elementary particles extends mathematics for harmonic oscillators.
[1625] vixra:2005.0258 [pdf]
One page Proof of Riemann Hypothesis
There are tens of proofs of the Riemann Hypothesis, and 3 or 5 disproofs of it, in arXiv. I am adding to the status quo my proof, which uses the achievement of Dr. Zhu.
[1626] vixra:2005.0256 [pdf]
Newton's Limit Operator Has no Sense
Limits and infinitesimal numbers were invented by the fathers of science, like Newton and Leibniz. However, a hypothetical being from another star system could have developed more realistic mathematics [in my opinion, mathematics should be defined via the numbers of our fingers and the actions (like adding) performed with them]. In this note, I show a paradox in the current version of ``highest mathematics''.
[1627] vixra:2005.0255 [pdf]
The Mass-Gap in Quantum Chromodynamics and a Restriction on Gluon Masses
We prove that it is necessary to introduce non-zero gluon masses into the fundamental Lagrangian of Quantum Chromodynamics in order to describe the mass gap in the reaction of electron-positron annihilation into hadrons. A new restriction on the gluon masses is obtained. The renormalized theory with non-zero Lagrangian gluon masses is constructed.
[1628] vixra:2005.0247 [pdf]
The Strong Cosmic Censorship Conjecture May be Violated
Penrose's strong cosmic censorship conjecture asserts that the Cauchy horizon inside a dynamically formed black hole is unstable to the remnant matter field that falls into the black hole. The physical importance of this conjecture stems from the fact that it provides the necessary conditions for general relativity to be a truly deterministic theory of gravity. A recent paper by Hod provides a proof, based on Bekenstein's generalized second law of thermodynamics, that confirms the validity of the Penrose conjecture in curved black-hole spacetimes. Recently, an article of mine obtained interesting results about the superradiant stability of Kerr black holes, including some conclusions that violate the "no-hair theorem". We know that the phenomenon of black hole superradiance is a process of entropy reduction; connecting Hod's paper with mine, I found that the strong cosmic censorship conjecture may be violated.
[1629] vixra:2005.0237 [pdf]
Proof of the Beal Conjecture and Fermat Catalan Conjecture (Summary)
This article includes the theorems and lemmas used to prove the Beal conjecture and the Fermat-Catalan conjecture, through which we learn more about rational and irrational numbers. I think the method of proof will be useful for solving other math problems, and these questions need more research.
[1630] vixra:2005.0236 [pdf]
On the Pointwise Periodicity of Multiplicative and Additive Functions
We study the problem of estimating the number of points of coincidence of an idealized gap on the set of integers under a given multiplicative function $g:\mathbb{N}\longrightarrow \mathbb{C}$, respectively an additive function $f:\mathbb{N}\longrightarrow \mathbb{C}$. We obtain various lower bounds depending on the length of the period, by varying the worst growth rates of the ratios of their consecutive values.
[1631] vixra:2005.0224 [pdf]
A Note on Lattice Theory
A lattice is a partially ordered set with two operations defined on it that satisfy certain conditions. Lattice theory itself is a branch of abstract algebra. In this paper we present solutions to three classical problems in lattice theory. The solutions are by no means novel.
[1632] vixra:2005.0218 [pdf]
Using a Common Theme to Find Intersections of Spheres with Lines and Planes via Geometric (Clifford) Algebra
After reviewing the sorts of calculations for which Geometric Algebra (GA) is especially convenient, we identify a common theme through which those types of calculations can be used to find the intersections of spheres with lines, planes, and other spheres.
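For readers without a GA background, the sphere-line case reduces, in classical vector algebra, to a quadratic in the line parameter. A minimal sketch (this is the conventional counterpart of the computation, not the paper's GA derivation):

```python
import numpy as np

def line_sphere_intersections(a, d, c, R):
    """Points where the line p(t) = a + t*d (d a unit vector) meets the
    sphere |p - c| = R.  Substituting into |p - c|^2 = R^2 gives the
    quadratic t^2 + 2*t*(d.(a-c)) + |a-c|^2 - R^2 = 0."""
    m = a - c
    b = d @ m
    disc = b * b - (m @ m - R * R)   # quarter discriminant
    if disc < 0:
        return []                    # line misses the sphere
    ts = [-b - disc ** 0.5, -b + disc ** 0.5]
    return [a + t * d for t in ts]   # tangent line: the two points coincide

pts = line_sphere_intersections(np.array([-2.0, 0.0, 0.0]),  # point on line
                                np.array([1.0, 0.0, 0.0]),   # unit direction
                                np.zeros(3), 1.0)            # unit sphere
print(pts)   # [(-1, 0, 0), (1, 0, 0)]
```

The "common theme" the abstract describes plays the role that completing the square plays here: each intersection problem is reduced to the same algebraic normal form before solving.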
[1633] vixra:2005.0208 [pdf]
An Unintentional Repetition of the Ramanujan Formula for $\pi$, and Some Independent Mathematical Mnemonic Tools for Calculating with Exponentiation and with Dates
The paper consists of contentually unrelated sections, where each section could be one short paper. Section 1 deals with the process of an unintentional repetition of one of the Ramanujan formulas, which the author did not know beforehand; this process is interesting for estimating, for instance, the physical background for guessing dimensionless physical constants. Section 2 shows an approximation that helps in memorizing the square roots of the integers up to 10. Section 3 shows the specialities of the last digits of the squares of integers. Section 4 shows how the last two digits repeat for 2 raised to sequential integer powers, and how to find this out. Section 5 contains some mathematical peculiarities in calculating dates. All sections contain mnemonic methods to help us memorize and calculate. These sections belong to pedagogical and recreational mathematics; maybe even something more is here.
[1634] vixra:2005.0201 [pdf]
The abc Conjecture is False: The End of The Mystery
In this note, I give the proof that the abc conjecture is false in the case c > rad(abc), for 0 < \epsilon < 1, by presenting a counterexample that implies a contradiction for c very large.
[1635] vixra:2005.0200 [pdf]
3D Polytope Hulls of E8 4_21, 2_41, and 1_42
Using rows 2 through 4 of a unimodular 8x8 rotation matrix, the vertices of E8 4_21, 2_41, and 1_42 are projected to 3D and then gathered and tallied into groups by the norm of their projected locations. The resulting Platonic and Archimedean solid 3D structures are then used to study E8's relationship to other research areas, such as sphere packings in Grassmannian spaces, the use of E8 Eisenstein theta series in recent proofs of optimal 8D and 24D sphere packings, nested lattices, and quantum-basis critical parity proofs of the Bell-Kochen-Specker (BKS) theorem.
[1636] vixra:2005.0197 [pdf]
Electric-Field Induced Strange Metal States and Possible High-Temperature Superconductivity in Hydrogenated Graphitic Fibers
In this work, we study the effects of increasing the strength of the applied electric field on the charge transport of hydrogenated graphitic fibers. Resistivity measurements were carried out for direct currents in the nA-mA range and for temperatures from 1.9 K to 300 K. The high-temperature non-ohmic voltage-current dependence is well described by the nonlinear random resistor network model applied to systems that are disordered at all scales. The temperature-dependent resistivity shows linear, step-like transitions from insulating to metallic states as well as plateau features. As more current is sourced, the fiber becomes more conductive and the current density goes up. The most interesting feature is observed in high electric fields: as the fiber is cooled, the resistivity first decreases linearly with temperature and then enters a plateau region at a temperature T ~ 260-280 K that is field-independent. These observations on a system made of carbon, hydrogen, nitrogen, and oxygen atoms suggest possible electric-field-induced superconductivity with a high critical temperature, which was predicted from studying the role of chirality in the origin of life.
[1637] vixra:2005.0182 [pdf]
Estimated Life Expectancy Impact of Sars-Cov-2 Infection on the Entire German Population
The life expectancy of the currently living German population is calculated per age and as a weighted average. The same calculation is repeated after considering everyone infected with, and potentially killed by, SARS-CoV-2 within one year, given the current age-dependent lethality estimates from a study at Imperial College London [1]. For an average life expectancy of 83.0 years in the current population, the reduction due to SARS-CoV-2 infection amounts to 2.0 (1.1-3.9) months. The individual values show a maximum of 7.7 (4.4-15.2) months for a 70-year-old. People below age 50 lose less than one month on average.
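The weighted-average calculation can be sketched as follows; the population counts, remaining life expectancies, and lethality values below are illustrative placeholders, not the study's data:

```python
# Sketch of the weighted-average life-expectancy reduction.
# All numbers below are made-up placeholders, not the study's data.
population = {30: 1000, 50: 800, 70: 600, 85: 300}      # people per age group
remaining_years = {30: 53.0, 50: 33.0, 70: 15.5, 85: 6.5}
ifr = {30: 0.0003, 50: 0.004, 70: 0.05, 85: 0.15}       # age-dependent lethality

total_pop = sum(population.values())
# Expected life-years lost if everyone were infected within one year:
years_lost = sum(population[a] * ifr[a] * remaining_years[a] for a in population)
avg_loss_months = 12.0 * years_lost / total_pop
print(f"average life-expectancy reduction: {avg_loss_months:.2f} months")
```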
[1638] vixra:2005.0173 [pdf]
The Exact Value of the Cosmological Constant
Aim: The theoretical value of the cosmological constant Lambda and the problem associated with it are reviewed again. Methods: The stress-energy tensor was geometrized. Results: Based on the geometrized stress-energy tensor, it was possible to calculate the exact value of the cosmological constant. Conclusion: The theoretical value of the cosmological constant Lambda can be calculated very precisely.
[1639] vixra:2005.0172 [pdf]
Wave Property in Non-inertial Reference Frame
The velocity of a wave depends on the choice of reference frame. The relative motion between the rest frames of the wave source, the observer, and the wave determines the apparent wavelength and apparent period in each rest frame. The apparent period is different from the original period unless the wave source and the observer occupy the same rest frame. The apparent wavelength is identical to the original wavelength unless the relative motion between the wave source and the wave is non-inertial. The observed wavelength is the same for all observers. A time-varying wavelength is an indication that a remote star is in non-inertial motion during star birth. The inertial force corresponding to the non-inertial relative motion between the rest frames cannot be identified with any fundamental force. A neutral object in non-inertial motion is not attracted by the electric force. A massless microwave in a non-inertial reference frame is not attracted by the gravitational force.
[1640] vixra:2005.0168 [pdf]
Structure Model of Tin Nucleus
After the nuclei of oxygen O-16 and calcium Ca, which are the first and second upper-order ones, the tin nucleus Sn is the third upper-order nucleus. Its structure is based on the successive conversions of iron Fe and nickel Ni-60 into the tin nucleus Sn. From this third upper-order nucleus the fourth one is constructed (orion nucleus Or-307), as forecast by the unified theory of dynamic space. The atomic numbers Z of the above four upper-order nuclei are the so-called four "magic numbers", i.e. Z1=8, Z2=8x2.5=20, Z3=20x2.5=50 and Z4=50x2.5=125, according to the mirror symmetry (Figs 4 and 5). It is noted that this forecast orion nucleus Or-307 with an atomic number Z4=125 corresponds to the "hypothetical unbihexium Ubh", whose atomic number, however, is Z=126. The number Z4=125 looks symmetrical rather than magical, due to the 2.5 factor.
[1641] vixra:2005.0163 [pdf]
Mathematical Representation and Formal Proofs of Card Tricks
Card tricks can be entertaining to audiences. Magicians apply them, but an in-depth knowledge of why they work the way they do is necessary, especially when constructing new tricks. Mapping a trick to its corresponding mathematical operations can be helpful in analysis, and the vice-versa process can help create new tricks and make them accessible to magicians.
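One classic example of such a mapping (our illustration, not necessarily one of the paper's tricks): a perfect out-faro shuffle acts on card positions as i ↦ 2i mod 51, so eight shuffles restore a 52-card deck because 2^8 ≡ 1 (mod 51).

```python
def out_faro(deck):
    """Perfect out-shuffle: cut the deck in half and interleave,
    keeping the original top card on top."""
    half = len(deck) // 2
    return [card for pair in zip(deck[:half], deck[half:]) for card in pair]

deck = list(range(52))
state = deck
for _ in range(8):
    state = out_faro(state)
print(state == deck)  # True: eight out-shuffles restore the original order
```

Simulating the shuffle and analyzing the permutation 2i mod 51 give the same answer, which is exactly the kind of trick-to-arithmetic correspondence the abstract describes.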
[1642] vixra:2005.0160 [pdf]
Effect of Ensembling on ANLI Benchmark
The tremendous achievement of reaching fairly high success metrics on several NLI datasets raised eyebrows, calling into question the real value of these numbers. Research papers started to appear with comprehensive analyses of what these models really learn and of how easily they can be made to fail with small syntactic and semantic changes in the input. In particular, the ANLI benchmark is an example of a more challenging NLI task, intended to measure a model's comprehension of deeper context. The relative success of transformer-based models on ANLI benchmarks was already reported by Nie et al., 2019. Given the challenging nature of the iterative dataset formation, individual models have more difficulty extracting the underlying relationship between the context-hypothesis pair and the target. Ensembles of these individual models may have a higher potential to achieve better performance when the individual performances are that far from the equivalent ones on the SNLI and MNLI tasks. On top of that, making controlled variations of the inputs and tracking the changes in model behavior gives indications of the strength and robustness of the learning process.
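A minimal sketch of one common ensembling strategy, majority voting over per-model label predictions (the paper's specific ensembling method may differ; the model outputs below are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label predictions by majority vote.
    `predictions` is a list of lists: one list of labels per model."""
    ensembled = []
    for labels in zip(*predictions):
        ensembled.append(Counter(labels).most_common(1)[0][0])
    return ensembled

# Three hypothetical NLI models predicting entailment/neutral/contradiction:
model_a = ["e", "n", "c", "e"]
model_b = ["e", "c", "c", "n"]
model_c = ["n", "n", "c", "e"]
print(majority_vote([model_a, model_b, model_c]))  # ['e', 'n', 'c', 'e']
```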
[1643] vixra:2005.0159 [pdf]
Quantum Isometries and Noncommutative Geometry
The space $\mathbb C^N$ has no free analogue, but we can talk instead about the free sphere $S^{N-1}_{\mathbb C,+}$, as the manifold defined by the equations $\sum_ix_ix_i^*=\sum_ix_i^*x_i=1$. We discuss here the structure and hierarchy of the submanifolds $X\subset S^{N-1}_{\mathbb C,+}$, with particular attention to the manifolds having an integration functional $tr:C(X)\to\mathbb C$.
[1644] vixra:2005.0131 [pdf]
Planck Length and Speed of Gravity (Light) from Gravity Observations Only, Without Any Knowledge of G, h, or c
For more than a hundred years, it has been assumed that one needs to know the Newton gravitational constant G, the Planck constant h, and the speed of light c to find the Planck length. Here we demonstrate that the Planck length can be found without any knowledge of G, h, or c, simply by observing the change in the frequency of a laser beam in a gravity field at two altitudes. When this is done, we also show that the speed of light (gravity) can easily be extracted from any observable gravity phenomenon. Further, we show that all observable gravity phenomena can be predicted using just these two constants, in addition to one variable that depends on the size of the gravitating mass and the distance from the center of the gravitating object. This lies in contrast to the standard theory, which holds that we need the three constants Max Planck suggested were the important universal constants, namely G, h, and c; in that formulation, we also need a variable for the mass size and the radius. Based on our new findings, we get both a reduction in the number of constants required and a simplification in understanding gravity that is directly linked to the Planck scale. We discuss how this has a number of important implications that could even constitute a breakthrough in unifying quantum mechanics with gravity. Our analysis strongly indicates that standard physics uses two different mass definitions without being actively aware of it. The standard kg mass is used in all non-gravitational physics. Apparently, we are using the same mass in gravity, but we claim that the more complete mass is hidden in the multiplication of G and M. Based on this view, we will see that only two universal constants are needed, namely c and lp, to do all gravity predictions, compared to G, h, and c in the standard view of physics. In order to unify gravity with quantum mechanics, we need to use this "embedded" mass definition from gravity, which also impacts the rest of physics.
Since 1922, a series of physicists have thought that the Planck length would play a major role in making progress in the understanding of gravity, particularly in the hope of unifying quantum mechanics with gravity. Although there have been a series of attempts to incorporate the Planck length in quantum gravity, little theoretical progress has been accomplished. However, with this recent discovery, we have reasons to think that a piece of the puzzle has emerged. We will continue our analysis and welcome other researchers to scrutinize our findings over time before drawing final conclusions.
[1645] vixra:2005.0130 [pdf]
On the Existence of Prime Numbers in Constant Gaps
This paper studies the existence of prime numbers on constant gaps, establishing a lower bound for the number of consecutive constant gaps for which the existence of some prime number contained in them is necessary.
[1646] vixra:2005.0126 [pdf]
Detecting a Valve Spring Failure of a Piston Compressor with the Help of Vibration Monitoring
The article presents problems related to vibration diagnostics in reciprocating compressors. This paper evaluates several digital signal processing techniques, such as spectrum calculation with the Discrete Fourier Transform (DFT), the Continuous Wavelet Transform (CWT), and segmented analysis, for detecting a spring failure in a reciprocating compressor valve with the help of vibration monitoring. An experimental investigation was conducted to collect data from the compressor with both a faultless valve and a valve with a spring failure. Three 112DV1 vibration acceleration probes manufactured by TIK were mounted on the cylinder of the compressor. The keyphasor probe was mounted on the compressor's flywheel. The signal of the vibration acceleration probe mounted on the top of the cylinder was used for condition monitoring and fault detection of the valve. The TIK-RVM monitoring and data acquisition system was used for gathering the signal samples from the probes. The sampling frequency was 30193.5 Hz, and the signal length was 65535 samples. To imitate the spring fault, the exhaust valve spring was replaced by a shortened one with the same stiffness. As can be seen from the signal processing results in the article, the techniques used show quite different results for the cases of the normal valve spring and the short one. It seems that for this type of compressor and valve, a valve spring failure can be quite reliably detected with the help of vibration monitoring. To see whether this holds for other compressor and valve types, additional experiments are needed.
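The first of the listed techniques, spectrum calculation via the DFT, can be sketched as follows; the signal below is a synthetic stand-in (a tone plus noise), not the paper's probe data, though the sampling parameters match those reported:

```python
import numpy as np

fs = 30193.5           # sampling frequency reported in the paper, Hz
n = 65536              # power-of-two length close to the paper's 65535 samples
t = np.arange(n) / fs

# Synthetic stand-in for a vibration-acceleration signal: a 500 Hz
# valve-impact tone plus noise (the real data comes from the 112DV1 probes).
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 500.0 * t) + 0.1 * rng.standard_normal(n)

spectrum = np.abs(np.fft.rfft(signal)) / n
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant component near {peak:.1f} Hz")
```

Comparing such spectra for the healthy and shortened springs is the kind of side-by-side evaluation the abstract describes.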
[1647] vixra:2005.0125 [pdf]
A Novel Space-Time Transformation Revealing the Existence of Mirror Universe
The special relativity (SR) is proposed based on the Lorentz transformation (LT) which points out the relationship between space and time. However, the LT intuitively assumes that the different dimensions in space are independent, i.e., the motion of a body in x-direction doesn't affect its position change in y-direction. By considering the correlation between spatial dimensions, a novel and elegant space-time transformation is deduced based on the principle of constant speed of light. The new transformation not only indicates traditional relativistic effects, but also reveals a new one, called as transverse dilatation. More importantly, the transformation suggests that the full universe could be a four-dimensional complex space-time with extra mirror universe. Subsequently, several fundamental and challenging physics problems are reasonably explained, such as the nature of electromagnetic waves, the spooky quantum entanglement, the ghostly dark matter(DM) and the center of black hole.
[1648] vixra:2005.0120 [pdf]
Natural Way to Overcome Catastrophic Forgetting in Neural Networks
Not so long ago, a method was discovered that successfully overcomes the catastrophic forgetting of neural networks. Although we know of cases where this method has been used to preserve skills when adapting pre-trained networks to particular tasks, it has not yet gained widespread adoption. In this paper, we propose an alternative method for overcoming catastrophic forgetting based on the total absolute signal passing through each connection in the network. This method has a simple implementation and seems to us essentially close to the processes occurring in the brains of animals that preserve previously learned skills during subsequent learning. We hope that the ease of implementation of this method will encourage its wide application.
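Our reading of the proposal can be sketched as a quadratic penalty whose per-weight importance is the accumulated absolute signal through each connection; all names and numbers here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((4, 3))          # weights of one toy layer
importance = np.zeros_like(w)

# Task-A training loop (schematic): accumulate |input_i * w_ij| as the
# total absolute signal that passed through each connection (i, j).
for _ in range(100):
    x = rng.standard_normal(4)
    importance += np.abs(x[:, None] * w)

w_star = w.copy()                        # weights after learning task A

def penalty(w_new, lam=0.1):
    """Penalize moving important weights away from their task-A values
    while training a subsequent task."""
    return lam * np.sum(importance * (w_new - w_star) ** 2)

print(penalty(w_star))            # 0.0 at the task-A solution
print(penalty(w_star + 0.5) > 0)  # True: moving weights is penalized
```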
[1649] vixra:2005.0103 [pdf]
Superconductivity in Hydrogenated Graphites
We report transport and magnetization measurements on graphites that have been hydrogenated by intercalation with an alkane (octane). The temperature-dependent electrical resistivity shows anomalies manifested as reentrant insulator-metal transitions. Below T ∼ 50 K, the magnetoresistance data show both antiferromagnetic (AFM) and ferromagnetic (FM) behavior as the magnetic field is decreased or increased, respectively. The system is possibly an unconventional magnetic superconductor. The irreversibility observed in the field-cooled vs. the zero-field-cooled data for a sufficiently high magnetic field suggests that the system might enter a superconducting state below Tc ∼ 50 K. Energy gap data are obtained from nonlocal electric differential conductance measurements. An excitonic mechanism is likely driving the system to the superconducting state below the same T ∼ 50 K, where the gap is divergent. We find that the hydrogenated carbon fiber is a multiple-gap system with critical temperature estimates above room temperature. The temperature dependence of the superconducting gap follows the flat-band energy relationship, with the flat-band gap parameter increasing linearly with temperature above Tc ∼ 50 K. Thus, we find that either a magnetic or an electric field can drive this hydrogenated graphitic system to a superconducting state below Tc ∼ 50 K. In addition, AF spin fluctuations create pseudogap states above Tc ∼ 50 K.
[1650] vixra:2005.0100 [pdf]
An Agent-Based Control System for Wireless Sensor and Actuator Networks
This paper proposes a novel MIMO control system that combines Distributed Control Systems (DCS) and Centralized Control Systems (CCS). Unlike DCS and CCS, which have several drawbacks such as cost and delay, the proposed system is designed to have local and global controllers simultaneously. This MIMO control system has significant advantages over the two traditional systems in implementation, computation power reduction, cost reduction, performance, and the problems that occur in addressing the system connections in DCS for Wireless Sensor Networks and the Internet of Things. The proposed system is modeled as a Multi-Agent System (MAS) implemented in the osBrain MAS framework in Python.
[1651] vixra:2005.0089 [pdf]
The Theoretical Average Encoding Length for Micro-States in Boltzmann System Based on Deng Entropy
Because of its good performance in handling uncertainty, Dempster-Shafer evidence theory (evidence theory) has been widely used. Recently, a novel entropy, named Deng entropy, was proposed in evidence theory as a generalization of Shannon entropy. Deng entropy and the maximum Deng entropy have been applied in many fields due to their efficiency and reliability in measuring uncertainty. However, the maximum Deng entropy lacks a proper explanation in physics, which limits its further application. Thus, in this paper, with respect to thermodynamics and Shannon's source coding theorem, the theoretical average encoding length for micro-states in a Boltzmann system based on Deng entropy is proposed, which is a possible physical interpretation of the maximum Deng entropy.
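For readers unfamiliar with Deng entropy, a minimal sketch of the standard formula it generalizes from Shannon entropy, E_d(m) = -Σ m(A) log2(m(A) / (2^|A| - 1)), evaluated on two toy mass functions (our illustration, not the paper's examples):

```python
from math import log2

def deng_entropy(m):
    """Deng entropy of a mass function m: dict mapping frozenset -> mass.
    E_d(m) = -sum over focal sets A of m(A) * log2( m(A) / (2^|A| - 1) )."""
    return -sum(mass * log2(mass / (2 ** len(A) - 1))
                for A, mass in m.items() if mass > 0)

# With mass only on singletons, Deng entropy reduces to Shannon entropy:
m_singletons = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(deng_entropy(m_singletons))   # 1.0 bit

# Mass on a multi-element focal set yields extra uncertainty:
m_compound = {frozenset({"a", "b"}): 1.0}
print(deng_entropy(m_compound))     # log2(3), about 1.585 bits
```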
[1652] vixra:2005.0081 [pdf]
My Understanding of Stagnation in Foundation of Physics
Sabine Hossenfelder argues in her blog \cite{Hossenfelder} that the present situation in the foundations of physics should be called not crisis but stagnation. I argue that the main reason for the stagnation is that quantum theory inherited from the classical one several notions which should not be present in quantum theory. In particular, quantum theory should not involve the notion of a space-time background and, since nature is discrete and even finite, quantum theory should not be based on classical mathematics involving the notions of the infinitely small/large and continuity. I discuss uncertainty relations, the paradox with observation of stars, symmetry on the quantum level, cosmological acceleration, gravity and particle theory. My main conclusion is that the most general quantum theory should be based on finite mathematics and, as a consequence: {\bf Mathematics describing nature at the most fundamental level involves only a finite number of numbers, while the notions of limit and infinitely small/large and the notions constructed from them (e.g. continuity, derivative and integral) are needed only in calculations describing nature approximately}.
[1653] vixra:2005.0076 [pdf]
The Fermat Classes and the Proof of Beal Conjecture
After 374 years, the famous Fermat-Wiles theorem was demonstrated in 150 pages by A. Wiles. The purpose of this article is to give proofs of both Fermat's Last Theorem and the Beal conjecture by using the Fermat class concept.
[1654] vixra:2005.0074 [pdf]
The (-1)-Reconstruction of Symmetric Graphs with at Least 3 Elements
In this article, by algebraic and geometrical techniques, I give a proof of the famous Ulam conjecture on the (-1)-reconstruction of symmetric graphs with at least 3 elements, conjectured in 1942, although it was published only in 1960.
[1655] vixra:2005.0073 [pdf]
Adjoint Action on Graphs and the Proof of the Conjecture P=NP
I study the link between the adjoint action and the Hamiltonian cycles in a symmetric graph. Then, by a simple algebraic resolution of a system of equations in several variables, I find all the Hamiltonian cycles of the graph. Finally, I apply the results found to give an algorithm of order $ \mathcal{ O } (n ^ 3 ) $ that quickly gives all the Hamiltonian cycles with their distances. This gives a proof of the conjecture $ P = NP $.
[1656] vixra:2005.0066 [pdf]
Lukács's Acquired Characters. The Problem of Biological Non-Assumption in the Humanities, IV
In this brief note I try to show a simple case of the use of science with ideological, political or sectarian interests by those who, despite holding prejudices, say they love and know it. \textit{That} case is the case of Lukács in his work on the criticism of the irrationalism that emerged in the 18th and 19th centuries, in which he shows a deep scientific ignorance as well as a persistent and unpleasant defense of the ideology he professes. If someone, while affirming that science is valid knowledge and using it in his discourse and doctrine, advocates setting aside the recognized scientific methodology (burning the ship of demarcation) whenever it is contrary to his creed, and simultaneously remains \textit{ignoramus et ignorabimus} about the essentials of contemporary scientific knowledge, then the result of his convictions and litanies, with whatever emphasis or insistence others allow him, contributes absolutely nothing to science and confuses the clueless.
[1657] vixra:2005.0059 [pdf]
The Special Functions and the Proof of the Riemann Hypothesis
By studying the $ \circledS $ function, whose integer zeros are the prime numbers, and being inspired by the article [2], I give a new proof of the Riemann hypothesis.
[1658] vixra:2005.0044 [pdf]
On the Properties of the Hessian Tensor for Vector Functions
In this paper some properties of, and the chain rule for, the Hessian tensor of combined vector functions are derived. We derive expressions for H(T + L), H(aT), and H(T ◦ L) (the chain rule for Hessian tensors) and show some specific examples of the chain rule in certain types of composite maps.
[1659] vixra:2005.0043 [pdf]
The Zitterbewegung of Planets and Moon from the String Gravity
The string model of the gravitational force was proposed by the author 40 years ago (Pardy, 1980; 1996). In this model the string mediates the gravitational interaction between two gravitating bodies. It reproduces the Newtonian results in the first-order approximation and predicts, in the higher-order approximations, the existence of oscillations of the massive bodies interacting through the string. In the case of the Moon, this can easily be verified by the NASA laser measurements.
[1660] vixra:2005.0004 [pdf]
Assuming c<rad^2(abc): A Proof of the abc Conjecture
In this paper, assuming the conjecture $c<rad^2(abc)$ is true, I give, using elementary calculus, a proof of the $abc$ conjecture, proposing the constant $K(\epsilon)$. Some numerical examples are given.
[1661] vixra:2004.0694 [pdf]
Specification of the Photon
A derivation of continuously differentiable, fluctuating 3-dimensional vector fields as generalized Maxwell fields leads to the identification of Einstein's space as the result of a deformation of a Euclidean space, and of the fluctuating hypersurface of Einstein's space as gravitational wave propagation. The consequence is the union of the Maxwell field and the gravitational field, which leads to 1. the explanation of the photon and its formation by describing the detailed quantization process, and 2. the characterization of the photon as resulting from the deformation movement defined at a point, which from there screws through space in one direction at the speed of light. These are discussions that are beyond the range of quantum mechanics and quantum field theory because of the uncertainty relation, although such connections seem qualitatively obvious. Through the described unification, electromagnetism is led directly back to the most fundamental terms of physics, space and time. Last but not least, the importance of the Einstein equations for microphysics is proved.
[1662] vixra:2004.0690 [pdf]
A^x + B^y = C^z Part 2: Another Version of my Theorem and Infinite Ascent
We give another version of the theorem presented in the previous article, the theorem leading to the proof of the Beal conjecture and the Fermat-Catalan conjecture. We also give a view toward proving whether or not the equation has infinitely many integer solutions, related to parametric solutions and infinite ascent.
[1663] vixra:2004.0677 [pdf]
Reflections on the Foundations of Quantum Mechanics
In this paper the quantised version of Newton's Second Law is derived assuming merely the existence of de Broglie matter-waves and their basic properties. At the same time we keep an eye towards interpretations of quantum mechanics and will realise that the two most different interpretations (the Copenhagen interpretation and the de Broglie-Bohm theory) owe their difference to two fundamentally different approaches to `Harmonisation'. In this regard we shall see that the guiding equation of the de Broglie-Bohm theory currently found in the literature is not the most complete equation possible; as a result we answer one of the important questions in interpreting quantum mechanics, namely `when does the concept of a classical path (trajectory) make sense in quantum mechanics?' Moreover, in light of special-relativistic considerations we shall easily see that in the Number-Division approach, i.e. that of de Broglie-Bohm, the wave operator no longer appears, making the application of Clifford algebras (Dirac's `square root' of the wave operator) impossible.
[1664] vixra:2004.0658 [pdf]
The Geometrization of Quantum Mechanics, the Nonlinear Klein-Gordon Equation, Finsler Gravity and Phase Spaces
The Geometrization of Quantum Mechanics proposed in this work is based on the postulate that the quantum probability density can $curve$ the classical spacetime. It is shown that the gravitational field produced by $smearing$ a point-mass $M_o$ at $ r = 0$ throughout all of space (in an spherically symmetric fashion) can be interpreted as the gravitational field generated by a self-gravitating anisotropic fluid droplet of mass density $ 4 \pi M_o r^2 \varphi^* ( r ) \varphi ( r ) $ and which is sourced by the $probability$ $cloud$ (associated with a spinless point-particle of mass $ M_o$) $permeating$ a $3$-spatial domain region $ {\cal D}_3 = \int 4 \pi r^2 dr $ at any time $ t $. Classically one may smear the point mass in any way we wish leading to arbitrary density configurations $ \rho (r ) $. However, Quantum Mechanically this is $not$ the case because the radial mass configuration $ M (r) $ must obey a key third order nonlinear differential equation (nonlinear extension of the Klein-Gordon equation) displayed in this work and which is the static spherically symmetric relativistic analog of the Newton-Schr\"{o}dinger equation. We conclude by extending our proposal to the Lagrange-Finsler and Hamilton-Cartan geometry of (co) tangent spaces and involving the relativistic version of Bohm's Quantum Potential. By further postulating that the quasi-probability Wigner distribution $W(x,p)$ $curves$ phase spaces, and by encompassing the Finsler-like geometry of the cotangent-bundle with phase space quantum mechanics, one can naturally incorporate the $noncommutative$ and non-local Moyal star product (there are also non-associative star products as well). To conclude, Phase Space is the arena where to implement the space-time-matter unification program. It is our belief this is the right platform where the quantization $of$ spacetime and the quantization $in$ spacetime will coalesce.
[1665] vixra:2004.0578 [pdf]
How to Read Faces Without Looking at Them
Face reading is the most intuitive aspect of emotion recognition. Unfortunately, digital analysis of facial expression requires digitally recording personal faces. As emotional analysis is particularly required in more poised scenarios, capturing faces becomes a gross violation of privacy. In this paper, we use the concept of compressive analysis introduced in [1] to conceptualise a system which compressively acquires faces in order to ensure unusable reconstruction, while allowing for acceptable (and adjustable) accuracy in inference.
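A minimal numerical sketch of compressive acquisition (our illustration; the compressive-analysis operator of [1] is not reproduced here): with far fewer random measurements than pixels, naive reconstruction of the face is poor, while the measurements themselves remain available for inference.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 64                 # image dimension vs. number of measurements

x = rng.random(n)                                # stand-in for a vectorized face image
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix

y = phi @ x                     # compressive measurements: m << n numbers

# With m << n the inverse problem is badly underdetermined, so a naive
# least-squares "reconstruction" recovers little of the original image,
# which is exactly the privacy property the system relies on.
x_hat = np.linalg.pinv(phi) @ y
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(rel_err > 0.3)            # True: reconstruction is unusable
```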
[1666] vixra:2004.0538 [pdf]
Stirner's Impossible Reciprocal Altruism. The Problem of Biological Non-Assumption in the Humanities, III
Intellectual blindness in those who believe they hold the fleece of legitimacy is the fundamental reason for advising a rigorous, scientifically based epistemology of human relations, including, or above all, those contained in political theories, whose object is mere superstition, as is known. Stirner's infrequent political position, individualist or egoist anarchism, despite some of his accurate analyses, does not escape the same criticism; in essence, his plot adventures are empty statements, linguistic games, lack of clarity, the same idealistic dogmatism that he constantly attacks; he poetizes everywhere and, even basing his cause on nothing, forgets that he is a living being. Is that nothing?
[1667] vixra:2004.0507 [pdf]
Structure Model of Calcium Nucleus
After the oxygen nucleus O-16, which is the first upper-order nucleus, the calcium nucleus Ca is the second upper-order one. Its structure is based on the successive conversions of fluorine F, magnesium Mg and silicon Si-28 into the calcium nucleus Ca. From this second upper-order nucleus the third one is constructed (tin nucleus Sn), and from the third the fourth one (orion nucleus Or-307), according to the mirror symmetry. The atomic numbers Z of the above four upper-order nuclei are the so-called four "magic numbers", i.e. Z1=8, Z2=8x2.5=20, Z3=20x2.5=50 and Z4=50x2.5=125. It is noted that this orion nucleus Or-307 with a differential atomic number Z=125 (unified theory of dynamic space) corresponds to the "hypothetical unbihexium Ubh", whose atomic number is Z=126 (Nuclear Physics). However, the number 125 looks symmetrical rather than magical, due to the 2.5 factor (Fig. 5).
[1668] vixra:2004.0503 [pdf]
A Causally Connected Superluminal Natario Warp Drive Spacetime ???
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, one of the major drawbacks concerning warp drives is the problem of the Horizons (causally disconnected portions of spacetime), in which an observer in the center of the bubble can neither signal nor control the front part of the bubble. The geometrical features of the Natario warp drive are the ones required to overcome this obstacle, at least in theory. The behavior of a photon sent to the front of the warp bubble in the case of a Natario warp drive with variable velocity is the main purpose of this work. We present the behavior of a photon sent to the front of the bubble in the Natario warp drive in the $1+1$ spacetime with both constant and variable velocities, using quadratic forms and the null-like geodesics $ds^2=0$ of General Relativity, and we provide the step-by-step mathematical calculations in order to outline the final results of our work, which are the following: for the case of fixed velocities the Horizon exists, in agreement with the current scientific literature, but for the case of variable velocities the Horizon does not exist at all. Due to the extra terms in the Natario vector that affect the whole spacetime geometry, the solution with variable velocities gives different results when compared to the fixed-velocity solution. We present our results using step-by-step mathematics in order to better illustrate our point of view.
[1669] vixra:2004.0479 [pdf]
Finding the Planck Length Multiplied by the Speed of Light Without Any Knowledge of G, c, or h, Using a Newton Force Spring
In this paper, we show how one can find the Planck length multiplied by the speed of light from a Newton force spring with no knowledge of the Newton gravitational constant G, the speed of light c, or the Planck constant h. This is remarkable, as for more than a hundred years, modern physics has assumed that one needs to know G, c, and the Planck constant in order to find any of the Planck units. We also show how to find other Planck units using the same method. To find the Planck time and the Planck length, one also needs to know the speed of light. To find the Planck mass and the Planck energy in their normal units, we need to know the Planck constant, something we will discuss in this paper. For these measurements, we do not need any knowledge of the Newton gravitational constant. It can be shown that the Planck length times the speed of light requires less information than any other Planck unit; in fact, it needs no knowledge of any fundamental constant to be measured. This is a revolutionary concept and strengthens the case for recent discoveries in quantum gravity theory completed by Haug [3].
[1670] vixra:2004.0478 [pdf]
Sedeonic Generalization of London Equations
We discuss the generalization of phenomenological equations for electromagnetic field in superconductor based on algebra of space-time sedeons. It is shown that the combined system of London and Maxwell equations can be reformulated as a single sedeonic wave equation for the field with nonzero mass of quantum, in which additional conditions are imposed on the scalar and vector potentials, relating them to the deviation of charge density and currents in the superconducting phase. Also we considered inhomogeneous equations including external sources in the form of charges and currents of the normal phase. In particular, a screening of the Coulomb interaction of external charges in a superconducting media is discussed.
[1671] vixra:2004.0472 [pdf]
Bayesian Updating Quaternion Probability
The quaternion is an effective tool for evaluating uncertainty, and it has been studied widely. However, what the quaternion probability is remains an open question. This paper proposes the quaternion probability, which extends classical probability and complex probability with the aid of quaternions. The quaternion probability can carry classical probability theory into four-dimensional space. Based on the quaternion probability, quaternion probability multiplication is proposed, a method of multiplication conforming to the law of quaternion multiplication. In the Bayesian setting, the quaternion full joint probability and the quaternion conditional probability are proposed, which allow the quaternion probability to address issues of quantum decision making. Numerical examples are applied to demonstrate the efficiency of the proposed model. The experimental results show that the proposed model can apply quaternion theory to Bayesian updating effectively and successfully.
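The multiplication law that the proposed quaternion probability product must conform to is the standard Hamilton product, sketched here (our illustration, not the paper's code):

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

# The basis relations i*j = k but j*i = -k show the product is non-commutative:
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))   # (0, 0, 0, 1) = k
print(qmul(j, i))   # (0, 0, 0, -1) = -k
```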
[1672] vixra:2004.0452 [pdf]
Multiple Sclerosis is Caused by an Epstein-Barr Virus Infection
Aim: The relationship between the Epstein-Barr virus and multiple sclerosis is assessed once again in order to gain a better understanding of this disease. Methods: A systematic review and meta-analysis is provided, aimed to answer, among others, the following question: Is there a cause-effect relationship between the Epstein-Barr virus and multiple sclerosis? The conditio sine qua non relationship proved the hypothesis: without an Epstein-Barr virus infection, no multiple sclerosis. The mathematical formula of the causal relationship k proved the hypothesis of a cause-effect relationship between Epstein-Barr virus infection and multiple sclerosis. Significance was indicated by a p-value of less than 0.05. Results: The data of the studies analysed provide evidence that an Epstein-Barr virus infection is a necessary condition (a conditio sine qua non) of multiple sclerosis. More than that, the data of the studies analysed provided impressive evidence of a cause-effect relationship between Epstein-Barr virus infection and multiple sclerosis. Conclusion: Multiple sclerosis is caused by an Epstein-Barr virus infection.
[1673] vixra:2004.0425 [pdf]
Automatic Tempered Posterior Distributions for Bayesian Inversion Problems
We propose a novel adaptive importance sampling scheme for Bayesian inversion problems in which the inference of the variables of interest and of the power of the data noise are split. More specifically, we consider a Bayesian analysis for the variables of interest (i.e., the parameters of the model to invert), whereas we employ a maximum likelihood approach for the estimation of the noise power. The whole technique is implemented by means of an iterative procedure, alternating sampling and optimization steps. Moreover, the noise power is also used as a tempering parameter for the posterior distribution of the variables of interest. Therefore, a sequence of tempered posterior densities is generated, where the tempering parameter is automatically selected according to the current estimate of the noise power. A complete Bayesian study over the model parameters and the scale parameter can also be performed. Numerical experiments show the benefits of the proposed approach.
[1674] vixra:2004.0408 [pdf]
Exponential Factorization of Multivectors in Cl(p,q), p+q < 3
In this paper we consider general multivector elements of Clifford algebras Cl(p,q), p+q < 3, and study multivector factorization into products of exponentials and idempotents, where the exponents are blades of grades zero (scalar) to n (pseudoscalar).
[1675] vixra:2004.0364 [pdf]
Phase Diagram of Nuclear Matter Created in Relativistic Nuclear Collisions
The published theoretical data of a few models (PHSD/HSD, both with and without chiral symmetry restoration), applied to experimental data from collisions of nuclei at energies from SIS to LHC, have been analysed using meta-analysis, which allowed localizing possible phase singularities of the nuclear matter created in central nucleus-nucleus collisions: The ignition of a drop of Quark-Gluon Plasma (QGP) begins already at top SIS/BEVALAC energies, around $\sqrt{s_{NN}}\,=\,2$ GeV. This drop of QGP occupies a small part, 15\% (an average radius of about 5.3 fm for a fireball radius of 10 fm), of the whole volume of the fireball created at top SIS energies. The drop of exotic matter goes through a split transition (separated boundaries of a sharp (1st-order) crossover and chiral symmetry restoration (CSR) in the chiral limit) between QGP and Quarkyonic matter at an energy around $\sqrt{s_{NN}}\,=\,3.5$ GeV. The boundary of the transition between Quarkyonic and Hadronic matter with partial CSR was localized between $\sqrt{s_{NN}}\,=\,$4.4 and 5.3 GeV and is not intersected by the phase trajectory of the drop. A critical endpoint of 2nd order has been localized at around $\sqrt{s_{NN}}\,=\,9.3$ GeV, a triple phase area appears at 12$\div$15 GeV, and a critical endpoint of 1st order at around $\sqrt{s_{NN}}\,=\,20$ GeV; the boundary of a smooth (2nd-order) crossover transition with CSR in the chiral limit between Quarkyonic matter and QGP was localized between $\sqrt{s_{NN}}\,=\,$9.3 and 12 GeV, and between Hadronic matter and QGP on the interval from $\sqrt{s_{NN}}\,=\,$15 to 20 GeV. The phase trajectory of the hadronic corona enveloping the drop always stays in the hadronic phase. A possible phase diagram of the nuclear matter created in mid-central nucleus-nucleus collisions is also presented in the same range of energies as for the central collisions.
[1676] vixra:2004.0363 [pdf]
Multi-Task Deep Learning Based CT Imaging Analysis for Covid-19: Classification and Segmentation
The fast spreading of the novel coronavirus COVID-19 has aroused worldwide interest and concern, and has caused more than one and a half million confirmed cases to date. To combat this spread, medical imaging such as computed tomography (CT) can be used for diagnosis. An automatic detection tool is necessary to help screen for COVID-19 pneumonia using chest CT imaging. In this work, we propose a multitask deep learning model to jointly identify COVID-19 patients and segment COVID-19 lesions from chest CT images. Our motivation is to leverage useful information contained in multiple related tasks to help improve both segmentation and classification performance. Our architecture is composed of an encoder, two decoders for reconstruction and segmentation, and a multi-layer perceptron for classification. The proposed model is evaluated and compared with other image segmentation and classification techniques using a dataset of 1044 patients, including 449 patients with COVID-19, 100 normal ones, 98 with lung cancer and 397 with different kinds of pathology. The obtained results show very encouraging performance of our method, with a Dice coefficient higher than 0.78 for the segmentation and an area under the ROC curve higher than 93% for the classification.
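The Dice coefficient reported above is the standard overlap measure for binary segmentation masks; a minimal sketch of how it is computed (our own helper, with a hypothetical smoothing term `eps` to avoid division by zero on empty masks):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two flat binary masks: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, target))  # |A ∩ B|
    total = sum(pred) + sum(target)                   # |A| + |B|
    return (2.0 * inter + eps) / (total + eps)
```

Identical masks score 1, disjoint masks score (near) 0, and partial overlap falls in between.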
[1677] vixra:2004.0354 [pdf]
Group Geometric Algebras and the Standard Model
We show how to generalize the Weyl equation to include the Standard Model fermions and a dark matter fermion. The 2x2 complex matrices are a matrix ring R. A finite group G can be used to define a group algebra G[R] which is a generalization of the ring. For a group of size N, this defines N Weyl equations coupled by the group operation. We use the group character table to uncouple the equations by diagonalizing the group algebra. Using the full octahedral point symmetry group for G, our uncoupled Weyl equations have the symmetry of the Standard Model fermions plus a dark matter particle.
[1678] vixra:2004.0337 [pdf]
Spatial-Temporal Julia Type Structures in Quantum Boundary Problems
An initial boundary value problem for a system of linear Schrodinger equations with nonlinear boundary conditions is considered. It is shown that the attractor of the problem lies on circles in the complex plane. Trajectories tend to fixed points of hyperbolic type whose unstable manifold is formed by saddle points of codimension one. Each element of the attractor is a periodic piecewise constant function of the phase and amplitude of a wave function in the WKB approximation, with finitely or infinitely many points of discontinuity of Julia type on a period. More exactly, we obtain limit solutions of the problem which match, with accuracy O(h^2), the exact attractor of the boundary problem, which is independent of h > 0 in the zero-order WKB approximation. The presented mathematical results are applied to the study of the dynamics of two charged particles with opposite impulses, confined by two flat walls with surface potentials of double-well type. It is shown that the asymptotic behaviour of the particles is similar to the behaviour of orbits arising in the well-known logistic map in the complex plane. As an example, there exist limit periodic, nearly piecewise constant distributions of wave functions of Mandelbrot type, with Julia-type points of 'jumps' for the amplitudes and phases of given free charged particles in a confined box with a surface nonlinear double-well potential at the walls in a magnetic field.
[1679] vixra:2004.0333 [pdf]
Quantum Matter as a Showcase for Quantum Gravity: Analysis and Implications
By generically constraining the boundary term of the action of gravity, the formal structure of the observed types of matter fields (scalar, fermion/Dirac and spin 1) is obtained in the weak gravity limit, including their gauge behaviour, covering the standard model. By gravity, we mean any theory having the Gibbons-Hawking-York boundary term as its torsion-free weak gravity limit. The constraining term is assumed to be local, not explicitly coordinate-dependent and to be the boundary term of a bulk function (Lagrangian). In this way, the latter is fixed to a large extent, admitting couplings and mass terms. The formal matching with observed fields suggests that matter should be the consequence of gravity constraining, and quantum matter would result from constrained quantum gravity. This implies that it is possible to compute the value of 6.564×10^{-69} m^2 for the fundamental quantum constant of gravity - the smallest possible change of the boundary term. Also, the freedom to construct a fundamental quantum concept of gravity is strongly reduced, and the weak gravity limit is completely determined. For strong gravity, the boundary term - rather than the Hamiltonian - yields a key quantum counting operator.
[1680] vixra:2004.0331 [pdf]
Particle Transport by Turbulent Fluids
It is stated that moving fluids can be described as fluctuating continua although their material distribution is always discontinuous. A stochastic particle transport is then considered via an imaginary ensemble of any number of equivalent turbulent fluids existing in parallel. This leads to expectation values of the densities of turbulently transported particles. First a transport equation for molecular self-diffusion is found. It is used as a reference for the difference between self-moving diffusing particles and transport through turbulently moving continua (e.g. aerosols). This is followed by a transport theory for longitudinal continuum fluctuations to provide an easier transition to the more complicated turbulent particle transport. The following transport equations arise: 1. The transport equation of molecular self-diffusion, as a partial differential equation as well as an integral equation; the transition probability of velocities is calculated explicitly. 2. The transport equation of a passive particle transport by longitudinal continuum fluctuations, as a partial differential equation as well as an integral equation; the transition probability of velocities is calculated explicitly. 3. The transport equation of a passive particle transport by turbulent continuum fluctuations, as a partial differential equation as well as an integral equation; the transition probability of velocities is calculated explicitly.
[1681] vixra:2004.0325 [pdf]
On Attractivity for $\psi$-Hilfer Fractional Differential Equations Systems
In this paper, we investigate the existence of a class of globally attractive solutions of the Cauchy fractional problem with the $\psi$-Hilfer fractional derivative using the measure of noncompactness. An example is given to illustrate our theory.
[1682] vixra:2004.0323 [pdf]
Attractivity for Differential Equations Systems of Fractional Order
This paper investigates the overall solution attractivity of the fractional differential equation introduced by the $\psi$-Hilfer fractional derivative and the Krasnoselskii's fixed point theorem. We highlight some particular cases of the result investigated here, especially involving the Riemann-Liouville and Katugampola fractional derivative, elucidating the fundamental property of the $\psi$-Hilfer fractional derivative, that is, the broad class of particular cases of fractional derivatives that consequently apply to the results investigated herein.
[1683] vixra:2004.0318 [pdf]
InstanceNet: Object Instance Segmentation Using DNN
One-stage object detectors like SSD and YOLO are able to speed up existing two-stage detectors like Faster R-CNN by removing the object proposal stage and making up for the lost performance in other ways. Nonetheless, the same approach is not easily transferable to the instance segmentation task. Current one-stage instance segmentation methods can be broadly classified into segmentation-based methods, which segment first and then cluster, and proposal-based methods, which detect first and then predict masks for each instance proposal. Proposal-based methods usually enjoy a better mAP; by contrast, segmentation-based methods are generally faster at inference. In this work, we first propose a one-stage segmentation-based instance segmentation solution, in which a pull loss and a push loss are used for differentiating instances. We then propose two post-processing methods, which provide a trade-off between accuracy and speed.
[1684] vixra:2004.0311 [pdf]
Extracting the Speed of Gravity from A Grandfather Pendulum Clock
Based on recent developments in quantum physics, we show how to extract the speed of gravity (light) from a pendulum clock, with no knowledge of the so-called Newton's gravitational constant G. This is a very short preliminary note with the mathematical results. We will likely extend this paper at a later point with an in-depth discussion.
[1685] vixra:2004.0300 [pdf]
Differential Correction and Arc-Length Continuation Applied to Boundary Value Problems: Examples Based on Snap-Through of Circular Arches and Spherical Shell
Inspired by the application of differential correction to initial-value problems to find periodic orbits in both autonomous and non-autonomous dynamical systems, in this paper we apply differential correction to boundary-value problems. In the numerical demonstration, snap-through buckling problems of arches and shallow spherical shells in structural mechanics are selected as examples. Due to the complicated geometrical nonlinearity in such problems, limit points and turning points may exist. In this case, the typical Newton-Raphson method commonly used in numerical algorithms will fail to cross such points. In the current study, an arc-length continuation is introduced to enable the algorithm to capture the complicated load-deflection paths. To show the accuracy and efficiency of differential correction, we also apply the continuation software package COCO to obtain results for comparison with those from differential correction. The results obtained by the proposed algorithm and by COCO agree well with each other, suggesting the validity and robustness of differential correction for boundary-value problems.
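The fold-crossing issue described above can be illustrated on a scalar model problem: plain Newton continuation in the load parameter fails at a limit point of F(x, λ) = x² + λ, while Keller's pseudo-arclength formulation traverses it. The sketch below is our own illustration of that idea under these assumptions, not the paper's algorithm (which pairs continuation with differential correction on boundary-value problems):

```python
import math

def F(x, lam):    return x * x + lam   # model problem with a fold at (0, 0)
def Fx(x, lam):   return 2.0 * x
def Flam(x, lam): return 1.0

def pseudo_arclength(x, lam, ds=0.05, steps=100):
    # unit tangent spans the nullspace of the 1x2 Jacobian [Fx  Flam]
    tx, tl = -Flam(x, lam), Fx(x, lam)
    n = math.hypot(tx, tl); tx, tl = tx / n, tl / n
    branch = [(x, lam)]
    for _ in range(steps):
        xp, lp = x + ds * tx, lam + ds * tl          # tangent predictor
        for _ in range(20):                          # Newton corrector (2x2)
            r1 = F(xp, lp)
            r2 = tx * (xp - x) + tl * (lp - lam) - ds  # arclength constraint
            a, b, c, d = Fx(xp, lp), Flam(xp, lp), tx, tl
            det = a * d - b * c                      # nonzero even at the fold
            xp -= (d * r1 - b * r2) / det            # Cramer's rule update
            lp -= (a * r2 - c * r1) / det
            if abs(r1) + abs(r2) < 1e-12:
                break
        x, lam = xp, lp
        ntx, ntl = -Flam(x, lam), Fx(x, lam)
        n = math.hypot(ntx, ntl); ntx, ntl = ntx / n, ntl / n
        if ntx * tx + ntl * tl < 0:                  # keep orientation along branch
            ntx, ntl = -ntx, -ntl
        tx, tl = ntx, ntl
        branch.append((x, lam))
    return branch
```

Starting at (x, λ) = (1, -1) on the branch x = √(-λ), the continuation passes smoothly through the limit point at the origin onto the x < 0 branch, where parameter continuation alone would break down (the 2×2 bordered Jacobian stays nonsingular at the fold).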
[1686] vixra:2004.0294 [pdf]
Numbers Are 3 Dimensional
The Riemann hypothesis stands proved in three different ways at three different levels of complexity. To prove the Riemann hypothesis from the functional equation, the concept of the Delta function and periodic harmonic conjugates of both the Gamma and Delta functions are introduced, similar to the Gamma and Pi functions. The other two proofs are derived using Euler's formula and elementary algebra. Analytically continuing the zeta function to an extended domain, the poles and zeros of zeta values are redefined. Other prime conjectures, like the Goldbach conjecture and the twin prime conjecture, are also proved in the light of this new understanding of primes. Numbers are shown to be three dimensional, as worked out by Hamilton. Logarithms of negative and complex numbers are redefined using the extended number system. Factorials of negative and complex numbers are redefined using values of the Delta function and the periodic harmonic conjugates of both the Gamma and Delta functions.
[1687] vixra:2004.0287 [pdf]
Why Quasi-Interpolation onto Manifold has Order 4
We consider approximations of functions from samples where the functions take values on a submanifold of $\mathbb{R}^n$. We generalize a common quasi-interpolation scheme based on cardinal B-splines by combining it with a projection $P$ onto the manifold. We show that for $m\geq 3$ we obtain approximation order $4$. We also show why higher approximation order cannot be expected when the control points are constructed as projections of the filtered samples using a fixed mask.
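For the case m = 3 (cubic B-splines) with the manifold taken as the unit circle, the scheme described above can be sketched as follows. The mask is the classical cubic B-spline quasi-interpolation mask, which is exact on cubic polynomials; the function names and the circle example are our own illustration, not the paper's code:

```python
def quasi_interp_coeffs(f):
    # classical cubic B-spline quasi-interpolation mask:
    #   c_j = (-f_{j-1} + 8 f_j - f_{j+1}) / 6,  exact on cubic polynomials
    return [(-f[j - 1] + 8 * f[j] - f[j + 1]) / 6 for j in range(1, len(f) - 1)]

def spline_at_knot(c, k):
    # a cubic B-spline series evaluated at an interior knot uses weights (1, 4, 1)/6
    return (c[k - 1] + 4 * c[k] + c[k + 1]) / 6

def project_circle(x, y):
    # the projection P onto the manifold (here the unit circle): radial projection
    r = (x * x + y * y) ** 0.5
    return x / r, y / r
```

In the manifold-valued version, the mask is applied componentwise to samples (cos t_j, sin t_j) and the resulting control points are pushed back onto the circle with `project_circle`; the fixed-mask construction of control points is what caps the order at 4.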
[1688] vixra:2004.0281 [pdf]
Quantum Test of Prime Numbers
We give a new method for testing whether any positive integer is prime, using a real experiment: throwing a neutron into plutonium.
[1689] vixra:2004.0272 [pdf]
Spatial-Temporal Oscillations in Boundary Problems of Quantum Mechanics
We consider the Schrodinger equation with nonlinear boundary conditions and initial conditions. It is shown that the attractor of the problem contains periodic piecewise constant functions with finitely, countably or uncountably many points of discontinuity on a period. Solutions exist for a special class of initial data which are small perturbations of invariant solutions of the dynamical system. The problem is considered with accuracy O(h^2), where h is a small parameter of the problem. Applications to optical resonators with nonlinear feedback are considered.
[1690] vixra:2004.0266 [pdf]
The Photon Existence Paradox
Special relativity enjoins lightspeed objects from possessing essential characteristics one might naively associate with physically existing objects. Thus, it can be argued, had we not already known of the physical existence of lightspeed objects, such as photons, then we would have interpreted this enjoinment to mean that it is impossible for lightspeed objects to exist. From this perspective, the empirical existence of photons poses a paradox. A recently proved equivalence relation shows how this paradox is properly resolved: it is possible for lightspeed objects to exist, but not in spacetime. The interpretation of the paradox depends on which aspect of special relativity one wishes to emphasize: it can be framed either in terms of the ``error'' of assuming absolute existence, i.e. that physical existence can be specified without reference to a given spacetime, or a ``gap'' in standard special relativity in that it lacks any reference to a physics-based concept of existence in spacetime.
[1691] vixra:2004.0257 [pdf]
Image Reconstruction with a Non-Parallelism Constraint
We consider the problem of restoring images from blur and noise. We find the minimum of the primal energy function, which has two terms, related to faithfulness to the data and to smoothness constraints, respectively. In general, we do not know the discontinuities of the ideal image and have to estimate them. We require that the obtained images be piecewise continuous and have thin edges. We associate with the primal energy function a dual energy function, which treats discontinuities implicitly. We determine a dual energy function which is convex and takes non-parallelism constraints into account, in order to obtain thin edges. The proposed dual energy can be used as the initial function in a GNC (Graduated Non-Convexity)-type algorithm, to obtain reconstructed images with Boolean discontinuities. In the experimental results, we show that the formation of parallel lines is inhibited.
[1692] vixra:2004.0250 [pdf]
The Quotation Marks in Rothbard's Liberty. The Problem of Biological Non-Assumption in the Humanities, II
In addition to the common difficulties in clarifying the basic concepts used in Rothbard's theory of the new freedom, there are those that this author strangely includes in the writing of his book, \textit{For a New Liberty: The Libertarian Manifesto}. These provoke in the reader the suspicion of encountering rhetorical forms of expression that Rothbard is unable to elucidate. This happens in particular when he deals with the concept of freedom and its derivatives, which he sometimes uses and at other times merely mentions, as if illuminating it in this way would let one assume that one knows what the author is talking about or referring to. I wish to show that, despite Rothbard's insistence, his intellectual hope for his new concept of freedom is spurious and fails to detect the setbacks that the environment poses to him.
[1693] vixra:2004.0248 [pdf]
Predicting the Likelihood of Mortality in Confirmed Positive COVID-19 Patients
The novel coronavirus, COVID-19, has evolved into a global pandemic. It is therefore imperative that countries and medical facilities be equipped with the technology and resources to give every person the greatest chance of surviving. Yet even developed nations are beginning to run low on medical supplies such as hospital beds, masks, and respirators. As cases grow in the United States, hospitals will continue to run out of supplies, so it is imperative that medical supplies be distributed first to those who need them the most. This paper outlines a machine learning approach to predicting which patients are at the most risk of mortality given a confirmed positive diagnosis of coronavirus. The final results were too inconclusive to be implemented in a real-world scenario.
[1694] vixra:2004.0246 [pdf]
On the Distribution of Addition Chains
In this paper we study the theory of addition chains producing any given number $n\geq 3$. With the goal of estimating the partial sums of an addition chain, we introduce the notion of the determiners and the regulators of an addition chain and prove the following identities\begin{align}\sum \limits_{j=2}^{\delta(n)+1}s_j=2(n-1)+(\delta(n)-1)+\kappa(a_{\delta(n)})-\varrho(r_{\delta(n)+1})+\int \limits_{2}^{\delta(n)-1}\sum \limits_{2\leq j\leq t}\varrho(r_j)dt\nonumber \end{align}where \begin{align}s_2=2,\ s_3=\kappa(a_3)+\varrho(r_3),\ldots,s_{k-1}=\kappa(a_{k-1})+\varrho(r_{k-1}),\ s_{k}=\kappa(a_{k})+\varrho(r_{k})=n\nonumber \end{align}are the associated generators of the chain $1,2,\ldots,s_{k-1},s_{k}=n$ of length $\delta(n)$. We also obtain the identity\begin{align}\sum \limits_{j=2}^{\delta(n)+1}\kappa(a_j)=(n-1)+(\delta(n)-1)+\kappa(a_{\delta(n)})-\varrho(r_{\delta(n)+1})+\int \limits_{2}^{\delta(n)-1}\sum \limits_{2\leq j\leq t}\varrho(r_j)dt.\nonumber \end{align}
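An addition chain for n is a sequence $1, 2, \ldots, s_{k-1}, s_k = n$ in which every term is the sum of two (not necessarily distinct) earlier terms, and $\delta(n)$ denotes the chain length. A small sketch of producing and validating such a chain (our own illustration, using the binary method, which is generally not the shortest chain):

```python
def binary_addition_chain(n):
    # chain 1 = a_0, a_1, ..., a_r = n with each a_k = a_i + a_j, i, j < k;
    # the binary (double-and-add) method, generally not minimal
    assert n >= 1
    chain = [1]
    for bit in bin(n)[3:]:           # binary digits of n after the leading 1
        chain.append(chain[-1] * 2)  # doubling step: a + a
        if bit == '1':
            chain.append(chain[-1] + 1)  # increment step: a + 1
    return chain

def is_addition_chain(chain, n):
    # verify the defining property: every term past 1 is a sum of earlier terms
    if chain[0] != 1 or chain[-1] != n:
        return False
    return all(any(a + b == chain[k] for a in chain[:k] for b in chain[:k])
               for k in range(1, len(chain)))
```

For example, `binary_addition_chain(15)` returns `[1, 2, 3, 6, 7, 14, 15]`, a valid chain of length 6 (the minimal chain for 15 has length 5, so the binary method is an upper bound on δ(n)).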
[1695] vixra:2004.0241 [pdf]
Complex Nonlinear Waves in Autonomous CNNs Having Two Layers of Memristor Couplings
In this paper, we study the nonlinear waves in autonomous cellular neural networks (CNNs) having two layers of memristor coupling, by using the homotopy method. They can exhibit many interesting nonlinear waves, which are quite different from those in single-layer autonomous CNNs. That is, autonomous CNNs with two layers of memristor coupling can exhibit more complex nonlinear waves and more interesting bifurcation phenomena than single-layer autonomous CNNs. These complex behaviors seem to be generated by the interaction of the two nonlinear waves caused by the first and second layers. The most remarkable point in this paper is that autonomous CNNs with two layers can exhibit complex deformation behaviors of the nonlinear waves due to changes in the homotopy parameter. That is, we can generate many complex nonlinear waves by adjusting the homotopy parameter, and thereby control the complexity of the nonlinear waves. Furthermore, some autonomous CNNs exhibit sensitive dependence on the homotopy parameter: a small change in the homotopy parameter can result in large differences in a later state. Thus the homotopy method gives a new approach to the analysis of complex nonlinear waves in autonomous CNNs with two layers.
[1696] vixra:2004.0234 [pdf]
Relative Uniform Convergence of a Sequence of Functions at a Point and Korovkin-Type Approximation Theorems
We prove a Korovkin-type approximation theorem using the relative uniform convergence of a sequence of functions at a point, a method stronger than the classical ones. We give some examples of this new convergence method, and we also study rates of convergence.
[1697] vixra:2004.0232 [pdf]
On Matrix Methods of Convergence of Order Alpha in L-Groups
We introduce a concept of convergence of order alpha, which is positive and strictly less than one, with respect to a summability matrix method A for sequences taking values in lattice groups. Some main properties and differences with respect to classical A-convergence are investigated. A Cauchy-type criterion and a closedness result for the space of sequences convergent according to our notion are proved.
[1698] vixra:2004.0225 [pdf]
A New Method for Image Super-Resolution
The aim of this paper is to demonstrate that it is possible to reconstruct coherent human faces from very degraded pixelated images with a very fast algorithm, much faster than compressed sensing (CS) algorithms, easier to compute and without deep learning, and thus without heavy information technology resources, i.e. a large database of thousands of training images (see https://arxiv.org/pdf/2003.13063.pdf). This technological breakthrough was patented in 2018 with the French patent application FR 1855485 (https://patents.google.com/patent/FR3082980A1). Face Super-Resolution (FSR) has many applications, in particular in a remote surveillance context, which already exists in China and may become a reality in the USA and European countries. Today, deep learning methods and artificial intelligence (AI) appear in this context, but these methods are difficult to integrate into such systems because of the need for large amounts of data. The Chinese patent application CN107563965 and the scientific publication "Pixel Recursive Super Resolution" by R. Dahl, M. Norouzi and J. Shlens propose such methods (see https://arxiv.org/pdf/1702.00783.pdf). In this context, this new method could help governments, institutions and enterprises to accelerate the generalisation of automatic facial identification and to save time in the reconstruction step of industrial processes such as terahertz imaging, medical imaging or spatial imaging.
[1699] vixra:2004.0222 [pdf]
Decoupling Global and Local Representations via Invertible Generative Flows
In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting, by embedding a generative flow in the VAE framework to model the decoder. Specifically, the proposed model utilizes the variational auto-encoding framework to learn a (low-dimensional) vector of latent variables to capture the global information of an image, which is fed as a conditional input to a flow-based invertible decoder with architecture borrowed from style transfer literature. Experimental results on standard image benchmarks demonstrate the effectiveness of our model in terms of density estimation, image generation and unsupervised representation learning. Importantly, this work demonstrates that with only architectural inductive biases, a generative model with a likelihood-based objective is capable of learning decoupled representations, requiring no explicit supervision. The code for our model is available at https://github.com/XuezheMax/wolf.
[1700] vixra:2004.0221 [pdf]
Multi-Key Homomorphic Encryption based Blockchain Voting System
During the COVID-19 pandemic, with more than 70 national elections scheduled worldwide for the rest of the year, the coronavirus is putting into question whether some of these elections will happen on time, or at all. We propose a novel solution based on multi-key homomorphic encryption and blockchain technology, which is unhackable, privacy-preserving and decentralized. We first introduce the importance of a feasible voting system in this special era, then we demonstrate how we construct the system. Finally, we make a thorough comparison of the possible solutions.
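The additive homomorphism that makes encrypted tallying possible can be illustrated with single-key Paillier encryption: multiplying ciphertexts adds the underlying votes, so a tally can be computed without decrypting any individual ballot. This is a toy sketch with deliberately insecure parameters, not the paper's multi-key scheme, and all names are ours:

```python
import math
import random

def keygen(p=131, q=151):  # toy primes: NOT secure, for illustration only
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # Carmichael lambda(n)
    g = n + 1
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n * n)), -1, n)             # modular inverse
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)  # g^m r^n mod n^2

def decrypt(sk, c):
    lam, mu, n = sk
    L = lambda u: (u - 1) // n
    return (L(pow(c, lam, n * n)) * mu) % n
```

Multiplying the encrypted 0/1 ballots modulo n² yields an encryption of the vote total; only the tally, never a single ballot, is ever decrypted. A multi-key scheme additionally removes the single trusted key holder.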
[1701] vixra:2004.0217 [pdf]
An Embedding Lemma in Soft Topological Spaces
In 1999, Molodtsov initiated the concept of Soft Sets Theory as a new mathematical tool and a completely different approach for dealing with uncertainties in many fields of applied sciences. In 2011, Shabir and Naz introduced and studied the theory of soft topological spaces, also defining and investigating many new soft properties as generalization of the classical ones. In this paper, we introduce the notions of soft separation between soft points and soft closed sets in order to obtain a generalization of the well-known Embedding Lemma for soft topological spaces.
[1702] vixra:2004.0192 [pdf]
Doppler Effect In Relativity
The frequency of sound is always different in different inertial reference frames. The Doppler effect for sound waves and electromagnetic waves is not identical. The main difference is the transmission medium. The wavelength changes if the rest frame of the wave source differs from the rest frame of the transmission medium. Without a medium, the wavelength is invariant across inertial reference frames. The Doppler effect for sound, water, and electromagnetic waves depends on the transmission medium.
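The contrast drawn above can be summarized with the standard formulas (our illustration): for sound, the source and observer speeds enter separately, each measured relative to the medium, while for light only the relative speed of source and observer appears.

```latex
% Sound in a medium with wave speed c_s: observer speed v_o, source speed v_s,
% both measured in the rest frame of the medium (speeds of approach taken positive)
f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{c_s + v_o}{c_s - v_s}

% Light in vacuum (no medium): longitudinal relativistic Doppler effect,
% depending only on the relative speed v of source and observer
f_{\mathrm{obs}} = f_{\mathrm{src}}\,\sqrt{\frac{1 + v/c}{1 - v/c}}
```

The asymmetry of the sound formula in v_o and v_s is exactly the medium dependence the abstract points to; the light formula has no frame for a medium to single out.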
[1703] vixra:2004.0169 [pdf]
The $abc$ Conjecture: the Proof of $c<rad^2(abc)$
In this note, I present a very elementary proof of the conjecture $c<rad^2(abc)$, which constitutes the key to resolving the $abc$ conjecture. The method rests on comparing the number of primes of $c$ and of $rad^2(abc)$ for large $a,b,c$, using the prime counting function $\pi(x)$, which gives the number of primes $\leq x$. Some numerical examples are given.
[1704] vixra:2004.0159 [pdf]
HyperSpacetime: Complex Algebro-Geometric Analysis of Intelligence Quantum Entanglement Convergent Evolution
Nature is structural instead of random, correlation is just approximation of causality, and data is not science: the more we reveal the more we revere nature on our voyage of unprecedented discovery. We argue that the soul(s) or exotic soul(s) of quotient Hypercomplex arbifold multiscale Spacetime (HyperSpacetime)'s corresponding manifold(s)/general (quotient and non-quotient) HyperSpacetime is the origin of super/general intelligence, and the metric of super/general intelligence is the complexity of quotient/general HyperSpacetime's corresponding generic polynomial. We also argue that the intersecting soul(s) and/or exotic soul(s) as varieties of quotient HyperSpacetime's corresponding manifold(s), when their maximal/minimum sectional curvatures approaching positive infinity and/or negative infinity as singularities, is the origin of quantum entanglement. We further argue that the maximal/minimum sectional curvatures of the same intersecting soul(s) and/or exotic soul(s), is the origin of convergent evolution through conformal transformation. We derive even N-dimensional HyperSpacetime, a M-open (\begin{math} M = C_{_{I+N}}^{^I} \text{, } I, N, M \to \infty \end{math}) arbifold as generalized orbifold with the structure of a algebraic variety $\mathcal{A}$, without or with loop group action as $\mathcal{A}=[\mathcal{M}/\mathcal{LG}]$ ($\mathcal{M}$ as complex manifold, $\mathcal{LG}$ as loop group), it arises from I-degree (power of 2) hypercomplex even N-degree generic polynomial continuous/discrete function/functor as nonlinear action functional in hypercomplex $\mathbb{HC}^{\infty}$ useful for generic neural networks: $\mathcal{F}(S_j,T_j)=\prod_{n=1}^{^{N}}(w_nS_n(T_n)+b_n+ \gamma \sum_{k=1}^{^{j}}\mathcal{F}(S_{k-1},T_{k-1}))$ where $j=1,\dots,N$, $S_{i}=s_0e_0+\sum_{i=1}^{^{{I-1}}}s_{i}e_{i}$, $T_{i}=t_0e_0+\sum_{i=1}^{^{{I-1}}}t_{i}e_{i}$ over noncommutative nonassociative loop group. 
Its sectional curvature is \begin{math} \kappa = \frac{{\left| {\mathcal{F}''\left(X \right)} \right|}}{{{{\left( {1 + {{\left[ {\mathcal{F}'\left(X \right)} \right]}^2}} \right)}^{\frac{3}{2}}}}} \end{math} if $\mathcal{F}(X)$ is smooth, or \begin{math} \kappa = \kappa_{max}\kappa_{min} \end{math} if nonsmooth, by correlating general relativity with quantum mechanics via extension from 3+1 dimensional spacetime $\mathbb{R}^{4}$ to even N-dimensional HyperSpacetime $\mathbb{HC}^{\infty}$. By directly addressing multiscale, singularities, statefulness, nonlinearity instead of via activation function and backpropagation, HyperSpacetime with its corresponding generic polynomial determining the complexity of ANN, rigorously models curvature-based $2^{nd}$ order optimization in arbifold-equivalent neural networks beyond gradient-based $1^{st}$ order optimization in manifold-approximated adopted in AI. We establish HyperSpacetime generic equivalence theory by synthesizing Generalized Poincar\'{e} conjecture, soul theorem, Galois theory, Fermat's last theorem, Riemann hypothesis, Hodge conjecture, Euler's theorem, Euclid theorem and universal approximation theorem. Our theory qualitatively and quantitatively tackles the black box puzzle in AI, quantum entanglement and convergent evolution. Our future work includes HyperSpacetime refinement, complexity reduction and synthesis as our ongoing multiversal endeavor.
[1705] vixra:2004.0145 [pdf]
Structure Model of Oxygen Nucleus-16
After the helium nucleus He-4, the oxygen nucleus O-16 is the second stable one in Nature and the first upper-order nucleus. Its structure is based on the successive conversions of lithium Li-6, lithium Li-7, beryllium Be-9, boron B-10, boron B-11, carbon C-12 and nitrogen N-14 into the oxygen nucleus O-16. From this first upper-order nucleus the second one is constructed (calcium nucleus Ca), from the second the third one (tin nucleus Sn) and from the third the fourth one (orion nucleus Or-307), according to the mirror symmetry. The atomic numbers Z of the above four upper-order nuclei are the so-called four "magic numbers", i.e. Z1=8, Z2=8x2.5=20, Z3=20x2.5=50 and Z4=50x2.5=125. It is noted that this orion nucleus Or-307 with a differential atomic number Z=125 (unified theory of dynamic space) is the corresponding "hypothetical unbihexium Ubh", whose atomic number is Z=126 (Nuclear Physics). However, the number Z=125 looks symmetrical and not magical at all, due to the 2.5 factor.
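The abstract's "magic number" sequence is a simple factor-of-2.5 recursion; a one-line sketch purely reproducing the stated arithmetic (the recursion itself is the paper's claim, not established nuclear physics):

```python
# Atomic numbers of the four claimed upper-order nuclei: start at Z1 = 8 (oxygen)
# and multiply by the abstract's factor of 2.5 three times.
Z = [8]
for _ in range(3):
    Z.append(int(Z[-1] * 2.5))
# Z == [8, 20, 50, 125]: oxygen, calcium, tin, and the claimed "orion" nucleus
```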
[1706] vixra:2004.0121 [pdf]
A Spacetime Oddity: Time Dilation and Length Contraction for the Amateur Enthusiast
Special relativity is undoubtedly one of the pillars of modern physics, where concepts such as time dilation and length contraction subtly play a role in various aspects of nature. Typically veiled under complex and difficult-to-fathom mathematical analysis, the path to understanding these phenomena can leave a novice student lost and confused. In these lecture notes, we attempt to explain and arrive at these concepts using physically intuitive methods and elementary mathematics, without the use of advanced mathematical knowledge, to make it easier for high school students and amateur enthusiasts to comprehend.
[1707] vixra:2004.0118 [pdf]
Neutrino Mixing and Circulant Mass Operators
This short paper clarifies a few points discussed in earlier work. Under the neutrino CMB correspondence, low energy observables are analysed using quantum computation. Starting from the observed mu-tau symmetry, we discuss constraints on all neutrino masses and mixing parameters.
[1708] vixra:2004.0089 [pdf]
Lower Bound for the Number of Asymptomatics in Infected by COVID-19
We propose a method for evaluating the number of asymptomatics in a COVID-19 Outbreak. The method will give only a lower bound for the real number.
[1709] vixra:2004.0043 [pdf]
Numerical Approach in Superconductivity
The dependence of the critical temperature of high temperature superconductors of various families on their composition and structure is proposed. A clear dependence of the critical temperature of high temperature superconductors on the sequence number of the constituent elements, their valency, and the structure of the crystal lattice is revealed.
[1710] vixra:2004.0041 [pdf]
Relativistic Newtonian Gravity Makes Dark Energy Superfluous?
This paper shows that a simple and relativistic extension of Newtonian gravity leads to predictions that fit supernova observations of magnitude versus redshift very well, without having to rely on the hypothesis of dark energy. In order to test the concept, we look at 580 supernova data points from the Union2 database. Some relativistic extensions of Newtonian gravity have been investigated in the past, but we have reason to believe the efforts were rejected prematurely, before their full potential was investigated. Our model suggests that mass, as related to gravity, is also affected by standard relativistic velocity effects, something that is not the case in standard gravity theory, and this adjustment gives supernova predictions that fit the observations. Our findings are reflected in several recent research papers that follow the same approach; that work will also be discussed in this paper.
[1711] vixra:2003.0680 [pdf]
Nonuniform Linear Depth Circuits Decide Everything
In this work, we introduce the nonuniform complexity class LD, which stands for linear depth circuits. We also prove that LD = ALL, the complexity class that contains all decision languages, thereby resolving all questions on this new complexity class. No further research is needed [Mun20]. In principle, this shows that anything can be computed via circuits in linear time, albeit with (possibly undecidable) pre-computation and very inefficient advice; however, we note that exponential-sized advice suffices to achieve ALL.
[1712] vixra:2003.0676 [pdf]
About Boundary Conditions for Kinetic Equations in Metal
Boundary conditions for kinetic equations describing the dynamics of electrons in a metal are analyzed. The Fuchs boundary condition and the Soffer boundary condition are considered. The Andreev conditions for almost tangentially moving electrons are taken into account. It is shown that the Soffer boundary condition does not satisfy this condition. A boundary condition that does satisfy the Andreev condition is proposed. It is shown that this boundary condition passes in the limiting case into the mirror--diffuse Fuchs boundary condition.
[1713] vixra:2003.0675 [pdf]
Relaxation Type Kinetic Equation for Electrons in Polycrystalline Metal
The kinetic equation for electrons in polycrystalline metal has been considered. This kinetic equation takes into account, along with collisions of electrons with impurities, the collisions of electrons with the boundaries of the grains. We analyze the influence of the scattering of electrons on the boundaries of the grains on its electric properties.
[1714] vixra:2003.0674 [pdf]
The Bremsstrahlung Generated by RLC Circuit
The bremsstrahlung is calculated for the case in which the electron current is realized by an RLC circuit. We determine the bremsstrahlung energy caused by the uniform oscillation of the RLC circuit.
[1715] vixra:2003.0669 [pdf]
Relativity. Exclusively a Speed Problem.
Special Relativity as derived by Einstein is a mathematical approach with the unphysical results of time dilation, length contraction and the invariance of the light speed. This paper presents an approach where the Lorentz transformations are built exclusively on equations with speed variables instead of the mix of space and time variables, and where the interaction with the measuring instrument is taken into consideration. The results are transformation rules between inertial frames that are free of time dilation and length contraction. The equations derived for the momentum, energy and the Doppler effect are the same as those obtained with special relativity. The present work shows the importance of including the characteristics of the measuring equipment in the chain of physical interactions to avoid unphysical results.
[1716] vixra:2003.0668 [pdf]
A Tautology in Hayek: The Problem of Biological Non-Assumption in the Humanities, I
The purpose of this brief note is to highlight the use of anthropocentric tautologies at the base of the argumentation of liberal ideology. Some of these authors, like Hayek, seem to consider that they are exempt from the general emptiness to which they are logically subject by the mere expedient of supposing them endowed with social or political meaning, without taking into account that human characteristics are carried by a natural biological entity that evolves, like the rest, by means of natural selection.
[1717] vixra:2003.0661 [pdf]
Copycat Of Relativity
Woldemar Voigt had a theory of the covariant wave equation in 1887. The Doppler effect can be applied to establish his theory if the speed of light is assumed to be invariant in inertial reference frames. Voigt's theory was ignored by Hendrik Antoon Lorentz and his contemporaries but was picked up by Albert Einstein. The theory of relativity was finalized in 1905 with a fatal error.
[1718] vixra:2003.0660 [pdf]
Are Qualia Reducible, Physical Entities?
Controversial hypotheses to explain consciousness exist in many fields of science, psychology and philosophy. Recent experimental findings in quantum cognition and magnetic resonance imaging have added new controversies to the field, suggesting that the mind may be based on quantum computing. Quantum computers process information in quantum bits (qubits) using quantum gates. At first glance, it seems unrealistic or impossible that the brain can meet the challenges to provide either of these. Nevertheless, we show here why the brain has the incredible ability to perform quantum computing and how that may be realized.
[1719] vixra:2003.0640 [pdf]
Can the Standard Model Predict a Minimum Acceleration that Gets Rid of Dark Matter?
The standard model is considered to be very bad at predicting galaxy rotation, and this is why the hypothesis of dark matter was introduced in physics in the 20th century. However, in this paper we show that the standard model may not be as far off as previously believed. By taking into account that gravity has an infinite extent in space and assessing the assumed mass in the observable universe, we get a minimum acceleration that gives a much closer match to observed galaxy rotations than would be expected. We will discuss whether or not this is enough to overturn the long-standing perspective on the standard model and if it could indeed provide a possible and adequate explanation of galaxy rotations.
[1720] vixra:2003.0612 [pdf]
The Rockers Function
In this note we introduce and study the rockers function $\lambda(n)$ on the natural numbers. We establish an asymptotic for the rockers function on the integers and explore some applications. In particular we show that \begin{align}\lambda(n)\sim \frac{n^{n-{\frac{1}{2n}-\frac{1}{2}}}\sqrt{2\pi}}{e^{n+\Psi(n)-1}}\nonumber \end{align}where \begin{align}\Psi(n):=\int \limits_{1}^{n-1}\frac{\sum \limits_{1\leq j\leq t}\log (n-j)}{(t+1)^2}dt.\nonumber \end{align}
[1721] vixra:2003.0608 [pdf]
Simplest Electromagnetic Fields and Their Sources
The problem of generation of plane and evanescent waves by electric charge and current densities on a plane is considered. It is shown, first, that both ordinary and evanescent waves can be emitted by such a source and, second, that the source of an evanescent wave is perfectly static in an appropriate frame. The source is found as an explicit form of surface charge and current densities on a plane, which satisfy the continuity condition. One of the components of the retarded potential of the source is calculated. It is shown that the expression derived provides an erroneous representation of the field.
[1722] vixra:2003.0606 [pdf]
Intention not Theory: the Vertigo of Love
A Theory of Everything (TOE) must be based on a principle so simple and powerful that it can explain not only all physics, but provide an answer to all philosophical questions and above all explain consciousness and the self. A principle is in fact all the more powerful the simpler it is, since everything that exists, from the simplest to the most complex, must derive from the nesting and stratification of the same principle. As for the nature of this principle, the candidate par excellence should be Hegel's dialectic. However, although Hegel's dialectic has proved useful in investigating the evolution of human thought and history, it is of little use in all other scientific areas, such as the investigation of natural laws. The principle sought must therefore be even more primitive: it must be the foundation of the whole, even of Hegel's dialectic. The purpose of this article is to present this principle and show how it is the foundation of the whole and how everything literally flows from it. In Intention Physics, physics is completed; that is, it attains its highest form and thereby its conclusion.
[1723] vixra:2003.0598 [pdf]
The Principles of the Celestial Sphere
It is possible to describe the centerless expansion of the universe, where homogeneity and isotropy are established for all prime inertial systems, by the theory of special relativity. Through the observation-ellipse technique, the universe observed in each prime inertial system can be analyzed: when the density distribution of the particles in the universe is 1/((1-β^2)^2) (which Milne discovered first) with constant-speed expansion, the cosmic density distribution is observed homogeneously by observers of all prime inertial systems as (1/8)(r/(1-r))^2, and the age structure of the universe is observed as sqrt(1-r).
[1724] vixra:2003.0579 [pdf]
Acceleration of Electromagnetic Radiation
Experimental evidence together with theoretical proofs is presented in a single paper. The evidence confirms that the speed of light and electromagnetic radiation can be accelerated by a moving reflector. For the first time since 1902, both evidence and proofs are available to remove any doubt that the assumption from the theory of special relativity is invalid in physics. The experimental evidence includes the radar speed gun, the FG5 gravimeter, and spectral shift in astronomy. The theoretical proofs include double-slit interference, conservation of elapsed time, microwave resonance, and Fizeau's cogwheel experiment.
[1725] vixra:2003.0563 [pdf]
Bengali Language, Romanisation and Onsager Core
We continue to study Chalantika, a Bengali-to-Bengali dictionary. We romanise the Bengali alphabet via the Wikipedia scheme. We reduce the romanised alphabet to the English alphabet, i.e. the alphabet which appears in an English dictionary. In the reduced alphabet scheme, we draw the natural logarithm of the number of words, normalised, starting with a letter vs the natural logarithm of the rank of the letter, normalised (unnormalised). We observe that behind the words of the dictionary of the Bengali language, in the reduced alphabet scheme, the magnetisation curve is BP(4,$\beta H=0.08$), in the Bethe-Peierls approximation of the Ising model with four nearest neighbours, in the presence of a little external magnetic field, $\beta H=0.08$. Moreover, words of the Bengali language in the reduced alphabet scheme nearly go over to the Onsager solution after a few successive normalisations. $\beta$ is $\frac{1}{k_{B}T}$ where T is temperature, H is the external magnetic field and $k_{B}$ is the tiny Boltzmann constant.
[1726] vixra:2003.0557 [pdf]
Covid-19 :Statistical Exploration
In this article we present a naive model for the prediction of the number of COVID-19 infections, with illustrations of real data on the evolution of COVID-19 in France.
[1727] vixra:2003.0554 [pdf]
Early Evaluation and Effectiveness of Social Distancing Measures for Controlling COVID-19 Outbreaks
Based on real data, we study the effectiveness of COVID-19 social distancing measures and propose an early evaluation method for them. Version v2 posted on 26/03/20. Version v3 posted on 26/04/20. In version v3, sections 7 and 8 have been added, leaving previous sections unchanged.
[1728] vixra:2003.0538 [pdf]
How do I ... Develop an Online Research Seminar?
Developing an online research seminar requires work, trial and error, and the willingness to experiment with something new. There are multiple benefits to running a seminar online: an online seminar is accessible to a more diverse audience, does not negatively impact climate change, and brings together members of a community who otherwise might not interact. This article gives my tips for running an online research seminar, describes an online seminar that I co-direct, and links to other online seminars and resources.
[1729] vixra:2003.0494 [pdf]
The Area Method and Applications
In this paper we develop a general method for estimating correlations of the forms \begin{align}\sum \limits_{n\leq x}G(n)G(x-n),\nonumber \end{align}and \begin{align}\sum \limits_{n\leq x}G(n)G(n+l)\nonumber \end{align}for a fixed $1\leq l\leq x$ and where $G:\mathbb{N}\longrightarrow \mathbb{R}^{+}$. To distinguish between the two types of correlations, we call the first \textbf{type} $2$ correlation and the second \textbf{type} $1$ correlation. As an application we estimate the lower bound for the \textbf{type} $2$ correlation of the master function given by \begin{align}\sum \limits_{n\leq x}\Upsilon(n)\Upsilon(n+l_0)\geq (1+o(1))\frac{x}{2\mathcal{C}(l_0)}\log \log ^2x,\nonumber \end{align}provided $\Upsilon(n)\Upsilon(n+l_0)>0$. We also use this method to provide a first proof of the twin prime conjecture by showing that \begin{align}\sum \limits_{n\leq x}\Lambda(n)\Lambda(n+2)\geq (1+o(1))\frac{x}{2\mathcal{C}(2)}\nonumber \end{align}for some $\mathcal{C}:=\mathcal{C}(2)>0$.
[1730] vixra:2003.0467 [pdf]
Elementary Particles, Dark Matter, and Dark Energy: Descriptions that Explain Aspects of Dark Matter to Ordinary Matter Ratios, Inflation, Early Galaxies, and Expansion of the Universe
Physics theory has yet to settle on specific descriptions for new elementary particles, for dark matter, and for dark energy forces. Our work extrapolates from the known elementary particles. The work suggests well-specified candidate descriptions for new elementary particles, dark matter, and dark energy forces. This part of the work does not depend on theories of motion. This work embraces symmetries that correlate with motion-centric conservation laws. The candidate descriptions seem to explain data that prior physics theory seems not to explain. Some of that data pertains to elementary particles. Our theory suggests relationships between masses of elementary particles. Our theory suggests a relationship between the strengths of electromagnetism and gravity. Some of that data pertains to astrophysics. Our theory seems to explain ratios of dark matter effects to ordinary matter effects. Our theory seems to explain aspects of galaxy formation. Some of that data pertains to cosmology. Our theory suggests bases for inflation and for changes in the rate of expansion of the universe. Generally, our work proposes extensions to theory in three fields. The fields are elementary particles, astrophysics, and cosmology. Our work suggests new elementary particles and seems to explain otherwise unexplained data.
[1731] vixra:2003.0426 [pdf]
Essential Spaces
We introduce the idea of Essential Spaces for 2-dimensional compact manifolds. We raise the question whether essential spaces also exist for 3-dimensional manifolds.
[1732] vixra:2003.0384 [pdf]
From Neutron and Quark Stars to Black Holes
New physics and models for the most compact astronomical objects - neutron / quark stars and black holes are proposed. Under the new supersymmetric mirror models, neutron stars at least heavy ones could be born from hot deconfined quark matter in the core with a mass limit less than $2.5 M_\odot$. Even heavier cores will inevitably collapse into black holes as quark matter with more deconfined quark flavors becomes ever softer during the staged restoration of flavor symmetry. With new understanding of gravity as mean field theories emergent from the underlying quantum theories for providing the smooth background spacetime geometry for quantum particles, the black hole interior can be described well as a perfect fluid of free massless Majorana fermions and gauge bosons under the new genuine 2-d model. In particular, the conformal invariance on a 2-d torus for the black hole gives rise to desired consistent results for the interior microphysics and structures including its temperature, density, and entropy. Conjectures for further studies of the black hole and the early universe are also discussed in the new framework.
[1733] vixra:2003.0367 [pdf]
An Introduction to Multivariate Expansion
We introduce the notion of an expansion in specified and mixed directions. This is a piece of an extension program of \textbf{single~variable~expansivity~theory} developed by the author.
[1734] vixra:2003.0318 [pdf]
Division by Zero Calculus in Ford Circles
We will refer to an application of the division by zero calculus in Ford circles, which have relations to some criteria of irrational numbers as covering problems, and to the Farey sequence $F_n$ for any positive integer $n$. Keywords: division by zero, division by zero calculus, $1/0=0/0=z/0=\tan(\pi/2) =0$, $[(z^n)/n]_{n=0} = \log z$, $[e^{(1/z)}]_{z=0} = 1$, Ford circle, Farey series, Farey intermediate number, packing by circles, criteria of irrational numbers.
[1735] vixra:2003.0295 [pdf]
The Compton Wavelength and the Relativistic Compton Wavelength Derived from Collision-Space-Time
In this paper, we show how one can find the Compton scattering formula and thereby also the Compton wavelength based on new concepts from collision-space-time. This gives us the standard Compton wavelength, but we go one step further and show how to find the relativistic Compton wavelength from Compton scattering as well. (That is, when the electron is also moving initially.) The original Compton formula only gives the electron’s rest-mass Compton wavelength, or we could call it the standing electron’s Compton wave.
[1736] vixra:2003.0282 [pdf]
The New Matrix Multiplication
In this article, we give the meaning of a 'New Multiplication' for matrices. We study the properties of this multiplication in two cases, the case of 2-D matrices and the case of 3-D matrices, with elements from an arbitrary field $F$.
[1737] vixra:2003.0280 [pdf]
Electromagnetic Duality From Biot-Savart Law
A stationary charge distribution in one inertial reference frame becomes a charge current in another inertial reference frame. The magnetic field described by Biot-Savart Law becomes a representation of the relative velocity between two inertial reference frames and the electric field described by Coulomb's law. The charge current in dielectric medium exhibits property similar to the light in the vacuum. Both are characterized by a pair of electric field and magnetic field in the transverse direction. The electromagnetic duality from Maxwell's equations is indeed a representation of Biot-Savart law.
[1738] vixra:2003.0244 [pdf]
Galileo's Experiment is Still Undone
Galileo’s classic thought experiment, in which he envisions a cannonball falling through the Earth, has been doable as a scaled-down real experiment for decades. This fact was the subject of an essay submitted to this Foundation five years ago. [1, 2] The apparatus needed for the experiment—a very simple thing, in principle—may be called a Small Low-Energy Non-Collider. Sadly, the experiment remains undone. Presently, I will more emphatically argue that the standard prediction for the experiment could be wrong. The reasons for not filling this gap in our empirical knowledge of gravity have little to do with physics and a lot to do with sociology. The most operative influence is our primitive concept of an unmoving Earth, whose modern incarnation is embodied by Einstein’s relativistic principles. Inspiration to question prevailing dogma is found in the perspective of an imaginary alien civilization.
[1739] vixra:2003.0219 [pdf]
A Theory of Twin Prime Generators
It is well known that every prime number $p \geq 5$ has the form $6k-1$ or $6k+1$. We call $k$ the generator of $p$. Twin primes are distinguished by a common generator for each pair. Therefore it makes sense to search for the twin primes on the level of their generators. The present paper develops a sieve method to extract all twin primes on the level of their generators. On this basis important properties of the set of the twin prime generators will be studied. Finally the Twin Prime Conjecture is proved based on the studied properties.
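The generator notion in this abstract is easy to make concrete: k is a twin-prime generator exactly when both 6k-1 and 6k+1 are prime. A minimal enumeration sketch (the trial-division primality test and the search bound are our own illustrative choices, not the paper's sieve):

```python
# A twin-prime generator is a k for which both 6k - 1 and 6k + 1 are prime,
# so (6k - 1, 6k + 1) forms a twin prime pair, e.g. k = 1 gives (5, 7).
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def twin_generators(limit):
    return [k for k in range(1, limit + 1)
            if is_prime(6 * k - 1) and is_prime(6 * k + 1)]

gens = twin_generators(12)  # [1, 2, 3, 5, 7, 10, 12]
```

Note that k = 4 is absent: 6*4 + 1 = 25 is composite, so (23, 25) is not a twin pair.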
[1740] vixra:2003.0206 [pdf]
CMOSpacetime: Geometric/Algebraic Complex Analysis of Intelligence/Quantum Entanglement/Convergent Evolution
No truth is truly true, the more we reveal the more we revere nature on our voyage of unprecedented discovery. We argue that the soul or anti-soul of Complex Multiscale Orbifold Spacetime (CMOSpacetime) in higher dimensional Non-Euclidean geometry is the origin of intelligence, and the metric of metrizable intelligence is the sectional curvature's absolute value of CMOSpacetime's soul or anti-soul. We also argue that the intersecting souls and/or anti-souls, when their sectional curvatures approach positive infinity and/or negative infinity as singularities, are the origin of quantum entanglement. We further argue that the sectional curvatures of CMOSpacetime's intersecting souls and/or anti-souls are the origin of convergent evolution through conformal transformation. We derive CMOSpacetime, an N-dimensional orbifold $\mathbb{O}=\mathbb{M}/\mathbb{F_g}$ ($\mathbb {M}$ as manifold)/degree N projective algebraic variety $\mathbb X$ over $\mathbb{C}^{N}$ defined by the degree N non-linear polynomial function $\mathbb{F_g}(X_1, ..., X_N) = \sum_{i,j=1}^N(w_iX_i^j+b_i)$ in a hypercomplex number system with $X = x_1 + \sum_{m=2}^{N}(x_mi_m)$ on the Non-Abelian quotient group \begin{math} SO(\frac{N}{2}, \frac{N}{2}) \end{math} (\begin{math} 8 \leq N \to \infty, N = 2^n \end{math}), neural networks, by correlating general relativity and quantum mechanics based on mutual extensions from 3+1 dimensional spacetime $\mathbb{R}^{4}$ to N-dimensional CMOSpacetime $\mathbb{C}^{N}$. CMOSpacetime addresses both singularity and non-linearity as common issues faced by physics, AI and biology, and enables curvature-based second order optimization in orbifold-equivalent neural networks beyond gradient-based first order optimization in the manifold-approximated neural networks adopted in AI.
We build CMOSpacetime theoretical framework based on General equivalence principle, a combination of Poincar\'{e} conjecture, Fermat's last theorem, Galois theory, Hodge conjecture, BSD conjecture, Riemann hypothesis, universal approximation theorem, and soul theorem. We also propose experiments on measuring intelligence of convolutional neural networks and transformers, as well as new ways of conducting Young's double-slit interference experiment. We believe that CMOSpacetime acting as a universal PDE, not only qualitatively and quantitatively tackles the black box puzzle in AI, quantum entanglement and convergent evolution, but also paves the way for CMOSpacetime synthesis to achieve true singularity.
[1741] vixra:2003.0205 [pdf]
Kantowski-Sachs Cosmology, Weyl Geometry and Asymptotic Safety in Quantum Gravity
A brief review of the essentials of Asymptotic Safety and the Renormalization Group (RG) improvement of the Schwarzschild Black Hole that removes the $ r = 0$ singularity is presented. It is followed with a RG-improvement of the Kantowski-Sachs metric associated with a Schwarzschild black hole interior and such that there is $no$ singularity at $ t = 0$ due to the running Newtonian coupling $ G ( t )$ (vanishing at $ t = 0$). Two temporal horizons at $ t _- \simeq t_P$ and $ t_+ \simeq t_H$ are found. For times below the Planck scale $ t < t_P$, and above the Hubble time $ t > t_H$, the components of the Kantowski-Sachs metric exhibit a key sign $change$, so the roles of the spatial $z$ and temporal $t$ coordinates are $exchanged$, and one recovers a $repulsive$ inflationary de Sitter-like core around $ z = 0$, and a Schwarzschild-like metric in the exterior region $ z > R_H = 2 G_o M $. The inclusion of a running cosmological constant $ \Lambda (t) $ follows. We proceed with the study of a dilaton-gravity (scalar-tensor theory) system within the context of Weyl's geometry that permits one to single out the expression for the $classical$ potential $ V (\phi ) = \kappa\phi^4$, instead of introducing it by hand, and find a family of metric solutions which are conformally equivalent to the (Anti) de Sitter metric. To conclude, an ansatz for the truncated effective average action of ordinary dilaton-gravity in Riemannian geometry is introduced, and a RG-improved Cosmology based on the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric is explored where, instead of resorting to the cutoff identification $ k = k ( t ) = \xi H ( t ) $, based on the Hubble function $ H (t)$, with $ \xi $ a positive constant, one now has $ k = k ( t ) = \xi \phi ( t ) $, where $ \phi $ is a positive-definite dilaton scalar field which is monotonically decreasing with time.
[1742] vixra:2003.0172 [pdf]
Are Gamma-ray Bursts Caused by Multiverses?
Multiverses may provide the causal mechanism for gamma-ray bursts (GRBs). Assume that differing clock rates prevent interaction between universes in a multiverse. Gravitational time dilation in one universe may allow a temporary connection to a slower universe. The formation and breaking of such a connection would produce neutrino emissions, gamma-ray emissions, and afterglow. This view derives from looking for a candidate that could connect universes, rather than looking for an explanation of GRBs. email: rlmarker@spaceandmatter.org
[1743] vixra:2003.0161 [pdf]
From Uncomputability to Quantum Consciousness to Quantum Gravity to Neutrinos Masses Measurement
In this essay, I first connect "uncomputability and unpredictability" with quantum consciousness, or more precisely, with a quantum decision. This means that I summarize my old model for quantum consciousness. Then, because consciousness is connected with information and because of panpsychism, I assume that all physics is information. The essence of all physics is also space, time, and matter. The theory that will explain all this mathematically and will find connections between them is quantum gravity (QG). Because physics should also be information, this theory should be informational and also simple. I present a scheme of a model for QG. Besides the informational aspect, it is important that QG will one day explain what matter, space, time, and the relations between them are. These are mysteries, maybe even larger than consciousness, but probably connected with it. Probably within five years the measurement of the masses of the neutrinos will be done, and with it I will check my model for QG. This is also one of the rare measurements that will check some theory of QG.
[1744] vixra:2003.0152 [pdf]
Analogy Between Special Relativity and Finite Mathematics
In our publications we have proposed an approach called finite quantum theory (FQT) when quantum theory is based not on complex numbers but on finite mathematics. We have proved that FQT is more general than standard quantum theory because the latter is a special degenerate case of the former in the formal limit $p\to\infty$ where $p$ is the characteristic of the ring or field in finite mathematics. Moreover, finite mathematics itself is more general than classical mathematics (involving the notions of infinitely small/large and continuity) because the latter is a special degenerate case of the former in the same limit. {\bf As a consequence, mathematics describing nature at the most fundamental level involves only a finite number of numbers while the notions of limit and infinitely small/large and the notions constructed from them (e.g. continuity, derivative and integral) are needed only in calculations describing nature approximately}. However, physicists typically are reluctant to accept those results although they are natural and simple. We argue without formulas that there is a simple analogy between the above facts and the fact that special relativity is more general than nonrelativistic mechanics because the latter is a special degenerate case of the former in the formal limit $c\to\infty$.
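The degenerate-limit claim can be illustrated with a toy computation: when operands and results are small relative to the characteristic p, arithmetic in Z/pZ, read through balanced representatives, coincides with ordinary integer arithmetic. A minimal sketch (the balanced-representative convention and the specific prime are our own illustrative choices, not the FQT formalism):

```python
# Balanced representative of n mod p: the unique congruent value in (-p/2, p/2].
def balanced_mod(n, p):
    r = n % p  # Python's % always returns a value in [0, p)
    return r - p if r > p // 2 else r

p = 10**9 + 7  # a large prime characteristic
a, b = 123456, -987
# For operands small relative to p, mod-p arithmetic reproduces integer arithmetic;
# the distinction only appears once results approach the size of p.
assert balanced_mod(a * b, p) == a * b
assert balanced_mod(a + b, p) == a + b
```

This is only an arithmetic analogy for the "special degenerate case in the limit p to infinity" statement, not a model of the quantum-theoretic content.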
[1745] vixra:2003.0150 [pdf]
The Relativistic Rydberg's Formula in Greater Depth and for Any Atom
K. Suto has recently pointed out an interesting relativistic extension of Rydberg's formula. Here we also discuss Rydberg's formula, and offer additional evidence on how one can easily see that it is nonrelativistic and therefore a good approximation, at best, when v<<c. We also extend the Suto formula to hold for any atom and examine the formula in detail.
[1746] vixra:2003.0128 [pdf]
Galileo's Undone Gravity Experiment: Part 1
Galileo’s classic thought experiment, in which he envisions a cannonball falling through the Earth, has been doable as a scaled-down real experiment for decades. Yet it remains undone. The reasons for not filling this gap in our empirical knowledge of gravity have little to do with physics and a lot to do with sociology. The influences go back to humans’ primitive concepts of an unmoving Earth, whose modern incarnations are embodied by Albert Einstein’s “relativistic” principles. An imaginary alien (Rotonian) perspective is adopted, whereby these ancient Earthian predilections are all questioned. Even the (3 + 1)-dimensionality of space is questioned. When Rotonians visit an astronomical body for the first time, their instinctive belief in accelerometer readings leads them to a gravitational hypothesis (Space Generation Model) according to which matter is the source of space. They conceive the essence of gravity as the process whereby matter regenerates itself and creates new space. They conceive the process as the outward motion OF space into a fourth spatial dimension; matter is thus seen as an inexhaustible source of perpetual propulsion. It is this stationary motion that causes the curvature of spacetime. The hypothesis is developed in detail with respect to local physics and a chart is plotted for a more in depth application to cosmological issues, as promised for Part 2. It is repeatedly urged that, of much greater importance than discussing these issues, is the need to at last do Galileo’s experiment.
[1747] vixra:2003.0108 [pdf]
Einstein Dual Theory of Relativity
This paper is a comparison of the Minkowski, Einstein and Einstein dual theories of relativity. The dual is based on an identity relating the observer time and the proper time as a contact transformation on configuration space, which leaves phase space invariant. The theory is dual in that, for a system of n particles, any inertial observer has two unique sets of global variables (X, t) and (X, τ) to describe the dynamics, where X is the (unique) canonical center of mass. In the (X, t) variables, time is relative and the speed of light is unique, while in the (X, τ) variables, time is unique and the speed of light is relative with no upper bound. The two sets of particle and Maxwell field equations are mathematically equivalent, but the particle wave equations are not. The dual version contains an additional longitudinal radiation term that appears instantaneously with acceleration, and we predict that radiation from a betatron (of any frequency) will not produce photoelectrons. The theory does not depend on the nature of the force, and the Wheeler-Feynman absorption hypothesis becomes a corollary. The homogeneous and isotropic nature of the universe is sufficient to prove that a unique definition of Newtonian time exists with zero set at the big bang. The isotopic dual of R is used to improve the big bang model, by providing an explanation for the lack of antimatter in our universe, a natural arrow for time, and conservation of energy, momentum and angular momentum. This also solves the flatness and horizon problems without inflation. We predict that matter and antimatter are gravitationally repulsive and that experimental data from distant sources cannot be given a unique physical interpretation. We provide a table showing the differences between the Minkowski, Einstein and dual versions of the special theory.
[1748] vixra:2003.0105 [pdf]
Symmetries in Foundation of Quantum Theory and Mathematics
In standard quantum theory, symmetry is defined in the spirit of Klein's Erlangen Program: the background space has a symmetry group, and the basic operators should commute according to the Lie algebra of that group. We argue that the definition should be the opposite: background space has a direct physical meaning only on classical level while on quantum level symmetry should be defined by a Lie algebra of basic operators. Then the fact that de Sitter symmetry is more general than Poincare one can be proved mathematically. The problem of explaining cosmological acceleration is very difficult but, as follows from our results, there exists a scenario that the phenomenon of cosmological acceleration can be explained proceeding from basic principles of quantum theory. The explanation has nothing to do with existence or nonexistence of dark energy and therefore the cosmological constant problem and the dark energy problem do not arise. We consider finite quantum theory (FQT) where states are elements of a space over a finite ring or field with characteristic $p$ and operators of physical quantities act in this space. We prove that, with the same approach to symmetry, FQT and finite mathematics are more general than standard quantum theory and classical mathematics, respectively: the latter theories are special degenerated cases of the former ones in the formal limit $p\to\infty$.
[1749] vixra:2003.0101 [pdf]
The Absolute Frame of Reference
This manuscript derives the formula to calculate the real velocity of the Earth and the maximum velocity in the universe; the values of these quantities after calculation are as follows: V_earth = 1.852819296×10^8 m/s and C_max = 4.8507438399×10^8 m/s. In order to calculate the above results, the manuscript builds a reference frame transformation suitable for all types of motion (both linear motion and chaotic motion of the reference frame); this means that we can calculate the velocity of an object without using the distance S of the object.
[1750] vixra:2003.0090 [pdf]
Optimal Binary Number System When Numbers Are Energy?
In this short note, we will quickly look at optimal binary number systems used in communication (or transactions) under the assumption that one must use energy to give away (send) numbers. We show that the current binary system is not the optimal binary number system as it can be arbitraged. We also show that there exist other optimal binary number systems in such a scenario. Naturally, one has to ask, ``Optimal for whom? -- For the one sending the number out, or for the one receiving the number?'' Alternatively, we can have a binary number system that, on average, is neutral for both sender and receiver. Numbers are typically only considered to have symbolic value, but if the money units were so small that they came in the smallest possible energy units, then we could be forced to switch to a number system where the physical value of each number was equal to its symbolic value. That is to say, the physical value of three must be higher than the physical value of two, for example. Numbers are always physical because storing or sending a number from a computer requires bits, and bits of information require energy.
[1751] vixra:2003.0071 [pdf]
Ankur Tiwari's Great Discovery of the Division by Zero $1/0 = \tan (\pi/2) = 0$ in 2011
We received important information on Ankur Tiwari's great discovery of the division by zero $1/0 = \tan (\pi/2) = 0$ in 2011. Since this information was not known to us or among many colleagues, we would like to state our opinions on his great discovery.
[1752] vixra:2003.0066 [pdf]
An Elementary Proof of Goldbach's Conjecture
Goldbach's conjecture is proven using the Chinese Remainder Theorem. It is shown that an even number 2N greater than four cannot exist if it is congruent to every prime p less than N (mod a different prime number).
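The conjecture is easy to probe numerically, independent of the paper's Chinese Remainder Theorem argument. A minimal brute-force sketch (the helper names `is_prime` and `goldbach_pairs` are chosen here for illustration, not taken from the paper):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_pairs(even_n):
    """All decompositions even_n = p + q with p <= q and both prime."""
    return [(p, even_n - p) for p in range(2, even_n // 2 + 1)
            if is_prime(p) and is_prime(even_n - p)]

# Every even number greater than 4 in this range has at least one pair.
assert all(goldbach_pairs(n) for n in range(6, 2000, 2))
print(goldbach_pairs(28))  # [(5, 23), (11, 17)]
```

Such a check obviously verifies only finitely many cases; it illustrates the statement being claimed, not the proof.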
[1753] vixra:2003.0050 [pdf]
A Generator for Sums of Powers of Recursive Integer Sequences
In this paper we will prove a relationship for sums of powers of recursive integer sequences. Also, we will give a possible path to discovery. As corollaries of the main result we will derive relationships for familiar integer sequences like the Fibonacci, Lucas, and Pell numbers. Last, we will discuss some applications and point to further work.
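As a flavor of the kind of relationship involved, the classical sum-of-squares identity for the Fibonacci numbers, $\sum_{i=1}^{n} F_i^2 = F_n F_{n+1}$, can be checked numerically (this well-known special case is offered for illustration; it is not claimed to be the paper's general result):

```python
def fib(n):
    """n-th Fibonacci number, F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The sum of squared Fibonacci numbers telescopes to F_n * F_{n+1}.
for n in range(1, 30):
    assert sum(fib(i) ** 2 for i in range(1, n + 1)) == fib(n) * fib(n + 1)
```

Analogous identities hold for the Lucas and Pell numbers, which satisfy the same style of second-order recurrence.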
[1754] vixra:2003.0039 [pdf]
Cross-Language Substitution Cipher: An Approach of the Voynich Manuscript
The Voynich Manuscript (VMS) is an illustrated hand-written document carbon-dated in the early 15th century. This paper aims at providing a statistically robust method for translating voynichese, the language used in the VMS. We will first provide a set of statistical properties that can be applied to any tokenizable language with sub-token elements, apply it to Universal Dependencies (UD) dataset plus VMS (V101 transliteration) to see how it compares to the 157 corpora written in 90 different languages from UD. In a second phase we will provide an algorithm to map characters from one language to characters from another language, and we will apply it to the 158 corpora we have in our possession to measure its quality. We managed to attack more than 60% of UD corpora with this method though results for VMS don't appear to be usable.
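The paper's mapping algorithm is statistical and considerably more elaborate; the simplest conceivable baseline for character-to-character mapping between two corpora is pairing characters by frequency rank. A hedged sketch (function name and behavior are illustrative only):

```python
from collections import Counter

def rank_map(source_text, target_text):
    """Naive cross-language substitution: pair characters by frequency rank.
    A crude stand-in for a proper statistical mapping algorithm."""
    src = [c for c, _ in Counter(source_text).most_common()]
    tgt = [c for c, _ in Counter(target_text).most_common()]
    return dict(zip(src, tgt))

mapping = rank_map("abbccc", "xyyzzz")
print(mapping)  # {'c': 'z', 'b': 'y', 'a': 'x'}
```

Rank pairing ignores context and sub-token structure, which is precisely what a robust method, such as the one the paper aims at, must exploit.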
[1755] vixra:2003.0030 [pdf]
Cosmic Dark Matter Density and the Tessellated 3-sphere
The tessellation of space is considered for both the 2-sphere and the 3-sphere. As hypothesized in an earlier work, it is found that there is a dark matter density $\Omega_{DM} = 0.284 \pm 0.137$ associated with the curvature of the 3-sphere.
[1756] vixra:2003.0008 [pdf]
On the Erd\h{o}s Distance Problem
In this paper, using the method of compression, we recover the lower bound for the Erd\H{o}s unit distance problem and provide an alternative proof to the distinct distance conjecture. In particular, we show that for sets of points $\mathbb{E}\subset \mathbb{R}^k$ concentrated around the origin with $\# \mathbb{E}\cap \mathbb{N}^k=\frac{n}{2}$, we have \begin{align}\# \bigg\{||\vec{x_j}-\vec{x_t}||:\vec{x_j}\in \mathbb{E}\subset \mathbb{R}^k,~||\vec{x_j}-\vec{x_t}||=1,~1\leq t,j \leq n\bigg\}\gg_k \frac{\sqrt{k}}{2}n^{1+o(1)}.\nonumber \end{align}We also show that\begin{align}\# \bigg\{d_j:d_j=||\vec{x_s}-\vec{y_t}||,~d_j\neq d_i,~1\leq s,t\leq n\bigg\}\gg_k \frac{\sqrt{k}}{2}n^{\frac{2}{k}-o(1)}.\nonumber \end{align}
[1757] vixra:2002.0595 [pdf]
Reexamining the Thomas Precession
We review the Thomas precession, exhibiting the exact form of the Thomas rotation in the axis-angle parameterization. Assuming three inertial frames $S, S', S''$ moving with arbitrary velocities and with $S, S''$ having their axes parallel to the axes of $S'$, we focus our attention on the two essential elements of the Thomas precession, i.e., (i) there is a rotation between the axes of frames $S$, $S''$ and (ii) the combination of two Lorentz transformations from $S$ to $S'$ and from $S'$ to $S''$ fails to produce a pure Lorentz transformation from $S$ to $S''$. The physical consequence of (i) and (ii) refers to the impossibility of having arbitrary frames $S, S', S''$ moving with their axes mutually parallel. Then, we reexamine the validity of (i) and (ii) under the conjecture that time depends on the state of motion of the frames, and we show that the Thomas precession assumes a different form from that formulated in (i) and (ii).
[1758] vixra:2002.0594 [pdf]
Universal Cycles and Derived Cycles in Generalized 3x+1
In Collatz-Kakutani sequences that are generalized to px + q, the beginning x and the end y of a sequence are connected by a Diophantine equation p^m x - 2^d y + qc = 0, where m and d are the numbers of multiplications and divisions. There is a cycle (x = y) if δ (= 2^d - p^m) divides qc. It is shown that all c are included in parametric rotation cycles (c_1 c_2 ... c_m) for px + δ, and that the rare numerical cycles (x_1 x_2 ... x_m) derive from them when x_i = qc_i / δ are integers. The universal cycles are purely algebraic, but the derived cycles result from a numerical coincidence. Assuming that the possible values of qc mod δ are equiprobable, a formula is given for the occurrence probability of a derived cycle.
[1759] vixra:2002.0538 [pdf]
Revisiting Mu and Tau as Excitations of the Electron in Light of CLFV
Charged lepton flavor violation (CLFV) is an interesting phenomenon to investigate in going beyond the Standard Model (BSM). This direction of investigation also inspires a new look at the idea of mu and tau being excitations of the electron. For this, the electron is required to have a substructure that is held together by some potential. However, even the simplest model of a two-body substructure has several troubling issues. First, a relativistically covariant formulation of such a bound system is non-trivial. However, this has been resolved in the past in a different context. Second, a consistent field theory of composite objects is needed to handle this model of leptons with substructure. This has also been done in the past in a different context. Third, the large observed mass ratios of the three charged leptons rule out binding potentials that depend only on the relative positions of constituents. Here it is shown that a concept similar to the "running coupling constant" of strong interactions generates a model that fits these ratios very well.
[1760] vixra:2002.0531 [pdf]
On a Function Modeling an L-Step Self Avoiding Walk
We introduce and study the needle \begin{align}(\Gamma_{\vec{a}_1} \circ \mathbb{V}_m)\circ \cdots \circ (\Gamma_{\vec{a}_{\frac{l}{2}}}\circ \mathbb{V}_m):\mathbb{R}^n\longrightarrow \mathbb{R}^n.\nonumber \end{align} By exploiting the geometry of compression, we prove that this function models an $l$-step self-avoiding walk for $l\in \mathbb{N}$. We show that the total length of the $l$-step self-avoiding walk modeled by this function is of the order \begin{align}\ll \frac{l}{2}\sqrt{n}\bigg(\max\{\sup(x_{j_k})\}_{\substack{1\leq j\leq \frac{l}{2}\\1\leq k\leq n}}+\max\{\sup(a_{j_k})\}_{\substack{1\leq j\leq \frac{l}{2}\\1\leq k\leq n}}\bigg)\nonumber \end{align}and at least \begin{align}\gg \frac{l}{2}\sqrt{n}\bigg(\min\{\inf(x_{j_k})\}_{\substack{1\leq j\leq \frac{l}{2}\\1\leq k\leq n}}+\min\{\inf(a_{j_k})\}_{\substack{1\leq j\leq \frac{l}{2}\\1\leq k\leq n}}\bigg).\nonumber \end{align}
[1761] vixra:2002.0523 [pdf]
Derivation of a Relativistic Compton Wave
In 1923, Arthur Holly Compton introduced what today is known as the Compton wave. Even if Compton's scattering derivation is relativistic in the sense that it takes into account the momentum of photons traveling at the speed of light, the original derivation indirectly assumes that the electron is stationary at the moment it is struck by the photon, though not after it has been hit. Here, we extend this to derive Compton scattering for the case where the electron is initially moving at a velocity v. This gives us a relativistic Compton wave, something we remarkably have not seen published before.
[1762] vixra:2002.0508 [pdf]
On Certain Finite Sums of Inverse Tangents
An identity is proved connecting two finite sums of inverse tangents. This identity is a discretized version of Jacobi's imaginary transformation for the modular angle from the theory of elliptic functions. Some other related identities are discussed.
[1763] vixra:2002.0481 [pdf]
IFSα-Open Sets in Intuitionistic Fuzzy Topological Space
The aim of this paper is to introduce the concept of IFSα-open sets. We discuss the relationship between this type of open set and other existing open sets in intuitionistic fuzzy topological spaces. We also introduce a new class of closed sets, namely IFSα-closed sets, and study their properties.
[1764] vixra:2002.0466 [pdf]
New Clues on Arbitrary-Precision Calculation of the Riemann Zeta Function On The Critical Line
The Riemann Hypothesis, considered by many mathematicians to be the most important unsolved problem, consists in the assertion that all of zeta's nontrivial zeros line up on the so-called critical line, $\zeta(1/2+it)$. This paper presents an algorithm, based on a closed-form system of equations, that computes each nontrivial zero of the Riemann zeta function directly to the $n^{th}$ decimal digit.
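The abstract does not reproduce the closed-form system, so as a point of comparison, here is a standard numerical route: compute $\zeta(s)$ on the critical line from the Dirichlet eta series, $\zeta(s)=\eta(s)/(1-2^{1-s})$, accelerating the alternating tail by repeated averaging (Euler summation), then scan for the first zero. A hedged sketch (parameter choices are illustrative, not tuned):

```python
def eta(s, direct=40, tail=30):
    """Dirichlet eta: sum the first `direct` terms, then Euler-accelerate the tail."""
    head = sum((-1) ** (k - 1) * k ** (-s) for k in range(1, direct + 1))
    partial, ps = 0, []
    for k in range(direct + 1, direct + tail + 1):
        partial += (-1) ** (k - 1) * k ** (-s)
        ps.append(partial)
    while len(ps) > 1:  # repeated averaging of partial sums = Euler summation
        ps = [(a + b) / 2 for a, b in zip(ps, ps[1:])]
    return head + ps[0]

def zeta(s):
    """Analytic continuation via eta, valid for Re(s) > 0, s != 1."""
    return eta(s) / (1 - 2 ** (1 - s))

# Locate the first nontrivial zero by scanning |zeta(1/2 + it)| on a grid.
ts = [14 + i / 1000 for i in range(300)]
best = min(ts, key=lambda t: abs(zeta(0.5 + 1j * t)))
print(best)  # close to 14.134725, the imaginary part of the first zero
```

This grid scan gives a few correct digits; producing the n-th decimal digit directly, as the paper claims, is a much stronger requirement.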
[1765] vixra:2002.0456 [pdf]
The Daon Theory: Fundamental Electromagnetics
We here present {\bf The Daon Theory}: it is the first real Theory of Everything, which means that all fundamental physics is explained, all fundamental constants are calculated, and all results agree with experiments. The explanations are simple and logical since we use 3D+time, which leads to easy mathematics. We present here the explanation of all fundamental electromagnetic phenomena: Electricity, Magnetism, Induction, and Electro-Magnetic waves.
[1766] vixra:2002.0454 [pdf]
The Daon Theory: The Atom
The {\bf Daon theory} is a new general theory of physics, it is a completely new way to approach physics and includes, in principle, all phenomena of nature. The theory is presented in a series of closely related papers treating Electromagnetism, Atomic physics, Relativity, Particle physics, Gravitation and Cosmology. The numerical value of the main natural constants and parameters are calculated, while the explanations for the various natural phenomena are simple and logical. All the results from this theory agree, as far as we know, with experimental data. In this part, we present some unknown forces acting within the atoms, giving the explanation to the strange behaviour of the electrons. We present a simulation program (ATOMOL) able to follow all charged particles within an atom, including their velocity, position and energy at any given moment. We explain the electron's associated wave, the fine structure constant, the constant of Planck and we explain and calculate the magnetic moment of the electron and the nucleus.
[1767] vixra:2002.0452 [pdf]
Tions Between Matter and Velocity
The {\bf Daon theory} is a new general theory of physics; it is a completely new way to approach physics and includes, in principle, all phenomena of nature. The theory is presented in a series of closely related papers treating Electromagnetism, Atomic physics, Relativity, Particle physics, Gravitation and Cosmology. They should be read in this order for a complete understanding. The explanations for the various natural phenomena are simple and logical. All the results from this theory agree, as far as we know, with experimental data. In this document is presented an analysis of changes within an atom at high velocities. It is strongly recommended to first read \cite{1} and \cite{2} of the Daon Theory for a complete understanding.
[1768] vixra:2002.0415 [pdf]
TF-PSST: A Spatio-Temporal Scheduling Approach for Multi-FPGA Parallel Heterogeneous Architecture in High Performance Computing
This work is a proposed architectural prototype in the field of High Performance Computing (HPC). Intel Altera DE4 and Altera DE5a-Net FPGA boards were used as functional processors in our designed system. We further explore Peripheral Component Interconnect Express (PCIe) communication and amalgamate the transfer of data through PCIe to two different kinds of FPGAs at the same time using a proposed scheduling algorithm called TF-PSST: Time First Power Second Scheduling Technique. This significantly improves the efficiency of the system by reducing execution time, and because of the heterogeneous nature of the architectural prototype, we also found a way to increase hardware resource utilisation.
[1769] vixra:2002.0396 [pdf]
Riemann Hypothesis and the Zeroes of Riemann Zeta Function
The proof involves the analytic continuation of the Riemann zeta function. Further, we work on the Hadamard product representation of the Riemann Xi function to prove the Riemann Hypothesis.
[1770] vixra:2002.0390 [pdf]
A New Probability Distribution and Its Application in Modern Physics
In this paper we present a new symmetric probability distribution and its properties, and we show that it is not a uniform distribution by means of standard tests such as the Kolmogorov-Smirnov test. We also show that it is derived from another new special function by adjusting it using the mean and deviation as two parameters. In the second section we show that the PDF represents a wave function using a rescaled plasma dispersion function, which we define as the position of a massive particle in such a charged quantum system.
[1771] vixra:2002.0368 [pdf]
The Relative Risk Is Logically Inconsistent
Many different measures of association are used in the medical literature; the relative risk is one of these measures. However, to judge whether the results of studies are reliable, it is essential to use, among others, measures of association which are logically consistent. In this paper, we will show how to deal with one of the most commonly used measures of association, the relative risk. The conclusion is inescapable that the relative risk is logically inconsistent and should not be used any longer.
[1772] vixra:2002.0366 [pdf]
Division by Zero Calculus for Differentiable Functions in Multiple Dimensions
Based on the preprint survey paper, we will give a fundamental relation among the basic concepts of division by zero calculus and derivatives as a direct extension of the preprints which gave the generalization of the division by zero calculus to differentiable functions. Here, we will consider the case of multiple dimensions. In particular, we will find a new viewpoint and applications to the gradient and nabla.
[1773] vixra:2002.0363 [pdf]
Nle-Lepton V4.2: Software to Find Polynomial-Like Formulas for Fermion Masses
nle-lepton is a software program that searches for polynomial-like non-linear equations with three real, positive roots representing the charged lepton masses. A formula of this type might explain why there are three generations of ordinary matter and give insight into the underlying physics of fermion Higgs field Yukawa couplings.
[1774] vixra:2002.0348 [pdf]
Correction to Maxwell's Equations
Maxwell's equations are examined specifically with a segment of electric current under Biot-Savart law and a single charge under Gauss' law. The line integral of the magnetic field is verified to be different from the surface integral of the curl of magnetic field because the magnetic field of Biot-Savart law diverges. Faraday's induction law is examined by inserting a capacitor into the coil loop to measure the voltage. The electric field inside the capacitor is directly proportional to the time derivative of magnetic flux. An optional capacitor is also attached to the end of the straight segment of electric wire. The time derivative of the electric field inside the capacitor is verified to be proportional to the line integral of the magnetic field from the electric current. Multiple corrections are made to Maxwell's equations.
[1775] vixra:2002.0340 [pdf]
Structure Model of Helium Nucleus-4
The atomic nuclei have been structured through two fundamental phenomena: the inverse electric field of the proton and the electric entity of the macroscopically neutral neutron. Specifically, the above inverse field causes the nuclear force and the nuclear antigravity one. These forces, along with the experimental constants of the spin, the magnetic moment and the mass deficit of the nucleons, are the fundamental elements that have created the deuterium, the tritium, the helium-3 and the helium-4. This last nucleus, the helium-4, is the most stable in Nature, and with it all the nuclei of the periodic table have been constructed in the cores of the stars.
[1776] vixra:2002.0287 [pdf]
Four Dimensional Localisation with Motivic Neutrinos
Quantum gravity traditionally begins with path integrals for four dimensional spacetimes, where the subtlety is in smooth structures. From a motivic perspective, the same diagrams belong to ribbon categories for quantum computation, based on algebraic number fields. Here we investigate this divide using the principle of the neutrino CMB correspondence, which introduces a mirror pair of ribbon diagrams for each Standard Model state. Categorical condensation for gapped boundary systems in extended quantum double models extends the modular structure to encompass Kirby diagrams.
[1777] vixra:2002.0262 [pdf]
Supersymmetric Mirror Models and Dimensional Evolution of Spacetime
A dynamic view is conjectured for not only the universe but also the underlying theories in contrast to the convectional pursuance of a single unification theory. As the 4-d spacetime evolves dimension by dimension via the spontaneous symmetry breaking mechanism, supersymmetric mirror models consistently emerge one by one at different energy scales and scenarios involving different sets of particle species and interactions. Starting from random Planck fluctuations, the time dimension and its arrow are born in the time inflation process as the gravitational strength is weakened under a 1-d model of a ``timeron'' scalar field. The ``timeron'' decay then starts the hot big bang and generates Majorana fermions and $U(1)$ gauge bosons in 2-d spacetime. The next spontaneous symmetry breaking results in two space inflaton fields leading to a double space inflation process and emergence of two decoupled sectors of ordinary and mirror particles. In fully extended 4-d spacetime, the supersymmetric standard model with mirror matter before the electroweak phase transition and the subsequent pseudo-supersymmetric model due to staged quark condensation as previously proposed are justified. A set of principles are postulated under this new framework. In particular, new understanding of the evolving supersymmetry and $Z_2$ or generalized mirror symmetry is presented.
[1778] vixra:2002.0256 [pdf]
Assuming $c<R\exp\left(\frac{3\sqrt[3]{2}}{2}\log^{2/3}R\right)$, a New Conjecture Implies the abc Conjecture Is True
In this paper about the abc conjecture, we propose a new conjecture about an upper bound for $c$: $c<R\exp\left(\frac{3\sqrt[3]{2}}{2}\log^{2/3}R\right)$. Assuming this condition holds, we give a proof of the abc conjecture by proposing the expression of the constant $K(\epsilon)$; we then prove that $\forall \epsilon>0$, for $a,b,c$ positive integers relatively prime with $c=a+b$, we have $c< K(\epsilon)\,\mathrm{rad}^{1+\epsilon}(abc)$. Some numerical examples are given.
[1779] vixra:2002.0255 [pdf]
Resolution of the St. Petersburg Paradox and Improvement of Pricing Theory
The St. Petersburg Paradox was proposed two centuries ago. In this paper we propose a new pricing theory with several rules to resolve the paradox, and we argue that fair pricing should be judged by buyer and seller independently. The proposed pricing theory can be applied to financial markets to resolve the confusion with fat tails.
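For context, the paradox arises because the classical game pays $2^k$ with probability $2^{-k}$, so each round contributes exactly 1 to the expectation and the expected payoff diverges as the number of rounds is uncapped. A quick sketch of this standard setup (the function name is illustrative; this is the textbook game, not the paper's pricing rules):

```python
def st_petersburg_ev(max_rounds):
    """Expected payoff of the truncated St. Petersburg game:
    payoff 2**k with probability 2**-k, for k = 1..max_rounds."""
    return sum(2 ** k * 2.0 ** -k for k in range(1, max_rounds + 1))

# Each round contributes exactly 1, so the expectation grows without
# bound as the cap is lifted -- the heart of the paradox.
print(st_petersburg_ev(10), st_petersburg_ev(40))  # 10.0 40.0
```

Any resolution, whether via utility functions or a pricing theory like the paper's, must explain why no one would pay this unbounded expectation.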
[1780] vixra:2002.0202 [pdf]
The Possibility to Explain Dark Matter Without Need of Actual Matter is Still Open
By introducing the concept of virtual terms as pure mathematical insertions into the laws of nature, made by hand, the author tries to explain the Dark Matter anomaly.
[1781] vixra:2002.0190 [pdf]
Atiyah's Physics-Mathematics Unification Confirms the Permanent Flickering Cosmology
The Permanent Oscillatory Cosmology is confirmed by 75 formulas giving the Hubble radius, with 7 correlating to 10^−9. The computer shows the best formula, obtained using the Atiyah constant and the number 137, Eddington’s integer part of the electric constant. This conforms with Atiyah’s testimony about the Physics-Mathematics unification and the central role of arithmetic in this unification process. The identification with the Eddington statistical formula gives G, compatible with the 10^−5 precise BIPM measurement and the 10^−6 precise sun-quasar non-Doppler Kotov period. The hypothesis of a computing Cosmos implies a π rationalization process which validates Wyler’s theory and the fermion Koide formula in the 10^−9 domain.
[1782] vixra:2002.0186 [pdf]
A Constraint Based K-Shortest Path Searching Algorithm for Software Defined Networking
Software Defined Networking (SDN) is a concept in the area of computer networks in which the control plane and data plane of traditional computer networks are separated, as opposed to the mechanism in conventional routers and switches. SDN aims to provide a central control mechanism in the network through a controller known as the SDN Controller. The Controller then makes use of various southbound Application Programming Interfaces (APIs) to connect to the physical switches located on the network and pass on the control information, which is used to program the data plane. The SDN Controller also exposes several northbound APIs to connect to the applications which can leverage the controller to orchestrate the network. The controller used in this paper is the Open Network Operating System (ONOS), on which the algorithm in question is to be deployed. ONOS provides several APIs which are leveraged to connect the application to the network devices. The typical network path between any two endpoints is a shortest path fulfilling a set of constraints. The algorithm developed here is for optimal K-shortest path searching in a given network satisfying specified constraints, controlled by ONOS.
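ONOS applications are typically written in Java against its northbound APIs; the core idea of constrained k-shortest-path search can nevertheless be sketched language-independently. Below is a minimal Python illustration (exhaustive best-first enumeration of loopless paths with a hop-count constraint; the function and graph are hypothetical, not the paper's algorithm or the ONOS API):

```python
import heapq

def k_shortest_paths(graph, src, dst, k, max_hops=None):
    """Enumerate the k cheapest loopless paths from src to dst, optionally
    constrained by hop count. `graph` maps node -> {neighbor: cost}.
    Best-first search over simple paths: fine for a sketch, exponential
    in the worst case, so not suited to large topologies."""
    heap = [(0, [src])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            found.append((cost, path))
            continue
        if max_hops is not None and len(path) - 1 >= max_hops:
            continue  # constraint: do not extend paths past the hop budget
        for nbr, w in graph.get(node, {}).items():
            if nbr not in path:  # keep paths loopless
                heapq.heappush(heap, (cost + w, path + [nbr]))
    return found

g = {"a": {"b": 1, "c": 4}, "b": {"c": 1, "d": 5}, "c": {"d": 1}}
print(k_shortest_paths(g, "a", "d", 2))  # [(3, ['a', 'b', 'c', 'd']), (5, ['a', 'c', 'd'])]
```

Production SDN deployments would instead use Yen's algorithm or a similar polynomial method, with constraints (bandwidth, latency, hop count) applied as path filters.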
[1783] vixra:2002.0180 [pdf]
Australia in Flames
Global warming is not only dangerous, it may bring some localized benefits too. However, more than half of the world's population lives more or less within 60 km of the sea. Sea levels are rising, extreme weather is becoming more frequent and intense, and people may be forced to move. The lack of supply of fresh water will compromise hygiene and increase the risk of water-borne diseases and diseases transmitted through insects. Even if some are likely to be more vulnerable than others, in the end the assumption is justified that all populations will be affected by a dramatic climate change. Global warming constitutes a global health threat too; e.g., more than 70,000 additional deaths were documented in Europe during the summer of 2003. Of course, it’s clear that the true problems of human mankind are much deeper -- deeper than the consumption of a huge proportion of the planet’s natural resources or shortages in long-lasting values, deeper even than war or evolutionary disasters. Those who don't reach out to listen to the terrible silent screams of the already extinct are themselves doomed to extinction. Keywords: hurricane, climate change --- anti dot --- measures --- quick help
[1784] vixra:2002.0179 [pdf]
Some Recent Aspects of Developments of Chern-Simons Gauge Field Theories
In this chapter, we present the basic elements of Chern-Simons theory and then review some recent aspects of developments in Chern-Simons gauge field theory as a topological quantum field theory on a three-manifold.
[1785] vixra:2002.0178 [pdf]
Optimal Metamodeling to Interpret Activity-Based Health Sensor Data
Wearable sensors are revolutionizing the health monitoring and medical diagnostics arena. Algorithms and software platforms that can convert the sensor data streams into useful/actionable knowledge are central to this emerging domain, with machine learning and signal processing tools dominating this space. While serving important ends, these tools are not designed to provide functional relationships between vital signs and measures of physical activity. This paper investigates the application of the metamodeling paradigm to health data to unearth important relationships between vital signs and physical activity. To this end, we leverage neural networks and a recently developed metamodeling framework that automatically selects and trains the metamodel that best represents the data set. A publicly available data set is used that provides the ECG data and the IMU data from three sensors (ankle/arm/chest) for ten volunteers, each performing various activities over one-minute time periods. We consider three activities, namely running, climbing stairs, and the baseline resting activity. For the following three extracted ECG features – heart rate, QRS time, and QR ratio in each heartbeat period – models with median error of <25% are obtained. Fourier amplitude sensitivity testing, facilitated by the metamodels, provides further important insights into the impact of the different physical activity parameters on the ECG features, and the variation across the ten volunteers.
[1786] vixra:2002.0161 [pdf]
Absolute Velocity and Total Stellar Aberration (II)
In this paper, we will show that in addition to measuring annual and diurnal stellar aberration, it is also possible to directly measure the angle of secular aberration caused by the motion of the solar system relative to other stars. In the manuscript [1] we dealt with this problem and gave a short description of a special telescope. Using such a telescope we would be able to measure the exact position of cosmic objects and thus eliminate errors that occur due to stellar aberration. Assuming that the tube of the telescope is filled with some optical medium [2], we will show that this does not significantly affect the measurement of the stellar aberration angle, but also that these differences are still large enough to enable us to determine the velocity at which the solar system moves relative to the other stars.
[1787] vixra:2002.0145 [pdf]
Disruptive Gravitation
Viewing gravity as a spacetime bending force instead of just a spacetime curvature, we come to the conclusion of inertial mass relativity since it yields equivalent equations as General Relativity. A close analysis of the Schwarzschild metric leads us naturally to the Vacuum Energy Invariance principle from which we derive the metric equation. Applying this theory to cosmology, we can explain the acceleration of the universe expansion in a way that doesn't require Dark Energy. This theory has the same predictive power as General Relativity for every local experimental tests of the latter since it's based on a slight modification of the Schwarzschild metric.
[1788] vixra:2002.0120 [pdf]
Half Truths and Rant Against ECO Paradigm by Chandra Prakash
One Chandra Prakash of unknown affiliation has published his maiden preprint (vixra.org/abs/2001.0501) entitled ``Abhas Mitra and Eternally Collapsing Objects: A Review of 22 Years of Misconceptions''[1]. Although there are 20 odd peer reviewed journal papers on various aspects of the ECO paradigm, Prakash cites only one old paper[2] and claims to have debunked the entire research by Mitra, Leiter, Robertson and Schild spread over almost 15 years. He even uploaded a 9 min YouTube video debunking not only the ECO paradigm but also me as a physicist. Incidentally, he happens to be one of my FB friends, and had casually raised a few points. He appeared to be satisfied with my clarifications, and never told me that he had already written a preprint ``debunking'' the ECO paradigm. While it is likely that one of the 20 odd related papers is invalidated, the results and conclusions behind the ECO paradigm remain firmly established by a series of independent papers. In fact the conclusion that a collapsing massive star should turn into an ultra-hot ball of magnetized plasma (ECO, MECO) follows from 5 papers: 1 in Physical Review (D), 2 in MNRAS Letters, 1 in MNRAS and 1 in New Astronomy [3,4,5,6,7,8]; and this result is valid irrespective of any mathematical proof for non-existence of finite mass true black holes. Incidentally, the maiden proof that a true mathematical Schwarzschild black hole should have zero gravitational mass was given by L. Bel in 1969 (Bel, JMP 10, 1051, 1969)[9], three decades before I independently suggested the same result.
[1789] vixra:2002.0096 [pdf]
On the Poincaré Algebra in a Complex Space-Time Manifold
We extend the Poincaré group to the complex Minkowski spacetime. Special attention is paid to the corresponding algebra, which we realize through matrices as well as differential operators. We also point out the generalizations of the two Casimir operators.
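For reference, the real Poincaré algebra that such an extension starts from is standard: with translation generators $P_\mu$ and Lorentz generators $M_{\mu\nu}$, in one common sign convention,

```latex
\begin{align}
[P_\mu, P_\nu] &= 0, \\
[M_{\mu\nu}, P_\rho] &= i\,(\eta_{\nu\rho} P_\mu - \eta_{\mu\rho} P_\nu), \\
[M_{\mu\nu}, M_{\rho\sigma}] &= i\,(\eta_{\nu\rho} M_{\mu\sigma} + \eta_{\mu\sigma} M_{\nu\rho}
                                 - \eta_{\mu\rho} M_{\nu\sigma} - \eta_{\nu\sigma} M_{\mu\rho}),
\end{align}
```

and the two Casimir operators are $P^2 = P_\mu P^\mu$ and $W^2 = W_\mu W^\mu$, where $W^\mu = \tfrac{1}{2}\varepsilon^{\mu\nu\rho\sigma} M_{\nu\rho} P_\sigma$ is the Pauli-Lubanski vector.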
[1790] vixra:2002.0060 [pdf]
A Short Remark on the Result of Jozsef Sandor
We point out that Corollary 2.2 in the recently published article 'On the Iyengar-Madhava Rao-Nanjundiah inequality and its hyperbolic version' [3] by Jozsef Sandor is slightly incorrect since its proof contains a gap. Fortunately, the proof can be corrected, and this is the main aim of this note.
[1791] vixra:2002.0036 [pdf]
Infinity Furthers Anomaly in the Complex Numbers
In the paper an anomaly in complex number theory is reported. Similar to a previous note, the ingredients of the analysis are Euler's identity and the De Moivre rule for n = 2. If a quadratic and definitely not weak equation has two solutions, then a contradiction can be derived from ±1 functions in complex number theory. A constructivist finite approach to cos and sin is briefly discussed to resolve the anomaly.
[1792] vixra:2002.0011 [pdf]
Something is wrong in the state of QED
Quantum electrodynamics (QED) is considered the most accurate theory in the history of science. However, this precision is based on a single experimental value: the anomalous magnetic moment of the electron (g-factor). An examination of the history of QED reveals that this value was obtained in a very suspicious way. These suspicions include the case of Karplus & Kroll, who admitted to having lied in their presentation of the most relevant calculation in the history of QED. As we will demonstrate in this paper, the Karplus & Kroll affair was not an isolated case, but one in a long series of errors, suspicious coincidences, mathematical inconsistencies and renormalized infinities swept under the rug.
[1793] vixra:2002.0007 [pdf]
The Imaginary Parity of Elementary Particles
The Dirac equation with the imaginary reflection is considered. The intrinsic imaginary parity can be generated in the proton antiproton Dalitz reaction. The violation of the intrinsic parity of an elementary particle can be explained as the oscillation of parity before the particle decay into plus and minus parity system. The explanation is the alternative form of the older explanation.
[1794] vixra:2001.0690 [pdf]
Naturally Numbers Are Three Plus One Dimensional Final Version 31.01.2020
Riemann hypothesis stands proved in three different ways. To prove the Riemann hypothesis from the functional equation, the concept of the Delta function is introduced, similar to the Gamma and Pi functions. The other two proofs are derived using Euler's formula and elementary algebra. Analytically continuing the gamma and zeta functions to an extended domain, poles and zeros of zeta values are redefined. The Hodge conjecture and the BSD conjecture are also proved using zeta values. Other prime conjectures, such as the Goldbach conjecture and the twin prime conjecture, are also proved in the light of a new understanding of primes. Numbers are proved to be multidimensional, as worked out by Hamilton. Logarithms of negative and complex numbers are redefined using the extended number system. Factorials of negative and complex numbers are redefined using values of the Delta function.
[1795] vixra:2001.0683 [pdf]
Qubit, Quantum Entanglement and all that: Quantum Computing Made Simple
Quantum computing, a fancy word resting on equally fancy fundamentals in quantum mechanics, has become a media hype, a mainstream topic in popular culture and an eye candy for high-tech company researchers and investors alike. Quantum computing has the power to provide faster, more efficient, secure and accurate computing solutions for emerging future innovations. Governments the world over, in collaboration with high-tech companies, pour in billions of dollars for the advancement of quantum-based computing solutions and for the development of fully functioning quantum computers that may one day aid in or even replace classical computers. Despite much hype and publicity, most people do not understand what quantum computing is, nor do they comprehend the significance of the developments required in this field, and the impact it may have on the future. Through these lecture notes, we embark on a pedagogic journey of understanding quantum computing, gradually revealing the concepts that form its basis, later diving into a vast pool of future possibilities that lie ahead, and concluding by understanding and acknowledging some major hindrances and speed bumps in their path.
[1796] vixra:2001.0673 [pdf]
Kaluza in Four Dimensions (with Complex Time)
A number of researchers have employed complex time. If time is complex, it is reasonable to allow the time components of the metric tensor to also be complex. The four imaginary quantities in the metric tensor can be shown to be consistent with the values of the magnetic vector potential. The (four-dimensional complex) line element is then shown to be functionally identical to the Kaluza five-dimensional line element. The four-dimensional (complex) metric therefore inherits the results Kaluza obtained when he included the magnetic vector potential in a five-dimensional metric.
[1797] vixra:2001.0655 [pdf]
On the Number of Monic Admissible Polynomials in the Ring $\mathbb{Z}[x]$
In this paper we study admissible polynomials. We establish an estimate for the number of admissible polynomials of degree $n$ with coefficients $a_i$ satisfying $0\leq a_i\leq H$ for a fixed $H$, for $i=0,1,2, \ldots, n-1$. In particular, letting $\mathcal{N}(H)$ denote the number of monic admissible polynomials of degree $n\geq 3$ with coefficients satisfying the inequality $0\leq a_i\leq H$, we show that \begin{align}\frac{H^{n-1}}{(n-1)!}+O(H^{n-2})\leq \mathcal{N}(H) \leq \frac{n^{n-1}H^{n-1}}{(n-1)!}+O(H^{n-2}).\nonumber \end{align} Also, letting $\mathcal{A}(H)$ denote the number of monic irreducible admissible polynomials with coefficients satisfying the same condition, we show that \begin{align}\mathcal{A}(H)\geq \frac{H^{n-1}}{(n-1)!}+O\bigg( H^{n-4/3}(\log H)^{2/3}\bigg).\nonumber \end{align}
[1798] vixra:2001.0654 [pdf]
The Prime Index Function
In this paper we introduce the prime index function \begin{align}\iota(n)=(-1)^{\pi(n)},\nonumber \end{align} where $\pi(n)$ is the prime counting function. We study some elementary properties and theories associated with the partial sums of this function given by\begin{align}\xi(x):=\sum \limits_{n\leq x}\iota(n).\nonumber \end{align}
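The function is straightforward to compute with a prime sieve. The sketch below is a minimal illustration under the definitions quoted in the abstract; the helper names are our own.

```python
def prime_pi(limit):
    """Sieve of Eratosthenes; returns a list with pi(n) for 0 <= n <= limit."""
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, limit + 1, p):
                is_prime[q] = False
    pi, count = [], 0
    for n in range(limit + 1):
        count += is_prime[n]
        pi.append(count)
    return pi

def iota(n, pi):
    """Prime index function iota(n) = (-1)^pi(n)."""
    return -1 if pi[n] % 2 else 1

def xi(x, pi):
    """Partial sum xi(x) = sum over n <= x of iota(n)."""
    return sum(iota(n, pi) for n in range(1, x + 1))

pi = prime_pi(100)
print([iota(n, pi) for n in range(1, 11)])  # [1, -1, 1, 1, -1, -1, 1, 1, 1, 1]
```

The sign of $\iota$ flips exactly at each prime, so $\xi(x)$ records the alternating balance of runs between consecutive primes.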
[1799] vixra:2001.0653 [pdf]
Complete Sets
In this paper we introduce the concept of completeness of sets. We study this property on the set of integers. We examine how this property is preserved as we carry out various operations compatible with sets. We also introduce the problem of counting the number of complete subsets of any given set. That is, given any interval of integers $\mathcal{H}:=[1,N]$ and letting $\mathcal{C}(N)$ denote the complete set counting function, we establish the lower bound $\mathcal{C}(N)\gg N\log N$.
[1800] vixra:2001.0647 [pdf]
Characterizations of Pre-R0 and Pre-R1 Topological Spaces
In this paper we introduce two new classes of topological spaces called pre-R0 and pre-R1 spaces in terms of the concept of preopen sets and investigate some of their fundamental properties.
[1801] vixra:2001.0645 [pdf]
More on Almost Contra $\lambda$-Continuous Functions
In 1996, Dontchev [14] introduced and investigated a new notion of non-continuity called contra-continuity. Recently, Baker et al. [6] offered a new generalization of contra-continuous functions via $\lambda$-closed sets, called almost contra $\lambda$-continuous functions. It is the objective of this paper to further study some more properties of such functions.
[1802] vixra:2001.0644 [pdf]
On Some Applications of B-Open Sets in Topological Spaces
The purpose of this paper is to introduce some new classes of topological spaces by utilizing b-open sets and study some of their fundamental properties.
[1803] vixra:2001.0643 [pdf]
Strongly S-Closed Spaces and Firmly Contra-Continuous Functions
In the present paper, we offer a new form of firm continuity, called firm contra-continuity, by which we characterize strongly S-closed spaces. Moreover, we investigate the basic properties of firmly contra-continuous functions. We also introduce and investigate the notion of locally contra-closed graphs.
[1804] vixra:2001.0641 [pdf]
More on $\lambda s$-Semi-$\theta$-Closed Sets
It is the object of this paper to study further the notion of $\Lambda s$-semi-$\theta$-closed sets, which are defined as the intersection of a $\theta$-$\Lambda s$-set and a semi-$\theta$-closed set. Moreover, we introduce some low separation axioms using the above notions. Also we present and study the notions of $\Lambda s$-continuous functions, $\Lambda s$-compact spaces and $\Lambda s$-connected spaces.
[1805] vixra:2001.0640 [pdf]
On Some Properties of Weakly LC-Continuous Functions
M. Ganster and I.L. Reilly [2] introduced a new decomposition of continuity called LC-continuity. In this paper, we introduce and investigate a generalization of LC-continuity called weak LC-continuity.
[1806] vixra:2001.0638 [pdf]
On $\lambda$-Generalized Continuous Functions
In this paper, we introduce a new class of continuous functions as an application of $\Lambda$-generalized closed sets (namely $\Lambda_g$-closed set, $\Lambda$-g-closed set and $g \Lambda$-closed set) namely $\Lambda$-generalized continuous functions (namely $\Lambda g$-continuous, $\Lambda$-g-continuous and $g \Lambda$-continuous) and study their properties in topological space.
[1807] vixra:2001.0637 [pdf]
Upper and Lower Rarely $\alpha$-Continuous Multifunctions
Recently the notion of rarely $\alpha$-continuous functions has been introduced and investigated by Jafari [1]. This paper is devoted to the study of upper (and lower) rarely $\alpha$-continuous multifunctions.
[1808] vixra:2001.0633 [pdf]
Frame-Independent Synchronization for a Theory of Presentism
We introduce a synchronization scheme that reconciles presentism with special relativity. Specifically, we address the challenge of defining in special relativity a global present of an observer at a particular location that is independent of reference frame. This is achieved by postulating that the coordinate time that elapses for light to propagate from emitter to absorber is a fundamental constant of nature and is independent of the separation distance and also independent of the reference frame of the absorber. We show that the proposed theory predicts cosmological redshifts consistent with observation if an asymmetry exists between the emitter's and absorber's perceptions of time.
[1809] vixra:2001.0620 [pdf]
Disruptive Gravitation Theory
Viewing gravity as a spacetime bending force instead of just a spacetime curvature, we come to the conclusion of rest mass relativity since it yields equivalent equations as General Relativity. A close analysis of the Schwarzschild metric leads us naturally to the Vacuum Energy Invariance principle from which we derive the metric equation. Applying this theory to cosmology, we can explain the acceleration of the universe expansion in a way that doesn't require Dark Energy. This theory has the same predictive power as General Relativity for every local experimental test of the latter since it's based on a slight modification of the Schwarzschild metric.
[1810] vixra:2001.0614 [pdf]
A Map of a Research Programme for Subtlety Theory
The scope of this short note is to outline a research programme for the exploration of 'subtlety theory', which can be thought of as a framework for exploring various classes of structure associated to higher categories. It is hoped that this might form a logical springboard for researchers wishing to explore said ideas and potentially take them further.
[1811] vixra:2001.0595 [pdf]
Is Science a Pyramid Scheme? the Correlation Between an Author's Position in the Academic Hierarchy and Her Scientific Output Per Year
A grievance expressed by some PhD students and Postdocs is that science works like a pyramid scheme: Young scientists are encouraged to invest in building scientific careers although the chances of remaining in science are extremely slim. This issue is investigated quantitatively by connecting it with the way authorship on papers is distributed. I analyzed a large bibliographic dataset made available by Microsoft under the name Academic Graph to create a histogram with the number of articles an author produces per year. The histogram has the shape of a pyramid, and different layers in it correlate with positions in the academic hierarchy. The super-prolific authors at the top of the pyramid with more than 40 publications per year are usually heads of large institutes with many subgroups and large numbers of PhD students, while the bottom of the pyramid is populated by PhD students and Postdocs with less than 5 publications per year. The mechanism that allows 'manager scientists' to appropriate publications generated in their sphere of influence is related to other issues, such as the evaluation of scientific performance based on scientometric indicators and the lenient enforcement of authorship rules. A new index, the Ponzi factor, is proposed to quantify this phenomenon.
[1812] vixra:2001.0590 [pdf]
On Entire Functions-Minorants for Subharmonic Functions Outside of a Small Exceptional Set
Let u be an arbitrary subharmonic function of finite order on the complex plane. We construct a nonzero entire function f such that ln|f| does not exceed the function u everywhere outside some very small exceptional set E.
[1813] vixra:2001.0589 [pdf]
The Quasicrystal Rosetta Stone
The standard model is unified with gravity in an F4 gauge theory where the spacetime is a quasicrystalline compactification of an E9 Lorentzian lattice. The role of the Higgs is played by a neutrino condensate. Using three languages, where the first is a Hilbert loop, the second is a list of vertices in the E8 lattice and the third is a two-dimensional quasicrystal exhibiting five-fold symmetry, the long-sought generation formula for general five-fold symmetric quasicrystals is revealed.
[1814] vixra:2001.0586 [pdf]
Division by Zero Calculus, Derivatives and Laurent's Expansion
Based on a preprint survey paper, we will give a fundamental relation among the basic concepts of division by zero calculus, derivatives and Laurent's expansion as a direct extension of the preprint which gave the generalization of the division by zero calculus to differentiable functions. In particular, we will find a new viewpoint and applications to the Laurent expansion, in particular to residues in the Laurent expansion. $1/0=0/0=z/0=\tan(\pi/2) =\log 0 =0, (z^n)/n = \log z$ for $n=0$, $e^{(1/z)} = 1$ for $z=0$.
[1815] vixra:2001.0579 [pdf]
Theory for Elementary Particles, Dark Matter, Dark Energy, and Galaxies
We show theory that spans tiny and vast aspects of physics. We suggest descriptions for new elementary particles, dark matter, and dark energy. We use those descriptions to explain data regarding dark matter effects, dark energy effects, and galaxy formation. Our mathematics-based modeling, descriptions, and explanations embrace and augment standard physics theory and modeling. One basis for our modeling is an extension to mathematics for harmonic oscillators.
[1816] vixra:2001.0563 [pdf]
First-Order Perturbative Solution to Schrödinger Equation for Charged Particles
A perturbative solution to the Schrödinger equation for N charged particles is studied. We use an expansion that is equivalent to Fock's. In the case that the zeroth-order approximation is a harmonic homogeneous polynomial, a first-order approximation is found.
[1817] vixra:2001.0561 [pdf]
On $\rho$-Homeomorphisms in Topological Spaces
In this paper, we first introduce a new class of closed map called the $\rho$-closed map. Moreover, we introduce a new class of homeomorphism called a $\rho$-homeomorphism. We also introduce another new class of closed map called the $\rho*$-closed map and introduce a new class of homeomorphism called a $\rho*$-homeomorphism, and prove that the set of all $\rho*$-homeomorphisms forms a group under the operation of composition of maps.
[1818] vixra:2001.0560 [pdf]
Low Separation Axioms Associated with ^g*s-Closed Sets
In this paper, we introduce kT½-spaces, k*T½-spaces, kT_b-spaces, kT_c-spaces, kT_d-spaces, kT_f-spaces, kT_^g*-spaces and T^k_b-spaces and investigate their characterizations.
[1819] vixra:2001.0542 [pdf]
On $\lambda_b$-Sets and the Associated Topology $\tau_b ^{*}$
In this paper we define the concept of $\Lambda_b$-sets (resp. $V_b$-sets) of a topological space, i.e., the intersection of b-open (resp. the union of b-closed) sets. We study the fundamental property of $\Lambda_b$-sets (resp. $V_b$-sets) and investigate the topologies defined by these families of sets.
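On a finite space these notions can be computed by brute force. The following sketch assumes Andrijevic's standard definition of b-open sets ($A \subseteq \mathrm{int}(\mathrm{cl}(A)) \cup \mathrm{cl}(\mathrm{int}(A))$) and builds $\Lambda_b(A)$ as the intersection of all b-open supersets of $A$; all function names are ours, not the paper's.

```python
from itertools import combinations

def powerset(X):
    s = sorted(X)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def interior(A, opens):
    # Largest open set contained in A (union of all open subsets of A).
    out = frozenset()
    for U in opens:
        if U <= A:
            out |= U
    return out

def closure(A, X, opens):
    # Complement of the interior of the complement.
    return X - interior(X - A, opens)

def b_open_sets(X, opens):
    # A is b-open iff A is contained in int(cl(A)) | cl(int(A)).
    return [A for A in powerset(X)
            if A <= interior(closure(A, X, opens), opens)
                 | closure(interior(A, opens), X, opens)]

def lambda_b(A, X, opens):
    # Lambda_b(A): intersection of all b-open sets containing A.
    out = X
    for B in b_open_sets(X, opens):
        if A <= B:
            out &= B
    return out

# Sierpinski space: opens = {emptyset, {0}, X}; here {1} is not b-open.
X = frozenset({0, 1})
opens = [frozenset(), frozenset({0}), X]
print(lambda_b(frozenset({1}), X, opens))  # frozenset({0, 1})
print(lambda_b(frozenset({0}), X, opens))  # frozenset({0})
```

The dual $V_b$-sets (unions of b-closed sets) can be computed the same way by complementation.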
[1820] vixra:2001.0540 [pdf]
On Some Very Strong Compactness Conditions
The aim of this paper is to consider compactness notions by utilizing $\lambda$-sets, $V$-sets, locally closed sets, locally open sets, $\lambda$-closed sets and $\lambda$-open sets. We are able to completely characterize these variations of compactness, and also provide various interesting examples that support our results.
[1821] vixra:2001.0539 [pdf]
On PC-Compact Spaces
In this paper we consider a new class of topological spaces, called pc-compact spaces. This class of spaces lies strictly between the classes of strongly compact spaces and C-compact spaces. Also, every pc-compact space is p-closed in the sense of Abo-Khadra. We will investigate the fundamental properties of pc-compact spaces, and consider their behaviour under certain mappings.
[1822] vixra:2001.0538 [pdf]
More on go-Compact and go-(M, n)-Compact Spaces
Balachandran [1] introduced the notion of GO-compactness by involving g-open sets. Quite recently, Caldas et al. in [8] and [9] investigated this class of compactness and characterized several of its properties. In this paper, we further investigate this class of compactness and obtain several more new properties. Moreover, we introduce and study the new class of GO-(m, n)-compact spaces.
[1823] vixra:2001.0531 [pdf]
The Landau-Lifshitz Pseudotensor - Another Meaningless Concoction of Mathematical Symbols
In an attempt to make A. Einstein’s General Theory of Relativity comply with the usual conservation of energy and momentum for a closed system which a vast array of experiments has ascertained, Mr. L. Landau and Mr. E. Lifshitz constructed, ad hoc, their pseudotensor, as a proposed improvement upon the pseudotensor of Mr. Einstein. Their pseudotensor is symmetric (Mr. Einstein’s is not) and, they say, it permits a conservation law including angular momentum. That it is not a tensor is outside the very mathematical structure of Mr. Einstein’s theory. Beyond that, it violates the rules of pure mathematics. It is therefore a meaningless concoction of mathematical symbols.
[1824] vixra:2001.0518 [pdf]
A Boundary Operator for Simplices
We generalize the very well known boundary operator of ordinary singular homology theory. We describe a variant of this ordinary simplicial boundary operator, where the usual boundary (n-1)-simplices of each n-simplex, i.e. the 'faces', are replaced by combinations of internal (n-1)-simplices parallel to the faces. This construction may lead to an infinite class of extraordinary non-isomorphic homology theories. Further, we show some interesting constructions on the standard simplex.
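For orientation, the ordinary singular boundary operator being generalized acts on an n-simplex $\sigma = [v_0, \ldots, v_n]$ by deleting one vertex at a time:

```latex
\partial_n \sigma \;=\; \sum_{i=0}^{n} (-1)^i \,[v_0, \ldots, \hat{v}_i, \ldots, v_n],
\qquad \partial_{n-1} \circ \partial_n = 0,
```

where $\hat{v}_i$ marks the omitted vertex. The chain condition $\partial^2 = 0$ is what any replacement of the faces by internal parallel simplices must preserve for a homology theory to result.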
[1825] vixra:2001.0501 [pdf]
A Revisit of the Problems of Abhas Mitra's Eternally Collapsing Object
We critically review here the concepts which gave rise to the Eternally Collapsing Object (ECO) paradigm over almost 22 years. All the mathematical analysis will be dealt with here to show that the proofs required for the ECO paradigm are ad hoc to begin with. The first part in rejecting black holes always concerns the possibility of formation of trapped surfaces, so we will begin our work by looking into ``non occurrence of trapped surfaces''; then we analyze Dr. Mitra's indirect claim regarding how R = 0 (inside a black hole) could also be treated as another coordinate singularity. After that we review whether his other claim, about the ``mass of a Schwarzschild black hole being zero'', is right or not. Do black holes really exist, or are they just a fairy tale made by physicists to elude us into fiction? And are these so-called ``black holes'' actually black holes, or indeed, as Mitra likes to call them, ``Eternally Collapsing Objects'' or ECOs? Are ECOs really a true alternative to black holes? The analysis presented here will show us why ECOs are baseless and cannot really be a solution to the black hole problem.
[1826] vixra:2001.0499 [pdf]
Einstein’s Pseudotensor a Meaningless Concoction of Mathematical Symbols
In an attempt to make his General Theory of Relativity comply with the usual conservation of energy and momentum for a closed system which a vast array of experiments has ascertained, Mr. A. Einstein constructed, ad hoc, his pseudotensor. That it is not a tensor is outside the very mathematical structure of his theory. Beyond that, it violates the rules of pure mathematics. It is therefore a meaningless concoction of mathematical symbols.
[1827] vixra:2001.0482 [pdf]
On D-Sets, DS-Sets and Decompositions of Continuous, a-Continuous and AB-Continuous Functions
The main purpose of this paper is to introduce the notions of D-sets, DS-sets, D-continuity and DS-continuity and to obtain decompositions of continuous functions, A-continuous functions and AB-continuous functions. Also, properties of the classes of D-sets and DS-sets are discussed.
[1828] vixra:2001.0481 [pdf]
On DS*-Sets and Decompositions of Continuous Functions
In this paper, the notions of DS*-sets and DS*-continuous functions are introduced and their properties and their relationships with some other types of sets are investigated. Moreover, some new decompositions of continuous functions are obtained by using DS*-continuous functions, DS-continuous functions and D-continuous functions.
[1829] vixra:2001.0474 [pdf]
The Prime Pairs Are Equidistributed Among the Coset Lattice Congruence Classes
In this paper we show that for some constant $c>0$ and for any $A>0$ there exists some $x(A)>0$ such that, if $q\leq (\log x)^{A}$, then we have \begin{align}\Psi_z(x;\mathcal{N}_q(a,b),q)&=\frac{\Theta (z)}{2\phi(q)}x+O\bigg(\frac{x}{e^{c\sqrt{\log x}}}\bigg)\nonumber \end{align}for $x\geq x(A)$ for some $\Theta(z)>0$. In particular, for $q\leq (\log x)^{A}$ for any $A>0$,\begin{align}\Psi_z(x;\mathcal{N}_q(a,b),q)\sim \frac{x\mathcal{D}(z)}{2\phi(q)}\nonumber \end{align}for some constant $\mathcal{D}(z)>0$ and where $\phi(q)=\# \{(a,b):(p_i,p_{i+z})\in \mathcal{N}_q(a,b)\}$.
[1830] vixra:2001.0473 [pdf]
Balanced Matrices
In this paper we introduce a particular class of matrices. We study the concept of a matrix being balanced. We study some properties of this concept in the context of matrix operations. We examine the behaviour of various matrix statistics in this setting. The crux will be understanding the determinants and the eigenvalues of balanced matrices. It turns out that there does exist a direct connection among the leading entry, the trace, the determinant and, hence, the eigenvalues of these matrices of order $2\times 2$. These matrices have an interesting property that enables us to predict their quadratic forms, even without knowing their entries but given their spectrum.
[1831] vixra:2001.0472 [pdf]
The Compression Method and Applications
In this paper we introduce and develop the method of compression of points in space. We introduce the notion of the mass, the rank, the entropy, the cover and the energy of compression. We leverage this method to prove a class of inequalities related to Diophantine equations. In particular, we show that for each $L<n-1$ and for each $K>n-1$, there exists some $(x_1,x_2,\ldots,x_n)\in \mathbb{N}^n$ with $x_i\neq x_j$ for all $1\leq i<j\leq n$ such that \begin{align}\frac{1}{K^{n}}\ll \prod \limits_{j=1}^{n}\frac{1}{x_j}\ll \frac{\log (\frac{n}{L})}{nL^{n-1}}\nonumber \end{align}and that for each $L>n-1$ there exists some $(x_1,x_2,\ldots,x_n)$ with $x_i\neq x_j$ for all $1\leq i<j\leq n$ and some $s\geq 2$ such that \begin{align}\sum \limits_{j=1}^{n}\frac{1}{x_j^s}\gg s\frac{n}{L^{s-1}}.\nonumber \end{align}
[1832] vixra:2001.0468 [pdf]
Naturally Numbers Are Three Plus One Dimensional Final Version
Riemann hypothesis stands proved in three different ways. To prove the Riemann hypothesis from the functional equation, the concept of the Delta function is introduced, similar to the Gamma and Pi functions. The other two proofs are derived using Euler's formula and elementary algebra. Analytically continuing the gamma and zeta functions to an extended domain, poles and zeros of zeta values are redefined. The Hodge conjecture and the BSD conjecture are also proved using zeta values. Other prime conjectures, such as the Goldbach conjecture and the twin prime conjecture, are also proved in the light of a new understanding of primes. Numbers are proved to be multidimensional, as worked out by Hamilton. Logarithms of negative and complex numbers are redefined using the extended number system. Factorials of negative and complex numbers are redefined using values of the Delta function.
[1833] vixra:2001.0462 [pdf]
Distribution of Boundary Points of Expansion and Application to the Lonely Runner Conjecture
In this paper we study the distribution of boundary points of expansion. As an application, we say something about the lonely runner problem. We show that given $k$ runners $\mathcal{S}_i$ round a unit circular track with the condition that at some time $||\mathcal{S}_i-\mathcal{S}_{i+1}||=||\mathcal{S}_{i+1}-\mathcal{S}_{i+2}||$ for all $i=1,2\ldots,k-2$, then at that time we have \begin{align}||\mathcal{S}_{i+1}-\mathcal{S}_i||>\frac{\mathcal{D}(n)\pi}{k-1}\nonumber \end{align}for all $i=1,\ldots, k-1$ and where $\mathcal{D}(n)>0$ is a constant depending on the degree of a certain polynomial of degree $n$. In particular, we show that given at most eight $\mathcal{S}_i$~($i=1,2,\ldots, 8$) runners running round a unit circular track with distinct constant speed and the additional condition $||\mathcal{S}_i-\mathcal{S}_{i+1}||=||\mathcal{S}_{i+1}-\mathcal{S}_{i+2}||$ for all $1\leq i\leq 6$ at some time $s>1$, then at that time their mutual distance must satisfy the lower bound\begin{align}||\mathcal{S}_{i}-\mathcal{S}_{i+1}||>\frac{\pi}{7C\sqrt{3}}\nonumber \end{align}for some constant $C>0$ for all $1\leq i \leq 7$.
[1834] vixra:2001.0460 [pdf]
The Theta Splitting Function
In this paper we study the Theta splitting function $\Theta(s+1)$, a function defined on the positive integers. We study the distribution of this function for sufficiently large values of the integers. As an application we show that \begin{align}\sum \limits_{m=0}^{s}\prod \limits_{\substack{0\leq j \leq m\\\sigma:[0,m]\rightarrow [0,m]\\\sigma(j)\neq \sigma(i)}}(s-\sigma(j))\sim s^s\sqrt{s}e^{-s}\sum \limits_{m=1}^{\infty}\frac{e^m}{m^{m+\frac{1}{2}}}.\nonumber \end{align} and that \begin{align}\sum \limits_{j=0}^{s-1}e^{-\gamma j}\prod \limits_{m=1}^{\infty}\bigg(1+\frac{s-j}{m}\bigg)e^{\frac{-(s-j)}{m}}\sim \frac{e^{-\gamma s}}{\sqrt{2\pi}}\sum \limits_{m=1}^{\infty}\frac{e^m}{m^{m+\frac{1}{2}}}.\nonumber \end{align}
[1835] vixra:2001.0437 [pdf]
Operations on Neutrosophic Vague Graphs
In this manuscript, the operations on neutrosophic vague graphs are introduced. Moreover, the Cartesian product, cross product, lexicographic product, strong product and composition of neutrosophic vague graphs are investigated, and the proposed concepts are illustrated with examples.
[1836] vixra:2001.0435 [pdf]
Proof Fermat Last Theorem (Using 6 Methods)
The Pythagorean theorem is perhaps the best known theorem in the vast world of mathematics. A simple relation of square numbers, which encapsulates all the glory of mathematical science, is also justifiably the most popular yet sublime theorem in mathematical science. The starting point was Diophantus' 20th problem (Book VI of Diophantus' Arithmetica), which for Fermat is for n = 4 and consists in the question whether there are right triangles whose sides can be measured as integers and whose surface can be square. This problem was solved negatively by Fermat in the 17th century, who used the wonderful method (ipse dixit Fermat) of infinite descent. The difficulty of solving Fermat's equation was first circumvented by Wiles and R. Taylor in late 1994 ([1],[2],[3],[4]) and published in Taylor and Wiles (1995) and Wiles (1995). We present the proof of Fermat's last theorem and other accompanying theorems in 4 different independent ways. For each of the methods we consider, we use the Pythagorean theorem as a basic principle and also the fact that the proof of the first degree Pythagorean triad is absolutely elementary and useful. The proof of Fermat's last theorem marks the end of a mathematical era; however, a more educational proof seems to be necessary for undergraduates and students in general. Euler's method and Wiles' proof is still a method that does not exclude other equivalent methods. The principle, of course, is the Pythagorean theorem and the Pythagorean triads, which form the basis of all proofs and are also the main way of proving the Pythagorean theorem in an understandable way. Other forms of proofs we will give will show the dependence of the variables on each other. A proof of Fermat's theorem without the dependence of the variables cannot be correct and will therefore give undefined and inconclusive results.
It is, therefore, possible to prove Fermat's last theorem more simply and equivalently than the equation itself, without monomorphisms.
[1837] vixra:2001.0417 [pdf]
The Horizon of Two Interacting Bodies
Two concomitant Whitehead models are used to describe the gravitational interaction of two point-like particles. If the two masses are different and they are at rest, Newton's laws are slightly modified, and for some particular values of the two masses and the distance separating them the force is a repulsion. This suggests a nice interpretation of the so-called horizon of the single particle model as the surface where gravitation becomes a repulsive force.
[1838] vixra:2001.0411 [pdf]
Bioperations on $\alpha$-Separation Axioms in Topological Spaces
In this paper, we consider the class of $\alpha_{[\gamma, \gamma']}$-generalized closed sets in topological spaces and investigate some of their properties. We also present and study new separation axioms by using the notions of $\alpha$-open sets and $\alpha$-bioperations. Also, we analyze the relations with some well-known separation axioms.
[1839] vixra:2001.0410 [pdf]
G*bp-Continuous, Almost G*bp-Continuous and Weakly G*bp-Continuous Functions
In this paper we introduce new types of functions called g*bp-continuous function, almost g*bp-continuous function, and weakly g*bp-continuous function in topological spaces and study some of their basic properties and relations among them.
[1840] vixra:2001.0409 [pdf]
A Note on Properties of Hypermetric Spaces
The note studies further properties and results of analysis in the setting of hypermetric spaces. Among others, we present some results concerning the hyper uniform limit of a sequence of continuous functions, the hypermetric identification theorem and the metrization problem for hypermetric spaces.
[1841] vixra:2001.0408 [pdf]
On Generalized Closed Sets and Generalized Pre-Closed Sets in Neutrosophic Topological Spaces
In this paper, the concepts of generalized neutrosophic pre-closed sets and generalized neutrosophic pre-open sets are introduced. We also study relations and various properties between the other existing neutrosophic open and closed sets. In addition, we discuss some applications of generalized neutrosophic pre-closed sets, namely neutrosophic $pT_{1/2}$ spaces and neutrosophic $gpT_{1/2}$ spaces. The concepts of generalized neutrosophic connected spaces, generalized neutrosophic compact spaces and generalized neutrosophic extremally disconnected spaces are established. Some interesting properties are investigated in addition to giving some examples.
[1842] vixra:2001.0405 [pdf]
Intuitionistic Fuzzy Ideals on Approximation Systems
In this paper, we initiate the concept of intuitionistic fuzzy ideals on rough sets. Using a new relation we discuss some of the algebraic nature of intuitionistic fuzzy ideals of a ring.
[1843] vixra:2001.0404 [pdf]
On a Function Modeling $n$-Step Self-Avoiding Walk
We introduce and study the needle function. We prove that this function models an $n$-step self-avoiding walk. We show that the total length of the $l$-step self-avoiding walk modeled by this function is of the order \begin{align}\ll \frac{n^{\frac{3}{2}}}{2}\bigg(\mathrm{\max}\{\mathrm{sup}(x_j)\}_{1\leq j\leq \frac{l}{2}}+\mathrm{max}\{\mathrm{sup}(a_j)\}_{1\leq j\leq \frac{l}{2}}\bigg).\nonumber \end{align}
[1844] vixra:2001.0383 [pdf]
Weak Separation Axioms Via Pre-Regular $p$-Open Sets
In this paper, we obtain new separation axioms by using the notion of $(\delta; p)$-open sets introduced by Jafari [3] via the notion of pre-regular $p$-open sets [2].
[1845] vixra:2001.0376 [pdf]
On a Certain Identity Involving the Gamma Function
The goal of this paper is to prove the identity \begin{align}\sum \limits_{j=0}^{\lfloor s\rfloor}\frac{(-1)^j}{s^j}\eta_s(j)+\frac{1}{e^{s-1}s^s}\sum \limits_{j=0}^{\lfloor s\rfloor}(-1)^{j+1}\alpha_s(j)+\bigg(\frac{1-((-1)^{s-\lfloor s\rfloor +2})^{1/(s-\lfloor s\rfloor +2)}}{2}\bigg)\nonumber \\ \bigg(\sum \limits_{j=\lfloor s\rfloor +1}^{\infty}\frac{(-1)^j}{s^j}\eta_s(j)+\frac{1}{e^{s-1}s^s}\sum \limits_{j=\lfloor s\rfloor +1}^{\infty}(-1)^{j+1}\alpha_s(j)\bigg)=\frac{1}{\Gamma(s+1)},\nonumber \end{align}where \begin{align}\eta_s(j):=\bigg(e^{\gamma (s-j)}\prod \limits_{m=1}^{\infty}\bigg(1+\frac{s-j}{m}\bigg)\nonumber \\e^{-(s-j)/m}\bigg)\bigg(2+\log s-\frac{j}{s}+\sum \limits_{m=1}^{\infty}\frac{s}{m(s+m)}-\sum \limits_{m=1}^{\infty}\frac{s-j}{m(s-j+m)}\bigg), \nonumber \end{align}and \begin{align}\alpha_s(j):=\bigg(e^{\gamma (s-j)}\prod \limits_{m=1}^{\infty}\bigg(1+\frac{s-j}{m}\bigg)e^{-(s-j)/m}\bigg)\bigg(\sum \limits_{m=1}^{\infty}\frac{s}{m(s+m)}-\sum \limits_{m=1}^{\infty}\frac{s-j}{m(s-j+m)}\bigg),\nonumber \end{align}where $\Gamma(s+1)$ is the Gamma function defined by $\Gamma(s):=\int \limits_{0}^{\infty}e^{-t}t^{s-1}dt$ and $\gamma =\lim \limits_{n\longrightarrow \infty}\bigg(\sum \limits_{k=1}^{n}\frac{1}{k}-\log n\bigg)=0.577215664\cdots $ is the Euler-Mascheroni constant.
[1846] vixra:2001.0373 [pdf]
Point-Free Topological Monoids and Hopf Algebras on Locales and Frames
In this note, we intend to offer some theoretical considerations concerning the introduction of point-free topological monoids on locales and frames. Moreover, we define a quantum group on locales by utilizing the Drinfeld-Jimbo group.
[1847] vixra:2001.0351 [pdf]
Biot-Savart Law and Stokes' Theorem
The Biot-Savart law describes the magnetic field due to the electric current in a conductive wire. For a long straight wire, the magnetic field is proportional to (I/r). The curl of the magnetic field is proportional to (dI/dr). For a constant current, the curl of the magnetic field is zero. Consequently, the surface integral of the curl of the magnetic field is zero but the line integral of the magnetic field is not. Stokes' theorem cannot be applied to the magnetic field vector generated by a constant electric current because the magnetic field is not a differentiable vector.
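The situation this abstract describes can be checked numerically (a quick sketch, not taken from the paper; the current value is arbitrary): for an infinite straight wire the field has magnitude mu0*I/(2*pi*r), its curl vanishes everywhere off the axis, and yet its line integral around a loop enclosing the wire equals mu0*I.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)
I = 2.0                   # wire current (A); value chosen for illustration

def B(x, y):
    """Field of an infinite straight wire along the z-axis: |B| = mu0*I/(2*pi*r)."""
    r2 = x * x + y * y
    c = MU0 * I / (2 * math.pi * r2)
    return (-c * y, c * x)  # purely tangential

# Line integral of B around a circle of radius R enclosing the wire.
N, R = 100000, 0.5
total = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    bx, by = B(R * math.cos(t), R * math.sin(t))
    # dl = tangent unit vector times arc-length element
    dlx = -math.sin(t) * 2 * math.pi * R / N
    dly = math.cos(t) * 2 * math.pi * R / N
    total += bx * dlx + by * dly

print(total / (MU0 * I))  # ~1.0: the loop integral equals mu0*I

# Curl of B at a point away from the axis, by central differences: ~0.
h = 1e-6
x0, y0 = 0.3, 0.4
dBy_dx = (B(x0 + h, y0)[1] - B(x0 - h, y0)[1]) / (2 * h)
dBx_dy = (B(x0, y0 + h)[0] - B(x0, y0 - h)[0]) / (2 * h)
curl = dBy_dx - dBx_dy
print(abs(curl) < 1e-6)  # True
```

The apparent conflict with Stokes' theorem is resolved in the standard treatment by the field's singularity on the axis, which the abstract disputes.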
[1848] vixra:2001.0337 [pdf]
Ab Initio Cyclic Voltammetry on Cu(111), Cu(100) and Cu(110) in Acidic, Neutral and Alkaline Solutions
Electrochemical reactions depend on the electrochemical interface between the catalytic surfaces and the electrolytes. To control and advance electrochemical reactions there is a need to develop realistic simulation models of the electrochemical interface to understand the interface from an atomistic point of view. Here we present a method for obtaining thermodynamically realistic interface structures, a procedure to derive specific coverages, and a way to obtain ab initio simulated cyclic voltammograms. As a case study, the method and procedure are applied in a matrix study of three Cu facets in three different electrolytes. The results are validated by a direct comparison with experimental cyclic voltammograms. The alkaline (NaOH) electrolyte CVs are described by H* and OH*, while in neutral (KHCO3) electrolyte CO3* species are present and in acidic (KCl) electrolyte Cl* species dominate. An almost one-to-one mapping is observed from simulation to experiments, giving an atomistic understanding of the interface structure of the Cu facets. This atomistic understanding of the interface at electrolyte conditions will allow realistic investigations of electrochemical reactions in future studies.
[1849] vixra:2001.0332 [pdf]
Stability of Ice Lenses in Saline Soils
A model of the growth of an ice lens in a saline porous medium is developed. At high lens growth rates the pore fluid becomes supercooled relative to its equilibrium Clapeyron temperature. Instability occurs when the supercooling increases with distance away from the ice lens. Solute diffusion in the pore fluid significantly enhances the instability. An expression for the segregation potential of the soil is obtained from the condition for marginal stability of the ice lens. The model is applied to a clayey silt and a glass powder medium, indicating parameter regimes where the ice lens stability is controlled by viscous flow or by solute diffusion. A mushy layer, composed of vertical ice veins and horizontal ice lenses, forms in the soil in response to the instability. A marginal equilibrium condition is used to estimate the segregated ice fraction in the mushy layer as a function of the freezing rate and salinity.
[1850] vixra:2001.0327 [pdf]
Three Circle Chains Arising from Three Lines
We generalize a problem in Wasan geometry involving a triangle and its incircle and get simple relationships between the three chains arising from three lines.
[1851] vixra:2001.0311 [pdf]
On Some New Notions in Nano Ideal Topological Spaces
The purpose of this paper is to introduce the notion of nano ideal topological spaces and investigate the relation between nano topological space and nano ideal topological space. Moreover, we offer some new open and closed sets in the context of nano ideal topological spaces and present some of their basic properties and characterizations.
[1852] vixra:2001.0285 [pdf]
On Upper and Lower Slightly $\delta$-$\beta$-Continuous Multifunctions
In this paper, we introduce and study upper and lower slightly $\delta$-$\beta$- continuous multifunctions in topological spaces and obtain some characterizations of these new continuous multifunctions.
[1853] vixra:2001.0282 [pdf]
On qI-Open Sets in Ideal Bitopological Spaces
In this paper, we introduce and study the concept of qI-open sets. Based on this new concept, we define new classes of functions, namely qI-continuous functions, qI-open functions and qI-closed functions, for which we prove characterization theorems.
[1854] vixra:2001.0258 [pdf]
A Scenario for Asymmetric Genesis of Matter
We use a previous supersymmetric preon model to propose a heuristic mechanism for the creation of the matter-antimatter asymmetric universe during its early phases. The asymmetry is predicted with probability 1.0 by the charge symmetric model. The baryon-photon ratio is not quantitatively obtained.
[1855] vixra:2001.0229 [pdf]
Asymptotic Safety, Black-Hole Cosmology and the Universe as a Gravitating Vacuum State
A model of the Universe as a dynamical homogeneous anisotropic self-gravitating fluid, consistent with Kantowski-Sachs homogeneous anisotropic cosmology and Black-Hole cosmology, is developed. Renormalization Group (RG) improved black-hole solutions resulting from Asymptotic Safety in Quantum Gravity are constructed which explicitly $remove$ the singularities at $t = 0$. Two temporal horizons at $ t _- \simeq t_P$ (Planck time) and $ t_+ \simeq t_H$ (Hubble time) are found. For times below the Planck time $ t < t_P$, and above the Hubble time $ t > t_H$, the components of the Kantowski-Sachs metric exhibit a key sign $change$, so the roles of the spatial $z$ and temporal coordinates $ t$ are $exchanged$, and one recovers a $repulsive$ inflationary de Sitter-like core around $ z = 0$, and a Schwarzschild-like metric in the exterior region $ z > R_H = 2 G_o M $. Therefore, in this fashion one has found a dynamical Universe $inside$ a Black Hole whose Schwarzschild radius coincides with the Hubble radius $ r_s = 2 G_o M = R_H$. For these reasons we conclude by arguing that our Universe could be seen as a Gravitating Vacuum State inside a Black-Hole.
[1856] vixra:2001.0214 [pdf]
From Deriving Mass-Energy Equivalence with Classical Physics to Mass-Velocity Relation and Charge-Velocity Relation of Electrons
There are controversies on the mass-velocity relation and the charge of moving electrons, which are related to mass-energy equivalence. The author holds that mass-energy equivalence expresses the energy relation of bodies and space as mass. In this paper the author proposes the relative kinetic energy $E_{rk}=mv^2$ to explain this relation. With the relative kinetic energy theory, the author infers the equations for mass-energy equivalence and the mass-velocity relation. While analyzing the electron acceleration movement, the author finds the reasons for the two unreal equations of the mass-velocity relation, and determines the equations of the mass-velocity relation and the charge-velocity relation of electrons. These determinations are of important significance for relativity theory and electrodynamics, and even for superconducting research. [eastear@outlook.com, eastear@163.com]
[1857] vixra:2001.0204 [pdf]
The Theory of the Collatz Process
In this paper we introduce and develop the theory of the Collatz process. We leverage this theory to study the Collatz conjecture. This theory also has a subtle connection with the infamous problem of the distribution of Sophie Germain primes. We also provide several formulations of the Collatz conjecture in this language.
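For readers unfamiliar with the process in question, a minimal sketch of the underlying Collatz map (illustrative only, not code from the paper):

```python
def collatz_steps(n):
    """Number of Collatz iterations (n -> n/2 if even, else 3n+1) to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111: the famously long trajectory of 27

# Every starting value up to 10^4 reaches 1, consistent with the conjecture.
print(all(collatz_steps(n) < 10 ** 4 for n in range(1, 10 ** 4)))  # True
```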
[1858] vixra:2001.0176 [pdf]
Sustaining Wavefunction Coherence via Topological Impedance Matching: Stable Polarized Muon Beams at 255 x 255 GeV/c?
"What the Hell is Going On?" is Peter Woit's 'Not Even Wrong' blog comment on Nima Arkani-Hamed's view of the barren state of LHC physics, the long-dreaded Desert [1]. Two essential indispensables - geometric wavefunctions and quantized impedances of wavefunction interactions - are absent from particle theory, the community oblivious, mired in the consequent four decades of stagnation. Synthesis of the two offers a complementary Standard Model perspective, examining not conservation of energy and its flow between kinetic and potential of Hamiltonian and Lagrangian, but rather what governs amplitude and phase of that flow, quantum impedance matching of geometric wavefunction interactions. Applied to muon decay, the model suggests that translation gauge fields (RF cavities) of relativistic lifetime enhancement might be augmented by introducing rotation gauge fields of carefully chosen topological impedances to an accelerator.
[1859] vixra:2001.0161 [pdf]
On the Erdős-Ulam Problem in the Plane
In this paper we apply the method of compression to construct a dense set of points in the plane at rational distance from each other. We provide a positive solution to the Erdős-Ulam problem.
[1860] vixra:2001.0155 [pdf]
Structure Model of Atomic Nuclei
Neutrons are the particles that move on circular orbits inside nuclei (with the remaining half of their kinetic energy) around immobilized protons, which have spin only. If protons were rotating, they would cause orbital magnetism, which has never been observed beyond the magnetic dipole moment of nucleon spin. In addition, no regression of the proton has occurred, because it would cause alternating magnetism, which has also never been observed. The first nuclear units are deuterium, tritium, helium-3 and helium-4, the last of which is the basic structural unit of the large nuclei. The spin, the magnetic moment and the mass deficit of the above units and of the bonding neutrons are the three experimental constants upon which the structure of nuclei is based.
[1861] vixra:2001.0152 [pdf]
Assuming C<rad^2(abc), the Abc Conjecture is True
In this paper, assuming that c<rad^2(abc) is true, we give a proof of the abc conjecture by proposing the expression of the constant K(\epsilon); we then prove that \forall \epsilon>0, for a,b,c positive integers relatively prime with c=a+b, we have c<K(\epsilon).rad(abc)^{1+\epsilon}. Some numerical examples are given.
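As background for the radical rad(abc) appearing in the statement, a small helper and a check on a classic abc triple (an illustrative sketch, not from the paper):

```python
from math import gcd

def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

# A classic abc triple: a + b = c with a, b coprime.
a, b, c = 1, 8, 9
assert a + b == c and gcd(a, b) == 1
print(rad(a * b * c))            # rad(72) = 2 * 3 = 6
print(c < rad(a * b * c) ** 2)   # True: 9 < 36, consistent with c < rad^2(abc)
```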
[1862] vixra:2001.0151 [pdf]
Naturally Numbers Are Three Plus One Dimensional Final
The Riemann hypothesis stands proved in three different ways. To prove the Riemann hypothesis from the functional equation, the concept of the Delta function is introduced, similar to the Gamma and Pi functions. The other two proofs are derived using Euler's formula and elementary algebra. Analytically continuing the gamma and zeta functions to an extended domain, poles and zeros of zeta values are redefined. The Hodge conjecture and the BSD conjecture are also proved using zeta values. Other prime conjectures like the Goldbach conjecture, the twin prime conjecture etc. are also proved in the light of a new understanding of primes. Numbers are proved to be multidimensional, as worked out by Hamilton. Logarithms of negative and complex numbers are redefined using the extended number system. Factorials of negative and complex numbers are redefined using values of the Delta function.
[1863] vixra:2001.0147 [pdf]
Number Theory and Cosmology and Particle Physics
The Riemann hypothesis stands proved in three different ways. To prove the Riemann hypothesis from the functional equation, the concept of the Delta function is introduced, similar to the Gamma and Pi functions. The other two proofs are derived using Euler's formula and elementary algebra. Analytically continuing the gamma and zeta functions to an extended domain, poles and zeros of zeta values are redefined. The Hodge conjecture and the BSD conjecture are also proved using zeta values. Other prime conjectures like the Goldbach conjecture, the twin prime conjecture etc. are also proved in the light of a new understanding of primes. Numbers are proved to be multidimensional, as worked out by Hamilton. Logarithms of negative and complex numbers are redefined using the extended number system. Factorials of negative and complex numbers are redefined using values of the Delta function.
[1864] vixra:2001.0103 [pdf]
Integrals of Entire, Meromorphic and Subharmonic Functions on Small Sets of the Positive Semiaxis
In this note, we announce the results on estimates of integrals of entire, meromorphic, and subharmonic functions on small subsets of the positive semiaxis. These results develop one classical theorem of R. Nevanlinna and the well-known lemmas on small arcs or intervals of A. Edrei, W.H.J. Fuchs, A.F. Grishin, M.L. Sodin and T.I. Malyutina.
[1865] vixra:2001.0097 [pdf]
Definitive Tentative of a Proof of the \textit{abc} Conjecture
In this paper, we consider the $abc$ conjecture. Firstly, we give an elementary proof that $c<3rad^2(abc)$. Secondly, the proof of the $abc$ conjecture is given for $\epsilon \geq 1$, then for $\epsilon \in ]0,1[$. We choose the constant $K(\epsilon)$ as $K(\epsilon)=\frac{3}{e}.e^{ \left(\frac{1}{\epsilon^2} \right)}$ for $0<\epsilon <1$ and $K(\epsilon)=3$ for $\epsilon \geq 1$. Some numerical examples are presented.
[1866] vixra:2001.0094 [pdf]
On a Connected $T_{1/2}$ Alexandroff Topology and $^*g\hat{\alpha}$-Closed Sets in Digital Plane
The Khalimsky topology plays a significant role in digital image processing. In this paper we define a topology $\kappa_1$ on the set of integers generated by the triplets of the form $\{2n, 2n+1, 2n+3\}$. We show that in this space $(\mathbb{Z}, \kappa_1)$, every point has a smallest neighborhood and hence this is an Alexandroff space. This topology is homeomorphic to the Khalimsky topology. We prove, among others, that this space is connected and $T_{3/4}$. Moreover, we introduce the concept of $^*g\hat{\alpha}$-closed sets in a topological space and characterize it using the $^*g\alpha o$-kernel and closure. We investigate the properties of $^*g\hat{\alpha}$-closed sets in the digital plane. The family of all $^*g\hat{\alpha}$-open sets of $(\mathbb{Z}^2, \kappa^2)$ forms an alternative topology of $\mathbb{Z}^2$. We prove that this plane $(\mathbb{Z}^2, ^*g\hat{\alpha}O)$ is $T_{1/2}$. It is well known that the digital plane $(\mathbb{Z}^2, \kappa^2)$ is not $T_{1/2}$, even if $(\mathbb{Z}, \kappa)$ is $T_{1/2}$.
[1867] vixra:2001.0091 [pdf]
Division by Zero Calculus for Differentiable Functions L'Hôpital's Theorem Versions
We will give a generalization of the division by zero calculus to differentiable functions and its basic properties. Typically, we can obtain l'Hôpital's theorem versions and some deep properties on the division by zero. Keywords: division by zero, division by zero calculus, differentiable, analysis, Laurent expansion, l'Hôpital's theorem, $1/0=0/0=z/0=\tan(\pi/2) =\log 0 =0$, $(z^n)/n = \log z$ for $n=0$, $e^{(1/z)} = 1$ for $z=0$.
[1868] vixra:2001.0052 [pdf]
Marginal Likelihood Computation for Model Selection and Hypothesis Testing: an Extensive Review
This is an up-to-date introduction to, and overview of, marginal likelihood computation for model selection and hypothesis testing. Computing normalizing constants of probability models (or ratio of constants) is a fundamental issue in many applications in statistics, applied mathematics, signal processing and machine learning. This article provides a comprehensive study of the state-of-the-art of the topic. We highlight limitations, benefits, connections and differences among the different techniques. Problems and possible solutions with the use of improper priors are also described. Some of the most relevant methodologies are compared through theoretical comparisons and numerical experiments.
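A minimal example of the normalizing-constant problem surveyed here: naive Monte Carlo estimation of a marginal likelihood in a Beta-Bernoulli model, where the exact answer is known in closed form (an illustrative sketch, not a method from the review; the data values are arbitrary):

```python
import math
import random

random.seed(0)

# Bernoulli data: k successes in n trials, with a Uniform(0,1) prior on theta.
k, n = 7, 10

def lik(theta):
    """Likelihood p(D | theta) of the observed data."""
    return theta ** k * (1 - theta) ** (n - k)

# Naive Monte Carlo: p(D) = E_prior[p(D | theta)], averaging over prior draws.
S = 200000
est = sum(lik(random.random()) for _ in range(S)) / S

# The conjugate Beta-Bernoulli model has a closed form: p(D) = 1/((n+1)*C(n,k)).
exact = 1 / ((n + 1) * math.comb(n, k))
print(exact)                             # 1/1320 ~ 0.000758
print(abs(est - exact) / exact < 0.05)   # True: estimate within 5% of exact
```

More sophisticated estimators (importance sampling, bridge sampling, nested sampling) exist precisely because this naive average degrades badly in higher dimensions.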
[1869] vixra:2001.0037 [pdf]
Anomaly Detection for Cybersecurity: Time Series Forecasting and Deep Learning
Finding anomalies when dealing with a great amount of data creates issues related to the heterogeneity of different values and to the difficulty of modelling trend data over time. In this paper we combine the classical methods of time series analysis with deep learning techniques, with the aim of improving the forecast when facing time series with long-term dependencies. Starting with forecasting methods and comparing the expected values with the observed ones, we find anomalies in time series. We apply this model to a bank cybersecurity case to find anomalous behavior related to branch application usage.
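The forecast-then-compare scheme described here can be sketched with a simple moving-average forecaster (illustrative only; the window, threshold, and synthetic data are hypothetical, not the paper's model):

```python
import statistics

def detect_anomalies(series, window=5, z_thresh=3.0):
    """Flag indices whose deviation from a moving-average forecast is large."""
    anomalies = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        forecast = statistics.mean(hist)   # one-step-ahead forecast
        sd = statistics.stdev(hist)
        if sd > 0 and abs(series[t] - forecast) / sd > z_thresh:
            anomalies.append(t)
    return anomalies

# Synthetic "application usage" counts with one injected spike at index 30.
data = [10.0 + (i % 3) for i in range(50)]
data[30] = 100.0
print(detect_anomalies(data))  # [30]
```

In the paper's setting the moving average would be replaced by a learned forecaster better suited to long-term dependencies.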
[1870] vixra:2001.0023 [pdf]
Interpretation of Shannon Entropies with Various Bases by Means of Multinary Searching Games
Ben-Naim used twenty-question games to illustrate Shannon entropy with base 2 as a measure of the amount of information in terms of the minimum average number of binary questions. We found that equating Shannon entropy with base 2 to the minimum average number of binary questions is only valid under a special condition. The special condition is referred to as the equiprobability condition, which requires that the outcomes of every question have equal probability, thus restricting the probability distribution. This requirement is proven for a ternary game and a proposed multinary game as well. The proposed multinary game finds a coin hidden in one of several boxes by using a multiple pan balance. We have shown that the minimum average number of weighing measurements by using the multiple pan balance can be directly obtained by using Shannon entropy with base b under the equiprobability condition. Therefore, Shannon entropy with base b can be interpreted as the minimum average number of weighing measurements by using the multiple pan balance when the multiple outcomes have equal probability every time.
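The correspondence between base-b Shannon entropy and question counts under the equiprobability condition can be illustrated numerically (a sketch, not from the paper):

```python
import math

def shannon_entropy(probs, base):
    """Shannon entropy of a distribution in the given base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Coin hidden in one of 8 equally likely boxes: 3 binary (yes/no) questions.
print(shannon_entropy([1 / 8] * 8, 2))  # ~3.0

# A balance with 3 equally likely outcomes per weighing: 9 boxes, 2 weighings.
print(shannon_entropy([1 / 9] * 9, 3))  # ~2.0

# Without equiprobability the entropy drops below the naive question count.
print(shannon_entropy([0.7, 0.1, 0.1, 0.1], 2) < 2.0)  # True
```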
[1871] vixra:2001.0018 [pdf]
What's a Muon Anyways!?
Understanding the role of muons in Particle Physics is an important step toward understanding generations and the origin of mass as an expression of internal structure. A possible connection between muonic atoms and cycloatoms is used as a pretext to speculate on the above core issue of the Standard Model.
[1872] vixra:1912.0547 [pdf]
The Qubit Model: A Platonic and Exceptional Quantum Theory
Recently, GUTs based on the exceptional Lie algebras attempt unification of interactions of the Standard Model as a gauge field theory, e.g. Garrett Lisi's E8-TOE. But the modern growing trend in quantum physics is based on the Quantum Information Processing paradigm (QIP). The present proposal will develop the Qubit Model, a QIP analog of the Quark Model within the SM framework. The natural principle that "quantum interactions should be discrete", technically meaning the reduction of the gauge group to finite subgroups of SO(3)/SU(2), implies that qubit-frames (3D-pixels), playing the role of baryons, have the Platonic symmetries as their Klein Geometry (three generations of flavors): T, O, I, and hence their "doubles", the binary point groups, are the root systems E6,7,8 of the exceptional Lie algebras, which control their Quantum Dynamics. The Qubit Model conceptually reinterprets the experimental heritage modeled into the SM, and has clear prospects of explaining the mass spectrum of elementary particles, consistent with the works of other researchers, including Mac Gregor and Palazzi regarding the quantization of mass (Elementary Particles), or Moon and Cook regarding the structure of the nucleus (Nuclear Physics).
[1873] vixra:1912.0540 [pdf]
A Remark on the Erd\H{o}s-Straus Conjecture
In this paper we discuss the Erd\H{o}s-Straus conjecture. Using a very simple method we show that for each $L\in \mathbb{N}$ with $L>n-1$ there exist some $(x_1,x_2,\ldots,x_n)\in \mathbb{N}^n$ with $x_i\neq x_j$ for all $1\leq i<j\leq n$ such that \begin{align}\frac{n}{L}\ll \sum \limits_{j=1}^{n}\frac{1}{x_j}\ll \frac{n}{L}\nonumber \end{align}In particular, for each $L\geq 3$ there exist some $(x_1,x_2,x_3)\in \mathbb{N}^3$ with $x_1\neq x_2$, $x_2\neq x_3$ and $x_3\neq x_1$ such that \begin{align}c_1\frac{3}{L}\leq \frac{1}{x_1}+\frac{1}{x_2}+\frac{1}{x_3}\leq c_2\frac{3}{L}\nonumber \end{align}for some $c_1,c_2>1$.
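For context, the classical Erdős-Straus conjecture asserts that 4/n is a sum of three unit fractions for every integer n >= 2; a naive brute-force search illustrating it (a sketch, not the paper's method; the search limit is arbitrary):

```python
from fractions import Fraction

def erdos_straus(n, limit=1000):
    """Brute-force search for 4/n = 1/x + 1/y + 1/z with x <= y <= z."""
    target = Fraction(4, n)
    for x in range(1, limit):
        rx = target - Fraction(1, x)
        if rx <= 0:
            continue
        for y in range(x, limit):
            rxy = rx - Fraction(1, y)
            if rxy <= 0:
                continue
            if rxy.numerator == 1:  # remainder is itself a unit fraction 1/z
                return x, y, rxy.denominator
    return None

print(erdos_straus(5))  # (2, 4, 20): 1/2 + 1/4 + 1/20 = 4/5
```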
[1874] vixra:1912.0538 [pdf]
The Little $\ell$ Function
In this short note we introduce a function which iteratively behaves in a similar fashion to the factorial function. However, the growth rate of this function is not as dramatic and sudden as that of the factorial function. We also propose an approximation for this function for any given input, which holds for sufficiently large values of $n$.
[1875] vixra:1912.0537 [pdf]
Surgical Analysis of Functions
In this paper we introduce the concept of surgery. This concept ensures that almost all discontinuous functions can be made continuous without redefining their support. In spite of this, it preserves the properties of the original function. Consequently, we are able to get a handle on the number of points of discontinuity on a finite interval by having information on the norm of the repaired function, and vice versa.
[1876] vixra:1912.0531 [pdf]
The Connection Between X^2+1 and Balancing Numbers
Balancing numbers as introduced by Behera and Panda [1] can be shown to be connected to the formula x^2+1=N in a very simple way. The goal of this paper is to show that if a balancing number exists for the balancing equation 1+2+...+(y-1) = (y+1)+(y+2)+...+(y+m), then there is a corresponding (2y)^2+1=N, where N is composite. We will also show how this can be used to factor N.
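The balancing equation and the associated (2y)^2+1 claim can be checked directly (an illustrative sketch, not from the paper; the search bound 300 is arbitrary):

```python
def is_balancing(y):
    """Does some m satisfy 1 + 2 + ... + (y-1) = (y+1) + ... + (y+m)?"""
    left = y * (y - 1) // 2
    m, right = 0, 0
    while right < left:
        m += 1
        right += y + m
    return right == left

balancing = [y for y in range(2, 300) if is_balancing(y)]
print(balancing)  # [6, 35, 204]: the balancing numbers below 300

# For the balancing number y = 6 (with m = 2: 1+...+5 = 7+8 = 15),
# the corresponding N = (2y)^2 + 1 is composite:
N = (2 * 6) ** 2 + 1
print(N, N % 5 == 0)  # 145 True  (145 = 5 * 29)
```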
[1877] vixra:1912.0529 [pdf]
On a Local Spectra Inequality
In this note we show that under certain conditions the following inequality holds: \begin{align}\sum \limits_{\lambda_i\in \mathrm{Spec}(ab^{T})}\mathrm{min}\{\log |t-\lambda_i|\}_{[||a||,||b||]}&\leq \# \mathrm{Spec}(ab^T)\log\bigg(\frac{||b||+||a||}{2}\bigg)\nonumber \\&+\frac{1}{||b||-||a||}\sum \limits_{\lambda_i\in \mathrm{Spec}(ab^T)}\log \bigg(1-\frac{2\lambda_i}{||b||+||a||}\bigg).\nonumber \end{align}Also, under the same conditions, the following inequality holds: \begin{align}\int \limits_{||a||}^{||b||}\log|\mathrm{det}(ab^{T}-tI)|dt&\leq \# \mathrm{Spec}(ab^T)(||b||-||a||)\log\bigg(\frac{||b||+||a||}{2}\bigg)\nonumber \\&+\sum \limits_{\lambda_i\in \mathrm{Spec}(ab^T)}\log \bigg(1-\frac{2\lambda_i}{||b||+||a||}\bigg).\nonumber \end{align}
[1878] vixra:1912.0528 [pdf]
A Proof of the Twin Prime Conjecture
In this paper we prove the twin prime conjecture by showing that \begin{align} \sum \limits_{\substack{p\leq x\\p,p+2\in \mathbb{P}}}1\geq (1+o(1))\frac{x}{2\mathcal{C}\log^2 x}\nonumber \end{align}where $\mathcal{C}:=\mathcal{C}(2)>0$ fixed and $\mathbb{P}$ is the set of all prime numbers. In particular it follows that \begin{align} \sum \limits_{p,p+2\in \mathbb{P}}1=\infty\nonumber \end{align}by taking $x\longrightarrow \infty$ on both sides of the inequality. We start by developing a general method for estimating correlations of the form \begin{align} \sum \limits_{n\leq x}G(n)G(n+l)\nonumber \end{align}for a fixed $1\leq l\leq x$ and where $G:\mathbb{N}\longrightarrow \mathbb{R}^{+}$.
[1879] vixra:1912.0526 [pdf]
Naturally Numbers Are Three Plus One Dimensional
The Riemann hypothesis stands proved in three different ways. To prove the Riemann hypothesis from the functional equation, the concept of the Delta function is introduced, similar to the Gamma and Pi functions. The other two proofs are derived using Euler's formula and elementary algebra. Analytically continuing the gamma and zeta functions to an extended domain, poles and zeros of zeta values are redefined. The Hodge conjecture and the BSD conjecture are also proved using zeta values. Other prime conjectures like the Goldbach conjecture, the twin prime conjecture etc. are also proved in the light of a new understanding of primes. Numbers are proved to be multidimensional, as worked out by Hamilton. Logarithms of negative and complex numbers are redefined using the extended number system. Factorials of negative and complex numbers are redefined using values of the Delta function.
[1880] vixra:1912.0523 [pdf]
Unsorted Collection of Unruly Random Thoughts: Started for no Good Reason in A.D. 2019
Sometimes abstracts are best served as warnings. Here is one: If you are humor challenged, cannot think in metaphors, or (much worse) believe in the honesty of politicians (and that’s a metaphor too), this may not be suitable for you. Here is another: this document will most likely be expanded.
[1881] vixra:1912.0506 [pdf]
Golden Ratio Geometry and the Fine-Structure Constant
The golden ratio is found to be related to the fine-structure constant, which determines the strength of the electromagnetic interaction. The golden ratio and classical harmonic proportions with quartic equations give an approximate value for the inverse fine-structure constant the same as that discovered previously in the geometry of the hydrogen atom. With the former golden ratio results, relationships are also shown between the four fundamental forces of nature: electromagnetism, the weak force, the strong force, and the force of gravitation.
[1882] vixra:1912.0494 [pdf]
Twin Prime Conjecture(newer Version)
I proved the Twin Prime Conjecture. The probability of a twin prime is approximately slightly lower than 4/3 times the square of the probability that a prime will appear. I investigated up to $5\times10^{12}$. When the numbers grow toward the limit, primes are produced rarely, but since twin primes occur with probability slightly lower than 4/3 times the square of the distribution of primes, the frequency of production of twin primes approaches 0. However, it is not 0, because primes continue to be produced; therefore, twin primes continue to be produced. If the twin primes were finite, the primes would be finite, because slightly lower than 4/3 times the square of the probability of primes is the probability of twin primes. This is a contradiction, because there are infinitely many primes: $[\text{probability of the existence of primes}]^2\times 4/3 = (\text{probability of the existence of twin primes})$. When the numbers become extreme, the generation of primes becomes extremely small, but it is not 0: very few, but prime numbers are still generated. Therefore, even at the limit, twin primes are also generated. That is, twin primes exist forever.
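The "4/3 times the square of the prime probability" heuristic (essentially the Hardy-Littlewood twin prime constant 2*C_2, approximately 1.32) can be checked empirically with a sieve (an illustrative sketch, not from the paper):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if s[i]]

x = 10 ** 6
primes = set(primes_up_to(x))
pi_x = len(primes)
twins = sum(1 for p in primes if p + 2 in primes)
print(pi_x, twins)  # 78498 primes and 8169 twin pairs below 10^6

# Ratio of the twin-pair count to x * (prime density)^2: close to
# the Hardy-Littlewood constant 2*C_2 ~ 1.32, i.e. slightly below 4/3.
print(twins / (x * (pi_x / x) ** 2))  # ~1.33
```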
[1883] vixra:1912.0464 [pdf]
Speed of Light in FG5 Gravimeter
The absolute gravimeter measures the gravitational acceleration by dropping a corner-cube retroreflector in a vacuum. The light reflected by the corner cube interferes with another light from the same emission. The interference pattern cannot be explained by the theory if the speed of light remains constant upon reflection. Two research teams were obliged to propose a new definition of acceleration to match their test data. Neither team understands that the speed of light actually changes upon reflection by a moving mirror. The definition of acceleration should remain intact. The speed of reflected light should increase to match the observed fringe pattern from the gravimeter.
[1884] vixra:1912.0456 [pdf]
On (An)Abelian Geometry and Deformation Theory with Applications to Quantum Physics
The Betti-de Rham period isomorphism ("Abelian Geometry") is related to the algebraic fundamental group (Anabelian Geometry), in analogy with the classical context of the Hurewicz Theorem. To investigate this idea, the article considers an "Abstract Galois Theory", as a separate abstract structure from, yet compatible with, the Theory of Schemes, which has its historical origin in Commutative Algebra and motivation in the early stages of Algebraic Topology. The approach to Motives via Deformation Theory was suggested by Kontsevich as early as 1999, and suggests Formal Manifolds, with formal pointed manifolds as local models, as the source of motives, and perhaps a substitute for a "universal Weil cohomology". The proposed research aims to gain additional understanding of periods via a concrete project, the discrete algebraic de Rham cohomology, a follow-up of the author's previous work. The connection with Arithmetic Gauge Theory should provide additional intuition, by looking at covering maps as flat connection spaces, and considering branched covers of the Riemann sphere as the more general case. The research on Feynman/Veneziano Amplitudes and Gauss/Jacobi sums allows one to deepen the parallel between the continuum and discrete frameworks: an analog of the Virasoro algebra in finite characteristic. A larger project is briefly considered, consisting in deriving Motives from the Theory of Deformations, as suggested by Kontsevich. Following Soibelman and Kontsevich, the idea of defining Formal Manifolds as groupoids of pointed formal manifolds (after Maurer-Cartan ``exponentiation''), with associated torsors as ``gluing data'' (transition functions), is suggested. This framework seems to be compatible with the ideas from the Theory of Periods, sheaf theory / etale maps and Grothendieck's development of Galois Theory (Anabelian Geometry). The article is a preliminary evaluation of a research plan of the author.
Further concrete problems are included, since they are related to the general ideas mentioned above, and especially relevant to understanding the applications to scattering amplitudes in quantum physics.
[1885] vixra:1912.0430 [pdf]
From Periods to Anabelian Geometry and Quantum Amplitudes
To better understand and investigate the Kontsevich-Zagier conjecture on abstract periods, we consider the case of algebraic Riemann surfaces representable by Belyi maps. The category of morphisms of Belyi ramified maps and Dessins d'Enfant will be investigated, in search of an analog, for periods, of the Ramification Theory governing the decomposition of primes in field extensions, controlled by their respective algebraic Galois groups. This suggests a relation between the theory of (cohomological, Betti-de Rham) periods and Grothendieck's Anabelian Geometry (homotopical / local systems), towards perhaps an algebraic analog of the Hurewicz Theorem, relating the algebraic de Rham cohomology and the algebraic fundamental group, both pioneered by A. Grothendieck. There seem to be good prospects of better understanding the role of the absolute Galois group in the physics context of scattering amplitudes and Multiple Zeta Values, with their incarnation as Chen integrals on moduli spaces, as studied by Francis Brown, since the latter are a homotopical analog of de Rham Theory. The research will be placed in the larger context of the ADE-correspondence, since, for example, orbifolds of finite groups of rotations have crepant resolutions relevant in String Theory, while via Cartan-Killing Theory and exceptional Lie algebras they relate to TOEs. Relations with the author's reformulation of the cohomology of cyclic groups as a discrete analog of de Rham cohomology, and with Arithmetic Galois Theory, will provide a purely algebraic toy model of the said algebraic homology/homotopy group theory of Grothendieck as part of Anabelian Geometry. It will allow an elementary investigation of the main concepts defining periods and the algebraic fundamental group, together with their conceptual relation to algebraic numbers and Galois groups. The Riemann surfaces with Platonic tessellations, especially the Hurwitz surfaces, are related to the finite Hopf sub-bundles with symmetries the ``exceptional'' Galois groups.
The corresponding Platonic Trinity leads to connections with ADE-correspondence, and beyond, e.g. TOEs and ADEX-Theory. Quantizing "everything" (cyclotomic quantum phase and finite Platonic-Hurwitz geometry of qubits/baryons) could perhaps be The Eightfold (Petrie polygon) Way to finally understand what quark flavors and fermion generations really are.
[1886] vixra:1912.0394 [pdf]
Navier-Stokes Fluid Millennium Prize Problem
The Millennium Prize problem is solved because an inconsistency between the Navier-Stokes fluid and the perfect fluid is found. In several examples, the inconsistency of the known physics of fluids is shown.
[1887] vixra:1912.0378 [pdf]
Chemical Analysis Of Plain Distilled Water May Refute Mass-Energy Equivalence Of E=mc²
Despite E=mc² being a foundational equation of modern physics, it has not been experimentally verified. Though four eminent physicists claimed ‘A direct test of E=mc²’ (Nature 2006), giving a verification accurate to 1:10⁶, the experiment was not a verification of E=mc², but rather an alternative experiment to deduce the mass of the neutron. Instead of the usual Deuteron interaction, they used the nuclear interaction involving sulfur S-32 and silicon Si-28. The claim of accuracy of 1:10⁶ concerns the comparison of the new value with the accepted value of the mass of the neutron. This paper shows that a chemical analysis (with a good analytical balance) of the mass composition of oxygen and hydrogen in plain distilled water may show that the law of conservation of mass is universally valid without the need for the hypothesis of mass-energy equivalence; this would also imply an unequivocal refutation of the equation E=mc². Such an experiment could easily be carried out by any laboratory in today’s universities. The experiment should be simple and straightforward, yet its outcome may have enormous consequences for the world of physics.
[1888] vixra:1912.0376 [pdf]
Is the Black Hole a Hole?
Evidence is presented that a black hole is a hole. Namely, right behind the black surface (the event horizon, for a non-rotating black hole) there is no space and no time, no spacetime, just as it would be prior to the Big Bang. The first composite image of a black hole from the Event Horizon Telescope is further evidence for this, with a resulting correction of the reported mass.
[1889] vixra:1912.0360 [pdf]
$e, \pi, \chi \cdots \alpha?$
Feynman amplitudes are periods, and also coefficients of the QED partition function with a formal deformation parameter, the fine structure constant $\alpha$. Moreover, this truly fundamental mathematical constant is the ratio of magnetic (fluxon) vs. electric charge, as well as the grading of the decay lifetimes telling apart weak from strong ``interactions''. On the other (mathematical) hand, $e$ is the ``inverse'' of $\pi$, another deformation parameter (no ordinary period), as Euler's famous identity $\exp(2\pi i)=1$ suggests. In a recent work, Atiyah related $\alpha$ and the Todd function. But Todd classes are inverses of Chern classes, suggesting further ``clues'' to look for conceptual relationships between these mathematical constants, in an attempt to catch a Platonic and Exceptional Universe by the TOE.
[1890] vixra:1912.0359 [pdf]
On Deviation Equation
Due to his solution of the energy localization problem in General Relativity, the author finds that the tidal forces of a black hole can compress a falling astronaut instead of ripping him apart. Moreover, the necessity is found of including a mathematical correction made ``by hand'' into the first-order deviation equation.
[1891] vixra:1912.0352 [pdf]
On Proofs of the Poincare Conjecture
On December 22, 2006, the journal Science honored Perelman's proof of the Poincare Conjecture as the scientific ``Breakthrough of the Year'', the first time this honor was bestowed in the area of mathematics. However, I have critical questions about Perelman's proof of the Poincare Conjecture. The conjecture states that ``Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere.'' ``Homeomorphic'' means that by a non-singular deformation one produces a perfect sphere, the equivalent of the initial space. However, pasting in foreign caps will not produce such a deformation. My short proofs are given.
[1892] vixra:1912.0347 [pdf]
Q-Analogs of Sinc Sums and Integrals
$q$-analogs of sum equals integral relations $\sum_{n\in\mathbb{Z}}f(n)=\int_{-\infty}^\infty f(x)dx$ for sinc functions and binomial coefficients are studied. Such analogs are already known in the context of $q$-hypergeometric series. This paper deals with multibasic `fractional' generalizations that are not $q$-hypergeometric functions.
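The classical (q = 1) base case of the sum-equals-integral relation can be checked numerically for the sinc function; a minimal floating-point sketch (standard material, not the paper's q-deformed setting):

```python
import numpy as np

# For sinc(x) = sin(x)/x with sinc(0) = 1, the classical relation is
#   sum_{n in Z} sinc(n) = integral_{-inf}^{inf} sinc(x) dx = pi.
n = np.arange(1, 200_000)
lattice_sum = 1.0 + 2.0 * np.sum(np.sin(n) / n)  # n = 0 term plus the symmetric tail

print(lattice_sum)  # ≈ 3.14159..., i.e. pi
```

The partial sums converge slowly (the tail is O(1/N)), which is why a few hundred thousand terms are used.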
[1893] vixra:1912.0344 [pdf]
Unification of Gravity with Quantum Mechanics: The Beauty and the Beast
Isaac Newton did not invent the gravity constant G, nor did he use it or need it. Newton’s original formula was F = Mm/r^2 and not the formula F = GMm/r^2, which evolved over time. Newton’s formula can easily be unified with quantum mechanics, while the modified formula can only be unified with quantum mechanics by introducing a very awkward notation, as shown in this paper. Modern physics indirectly uses two different definitions for mass without knowing it; one for gravity and another for the rest of physics. This, we will prove, has made it impossible to unify quantum mechanics with gravity. However, once we understand the cause of the problem, it can be fixed easily by going back to the key insight given by Newton, which leads to a beautifully simple unified theory, in both conception and notation. Alternatively, one can arrive at the same theory, but with unattractive notation that hides the beauty at the depth of reality. We will show a beautiful way to unify gravity and quantum mechanics and also an ugly way. Both are essentially the same, but only one way, the Newton-inspired way, gives deep insight into matter, energy, time, space, gravity and even quantum mechanics. Modern physics has ignored Newton’s insight on matter and altered the mass definition, and therefore Newton’s gravity formula was modified as well, such that a unified theory seemed to become impossible. Newton himself would probably not have approved of the gravity constant; it is a flaw in the foundation of his theory and his gravity formula. Still, when one understands what the gravity constant really represents, one can unify standard physics by adding it in other places, as needed. As an example of its power, our new quantum gravity theory can predict galaxy rotation based on baryonic matter only. This strongly indicates that dark matter is an extraneous and awkward factor used in today’s standard gravity model in order to get an incomplete model to fit observations.
[1894] vixra:1912.0340 [pdf]
Higher Accuracy Order in Differentiation-by-Integration
In this text explicit forms of several higher precision order kernel functions (to be used in the differentiation-by-integration procedure) are given for several derivative orders. Also, a system of linear equations is formulated which allows one to construct kernels with an arbitrary precision for an arbitrary derivative order. A computer study is realized and it is shown that numerical differentiation based on higher precision order kernels performs much better (w.r.t. errors) than the same procedure based on the usual Legendre-polynomial kernels. Presented results may have implications for numerical implementations of the differentiation-by-integration method.
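For a concrete instance of the procedure, the lowest-order case is Lanczos's differentiation by integration, f'(a) ≈ (3/(2h³)) ∫₋ₕʰ t f(a+t) dt; a minimal numerical sketch (the abstract's higher-precision kernels would replace the kernel t):

```python
import numpy as np

# Lanczos differentiation by integration with the lowest-order (Legendre) kernel t:
#   f'(a) ≈ 3/(2 h^3) * ∫_{-h}^{h} t · f(a + t) dt,   error O(h^2).
# The integral is evaluated with a simple trapezoid rule.
def lanczos_derivative(f, a, h=1e-2, m=2001):
    t = np.linspace(-h, h, m)
    y = t * f(a + t)
    dt = t[1] - t[0]
    return 3.0 / (2.0 * h**3) * (np.sum(y) - 0.5 * (y[0] + y[-1])) * dt

print(lanczos_derivative(np.sin, 0.0))  # ≈ 1.0 = cos(0)
print(lanczos_derivative(np.exp, 1.0))  # ≈ 2.71828... = e
```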
[1895] vixra:1912.0300 [pdf]
Essential Problems on the Origins of Mathematics; Division by Zero Calculus and New World
Based on the preprint survey paper (What Was Division by Zero?; Division by Zero Calculus and New World, viXra:1904.0408, submitted 2019-04-22 00:32:30), we give a viewpoint of the division by zero calculus from the origins of mathematics, which are the essences of mathematics. The contents of this paper seem to be serious for our mathematics and for our world history, together with the materials in the preprint. So, the author hopes that related mathematicians, mathematical scientists and others will check and consider the topics from various viewpoints.
[1896] vixra:1912.0281 [pdf]
Models for Elementary Particles, Dark Matter, Dark Energy, and Galaxies
We show theory that spans tiny and vast aspects of physics. We suggest descriptions for new elementary particles, dark matter, and dark energy. We use those descriptions to explain data regarding dark matter effects, dark energy effects, and galaxy formation. Our mathematics-based modeling, descriptions, and explanations embrace and augment standard physics theory and modeling. One basis for our modeling is an extension to mathematics for harmonic oscillators.
[1897] vixra:1912.0253 [pdf]
Born's Reciprocal Relativity Theory, Curved Phase Space, Finsler Geometry and the Cosmological Constant
A brief introduction of the history of Born's Reciprocal Relativity Theory, Hopf algebraic deformations of the Poincare algebra, de Sitter algebra, and noncommutative spacetimes paves the road for the exploration of gravity in $curved$ phase spaces within the context of the Finsler geometry of the cotangent bundle $T^* M$ of spacetime. A scalar-gravity model is duly studied, and exact nontrivial analytical solutions for the metric and nonlinear connection are found that obey the generalized gravitational field equations, in addition to satisfying the $zero$ torsion conditions for $all$ of the torsion components. The $curved$ base spacetime manifold and internal momentum space both turn out to be (Anti) de Sitter type. The most salient feature is that the solutions capture the very early inflationary and very-late-time de Sitter phases of the Universe. A $regularization$ of the $8$-dim phase space action leads naturally to an extremely small effective cosmological constant $ \Lambda_{eff}$, which in turn furnishes an extremely small value for the underlying four-dim spacetime cosmological constant $ \Lambda$, as a direct result of a $correlation$ between $ \Lambda_{eff} $ and $ \Lambda$ resulting from the field equations. The rich structure of Finsler geometry deserves to be explored further, since it can shed some light on Quantum Gravity and lead to interesting cosmological phenomenology.
[1898] vixra:1912.0225 [pdf]
Numbers Are Three Dimensional, as Nature
The Riemann hypothesis stands proved in three different ways. To prove the Riemann hypothesis from the functional equation, the concept of a Delta function is introduced, similar to the Gamma and Pi functions. The other two proofs are derived using Euler's formula and elementary algebra. By analytically continuing the gamma and zeta functions to an extended domain, the poles and zeros of the zeta values are redefined.
[1899] vixra:1912.0207 [pdf]
Consideration of the Twin Prime Conjecture: Average Difference is 2.296
I considered the Twin Prime Conjecture. The probability of a twin prime is approximately slightly lower than 4/3 times the square of the probability that a prime will appear. As the numbers grow without bound, primes are produced only rarely, and since the density of twin primes is slightly lower than 4/3 times the square of the density of primes, the frequency of production of twin primes tends to 0. The places where primes could appear are filled one after another by multiples of primes, and eventually almost disappear; primes can occur only very rarely when the numbers are huge. This is natural from the following equation: \begin{equation} \pi(x)\sim\frac{x}{\log{x}}\ \ \ (x\to\infty) \end{equation} $[\text{Probability of the existence of primes}]^2\times 4/3\sim$ (Probability of the existence of twin primes). When the numbers become extreme, the generation of primes becomes extremely rare; however, it is not 0. Very few, but primes are still generated. If twin primes appeared as two completely independent primes, the Twin Prime Conjecture would be denied. However, if twin primes appear in combination, like the primes themselves, then twin primes are produced forever and the Twin Prime Conjecture is correct.
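For comparison, the constant in the Hardy–Littlewood twin prime conjecture is 2C₂ ≈ 1.3203, indeed slightly below 4/3 ≈ 1.3333; a quick empirical check of the density claim with a standard sieve (not the author's method):

```python
import numpy as np

# Sieve of Eratosthenes up to 10^6, then count twin prime pairs (p, p+2).
N = 1_000_000
sieve = np.ones(N + 1, dtype=bool)
sieve[:2] = False
for p in range(2, int(N**0.5) + 1):
    if sieve[p]:
        sieve[p*p::p] = False

twin_pairs = int(np.sum(sieve[:-2] & sieve[2:]))   # both p and p+2 prime
print(twin_pairs)                                  # 8169 twin prime pairs below 10^6

# Hardy–Littlewood prediction: 2*C2 * ∫_3^N dt / (ln t)^2, with 2*C2 ≈ 1.3203.
t = np.linspace(3.0, N, 1_000_001)
prediction = 1.3203236 * np.sum(1.0 / np.log(t)**2) * (t[1] - t[0])
print(round(prediction))                           # close to the sieve count
```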
[1900] vixra:1912.0205 [pdf]
Almost no Primes in the Infinite World
There are almost no primes in the infinite world. This is because the places where primes could appear are occupied by multiples of the primes. If you think about a hexagon, you can see it right away.
[1901] vixra:1912.0197 [pdf]
Nonlinear Waves in Two-Dimensional Autonomous Cellular Neural Networks Coupled by Memristors
In this paper, we propose two-dimensional autonomous cellular neural networks (CNNs), which are formed by connecting single synaptic-input CNN cells to each node of an ideal memristor grid. Our computer simulations show that the proposed two-dimensional autonomous CNNs can exhibit interesting and complex nonlinear waves. In many two-dimensional autonomous CNNs, we have to use a locally active memristor grid in order for the autonomous CNNs to exhibit the continuous evolution of nonlinear waves. Some other notable features of the two-dimensional autonomous CNNs are: The autonomous Van der Pol type CNN can exhibit various kinds of nonlinear waves by changing the characteristic curve of the nonlinear resistor in the CNN cell. Furthermore, if we choose a different step size in the numerical integration, it exhibits a different nonlinear wave. This property is similar to the sensitive dependence on initial conditions of chaos. The autonomous Lotka-Volterra CNN can also exhibit various kinds of nonlinear waves by changing the initial conditions. That is, it can exhibit a different response for each initial condition. Furthermore, we have to choose a passive memristor grid to avoid an overflow in the numerical integration process. Our computer simulations show that the dynamics of the proposed autonomous CNNs are more complex than we expected.
[1902] vixra:1912.0181 [pdf]
Properties of Quadratic Anticommutative Hypercomplex Number Systems
Hypercomplex numbers are, roughly speaking, numbers of the form x_1 + i_1x_2 + … + i_nx_{n+1} such that x_1 + i_1x_2 + … + i_nx_{n+1} = y_1 + i_1y_2 + … + i_ny_{n+1} if and only if x_j = y_j for all j in {1, 2, …, n+1}. I define quadratic anticommutative hypercomplex numbers as hypercomplex numbers x_1 + i_1x_2 + … + i_nx_{n+1} such that i_j^2 = p_j for all j (where p_j is a real number) and i_ji_k = -i_ki_j for all k not equal to j. These numbers have some interesting properties. In particular, in this paper I prove a generalized form of De Moivre's formula for these numbers, and determine certain conditions required for a function on a quadratic anticommutative hypercomplex plane to be analytic, including generalizations of the Cauchy-Riemann equations.
[1903] vixra:1912.0174 [pdf]
Special Value of Riemann Zeta Function and L Function, Approximate Calculation Formula of ζ(N), L(N)
I constructed an approximate formula. When N is small the accuracy is very poor, but as N increases the accuracy improves.
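The abstract does not state the formula itself, so as a purely hypothetical illustration of the claimed behaviour (poor accuracy for small N, rapidly improving as N grows), consider the simplest candidate ζ(N) ≈ 1 + 2^{−N}, whose error is dominated by the dropped 3^{−N} term:

```python
# Direct-summation zeta; the series converges fast enough here for N >= 2.
def zeta(n, terms=100_000):
    return sum(k**-n for k in range(1, terms + 1))

# Error of the hypothetical approximation zeta(N) ≈ 1 + 2^(-N), for growing N.
errors = [abs(zeta(n) - (1 + 2.0**-n)) for n in (2, 4, 10, 20)]
print(errors)  # strictly decreasing: ~0.39, ~0.020, ~1.8e-5, ~2.9e-10
```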
[1904] vixra:1912.0161 [pdf]
Solid Strips Configurations
We introduce the idea of Solid Strip Configurations, which is a way of constructing 3-dimensional compact manifolds alternative to $\Delta$-complexes and CW complexes. The proposed method is just an idea which we believe deserves further formal mathematical investigation.
[1905] vixra:1912.0157 [pdf]
A Proof of the Twin Prime Conjecture
I proved the Twin Prime Conjecture. The probability that (6n-1) is a prime and (6n+1) is also a prime is approximately slightly lower than 4/3 times the square of the probability that a prime will appear. I investigated up to 5$\times10^{12}$. All twin primes are produced in a hexagonal circulation, and this does not change for huge numbers (forever huge numbers). The production of twin primes equals the existence of twin primes. When the numbers grow to the limit, primes are produced only rarely, but since the density of twin primes is slightly lower than 4/3 times the square of the distribution of primes, the frequency of production of twin primes tends to 0. However, it is not 0, because primes continue to be produced; therefore, twin primes continue to be produced. If the twin primes were finite, the primes would be finite, because slightly lower than 4/3 times the square of the probability of primes is the probability of twin primes. This is a contradiction, because there are infinitely many primes.
[1906] vixra:1912.0151 [pdf]
A Proof of Twin Prime Conjecture by 30 Intervals Etc.
If (p, p+2) are twin primes, then (p+30, p+2+30) or (p+60, p+2+60) or (p+90, p+2+90) or (p+120, p+2+120) or (p+150, p+2+150) or (p+180, p+2+180) or (p+210, p+2+210) or (p+240, p+2+240) ... will be twin primes. There are three types of twin primes, whose last digits are (1, 3), (7, 9), or (9, 1). They are lined up at intervals such as 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, etc.; that is, at multiples of 30. Repeat this. The known facts about prime numbers are also taken into account. That is, twin primes exist forever.
[1907] vixra:1912.0143 [pdf]
A Second Note on a Possible Anomaly in the Complex Numbers
The paper gives an additional reason why, initially, there are two different solutions associated with a quadratic equation, which indicates an anomaly in the complex numbers. It is demonstrated that one of the solutions is impossible but plausible \& necessary.
[1908] vixra:1912.0119 [pdf]
Simple Prime Number Determination Method for Natural Numbers Including Carmichael Numbers
An explanation of an effective primality judgment method that works even for Carmichael numbers. This method of judgment does not give a 100% correct answer; care must be taken, especially for n = p^k (p prime) with primitive roots.
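For context, the classical difficulty is that the plain Fermat test is fooled by Carmichael numbers such as 561 = 3·11·17, while the standard Miller–Rabin strong-pseudoprime test (shown here as a generic sketch, not necessarily the author's method) detects them:

```python
# Miller–Rabin strong-pseudoprime test; with bases (2, 3, 5, 7) it is
# deterministic for all n below 3,215,031,751.
def is_probable_prime(n, bases=(2, 3, 5, 7)):
    if n < 2:
        return False
    if n in (2, 3, 5, 7):
        return True
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # base a witnesses that n is composite
    return True

print(pow(2, 560, 561))          # 1: the Fermat test wrongly suggests 561 is prime
print(is_probable_prime(561))    # False: Miller–Rabin detects compositeness
print(is_probable_prime(104729)) # True: 104729 is the 10000th prime
```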
[1909] vixra:1912.0114 [pdf]
Disruptive Gravity (Corrected)
Viewing gravity as a spacetime-bending force instead of just spacetime curvature, we come to the conclusion of rest mass relativity, since it yields equations equivalent to General Relativity. A close analysis of the Schwarzschild metric leads us naturally to the Vacuum Apparent Energy Invariance principle, from which we derive the metric equation. Applying this theory to cosmology, we can explain galaxy redshifts as a delayed gravitational redshift, which explains Hubble diagrams with no need for Dark Energy. This theory has the same predictive power as General Relativity for every local experimental test of the latter, since it is based on a slight modification of the Schwarzschild metric.
[1910] vixra:1912.0104 [pdf]
Why Are Gravitational-Wave Detections so Close to New/Full Moon?
Of the 11 gravitational-wave detections to date, seven occurred within 43 hours of a New/Full Moon or perihelion, and four within the two weeks between the 2017/8/7 and 2017/8/21 eclipses. Why do gravitational waves coming from millions of light years away arrive at Earth so close to these lunar events? The question is investigated in more detail.
[1911] vixra:1912.0100 [pdf]
Mathematics as Information Compression Via the Matching and Unification of Patterns
This paper describes a novel perspective on the foundations of mathematics: how mathematics may be seen to be largely about "information compression (IC) via the matching and unification of patterns" (ICMUP). That is itself a novel approach to IC, couched in terms of non-mathematical primitives, as is necessary in any investigation of the foundations of mathematics. This new perspective on the foundations of mathematics reflects the idea that, as an aid to human thinking, mathematics is likely to be consonant with much evidence for the importance of IC in human learning, perception, and cognition. This perspective on the foundations of mathematics has grown out of a long-term programme of research developing the "SP Theory of Intelligence" and its realisation in the "SP Computer Model", a system in which a generalised version of ICMUP -- the powerful concept of "SP-multiple-alignment" -- plays a central role. The paper shows with an example how mathematics, without any special provision, may achieve compression of information. Then it describes examples showing how variants of ICMUP may be seen in widely-used structures and operations in mathematics. Examples are also given to show how several aspects of the mathematics-related disciplines of logic and computing may be understood as ICMUP. Also discussed is the intimate relation between IC and concepts of probability, with arguments that there are advantages in approaching AI, cognitive science, and concepts of probability via ICMUP. Also discussed is how the close relation between IC and concepts of probability relates to the established view that some parts of mathematics are intrinsically probabilistic, and how that latter view may be reconciled with the all-or-nothing, "exact", forms of calculation or inference that are familiar in mathematics and logic. There are many potential benefits and applications of the mathematics-as-IC perspective.
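The compression-by-pattern-unification idea can be illustrated mechanically: a general-purpose compressor succeeds exactly when there are repeated patterns to match and unify (a generic illustration with standard tools, not the paper's SP Computer Model):

```python
import os
import zlib

repetitive = b"INFORMATION " * 100          # one pattern, repeated 100 times
patternless = os.urandom(len(repetitive))   # random bytes: no patterns to unify

# The repetitive input collapses to a few dozen bytes; the patternless
# input of the same length is essentially incompressible.
print(len(repetitive), len(zlib.compress(repetitive)))
print(len(patternless), len(zlib.compress(patternless)))
```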
[1912] vixra:1912.0095 [pdf]
A Note on Manifolds VS Networks as Mathematics Models in Modern Physics
Some stages of the development of Manifold Theory are inspected, together with how they evolved into the modern discrete frameworks of lattice and spin networks, with help from Topology and Homological Algebra. Experimental evidence that reality is discrete is recalled, notably the quantum Hall effect, including more recent findings of quantum knots and spin-net condensates. Thus Pythagoras, Zeno and Plato were right after all: “Number rules the Universe”, perhaps explaining the “unreasonable effectiveness of mathematics”, but not quite why Quantum Physics’ scattering amplitudes are often Number Theory’s multiple zeta values.
[1913] vixra:1912.0063 [pdf]
A Maximum Entropy Approach to Wave Mechanics
We employ the maximum entropy principle, in the context of statistical inference by impersonal physical interactions, together with the experimental position-momentum uncertainty phenomenon, to construct the general wave mechanical static state of a single, interacting mass particle with no internal degrees of freedom. Subsequently, this first-principles approach allows us to derive via Newtonian mechanics the dynamical equation of motion in the realm of non-relativistic wave mechanics, i.e., the Schrödinger equation.
[1914] vixra:1912.0030 [pdf]
Zeros of the Riemann Zeta Function Within the Critical Strip and Off the Critical Line
In a recent paper, the author demonstrated the existence of real numbers in the neighborhood of infinity. It was shown that the Riemann zeta function has non-trivial zeros in the neighborhood of infinity but none of those zeros lie within the critical strip. While the Riemann hypothesis only asks about non-trivial zeros off the critical line, it is also an open question of interest whether or not there are any zeros off the critical line yet still within the critical strip. In this paper, we show that the Riemann zeta function does have non-trivial zeros of this variety. The method used to prove the main theorem is only the ordinary analysis of holomorphic functions. After giving a brief review of numbers in the neighborhood of infinity, we use Robinson's non-standard analysis and Eulerian infinitesimal analysis to examine the behavior of zeta on an infinitesimal neighborhood of the north pole of the Riemann sphere. After developing the most relevant features via infinitesimal analysis, we will proceed to prove the main result via standard analysis on the Cartesian complex plane without reference to infinitesimals.
[1915] vixra:1912.0021 [pdf]
Electric Field and Divergence Theorem
The divergence theorem states that the surface integral of the flux is equal to the volume integral of the divergence of the flux. This is not true if there is a singularity in the volume integral. One example is the electric field flux described by Coulomb's law. Another example is the gravitational force. Consequently, Gauss's flux theorem is not applicable to the divergence of the electric field.
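The Coulomb case can be checked symbolically: the field E = (x, y, z)/r³ has zero pointwise divergence away from the origin, yet carries flux 4π through any sphere enclosing the singularity; a small sympy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
E = (x / r**3, y / r**3, z / r**3)   # Coulomb field r̂ / r² (unit charge, Gaussian units)

# Pointwise divergence vanishes for r != 0.
div_E = sp.diff(E[0], x) + sp.diff(E[1], y) + sp.diff(E[2], z)
print(sp.simplify(div_E))            # 0

# Flux through the unit sphere: E·n = 1 on r = 1, so flux = surface area = 4*pi.
theta, phi = sp.symbols('theta phi', nonnegative=True)
flux = sp.integrate(sp.integrate(sp.sin(theta), (theta, 0, sp.pi)), (phi, 0, 2*sp.pi))
print(flux)                          # 4*pi
```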
[1916] vixra:1912.0005 [pdf]
Searching for Waves in the Incompressible Navier-Stokes Equations - The Adventure
This article traces a journey of discovery undertaken to search for wave phenomena in the incompressible Navier-Stokes equations: from the early days of my interest in Computational Fluid Dynamics (CFD) used for consulting purposes, through the use of various commercial solvers, eventually leading to research programs at a number of universities, spanning a number of years. It reviews research programs at Dortmund University (Dortmund, Germany), and this author’s post-graduate study at Chulalongkorn University (Bangkok, Thailand). During the latter, it was noticed that flow solutions became unstable when certain combinations of parameters were used, especially when real-life density and viscosity were used. Clarity was needed. This author had a perception that this research could lead to an understanding of the turbulence phenomenon. Tensor calculus was used to understand the macro nature of the NS equations, and to place them firmly into the family of wave equations. My research continued in private for some 15 years, with the occasional presentation of findings at conferences, and an internet blog. In recent months major breakthroughs have been made, and now the evidence for wave phenomena is convincingly demonstrated.
[1917] vixra:1911.0518 [pdf]
Compressive Analysis and the Future for Privacy
Compressive analysis is the name given to the family of techniques that map raw data to a smaller representation. Largely, this includes data compression, data encoding, data encryption, and hashing. In this paper, we analyse the prospects of such technologies in realising customisable individual privacy. We identify the dire need to establish privacy-preserving frameworks and policies, and discuss how individuals can achieve a trade-off between the comfort of an intuitive digital service ensemble and their privacy. We examine the current technologies being implemented, and suggest the crucial advantages of compressive analysis.
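The family named in the abstract shares one signature: raw data mapped to a smaller representation, reversibly (compression) or irreversibly (hashing); a minimal standard-library sketch (the byte string is an illustrative stand-in for raw data):

```python
import hashlib
import zlib

data = b"sensor-reading:42.7;" * 60            # 1200 bytes of raw data

compressed = zlib.compress(data)               # reversible smaller representation
digest = hashlib.sha256(data).digest()         # irreversible fixed-size (32-byte) representation

print(len(data), len(compressed), len(digest))
assert zlib.decompress(compressed) == data     # compression round-trips; hashing does not
```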
[1918] vixra:1911.0479 [pdf]
Policies for Constraining the Behaviour of Coalitions of Agents in the Context of Algebraic Information Theory
This article takes an oblique sidestep from two previous papers, wherein an approach to reformulation of game theory in terms of information theory, topology, as well as a few other notions was indicated. In this document a description is provided as to how one might determine an approach for an agent to choose a policy concerning which actions to take in a game that constrains behaviour of subsidiary agents. It is then demonstrated how these results in algebraic information theory, together with previous investigations in geometric and topological information theory, can be unified into a single cohesive framework.
[1919] vixra:1911.0477 [pdf]
Evidence that x^2+y^3=1 and Others Have No Solution in Q>0
Due to Gödel's Incompleteness Theorems one can say that some true conjectures do not have valid proofs. One could think this also about my conjectures below, but I was lucky to find evidence for them.
[1920] vixra:1911.0453 [pdf]
Existence and Continuous Dependence for Fractional Neutral Functional Differential Equations
In this paper, we investigate the existence, uniqueness and continuous dependence of solutions of fractional neutral functional differential equations with infinite delay and the Caputo fractional derivative, by means of Banach's contraction principle and Schauder's fixed point theorem.
[1921] vixra:1911.0451 [pdf]
Counting Partitions
In this lecture we count the number of integer partitions P(n) using an elementary algorithm based on the combinatorics of trees, coded using free apps on the iPad.
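The count P(n) from the lecture can be reproduced with an equally elementary dynamic program (a standard textbook recurrence, not the lecture's tree-based coding):

```python
# P[total] accumulates the number of partitions of `total` using parts
# of size at most `part`; sweeping `part` upward yields all of P(0..n_max).
def partition_counts(n_max):
    P = [1] + [0] * n_max                       # P[0] = 1: the empty partition
    for part in range(1, n_max + 1):
        for total in range(part, n_max + 1):
            P[total] += P[total - part]
    return P

P = partition_counts(100)
print(P[5], P[10], P[100])   # 7 42 190569292
```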
[1922] vixra:1911.0437 [pdf]
Impossibility of Gravitons and bi-Metric Gravity; Riemann Hypothesis Confirmed; Energy Localization Problem Solved; the Falsifiability of Science is Demonstrated
A paper ``in trend''~\cite{Meissner} also talks about gravitons (at least the word ``gravitino'' is in its abstract). Gravitons are transmitters of the gravitational force, but there is no force of gravity in General Relativity. And how could there be in any adequate theory, if a freely falling body feels no dragging force (only weightlessness)? So the paper just adds to the general misunderstanding. The latter is positioned~\cite{drive} as the driving engine of science (like radioactive mutations in biology), so the question arises: how many papers are a bit wrong?
[1923] vixra:1911.0425 [pdf]
Dark Matter and Dark Energy Explained by Fix to Vanishing of Falling Matter
Motion is considered in the Kerr-Newman, Kerr, and Reissner-Nordström spacetimes. As an example, in the Kerr spacetime, if you release an electrically neutral test particle from a rest state (from any position outside the black hole, but not in the equatorial plane), it will come to an abrupt end away from the point of spacetime singularity. As a solution to this problem, Dark Matter is used.
[1924] vixra:1911.0418 [pdf]
Friendly Smart Crop Observation System
This paper seeks to propose a monitoring/sensing device as a preliminary prototype to alert farmers or cultivators with crucial information and warnings against critical levels of soil moisture, air temperature and humidity in the crop's vicinity. In it, IoT, data analysis and ML techniques are applied in the design of the said prototype, which will be further utilized to evolve the continuously gathered data into meaningful forecasts of what constitutes a healthy crop and other actionable information. As a result, more meaningful measures can be taken to ensure the safety of the crop based on gradually enhanced and improved data sets, which may help increase the standards of practice in the evolving agricultural industry in the long run.
[1925] vixra:1911.0417 [pdf]
In New Mathematics, Riemann Hypothesis is Mistake
In classical mathematics there is a perfect zero.\\ But in new mathematics there is no perfect zero. At the same time, there is no perfect 1/2 in new mathematics.\\ Hence, the Riemann hypothesis is false.\\ In new mathematics, there is no perfect 1 or 2.\\ They are numbers as close as possible to 1 or 2, and not 1 or 2.\\ I think we should break away from classical mathematics and think about new mathematics.\\ These things can be said from quantum mechanics.\\ New mathematics does not have a perfect zero, 1/2, 1, 2, and so on.\\ There are only numbers close to zero, 1/2, 1, and 2.\\ 1/2 is 0.499999999..... or 0.5000000000.....\\ A perfect 1/2 cannot exist.\\
[1926] vixra:1911.0406 [pdf]
Improved Methodology for Computing Coastal Upwelling Indices from Satellite Data
The article discusses an improved methodology for determining coastal upwelling indices from satellite maps of sea surface temperature and near-surface wind. The main difference of this technique is the determination of upwelling parameters by monthly climatic masks. The algorithm for choosing the monthly climatic masks is considered in detail. The choice of a region in the open sea, remote from the upwelling waters, is substantiated for calculating the thermal upwelling index and its modifications. In addition to the generally accepted upwelling indices (thermal and Ekman), new indices are introduced: cumulative and apparent upwelling power, which allow the upwelling surface area to be taken into account. The technique is illustrated on the example of the Canary Upwelling. It allows the boundaries of upwelling to be determined in each climatic month and, therefore, its indices and the environmental parameters of the upwelling region (surface wind, sea level, geostrophic current, etc.) to be calculated more accurately.
[1927] vixra:1911.0379 [pdf]
Zero is only a Mathematical Fantasy
Mathematics returns to Ancient Times.\\ Perfect Zero cannot exist.\\ In physics, there are many particles in a vacuum.\\ 0 is not perfect zero.\\ 0 is almost zero.\\ Zero is only a mathematical fantasy.\\ There is no Zero.\\ 0 may be a return to the womb.\\ And, love is 0 and infinite.\\
[1928] vixra:1911.0358 [pdf]
The Magnetic Moment of the Lee Particle
The Lee model of the unstable particle V decaying to N + Θ, where the N-particle is considered charged and the Θ-particle uncharged, is inserted into an electromagnetic field. While the Θ-particle propagates undisturbed, the N-particle is deflected by the extended photon source. The result of this process is an additional magnetic moment of the Lee particle. The Schwinger source theory is employed to present the calculation of the magnetic moment for the Lee model of the unstable particle.
[1929] vixra:1911.0353 [pdf]
Intrinsic Vector Potential and Electromagnetic Mass
Electric charges may have mass, in part or in full, because they are charged. The explanation here avoids charge-distribution models by associating the charge's mass with intrinsic quantum mechanical quantities, similar to the way spin angular momentum dispenses with mechanical models. Inhomogeneous Lorentz (i.e., Poincaré) dual-fermion 8-spinor fields are needed. Poincaré fields have a probability current that acts as an intrinsic vector potential. The potential obeys a Maxwell-like equation which identifies the charged source. Intrinsic gauge freedom allows the chosen intrinsic gauge to provide the charged source with mass, which is, therefore, `electromagnetic mass'. One of the two fermions obeys the Dirac equation for a massless, chargeless particle, while the other is charged and massive. These conventional equations describe neutrinos and electrons, and similar lepton pairs, with well-known accuracy.
[1930] vixra:1911.0340 [pdf]
Logic About Forming Hydrogen Atom from Higgs Bosons
New theories are introduced in this paper. With these theories, the author deduces the formation process of electrons and protons in the Higgs field\cite{higgs1}, analyzes many relations, and finally obtains the theoretical values. By comparison, all experimental data \cite{codata2014} are precisely consistent with the theoretical values. This confirms that the hydrogen atom is a coordinated whole with the strict logic described in this paper. Email: eastear@outlook.com eastear@163.com
[1931] vixra:1911.0316 [pdf]
The Prime Counting Function and the Sum of Prime Numbers
In this paper it is proved that the sum of consecutive prime numbers up to the square root of a given natural number is asymptotically equivalent to the prime counting function. Some solutions are also found for which both quantities are equal. Finally, the prime numbers at which both are equal are listed, and some conjectures regarding this type of prime number are stated.
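The claimed asymptotic equivalence is easy to probe numerically. A minimal sketch (not the paper's own computation): sieve up to n, then compare pi(n) with the sum of primes not exceeding sqrt(n).

```python
# Illustrative check: compare pi(n), the prime-counting function, with the
# sum of primes up to sqrt(n), using a simple Eratosthenes sieve.
def sieve(limit):
    """Return a boolean list is_prime[0..limit]."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return is_prime

def compare(n):
    is_prime = sieve(n)
    pi_n = sum(is_prime)                                   # pi(n)
    root = int(n ** 0.5)
    s = sum(p for p in range(2, root + 1) if is_prime[p])  # sum of primes <= sqrt(n)
    return pi_n, s

for n in (10 ** 4, 10 ** 6):
    pi_n, s = compare(n)
    print(n, pi_n, s, s / pi_n)
```

The ratio creeps toward 1 as n grows, consistent with the stated equivalence (both quantities behave like n/ln n).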
[1932] vixra:1911.0311 [pdf]
A Note on a Possible Anomaly in the Complex Numbers
In the present paper a conflict in basic complex number theory is reported. The ingredients of the analysis are Euler's identity and De Moivre's rule for n=2. The outcome is that a quadratic equation has only a single solution, because one of the existing solutions gives rise to an impossibility.
[1933] vixra:1911.0302 [pdf]
A New Hamiltonian Model of the Fibonacci Quasicrystal Using Non-Local Interactions: Simulations and Spectral Analysis
This article presents a novel Hamiltonian architecture based on vertex types and empires for demonstrating the emergence of aperiodic order (quasicrystal growth) in one dimension by a suitable prescription for breaking translation symmetry. At the outset, the paper presents different algorithmic, geometrical, and algebraic methods of constructing empires of vertex configurations of a given quasi-lattice. These empires have non-local scope and form the building blocks of the new lattice model. This model is tested via Monte Carlo simulations beginning with N randomly arranged tiles. The simulations clearly establish the Fibonacci configuration, which is a one-dimensional quasicrystal of length N, as the final relaxed state of the system. The Hamiltonian is promoted to a matrix operator form by performing dyadic tensor products of pairs of interacting empire vectors followed by a summation over all permissible configurations. A spectral analysis of the Hamiltonian matrix is performed and a theoretical method is presented to find the exact solution of the attractor configuration that is given by the Fibonacci chain, as predicted by the simulations. Finally, a precise theoretical explanation is provided which shows that the Fibonacci chain is the most probable ground state. The proposed Hamiltonian is a one-dimensional model of quasicrystal growth.
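The relaxed state the simulations converge to can be previewed with a short sketch (this is not the paper's Monte Carlo code): the Fibonacci chain generated by the standard substitution rule L → LS, S → L on two tile types, whose tile counts follow consecutive Fibonacci numbers.

```python
# Generate the Fibonacci chain, the 1D quasicrystal named in the abstract,
# by repeated substitution: long tile L -> LS, short tile S -> L.
def fibonacci_chain(iterations):
    word = "L"
    for _ in range(iterations):
        word = "".join("LS" if c == "L" else "L" for c in word)
    return word

w = fibonacci_chain(8)
# Total length and tile counts are consecutive Fibonacci numbers,
# so the L:S ratio tends to the golden ratio.
print(len(w), w.count("L"), w.count("S"))
```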
[1934] vixra:1911.0287 [pdf]
Recurring Pairs of Consecutive Entries in the Number-of-Divisors Function
The Number-of-Divisors Function tau(n) is the number of divisors of a positive integer n, including 1 and n itself. Searching for pairs of the format (tau(n), tau(n+1)), some pairs appear (very) often, some never and some --- like (1,2), (4,9), or (10,3) --- exactly once. The manuscript provides proofs for 46 pairs to appear exactly once and lists 12 pairs that conjecturally appear only once. It documents a snapshot of a community effort to verify sequence A161460 of the Online Encyclopedia of Integer Sequences that started ten years ago.
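A small sketch of the search the abstract describes (hypothetical illustration, not the community's verification code): tabulate how often each consecutive pair (tau(n), tau(n+1)) occurs up to a bound.

```python
# Count occurrences of pairs (tau(n), tau(n+1)) for n up to a limit.
from collections import Counter

def tau(n):
    """Number of divisors of n, including 1 and n."""
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i * i == n else 2
        i += 1
    return count

def pair_counts(limit):
    c = Counter()
    prev = tau(1)
    for n in range(2, limit + 1):
        cur = tau(n)
        c[(prev, cur)] += 1
        prev = cur
    return c

c = pair_counts(10000)
# (1,2) occurs only at n=1 (tau(1)=1 happens nowhere else), and (2,2) only
# at n=2, the sole pair of consecutive primes; other pairs are frequent.
print(c[(1, 2)], c[(2, 2)], c[(2, 4)])
```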
[1935] vixra:1911.0275 [pdf]
Exponential Factorization and Polar Decomposition of Multivectors in $Cl(p,q)$, $p+q \leq 3$
In this paper we consider general multivector elements of Clifford algebras $Cl(p,q)$, $n=p+q \leq 3$, and study multivector equivalents of polar decompositions and factorization into products of exponentials, where the exponents are frequently blades of grades zero (scalar) to $n$ (pseudoscalar).
[1936] vixra:1911.0231 [pdf]
Multiplication by Zero Calculus, Addition by Zero Calculus, and Subtraction by Zero Calculus
In physics, there are many particles in a vacuum.\\ Perfect zero cannot exist.\\ 0 is not perfect zero.\\ 0 is almost zero.\\ Perfect zero is only a mathematical fantasy.\\ $a\times0\approx0$, but $a\times0\neq0$.\\ $a\times0\times0\times0\times0\times0<a\times0\times0\times0\times0<a\times0\times0\times0<a\times0\times0<a\times0<a$.\\ $a-0-0-0<a-0-0<a-0<a<a+0<a+0+0<a+0+0+0$.\\
[1937] vixra:1911.0206 [pdf]
Finite-Time Lyapunov Exponents in the Instantaneous Limit and Material Transport
Lagrangian techniques, such as the finite-time Lyapunov exponent (FTLE) and hyperbolic Lagrangian coherent structures, have become popular tools for analyzing unsteady fluid flows. These techniques identify regions where particles transported by a flow will converge to and diverge from over a finite-time interval, even in a divergence-free flow. Lagrangian analyses, however, are time consuming and computationally expensive, hence unsuitable for quickly assessing short-term material transport. A recently developed method, objective Eulerian coherent structures (OECSs), rigorously connected Eulerian quantities to short-term Lagrangian transport. This Eulerian method is faster and less expensive to compute than its Lagrangian counterparts, and needs only a single snapshot of a velocity field. Along the same lines, here we define the instantaneous Lyapunov exponent (iLE), the instantaneous counterpart of the FTLE, and connect the Taylor series expansion of the right Cauchy-Green deformation tensor to the infinitesimal-integration-time limit of the FTLE. We illustrate our results on geophysical fluid flows from numerical models as well as analytical flows, and demonstrate the efficacy of attracting and repelling instantaneous Lyapunov exponent structures in predicting short-term material transport.
[1938] vixra:1911.0180 [pdf]
Prime Sextuplet Conjecture
Prime Sextuplets and Twin Primes have exactly the same dynamics. All Prime Sextuplets are produced by hexagonal circulation. This does not change at huge numbers (forever, at any huge number). In the hexagon, Prime Sextuplets are generated only at (6n-1)(6n+5), where n is a positive integer. As numbers grow to the limit, the denominator of the expression becomes very large and primes occur very rarely; since Prime Sextuplets occur at 48/3 times the sixth power of the distribution of primes, their frequency of occurrence approaches 0. However, it is not 0. Therefore, Prime Sextuplets continue to be generated. If Prime Sextuplets were finite, the Primes would be finite, because the probability of a Prime Sextuplet is 48/3 times the sixth power of the probability of appearance of a Prime. This is contradictory, because there are infinitely many Primes. That is, Prime Sextuplets exist forever.
[1939] vixra:1911.0179 [pdf]
Prime Quintuplet Conjecture
Prime Quintuplets and Twin Primes have exactly the same dynamics. All Prime Quintuplets are produced by hexagonal circulation. This does not change at huge numbers (forever, at any huge number). In the hexagon, Prime Quintuplets are generated only at (6n-1)(6n+5), where n is a positive integer. As numbers grow to the limit, the denominator of the expression becomes very large and primes occur very rarely; since Prime Quintuplets occur at 96/3 times the fifth power of the distribution of primes, their frequency of occurrence approaches 0. However, it is not 0. Therefore, Prime Quintuplets continue to be generated. If Prime Quintuplets were finite, the Primes would be finite, because the probability of a Prime Quintuplet is 96/3 times the fifth power of the probability of appearance of a Prime. This is contradictory, because there are infinitely many Primes. That is, Prime Quintuplets exist forever.
[1940] vixra:1911.0177 [pdf]
Sexy Primes Conjecture
The Sexy Primes Conjecture is proved. Sexy Primes, Twin Primes, and Cousin Primes have exactly the same dynamics. All Primes are produced by hexagonal circulation. This does not change at huge numbers (forever, at any huge number). In the hexagon, Sexy Primes are generated only at (6n+1)(6n-1), where n is a positive integer. As numbers grow to the limit, the denominator of the expression becomes very large and primes occur very rarely; since Sexy Primes occur at 8/3 times the square of the distribution of primes, their frequency of occurrence approaches 0. However, it is not 0. Therefore, Sexy Primes continue to be generated. If Sexy Primes were finite, the Primes would be finite, because Sexy Primes occur at 8/3 times the square of the distribution of primes. This is contradictory, since there are infinitely many Primes. That is, Sexy Primes exist forever.
[1941] vixra:1911.0173 [pdf]
Energy Density of a Vacuum Observed by Background Radiation
In the paper zero-point energy density of free photons is estimated for an empty space surrounded by — and observed by — a bath of thermal background photons. Interpreting the results, the outline of the cosmological arrow of time is suggested.
[1942] vixra:1911.0168 [pdf]
Physics Mathematical Approximations
There are many ad hoc expressions for the mass ratio of the proton to the electron. The models presented here differ from others in that they rely strictly on volumes and areas. One geometry is based on ellipsoids constructed with values taken from one of two number sets: {(4pi), (4pi-1/pi), (4pi-2/pi)} or {(4pi+2), (4pi-2), (4pi-2/pi)}. The product of the three values of each number set approximates the CODATA value for the mass ratio of the proton to the electron. Another approximation is formed from a solid ball of radius r = (4pi-1/pi) with a conical sector, wedge, or internal ellipsoid removed; each extracted solid has a curved surface area of (4pi-1/pi)/(pi^2). With the advent of the Higgs boson, its value can be approximated by H^0 = (4pi)(4pi-1/pi)(4pi-2/pi)(4pi-3/pi)(4pi-4/pi). Define the function F as follows: let the initial set be the positive integers, the final set be the real numbers, and the rule assigning each member of the initial set to one member of the final set be F(m) = (4pi)...(4pi-(m-1)/pi). Conclusion: the function F(2) = 1836.15... approximates the experimental value of the mass ratio of the proton to the electron, and F(4) approximates the mass ratio of the Higgs boson to the electron. The neutron-to-electron ratio is approximated by ln(4pi)+F(2). Email: harry.watson@att.net
[1943] vixra:1911.0156 [pdf]
Nonconvex Stochastic Nested Optimization via Stochastic ADMM
We consider the stochastic nested composition optimization problem, where the objective is a composition of two expected-value functions. We propose a stochastic ADMM to solve this complicated objective. In order to find an $\epsilon$-stationary point, where the expected norm of the subgradient of the corresponding augmented Lagrangian is smaller than $\epsilon$, the total sample complexity of our method is $\mathcal{O}(\epsilon^{-3})$ for the online case and $\mathcal{O}\bigl((2N_1 + N_2) + (2N_1 + N_2)^{1/2}\epsilon^{-2}\bigr)$ for the finite-sum case. The computational complexity is consistent with the proximal version proposed in \cite{zhang2019multi}, but our algorithm can solve more general problems when the proximal mapping of the penalty is not easy to compute.
[1944] vixra:1911.0144 [pdf]
Prime Quadruplet Conjecture
Prime Quadruplets and Twin Primes have exactly the same dynamics. All Prime Quadruplets are produced by hexagonal circulation. This does not change at huge numbers (forever, at any huge number). In the hexagon, Prime Quadruplets are generated only at (6n-1)(6n+5), where n is a positive integer. As numbers grow to the limit, the denominator of the expression becomes very large and primes occur very rarely; since Prime Quadruplets occur at 16/3 times the fourth power of the distribution of primes, their frequency of occurrence approaches 0. However, it is not 0. Therefore, Prime Quadruplets continue to be generated. If Prime Quadruplets were finite, the Primes would be finite, because the probability of a Prime Quadruplet is 16/3 times the fourth power of the probability of appearance of a Prime. This is contradictory, because there are infinitely many Primes. That is, Prime Quadruplets exist forever.
[1945] vixra:1911.0127 [pdf]
Robust Quaternion Estimation with Geometric Algebra
Robust methods for finding the best rotation aligning two sets of corresponding vectors are usually formulated in the linear algebra framework, using tools like the SVD for polar decomposition or QR for finding eigenvectors. These are well-established numerical algorithms, which on the other hand are iterative and computationally expensive. Recently, closed-form solutions have been proposed in the quaternion framework; those methods are fast, but they have singularities, i.e., they completely fail on certain input data. In this paper we propose a robust attitude estimator based on a formulation of the problem in Geometric Algebra. We find the optimal eigen-quaternion in closed form with high accuracy and with performance competitive with the fastest methods reported in the literature.
[1946] vixra:1911.0120 [pdf]
The Absolute Smallest Possible Money Unit! When Money Crashes into the Laws of Physics
In this paper, we demonstrate that there is an absolute physical limit on how small the smallest money unit can be, no matter how much we improve our technology. The smallest money unit appears to be directly linked to the smallest possible energy unit needed to store one bit. If the smallest money unit is smaller than the energy cost of storing one bit, then there seems to be an arbitrage, which will also constrain money producers such as central banks from issuing money with a smaller denomination than this minimum money unit. Keywords: money units, money creation, arbitrage, information theory, Landauer limit, the Planck constant.
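A back-of-the-envelope sketch of the idea (the temperature and electricity price are assumptions, not values from the paper): the Landauer limit gives the minimum energy to erase one bit, E = kT ln 2, and pricing that energy bounds the smallest meaningful money unit.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # assumed room temperature, K

# Landauer limit: minimum energy dissipated to erase one bit of information.
energy_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T} K: {energy_per_bit:.3e} J per bit")

# Assumed electricity price of 0.1 USD per kWh; the resulting cost floor per
# bit is a (tiny) candidate for the smallest physically meaningful money unit.
usd_per_joule = 0.1 / 3.6e6
print(f"Cost floor per bit: {energy_per_bit * usd_per_joule:.3e} USD")
```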
[1947] vixra:1911.0115 [pdf]
General Order Differentials and Division by Zero Calculus
In this paper, we give several examples in which, for general order-$n$ derivatives of functions, division by zero appears, and by applying the division by zero calculus we can find good formulas for $n=0$. This viewpoint is new and curious at this moment for the general situation. Therefore, as prototype examples, we would like to discuss this property. Why does division by zero happen in the zero-order case of some general differential-order representations of functions?
[1948] vixra:1911.0083 [pdf]
Cousin Primes Conjecture
The Cousin Primes Conjecture was investigated using WolframAlpha and Wolfram Cloud from the beginning this time, as in the case of the twin primes we treated the other day. Cousin Primes and Twin Primes have exactly the same dynamics. All Cousin Primes are produced by hexagonal circulation. This does not change at huge numbers (forever, at any huge number). In the hexagon, Cousin Primes are generated only at (6n+1)(6n+5), where n is a positive integer. As numbers grow to the limit, the denominator of the expression becomes very large and primes occur very rarely; since Cousin Primes occur at 4/3 times the square of the distribution of primes, their frequency of occurrence approaches 0. However, it is not 0. Therefore, Cousin Primes continue to be generated. That is, Cousin Primes exist forever.
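An illustrative enumeration (not the authors' WolframAlpha computation): cousin primes are pairs (p, p+4) with both members prime, and a sieve makes them easy to list.

```python
# List cousin prime pairs (p, p+4) with p+4 <= limit.
def cousin_primes(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [(p, p + 4) for p in range(2, limit - 3)
            if is_prime[p] and is_prime[p + 4]]

pairs = cousin_primes(100)
print(len(pairs), pairs)
```

Note that every pair past (3, 7) has p ≡ 1 (mod 6), matching the hexagonal-residue framing in the abstract.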
[1949] vixra:1911.0069 [pdf]
Mathematics Behind the Standard Model
In this thesis, I go through the derivation of the equations of motion for some free particles using symmetry, and also through the mathematics underlying spontaneous symmetry breaking and the Higgs mechanism in restoring the missing mass in interactions between particles, both of which show the self-consistency of the Standard Model.
[1950] vixra:1911.0007 [pdf]
Ulipristal Acetate Determination Using MBTH
A simple visible spectrophotometric method is proposed for the determination of ulipristal acetate present in bulk and tablet formulation. The proposed method is based on the oxidation of MBTH by ferric ions to form an active coupling species (electrophile), followed by its coupling with ulipristal in acidic medium to form an intensely green colored chromophore with an absorption maximum at 609 nm. The method was validated as per the current ICH guidelines. Beer's law was obeyed in the concentration range of 6.25-37.50 g mL⁻¹ with a high regression coefficient (r > 0.999). The reproducibility, accuracy, and precision of the method are evident from the low R.S.D. values. This method can be used in quality control laboratories for routine analysis of ulipristal acetate in bulk drug and pharmaceutical dosage forms.
[1951] vixra:1911.0002 [pdf]
In the Twin Prime Conjecture, the Constant 4/3
I proved the Twin Prime Conjecture. However, a new mystery involving the constant 4/3 arose. I have studied this in various ways, but I do not know its origin.
[1952] vixra:1910.0630 [pdf]
Fusion Reactor with Electrodynamic Stabilization
The magnetic confinement of thermonuclear plasma can be significantly improved by using the reaction of an electrical conductive wall in combination with the AC driving of the plasma current. In this way the magnetic fields can be confined in some well defined space and with it the plasma itself. Also many plasma instabilities are suppressed or reduced.
[1953] vixra:1910.0624 [pdf]
Gender Issues in Fundamental Physics: a Bibliometric Analysis
We analyse bibliometric data about fundamental physics world-wide from 1970 to now extracting quantitative data about gender issues. We do not find significant gender differences in hiring rates, hiring timing, career gaps and slowdowns, abandonment rates, citation and self-citation patterns. Furthermore, various bibliometric indicators (number of fractionally-counted papers, citations, etc) exhibit a productivity gap at hiring moments, at career level, and without integrating over careers. The gap persists after accounting for confounding factors and manifests as an increasing fraction of male authors going from average to top authors in terms of bibliometric indices, with a quantitative shape that can be fitted by higher male variability.
[1954] vixra:1910.0613 [pdf]
The Two Couriers Problem
The Two Couriers Problem is an algebra problem originally stated in 1746 by the French mathematician Clairaut. For over a century, the Two Couriers Problem has been re-used in various forms as a mathematical problem, in textbooks and journals, by different mathematicians and authors. It involves cases where division by zero arises in practice, each with a real-world, actual result for the solution. Thus the Two Couriers Problem is a centuries-old algebra problem with applied results that involve division by zero, making it an excellent problem for evaluating different methods of dividing by zero. Division by zero has many different mathematical approaches. Conventional mathematics handles division by zero as an indeterminate or undefined result. Transmathematics defines division by zero as either nullity or explicitly positive or negative infinity. Two other approaches are by Saitoh, who defines division by zero simply as zero, and Barukčić, who defines division by zero as either unity or positive or negative infinity. The question is: which approach is best for solving the mathematical problem of division by zero? The paramount goal of this paper is to use the Two Couriers Problem as an objective test to examine and evaluate mathematical approaches to division by zero, and find which one is best.
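A hedged sketch of the setup (this particular formulation is assumed, not quoted from the paper): if courier B starts a distance d ahead of courier A on the same road, the meeting time is t = d / (v1 - v2), and equal speeds force a division by zero whose real-world meaning depends on d.

```python
# Meeting time of two couriers: A chases B, who starts d units ahead.
def meeting_time(d, v1, v2):
    if v1 == v2:
        # The algebraic formula divides by zero here; the physical answer
        # depends on whether there was a gap to close in the first place.
        return "always together" if d == 0 else "never meet"
    return d / (v1 - v2)

print(meeting_time(30, 8, 5))   # A gains 3 units/hour on a 30-unit gap
print(meeting_time(30, 5, 5))   # equal speeds, the gap never closes
print(meeting_time(0, 5, 5))    # equal speeds, no gap to begin with
```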
[1955] vixra:1910.0578 [pdf]
Unsupervised Decomposition of Multi-Author Document
This paper proposes an improvement over an earlier paper [A generic unsupervised method for decomposing multi-author documents, N. Akiva and M. Koppel, 2013]. We work on two aspects. In the first, we try to capture the writing style of an author with n-gram models over words and POS tags and a PQ-Gram model over syntactic parses, instead of the basic unigram model used previously. In the second, we add layers of refinement to the existing baseline model and introduce a new term, "similarity index", to distinguish between pure and mixed segments before unsupervised labeling. The similarity index uses overall and sudden changes of writing style, measured by the PQ-Gram model and the word n-gram model between lexicalised/unlexicalised sentences in segments, for refinement. In this paper, we investigate the role of feature selection in capturing the syntactic patterns specific to an author and its overall effect on the final accuracy of the baseline system. More specifically, we insert a layer of refinement into the baseline system and define a threshold, based on the similarity measure among sentences, to assess the purity of the segments to be given as input to the GMM. The key idea of our approach is to provide the GMM clustering with the "good segments" so that the clustering precision is maximised; the resulting clusters are then used as labels to train a classifier. We also try different feature sets, like bigrams and trigrams of POS tags and a PQ-Gram-based feature on unlexicalised PCFGs, to capture distinct writing styles; these are given as input to a GMM trained by an iterative EM algorithm to generate good clusters of the segments of the merged document.
[1956] vixra:1910.0568 [pdf]
Sentiment Classification Over Brazilian Supreme Court Decisions Using Multi-Channel CNN
Sentiment analysis seeks to identify the viewpoint(s) underlying a text document. In this paper, we present the use of a multichannel convolutional neural network which, in effect, creates a model that reads text with different n-gram sizes, to predict with good accuracy the sentiments behind the decisions issued by the Brazilian Supreme Court. Even with a very imbalanced dataset, we show that a simple multichannel CNN, with little to zero hyperparameter tuning and word vectors tuned during network training, achieves excellent results on the Brazilian Supreme Court data. We report results of 97% accuracy and 84% average F1-score in predicting multiclass sentiment dimensions. We also compare the results with classical machine learning classification models like Naive Bayes and SVM.
[1957] vixra:1910.0567 [pdf]
Proof of the Riemann Hypothesis [final Edition]
Up to now, I have tried to expand this equation and prove the Riemann hypothesis with equations in cosine and sine, but the proof was impossible. However, I realized that a simple formula before expansion can prove it. The real value is zero only when the real part of s is 1/2. Non-trivial zeros must always have a real value of zero. The real part of s being 1/2 is the minimum requirement for s to be a non-trivial zero.
[1958] vixra:1910.0561 [pdf]
An Inconsistency in Modern Physics and a Simple Solution
In this paper, we will point out an important inconsistency in modern physics. When relativistic momentum and relativistic energy are combined with key concepts around Planck momentum and Planck energy, we find an inconsistency that has not been shown before. The inconsistency seems to be rooted in the fact that momentum, as defined today, is linked to the de Broglie wavelength. By rewriting the momentum equation in the form of the Compton wavelength instead, we get a consistent theory. This has a series of implications for physics and cosmology.
[1959] vixra:1910.0554 [pdf]
Electronic Data Transmission with Three Times the Speed of Light and Data Rates of 2000 Bits Per Second Over Long Distances in Buffer Amplifier Chains
Recently, during the experimental testing of basic assumptions in electrical engineering, it became apparent that ultra-low-frequency (ULF) voltage signals in coaxial cables with a length of only a few hundred meters propagate significantly faster than light. The starting point for this discovery was an experiment in which one channel of a two-channel oscilloscope is connected to a signal source via a short coaxial cable and the second input to the same signal source via a long coaxial cable. It was observed that the delay between the two channels can, for short cables and low frequencies, be so small that the associated phase velocity exceeds the speed of light. In order to test whether the discovered effect can be exploited to transmit information over long distances, a cable was examined in which the signal is refreshed at regular distances by buffer amplifiers. The result was that such a setup is indeed suitable for transmitting wave packets at three times the speed of light and bit rates of about 2 kbit/s over arbitrary distances. The statement that information cannot propagate faster than light is therewith clearly experimentally disproved and can therefore no longer be sustained.
[1960] vixra:1910.0551 [pdf]
Using Decimals to Prove Zeta(n >= 2) is Irrational
In a strange and ironic twist, an open number theory problem, showing that Zeta(n) is irrational for natural numbers greater than or equal to 2, is solved with the easiest of number theory concepts: the rules for representing fractions with decimals.
[1961] vixra:1910.0550 [pdf]
Representation of Momentum
The inelastic collision between two identical particles shows that the ratio of the momentum to the mass includes an extra term in addition to the velocity. An extra function, independent of the speed of the particle, is part of the momentum. This function can be determined empirically from the parameters of the Large Hadron Collider at CERN.
[1962] vixra:1910.0534 [pdf]
The Travelling Salesman Problem (TSP)
The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science.
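The statement above can be made concrete with a minimal brute-force sketch on a toy instance (the distance matrix below is an assumption for illustration): try every ordering of the cities and keep the cheapest closed tour. This is only feasible for small n, which is exactly the NP-hardness the abstract mentions.

```python
# Exhaustive TSP: O(n!) enumeration of all tours starting from city 0.
from itertools import permutations

def tsp_brute_force(dist):
    """dist[i][j] = distance between cities i and j; returns (cost, tour)."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Hypothetical 4-city instance.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(tsp_brute_force(dist))
```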
[1963] vixra:1910.0532 [pdf]
Preprocessing Quaternion Data in Quaternion Spaces Using the Quaternion Domain Fourier Transform
Recently a new type of hypercomplex Fourier transform has been suggested. It transforms quaternion-valued signals (for example electromagnetic scalar-vector potentials, color data, space-time data, etc.) defined over a quaternion domain (space-time or other 4D domains) from a quaternion "position" space to a quaternion "frequency" space. The quaternion domain Fourier transform (QDFT) therefore uses the full potential provided by hypercomplex algebra in higher dimensions, such as 3D and 4D transformation covariance. The QDFT is explained together with its main properties relevant for applications such as quaternionic data preprocessing.
[1964] vixra:1910.0514 [pdf]
Review Highlights: Opinion Mining on Reviews: a Hybrid Model for Rule Selection in Aspect Extraction
This paper proposes a methodology to extract key insights from user-generated reviews. The work is based on Aspect Based Sentiment Analysis (ABSA), which predicts the sentiment of aspects mentioned in text documents. The extracted aspects are fine-grained for the presentation form known as Review Highlights. The syntactic approach to the extraction process suffers from overlapping chunking rules, which result in noisy extractions. We introduce a hybrid technique which combines a machine learning and a rule-based model. A multi-label classifier identifies the effective rules which efficiently parse aspects and opinions from texts. This selection of rules reduces the amount of noise in extraction tasks. This is a novel attempt to learn syntactic rule fitness from a corpus using machine learning for accurate aspect extraction. As the model learns syntactic rule prediction from the corpus, the extraction method becomes domain-independent. It also allows studying the quality of syntactic rules in a different corpus.
[1965] vixra:1910.0494 [pdf]
The Proof of Goldbach’s Conjecture
Since the correspondence between AS(+) and AS(×) is a bijective function, we use an improved theorem of asymptotic density to prove that there exists a product of two odd primes in any AS(×). At the same time, in any AS(+), the sum of two odd primes can be obtained.
[1966] vixra:1910.0477 [pdf]
Remainder Theorem and the Division by Zero Calculus
In this short note, for the elementary remainder theorem for polynomials, we recall that the division by zero calculus appears naturally, in order to show the importance of the division by zero calculus.
[1967] vixra:1910.0474 [pdf]
Wave Pulse Theory Of Light (WPTOL) Based On Bohr Model
The Bohr model is an alternative theory of light to Maxwell's theory. It is extended to a wave pulse theory of light, WPTOL. It is a classical aether theory. When an orbital electron of an atom makes a quantum jump to a lower energy state, it emits a single polarization wave pulse of one wavelength in the aether. There is no neutron in the nucleus of the atom in the `Simple Unified Theory', SUT; the neutron is replaced with a proton and a nuclear electron. As the nucleus of the atom has only protons and electrons, WPTOL covers the emission of gamma-rays originating from the nucleus of the atom. The binding energy within the nucleus is also the Coulomb electric force; there is no strong force. All radiation consists of single-wavelength wave pulses, and the wave pulses are all separate wave entities; there is no `train of light waves' in WPTOL. The train of light waves as found in Maxwell's theory has no physical basis. A light wave pulse has energy E=hν and momentum P=E/c, the same energy and momentum relations as the relativistic photon. It is this single wave pulse of one wavelength that gives the illusion of light as a particle. WPTOL eliminates the `wave-particle duality' hypothesis for light. The theoretical value of the light speed in the aether is c=m_e e⁴/(8ε₀²h³R), R being the Rydberg constant. Newton's first law is extended to a light wave pulse in the aether, which explains why light propagates in a straight line. There is no dissipation of the energy of a wave pulse as it propagates in the aether. WPTOL does not need the concept of magnetism. WPTOL is a much stronger theory of light compared to Maxwell's theory.
[1968] vixra:1910.0452 [pdf]
Kinetics of Periodate Oxidation of Polyoxyethylene – 300, a Biodegradable Pharmaceutical Polymer
Polyoxyethylene – 300 (POE) is a well-known biodegradable pharmaceutical polymer. In order to understand the stability of POE and to derive the reaction rate law, the title reaction was carried out in aqueous alkaline medium. The reaction was found to be first order in the concentration of the oxidant (periodate) and independent of the substrate (POE) concentration. A retardation of the reaction rate with an increase in hydroxide concentration shows an inverse fractional order in it. Based on studies of the temperature dependence of the reaction, the activation parameters were evaluated.
[1969] vixra:1910.0442 [pdf]
Tangles and Cubes for Gravity
In this short lecture we describe the correspondence between tangles and associahedra tiles, where R occurs in the case of braid tangles, leading to a natural extension to ribbons. Such tangles come from the Temperley-Lieb algebra and were used by Bar-Natan to study Khovanov complexes in nice cobordism categories.
[1970] vixra:1910.0435 [pdf]
An Incorporatory and Non-Discriminate Analysis of Mystopropanic Physics
According to all known laws of chemistry, mystopropane should not be able to form, but it does anyway because mystopropane does not care what humans think is impossible. Recent developments in the intense and very mysterious field of isobutane have revealed different forms of this strange molecule that have been shown to be created in new and novel ways that were previously thought to be impossible. Mystopropane is structurally very similar to isobutane and may look the same to the naked eye (considering one cannot see a molecule with the naked eye). The differences between isobutane and mystopropane will be revealed throughout this study, which will include the in-depth research of scientists like Daved Von Walkerheim II and Devang Deepak. The “Great Kacklehauser-Shimeryton Debate” will also be thoroughly mentioned because of its contributions to the continued research on mystopropane.
[1971] vixra:1910.0433 [pdf]
RTOP: A Conceptual and Computational Framework for General Intelligence
A novel general intelligence model is proposed with three types of learning. A unified sequence of the foreground percept trace and the command trace translates into direct and time-hop observation paths to form the basis of Raw learning. Raw learning includes the formation of image-image associations, which lead to the perception of temporal and spatial relationships among objects and object parts; and the formation of image-audio associations, which serve as the building blocks of language. Offline identification of similar segments in the observation paths and their subsequent reduction into a common segment through merging of memory nodes leads to Generalized learning. Generalization includes the formation of interpolated sensory nodes for robust and generic matching, the formation of sensory properties nodes for specific matching and superimposition, and the formation of group nodes for simpler logic pathways. Online superimposition of memory nodes across multiple predictions, primarily the superimposition of images on the internal projection canvas, gives rise to Innovative learning and thought. The learning of actions happens the same way as raw learning while the action determination happens through the utility model built into the raw learnings, the utility function being the pleasure and pain of the physical senses.
[1972] vixra:1910.0423 [pdf]
A Computer Violation of the CHSH
If a clear no-go for Einsteinian hidden parameters is real, it must be in no way possible to violate the CHSH with local hidden variable computer simulation. In the paper we show that with the use of a modified Glauber-Sudarshan method it is possible to violate the CHSH. The criterion value comes close to the quantum value and is $> 2$. The proof is presented with the use of an R computer program. The important snippets of the code are discussed and the complete code is presented in an appendix.
[1973] vixra:1910.0414 [pdf]
Divergence Series and Integrals From the Viewpoint of the Division by Zero Calculus
In this short note, we present the fundamental new interpretations that the expansion $1/(1-z) = \sum_{j=0}^{\infty} z^j$ is valid in the sense $0=0$ for $z=1$; that the integral $\int_1^{\infty} (1/x)\, dx$ is zero; and that the formula $\int_0^{\infty} J_0(\lambda t)\, dt = 1/\lambda$ is valid with $0=0$ for $\lambda =0$, in the sense of the division by zero.
[1974] vixra:1910.0400 [pdf]
On the Maximum X Entropy Negation of a Complex-Valued Distribution
In this paper, we propose a generalized model of the negation function, so that it has a more powerful capability to represent knowledge and measure uncertainty. In particular, we first define a vector representation of a complex-valued distribution. Then, an entropy measure is proposed for the complex-valued distribution, called X entropy. After that, a transformation function to acquire the negation of the complex-valued distribution is exploited. Finally, we verify that the proposed negation method has maximal entropy.
[1975] vixra:1910.0382 [pdf]
Intrusion Detection using Sequential Hybrid Model
A large amount of work has been done on the KDD 99 dataset, most of which includes the use of a hybrid anomaly and misuse detection model done in parallel with each other. In order to further classify the intrusions, our approach to network intrusion detection includes use of two different anomaly detection models followed by misuse detection applied on the combined output obtained from the previous step. The end goal of this is to verify the anomalies detected by the anomaly detection algorithm and clarify whether they are actually intrusions or random outliers from the trained normal (and thus to try and reduce the number of false positives). We aim to detect a pattern in this novel intrusion technique itself, and not the handling of such intrusions. The intrusions were detected to a very high degree of accuracy.
[1976] vixra:1910.0366 [pdf]
A Complete Proof of Beal's Conjecture
In 1997, Andrew Beal announced the following conjecture: \textit{Let $A, B, C, m, n$, and $l$ be positive integers with $m,n,l > 2$. If $A^m + B^n = C^l$ then $A, B,$ and $C$ have a common factor.} We begin by constructing the polynomial $P(x)=(x-A^m)(x-B^n)(x+C^l)=x^3-px+q$ with $p,q$ integers depending on $A^m,B^n$ and $C^l$. We solve $x^3-px+q=0$ and obtain the three roots $x_1,x_2,x_3$ as functions of $p$ and a parameter $\theta$. Since $A^m,B^n,-C^l$ are the only roots of $x^3-px+q=0$, we discuss the conditions under which $x_1,x_2,x_3$ are integers and have or do not have a common factor. Three numerical examples are given.
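The polynomial construction described above can be checked numerically on a known Beal-type identity such as 3^3 + 6^3 = 3^5; the example instance below is our choice, not taken from the paper. The x^2 coefficient of P(x) is -(A^m + B^n - C^l), which vanishes exactly when the Beal equation holds, leaving the depressed cubic x^3 - px + q:

```python
import math

# Known Beal-type identity: 3^3 + 6^3 = 3^5; all bases share the factor 3.
A, m, B, n, C, l = 3, 3, 6, 3, 3, 5
am, bn, cl = A**m, B**n, C**l
assert am + bn == cl
assert math.gcd(math.gcd(A, B), C) > 1

# P(x) = (x - A^m)(x - B^n)(x + C^l): sum of roots is A^m + B^n - C^l = 0,
# so the x^2 term drops out and P(x) = x^3 - p*x + q.
p = am*cl + bn*cl - am*bn   # -(sum of pairwise products of the roots)
q = am * bn * cl            # -(product of the roots)
P = lambda x: x**3 - p*x + q
assert P(am) == 0 and P(bn) == 0 and P(-cl) == 0
```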
[1977] vixra:1910.0362 [pdf]
Walkrnn: Reading Stories from Property Graphs
WalkRNN, the approach described herein, leverages research in learning continuous representations for nodes in networks, layers in features captured in property graph attributes and labels, and uses Deep Learning language modeling via Recurrent Neural Networks to read the grammar of an enriched property graph. We then demonstrate translating this learned graph literacy into actionable knowledge through graph classification tasks.
[1978] vixra:1910.0345 [pdf]
Unimodular Rotation of E8 to H4 600-Cells
We introduce a unimodular Determinant=1 8x8 rotation matrix to produce four 4 dimensional copies of H4 600-cells from the 240 vertices of the Split Real Even E8 Lie group. Unimodularity in the rotation matrix provides for the preservation of the 8 dimensional volume after rotation, which is useful in the application of the matrix in various fields, from theoretical particle physics to 3D visualization algorithm optimization.
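As a minimal illustration of the volume-preservation property (using a generic block-diagonal 8x8 rotation built from four 2D rotations, not the paper's specific E8-to-H4 matrix), one can check that the determinant is 1:

```python
import math

def det(M):
    # Laplace expansion; fine for small matrices.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        if M[0][j] == 0:
            continue
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += ((-1) ** j) * M[0][j] * det(minor)
    return total

def rot8(angles):
    # Block-diagonal 8x8 rotation: four independent 2D rotation blocks.
    M = [[0.0] * 8 for _ in range(8)]
    for b, a in enumerate(angles):
        c, s = math.cos(a), math.sin(a)
        i = 2 * b
        M[i][i], M[i][i+1] = c, -s
        M[i+1][i], M[i+1][i+1] = s, c
    return M

R = rot8([0.3, 1.1, 2.0, 0.7])
# Determinant 1 means the 8-dimensional volume is preserved by the rotation.
assert abs(det(R) - 1.0) < 1e-9
```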
[1979] vixra:1910.0344 [pdf]
Nature of the Dark Side of Our Universe
In this paper, I have made a great discovery. I realized that there was space-time before the Big Bang. I also found out that the universe was shrinking and dark energy was negative before the Big Bang. Moreover, I perceived that the universe expanded after the Big Bang. I also predicted that ordinary matter, which makes up five percent of the universe, was made before the Big Bang, while dark matter was made after the Big Bang. The reason for this prediction is that I put the essence of dark energy into Willmore energy and made dark matter equivalent to the Hawking mass. Dark matter has a mathematically imaginary nature, and according to its physical interpretation, it can be in and out of space-time. Another discovery in this study was the unveiling of the fine-tuning problem.
[1980] vixra:1910.0309 [pdf]
Does the Polytropic Gas Yield Better Results in a 5D Framework?
In this work, we study the polytropic gas (PG) cosmology in a d-dimensional (dD) form of the flat Friedmann-Robertson-Walker (FRW) framework. In this context, we focus on the evolution of the corresponding energy density as a first step. Next, we use the most recent data from the Type Ia Supernova (SN Ia), observational values of the cosmic Hubble parameter (OHD) and the updated Planck results to place constraints on the free parameters defined in the model. We show that the 5D form of the scenario is more compatible with the recent observations. Moreover, according to the best values of the auxiliary parameters, we compute the age of the cosmos theoretically.
[1981] vixra:1910.0308 [pdf]
Variable Polytropic Gas Cosmology
We mainly study a cosmological scenario represented by the variable Polytropic gas (VPG) unified energy density proposal. To reach this aim, we start with reconstructing a variable form of the original Polytropic gas (OPG) definition. We show that this model is a generalization of the OPG, cosmological constant plus cold dark matter (ΛCDM) and two different Chaplygin gas models. Later, we fit the auxiliary parameters given in the model and discuss essential cosmological features of the VPG proposal. Besides, we compare the VPG with the OPG by focusing on recent observational dataset given in literature including Planck 2018 results. We see that the VPG model yields better results than the OPG description and it fits very well with the recent experimental data. Moreover, we discuss some thermodynamical features of the VPG and conclude that the model describes a thermodynamically stable system.
[1982] vixra:1910.0307 [pdf]
Variable Generalized Chaplygin Gas in a 5D Cosmology
We construct the variable generalized Chaplygin gas (VGCG) defining a unified dark matter-energy scenario and investigate its essential cosmological properties in a universe governed by the Kaluza-Klein (KK) theory. A possible theoretical basis for the VGCG in the KK cosmology is argued. Also, we check the validity of thermodynamical laws and reimplement the dynamics of tachyons in the KK universe.
[1983] vixra:1910.0306 [pdf]
Variable Chaplygin Gas in Kaluza-Klein Framework
We investigate cosmological features of the variable Chaplygin gas (VCG) describing a unified dark matter-energy scenario in a universe governed by the five dimensional (5D) Kaluza-Klein (KK) gravity. In such a proposal, the VCG evolves as from the dust-like phase to the phantom or the quintessence phases. It is concluded that the background evolution for the KK type VCG definition is equivalent to that for the dark energy interacting with the dark matter. Next, after performing neo-classical tests, we calculated the proper, luminosity and angular diameter distances. Additionally, we construct a connection between the VCG in the KK universe and a homogenous minimally coupled scalar field by introducing its self-interacting potential and also we confirm the stability of the KK type VCG model by making use of thermodynamics. Moreover, we use data from Type Ia Supernova (SN Ia), observational H(z) dataset (OHD) and Planck-2015 results to place constraints on the model parameters. Subsequently, according to the best-fit values of the model parameters we analyze our results numerically.
[1984] vixra:1910.0305 [pdf]
Machine Learning Algorithm in a Caloric Viewpoint of Cosmology
In the present work, we mainly discuss the variable polytropic gas (VPG henceforth) proposal, which describes a self-gravitating gaseous sphere and can be considered a crude approximation to realistic stellar definitions, from a caloric perspective. In order to reach this aim, we start by reconstructing the VPG model by making use of thermodynamics. Then, the auxiliary parameters written in the proposal are fitted by focusing on updated experimental datasets published in the literature. We also discuss the model from the statistical perspective and conclude that the caloric VPG model (cVPG henceforth) is in good agreement with recent astrophysical observations. With the help of the statistical discussions, we see that the cVPG model is suitable for statistical cosmology and can be used to make useful predictions for the future of the universe via machine learning (ML henceforth) methods like the linear regression (LR henceforth) algorithm. Moreover, according to the results, we also perform a rough estimation of the lifetime of the universe and conclude that the cosmos will be torn apart after 51 Gyr, which means our universe has spent 21 percent of its lifetime.
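The linear-regression step mentioned above can be sketched minimally with closed-form ordinary least squares; the data here are toy stand-in values, not the paper's dataset or fitted parameters:

```python
def linear_fit(xs, ys):
    # Ordinary least squares for y = a + b*x (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Toy stand-in data (NOT the paper's dataset): some observable vs. time.
ts = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.1, 7.9]
a, b = linear_fit(ts, ys)
assert abs(b - 1.95) < 1e-9          # recovered slope
forecast = a + b * 10.0              # the kind of forward extrapolation
                                     # used for lifetime-style estimates
```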
[1985] vixra:1910.0293 [pdf]
Motivating Abstract Algebra with Elementary Algebra
There are natural lead-ins to abstract algebra that occur in elementary algebra. We explore function composition using linear functions and permutations on letters in misspellings of words. Groups and the central idea of abstract algebra, that polynomials of fifth degree and greater are unsolvable by radicals, are put into focus for college students.
[1986] vixra:1910.0283 [pdf]
A Computing Method About How Many `comparable' Pairs of Elements Exist in a Certain Set
Given two sets, one consisting of variables representing n distinct positive numbers, the other a kind of power set of this n-element set, I got interested in the fact that, for the latter set, depending on the values of the elements it can occur that not every pair of elements is `comparable'; that is, it is not always uniquely determined which of two elements is smaller. By proving theorems in order to advance our research, we give a table which describes how many `comparable' cases exist, for several values of n.
[1987] vixra:1910.0281 [pdf]
A New Look at Potential vs. Actual Infinity
The {\it technique} of classical mathematics involves only potential infinity, i.e. infinity is understood only as a limit, and, as a rule, the legitimacy of every limit is thoroughly investigated. However, {\it the basis} of classical mathematics does involve actual infinity: the infinite ring of integers $Z$ is the starting point for constructing infinite sets with different cardinalities, and, even in standard textbooks on classical mathematics, the problem of whether $Z$ can be treated as a limit of finite sets is not even posed. On the other hand, finite mathematics starts from the ring $R_p=(0,1,...,p-1)$ (where all operations are modulo $p$) and the theory deals only with a finite number of elements. We give a direct proof that $Z$ can be treated as a limit of $R_p$ when $p\to\infty$, and the proof does not involve actual infinity. Then we explain that, as a consequence, finite mathematics is more fundamental than classical mathematics.
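The sense in which $Z$ can be treated as a limit of $R_p$ can be illustrated: for elements small relative to $p$, arithmetic in $R_p$ is indistinguishable from ordinary integer arithmetic. A sketch, with an arbitrarily chosen large prime $p$ (the modulus choice is ours, not the paper's):

```python
import operator

def ring_op(a, b, p, op):
    # Arithmetic in R_p = (0, 1, ..., p-1): all operations taken modulo p.
    return op(a, b) % p

p = 10**9 + 7   # a large prime modulus, chosen arbitrarily
# For elements small relative to p, R_p arithmetic agrees with Z:
for a, b in [(3, 5), (123, 456), (10**3, 10**4)]:
    assert ring_op(a, b, p, operator.add) == a + b
    assert ring_op(a, b, p, operator.mul) == a * b

# The agreement breaks once results approach the modulus:
assert ring_op(7, 8, 10, operator.add) != 7 + 8   # 15 mod 10 = 5
```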
[1988] vixra:1910.0266 [pdf]
A Resolution to the Vacuum Catastrophe
This paper presents a theoretical estimate for the vacuum energy density which turns out to be near zero and thus much more palatable than an infinite or a very large theoretical value obtained by imposing an ultraviolet frequency cut-off. This result helps address the "vacuum catastrophe" and the "cosmological constant problem".
[1989] vixra:1910.0263 [pdf]
The Clebsch Diagonal, the Associahedra and Motivic Gravity
The associahedra appear in a line configuration space for the real Clebsch diagonal surface, which we relate to e6 in the magic star, with applications to mass generation.
[1990] vixra:1910.0255 [pdf]
A Deep Neural Network as Surrogate Model for Forward Simulation of Borehole Resistivity Measurements
Inverse problems appear in multiple industrial applications. Solving such inverse problems requires the repeated solution of the forward problem. This is the most time-consuming stage when employing inversion techniques, and it constitutes a severe limitation when the inversion needs to be performed in real time. Here, we focus on the real-time inversion of resistivity measurements for geosteering. We investigate the use of a deep neural network (DNN) to approximate the forward function arising from Maxwell's equations, which govern electromagnetic wave propagation through a medium. By doing so, the evaluation of the forward problems is performed offline, allowing for the online real-time evaluation (inversion) of the DNN.
[1991] vixra:1910.0245 [pdf]
Complex Hadamard Matrices and Applications
A complex Hadamard matrix is a square matrix $H\in M_N(\mathbb C)$ whose entries are on the unit circle, $|H_{ij}|=1$, and whose rows are pairwise orthogonal. The main example is the Fourier matrix, $F_N=(w^{ij})$ with $w=e^{2\pi i/N}$. We discuss here the basic theory of such matrices, with emphasis on geometric and analytic aspects.
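Both defining properties can be verified numerically for the Fourier matrix example, using only the standard library:

```python
import cmath

def fourier_matrix(N):
    # F_N = (w^{ij}) with w = exp(2*pi*i/N): the basic complex Hadamard matrix.
    w = cmath.exp(2j * cmath.pi / N)
    return [[w ** (i * j) for j in range(N)] for i in range(N)]

N = 5
F = fourier_matrix(N)
# Entries on the unit circle:
assert all(abs(abs(F[i][j]) - 1) < 1e-9 for i in range(N) for j in range(N))
# Rows pairwise orthogonal: <row_i, row_k> = N if i == k, else 0.
for i in range(N):
    for k in range(N):
        ip = sum(F[i][j] * F[k][j].conjugate() for j in range(N))
        assert abs(ip - (N if i == k else 0)) < 1e-8
```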
[1992] vixra:1910.0239 [pdf]
Inequality in the Universe, Imaginary Numbers and a Brief Solution to P=NP? Problem
While I was working on some basic physical phenomena, I discovered some geometric relations that are also of interest to mathematics. In this work, I applied the rules I have proven to the P=NP? problem, via the impossibility of perpendicularity in the universe. It also brings out extremely interesting results, such as imaginary numbers, which are currently known as real numbers. It also seems that Euclidean geometry is impossible: the actual geometry is Riemannian geometry, and complex numbers are real.
[1993] vixra:1910.0234 [pdf]
The Pascal Triangle of Maximum Deng Entropy
The Pascal Triangle (known as the Yang Hui Triangle) is an important structure in mathematics, which has been used in many fields. Entropy plays an essential role in physics. In various fields, information entropy is used to measure the uncertainty of information. Hence, establishing the connection between the Pascal Triangle and information uncertainty is a question worth exploring. Deng proposed Deng entropy, which can measure the non-specificity and discord of a basic probability assignment (BPA) in Dempster-Shafer (D-S) evidence theory. D-S evidence theory and the power set are very closely related. Hence, by analysing the maximum Deng entropy, the paper finds that there is a potential rule for the BPA as the frame of discernment changes. Finally, the paper establishes the relation between the maximum Deng entropy and the Pascal Triangle.
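A sketch of Deng entropy under its standard definition, E_d = -Σ_A m(A) log2( m(A) / (2^|A| - 1) ); the helper name `deng_entropy` and the example BPAs are illustrative, not from the paper:

```python
import math

def deng_entropy(bpa):
    # bpa: dict mapping frozenset focal elements to masses summing to 1.
    # Deng entropy: E_d = -sum m(A) * log2( m(A) / (2^|A| - 1) ).
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

# For a BPA with only singleton focal elements, Deng entropy reduces
# to the Shannon entropy:
bpa = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
assert abs(deng_entropy(bpa) - 1.0) < 1e-12

# Mass on a multi-element focal set adds non-specificity, raising the entropy:
bpa2 = {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.5}
assert deng_entropy(bpa2) > 1.0
```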
[1994] vixra:1910.0230 [pdf]
Another Method to Solve the Grasshopper Problem (The International Mathematical Olympiad)
The 6th problem of the 50th International Mathematical Olympiad (IMO), held in Germany in 2009, is called 'the grasshopper problem'. For this problem, Kos developed a theory from unique viewpoints with reference to Noga Alon's combinatorial Nullstellensatz. We have tried to solve this problem by an original method inspired by a polynomial function that Kos defined, and examined it for n=3, 4 and 5. For almost all cases the claim of the problem follows, but there remains imperfection due to 'singularity'.
[1995] vixra:1910.0198 [pdf]
The Area and Volume of a J=Q=0 Black Hole
The present note addresses a paper by DiNunno & Matzner, in which the authors claim that 1) the volume of a J=Q=0 black hole as measured in "Schwarzschild coordinates" vanishes and 2) the volume itself is coordinate-dependent. We refute these statements as elementary conceptual mistakes, which originate from a basic misunderstanding of general covariance in the context of the gauge theory of General Relativity.
[1996] vixra:1910.0161 [pdf]
The Structure and Properties of Elementary Particles
We have developed simple models of the elementary particles based on the assumption that the particle interior is influenced by just two force fields, gravity and electrostatics. The fundamental particles are electrons, positrons, neutrinos and photons. All the other elementary particles are composed of these fundamental entities. A semi-classical approach is used to obtain simple expressions that give properties all in good agreement with experimental results. This approach is able to make several predictions. For example: All the elementary particles are composed of the particles they decay into. All particles are made of matter. There is no antimatter. The muon is not point-like. It is a composite particle with internal structure. Neutrinos have a small quantity of mass and charge. The neutron also has a small charge determined by the charge of its neutrino. A particle's lifetime is determined by its size relative to its Schwarzschild radius. Single protons should be produced in electron-positron collisions below the two-proton energy threshold.
[1997] vixra:1910.0140 [pdf]
Remote Sensing and Computer Science
The implications of optimal archetypes have been far-reaching and pervasive. In fact, few analysts would disagree with the visualization of neural networks. While such a hypothesis is largely an appropriate objective, it is supported by existing work in the field. Our focus in this paper is not on whether A* search can be made peer-to-peer and pseudorandom, but rather on presenting a real-time tool for visualizing RAID [1].
[1998] vixra:1910.0131 [pdf]
Thoughts Are Faster Than Light
When I read the probabilities of quantum mechanics and general relativity, I was wondering why physicists do not have the general Japanese philosophy that thoughts are transmitted to the entire universe in an instant. Perhaps it was an idea that I had personally as I traveled across various religions. In quantum mechanics, it is a thought that “at the time of observation, there is an inexact interaction between two substances”.
[1999] vixra:1910.0125 [pdf]
A Minimally Necessary Local-Nonlocal Model of the Evolution of Elementary Particles and Fundamental Interactions in the Early Universe
The article considers the consequences of the mechanism, previously proposed by Lee Smolin, for the formation of probabilities in indeterministic quantum processes. Extrapolating these consequences to high-energy physics and the physics of the early universe yields the model proposed in the article for the evolution of elementary particles and fundamental interactions in the early universe, in which the order observed today, described by the Standard Model of particle physics, develops in several stages involving complementary local and nonlocal processes. Including nonlocal quantum effects makes the model more complete than its predecessors and allows it to consistently resolve, within its framework, several problems that remain unsolved in fully local theories, such as: the baryon asymmetry problem; the fermion mass hierarchy problem; the gauge hierarchy problem of the fundamental interactions; the question of the nature and origin of dark matter particles; the deviations from Standard Model predictions observed experimentally in meson decays; and others. The model agrees well with the experimental data underlying the Standard Model and beyond its predictive power, is compatible with the theory of inflationary expansion of the Universe and the $\Lambda$CDM cosmological model, and includes some elements of symmetry and supersymmetry theories and string theory.
[2000] vixra:1910.0103 [pdf]
The Yang-Mills Flow for Connections
For a family of connections in a vector fiber bundle over a Riemannian manifold, a Yang-Mills flow is defined with the help of the Riemannian curvature of the connections.
[2001] vixra:1910.0100 [pdf]
Lagrangian Quantum Mechanics for Indistinguishable Fermions: a Self-Consistent Universe.
This work corresponds to a classical-mechanics approach to quantum mechanics and, as a consequence, the paradigm of an expanding universe is replaced by a universe of contracting particles, which explains the cosmological redshift: as time progresses, the hydrogen atoms absorb smaller wavelengths. Quantum particles are defined as linearly independent indistinguishable normalized classical bi-spinor fields with quartic interactions, which allows defining positive energy spectra and evading the problems with infinities associated with the quantization procedure. To have a consistent particle interpretation in each inertial system, a large N approach for the number of fermions must be imposed. The resulting model, based on dynamical mass generation methods, explains quark confinement and the hadronic mass behavior in a trivial form and allows the oscillation of low-mass neutrinos inside massive matter.
[2002] vixra:1910.0084 [pdf]
Conservation of Mass in Collision
The conservation of mass in an inertial reference frame is a property of the conservation law of momentum. The inelastic collision between two identical objects shows that the total momentum is zero in the COM (center of mass) frame. In another inertial reference frame, the rest frame of one object before the collision, the total momentum before the collision is equal to the total momentum after the collision. This conservation of momentum shows that the mass of an object is also conserved in all inertial reference frames. The mass of a moving object is independent of its velocity.
[2003] vixra:1910.0081 [pdf]
Twin Prime Conjecture (New Edition)
I proved the Twin Prime Conjecture. All twin primes are examined in hexadecimal notation. The pattern does not change, however huge the numbers become. Apart from 2 and 3, prime numbers are generated only at 6n-1 and 6n+1 (n a positive integer). The probability that a twin prime occurs is 6/5 times the square of the probability that a prime occurs. For very large numbers the probability of generating a prime number is low, and the probability of generating a twin prime is therefore very low, but twin primes are still produced. That is, twin primes exist forever.
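The 6n±1 observation the abstract relies on is easy to verify empirically (a finite check, not a proof of the conjecture); the helper names below are illustrative:

```python
def is_prime(n):
    # Trial division; adequate for small n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every prime > 3 has the form 6n - 1 or 6n + 1 (the other residues
# mod 6 are divisible by 2 or 3), so twin primes come in pairs (6n-1, 6n+1).
assert all(p % 6 in (1, 5) for p in range(5, 10000) if is_prime(p))

# Twin prime pairs below 100:
twins = [(p, p + 2) for p in range(3, 100) if is_prime(p) and is_prime(p + 2)]
assert (5, 7) in twins and (71, 73) in twins
```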
[2004] vixra:1910.0079 [pdf]
Photon is Interpreted by the Particleization/normalization of the Mutual Energy Flow of the Electromagnetic Fields
Quantum mechanics has quantization. Quantization offers a method to build a quantum wave equation from a mechanical equation; for example, canonical quantization offers a method to build the Schrödinger equation from the Hamiltonian of classical mechanics. This is also referred to as first quantization. The Maxwell equations are themselves wave equations, hence they do not need first quantization. There is second quantization for the electromagnetic field, which discusses how many photons can be created when the energy of the electromagnetic field is known. That is not of interest to this author. This author is interested in how to build a particle from the wave equations (Maxwell equations or the Schrödinger equation). Here the particle should be confined locally in space while still having the properties of a wave. Traditional quantization finds the wave equation; this author tries to build a particle from that wave equation, a process that can be called particleization. This author has introduced the mutual energy principle, which successfully resolves the conflict between the Maxwell equations and the law of energy conservation. The mutual energy flow theorem is derived from the mutual energy principle. The mutual energy flow consists of the retarded wave and the advanced wave. The mutual energy flow theorem tells us that the total energy of the energy flow passing through any surface between the emitter and the absorber is exactly the same. This property is required of the photon and of any particle in quantum mechanics. Hence, this author has linked the mutual energy flow to the photon and to other particles. The mutual energy flow has the properties of waves and is also confined locally in space. However, there is still a problem: the field of an emitter or of an absorber decreases with the distance from the field point to the source point.
If the current (or charge) of the source or sink of a photon is constant, the energy of the photon, which equals the inner product of the current and the field, will depend on the distance between the source and the sink of the photon. If the distance increases, the photon energy would decrease toward the infinitesimally small. This is not correct: the energy of a photon should be a constant, E=hν, and cannot decrease with the distance between the emitter and the absorber. To overcome this difficulty, this author suggests a normalization of the mutual energy flow. It is assumed that the retarded wave sent from the emitter collapses back in all directions, but the mutual energy flow builds an energy channel between the source and the sink. Since the energy can only go through this channel, the total energy of one photon must go through it; hence the total energy of the mutual energy flow has to be normalized to the energy of one photon. As a result, the amplitude of the wave does not decrease along the energy channel, and the amplitude of the advanced wave likewise does not decrease along it. The electromagnetic wave in the space between an emitter (source) and an absorber (sink) behaves like a wave inside a waveguide: if the loss of energy can be neglected, the amplitude does not decrease along the waveguide. This can be referred to as the "natural waveguide". In the natural waveguide the advanced wave leads the retarded wave, so the retarded wave can only go in the direction where there is an advanced wave; likewise the retarded wave leads the advanced wave, which can only go in the direction of the retarded wave. This normalization process successfully particleizes the mutual energy flow.
This author believes this theory of the normalization/particleization of the mutual energy flow is also correct for other particles, for example the electron.
[2005] vixra:1910.0077 [pdf]
Grimm's Conjecture
The collection of consecutive composite integers is composite connected, and no pair of its distinct integers may be generated by a single prime number. Composite connectedness implies the two-primes rule and the singularity propagation/breaking rule. Failure of the singularity propagation proves Grimm's Conjecture.
[2006] vixra:1910.0035 [pdf]
Why is the Spin of the Particles Equal to S=1/2?
According to the unified theory of dynamic space, the first (Universal) and the second (local) deformation of space is described, which change the geometric structure of the isotropic space. These geometric deformations created the dynamic space, the Universe, and the space holes (bubbles of empty space), the early form of matter. The neutron cortex is structured around these space holes with the electrically opposite elementary units (in short: units) at the speed of light, as the third deformation of space (electrical and geometric deformation), resulting in the creation of surface electric charges (quarks), to which the particle spin is due. The constant kinetic energy of the particle spin is calculated at 100 million Joules. Thus, a single electron is enough to become the heart of a future energy machine.
[2007] vixra:1910.0029 [pdf]
On the Evolution of the Universe Using Relativistic Cosmology
For the collided region of two gravitationally bound structures A and B, the geodesic equation is derived using the calculus of variations. With the help of the geodesic equation derived for the collided region of A and B, a method to calculate any possible curvature of the universe beyond the observable flat universe is detailed. This relativistic method is used to describe a generic idea on the evolution of gravitationally bound structures and its effect on the evolution of the universe. Using this idea, the distribution of matter and antimatter in the universe, the observed accelerating expansion of the universe, cosmic inflation, and the large-scale structure of the present universe are explained.
[2008] vixra:1910.0022 [pdf]
Using the Rational Root Test to Factor with the TI-83
The rational root test gives a way to solve polynomial equations. We apply the idea to factoring quadratics (and other polynomials). A calculator speeds up the filtering through possible rational roots.
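As an illustrative sketch of the filtering idea (exact arithmetic via fractions and a hypothetical `rational_roots` helper are this sketch's assumptions; it is not the paper's TI-83 program): the rational root test says any rational root p/q, in lowest terms, of an integer-coefficient polynomial must have p dividing the constant term and q dividing the leading coefficient, so only a finite candidate list needs checking.

```python
from fractions import Fraction
from itertools import product

def divisors(n):
    """Positive divisors of |n|."""
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """Rational roots of a polynomial given as [a_n, ..., a_1, a_0].

    Assumes a_0 != 0. By the rational root theorem, any root p/q in
    lowest terms has p | a_0 and q | a_n, so we filter the finite list.
    """
    a_n, a_0 = coeffs[0], coeffs[-1]
    roots = set()
    for p, q in product(divisors(a_0), divisors(a_n)):
        for cand in (Fraction(p, q), Fraction(-p, q)):
            # Horner evaluation in exact rational arithmetic
            val = Fraction(0)
            for c in coeffs:
                val = val * cand + c
            if val == 0:
                roots.add(cand)
    return roots

# 6x^2 + x - 2: the surviving candidates are 1/2 and -2/3
print(rational_roots([6, 1, -2]))
```

For 6x² + x − 2 the candidate list is ±1, ±2, ±1/2, ±1/3, ±2/3, ±1/6, and the two survivors 1/2 and −2/3 give the factorization (2x − 1)(3x + 2).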
[2009] vixra:1910.0010 [pdf]
Technical Work Report
I observed my Student Industrial Work Experience Scheme at Nigerian Institute of Social and Economic Research (NISER), Ibadan, Oyo State, Nigeria. During my SIWES, I was able to learn how to make use of some complex statistical (computational) packages in coding and analyses of data, either primary data or secondary data. And I also learned how to interpret analyzed data for end users. Furthermore, I saw the practical applications of Mathematics to solve problems in organizations, companies and some other subsidiary institutes. The research institute enabled me to bridge the gap between theory and practical.
[2010] vixra:1909.0658 [pdf]
On the Value of the Function $\exp {(ax)}/f(a)$ at $a=0$ for $f(a)=0$
In this short note, we will consider the value of the function $\exp {(ax)}/f(a)$ at $a=0$ for $f(a)=0$. This case appears for the construction of the special solution of some differential operator $f(D)$ for the polynomial case of $D$ with constant coefficients. We would like to show the power of the new method of the division by zero calculus, simply and typically.
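For context, the classical operator-method background the abstract refers to (standard theory, not the division-by-zero calculus itself): a particular solution of $f(D)y = e^{ax}$ is $e^{ax}/f(a)$ when $f(a) \neq 0$, and the question is what replaces this when $a$ is a root of $f$. If $a$ is a root of multiplicity $m$, the exponential-shift rule gives

```latex
f(D)\,e^{ax} = f(a)\,e^{ax},
\qquad
f(D)\!\left[x^{m}e^{ax}\right] = f^{(m)}(a)\,e^{ax}
\quad\text{when } f(a)=\cdots=f^{(m-1)}(a)=0,
```

so the classical particular solution is $y_p = x^{m}e^{ax}/f^{(m)}(a)$; the abstract's division-by-zero calculus addresses the value of $e^{ax}/f(a)$ at precisely this singular point.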
[2011] vixra:1909.0653 [pdf]
ζ(4), ζ(6), ..., ζ(80), ζ(82) Are Irrational Numbers
ζ(4), ζ(6), ..., ζ(80), ζ(82) are considered. From these equations, it can be said that ζ(4), ζ(6), ..., ζ(80), ζ(82) are irrational numbers. ζ(84), ζ(86), etc. can also be expressed by these equations. Because I use π², these are irrational numbers. The fact that every even value ζ(2n) is irrational can also be explained by the fact that each even value ζ(2n) is a multiple of π².
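The classical background here is Euler's theorem that every even value ζ(2n) is a rational multiple of π^{2n} (e.g. ζ(2) = π²/6, ζ(4) = π⁴/90), which is the sense in which each even value involves a power of π²; a quick numerical check of the ζ(4) case:

```python
import math

# Partial sum of zeta(4) = sum over k of 1/k^4.
# The tail beyond N is below the integral bound 1/(3*N^3), i.e. ~3e-16 here.
N = 100_000
zeta4 = sum(1.0 / k**4 for k in range(1, N + 1))

print(zeta4, math.pi**4 / 90)  # Euler: zeta(4) = pi^4 / 90
```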
[2012] vixra:1909.0646 [pdf]
Proof of the Inconsistency of the Maxwell Equations to the Measurement Result of the Maxwell-Lodge Experiment
This short paper proves mathematically that the Maxwell equations cannot explain the Maxwell-Lodge experiment, not even if the vector potential is used instead of the magnetic induction.
[2013] vixra:1909.0645 [pdf]
Perihelion Advance Formula Inference from Newton Gravity Law Relative-Velocity Dependence Completed
While Newton's original law of gravitation does not lead to the formula in question, the same law, completed with relative-velocity dependence, does, briefly and with no hypothesis. Keywords: perihelion advance; interactions relative-velocity dependence.
[2014] vixra:1909.0641 [pdf]
Can We Predict?
We simulate artificial data for a sinusoid having a period P=1. Then we show that this period can be detected from a short $\Delta T = 0.3$ slice of data. We proceed to show that the slice length is irrelevant for high-quality measurements. The frustrating frequency resolution limit $f_0=1/\Delta T$ of the power spectrum methods is pulverized. It is possible to predict the behaviour of non-linear periodic models.
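A hedged sketch of the kind of experiment described (the details below are this sketch's assumptions, not the paper's method: a noiseless unit-amplitude sinusoid, a frequency grid search, and a closed-form two-parameter least-squares fit): even with a slice of length $\Delta T = 0.3$, model fitting recovers P = 1 far below the $1/\Delta T$ Fourier resolution limit.

```python
import math

# Simulated slice: y(t) = sin(2*pi*t/P) with P = 1, sampled over Delta T = 0.3.
P_true = 1.0
ts = [0.3 * i / 99 for i in range(100)]
ys = [math.sin(2 * math.pi * t / P_true) for t in ts]

def residual(f):
    """Least-squares residual of y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t)."""
    c = [math.cos(2 * math.pi * f * t) for t in ts]
    s = [math.sin(2 * math.pi * f * t) for t in ts]
    cc = sum(x * x for x in c); ss = sum(x * x for x in s)
    cs = sum(x * y for x, y in zip(c, s))
    cy = sum(x * y for x, y in zip(c, ys))
    sy = sum(x * y for x, y in zip(s, ys))
    det = cc * ss - cs * cs
    if abs(det) < 1e-12:          # degenerate normal equations; skip
        return float("inf")
    a = (cy * ss - sy * cs) / det  # solve the 2x2 normal equations
    b = (cc * sy - cs * cy) / det
    return sum((y - a * ci - b * si) ** 2 for y, ci, si in zip(ys, c, s))

# Trial frequencies 0.5 .. 3.0 in steps of 0.01, far finer than 1/Delta T = 3.33.
freqs = [0.5 + 0.01 * k for k in range(251)]
best_f = min(freqs, key=residual)
print(best_f)  # close to 1/P_true = 1.0
```

The power spectrum of a 0.3-long slice cannot separate frequencies closer than about 3.3, yet the fitted model pins the frequency to the grid step, which is the point the abstract makes.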
[2015] vixra:1909.0552 [pdf]
Intention Cosmology: Resolving the Discrepancy Between Direct and Inverse Cosmic Distance Ladder Through a New Cosmological Model
A new cosmological model is presented, which derives from a new physics within a theory of everything. It introduces, beyond radiation and baryonic matter, a unique and new ingredient, which is the substance of the universe, and which can be assimilated to the cold dark matter of the standard cosmology. The new model, although profoundly different from the ΛCDM model, exhibits the same metric and an almost identical distance scale. So it shares the same chronology and the same theory of nucleosynthesis, but solves the problem of the horizon, the flatness of space and the homogeneity of the distribution of matter in a natural way, without having to resort to an additional theory like that of inflation and without dark energy. Finally, it resolves the tension between the direct and the inverse cosmic distance ladder.
[2016] vixra:1909.0517 [pdf]
What Was Division by Zero?; Division by Zero Calculus and New World (Compact Version)
Based on the preprint survey paper, we will introduce the importance of the division by zero and its great impact on elementary mathematics and the mathematical sciences for a general audience. For this purpose, we will give its global viewpoint in a self-contained manner by using the related references. This version was written for the Proceedings of ICRAMA2019 (16-18 July, 2019) with an 8-page restriction in the requested format.
[2017] vixra:1909.0515 [pdf]
The Requirements on the Non-trivial Roots of the Riemann Zeta via the Dirichlet Eta Sum
An explanation of the Riemann Hypothesis is given in sections, using the well known Dirichlet Eta sum equivalence, beginning with a brief history of the paper and a statement of the problem. The next 3 sections dissect the complex Eta sum into 8 real valued sums and 2 constants. Parts 6 and 8 explain a recursive relationship between the sums and constants, via 2 systems of 2 equations, while parts 7 and 9 explain the conditions generated from both systems. Finally, section 10 concludes the explanation in terms of the original inputs of the Dirichlet Eta sum, proves Riemann's suspicion, and it shows that the only possible solution for the real portion of the complex input, commonly labeled a, is that it must equal 1/2 and only 1/2.
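For reference, the well-known Dirichlet eta equivalence that the abstract starts from: the alternating sum converges for $\mathrm{Re}(s) > 0$ and

```latex
\eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^{s}}
        = \left(1 - 2^{\,1-s}\right)\zeta(s),
\qquad \mathrm{Re}(s) > 0,
```

so inside the critical strip $0 < \mathrm{Re}(s) < 1$, where the factor $1 - 2^{1-s}$ never vanishes, the non-trivial zeros of $\zeta$ are exactly the zeros of $\eta$.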
[2018] vixra:1909.0513 [pdf]
Captcha Generation and Identification Using Generative Adversarial Networks
Adversarial attacking is an emerging worrying angle in the field of AI, capable of fooling even the most efficiently trained models to produce results as and when required. Inversely, the same design powering adversarial attacks can be employed for efficient white-hat modeling of deep neural networks. Recently introduced GANs <i>(Generative Adversarial Networks)</i> serve precisely this purpose by generating forged data. Consequently, authentic data identification is a crucial problem, considering increased adversarial attacks. This paper proposes an approach using DCGANs <i>(Deep Convolutional Generative Adversarial Networks)</i> to both generate and distinguish artificially produced fake captchas. The generator model produces a significant number of unseen images, and the discriminatory model classifies them as fake (0) or genuine (1). Interestingly enough, both the models can be configured to learn from each other and become better as they train along.
[2019] vixra:1909.0476 [pdf]
Prediction of the Neutrinos Masses Using Some Empirical Formulae
In a 2011 paper the author implicitly predicted the masses of the neutrinos using some old empirical formulae. Now this prediction is written explicitly: the mass of one neutrino is calculated as 21.44240(50) meV, respectively 21.36459(49) meV, depending on the model. (The other neutrino masses are connected with one of these values.) Comparing the upper bounds of the neutrino masses in 2005 and in 2019, the author calculates that the upper bound of the lightest neutrino mass came between 3 and 20 times closer to the predicted value. At the same time, cosmologists predict that in the next five years we will have values of the neutrino masses, not only the lower and upper bounds. Thus this prediction can be tested, and perhaps the p-value for coincidence will not be large. Besides, the prediction of the gravitational constant with the same formulae is also mentioned.
[2020] vixra:1909.0473 [pdf]
Formula of ζ Even-Numbers
I published the odd-number formula for ζ, but realized that it also holds in the even case. Therefore, it is announced here.
[2021] vixra:1909.0471 [pdf]
Galactic Rotation Curves and Spiral Form
The two problems in the title, concerning massive core galaxies, unexplained by the original gravity law of Newton, are normal features according to the same law completed with Relative-Velocity Dependence; thus, among other hypotheses, the dark matter is no longer necessary. Keywords: spiral galaxy; rotation curve; dark matter; interactions relative-velocity dependence; gravitational refractive index; black captor/hole; atom radius.
[2022] vixra:1909.0463 [pdf]
Theoretical Value for Gravitational Constant
This paper develops the theoretical ratio of the gravitational force to electromagnetic force between two electrons. The resulting ratio produces a gravitational constant with the precision of electromagnetic constants.
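For scale, the conventional value of the force ratio the abstract concerns can be computed directly from standard constants (the numbers below are rounded CODATA figures, not the paper's theoretical derivation): gravity between two electrons is weaker than their electrostatic repulsion by roughly 43 orders of magnitude, independent of separation.

```python
# Rounded CODATA-style constants (SI units)
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m_e = 9.109e-31   # electron mass, kg
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
e   = 1.602e-19   # elementary charge, C

# Both forces scale as 1/r^2, so the ratio is independent of distance.
ratio = (G * m_e**2) / (k_e * e**2)
print(ratio)  # roughly 2.4e-43
```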
[2023] vixra:1909.0461 [pdf]
Fibonacci's Answer to Primality Testing?
In this paper, we consider various approaches to primality testing and then ask whether an effective deterministic test for prime numbers can be found in the Fibonacci numbers.
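One concrete candidate such a question usually points to (a standard result, not necessarily the test the paper proposes, so treat this as an illustrative assumption): if p ≠ 5 is prime, then p divides F_{p−ε}, where ε = +1 if p ≡ ±1 (mod 5) and ε = −1 if p ≡ ±2 (mod 5). The converse fails (323 = 17·19 is the smallest Fibonacci pseudoprime), which is exactly why an effective deterministic test is the open question.

```python
def fib_pair(k, n):
    """(F(k) mod n, F(k+1) mod n) by fast doubling."""
    if k == 0:
        return (0, 1)
    a, b = fib_pair(k >> 1, n)
    c = (a * ((2 * b - a) % n)) % n   # F(2m)   = F(m) * (2*F(m+1) - F(m))
    d = (a * a + b * b) % n           # F(2m+1) = F(m)^2 + F(m+1)^2
    return (d, (c + d) % n) if k & 1 else (c, d)

def fibonacci_probable_prime(n):
    """Necessary condition for primality of n (n > 5, gcd(n, 10) = 1)."""
    eps = 1 if n % 5 in (1, 4) else -1   # Jacobi symbol (5/n) for these n
    return fib_pair(n - eps, n)[0] == 0

# Odd n coprime to 5 in [7, 39]: only the primes survive this filter.
print([m for m in range(7, 40, 2) if m % 5 and fibonacci_probable_prime(m)])
# -> [7, 11, 13, 17, 19, 23, 29, 31, 37]
```

Fast doubling keeps the cost at O(log n) multiplications mod n, so the condition is cheap to check even for large n; the pseudoprimes such as 323 are what any deterministic refinement must eliminate.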
[2024] vixra:1909.0459 [pdf]
Jagged Islands of Bound Entanglement and Witness-Parameterized Probabilities
We report several witness-parameterized families of bound-entangled probabilities. Two pertain to the $d=3$ (two-qutrit) and a third to the $d=4$ (two-ququart) subsets analyzed by Hiesmayr and L{\"o}ffler of ``magic'' simplices of Bell states that were introduced by Baumgartner, Hiesmayr and Narnhofer. The Hilbert-Schmidt probabilities of positive-partial-transpose (PPT) states--within which we search for bound-entangled states--are $\frac{8 \pi }{27 \sqrt{3}} \approx 0.537422$ ($d=3$) and $\frac{1}{2}+\frac{\log \left(2-\sqrt{3}\right)}{8 \sqrt{3}} \approx 0.404957$ ($d=4$). We obtain bound-entangled probabilities of $-\frac{4}{9}+\frac{4 \pi }{27 \sqrt{3}}+\frac{\log (3)}{6} \approx 0.00736862$ and $\frac{-204+7 \log (7)+168 \sqrt{3} \cos ^{-1}\left(\frac{11}{14}\right)}{1134} \approx 0.00325613$ ($d=3$) and $\frac{8 \log (2)}{27}-\frac{59}{288} \approx 0.00051583$ and $\frac{24 \text{csch}^{-1}\left(\frac{8}{\sqrt{17}}\right)}{17 \sqrt{17}}-\frac{91}{544} \approx 0.00218722$ ($d=4$). (For $d=3$, we also obtain $\frac{2}{81} \left(4 \sqrt{3} \pi -21\right) \approx 0.0189035$ based on the realignment criterion.) The families, encompassing these results, are parameterized using generalized Choi and Jafarizadeh-Behzadi-Akbari witnesses. In the $d=3$ analyses, we utilized the mutually unbiased bases (MUB) test of Hiesmayr and L{\"o}ffler, and also the Choi $W^{(+)}$ test. The same bound-entangled probability was achieved with both--the sets (``jagged islands'') detected having void intersection. The entanglement (bound and ``non-bound''/``free'') probability for each was $\frac{1}{6} \approx 0.16667$, while their union and intersection gave $\frac{2}{9} \approx 0.22222$ and $\frac{1}{9} \approx 0.11111$.
Further, we examine generalized Horodecki states, as well as estimating PPT-probabilities of approximately 0.39339 (very well-fitted by $\frac{7 \pi}{25 \sqrt{5}} \approx 0.39338962$) and 0.115732 (conjecturally, $\frac{1}{8}+\frac{\log \left(3-\sqrt{5}\right)}{13 \sqrt{5}} \approx 0.115737$) for the original (8- [two-qutrit] and 15 [two-ququart]-dimensional) magic simplices themselves.
[2025] vixra:1909.0448 [pdf]
Face Alignment Using a Three Layer Predictor
Face alignment is an important feature for most facial-image-related algorithms such as expression analysis, face recognition, and detection. Also, some images lose information due to factors such as occlusion and lighting, and it is important to recover those lost features. This paper proposes an innovative method for automatic face alignment by utilizing deep learning. First, we use second-order Gaussian derivatives along with RBF-SVM and Adaboost to classify a first layer of landmark points. Next, we use branching-based cascaded regression to obtain a second layer of points, which is further used as input to a parallel and multi-scale CNN that gives us the complete output. Experiments showed the algorithm gives excellent results in comparison to state-of-the-art algorithms.
[2026] vixra:1909.0432 [pdf]
Transition Theory of An Electron Traveling from Uncertain to Causal Basis
One of Einstein’s most famous quotes is: ‘I am at all events convinced that God does not play dice’. This study attempts to convert the “uncertainty principle” into a “certainty principle.” In the previous electron model, bare electrons moved discretely, and the photons surrounding them moved continuously. In this study, we consider the traveling of free electrons with thermal conductance, and how to determine the unique discrete traveling point by using the thermal potential energy gradient.
[2027] vixra:1909.0428 [pdf]
Braid Logic for Mass Condensation
In quantum logic, the emergence of spacetime and related symmetries goes hand in hand with the emergence of the real and complex numbers themselves. In this paper, we show how finite fields are surprisingly sufficient for most physical questions, once we throw away classical geometrical models in favour of categorical axioms. In particular, generalised Pauli matrix algebras are closely related to braid and ribbon diagrams, and holographic information for mass localisation gains its intuition from algebras for anyon condensation. We discuss definitions of homology and cohomology associated to braids, recalling the twistor construction of massive solutions in H2.
[2028] vixra:1909.0409 [pdf]
Naturalness Revisited: not Spacetime, But Rather Spacephase
What defines the boundary of a quantum system is phase coherence, not time coherence. Time is the same for all three spatial degrees of freedom in flat 4D Minkowski spacetime. However, in the quantum mechanics of wavefunctions in 3D space, phases of wavefunction components are not necessarily the same in all three orientations. Consequently, the S-matrix generated by the geometric Clifford product of two 3D wavefunctions exists not in 4D spacetime, but rather in 6D `spacephase'.
[2029] vixra:1909.0397 [pdf]
Resolving Schrodinger's Cat, Wigner's Friend and Frauchiger-Renner's Paradoxes at a Single-Quantum Level
Schrodinger's cat and Wigner's friend paradoxes are analyzed using the `wave-particle non-dualistic interpretation of quantum mechanics at a single-quantum level' and are shown to be non-paradoxes within the quantum formalism. Then, the extended version of Wigner's friend thought experiment, proposed in a recent article titled, ``Quantum theory cannot consistently describe the use of itself'', Nature Communications {\bf 9}, 3711 (2018), by Frauchiger and Renner (FR) is considered. In quantum mechanics, it's well-known that statistically observing a large number of identical quantum systems at some particular quantum state, which results in Born's probability, and merely inferring its presence in the same quantum state with the same probability yield distinct physical phenomena. If this fact is not taken care of while interpreting any experimental outcomes, then FR-type paradoxes pop up. ``What an astonishingly self-consistent theory Quantum Theory is!'' - is explicitly worked out in the case of the FR gedankenexperiment. The present work shows the importance of the single-quantum phenomenon for the non-paradoxical interpretation of statistically observed experimental outcomes.
[2030] vixra:1909.0385 [pdf]
Formula of ζ Odd-Numbers
I tried to find a new expression for zeta odd-numbers. It may be a new expression and is published here. The correctness of this formula was confirmed numerically by WolframAlpha.
[2031] vixra:1909.0384 [pdf]
ζ(4), ζ(6), ..., ζ(108), ζ(110) Are Irrational Numbers
ζ(4), ζ(6), ..., ζ(108), ζ(110) are considered. From these equations, it can be said that ζ(4), ζ(6), ..., ζ(108), ζ(110) are irrational numbers. ζ(112), ζ(114), etc. can also be expressed by these equations. Because I use π², these are irrational numbers. The fact that every even value ζ(2n) is irrational can also be explained by the fact that each even value ζ(2n) is a multiple of π².
[2032] vixra:1909.0372 [pdf]
Dark Energy and the Time Dependence of Fundamental Particle Constants
The cosmic time dependencies of $G$, $\alpha$, $\hbar$ and of Standard Model parameters like the Higgs vev and elementary particle masses are studied in the framework of a new dark energy interpretation. Due to the associated time variation of rulers, many effects turn out to be invisible. However, a rather large time dependence is claimed to arise in association with dark energy measurements, and smaller ones in connection with the Standard Model. Finally, the dark energy equation of state and a formula for the full size of the universe are derived in an appendix.
[2033] vixra:1909.0366 [pdf]
Free Quantum Groups and Related Topics
The unitary group $U_N$ has a free analogue $U_N^+$, and the closed subgroups $G\subset U_N^+$ can be thought of as being the ``compact quantum Lie groups''. We review here the general theory of such quantum groups. We discuss as well a number of more advanced topics, selected for their beauty, and potential importance.
[2034] vixra:1909.0359 [pdf]
Mathematics as Information Compression Via the Matching and Unification of Patterns
This paper describes a novel perspective on the foundations of mathematics: how mathematics may be seen to be largely about 'information compression (IC) via the matching and unification of patterns' (ICMUP). That is itself a novel approach to IC, couched in terms of non-mathematical primitives, as is necessary in any investigation of the foundations of mathematics. This new perspective on the foundations of mathematics reflects the idea that, as an aid to human thinking, mathematics is likely to be consonant with much evidence for the importance of IC in human learning, perception, and cognition. This perspective on the foundations of mathematics has grown out of a long-term programme of research developing the SP Theory of Intelligence and its realisation in the SP Computer Model, a system in which a generalised version of ICMUP -- the powerful concept of SP-multiple-alignment -- plays a central role. The paper shows with an example how mathematics, without any special provision, may achieve compression of information. Then it describes examples showing how variants of ICMUP may be seen in widely-used structures and operations in mathematics. Examples are also given to show how several aspects of the mathematics-related disciplines of logic and computing may be understood as ICMUP. Also discussed is the intimate relation between IC and concepts of probability, with arguments that there are advantages in approaching AI, cognitive science, and concepts of probability via ICMUP. Also discussed is how the close relation between IC and concepts of probability relates to the established view that some parts of mathematics are intrinsically probabilistic, and how that latter view may be reconciled with the all-or-nothing, 'exact', forms of calculation or inference that are familiar in mathematics and logic. There are many potential benefits and applications of the mathematics-as-IC perspective.
[2035] vixra:1909.0334 [pdf]
The Characteristics of Primes
The prime numbers have a very irregular pattern. The problem of finding a pattern in the prime numbers is a long-open problem in mathematics. In this paper, we try to approach the problem axiomatically. We propose some natural properties of prime numbers.
[2036] vixra:1909.0315 [pdf]
ζ(5), ζ(7), ..., ζ(331), ζ(333) Are Irrational Numbers
Using the fact that ζ(3) is an irrational number, I prove that ζ(5), ζ(7), ..., ζ(331) and ζ(333) are irrational numbers. ζ(5), ζ(7), ..., ζ(331) and ζ(333) were confirmed to be in perfect numerical agreement. This is because I created an odd-number formula for ζ, and the formula was created by dividing the odd-number ζ itself into odd and even numbers.
[2037] vixra:1909.0296 [pdf]
On the Estimation of the Economic Value of a Dash Proposal
This paper is concerned with the derivation of a consistent formal method to allow for estimating the economic value of a Dash proposal. Standing on the Currency Fair Value[1] theory as a rational financial pricing model of a currency, the paper will arrive at a straightforward and objective calculation tool, in the form of several simple equations. These will allow Masternode owners and individuals who submit proposals to the Dash treasury to estimate the expected value return of the economic proposals and thus enable them to make more rational decisions. Development of this new model will require differential analysis of the fair value equation, as a basis for the analytical expressions expected by the main target audience. This analysis goes beyond the scope of Dash, and many other currency research efforts may also draw upon it. Keywords: Dash·Proposal·Quantitative Finance·Asset Pricing·Currency Fair Value·Investing
[2038] vixra:1909.0261 [pdf]
Fine-Structure Constant from Sommerfeld to Feynman
The fine-structure constant, which determines the strength of the electromagnetic interaction, is briefly reviewed beginning with its introduction by Arnold Sommerfeld and also includes the interest of Wolfgang Pauli, Paul Dirac, Richard Feynman and others. Sommerfeld was very much a Pythagorean and sometimes compared to Johannes Kepler. The archetypal Pythagorean triangle has long been known as a hiding place for the golden ratio. More recently, the quartic polynomial has also been found as a hiding place for the golden ratio. The Kepler triangle, with its golden ratio proportions, is also a Pythagorean triangle. Combining classical harmonic proportions derived from Kepler’s triangle with quartic equations determine an approximate value for the fine-structure constant that is the same as that found in our previous work with the golden ratio geometry of the hydrogen atom. These results make further progress toward an understanding of the golden ratio as the basis for the fine-structure constant.
[2039] vixra:1909.0216 [pdf]
Einstein vs Bell? Bell's Inequality Refuted, Bell's Error Corrected.
Bell's inequality is widely regarded as a profound proof that nature is nonlocal, not Einstein-local. Against this, and supporting Einstein, we refute Bell's inequality and correct his error. We thus advance the principle of true-local-realism (TLR); the union of true-locality (no beables move superluminally, after Einstein) and true-realism (some beables change interactively, after Bohr). Importantly, for STEM teachers: we believe our commonsense results require no knowledge of quantum mechanics. Let us see.
[2040] vixra:1909.0130 [pdf]
Calculating the Hawking Temperature in a Variant of the Near Horizon Metric
A modified version of the near horizon metric is introduced, that puts the near horizon metric in the same form as one of the most commonly-used metric variants of Rindler space. The metric is then used to calculate the Hawking temperature, using the WKB tunneling approximation.
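For orientation, the standard result that any such WKB/near-horizon calculation must reproduce (textbook physics, independent of the paper's particular metric variant): the near-horizon geometry is Rindler space with acceleration given by the surface gravity $\kappa$, and the associated temperature is

```latex
T_{H} = \frac{\hbar\,\kappa}{2\pi c\,k_{B}},
\qquad
\kappa = \frac{c^{4}}{4GM}
\;\Longrightarrow\;
T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}}
```

for a Schwarzschild black hole of mass $M$.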
[2041] vixra:1909.0118 [pdf]
Electronic Data Transmission at Three Times the Speed of Light and Data Rates of 2000 Bits Per Second Over Long Distances in Buffer Amplifier Chains
During the experimental testing of basic assumptions in electrical engineering, it has become apparent that ultra-low-frequency (ULF) voltage signals in coaxial cables with a length of only a few hundred meters propagate significantly faster than light. The starting point for this discovery was an experiment in which a two-channel oscilloscope was connected to a signal source via both a short and a long coaxial cable. It was observed that the delay between the two channels for short cables and low frequencies can be so small that the associated phase velocity exceeds the speed of light by one order of magnitude. To test whether the discovered effect can be exploited to transmit information over long distances, a cable was examined in which the signal was refreshed at regular distances by buffer amplifiers. The results show that such a setup is indeed suitable for transmitting wave packets at three times the speed of light and bit rates of approximately 2 kbit/s over arbitrary distances. The statement that information cannot propagate faster than light seems to be false and can no longer be sustained.
[2042] vixra:1909.0080 [pdf]
Division by Zero Because Next Infinity is Zero
tan(π/2) = 0, 1/0 = 0, z/0 = 0. When I saw these expressions, I was surely suspicious. But I knew intuitively that next infinity is zero. For me, infinity and zero were equal, and that is true now. The universe did not start with the Big Bang. The universe has existed for an infinite amount of time, and has repeated an infinite number of Big Bangs. In other words, the universe is a repetition of "next infinity is zero".
[2043] vixra:1909.0074 [pdf]
Deep Reinforcement Learning for Visual Question Answering
The end-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet most current approaches cast human-machine dialogue management as a supervised learning problem, aiming to predict the next utterance of a participant given the full history of the dialogue. This view is too simplistic to capture the intrinsic planning problem inherent in dialogue, as well as its grounded nature, which makes the context of a dialogue larger than its history alone. This is why only chit-chat and question-answering tasks have so far been handled with end-to-end architectures. In this report, we present a deep reinforcement learning method for optimizing task-oriented dialogues, based on the policy gradient algorithm. This approach is tested on a dataset of 120,000 dialogues collected via Mechanical Turk and provides encouraging results for solving both the problem of natural dialogue generation and the task of discovering a specific object in a complex image.
[2044] vixra:1909.0059 [pdf]
If Riemann’s Zeta Function is True, it Contradicts Zeta’s Dirichlet Series, Causing "Explosion". If it is False, it Causes Unsoundness.
Riemann's "analytic continuation" produces a second definition of the Zeta function, that Riemann claimed is convergent throughout the half-plane $s \in \mathbb{C}$, $\text{Re}(s)\le1$, (except at $s=1$). This contradicts the original definition of the Zeta function (the Dirichlet series), which is proven divergent there. Moreover, a function cannot be both convergent and divergent at any domain value. In other mathematics conjectures and assumed-proven theorems, and in physics, the Riemann Zeta function (or the class of $L$-functions that generalizes it) is assumed to be true. Here the author shows that the two contradictory definitions of Zeta violate Aristotle's Laws of Identity, Non-Contradiction, and Excluded Middle. The Law of Non-Contradiction is an axiom of classical and intuitionistic logics, and an inherent axiom of Zermelo-Fraenkel set theory (which was designed to avoid paradoxes). If Riemann's definition of Zeta is true, then the Zeta function is a contradiction that causes deductive "explosion", and the foundation logic of mathematics must be replaced with one that is paradox-tolerant. If Riemann's Zeta is false, it renders unsound all theorems and conjectures that falsely assume that it is true. Riemann's Zeta function appears to be false, because its derivation uses the Hankel contour, which violates the preconditions of Cauchy's integral theorem.
[2045] vixra:1909.0001 [pdf]
Predictions for Elementary Particles and Explanations for Data About Dark Matter, Dark Energy, and Galaxy Formation
We suggest descriptions for new elementary particles, dark matter, and dark energy. We use those descriptions to explain data regarding dark matter effects, dark energy effects, and galaxy formation. Our mathematics-based modeling, descriptions, and explanations embrace and augment traditional physics theory modeling.
[2046] vixra:1908.0630 [pdf]
Determining Satisfiability of 3-Sat in Polynomial Time
In this paper, we provide a polynomial time (and space) algorithm that determines satisfiability of 3-SAT. The complexity analysis for the algorithm takes no efficiencies into account and yet provides a bound low enough that efficient versions are practical with respect to today's hardware. We accompany this paper with a serial version of the algorithm without non-trivial efficiencies (link: polynomial3sat.org).
[2047] vixra:1908.0613 [pdf]
Evidence Universal Gravitation in Evidence Theory
Since the introduction of the law of universal gravitation, it has been widely used in the field of natural sciences and theoretical exploration. In other disciplines, based on the law of universal gravitation, some scholars have proposed universal gravitation search algorithms, swarm intelligence optimization algorithms and fuzzy control. However, no research has applied the law of universal gravitation to the field of evidence theory. In this paper, we present for the first time the concept of evidence universal gravitation. In the evidence universal gravitation formula we define the evidence gravitation parameter and the evidence quality generation algorithm. The evidence universal gravitation formula satisfies some basic properties. This paper gives some numerical examples to further illustrate the significance of evidence universal gravitation. In addition, because conflict management is an open question, the measurement of conflict has not been reasonably resolved. In this paper, we apply evidence universal gravitation to conflict processing, and illustrate its wide applicability through the comparison of numerical examples.
[2048] vixra:1908.0575 [pdf]
There Is Only Charge!
A picture of the universe is presented where electromagnetic charge accounts for all observed phenomena. This picture is based on the Heisenberg relations of quantum mechanics. All the results obtained are consistent with EM charge being responsible for both what we classically identify as mass, and for the interactions required to keep intact the nucleons, and the nuclei of atoms. The approach is grounded in both quantum mechanics and general relativity.
[2049] vixra:1908.0571 [pdf]
The Goedel Theorem and the Lorentz Contraction
The power spectral formula of the Cherenkov radiation of a system of two equal charges is derived in the framework of source theory. The distance between the charges is supposed to be relativistically contracted, which manifests itself in the spectral formula. Knowledge of the spectral formula can then be used to verify the Lorentz contraction of relativistic length. A feasible experiment for the verification of the Lorentz contraction is suggested.
[2050] vixra:1908.0562 [pdf]
A Performance Study of RDF Stores for Linked Sensor Data
The ever-increasing amount of Internet of Things (IoT) data emanating from sensor and mobile devices is creating new capabilities and unprecedented economic opportunity for individuals, organizations and states. In comparison with traditional data sources, and in combination with other useful information sources, the data generated by sensors also provides a meaningful spatio-temporal context. This spatio-temporal correlation makes the sensor data even more valuable, especially for applications and services in Smart City, Smart Health-Care, Industry 4.0, etc. However, due to the heterogeneity and diversity of these data sources, their potential benefits will not be fully achieved without suitable means to support interlinking and exchanging this kind of information. This challenge can be addressed by adopting the suite of technologies developed in the Semantic Web, such as the Linked Data model and SPARQL. When using these technologies, and with respect to an application scenario that requires managing and querying a vast amount of sensor data, the task of selecting a suitable RDF engine that supports spatio-temporal RDF data is crucial. In this paper, we present our empirical studies of applying an RDF store to Linked Sensor Data. We propose an evaluation methodology and metrics that allow us to assess the readiness of an RDF store. An extensive performance comparison of the system-level aspects of a number of well-known RDF engines is also given. The results obtained can help to identify the gaps and shortcomings of current RDF stores and related technologies for managing sensor data, which may be useful to others in their future implementation efforts.
[2051] vixra:1908.0556 [pdf]
The Space of Unsolvable Tasks. Formulation of the Problem. or Anti-Tank Hyper-Hedgehogs in N-Dimensional Space.
With the narrow specialization of scientists, the development of science leads to rapid growth of the space of unsolvable tasks, which grows faster than the area of existing scientific knowledge. The practical development of the field of unsolvable tasks is possible only through the efforts of the universal scientists of the future, who must have a high level of scientific knowledge in several general scientific disciplines.
[2052] vixra:1908.0553 [pdf]
On Axioms in Normative Ethics (規範倫理学における公理について)
I wrote about the idea that absolute ethical laws could be found in an axiomatic way in normative ethics, and added a self-objection and an improvement against it. Before that, I reviewed the concept of axiom and the major ideas of normative ethics.
[2053] vixra:1908.0542 [pdf]
Envelopes in Function Spaces with Respect to Convex Sets
We discuss the existence of an envelope of a function from a certain subclass of a function space. Here we restrict ourselves to the model space of functions locally integrable with respect to the Lebesgue measure in a domain of finite-dimensional Euclidean space.
[2054] vixra:1908.0538 [pdf]
Explicit Analysis of Spin-1/2 System, Young's Double-slit Experiment and Hanbury Brown-Twiss Effect Using the Non-Dualistic Interpretation of Quantum Mechanics
The main ideas of the wave-particle non-dualistic interpretation of quantum mechanics are elucidated using two well-known examples, viz., (i) a spin-1/2 system in the Stern-Gerlach experiment and (ii) Young's double-slit experiment, representing the cases of observables with discrete and continuous eigenvalues, respectively. It's proved that only Born's rule can arise from quantum formalism as a limiting case of the relative frequency of detection. Finally, non-duality is used to unambiguously explain Hanbury Brown-Twiss effect, at the level of individual quanta, for the two-particle coincidence detection.
[2055] vixra:1908.0521 [pdf]
Deriving the Electromagnetic Radiation:(1) Photon, (2) Anti-Photon, (3) Unsuccessful Radiation Through the Mutual Energy Principle and Self-Energy Principle
The solutions of the Maxwell equations include both retarded and advanced waves. A few famous scientists have supported the concept of the advanced wave. Wheeler and Feynman introduced absorber theory in 1945, which holds that absorbers can send advanced waves. Absorber theory is based on the action-at-a-distance theory of Schwarzschild, Tetrode and Fokker, in which an electric current sends a half-retarded and a half-advanced wave. John Cramer introduced the transactional interpretation of quantum mechanics, in which the retarded wave and the advanced wave perform a handshake. What is the advanced wave in electromagnetic field theory? In 1960, Welch introduced a reciprocity theorem in arbitrary time domain that involves the advanced wave. In 1963, V.H. Rumsey mentioned a method to transform the Lorentz reciprocity theorem into a new formula. In early 1987, Shuang-ren Zhao (this author) introduced the mutual energy theorem in the frequency domain. At the end of 1987, de Hoop introduced the time-domain cross-correlation reciprocity theorem. All of these can be seen as the same theorem in different domains: the Fourier domain or the time domain. This theorem can be applied to prove that the directivity diagram of a receiving antenna equals its directivity diagram when it is used as a transmitting antenna. According to this theory, the receiving antenna sends an advanced wave. In a reciprocity theorem, the two fields need not both be real, so the reciprocity theorems of Welch, Rumsey and de Hoop need not claim that the advanced wave is a physical wave. However, when Shuang-ren Zhao calls it an energy theorem, the two waves in it, the retarded wave and the advanced wave, must both be real physical waves. Thirty years after the mutual energy theorem was published, Shuang-ren Zhao re-entered this field. First, the mutual energy flow theorem is derived.
The mutual energy flow is produced by the superposition of the retarded wave and the advanced wave, and it can carry energy from the transmitting antenna to the receiving antenna. Textbooks on electromagnetic fields tell us that energy is carried by the energy flow of the Poynting vector, which is the self-energy flow. Hence there is a question: is the energy of the electromagnetic field transferred by the mutual energy, by the self-energy, or by both? This author found that only the former offers a self-consistent theory, and proved that transfer by the self-energy, or by both, conflicts with the energy conservation law and hence cannot be accepted. If the energy is transferred by the mutual energy, the axioms of electromagnetic field theory need to be modified. Hence, the mutual energy principle is introduced to replace the Maxwell equations as the axioms. The mutual energy principle can be derived from the Maxwell equations, and the Maxwell equations can be derived from the mutual energy principle; however, the mutual energy principle is not equivalent to the Maxwell equations. Starting from the mutual energy principle, the solution must consist of two groups of Maxwell equations existing together: one group corresponds to the retarded wave, the other to the advanced wave, and the two waves must be synchronized to produce the mutual energy flow. The conflict of the Maxwell equations with the energy conservation law further suggests that there exist time-reversal waves and a self-energy principle. The self-energy principle tells us that the self-energy flow, i.e. the energy flow corresponding to the Poynting vector, does not carry or transfer energy, because there exist two time-reversal waves corresponding to the retarded wave and the advanced wave, and the energy flows of the time-reversal waves cancel the self-energy flows of the retarded wave and the advanced wave.
This also tells us that an electromagnetic field comprises four waves: the retarded wave, the advanced wave, and the two time-reversal waves corresponding to them. The self-energy flows of these four waves all cancel, but the mutual energy flow of the retarded wave and the advanced wave does not disappear; the energy of the electromagnetic field is transferred by the mutual energy flow. Photons can be explained by mutual energy flows. There is also a time-reversal mutual energy flow, which can wipe out a half-photon or partial photon; anti-particles can also be explained by the time-reversal mutual energy flow. This theory has been extended to quantum mechanics: every particle, for example the electron, likewise consists of 4 waves and 6 energy flows, and there is a mutual energy principle and a self-energy principle corresponding to the Schrödinger equation. In this article, 3 modes of radiation are introduced: the photon, the anti-photon, and the unsuccessful radiation. A photon consists of one mutual energy flow, the self-energy flows of the retarded and advanced waves, and the self-energy flows of the time-reversal waves; all the self-energy flows cancel, so only the mutual energy flow survives. An anti-photon consists of the time-reversal mutual energy flow together with the time-reversal and ordinary self-energy flows; again all the self-energy flows cancel, and only the time-reversal mutual energy flow survives. The last mode is the unsuccessful radiation: a retarded wave is sent out but meets no advanced wave with which to handshake and synchronize, so the energy is returned and the radiation is unsuccessful. The photon transfers the radiation energy; the anti-photon is responsible for eliminating the half-photon or partial photon; and the unsuccessful radiation is the necessary result of the source and the sink both sending waves, one retarded and the other advanced.
[2056] vixra:1908.0519 [pdf]
Derive the Huygens Principle Through the Mutual Energy Flow Theorem
Absorber theory was published in 1945 and 1949 by Wheeler and Feynman. In electromagnetic field theory, W. J. Welch introduced the reciprocity theorem in 1960. V.H. Rumsey mentioned a method to transform the Lorentz reciprocity theorem into a new formula in 1963. In early 1987, Shuang-ren Zhao (this author) introduced the mutual energy theorem in the frequency domain; at the end of 1987, Adrianus T. de Hoop introduced the time-domain cross-correlation reciprocity theorem. All these can be seen as the same theorem in different domains: the Fourier domain or the time domain. After 30 years of silence on this topic, this author has introduced the mutual energy principle and the self-energy principle, which update Maxwell's electromagnetic field theory and Schrödinger's quantum mechanics. According to the mutual energy principle, the energy of all particles is transferred through mutual energy flows. The mutual energy flow is the inner product of the retarded wave and the advanced wave, and it satisfies the mutual energy flow theorem. The retarded wave is the action the emitter gives to the absorber; the advanced wave is the reaction the absorber gives to the emitter. In this article the author derives the Huygens principle from the mutual energy flow theorem. The bra, ket and unit operator of quantum mechanics are applied to an inner-product space defined on a 2D surface instead of a 3D volume.
[2057] vixra:1908.0517 [pdf]
The Relation Between the Particle of the Mutual Energy Principle and the Wave of Schrödinger Equation
This author has replaced the Maxwell equations with the corresponding mutual energy principle and self-energy principle as the axioms of electromagnetic field theory. The advantage of doing this is that it overcomes a difficulty of the Maxwell equations: their conflict with the energy conservation law. The same conflict also exists for the Schrödinger equation in quantum mechanics. This author would like to introduce the mutual energy principle into quantum mechanics, but met the difficulty that the Schrödinger equation has no advanced solution. This difficulty is overcome by introducing a negative radius. After this, the whole mutual energy theory can be extended from fields satisfying the Maxwell equations to fields satisfying the Schrödinger equation. The Schrödinger equation can also be derived from the corresponding mutual energy principle; however, this does not mean the two sides are equivalent. The mutual energy principle cannot produce a single solution of the Schrödinger equation, only a pair of solutions: one retarded wave and one advanced wave, which must be synchronized. The solutions of the mutual energy principle are in accordance with the theory of action-at-a-distance and the absorber theory: an action always takes place between two objects, for example a source (emitter) and a sink (absorber). The mutual energy principle tells us that a particle is an action and a reaction between the source and the sink, whereas a wave satisfying the Schrödinger equation needs only one source or one sink. From the mutual energy principle it is easy to derive the mutual energy theorem, the mutual energy flow theorem, and a corresponding Huygens-Fresnel principle. Together these resolve the wave-particle duality paradox.
[2058] vixra:1908.0516 [pdf]
Developing an Integrative Framework of Artificial Intelligence and Blockchain for Augmenting Smart Governance
Government systems are often slow, opaque and prone to corruption. The public benefits system, in particular, suffers from slowness and bureaucracy. In this paper, we propose a system that utilizes blockchain and artificial intelligence techniques for governance, enabling the government to function more efficiently and transparently, thus increasing people's trust in their government and in democracy. The public Ethereum distributed ledger (MainNet) is the backbone of the proposed system. A public-private keypair is generated using elliptic-curve cryptography with SHA-256. Each transaction is validated by P2SH, and consensus is achieved through a Proof-of-Work algorithm. A smart contract encodes the algorithm and enforces constraints on users' activity. Artificial intelligence is used to analyze the data wherever necessary, and the network's output serves as a trigger for activating the smart contract, which can be connected via IoT services and automation devices. This can make government contracts more transparent. Example use cases are automatic payments based on achieved deadlines and public consensus on government policies. Other applications include a better-functioning public benefits system that allows the government to provide the public with incentives directly rather than relying on middlemen. Decentralization via blockchain is a complete end-to-end solution for democratizing the current systems.
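The abstract's concrete stack (Ethereum, P2SH, smart contracts) is not reproduced here. As a minimal illustration of just the consensus step it names, here is a toy hash-puzzle Proof-of-Work sketch; the payload and difficulty level are hypothetical.

```python
import hashlib

def mine(payload: bytes, difficulty: int = 2) -> int:
    """Brute-force search for a nonce such that SHA-256(payload || nonce)
    starts with `difficulty` zero bytes -- the costly half of Proof-of-Work."""
    target = b"\x00" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(payload: bytes, nonce: int, difficulty: int = 2) -> bool:
    """Verification is a single hash -- the cheap half of Proof-of-Work."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return digest.startswith(b"\x00" * difficulty)

nonce = mine(b"transaction batch #1")
```

The asymmetry (expensive `mine`, one-hash `verify`) is what lets every node cheaply check work that only one node had to perform.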
[2059] vixra:1908.0511 [pdf]
Affine Balayage of Measures in Domains of the Complex Plane with Applications to Holomorphic Functions
Let u and M be two non-trivial subharmonic functions in a domain D in the complex plane. We investigate two related but different problems. The first is to find conditions on the Riesz measures of the functions u and M under which there exists a non-trivial subharmonic function h on D such that u+h < M. The second is the same question, but for a harmonic function h on D. The answers to these questions are given in terms of the special affine balayage of measures introduced in our recent previous works. Applications of this technique concern the description of the distribution of zeros of holomorphic functions f on the domain D satisfying the restriction |f| < exp M.
[2060] vixra:1908.0509 [pdf]
On the Smallest Volume Scale and Dark Energy
In this new study we want to propose an updated heuristic model to compute and to interpret the dark energy content of our universe. To this purpose we include the mass-energy of the static gravitational field in Newtonian gravity, finding agreement with general relativity at large scales. We then compute its effect at very small distances also including quantum effects. From this analysis, we obtain an estimation of the smallest volume in empty space. Our result is compatible with loop quantum gravity and this enables the embedding in it. After that we show, how this can be used to compute a natural energy cutoff $k_c$ for all quantum fields and study its utility in computing the dark energy density and its implications on the content of fermionic and bosonic elementary fields. Indeed for the vacuum equation of state $w\,{=}\,p_{vac}/\rho_{vac}$ we obtain an expression depending on $\Delta N\,{=}\, N_f - N_b$, which represents the difference between the number of species of fermions and bosons. Finally comparing our result with the measured value of $w$, we discuss general constraints on the field content beyond the Standard Model of the elementary particles.
[2061] vixra:1908.0489 [pdf]
The Mathematical Expressions of Quranic Exegeses and the Mathematical Definition of the Quranic Correctness
I succeeded in giving mathematical expressions for any correct Quranic exegesis and in defining Quranic correctness as the unique existence of the Tahara I function. In a precise mathematical sense the expressions and the definition are ill-defined; however, they might have meaning for proving Quranic correctness.
[2062] vixra:1908.0486 [pdf]
Minimizing Acquisition, Maximizing Inference: A Demonstration on Print Error Detection
Is it possible to detect a feature in an image without ever being able to look at it? Images are known to be very redundant in the spatial domain. When transformed to bases like the Discrete Cosine Transform (DCT) or wavelets, they acquire a sparser (more effective) representation. Compressed Sensing is a technique that proposes simultaneous acquisition and compression of a signal by taking very few random linear measurements (M) instead of uniform samples at more than twice the bandwidth frequency (Shannon-Nyquist theorem). The quality of reconstruction directly relates to M, which should be above a certain threshold (determined by the level of sparsity, k) for a reliable recovery. Since these measurements can non-adaptively reconstruct the signal to a faithful extent using purely analytical methods like Basis Pursuit, Matching Pursuit, iterative thresholding, etc., we can be assured that the compressed samples contain enough information about any relevant macro-level feature contained in the (image) signal. Thus, if we deliberately acquire an even lower number of measurements - low enough to thwart the possibility of a comprehensible reconstruction, but high enough to infer whether a relevant feature exists in an image - we can achieve accurate image classification while preserving the image's privacy. Through the print error detection problem, we demonstrate that such a system can be implemented in practice.
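The abstract's actual pipeline (random Gaussian measurements plus Basis Pursuit-style recovery) is not reproduced here. As a deterministic toy variant of the same acquire-little, infer-anyway idea, the sketch below uses bit-mask linear measurements of a 1-sparse "defect" signal, for which recovery is exact without ever storing the full image; the signal size and defect position are made up.

```python
# Toy compressed acquisition of a 1-sparse signal (one print defect).
# Real compressed sensing uses random measurement matrices and solvers
# such as Basis Pursuit; this bit-mask variant keeps the core idea --
# few linear measurements, no full image -- while making recovery exact.

N = 64                      # signal length
M = N.bit_length() - 1      # 6 measurements, far fewer than N

x = [0.0] * N
x[17] = 3.0                 # one defect at an unknown position

# Measurement i sums the entries whose index has bit i set, i.e. y = A @ x
# for a fixed 0/1 measurement matrix A with M rows.
y = [sum(x[j] for j in range(N) if (j >> i) & 1) for i in range(M)]

# Because x is 1-sparse, the nonzero pattern of y spells out the defect's
# index in binary: feature localization without reconstructing the image.
position = sum(1 << i for i in range(M) if y[i] != 0.0)
magnitude = max(y) if any(y) else 0.0
```

Six numbers stand in for a 64-sample signal here; the random-matrix schemes the abstract describes trade this exactness for robustness to arbitrary sparse supports and noise.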
[2063] vixra:1908.0484 [pdf]
Cutting the Gordian Knot of Theoretical Physics (The Unification of Gravitational and Maxwellian Fields)
It concerns the unification of the Maxwell field and the gravitational field without compromise, consisting of: 1. a derivation of the general equations of continuously differentiable fluctuating 3-dimensional vector fields, which turn out to be generalized Maxwell equations; 2. identifying the Einstein space as the result of deforming a Euclidean space; 3. identifying the fluctuating hypersurface of the Einstein space as a gravitational wave propagating with the velocity of light as seen from an observer space, or rather coordinate space. This leads to: 1. the quantitative unification of the Maxwell field and the gravitational field; 2. the facilitation of quantizing gravitational fields; 3. considerations of general gravitational waves from a new perspective. With the described unification, electromagnetism is led directly back to the most fundamental terms of physics, space and time. Last but not least, the importance of the Einstein equations for microphysics is proved.
[2064] vixra:1908.0461 [pdf]
Newton Did not Invent or Use the so-Called Newton's Gravitational Constant G. Big G is not Needed in Physics; it Has Mainly Caused Confusion!
Newton did not invent or use the so-called Newton's gravitational constant G. Newton's original gravity formula was F = Mm/r^2 and not F = GMm/r^2. In this paper, we will show how a series of major gravity phenomena can be calculated and predicted without the gravitational constant. This is to some degree well known, at least among those who have studied a significant amount of the older literature on gravity. However, to understand gravity at a deeper level, still without G, one needs to trust Newton's formula. Only when we combine Newton's observation that matter and light ultimately consist of hard indivisible particles with new insight into atomism can we truly begin to understand gravity. This leads to a quantum gravity theory that is unified with quantum mechanics, in which there is no need for G and even no need for Planck's constant. We claim that two mistakes have been made in physics which have held back progress towards a unified quantum gravity theory. First, it has been common practice to consider Newton's gravitational constant almost holy and untouchable. Thus we have neglected an important aspect of mass, namely the indivisible particle that Newton also held in high regard. Second, we have built our version of quantum mechanics around the de Broglie wavelength rather than the Compton wavelength. We claim the de Broglie wavelength is merely a mathematical derivative of the Compton wavelength.
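Independent of the paper's own theory, the starting observation is standard: many gravity predictions involve only the product GM, the standard gravitational parameter mu measured directly from orbital tracking, never G on its own. A small illustration, using the published mu for Earth (the functions and numbers below are illustrative, not the paper's):

```python
import math

# Earth's standard gravitational parameter mu = GM, known from orbital
# tracking to about ten significant digits -- far better than G or M alone.
MU_EARTH = 3.986004418e14   # m^3 / s^2

def orbital_period(a: float) -> float:
    """Kepler's third law, T = 2*pi*sqrt(a^3 / mu): no separate G needed."""
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

def surface_gravity(r: float) -> float:
    """g = mu / r^2: again only the product GM enters."""
    return MU_EARTH / r**2

# Circular orbit at ~400 km altitude (roughly the ISS): period ~92 minutes.
T = orbital_period(6.371e6 + 4.0e5)
g = surface_gravity(6.371e6)   # ~9.8 m/s^2 at Earth's surface
```

G by itself only becomes necessary when one wants to split mu into G and M separately, which is exactly the step these calculations never require.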
[2065] vixra:1908.0446 [pdf]
Predictions for Elementary Particles and Explanations for Astrophysics Data
We suggest descriptions for new elementary particles, dark matter, and dark energy. We use those descriptions to explain data regarding dark matter effects, dark energy effects, and galaxy evolution. We use mathematics-based modeling that can feature objects and de-emphasize motion. The modeling, descriptions, and explanations embrace and augment traditional physics theory modeling.
[2066] vixra:1908.0443 [pdf]
N-SAT in P with Non-Coherent Space Factorization
We have known since Cook that Boolean satisfiability problems with at least three literals in each clause are in NP and are NP-complete. By proving that 3-SAT (or more) is in P, a corollary proves that P = NP. In this document, we explain how to find a SAT problem solution in polynomial time.
[2067] vixra:1908.0436 [pdf]
Balayage of Measures and Their Potentials: Duality Theorems and Extended Poisson-Jensen Formula
We investigate some properties of balayage of measures and their potentials on domains or open sets in finite-dimensional Euclidean space. Main results are Duality Theorems for potentials of balayage of measures, for Arens-Singer and Jensen measures and potentials, and also a new extended and generalized variant of Poisson-Jensen formula for balayage of measure and their potentials.
[2068] vixra:1908.0427 [pdf]
The Riemann Hypothesis Proof
We take the integral representation of the Riemann zeta function over the entire complex plane, except for a pole at 1. We then derive an equivalent of the Riemann Hypothesis by studying its monotonicity properties.
[2069] vixra:1908.0422 [pdf]
Replication of the Keyword Extraction Part of the Paper "Without the Clutter of Unimportant Words": Descriptive Keyphrases for Text Visualization
"Keyword Extraction" refers to the task of automatically identifying the most relevant and informative phrases in natural language text. As we are deluged with large amounts of text data in many different forms and content - emails, blogs, tweets, Facebook posts, academic papers, news articles - the task of "making sense" of all this text by somehow summarizing them into a coherent structure assumes paramount importance. Keyword extraction - a well-established problem in Natural Language Processing - can help us here. In this report, we construct and test three different hypotheses (all related to the task of keyword extraction) that take us one step closer to understanding how to meaningfully identify and extract "descriptive" keyphrases. The work reported here was done as part of replicating the study by Chuang et al. [3].
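The replicated study's actual models are not reproduced here. As a minimal baseline for the keyword extraction task the abstract describes, here is a tf-idf scoring sketch; the toy corpus and parameters are illustrative.

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=3):
    """Score the terms of one document by tf-idf against the corpus
    and return the top_k highest-scoring terms."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in tokenized for t in set(doc))
    # Term frequency within the target document.
    tf = Counter(tokenized[doc_index])
    scores = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

docs = [
    "keyword extraction identifies informative phrases in text",
    "deep learning models learn patterns from large text data",
    "stock price movements follow patterns in financial data",
]
top = tfidf_keywords(docs, 0)
```

Terms that occur across many documents ("text", "in") get an idf near zero and drop out, which is the "clutter of unimportant words" effect the replicated paper's title refers to.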
[2070] vixra:1908.0420 [pdf]
A Final Proof of The abc Conjecture
In this paper, we consider the abc conjecture. As the conjecture c < rad^2(abc) is more accessible, we first give the proof of a modified conjecture, c < 2 rad^2(abc). The factor 2 is important for the proof of the new conjecture, which represents the key to the proof of the main conjecture. Secondly, the proof of the abc conjecture is given for \epsilon \geq 1, then for \epsilon \in ]0,1[. We choose the constant K(\epsilon) as K(\epsilon)=2e^{\frac{1}{\epsilon^2}} for \epsilon \geq 1 and K(\epsilon)=e^{\frac{1}{\epsilon^2}} for \epsilon \in ]0,1[. Some numerical examples are presented.
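For reference, the conjecture the abstract addresses, in its standard form, together with the explicit constants the abstract proposes:

```latex
\textbf{abc conjecture (explicit form considered here).}
For every $\varepsilon > 0$ there is a constant $K(\varepsilon)$ such that
for all coprime positive integers $a, b, c$ with $a + b = c$,
\[
  c < K(\varepsilon)\,\operatorname{rad}(abc)^{1+\varepsilon},
  \qquad
  \operatorname{rad}(n) = \prod_{p \mid n} p .
\]
The paper proposes
\[
  K(\varepsilon) =
  \begin{cases}
    2\,e^{1/\varepsilon^{2}}, & \varepsilon \ge 1,\\
    e^{1/\varepsilon^{2}},    & 0 < \varepsilon < 1.
  \end{cases}
\]
```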
[2071] vixra:1908.0410 [pdf]
Sagnac Effect on Inertial Motion
Wang, Zheng, and Yao carried out an experiment in 2004 to determine whether the Sagnac effect applies to inertial motion. Their experiment showed that there was indeed a phase difference between the two light beams passing through the same optical fiber in inertial motion. However, the focus of their experiment was on the rest frame of the light source. They did not realize that the time difference derived from the phase difference should exist in all reference frames. The difference in the elapsed time taken by the two light beams to pass through the linear fiber segment, in the rest frame of the segment, effectively corresponds to a speed difference between the two light beams. The Sagnac effect in inertial motion provides precise experimental evidence that the speed of light differs in different reference frames.
[2072] vixra:1908.0403 [pdf]
Unishox Guaranteed Compression for Short Unicode Strings
A new hybrid encoding method is proposed with which short unicode strings can be compressed using context-aware pre-mapped codes and delta coding, resulting in surprisingly good ratios.
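Unishox's actual pre-mapped codes are not reproduced here. To illustrate only the delta-coding component the abstract names, here is a sketch over unicode codepoints: within one script's block, successive codepoints differ by small integers, which a later entropy-coding stage can pack into few bits.

```python
def delta_encode(s):
    """Store the first codepoint absolutely and every later one as the
    difference from its predecessor.  Inside a single Unicode block the
    differences are small integers, cheap for downstream entropy coding."""
    out, prev = [], 0
    for ch in s:
        cp = ord(ch)
        out.append(cp - prev)
        prev = cp
    return out

def delta_decode(deltas):
    """Invert delta_encode by running a prefix sum over the differences."""
    chars, prev = [], 0
    for d in deltas:
        prev += d
        chars.append(chr(prev))
    return "".join(chars)

sample = "привет"               # Cyrillic: codepoints clustered near U+0440
encoded = delta_encode(sample)  # one large value, then small deltas
```

After the first absolute codepoint, every delta for this sample fits comfortably in a single byte, whereas the raw codepoints all exceed 1000.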
[2073] vixra:1908.0393 [pdf]
Compton Scattering Not Well Verified
Compton scattering cannot yet be said to have been verified beyond doubt. The only verification experiment performed remains A.H. Compton's original experiment of 1922. To date, no other experiment has been done to corroborate the 1922 findings. There are modern experiments, done in the laboratories of today's universities, that verify Compton scattering with γ-ray sources from radionuclides, where the energies of the rays are measured with NaI scintillation detectors or germanium detectors. Such experiments are shown to be invalid, as the apparent agreement with the theory is entirely due to the calibration of the detectors and nothing besides. Furthermore, verifying that the inverse of the energy, 1/E, varies linearly with (1 − cos θ) is insufficient as a verification of the full Compton scattering formula for wavelength changes. A proper verification requires verifying the wavelength formula, with the wavelength measured directly.
[2074] vixra:1908.0307 [pdf]
On Certain Pi_{q}-Identities of W. Gosper
In this paper we employ some knowledge of modular equations of degree 5 to confirm several of Gosper's Pi_{q}-identities. As a consequence, a q-identity involving Pi_{q} and Lambert series, conjectured by Gosper, is proved. As an application, we confirm an interesting q-trigonometric identity of Gosper.
[2075] vixra:1908.0306 [pdf]
Causality Between Events with Space-Like Separation
Since the first part of the twentieth century, it has been maintained that faster-than-light motion could produce time travel into the past with its accompanying causality-violating paradoxes. Part of the problem is that the Lorentz transformation (LT) presumes that time is isotropic, as does the Minkowski diagram based upon it, whereas entropy and the arrow of time govern in the real world. This paper demonstrates that time travel into the past and causality violation occur only when speeds "greater than" infinity are involved, and this absurdity is refuted by studying relativistic dynamics in certain scenarios that purportedly lead to causality violation and allowing it to instruct us in limiting the LT in certain other scenarios. Thus there is no justification for the block universe concept and the implication that the past is "back there somewhere" and can be accessed from the present, thus preventing causality paradoxes.
[2076] vixra:1908.0286 [pdf]
A New Approach to Prove the Riemann Hypothesis Using a New Operator
In this note we present a new approach to proving the Riemann hypothesis, one of the most important open problems in pure mathematics, using a new operator derived from unitary operator groups acting on the Riemann-Siegel function, together with the partition function for the Hamiltonian operator. The key idea is to compute the compositional inverse of the Riemann zeta function at $s=-\frac12$: we show that $\zeta^{-1}(-\frac12) = \zeta(\frac12+i \beta)=0$ for some $\beta >0$.
[2077] vixra:1908.0242 [pdf]
Mass Interaction Principle as an Origin of Quantum Mechanics
This paper proposes the mass interaction principle (MIP): particles are subjected to random frictionless quantum Brownian motion through collisions with the space-time particles (STP) ubiquitous in spacetime. The change in a particle's action during each collision is an integer multiple of the Planck constant $h$. The motion of particles under the action of the STP is a quantum Markov process. Under this principle, we infer that the statistical inertial mass of a particle is a statistical property characterizing the difficulty of particle diffusion in spacetime. Within the framework of the MIP, all the essential features of quantum mechanics are derived, which shows that the MIP is an origin of quantum mechanics. Through the random collisions between the STP and matter particles, the STP supervise and shepherd all microscopic behaviors of matter particles. More importantly, we address a world-class puzzle, the anomalous magnetic moment of the muon in the latest experiment, and at the same time give a self-consistent explanation of the muon lifetime discrepancy between the Standard Model prediction and experiments.
[2078] vixra:1908.0239 [pdf]
Using a Grandfather Pendulum Clock to Measure the World’s Shortest Time Interval, the Planck Time (with Zero Knowledge of G).
Haug [1] has recently introduced a new theory of unified quantum gravity coined "collision space-time." From this new and deeper understanding of mass, we can also understand how a grandfather pendulum clock can be used to measure the world's shortest time interval, the Planck time [2, 3], indirectly. Such a clock can therefore also be used to measure the diameter of an indivisible particle indirectly. Further, such a clock can easily measure the Schwarzschild radius of the gravitating object and what we will call "Schwarzschild time." This basically proves that the Newton gravitational constant is not needed to find the Planck length or the Planck time; nor is it needed to find the Schwarzschild radius. Unfortunately, there is significant inertia in the current physics establishment toward new ideas that could significantly alter our perspective on the fundamentals, but this is not new in the history of science. Still, the idea that the Planck time can be measured entirely independently of any knowledge of Newton's gravitational constant could be very important for moving physics forward.
[2079] vixra:1908.0235 [pdf]
Puzzling Radii of Calcium Isotopes: $^{40}{\rm Ca} \rightarrow ^{44}{\rm Ca} \rightarrow ^{48}{\rm Ca} \rightarrow ^{52}{\rm Ca}$, and Duality in the Structure of $^{42}_{14}{\rm Si}_{28}$ and $^{48}_{20}{\rm Ca}_{28}$
In this paper we study the puzzle of the radii of calcium isotopes. Despite an excess of eight neutrons, strangely $^{48}{\rm Ca}$ exhibits essentially the same charge radius as $^{40}{\rm Ca}$ does. A fundamental microscopic description of this is still lacking. Also strange is a peak in the charge radius of calcium at N = 24. The $^{52}{\rm Ca}$ (N = 32) nucleus, well known to be doubly magical, has amazingly been found to have a very large charge radius. Also amazing is the property of $^{42}_{14}{\rm Si}_{28}$, which simultaneously appears to be both magical/spherical and strongly deformed. We use a Quantum Chromodynamics based model, which treats the triton as an elementary entity making up $^{42}_{14}{\rm Si}_{28}$. We show here how this QCD based model is able to provide a consistent physical understanding of the simultaneity of magicity/sphericity and strong deformation of a single nucleus. This brings in an essential duality in the structure of $^{42}_{14}{\rm Si}_{28}$ and subsequently also that of $^{48}_{20}{\rm Ca}_{28}$. We also provide a consistent understanding of the puzzling radii of calcium isotopes. We predict that the radius of $^{54}{\rm Ca}$ should be even bigger than that of $^{52}{\rm Ca}$, and also that the radius of $^{60}{\rm Ca}$ should be the same as that of $^{40}{\rm Ca}$. In addition we also show wherefrom arises the neutron E2 effective charge of $\frac{1}{2}$.
[2080] vixra:1908.0229 [pdf]
Predictions for Elementary Particles and Explanations for Data About Dark Matter, Dark Energy, and Galaxies
We suggest descriptions for new elementary particles, dark matter, and dark energy. We use those descriptions to explain data regarding dark matter effects, dark energy effects, and galaxy evolution. We use mathematics-based models that feature objects and, originally, de-emphasize motion. The models, descriptions, and explanations add to traditional physics, provide clarity regarding aspects of nature for which people point to possible inadequacies in traditional physics models, and embrace traditional physics models in realms for which people have validated traditional physics models.
[2081] vixra:1908.0222 [pdf]
A `constant Lagrangian' RMW-RSS Quantified Fit of the Galaxy Rotation Curves of the Complete SPARC Database of 175 LTG Galaxies.
In this paper I categorize and analyze the `constant Lagrangian' model fits I made of the complete SPARC database of 175 LTG galaxies. The difference with the previous papers is the application of the RMW-RSS (Root Mean Weighted Residual Sum of Squares) method to quantify the quality of the fit, using a continuous curve. Of the 175 galaxies, 77 allowed a single-fit rotation curve, about 44 percent. Another 18 galaxies could almost be plotted on a single fit. A further 13 galaxies could be fitted quite well on crossing dual curves. The reason for the appearance of this dual curve, in its two versions, could be given and related to the galactic constitution and dynamics. From then on, the fitting got more and more complex. So I obtained a 44 percent positive rate for a direct fit of the measured rotation curves on the prime model. This result rules out stochastic coincidence as an explanation of those fits.
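The abstract does not spell out the RMW-RSS formula, but a plausible reading of the acronym can be sketched as follows; the weighting scheme, data values, and normalization here are illustrative assumptions, not taken from the paper:

```python
import math

def rmwrss(observed, model, sigma):
    """Root Mean Weighted Residual Sum of Squares: weight each squared
    residual by its inverse variance, average, and take the square root.
    (Hypothetical reconstruction; the paper's exact definition may differ.)"""
    weights = [1.0 / s ** 2 for s in sigma]
    wrss = sum(w * (o - m) ** 2 for w, o, m in zip(weights, observed, model))
    return math.sqrt(wrss / len(observed))

# Illustrative rotation-curve points: velocities in km/s with 1-sigma errors.
v_obs = [110.0, 118.0, 121.0]
v_fit = [108.0, 119.0, 123.0]
errors = [2.0, 2.0, 4.0]
print(rmwrss(v_obs, v_fit, errors))
```

Under this reading, a lower value indicates a better fit of the continuous model curve to the measured rotation curve.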
[2082] vixra:1908.0213 [pdf]
Ensuring Efficient Convergence to a Given Stationary Distribution
How may we find a transition matrix that guarantees the long-run convergence of a Markov chain to a given stationary distribution? Solving for this (usually) underdetermined system is non-trivial and presents unique computational challenges. Five different methods of directly solving for a transition matrix are presented along with their limitations. Relaxations of the two core assumptions underlying these direct methods - the Identityless and Independence Assumptions - are considered. A method of generating a Mass Matrix - the transition matrix underlying hops between entire population states - is described while developing the notion of successively-bounded weak compositions. An algorithm for their exhaustive generation is also presented. Applications of some methods are provided with respect to optimizing firm profit via optimally distributing workers among wage brackets, and optimizing measures of national wealth via manipulation of class distribution and immigration policy. A generalization of all applications is formulated.
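The abstract's five direct methods are not given, but one textbook construction of a transition matrix with a prescribed stationary distribution is the Metropolis rule; a minimal sketch (the target distribution and symmetric proposal matrix below are illustrative, not from the paper):

```python
import numpy as np

def metropolis_transition(pi, proposal):
    """Build a transition matrix P with stationary distribution pi
    from a symmetric proposal matrix, via the Metropolis acceptance rule."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # accept a proposed hop i -> j with probability min(1, pi_j/pi_i)
                P[i, j] = proposal[i, j] * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()  # remaining mass stays at state i
    return P

pi = np.array([0.5, 0.3, 0.2])   # target stationary distribution
Q = np.full((3, 3), 1.0 / 3.0)   # symmetric proposal matrix
P = metropolis_transition(pi, Q)
print(np.allclose(pi @ P, pi))   # pi is stationary for P
```

Detailed balance, $\pi_i P_{ij} = \pi_j P_{ji}$, guarantees stationarity, and a strictly positive proposal makes the chain ergodic, so iterating P converges to pi from any start.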
[2083] vixra:1908.0190 [pdf]
Naturalness Begets Naturalness: an Emergent Definition
We offer a model based upon three `assumptions'. The first is geometric, that the vacuum wavefunction is comprised of Euclid's fundamental geometric objects of space - point, line, plane, and volume elements - components of the geometric representation of Clifford algebra. The second is electromagnetic, that physical manifestation follows from introducing the dimensionless coupling constant \textbf{$\alpha$}. The third takes the electron mass to define the scale of space. Such a model is arguably maximally `natural'. Wavefunction interactions are modeled by the geometric product of Clifford algebra. What emerges is more naturalness. We offer an emergent definition.
[2084] vixra:1908.0100 [pdf]
Fundamental of Mathematics; Division by Zero Calculus and a New Axiom
Based on the preprint survey paper (\cite{sur}), we discuss the theoretical point of the division by zero calculus. We will need a new axiom for our mathematics. The contents of this paper seem to be serious for our mathematics and for our world history, together with the materials in \cite{sur}. So the author hopes that the related mathematicians, mathematical scientists and others will check and consider the topics from various viewpoints.
[2085] vixra:1908.0080 [pdf]
New Recognization for the Newton's Third Law: the Reaction Force is Advanced According to the Mutual Energy Principle
The absorber theory published in 1945 and 1949 by Wheeler and Feynman tells us that if the sun were placed in empty space containing nothing else, it could not shine. That means radiation cannot be produced by a source alone: radiation is a phenomenon of action-at-a-distance, which requires at least two objects, the source and the sink, or the emitter and the absorber. A single charge, even an accelerating one, cannot produce any radiation. However, this result is not reflected in Maxwell's theory, according to which a single charge can produce radiation without any help from an absorber. Since Maxwell's theory differs from the absorber theory of Wheeler and Feynman, this author holds that Wheeler and Feynman are correct. According to the absorber theory, the source (emitter) sends the retarded wave and the sink (absorber) sends the advanced wave. In electromagnetic field theory, W. J. Welch introduced a reciprocity theorem in 1960. V. H. Rumsey mentioned a method to transform the Lorentz reciprocity theorem into a new formula in 1963. In early 1987 Shuang-ren Zhao (this author) introduced the mutual energy theorem in the frequency domain. At the end of 1987 Adrianus T. de Hoop introduced the time-domain cross-correlation reciprocity theorem. All of these can be seen as the same theorem in different domains: the Fourier domain or the time domain. The reciprocity theorems of Welch, Rumsey and de Hoop have been applied to find the directivity diagram of a receiving antenna from the corresponding transmitting antenna. The mutual energy theorem of Zhao has been applied to define an inner product space of electromagnetic radiation fields, and hence to the spherical wave expansion and the plane wave expansion. In all these theorems the transmitting antenna sends retarded waves and the receiving antenna sends advanced waves.
The reciprocity theorems of Welch, Rumsey and de Hoop concern two fields of which one may be real and one virtual. The mutual energy theorem tells us that the two fields, the retarded wave sent out from the transmitting antenna and the advanced wave sent out from the receiving antenna, are both real, physical waves carrying energy. After 30 years of silence on this topic, this author has introduced the mutual energy principle and the self-energy principle, which update Maxwell's electromagnetic field theory and Schrödinger's quantum mechanics. According to the mutual energy principle, the energy of all particles is transferred through mutual energy flows. The mutual energy flow is the inner product of the retarded wave and the advanced wave. The retarded wave is the action the emitter exerts on the absorber; the advanced wave is the reaction the absorber exerts on the emitter. When the absorber receives the retarded wave, it receives a force from the emitter, the action from emitter to absorber. When the emitter receives the advanced wave, it obtains the reaction from the absorber, expressed as the recoil force of the particle on the emitter. Hence the action is retarded and the reaction is advanced. In this article this principle is widened to macroscopic objects, for example a stone or a piece of wood; hence even waves in water, in air or in wood all involve the advanced reaction. The author reviewed Newton's third law and found that only when the reaction is advanced can Newton's third law be applied on an arbitrary surface of the object. Hence the reaction being advanced must be correct.
[2086] vixra:1908.0040 [pdf]
A Sceptical Analysis of Quantized Inertia
We perform an analysis of the derivation of Quantized Inertia (QI) theory, formerly known by the acronym MiHsC, as presented by McCulloch (2007, 2013). Two major flaws were found in the original derivation. We derive a discrete black-body radiation spectrum, obtaining a different formulation for $F(a)$ than the one presented in the original theory. We present a numerical result of the new solution, which is compared against the original prediction.
[2087] vixra:1908.0015 [pdf]
Why Finite Mathematics Is More Fundamental Than Classical One
In our previous publications we have proved that quantum theory based on finite mathematics is more fundamental than standard quantum theory and, as a consequence, that finite mathematics is more fundamental than classical mathematics. The goal of the present paper is to explain without formulas why those conclusions are natural.
[2088] vixra:1908.0005 [pdf]
Error in Modern Astronomy
The conservation of the interference pattern in double-slit interference proves that the wavelength is conserved in all inertial reference frames. However, there is a popular belief in modern astronomy that the wavelength can be changed by the choice of reference frame. This erroneous belief results in a problematic prediction of the radial speed of a galaxy. Reflection symmetry shows that the elapsed time is conserved in all inertial reference frames. From both conservation properties, the velocity of light is proved to differ between reference frames. This different velocity was confirmed by the lunar laser ranging test at NASA in 2009. The relative motion between the light source and the light detector bears great similarity to the magnetic force on a moving charge. The motion changes the interference pattern but not the wavelength in the rest frame of the star. This is known as the blueshift or the redshift in astronomy. The speed of light in the rest frame of the grism determines how the spectrum is shifted. Wide Field Camera 3 in the Hubble Space Telescope provides an excellent example of how the speed of light can change the spectrum.
[2089] vixra:1907.0581 [pdf]
A Characterization of the Golden Arbelos Involving an Archimedean Circle
We consider a problem in Wasan geometry involving a golden arbelos and give a characterization of the golden arbelos involving an Archimedean circle. We also construct a self-similar circle configuration using the figure of the problem.
[2090] vixra:1907.0549 [pdf]
Emergent Cosmological Constant from a Holographic Mass/Energy Distribution
A new methodology is introduced in which an exact cosmological constant is theoretically and numerically derived and described as the squared ratio of the Planck length to the particle horizon radius. Additionally, equations relating the sterile neutrino mass, the Planck mass and the mass of the universe are established. Furthermore, the mass of the universe can be derived as encoded information located on the cosmic horizon. Finally, the relationship of the Hubble radius and the comoving radius is reviewed. This hypothesis is tested for convergence for an overall flat curvature using the Friedmann equations.
[2091] vixra:1907.0494 [pdf]
An Algebraic Way of Simultaneously Analyzing Both Einstein, Podolsky and Rosen Paper and Bohr's Reply to it
In their celebrated paper titled "Can quantum mechanical description of physical reality be considered complete?", Einstein, Podolsky and Rosen (EPR) showed for the first time the existence of `spooky action-at-a-distance'. Though the result of their paper is unquestionable, its conclusion became sensational because of its challenge to the quantum mechanical formalism: whether or not it is complete in describing the physical reality of Nature. Bohr's physical and philosophical reply to that conclusion justified the completeness of quantum mechanics. Here, a simple algebraic treatment of the results of these two classic papers is presented, making clear the actual reason why the quantum world necessarily exhibits action-at-a-distance and how Bohr defended the quantum formalism against the charge of incompleteness. This approach naturally reveals which physical assumption of EPR went wrong when considering the entangled quantum system, and also provides the missing mathematical argument in Bohr's reply.
[2092] vixra:1907.0492 [pdf]
A Motivic Sterile Neutrino
Despite the resounding experimental success of the Standard Model, the mystery of neutrino mass and neutrino oscillations must be approached from a framework for quantum gravity. Using well established results in condensed matter physics and in motivic mathematics, we present a new view of the quantum vacuum based on neutrino braid diagrams in quantum computation. The prediction of an effective 1.29 eV non local sterile state from the Koide matrix for neutrino masses fits known observational constraints.
[2093] vixra:1907.0491 [pdf]
Fractional Calculus
This paper generalises the limit definitions of calculus to define differintegrals of complex order, calculates some differintegrals of elementary functions, and introduces the notion of a fractional differential equation. An application to quantum theory is explored, and we conclude with some operator algebra. Functions in this paper will only have one variable.
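For readers unfamiliar with limit definitions of differintegrals, the standard Grünwald-Letnikov form gives the flavor of such a generalization (this is a common textbook definition, not necessarily the exact one adopted in the paper):

```latex
D^{\alpha} f(x) \;=\; \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}}
\sum_{k=0}^{\infty} (-1)^{k} \binom{\alpha}{k}\, f(x - kh),
\qquad
\binom{\alpha}{k} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha - k + 1)},
```

where $\alpha$ may be taken complex; for $\alpha = 1$ the sum reduces to the ordinary difference-quotient derivative, and for $\alpha = -1$ it reproduces integration.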
[2094] vixra:1907.0490 [pdf]
An Urdu Translation of the Landmark EPR Article from 1935
An Urdu translation of the landmark article by Einstein, Podolsky and Rosen from the year 1935 is presented with the hope that it will be of academic and research interest to the readers in that language.
[2095] vixra:1907.0472 [pdf]
Two-Proton Knockout Cross Section ${\sigma}_{-2p} (^{44}{\rm S} \rightarrow ^{42}{\rm Si})$: Strong Evidence of Magicity and Sphericity of $^{42}_{14}{\rm Si}_{28}$
The issue of whether $^{42}_{14}Si_{28}$ is doubly magical or not has been a contentious one. Fridmann {\it et al.} (Nature 435 (2005) 922), through studies of the two-proton knockout reaction $^{44}_{16}S_{28} \rightarrow ^{42}_{14}Si_{28}$, presented strong empirical evidence in support of the magicity and sphericity of $^{42}_{14}Si_{28}$. However, in complete conflict with this, Bastin {\it et al.} (Phys. Rev. Lett. 99 (2007) 022503) gave equally strong empirical evidence to show that the N = 28 magicity had completely collapsed and that $^{42}_{14}Si_{28}$ was a well deformed nucleus. At present the popular consensus (Gade {\it et al.}, Phys. Rev. Lett. 122 (2019) 222501) strongly supports the latter and discards the former. Here, while we accept the latter experiment as sound, through a careful study of an RMF model calculation we show that the experimental results of Fridmann are also independently good and consistent. As per the Fridmann experiment, the sphericity and magicity of $^{42}_{14}Si_{28}$ is manifested only through the proton number Z = 14 being a strong magic number, while the neutron magic number N = 28 disappears (or goes into hiding); and still this nucleus is spherical. This is a new and amazing property manifesting itself in this exotic nucleus. In this paper we provide a consistent understanding of this novel reality within a QCD based model. This model, which has been successful in explaining the halo phenomenon in exotic nuclei, comes forward to provide the physical reason why the Fridmann experiment is correct. This QCD based model shows that it is tritons, as elementary entities making up $^{42}_{14}Si_{28}$, which provide consistency to the above amazing conclusions arising from the Fridmann experiment.
[2096] vixra:1907.0463 [pdf]
Analytic Continuation of the Zeta Function Violates the Law of Non-Contradiction (LNC)
The Dirichlet series of the Zeta function was long ago proven to be divergent throughout the half-plane Re(s) <= 1. If Riemann's proposition is also true, that there exists an "expression" of the Zeta function that is convergent at all values of s (except at s = 1), then the Zeta function is both divergent and convergent throughout the half-plane Re(s) <= 1 (except at s = 1). This result violates all three of Aristotle's "Laws of Thought": the Law of Identity (LOI), the Law of the Excluded Middle (LEM), and the Law of Non-Contradiction (LNC). In classical and intuitionistic logics, the violation of LNC also triggers the "Principle of Explosion": Ex Contradictione Quodlibet (ECQ). In addition, the Hankel contour used in Riemann's analytic continuation of the Zeta function violates Cauchy's integral theorem, providing another proof of the invalidity of the analytic continuation of the Zeta function. Also, Riemann's Zeta function is one of the L-functions, which are all invalid because they are generalizations of the invalid analytic continuation of the Zeta function. This result renders unsound all theorems (e.g. Modularity, Fermat's Last) and conjectures (e.g. BSD, Tate, Hodge, Yang-Mills) that assume that an L-function (e.g. Riemann's Zeta function) is valid. We also show that the Riemann Hypothesis (RH) is not "non-trivially true" in classical logic, intuitionistic logic, or three-valued logics (3VLs) that assign a third truth-value to paradoxes (Bochvar's 3VL, Priest's LP).
[2097] vixra:1907.0454 [pdf]
A Weak Extension of Complex Structure on Hilbert Spaces
The purpose of this paper is to replicate what happens in C on spaces where there is more than one imaginary unit. All these spaces, in our definition, will have the same Hilbert structure. At first we introduce the sum and product operations on C(H) := R x H (where H is a Hilbert space), then we investigate its algebraic properties. In our construction we lose only the associativity of multiplication regardless of H, except when dim H = 1 (in which case R x H = C), and this is why we say "weak extension". One of the most important results of this study is the Weak Integrity Theorem, according to which under particular conditions there exist zero divisors. The next result is the Fundamental Theorem, according to which for all z in C(H) there exists w in C(H) such that z = w^2. Afterwards we study transformations between these spaces which preserve the operations (which is why we call them C-morphisms). At the end we look at the "commutative" functions, i.e. maps C(H) to C(H') which can be represented by complex transformations C to C.
[2098] vixra:1907.0448 [pdf]
Bell's Correlation Formula and Anomalous Spin
In this paper it is demonstrated that a hidden spin component may exist which, in a local hidden variables but quantum manner, invalidates the nonlocality analysis with inequalities such as CHSH.
[2099] vixra:1907.0437 [pdf]
Values of the Riemann Zeta Function by Means of Division by Zero Calculus
In this paper, we give the values of the Riemann zeta function for any positive integer by means of the division by zero calculus. Keywords: zero, division by zero, division by zero calculus, $0/0=1/0=z/0=\tan(\pi/2) = \log 0 = 0$, Laurent expansion, Riemann zeta function, Gamma function, Psi function, Digamma function.
[2100] vixra:1907.0420 [pdf]
On the V and C in the Lorentz Transformation and Absence of Movement
The v in the Lorentz transformation is the velocity of the origin of the second frame of reference, a point with no dimensions. From this v we cannot conclude a maximum speed (or anything) for physical objects. The c is the velocity of information. We can choose c to be any value, and we see that that chosen value of c is measured the same in both reference frames. The speed of light is then not a special case. The Lorentz transformation is an approximation of the Galileo transformation, in which information has velocity c = ∞. The Galileo transformation is 'real-time': with it we consider an (absolute) frame of reference from any point in the universe. We cannot establish absence of movement, and we do not have information traveling at infinite velocity. Therefore the Lorentz transformation is our best approximation of the Galileo transformation and thus of reality.
[2101] vixra:1907.0403 [pdf]
Modeling that Predicts Elementary Particles and Explains Dark Matter, Dark Energy, and Galaxy Formation Data
We propose steps forward regarding the following challenges in elementary particle physics, cosmology, and astrophysics. Predict new elementary particles. Describe mechanisms governing the rate of expansion of the universe. Describe dark matter. Explain ratios of effects of dark matter to effects of ordinary matter. Describe the formation and evolution of galaxies. Integrate modeling that provides those predictions, descriptions, and explanations and modeling that traditional physics theory includes.
[2102] vixra:1907.0398 [pdf]
Energy-Momentum is not Defined Globally, But Locally
The slow precession of the Earth's rotation axis and the Moon-Earth orbital resonance were accompanied for centuries by Newton's and Laplace's explanations. In the present paper, however, the author considers an additional possible factor: a small violation of global energy-momentum conservation. The energy-momentum concept, being not conserved, cannot be regarded as the total (global) energy-momentum of the system. The recently experimentally verified Lense-Thirring effect and Mercury's anomalous perihelion shift cannot be found in Newtonian physics, and the latter demands global energy-momentum conservation; thus the shift violates global energy-momentum conservation. Why? Because energy-momentum is defined locally, not globally.
[2103] vixra:1907.0370 [pdf]
Resolving the Discrepancy Between Direct and Inverse Cosmic Distance Ladder Through a New Cosmological Model
A new cosmological model is presented, which derives from new physics within a theory of everything. It introduces, beyond radiation and baryonic matter, a unique new ingredient, which is the substance of the universe and can be assimilated to the cold dark matter of standard cosmology. The new model, although profoundly different from the ΛCDM model, exhibits the same metric and an almost identical distance scale. It therefore shares the same chronology and the same theory of nucleosynthesis, but solves the problems of the horizon, the flatness of space and the homogeneity of the distribution of matter in a natural way, without having to resort to an additional theory like inflation and without dark energy. Finally, it resolves the tension between the direct and inverse cosmic distance ladders.
[2104] vixra:1907.0327 [pdf]
Bohr's Complementarity and Afshar's Experiment: Non-Dualistic Study at the Single-Quantum Level
Using a newly proposed `wave-particle non-dualistic interpretation' of the quantum formalism, Bohr's principle of complementarity is analyzed in the context of the single-slit diffraction and the Afshar's experiments - at the single-quantum level. The fundamental flaw in the Afshar's argument is explicitly pointed out.
[2105] vixra:1907.0302 [pdf]
A Note on Jordan Algebras, Three Generations and Exceptional Periodicity
It is shown that the algebra $ {{\bf J } }_3 [ { \bf C \otimes O } ] \otimes {\bf Cl(4,C) } $ based on the complexified Exceptional Jordan, and the complex Clifford algebra in $ {\bf 4D}$, is rich enough to describe all the spinorial degrees of freedom of three generations of fermions in ${\bf 4D}$, and include additional fermionic dark matter candidates. Furthermore, the model described in this letter can account also for the Standard Model gauge symmetries. We extend these results to the Magic Star algebras of Exceptional Periodicity developed by Marrani-Rios-Truini and based on the Vinberg cubic $ {\bf T } $ algebras which are generalizations of exceptional Jordan algebras. It is found that there is a one-to-one correspondence among the real spinorial degrees of freedom of ${\bf 4}$ generations of fermions in $ {\bf 4D}$ with the off-diagonal entries of the spinorial elements of the $pair$ $ {\bf T}_3^{ 8, n}, ( {\bf {\bar T}}_3^{ 8, n } ) $ of Vinberg matrices at level $n = 2$. These results can be generalized to higher levels $ n > 2 $ leading to a higher number of generations beyond $ {\bf 4 } $. Three $pairs$ of ${\bf T}$ algebras and their conjugates $ {\bf {\bar T} }$ were essential in the Magic Star construction of Exceptional Periodicity \cite{Alessio1} that extends the $ {\bf e}_8$ algebra to $ {\bf e}_8^{ (n) } $ with $ n $ integer.
[2106] vixra:1907.0236 [pdf]
Standing on the Shoulders of Giants: Derivations of Einstein's E=mc² from Newtonian Laws of Motion
This report presents a simple derivation of Einstein's famous equation, E=mc². Through the use of elementary physical quantities of Newtonian mechanics such as distance, force, momentum, velocity, and energy, our approach resembles a `handling units' method. Further, two other proofs premised on the notion of mass and its dependence on velocity are discussed. These methods prove to be simple and physically intuitive, thus stimulating amateur enthusiasts to a better understanding of various complex and difficult-to-derive formulas which are otherwise understood only at a sophisticated academic level. The pedagogic significance of these methods is further discussed.
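One standard route of the kind the abstract describes, premised on the velocity-dependent mass $m = m_0/\sqrt{1 - v^2/c^2}$, can be sketched as follows (a common textbook derivation, not necessarily the report's exact steps):

```latex
m = \frac{m_0}{\sqrt{1 - v^2/c^2}}
\;\Rightarrow\; m^{2}(c^{2} - v^{2}) = m_0^{2} c^{2}
\;\Rightarrow\; (c^{2} - v^{2})\,\mathrm{d}m = m v\,\mathrm{d}v
\;\Rightarrow\; c^{2}\,\mathrm{d}m = v\,\mathrm{d}(mv).
```

The work done by the accelerating force then accumulates as

```latex
\mathrm{d}E = F\,\mathrm{d}x = \frac{\mathrm{d}(mv)}{\mathrm{d}t}\,\mathrm{d}x
= v\,\mathrm{d}(mv) = c^{2}\,\mathrm{d}m
\quad\Rightarrow\quad
E_{\mathrm{kin}} = (m - m_0)c^{2}, \qquad E = mc^{2}.
```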
[2107] vixra:1907.0234 [pdf]
Blueshift and Redshift In Wide Angle Diffraction
The observation of spectral shift in astronomy bears great similarity to the frequency shift in the Doppler effect. Both blueshift and redshift can be described by the movement of the double-slit interference pattern. In the rest frame of the star, the light passes through the slit and travels a straight path to reach the projection screen. The intersection of this path and the screen determines how the spectrum is shifted. If the screen moves away from the path, the spectrum is shifted away from the center of the screen; this is known as redshift. If the screen moves toward the path, the spectrum is shifted toward the center of the screen; this is known as blueshift. The spectrum not only shifts in position but also resizes proportionally. The spectral shift is caused by the motion of the earth in the rest frame of the star, while the wavelength of the star light remains constant. The redshift places a maximum limit on the radial velocity of a remote galaxy: the galaxy cannot be detected if the earth moves faster than the light in the rest frame of the galaxy. This is a dark galaxy.
[2108] vixra:1907.0206 [pdf]
On Non-Trivial Zero Point
In the Riemann zeta function, near a nontrivial zero, I found that the real part of the function is negative where the real part of the argument runs from 0 to 0.5, but positive where it runs from 0.5 to 1. I also found that the sign of the imaginary part likewise interchanges at real part 0.5. This tendency is seen near the non-trivial zero value, but becomes weaker and weaker away from it. We present and discuss the case of four non-trivial zero values. This seems to be an important finding and is announced here.
[2109] vixra:1907.0179 [pdf]
Intuitionistic Fuzzy Decision-Making in the Framework of Dempster-Shafer Structures
The main emphasis of this paper is placed on the problem of multi-criteria decision making (MCDM) in intuitionistic fuzzy environments. Some limitations in the existing literature that explains Atanassov's intuitionistic fuzzy sets (A-IFS) from the perspective of the Dempster-Shafer theory (DST) of evidence are analyzed. To address the issues of using Dempster's rule to aggregate intuitionistic fuzzy values (IFVs), a novel aggregation operator named OWA-based MOS is proposed, based on the ordered weighted averaging (OWA) aggregation operator, which allows the expression of decision makers' subjectivity by introducing the attitudinal character. The effectiveness of the developed OWA-based MOS approach in aggregating IFVs is demonstrated on a known example of an MCDM problem. To compare different IFVs obtained from the OWA-based MOS approach, the golden rule representative value for IFV comparison is introduced, which overcomes the shortcomings of score functions. The hierarchical structure of the proposed decision approach is presented based on the above research, which allows us to solve MCDM problems without intermediate defuzzification when not only the criteria but also their weights are represented by IFVs. The proposed OWA-based MOS approach is illustrated as a more flexible decision-making method, which can better solve the problem of intuitionistic fuzzy multi-criteria decision making in the framework of DST.
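The paper's OWA-based MOS operator acts on intuitionistic fuzzy values, but the underlying OWA mechanism itself is easy to state; a minimal sketch on plain scalars (the weight vectors below are illustrative, and this is not the paper's full operator):

```python
def owa(values, weights):
    """Ordered weighted averaging: sort inputs in descending order,
    then take the weighted sum with the position-based weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def orness(weights):
    """Attitudinal character of an OWA weight vector: 1 = pure max
    (optimistic), 0 = pure min (pessimistic), 0.5 = arithmetic mean."""
    n = len(weights)
    return sum((n - 1 - i) * w for i, w in enumerate(weights)) / (n - 1)

print(owa([0.2, 0.8, 0.5], [1.0, 0.0, 0.0]))  # 0.8 (recovers the max)
print(orness([1.0, 0.0, 0.0]))                 # 1.0 (fully optimistic)
```

Because the weights attach to sorted positions rather than to particular arguments, varying them interpolates the operator between min, mean, and max, which is how a decision maker's attitude enters the aggregation.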
[2110] vixra:1907.0167 [pdf]
Fundamental Solution of the Turbulence Problem Avoiding Hypotheses
Locality, natural causality and determinism are the fundamental principles of this treatise. A dense fluctuating point set with a unique link to physical movements facilitates the setting up of partial differential equations, so a clear definition of a turbulent fluid is established. Stochasticity in the sense of an ensemble theory is considered via distributions of motion quantities of an unlimited number of parallelly existing deterministic systems. First, particle transport equations are developed for: 1. Brownian motion as molecular self-diffusion; 2. stochastic transport by longitudinal fluctuations of a continuum; 3. stochastic transport by turbulent continuum fluctuations. Thus transition probabilities of moving quantities are evolved. The connection of deterministics and stochastics in the sense of an ensemble theory enables a complete deterministic turbulence equation set. The result is pure geometrodynamics of turbulence on one side and pure geometrodynamics of deformation on the other. At the end, the incorrectness of the known equation set of laminar fluid dynamics for turbulence problems is discussed.
[2111] vixra:1907.0085 [pdf]
Physical Mechanism Underlying ``Einstein's Spooky-Action-at-a-Distance'' and the Nature of Quantum Entanglement
The delayed-choice entanglement swapping experiments, both in space and in time, are causally explained at the single-quantum level by using the `wave-particle non-dualistic interpretation of quantum mechanics'. In order to achieve this, the actual mechanisms involved in Wheeler's delayed-choice experiment and Einstein's spooky-action-at-a-distance are uncovered from the quantum formalism. The continuity in the motion of any individual quantum particle, due to the constants of motion, is responsible for the outcomes of Wheeler's delayed-choice experiment. The purpose of the existence of spooky action in Nature is to strictly maintain the conservation laws in the absence of exchange interactions. The presence of a causal structure in the entanglement swapping is shown by detailed analysis of the experimental results presented in the papers ``X-S. Ma et al., Nature Phys. 8, 480 (2012)'' and ``E. Megidish et al., Phys. Rev. Lett. 110, 210403 (2013)'', at the level of individual quantum events. These experiments directly confirm wave-particle non-duality.
[2112] vixra:1907.0077 [pdf]
Expansions of Maximum and Minimum from Generalized Maxwell Distribution
The generalized Maxwell distribution is an extension of the classic Maxwell distribution. In this paper, we concentrate on the joint distributional asymptotics of normalized maxima and minima. Under optimal normalizing constants, asymptotic expansions of the joint distribution and density of normalized partial maxima and minima are established. These expansions are used to deduce the speeds at which the joint distribution and density of normalized maxima and minima converge to their corresponding ultimate limits. Numerical analyses are provided to support our results.
[2113] vixra:1907.0076 [pdf]
Double Input Boost/Y-Source DC-DC Converter for Renewable Energy Sources
With the increasing adoption of renewable energy sources by domestic users, decentralisation of the grid is fast becoming a reality. Distributed generation is an important part of a decentralised grid. This approach employs several small-scale technologies to produce electrical energy close to the end users or consumers. The higher reliability of these systems proves to be an advantage when compared to traditional generation systems. Multi-Input Converters (MICs) perform a decisive function in Distributed Energy Resources (DERs). Making use of such MICs proves to be beneficial in terms of size, cost, number of components used, efficiency and reliability as compared to using several independent converters. This thesis proposes a double-input DC-DC converter which makes use of a quasi Y-source converter in tandem with a boost converter. The quasi Y-source converter has the advantage of a very high gain at low duty cycles. The associated operating modes are analysed and the operation of the MIC is verified using simulation results. A hardware prototype is built for large-signal analysis in open loop. Different loads are applied, and the efficiency of the MIC as a whole as well as the load sharing between the different sources is investigated.
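As a rough illustration of why a Y-source-type stage reaches high gain at low duty cycles, the ideal continuous-conduction gains can be compared. The Y-source expression $1/(1-KD)$ with winding factor $K$ is an assumed textbook-style model for illustration only, not the converter derived in the thesis:

```python
# Illustrative comparison of ideal DC voltage gains (assumed models, not the
# thesis' exact derivation):
#   boost converter:      G = 1 / (1 - D)
#   Y-source-type stage:  G = 1 / (1 - K*D), winding factor K > 1

def boost_gain(D):
    """Ideal continuous-conduction boost gain for duty cycle 0 <= D < 1."""
    return 1.0 / (1.0 - D)

def y_source_gain(D, K=3):
    """Illustrative Y-source-type gain; K is the winding factor."""
    assert K * D < 1, "operating point must keep 1 - K*D positive"
    return 1.0 / (1.0 - K * D)

if __name__ == "__main__":
    for D in (0.1, 0.2, 0.3):
        print(f"D={D:.1f}  boost={boost_gain(D):.2f}  y-source(K=3)={y_source_gain(D):.2f}")
```

For the same duty cycle, the Y-source-type gain grows much faster, which is the qualitative advantage the abstract refers to.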
[2114] vixra:1907.0066 [pdf]
Physics on a Branched Knotted Spacetime Manifold
This paper reproduces the dynamics of quantum mechanics with a four-dimensional spacetime manifold that is branched and embedded in a six-dimensional Minkowski space. Elementary fermions are represented by knots in the manifold, and these knots have the properties of the familiar particles. We derive a continuous model that approximates the behavior of the manifold's discrete branches. The model produces dynamics on the manifold that corresponds to the gravitational, strong, and electroweak interactions.
[2115] vixra:1907.0062 [pdf]
A Question About the Consistency of Bell's Correlation Formula
In the paper it is demonstrated that two equally consistent but conflicting uses of sign functions, in the context of a simple probability density, show that Bell's formula is based on only one of the consistent principles. The two conflicting principles give different results. However, by the commutativity of multiplication, i.e. $3\times (1/2)= (1/2)\times 3$, one must have the same result in both cases.
[2116] vixra:1907.0037 [pdf]
Disproof of the Riemann Hypothesis
In my previous paper, “Consideration of the Riemann Hypothesis”, with c = 0.5 and x a non-trivial zero value, it was described that the expression converges to almost 0, but a serious proof in mathematical form could not be obtained. It is impossible to make c exactly 0.5 in this way; c can only be 0.5 or its edge. It is considered that, as the imaginary value increases to infinity, the denominator becomes infinite and c shifts from 0.5 to 0.
[2117] vixra:1907.0035 [pdf]
Blueshift and Redshift In Small Angle Diffraction
The observation of spectral shift in astronomy arises from the relative motion between the observed star and the earth. Both blueshift and redshift can be explained with the relative movement of the double-slit interference. In the rest frame of the star, the light passes through the slit to travel a straight path to reach the projection screen. The intersection of this path and the screen is shifted by the movement of the screen. If the screen moves away from the path, the spectrum will be shifted away from the center of the screen. This is known as redshift. If the screen moves toward the path, the spectrum will be shifted toward the center of the screen. This is known as blueshift. The spectrum not only shifts in position but also expands in size. The spectral shift is the result of the relative motion between the projection screen and the path of phase shift. It is not the result of any variation in the wavelength.
[2118] vixra:1907.0011 [pdf]
Unified Electro-Gravity (UEG) Theory Applied to Stellar Gravitation, and the Mass-Luminosity Relation (MLR)
The Unified Electro-Gravity (UEG) theory is applied to model gravitational effects of an individual star or a binary-star system, including that of the sun which is the only star of our solar system. The basic UEG theory was originally developed to model elementary particles, as a substitute for the standard model of particle physics. The UEG theory is extended in this paper (a) to model the gravitational force due to light radiation from an individual star, which determines its energy output due to nuclear fusion in the star, as well as (b) to model the gravitational force between two nearby stars, which determines the orbital dynamics in a binary-star system. The mass-luminosity relation (MLR) derived separately from each of the above two models are compared and studied together with the MLR currently available from measured orbital data for binary stars, as well as from an existing energy-source model for stellar nuclear fusion (Eddington's model). The current MLR data uses conventional Newtonian gravity, where the gravitational force is produced only due to the gravitational mass of the star, which is assumed to be equal to the inertial mass as per the principle of equivalence. This Newtonian gravitational model is modified by including the new UEG effect due to the light radiation of a star, in order to establish the actual MLR which can be significantly different from the currently available MLR. The new UEG theory is applied to an individual isolated star (for modeling the force for stellar nuclear fusion), which is spherically symmetric about its own center, in a fundamentally different manner from its application to a binary-star system (for modeling orbital motion of a binary-star), which is not a spherically-symmetric structure.
[2119] vixra:1907.0010 [pdf]
Unified Electro-Gravity (UEG) Theory Applied to Spiral Galaxies
The unified electro-gravity (UEG) theory, which has been successfully used for modeling elementary particles, as well as single and binary stars, is extended in this paper to model gravitation in spiral galaxies. A new UEG model would explain the ``flat rotation curves'' commonly observed in the spiral galaxies. The UEG theory is developed in a fundamentally different manner for a spiral galaxy, as compared to prior applications of the UEG theory to elementary particles and single stars. This is because the spiral galaxy, unlike the elementary particles or single stars, is not spherically symmetric. The UEG constant $\gamma$, required in the new model to support the galaxies' flat rotation speeds, is estimated using measured data from a galaxy survey, as well as for a selected galaxy for illustration. The estimates are compared with the $\gamma$ derived from a UEG model of elementary particles. The UEG model for the galaxy is shown to explain the empirical Tully-Fisher Relationship (TFR), is consistent with the Modified Newtonian Dynamics (MOND), and is also independently supported by measured trends of galaxy thickness with surface brightness and rotation speed.
[2120] vixra:1907.0009 [pdf]
The Unified Electro-Gravity (UEG) Theory Applied to Cosmology
The Unified Electro-Gravity (UEG) theory is extended for the unique conditions of cosmology, which may support a possible reversal of the current expansionary phase of the universe, explain the current accelerated expansion of the universe without need for any dark energy, and also explain the signatures of the baryon acoustic oscillation (BAO) in the cosmic microwave background (CMB) and in the correlation function of galaxy distribution, without any dark matter. UEG effects due to the CMB radiation in the recent universe, and in the ionized environment before recombination, as well as those due to anticipated star light in the future universe, are modeled with suitable cosmological assumptions. This may provide a new theoretical paradigm, which can potentially answer some of the most fundamental questions in cosmology today.
[2121] vixra:1906.0579 [pdf]
'supralogic' or a Method for Predicting Stochastic Mapping Outcomes by Interpolating Their Probabilities
In a stochastic mapping model, a method is described for interpolating un-sampled mapping probabilities given a successive set of observed mappings. The sampled probabilities are calculated from the observed mappings. The previously described method of interpolating values in code space is used to interpolate the un-observed mapping probabilities. The outcomes of subsequent mappings can then be predicted by finding the processes with maximal interpolated probability. Finally, a software package implementing the method is created, demonstrated, and tested on a variety of situations for filling in missing element values or categorising data arrays.
[2122] vixra:1906.0577 [pdf]
The Nonlinear Schroedinger Equation with Infinite Mass Wave
The Schroedinger equation with the logarithmic nonlinear term is derived by the natural generalization of the hydrodynamical model of quantum mechanics. The nonlinear term appears to be logically necessary because it enables explanation of the infinite mass limit of the wave function. The article is the modified version of the articles by author (Pardy, 1993; 2001).
[2123] vixra:1906.0561 [pdf]
Emerging Trends in Digital Authentication
This manuscript attempts to shed light on the evolution of authentication systems from traditional text-based password systems towards Multi-Factor Authentication (MFA). The evolution of authentication systems is commensurate with that of security-breaching techniques. While many strong authentication products, such as multi-factor authentication (MFA), single sign-on (SSO), biometrics and privileged access management (PAM), have existed for a long time, the constant deluge of data breaches and password database leaks has re-illustrated the weakness in many authentication paradigms. As a result, the industry is both re-thinking the way we approach authentication and making efforts to simplify previously complex or expensive authentication technologies for every human being.
[2124] vixra:1906.0505 [pdf]
A Clock Paradox in Gravity
We consider a static spherically symmetric distribution of matter and compare time intervals for a free-falling and a stationary clock. The result does not agree with that calculated using the Newtonian approximation to gravity and the equivalence principle.
[2125] vixra:1906.0490 [pdf]
Compton Particles and Quantum Forces
An alternative physical model for fundamental particles, fundamental forces and black holes is presented, based on classical physics, an unconventional variant of quantum physics, and holographic and fractal principles. The presented model is primarily based on work by Horst Thieme and Nassim Haramein; in this document their models are combined, refined and extended into a joint model that is wider in scope and also adopts some elements from the work of Randell Mills and Erik Verlinde. The deduced equations produce a good number of interesting results and new understandings, which might, however, be perceived as controversial with regard to contemporary physics. The presented content covers a broad range of topics in physics to demonstrate the model's wide applicability and to spark future research. In particular, it is shown that entropy plays an even larger and more fundamental role in physics than currently acknowledged, and that the Planck units are more than an arbitrary system of units.
[2126] vixra:1906.0463 [pdf]
Finding The Hamiltonian
We first find a Hamiltonian H that has the Hurwitz zeta functions ζ(s,x) as eigenfunctions. We then construct an operator G that is self-adjoint under appropriate boundary conditions. We find that the ζ(s,x) functions do not meet these boundary conditions, except for those where s is a nontrivial zero of the Riemann zeta function with real part greater than 1/2. Finally, we find that these exceptional functions cannot exist, proving the Riemann hypothesis: all nontrivial zeros have real part equal to 1/2.
[2127] vixra:1906.0458 [pdf]
Solution of the Central Problem of Fluid Turbulence
The theory consists of: I. a clear formulation of the turbulence problem, by 1. definition of a fluid continuum, 2. definition of a turbulent fluid continuum, 3. derivation that Navier-Stokes-like equations cannot describe a turbulent fluid continuum; II. solution of the turbulence problem, by establishing the link between the theory of deterministic fluctuating vector fields and stochastic vector fields in the sense of an ensemble theory as a counterpart: 1. derivation of a deterministic equation system of coupled vector vortex and curvature vector fields, 2. derivation of a complete equation set for turbulent fluid movements. The formulation of the geometrodynamics of turbulence does not require an existing local thermodynamic equilibrium. In the case of fluid turbulence there is no need to invoke chaos theories.
[2128] vixra:1906.0447 [pdf]
Study of (σ,τ)-Generalized Derivations with Their Composition of Semiprime Rings
The main purpose of this paper is to study and investigate certain results concerning the (σ,τ)-generalized derivation D associated with the (σ,τ)-derivation d of semiprime and prime rings R, where σ and τ act as two automorphism mappings of R. We focus on the composition of (σ,τ)-generalized derivations of the Leibniz’s formula, where we introduce the general formula to compute the composition of the (σ,τ)-generalized derivation D of R.
[2129] vixra:1906.0443 [pdf]
Discrete Motives for Moonshine
From the holographic perspective in quantum gravity, topological field theories like Chern-Simons are more than toy models for computation. An algebraic construction of the CFT associated to Witten's j-invariant for 2+1 dimensional gravity aims to compute coefficients of modular forms from the combinatorics of quantum logic, dictated by axioms in higher dimensional categories, with heavy use of the golden ratio. This paper is self-contained, including introductory material on lattices, and aims to show how the Monster group and its infinite module arise when the automorphisms of the Leech lattice are extended by special point sets in higher dimensions, notably the 72-dimensional lattice of Nebe.
[2130] vixra:1906.0440 [pdf]
A New Unified Electro-Gravity Theory for the Electron
A rigorous model for the electron is presented by generalizing the Coulomb's Law or Gauss's Law of electrostatics, using a unified theory of electricity and gravity. The permittivity of the free space is allowed to be variable, dependent on the energy density associated with the electric field at a given location, employing generalized concepts of gravity and mass/energy density. The electric field becomes a non-linear function of the source charge, where the concept of the energy density needs to be properly defined. Stable solutions are derived for a spherically symmetric, surface-charge distribution of an elementary charge. This is implemented by assuming that the gravitational field and its equivalent permittivity function is proportional to the energy density, as a simple first-order approximation, with the constant of proportionality referred to as the Unified Electro-Gravity (UEG) constant. The stable solution with the lowest mass/energy is assumed to represent a ``static'' electron without any spin. Further, assuming that the mass/energy of a static electron is half of the total mass/energy of an electron including its spin contribution, the required UEG constant is estimated. More fundamentally, the lowest stable mass of a static elementary charged particle, its associated classical radius, and the UEG constant are related to each other by a dimensionless constant, independent of any specific value of the charge or mass of the particle. This dimensionless constant is numerologically found to be closely related to the fine structure constant. This possible origin of the fine structure constant is further strengthened by applying the proposed theory to successfully model the Casimir effect, from which approximately the same above relationship between the UEG constant, electron's mass and classical radius, and the fine structure constant, emerges.
[2131] vixra:1906.0439 [pdf]
A Generalized Unified Electro-Gravity (UEG) Model Applicable to All Elementary Particles
The Unified Electro-Gravity (UEG) theory, originally developed to model an electron, is generalized to model a variety of composite charged as well as neutral particles, which may constitute all known elementary particles of particle physics. A direct extension of the UEG theory for the electron is possible by modifying the functional dependence between the electro-gravitational field and the energy density, which would lead to a general class of basic charged particles carrying different levels of mass/energy, with the electron mass at the lowest level. The basic theory may also be extended to model simple composite neutral particles, consisting of two layers of surface charges of equal magnitudes but opposite signs. The model may be similarly generalized to synthesize more complex structures of composite charged or neutral particles, consisting of increasing levels of charged layers. Depending upon its specific basic or composite structure, a particle could be highly stable like an electron or a proton, or relatively unstable in different degrees, which may be identified with other known particles of the standard model of particle physics. The generalized UEG model may provide a new unified paradigm for particle physics, as a substitute for the standard model currently used, making the weak and strong forces of the standard model redundant.
[2132] vixra:1906.0438 [pdf]
Unified Electro-Gravity (UEG) Theory and Quantum Electrodynamics
The Unified Electro-Gravity (UEG) theory, originally developed to model a stable static charge, is extended to a spinning charge using a ``quasi-static'' UEG model. The results from the new theory, evaluated in comparison with concepts and parameters from basic quantum mechanics (QM) and quantum electrodynamics (QED), show that the QM and the QED trace their fundamental origins to the UEG theory. The fine structure constant and the electron g-factor, which are key QED parameters, can be directly related to the proportionality constant (referred to as the UEG constant) used in the UEG theory. A QM wave function is shown to be equivalent to a space-time ripple in the permittivity function of the free space, produced by the UEG fields surrounding a spinning charge, and the basic QM relationships between energy and frequency naturally emerge from the UEG model. Further extension and generalization of the theory may also explain all other quantum mechanical concepts including particle-wave duality, frequency shift in electrodynamic scattering, and charge quantization, leading to full unification of the electromagnetics and gravity with the quantum mechanics.
[2133] vixra:1906.0433 [pdf]
Evidential Distance Measure in Complex Belief Function Theory
In this paper, an evidential distance measure is proposed which can measure the difference or dissimilarity between complex basic belief assignments (CBBAs), in which the CBBAs are composed of complex numbers. When the CBBAs degenerate from complex numbers to real numbers, the proposed distance degrades to Jousselme et al.'s distance. Therefore, the proposed distance provides a promising way to measure the differences between evidences in the more general framework of complex plane space.
[2134] vixra:1906.0425 [pdf]
A Possible Sign of Critical Transition
Forecasting critical transitions in a dynamical system is one of the most important research problems in recent times. In this short communication, we discuss a possible novel sign of critical transitions in nonlinear systems. We show that the higher-order terms of the Taylor series play an important role in determining critical transitions in a system. Moreover, we explain our approach using the logistic map.
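For context, the standard linear-stability picture behind such early-warning signs can be sketched for the logistic map; this illustrates ordinary critical slowing down only, not the paper's higher-order Taylor analysis:

```python
# Critical slowing down in the logistic map x_{n+1} = r*x*(1-x) (standard analysis,
# not the paper's construction): near the fixed point x* = 1 - 1/r the linear
# multiplier is f'(x*) = 2 - r, and |2 - r| -> 1 as r -> 3 (the period-doubling point),
# so recovery from small perturbations becomes ever slower.

def fixed_point(r):
    return 1.0 - 1.0 / r

def multiplier(r):
    """Derivative of f(x) = r*x*(1-x) at the nonzero fixed point."""
    return 2.0 - r

def recovery_steps(r, eps=1e-3, tol=1e-6, max_steps=10_000):
    """Iterations needed for a perturbation of size eps to shrink below tol."""
    x = fixed_point(r) + eps
    for n in range(max_steps):
        if abs(x - fixed_point(r)) < tol:
            return n
        x = r * x * (1.0 - x)
    return max_steps

if __name__ == "__main__":
    for r in (2.5, 2.8, 2.95):
        print(f"r={r}  |multiplier|={abs(multiplier(r)):.2f}  recovery={recovery_steps(r)} steps")
```

The recovery time grows sharply as r approaches 3, which is the kind of slowing that early-warning indicators look for.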
[2135] vixra:1906.0404 [pdf]
Via Geometric Algebra: Direction and Distance Between Two Points on a Spherical Earth
As a high-school-level example of solving a problem via Geometric (Clifford) Algebra, we show how to calculate the distance and direction between two points on Earth, given the locations' latitudes and longitudes. We validate the results by comparing them to those obtained from online calculators. This example invites a discussion of the benefits of teaching spherical trigonometry (the usual way of solving such problems) at the high-school level versus teaching how to use Geometric Algebra for the same purpose.
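A minimal sketch of the computation the abstract describes, using ordinary 3D vectors; the dot and cross products used here correspond to the scalar and bivector parts of the geometric product of two unit vectors, and the 6371 km mean Earth radius is an assumed value:

```python
from math import radians, sin, cos, atan2, sqrt

def to_unit_vector(lat_deg, lon_deg):
    """Unit position vector of a point on a unit sphere."""
    lat, lon = radians(lat_deg), radians(lon_deg)
    return (cos(lat) * cos(lon), cos(lat) * sin(lon), sin(lat))

def great_circle_distance(p1, p2, radius_km=6371.0):
    """Central angle via atan2(|a x b|, a . b), scaled by the sphere radius."""
    a, b = to_unit_vector(*p1), to_unit_vector(*p2)
    dot = sum(x * y for x, y in zip(a, b))
    cx = a[1] * b[2] - a[2] * b[1]
    cy = a[2] * b[0] - a[0] * b[2]
    cz = a[0] * b[1] - a[1] * b[0]
    return radius_km * atan2(sqrt(cx * cx + cy * cy + cz * cz), dot)

if __name__ == "__main__":
    # London (51.5074 N, -0.1278 E) to Paris (48.8566 N, 2.3522 E): roughly 344 km
    print(round(great_circle_distance((51.5074, -0.1278), (48.8566, 2.3522))))
```

Using atan2 rather than acos keeps the angle numerically stable for nearly coincident or nearly antipodal points.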
[2136] vixra:1906.0392 [pdf]
Quantum Impedance Matching of Rabi Oscillations
We present a model of Geometric Wavefunction Interactions, the GWI model, that offers an alternative (perhaps equivalent) representation of QED, and use it to explore the quantized impedance structure governing energy flow in Rabi oscillations.
[2137] vixra:1906.0383 [pdf]
Supervised Dimensionality Reduction for Multi-Label Nearest Neighbors
The ML-kNN algorithm is one of the most famous and most efficient multi-label classifiers. Its performance is very remarkable when compared with the other state-of-the-art multi-label classifiers. Nevertheless, it suffers from two major drawbacks: its accuracy crucially depends on the metric used to compute distances between instances, and when dealing with high-dimensional data, the neighborhood identification task becomes very slow. So both metric learning and dimensionality reduction are essential to improve ML-kNN performance. In this report, we propose a novel multi-label Mahalanobis distance learned via a supervised dimensionality reduction approach that we call ML-ARP. ML-ARP is a process that adapts random projections on a multi-label dataset to improve ML-kNN performance. Unlike most state-of-the-art multi-label dimensionality reduction approaches, which solve an eigenvalue or inverse problem, our method is iterative and scales up to high dimensions: there is no eigenvalue or inverse problem to solve. Experiments show that ML-ARP allows us to greatly upgrade the ML-kNN classifier. Statistical tests assert that ML-ARP is better than the remaining state-of-the-art multi-label dimensionality reduction approaches.
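As background, the plain (non-adapted) random-projection idea that ML-ARP builds on can be sketched in a few lines: a Gaussian random matrix approximately preserves pairwise distances (Johnson-Lindenstrauss), which is what makes nearest-neighbor search in the reduced space meaningful. The dimensions and seeds below are arbitrary, and this is not the ML-ARP adaptation itself:

```python
import random
from math import sqrt

def random_projection_matrix(d_in, d_out, seed=0):
    """Gaussian random projection, scaled by 1/sqrt(d_out) (Johnson-Lindenstrauss style)."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) / sqrt(d_out) for _ in range(d_in)]
            for _ in range(d_out)]

def project(R, x):
    """Matrix-vector product: map x from d_in to d_out dimensions."""
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]

def dist(u, v):
    """Euclidean distance."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

if __name__ == "__main__":
    rng = random.Random(42)
    d_in, d_out = 500, 50
    R = random_projection_matrix(d_in, d_out)
    x = [rng.gauss(0, 1) for _ in range(d_in)]
    y = [rng.gauss(0, 1) for _ in range(d_in)]
    ratio = dist(project(R, x), project(R, y)) / dist(x, y)
    print(f"distance ratio after/before projection = {ratio:.2f}")
```

ML-ARP's contribution, per the abstract, is to iteratively adapt such projections to the multi-label supervision rather than drawing them once at random.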
[2138] vixra:1906.0377 [pdf]
Zero Points of Riemann Zeta Function
In this article, we assume that the Riemann zeta function equals the Euler product at the non-zero points of the Riemann zeta function. From this assumption we prove that there are no zero points of the Riemann zeta function ζ(s) in Re(s) > 1/2. We apply proof by contradiction.
[2139] vixra:1906.0374 [pdf]
A Simple Proof for Catalan's Conjecture
Catalan's Conjecture was first made by Belgian mathematician Eugène Charles Catalan in 1844, and states that 8 and 9 (2^3 and 3^2) are the only consecutive powers, excluding 0 and 1. That is to say, that the only solution in the natural numbers of a^x - b^y=1 for a,b,x,y > 1 is a=3, x=2, b=2, y=3. In other words, Catalan conjectured that 3^2-2^3=1 is the only nontrivial solution. It was finally proved in 2002 by number theorist Preda Mihailescu making extensive use of the theory of cyclotomic fields and Galois modules.
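The statement is easy to check numerically on a small range; the following brute-force sketch (no part of Mihailescu's proof, just a sanity check) enumerates perfect powers and looks for consecutive pairs:

```python
# Brute-force check of Catalan's conjecture (Mihailescu's theorem) on a bounded range:
# collect all perfect powers a^x with a, x > 1 up to a limit, then look for
# consecutive pairs (n, n+1) that are both perfect powers.

def perfect_powers(limit):
    """Set of all a^x <= limit with a >= 2 and x >= 2."""
    powers = set()
    a = 2
    while a * a <= limit:
        v = a * a
        while v <= limit:
            powers.add(v)
            v *= a
        a += 1
    return powers

def consecutive_power_pairs(limit):
    p = perfect_powers(limit)
    return sorted((n, n + 1) for n in p if n + 1 in p)

if __name__ == "__main__":
    print(consecutive_power_pairs(10**6))  # → [(8, 9)]
```

Within any such bound the only pair found is (8, 9), i.e. 2^3 and 3^2, consistent with the theorem.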
[2140] vixra:1906.0352 [pdf]
Double Slit Interference and Doppler Effect
The double-slit interference shows that the product of the wavelength and the distance from the slit plate to the projection screen is conserved in all inertial reference frames. This conservation ensures that the observed wavelength in any inertial reference frame is identical to the original wavelength in the rest frame of the light source. According to the Doppler effect, the observed frequency depends on the choice of inertial reference frame. With the same wavelength but different frequency, the speed of light is different in a different inertial reference frame.
[2141] vixra:1906.0351 [pdf]
Beta Decay Emits No Neutrino
The 1927 Ellis-Wooster calorimetry experiment was an attempt to resolve the controversy over the continuous energy spectrum of beta decay. A Radium E source was placed within a calorimeter in order to capture and measure the heat generated by beta decay. If the beta decay energy is assumed to be quantized, the captured heat energy should match the maximum spectrum energy of 1.05 MeV, provided the calorimeter captured all the disintegration energy. The result of the experiment gave the captured average heat of beta decay as 350,000 eV instead of the expected 1.05 MeV. The 350,000 eV was accepted as a match to the average spectrum energy of 390,000 eV. The experiment thus indicated that some energy escaped the Ellis-Wooster calorimeter - hence the notion of "missing energy". The thesis of this paper is that the conclusion of the Ellis-Wooster experiment depends on whether the heat of calorimetry is consistent with relativistic kinetic energy or with classical kinetic energy. The spectrum energy used by the experiment was based on relativistic energy. If the values are converted to classical energy, then the maximum spectrum energy would only be 230,000 eV and the average 120,000 eV. The captured heat was much greater than the average of 120,000 eV. This reinterpretation would dismiss the notion of any missing energy in the experiment. The question of whether there was any missing energy is related to whether physical reality is consistent with special relativity or with Newtonian mechanics. The basis upon which Wolfgang Pauli proposed his 1930 neutrino hypothesis was the conclusion of the 1927 Ellis-Wooster experiment, which supposedly supported the idea of "missing energy". The neutrino and current neutrino physics would remain if special relativity is found to be the correct mechanics representing the physical world. On the other hand, if Newtonian mechanics is found to be correct, then all of neutrino physics would have to be dismissed.
The one experiment that could decide on the issue is to determine the maximum speed with which beta particles are ejected in beta decay using the direct time-of-flight method. If Newtonian mechanics is correct, then there would be beta particles found to go beyond the speed of light; otherwise, it would be experimental evidence supporting special relativity. The result of this experiment would settle unequivocally the question concerning the nature of physical reality. But to date, this experiment has not been carried out.
[2142] vixra:1906.0336 [pdf]
A Michelson-Morley Type Experiment Should be Performed in Low Earth Orbit and Interplanetary Space
This paper supports those who have proposed that a Michelson-Morley type experiment (MMX) be performed in outer space. It predicts results that will falsify the foundational postulates of Einstein's relativity and it explains why these unexpected results are predicted. The prediction is that a Michelson-Morley type experiment performed in low Earth orbit will show an unambiguous non-null result with a fringe or frequency variation proportional to the square of its orbital velocity (7.6 km/sec for a 500 km orbital altitude). If performed in interplanetary space, the result will be equivalent to the spacecraft's orbital velocity around the Sun (∼ 30 km/sec). These predictions are based on an alternative ether concept proposed by the late Prof. Petr Beckmann in 1986 and independently developed by the late Prof. Ching-Chuan Su in 2000. Prof. Su called it the local-ether model. It explains that the reason terrestrial MMX type experiments have reported null results is not because there is no "ether-wind" to detect; it is because the actual value of the "ether-wind" is due only to the velocity of Earth's rotation at the latitude of the laboratory (464 cos θ meters/sec). This is too small for even the most sensitive recent versions of the MMX to unambiguously detect. Finally we discuss accomplishing the experiment with private funding.
[2143] vixra:1906.0329 [pdf]
Some Conjectures On Inequalities In Operator Axioms
The Operator axioms have deduced number systems. In this paper, we conjecture some inequalities in Operator axioms. The general inequalities show the value of Operator axioms.
[2144] vixra:1906.0318 [pdf]
Asymptotic Closed-form Nth Zero Formula for Riemann Zeta Function
Assuming the Riemann Hypothesis to be true, we propose an asymptotic closed-form formula for the imaginary part of the non-trivial zeros of the Riemann Zeta Function.
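For context, a widely known asymptotic for the height of the $n$-th zero — not necessarily the formula proposed in this paper — uses the Lambert $W$ function: $t_n \approx 2\pi(n - 11/8)/W\!\big((n - 11/8)/e\big)$. A self-contained sketch, with $W$ computed by Newton's method:

```python
from math import e, exp, log, pi

def lambert_w(x, tol=1e-12):
    """Principal branch of W(x) for x > 0, via Newton's method on w*e^w = x."""
    w = log(x + 1.0)  # reasonable starting guess for x > 0
    for _ in range(100):
        ew = exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def zeta_zero_height(n):
    """Known asymptotic (not the paper's formula) for the imaginary part of the
    n-th nontrivial zero: t_n ~ 2*pi*(n - 11/8) / W((n - 11/8)/e), valid for n >= 2."""
    m = n - 11.0 / 8.0
    return 2.0 * pi * m / lambert_w(m / e)

if __name__ == "__main__":
    # actual t_100 = 236.5242...; the asymptotic lands within about 1
    print(zeta_zero_height(100))
```

The approximation error shrinks as n grows, which is what "asymptotic" means here: the formula is not exact for any fixed zero but approaches the true heights.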
[2145] vixra:1906.0302 [pdf]
A Trigonometric Proof of Oppenheim’s and Pedoe Inequality
This problem first appeared in the American Mathematical Monthly in 1965, proposed by Sir Alexander Oppenheim. As a matter of curiosity, the American Mathematical Monthly is the most widely read mathematics journal in the world. On the other hand, Oppenheim was a brilliant mathematician, and for the excellence of his work in mathematics he obtained the title of “Sir”, given by the English to English citizens who stand out on the national and international stage. Oppenheim is better known in the academic world for his contribution to the field of Number Theory known as the Oppenheim Conjecture.
[2146] vixra:1906.0295 [pdf]
Expanding Universe from Weyl Cosmology and Asymptotic Safety in Quantum Gravity
A study of the simplest Jordan-Brans-Dicke-like action within the context of Weyl geometry, combined with the findings of Weinberg’s Asymptotic Safety program in quantum gravity, leads to a plethora of nice numerical results: (i) singling out the quartic potential from all the others; (ii) having (Anti) de Sitter space as the most natural solution; (iii) furnishing the value of the observed vacuum energy density at the Hubble scale, $3/(8 \pi G_N R_H^2) \sim 10^{-122} M_P^4$; (iv) a value of $(3/(8\pi)) M_P^4$ for the vacuum energy density at the Planck scale; (v) interpreting the “Bang” of the Big Bang as the singularity of the Weyl gauge field of dilations at t = 0 ushering in the era of inflation; and (vi) allowing the possibility that our universe is a Black Hole whose horizon coincides with the cosmological Hubble horizon. It is warranted to explore deeper the interplay among Weyl geometry, Asymptotic Safety and Maldacena’s AdS/CFT correspondence (holographic renormalization group flow). Also relevant is the work by Wetterich on the role of dilatation symmetry in higher dimensions and the vanishing of the cosmological constant. Last, but not least, we should also consider the implications of Penrose’s Conformal Cyclic Cosmology and Nottale’s Scale Relativity Theory with the key findings of this work.
[2147] vixra:1906.0278 [pdf]
A Trigonometric Proof of Oppenheim’s Inequality
This problem first appeared in the American Mathematical Monthly in 1965, proposed by Sir Alexander Oppenheim. As a matter of curiosity, the American Mathematical Monthly is the most widely read mathematics journal in the world. On the other hand, Oppenheim was a brilliant mathematician, and for the excellence of his work in mathematics he obtained the title of “Sir”, given by the English to English citizens who stand out on the national and international stage. Oppenheim is better known in the academic world for his contribution to the field of Number Theory known as the Oppenheim Conjecture.
[2148] vixra:1906.0275 [pdf]
Perfect Contrast Cannot be Obtained in the Electron Double-Slit Experiment
Conventionally, the wave of a particle passing through the double slit is assumed to be a plane wave. In this research, we consider that the interference fringes built up through the double slit have different amplitudes in the case of electrons and in the case of photons. The difference between the two fringe patterns lies in the troughs of the waves. It is hypothesized that the amplitudes of the waves passing through the left and right slits are not equal in the double-slit experiment with electrons. Computer simulations were performed and the results support this hypothesis. The concept that waves of different amplitudes pass through the double slit reasonably leads to the notion that two spinor particles pass through each slit.
[2149] vixra:1906.0258 [pdf]
Graph Signal Processing: Towards the Diffused Spectral Clustering
Graph signal processing is an emerging field of research. When the structure of signals can be represented as a graph, it allows their inherent structure to be fully exploited. It has been shown that the normalized graph Laplacian matrix plays a major role in the characterization of a signal on a graph. Moreover, this matrix plays a major role in clustering large data sets. In this paper, we present diffused spectral clustering: a novel handwritten-digit clustering algorithm based on the properties of the normalized graph Laplacian. It is a clever combination of a graph feature-space transformation and the spectral clustering algorithm. Experimentally, our proposal outperforms the other algorithms of the state of the art.
[2150] vixra:1906.0257 [pdf]
Isotropy Of Light In Reference Frame
The speed of light is identical in all directions in the rest frame of the light source. In a different inertial reference frame, the direction of light may change due to the motion of the light source. The speed of light in the longitudinal direction of the motion of the light source is compared to the speed of light in the transverse direction. The result shows that these two speeds are equal only if the speed of the light source is greater than the speed of light.
[2151] vixra:1906.0245 [pdf]
The Contributions of the Gallo Team and the Montagnier Team to the Discovery of the AIDS Virus
In this paper I review the main works of the teams headed by Robert Gallo and Luc Montagnier which led to the discovery of the HIV retrovirus and to the blood test with which one can prove HIV infection. I show that this discovery, which saved millions of human lives (and perhaps the survival of mankind), was made possible only (i) because Gallo's team discovered the T-cell lymphocyte growth factor, with which they were able to discover the first retrovirus that infects humans (HTLV-I), and because of their hypothesis that AIDS is caused by a retrovirus, and (ii) because Montagnier's team used an antibody against alpha interferon in order to enhance retrovirus production, with which they were able to discover the HIV retrovirus, and because of their examination and blood test that gave evidence that HIV causes AIDS. Their examination was improved by the Gallo team, who proved beyond doubt that HIV is the cause of AIDS. I leave open the question whether Gallo deserved the Nobel Prize or whether the Nobel committee's decision to award the prize only to Montagnier and Barre-Sinoussi was correct.
[2152] vixra:1906.0243 [pdf]
Speed and Measure Theorems Related to the Lonely Runner Conjecture
We prove an important new result on this problem: Given any epsilon > 0 and k >= 5, and given any set of speeds s_1 < s_2 < ... < s_k, there is a set of speeds v_1 < v_2 < ... < v_k for which the lonely runner conjecture is true and for which |s_i - v_i| < epsilon. We also prove some measure theorems.
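The statement being approximated can be checked numerically for a small speed set. A minimal sketch (a brute-force grid search illustrating the conjecture's claim that some time leaves every runner at distance at least 1/(k+1) from the stationary runner; this is an illustration, not a proof):

```python
def lonely_time_exists(speeds, steps=5000, t_max=1.0):
    """Search a time grid in (0, t_max) for an instant at which every runner
    with the given distinct nonzero speeds is at circular distance >= 1/(k+1)
    from the stationary runner at position 0 on the unit circle."""
    k = len(speeds)
    gap = 1.0 / (k + 1)
    for i in range(1, steps):
        t = t_max * i / steps
        # (s * t) % 1.0 is the runner's position; being in [gap, 1-gap]
        # means its distance from 0 (mod 1) is at least gap
        if all(gap <= (s * t) % 1.0 <= 1.0 - gap for s in speeds):
            return True
    return False
```

For example, `lonely_time_exists((1, 3, 4))` finds a lonely time near t = 0.42, where the three runners sit at roughly 0.42, 0.26 and 0.68 on the unit circle, all at least 1/4 away from the origin.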
[2153] vixra:1906.0214 [pdf]
Exact Periodic Solutions of Truly Nonlinear Oscillator Equations and Quadratic Liénard-Type Equations
The present research contribution is devoted to solving the integrability problem of Liénard type differential equations. It is shown that such a problem may be solved by nonlocal transformation for some classes of equations. By doing so, it is observed that the integrability of a class of restricted Duffing type equations with integral power or fractional power nonlinearity may be secured by that of a general class of quadratic Liénard type differential equation, and vice versa. Such a restricted Duffing type equation is also shown to be closely related to a quadratic Liénard type equation for which exact and explicit general solution may be computed. In this context it has been shown that exact and general periodic solutions may be computed for these two classes of restricted Duffing equations and quadratic Liénard type equations. The comparison of obtained solutions with some well-known results is carried out in some cases.
[2154] vixra:1906.0213 [pdf]
Descriptions of Elementary Particles plus Dark Matter plus Dark Energy and Explanations for Some Related Data
We suggest united models and specific predictions regarding elementary particles, dark matter, aspects of galaxy evolution, dark energy, and aspects of the cosmology timeline. Results include specific predictions for new elementary particles and specific descriptions of dark matter and dark energy. Some of our modeling matches known elementary particles and extrapolates to predict other elementary particles, including bases for dark matter. Some modeling explains observed ratios of effects of dark matter to effects of ordinary matter. Some models suggest aspects of galaxy formation and evolution. Some modeling correlates with eras of increases or decreases in the observed rate of expansion of the universe. Our modeling framework features mathematics for isotropic quantum harmonic oscillators and provides a framework for creating physics theories. Some aspects of our approach emphasize existence of elementary particles and de-emphasize motion. Some of our models complement traditional quantum field theory and, for example, traditional calculations of anomalous magnetic dipole moments.
[2155] vixra:1906.0212 [pdf]
Integrability Analysis of a Generalized Truly Nonlinear Oscillator Equation
The integrability of a general class of Liénard type equations is investigated through equation transformation theory. In this way it is shown that such a class of Liénard equations can generate a generalization of some interesting truly nonlinear oscillator equations like the cube and fifth root differential equations. It has then become possible to compute the exact and general solution to the generalized truly nonlinear oscillator equation. Under an appropriate choice of initial conditions, exact and explicit solution has been obtained in terms of Jacobi elliptic functions.
[2156] vixra:1906.0211 [pdf]
Sparse Ensemble Learning with Truncated Convolutional Autoencoders for Cleaning Stained Documents
This paper focuses on how to extract clean text from stained documents. Stains can make a document very difficult to read, and previous work has shown that no single modelling technique, whether from image processing or machine learning, performs well in all cases. Ensemble techniques combine many models and achieve a much lower error than any single model would, but the features used by the different models should be sparse, or non-overlapping enough, to guarantee the independence of each modelling technique. XGBoost is one such ensemble technique; in comparison, gradient boosting machines are very slow, which makes it impractical to combine more than three models within a reasonable execution time. This work combines truncated convolutional autoencoders, with sparsity taken into account, with machine learning and image processing models using XGBoost, so that the whole ensemble achieves a much lower error than single modelling techniques. Experiments are carried out on the public NoisyOffice dataset published in the UCI Machine Learning Repository; this dataset contains training, validation and test sets with a variety of noisy greyscale images, some with ink spots, coffee stains and creased documents. The evaluation metric is RMSE (Root Mean Squared Error), used to show the performance improvement on the variety of badly corrupted images.
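The core claim, that combining models with sufficiently independent errors reduces the overall error, can be illustrated in a few lines. A minimal sketch with two hypothetical denoisers whose errors are deterministic and only partially correlated (this is not the paper's XGBoost pipeline, only the underlying ensemble principle):

```python
import math

def rmse(pred, target):
    """Root mean squared error between two equal-length sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target))

n = 10000
truth = [0.5] * n   # "clean" pixel values of a stain-free patch (illustrative)

# Two hypothetical denoisers with deterministic, partially decorrelated errors
model_a  = [0.5 + 0.2 * math.sin(3 * i) for i in range(n)]
model_b  = [0.5 + 0.2 * math.sin(3 * i + 2) for i in range(n)]
ensemble = [(pa + pb) / 2 for pa, pb in zip(model_a, model_b)]

rmse_a, rmse_b = rmse(model_a, truth), rmse(model_b, truth)
rmse_e = rmse(ensemble, truth)   # strictly below both individual errors
```

Because the two error patterns are out of phase, their average partially cancels, and the ensemble RMSE drops well below either model's RMSE.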
[2157] vixra:1906.0206 [pdf]
The Magnitude of Electromagnetic Time Dilation.
Theories unifying gravity and electromagnetism naturally give rise to the question of whether there might be a time dilation associated with the electromagnetic 4-potential. We show here that the magnitude of EM time dilation can be computed from elementary considerations that are independent of specific unified theories. We further show that the electrostatic part of the effect is well within reach of experiment, while the magnetic part is not.
[2158] vixra:1906.0199 [pdf]
A Concise Proof for Beal's Conjecture
In this paper, we show how a^x - b^y can be expressed as a binomial expansion (to an indeterminate power, z) and use it as the basis for a proof of the Beal Conjecture.
[2159] vixra:1906.0185 [pdf]
Division by Zero Calculus in Multiply Dimensions and Open Problems (An Extension)
In this paper, we will introduce the division by zero calculus in multiple dimensions in order to present some wide and new open problems, as seen from the one-dimensional case.
[2160] vixra:1906.0184 [pdf]
Core Issues in "Foundations of QFT: 2019 Annual Philosophy of Physics Conference"
This year's workshop is focused on three core issues. Paraphrasing and rearranging their order, we examine optimal mathematical formalisms for the wavefunction and its interactions (particularly in light of the problem of renormalization), phenomenological foundations, and relativistic extensions of quantum mechanics.
[2161] vixra:1906.0163 [pdf]
Maximal Generalization of Lanczos' Derivative Using One-Dimensional Integrals
The derivative of a function can be expressed in terms of an integration over a small neighborhood of the point of differentiation, the so-called differentiation-by-integration method. In this text a maximal generalization of existing results which use one-dimensional integrals is presented, together with some interesting non-analytic weight functions.
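The classical Lanczos formula that such generalizations build on is f'(x) ≈ (3/(2h³)) ∫₋ₕʰ t f(x+t) dt. A minimal numerical sketch (the midpoint quadrature and the test function are illustrative choices, not the paper's construction):

```python
import math

def lanczos_derivative(f, x, h, n=2000):
    """Lanczos 'differentiation by integration':
       f'(x) ~ (3 / (2 h^3)) * integral_{-h}^{h} t * f(x + t) dt,
       with the integral approximated by a midpoint rule on n subintervals."""
    dt = 2.0 * h / n
    total = 0.0
    for k in range(n):
        t = -h + (k + 0.5) * dt
        total += t * f(x + t)
    return 3.0 / (2.0 * h ** 3) * total * dt

# Example: derivative of sin at x = 0.5 (exact value is cos(0.5))
d_est = lanczos_derivative(math.sin, 0.5, h=1e-2)
```

The truncation error of the formula itself is O(h²), so with h = 0.01 the estimate agrees with cos(0.5) to about five decimal places.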
[2162] vixra:1906.0162 [pdf]
Photons and Independent E/M Waves
The independent E/M waves are created during the electron oscillations at an emission antenna. These waves do not compose the constant photon length, in contrast to the fundamental E/M waves of the photon. Additionally, the Compton phenomenon is interpreted, while the atomic orbitals are standing waves formed by the self-superposition of the motion wave of the electrons.
[2163] vixra:1906.0148 [pdf]
On Some Isoperimetric Inequalities for Dirichlet Integrals; Green's Function and Dirichlet Integrals
In this paper, as a direct application of Q. Guan's result on the conjugate analytic Hardy $H_2$ norm, we will derive a new type of isoperimetric inequality for Dirichlet integrals of analytic functions.
[2164] vixra:1906.0101 [pdf]
Standing Wave And Doppler Effect
The harmonic mode of a standing wave requires the number of nodes to be conserved in all inertial reference frames. The half wavelength is proportional to the width of the microwave cavity. The same cavity width is observed by all stationary observers in the same inertial reference frame. All observers observe the same wavelength from the standing wave in a moving cavity. According to the Doppler effect, an observer will detect a higher frequency if the microwave cavity is approaching, and a lower frequency if it is receding. With the same wavelength but different frequencies, the speed of the microwave in the standing wave is different for different observers.
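The frequency shifts invoked above follow the standard longitudinal relativistic Doppler factors. A minimal sketch (the 2.45 GHz rest-frame mode frequency and the 0.1c cavity speed are illustrative assumptions, not values from the paper; the paper's conclusion about wave speed is the author's own and is not reproduced here):

```python
import math

f_src = 2.45e9   # cavity mode frequency in its rest frame, Hz (assumed value)
beta  = 0.1      # cavity speed as a fraction of c (assumed value)

# Longitudinal relativistic Doppler factors
f_approach = f_src * math.sqrt((1 + beta) / (1 - beta))   # cavity approaching
f_recede   = f_src * math.sqrt((1 - beta) / (1 + beta))   # cavity receding
```

The approaching observer sees a blueshift and the receding one a redshift, with the geometric mean of the two observed frequencies equal to the rest-frame frequency.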
[2165] vixra:1906.0045 [pdf]
Proposal for Definition and Implementation of a Unified Geodetic Referential for North Africa
In this paper, we present the details of the proposal for the definition and implementation of a unified geodetic referential for the countries of North Africa: the choice of the system of the referential, the steps of the realization of the project, and the training of the geodesists working on the project.
[2166] vixra:1906.0025 [pdf]
A New Proof of the ABC Conjecture
In this paper, using the recent result that $c<rad(abc)^2$, we will give the proof of the $abc$ conjecture for $\epsilon \geq 1$, then for $\epsilon \in ]0,1[$. We choose the constant $K(\epsilon)$ as $K(\epsilon)=e^{\frac{1}{\epsilon^2}}$. Some numerical examples are presented.
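The inequality being claimed, c < K(ε)·rad(abc)^(1+ε) with K(ε) = e^(1/ε²), can be spot-checked on concrete triples. A minimal sketch (the triples below are standard examples chosen here for illustration, not the paper's):

```python
import math

def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def abc_bound_holds(a, b, eps):
    """Check c < K(eps) * rad(abc)^(1 + eps), K(eps) = exp(1/eps^2),
    for the triple (a, b, c = a + b)."""
    c = a + b
    return c < math.exp(1.0 / eps ** 2) * rad(a * b * c) ** (1.0 + eps)
```

For instance, for (1, 8, 9) one has rad(72) = 6, and for the high-quality triple (1, 4374, 4375) one has rad(abc) = 210; the bound holds in both cases for ε = 1 and ε = 0.5 respectively.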
[2167] vixra:1905.0614 [pdf]
Nature Works the Way Number Works
Based on Euler's formula, a concept of a dually-unit or d-unit circle is discovered. Continuing, the Riemann hypothesis is proved from different angles, Zeta values are renormalised to remove the poles of the Zeta function, and relationships between numbers and primes are discovered. Other unsolved prime conjectures are also proved with the help of theorems of numbers and number theory. The imaginary number i can be defined in such a way that it eases the complex logarithm without needing branch cuts. Pi can also be a base to the natural logarithm and complement the complex logarithm. A grand integrated scale is discovered which can reconcile the scale difference between the very big and the very small. Complex constants derived from the complex logarithm, following the Goldbach partition theorem and Euler's sum-to-product and product-to-unity, can explain a lot of mysteries in the universe.
[2168] vixra:1905.0596 [pdf]
Ancillary Inflation and the Theory of General Relativity
This is a review of the main work on the theoretical and experimental issues in the study of the potential for inflation in the framework of classical general relativity. The major subjects considered include the extension of general relativity to a universe with curved spacetime, and the implications of new and innovative methods for the special and general theories of relativity.
[2169] vixra:1905.0588 [pdf]
Quantum Model of Mass Function
Dempster-Shafer (D-S) evidence theory, an extension of classical probability, has been used in many fields due to its flexibility and effectiveness in modeling uncertainties. Recently, quantum probability, which can also express uncertainty, has been applied in many fields due to the existence of interference; especially for human decision and cognition, interference can better model the decision process. In order to expand the applications of D-S evidence theory, this paper proposes a quantum model of the mass function which can take interference into account. In the proposed method, the quantum mass function is represented using Euler's formula. The paper also discusses some operations in the quantum model of the mass function, as well as the relationship between the quantum mass function and the classical mass function by means of some numerical examples. The classical mass function is the special case in which there is no interference in the quantum mass function.
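The interference term such a complex-valued mass function picks up can be illustrated in a toy example (the amplitudes and phases below are invented for illustration and are not the paper's construction): when two complex amplitudes are superposed, the squared magnitude gains a cross term 2|m₁||m₂|cos(Δφ), which vanishes in the classical special case.

```python
import math, cmath

def interference(m1, m2):
    """Cross term in |m1 + m2|^2 = |m1|^2 + |m2|^2 + 2|m1||m2|cos(dphi)."""
    return abs(m1 + m2) ** 2 - abs(m1) ** 2 - abs(m2) ** 2

# Two complex "mass" amplitudes of equal classical weight |m|^2 = 0.5
a      = cmath.rect(math.sqrt(0.5), 0.0)
b      = cmath.rect(math.sqrt(0.5), math.pi / 3)   # relative phase pi/3
b_orth = cmath.rect(math.sqrt(0.5), math.pi / 2)   # relative phase pi/2

i_quantum   = interference(a, b)        # nonzero interference term (= 0.5)
i_classical = interference(a, b_orth)   # vanishes: classical additivity
```

With a relative phase of π/2 the cross term is zero and the squared magnitudes simply add, recovering the classical case described in the abstract.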
[2170] vixra:1905.0552 [pdf]
About One Geometric Variation Problem
Translation of the article of Emanuels Grinbergs, ОБ ОДНОЙ ГЕОМЕТРИЧЕСКОЙ ВАРИАЦИОННОЙ ЗАДАЧЕ (On a Geometric Variational Problem), published in LVU Zinātniskie darbi, 1958, volume XX, issue 3, pp. 153-164, in Russian: https://dspace.lu.lv/dspace/handle/7/46617.
[2171] vixra:1905.0546 [pdf]
Consideration of the Riemann Hypothesis
I considered the Riemann hypothesis. At first, the purpose was to prove it, but I could not. The middle of the paper is written as a proof, but the hypothesis has not been proved at all. (The calculation formula is also written, but the real value 0.5 was not obtained.) The non-trivial zero values match perfectly in the formula of this paper; however, the formula did not reach the real value 0.5. It only reaches a pole near the real value 0.5.
[2172] vixra:1905.0513 [pdf]
Proton Charge Radius Distortion in Dirac Hydrogen by "Electron Zitterbewegung"
The commutator of the Dirac free-particle's velocity operator with its Hamiltonian operator is nonzero and independent of Planck's constant, which starkly violates the quantum correspondence-principle requirement that commutators of observables must vanish when Planck's constant vanishes, as well as violating the extended Newton's First Law principle that relativistic free particles do not accelerate. The consequent nonphysical particle "zitterbewegung" is of course absent altogether when the natural relativistic free-particle square-root Hamiltonian operator, which is the transparent consequence of the free particle's Lorentz-covariant energy-momentum, replaces the free-particle Dirac Hamiltonian. The energy spectrum of the pathology-free relativistic square-root free-particle Hamiltonian is, however, matched perfectly by the positive-energy sector of the Dirac free-particle Hamiltonian's energy spectrum. But when a hydrogen type of potential energy is added to the free particle Dirac Hamiltonian, Foldy-Wouthuysen unitary transformation of the result reveals a "Darwin term" in its positive-energy sector which stems from nonphysical "zitterbewegung"-smearing of that potential energy. This physically nonexistent smearing of the potential energy can alternatively be viewed as having been produced by physically nonexistent smearing of its proton charge density source, which using the Dirac theory for data analysis erroneously compensates, resulting in a misleadingly contracted impression of the proton's charge radius.
[2173] vixra:1905.0511 [pdf]
A Solution of the Laplacian Using Geodetic Coordinates
Using the geodetic coordinates $(\varphi,\lambda,h)$, we give the expression of the laplacian $\Delta V=\frac{\partial^2 V}{\partial x ^2}+\frac{\partial^2 V}{\partial y ^2}+\frac{\partial^2 V}{\partial z ^2}$ in these coordinates. A solution of $\Delta V=0$ of the type $V=f(\lambda).g(\varphi,h)$ is given. The partial differential equation satisfied by $g(\varphi,h)$ is transformed into an ordinary differential equation in a new variable $u=u(\varphi,h)$.
[2174] vixra:1905.0507 [pdf]
In-vivo and In-vitro Blood Glucose Measurement using Surface Waves
In this paper, we demonstrate with in-vivo and in-vitro experimental proof that, when a measurement is conducted predominantly in surface modes, a positive association between the glucose concentration and the blood permittivity is noticeably observed. During an in-vivo experiment, a cylindrical volume of meat tissue with a known blood glucose level was first formed with the help of a suction aspirator. The exterior wall of the suction aspirator was then wound with one turn of a gelatine-coated copper wire, referred to hereafter as the Goubau line sensor. The two terminals of the Goubau line sensor were then connected to ports 1 and 2 of a Vector Network Analyzer for measuring the S-parameters. The results of our in-vivo experiment show that, in the absence of any significant leaky-wave radiation from the sensor, the measured blood glucose concentration positively correlates with the measured S21 parameters in a highly reproducible manner. Keywords: Blood Glucose, Surface Waves, Diabetes, Leaky Waves, Goubau line, Smoking cessation
[2175] vixra:1905.0505 [pdf]
Multifaceted Approaches to a Berkeley Problem: Part 2
We once presented a few ways to solve a Berkeley problem without paying attention to initial values, which we now take into account in this sequel.
[2176] vixra:1905.0485 [pdf]
Approximation of Sum of Harmonic Progression
Background: The harmonic sequence and the infinite harmonic series have been a topic of great interest to mathematicians for many years. The sum of the infinite harmonic series has been linked to the Euler-Mascheroni constant. Euler demonstrated that, although the sum diverges, it can be expressed as the Euler-Mascheroni constant added to the natural log of infinity. Utilizing the Euler-Maclaurin method, we can extend the expression to approximate the sum of a finite harmonic series with a fixed first term and a variable last term. However, a natural extension is not possible for a variable value of the first term or of the common difference of the reciprocals. Aim: The aim of this paper is to create a formula that approximates the sum of a harmonic progression for a variable first term and common difference, such that the resulting formula is fundamentally similar to Euler's equation for the constant and to the result of the Maclaurin method. Method: The principal result of the paper is derived using approximation theory. The assertion that the graph of a harmonic progression closely resembles the graph of y=1/x is key. The subsequent results come from a comparative view of Euler's expression and from numerical manipulations of the Euler-Mascheroni constant. Results: We created a general formula that approximates the sum of a harmonic progression with variable components. Its fundamental nature is apparent because we can derive the results of the Maclaurin method from our results.
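The fixed-first-term baseline that the Background describes, Euler's expression with the first Euler-Maclaurin correction, can be verified numerically (this is the classical H_n ≈ ln n + γ + 1/(2n) approximation, not the paper's generalized formula for arbitrary first term and common difference):

```python
import math

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def harmonic_sum(n):
    """Exact partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_approx(n):
    """Euler's expression plus the first Euler-Maclaurin correction:
       H_n ~ ln(n) + gamma + 1/(2n); error is O(1/n^2)."""
    return math.log(n) + GAMMA + 1.0 / (2.0 * n)
```

At n = 1000 the approximation already matches the exact partial sum to better than one part in a million, which is the kind of agreement the paper's generalized formula aims to retain with a variable first term and common difference.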
[2177] vixra:1905.0474 [pdf]
Corrigendum to "Polyomino Enumeration Results. (Parkin et al., SIAM Fall Meeting 1967)"
This work provides a Java program which constructs free polyominoes of size n sorted by width and height of the convex hull (i.e., its rectangular bounding box). The results correct counts for 15-ominoes published in the 1967 proceedings of the SIAM Fall Meeting, and extend them to 17-ominoes and partially to even larger polyominoes. [vixra:1905.0474]
[2178] vixra:1905.0470 [pdf]
The Robustness of the Spectral Properties of the V4 Wave Function
A simple model known as the V4 wave function, which describes the in-medium oscillations of space-time, is studied. The model is employed for a detailed study of the spectral properties of the V4 wave function, and one of the upcoming projects is to use it to study the spectral properties of the strongly interacting quark-gluon system.
[2179] vixra:1905.0468 [pdf]
Crazy proof of Fermat's Last Theorem
This paper magically shows a very interesting and simple proof of Fermat's Last Theorem. The proof identifies sufficient derivations of equations that hold the statement true and describes contradictions in them to satisfy the theorem. If Fermat had a proof, his proof was most probably similar to this one. The proof does not require any higher field of mathematics and can be understood at the high school level. It uses only modular arithmetic, factorization and some logical statements.
[2180] vixra:1905.0459 [pdf]
Collision Space-Time. Unified Quantum Gravity. Gravity is Lorentz Symmetry Break Down at the Planck Scale.
We have recently presented a unified quantum gravity theory [23]. Here we extend that work and present an even simpler version of the theory. For about a hundred years, modern physics has not been able to build a bridge between quantum mechanics and gravity. However, a solution may be found here; we present our quantum gravity theory, which is rooted in indivisible particles, where matter and gravity are related to collisions and can be described by collision space-time. In this paper, we also show that we can formulate a quantum wave equation rooted in collision space-time, which is equivalent to mass and energy. The beauty of our theory is that most of the main equations that currently exist in physics are not changed (in terms of predictions), except at the Planck scale. The Planck scale is directly linked to gravity, and gravity is, surprisingly, actually a Lorentz symmetry as well as a form of Heisenberg uncertainty breakdown at the Planck scale. Our theory gives a dramatic simplification of many physics formulas without altering the output predictions. The relativistic wave equation, the relativistic energy-momentum relation, and Minkowski space can all be represented by simpler equations when we understand mass at a deeper level. This is not attained at a cost, but is rather a reflection of the benefit of having gravity and quantum mechanics unified under the same theory.
[2181] vixra:1905.0455 [pdf]
Particle Physics Based on ``focal Point'' Representation of Particles
The present work is based on a model where subatomic particles are represented as focal points of rays of Fundamental Particles (FPs) that move from infinity to infinity. FPs are emitted by the focal point and at the same time regenerate it. Interactions between subatomic particles are the product of the interactions of the angular momenta of the FPs. The interaction between two charged subatomic particles tends to zero as the distance between them tends to zero, allowing the zero of the potential energy to be placed at distance zero. Atomic nuclei can thus be represented as swarms of electrons and positrons that neither attract nor repel each other. As atomic nuclei are composed of nucleons, which are composed of quarks, the quarks can also be seen as swarms of electrons and positrons. This allows a completely new interpretation of the interactions between quarks and the corresponding energy states.
[2182] vixra:1905.0452 [pdf]
Nature of Light: What Hidden Behind Young's Double-Slit Experiment?
We experimentally prove that the famous single- and double-slit experiments are in fact a scattered-light phase transition at the slit edges, rather than, as conventionally viewed, a transmitted-light effect of the slits. The nature of the wave-particle duality of light quanta can be well understood with the help of the hypothesis of quantized chiral photons having an intrinsic dual-energy cyclic exchange property. Within the suggested theoretical framework, the experimental diffraction pattern of a single slit is analytically determined and numerically confirmed.
[2183] vixra:1905.0425 [pdf]
Uncover the Logic of Fine Structure Constant
In this paper, the author introduces two theories: the energy spiral as a unified field, and dual resonance for energy transfer. This paper is based on these theories. Through studying the formation process of particles, it is indicated that vacuum particles and 12 basic particles exist in the vacuum, which become protons or electrons. In the hydrogen atom, the external vacuum energy acts on the energy spiral of the electron; the revolution energy and rotation energy of the electron are then determined, and finally the parameters of the Bohr structure are determined. A logical value of the fine structure constant is obtained: $\alpha=\sqrt[4]{2}\,\pi/512$. This value is supported by the calculation results of experimental data. Email: eastear@outlook.com, eastear@163.com
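The proposed closed form can be compared directly against the measured value. A quick numerical check (the CODATA 2018 value is used as the reference; note the closed form agrees to roughly four significant figures but is not exact):

```python
import math

alpha_claim  = 2 ** 0.25 * math.pi / 512   # the paper's proposed closed form
alpha_codata = 7.2973525693e-3             # CODATA 2018 recommended value

# Relative deviation of the closed form from the measured constant
rel_err = abs(alpha_claim - alpha_codata) / alpha_codata
```

The closed form evaluates to about 0.0072969 against the measured 0.0072974, a relative deviation of order 10^-4.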
[2184] vixra:1905.0407 [pdf]
Division by Zero Calculus and Pompe's Theorem
In this paper, we will introduce the application of the division by zero calculus to geometry; it shows the power of the new calculus.
[2185] vixra:1905.0405 [pdf]
Emergence of Cancer by Exchanging Fields of Microgravity Between Earth's DNA and Dark Dnas in Extra Dimensions
Recently, it has been shown that in the absence of gravity, microgravity lets us explore some new fields which have direct effects on the communications between cells and their growth. We show that the origin of these fields may be some DNA-like structures interior to the earth's core. These structures have a length around $10^{9}$ times the diameter of the earth and are compacted into much smaller places like the core of the earth. This compacting is very similar to the compacting of DNAs inside cells and leads to the emergence of high temperature and pressure. We estimate the temperature around the DNA-like structures and show that it is in good agreement with the predicted temperature of the core. Also, we calculate the number of microstates of DNA-like structures in microgravity. We will show that the DNA-like structures of the core exchange microstates and fields with the dark part of DNA in extra dimensions. This dark DNA includes missing genes that are needed for the animal's life, and their chemical products can be observed in the activity of the body. In microgravity, the absence of gravity lets the DNA-like structures recover more states of the dark DNAs. These extra states accelerate the production of extra cells and may lead to cancer. To show this, we injected tumor cells into two fertilized eggs and incubated them for 58 h. Then, we put one of them in a device similar to a clinostat and tried to provide the conditions of incubation in microgravity. We considered the growth of tumor cells under microgravity and compared it with normal conditions. We observed that the fields of microgravity increase the velocity of production of tumor cells. This experiment supports our theory that in the absence of gravity, communication between the DNA-like structure of the earth and dark DNA leads to an increase in the number of microstates of cancerous cells.
[2186] vixra:1905.0356 [pdf]
The Henstock-Kurzweil-Feynman-Pardy Integral in Quantum Physics
The Feynman integral is generalised so as to involve the random fluctuations of the vacuum; from this integral the generalized Schroedinger equation is derived and the energy spectrum for the Coulomb potential determined.
[2187] vixra:1905.0346 [pdf]
The Complete Theory of Everything: a Proposal
This paper presents a model that bridges the gap between general relativity and quantum mechanics by providing a new understanding of space and time. The core principle of this formulation is impermanence, the fact that energy is never static, which has non-obvious consequences at the quantum level and leads to a formulation of quantum gravity. The results allow for a new interpretation of black holes and a resolution of the black hole information paradox. The model also presents the causes leading to the Big Bang as well as the Universe evolution, offering a new perspective on galaxy formation and a natural explanation to the origin of dark matter, dark energy and the matter-antimatter asymmetry.
[2188] vixra:1905.0269 [pdf]
Zeros of Gamma
One hundred and sixty years ago, a hypothesis was raised in complex analysis which was used, in principle, to demonstrate a theory about prime numbers, but without any proof. With the passing of the years, this hypothesis has become very important, since it has multiple applications to physics, number theory, statistics, among others. In this article I present a demonstration that I consider to be the one that has been eluding us all this time.
[2189] vixra:1905.0268 [pdf]
A Detailed Formulation of the Equations of General Relativity: the Case of a Diagonal Metric
In this note, we study the Einstein equations (EE) of general relativity considering a manifold M with a diagonal metric $g_{ij}$. We calculate the expression of the components of the Ricci and Riemann tensors and the value of the scalar curvature R. Then we give the expression of the (EE): -a- for the case where $g_{ii}=g_i=g_i(x_i)$; -b- for the case where $g_1=g_1(x_1=t)$ and $g_i=g_i(t,x_i)$ for $i=2,3,4$; -c- for the case (b) with $x_4=z_0$ constant.
[2190] vixra:1905.0264 [pdf]
Disruptive Gravity
Gravity is currently understood as a space-time curvature. If gravity were a force, its quantization would be much easier. What, then, if gravity were a force able to bend space-time? We will see that gravity can be seen as such and what it would imply. We then derive a new space-time bending equation thanks to a new principle equivalent to Einstein's Equivalence Principle in low-intensity fields. Thanks to studies of quantum mechanics in curved space-time, we can then blend gravity into quantum mechanics in a coherent way. Eventually, we will see that, applying these principles to cosmology, we can explain what we currently call Dark Energy.
[2191] vixra:1905.0253 [pdf]
Theory of Tracer Diffusion in Concentrated Hard-Sphere Suspensions
A phenomenological theory of diffusion and cross-diffusion of tracer particles in concentrated hard-sphere suspensions is developed within the context of Batchelor's theory of multicomponent diffusion. Expressions for the diffusion coefficients as functions of the host particle volume fraction are obtained up to the close-packing limit. In concentrated systems the tracer diffusivity decreases because of the reduced pore space available for diffusion. Tracer diffusion, and segregation during sedimentation, ceases at a critical trapping volume fraction. The tracer diffusivity can be modelled by a Stokes-Einstein equation with an effective viscosity that depends on the pore size. The tracer cross-diffusion coefficient increases near the glass transition and diverges in the close-packed limit.
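The Stokes-Einstein picture with a crowding-dependent effective viscosity can be sketched numerically. A minimal sketch, assuming the Krieger-Dougherty viscosity model as a stand-in for the paper's pore-size-dependent effective viscosity (the tracer radius, temperature and solvent viscosity below are illustrative values):

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K

def effective_viscosity(eta0, phi, phi_max=0.64):
    """Krieger-Dougherty effective viscosity of a hard-sphere suspension
    (an assumed model, not necessarily the paper's); it diverges as
    phi -> phi_max (random close packing)."""
    return eta0 * (1.0 - phi / phi_max) ** (-2.5 * phi_max)

def tracer_diffusivity(T, a, eta0, phi):
    """Stokes-Einstein diffusivity D = kB*T / (6*pi*eta_eff*a) for a tracer
    of radius a in a host suspension at volume fraction phi."""
    return kB * T / (6.0 * math.pi * effective_viscosity(eta0, phi) * a)

# 100 nm tracer in a water-like solvent at room temperature (illustrative)
D_dilute = tracer_diffusivity(298.0, 1e-7, 1e-3, 0.0)
D_dense  = tracer_diffusivity(298.0, 1e-7, 1e-3, 0.4)
```

As the host volume fraction rises toward close packing, the effective viscosity diverges and the tracer diffusivity falls toward zero, mirroring the trapping behaviour described in the abstract.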
[2192] vixra:1905.0248 [pdf]
Solution of a Vector-Triangle Problem Via Geometric (Clifford) Algebra
As a high-school-level application of Geometric Algebra (GA), we show how to solve a simple vector-triangle problem. Our method highlights the use of outer products and inverses of bivectors.
[2193] vixra:1905.0222 [pdf]
Exploring Extra Dimensions with the Help of the DNAs of the Egg Cell and the Earth
In this research, we introduce two natural telescopes for detecting events in extra dimensions. First, we show that missing genes, which are needed to keep animals alive, could exist in extra dimensions. These genes could act like receivers or senders of radio waves and transmit information from extra dimensions into our universe. They could lead to changes in the waves radiated by egg cells and to the loss of some electrons. Thus, by considering the evolution of egg cells, we can obtain information about biological events inside extra dimensions. For example, if we put a fertilized egg and a non-fertilized egg inside an inductor and produce a magnetic field, we can observe that the non-fertilized egg acquires some properties of the fertilized egg. It seems that sperms are teleported via extra dimensions into non-fertilized eggs. However, this system can only send us reports of biological events; to consider cosmological events, we need a bigger object. We show that the earth has a system similar to the DNAs in the egg cell, but of cosmological size, which can communicate with objects in extra dimensions. This DNA-like shell is located inside the earth's core and leads to the emergence of high temperatures. Exchanging information between the earth's DNA-like shell and objects in extra dimensions leads to the production of extra matter around the core, instability in the earth's layers, and the emergence of earthquakes. Also, the DNA-like object inside the core has a direct effect on the water, ions, and charged particles around the earth. In fact, this object induces some properties into the water in clouds via extra dimensions, and for this reason rain water is very different from normal water. For example, rain water can communicate with the DNA of plants much better than other waters.
[2194] vixra:1905.0211 [pdf]
Modelling Passive Forever Churn via Bayesian Survival Analysis
This paper presents an approach to modelling passive forever churn (i.e., the probability that a user never returns to a game that does not require them to cancel it). The approach is based on parametric mixture models (Weibull, Gamma, and Log-normal) for return times. The model and data are inverted using Bayesian methods (MCMC and DIC) to obtain parameter estimates and uncertainties, and to determine the return time distribution for retained users. The inversion scheme is tested on three groups of simulated data sets and one observed data set. The simulated data are generated with each of the parametric models. Each data set is censored to six time horizons, creating 18 data sets. All data sets are inverted with all three parametric models and the DIC is used to select the return time distribution. For all data sets the true return time distribution (i.e., the one used to simulate the data) has the best DIC value; for 16 inversions the true return time distribution is found to be significantly better than the other options. For the observed data set, the scheme is able to accurately estimate the percentage of users that did return (before the game transitioned into open beta) given 14 days of observations.
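A minimal sketch of the underlying idea, assuming a single Weibull component and illustrative parameter names (the paper fits Weibull/Gamma/Log-normal mixtures by MCMC; nothing here reproduces its actual estimates):

```python
# Passive forever churn, single-Weibull sketch: a fraction p_forever of users
# never returns; the rest return after a Weibull-distributed delay.
import math

def weibull_cdf(t, shape, scale):
    """P(return time <= t) for a Weibull-distributed return time."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def prob_returned_by(t, p_forever, shape, scale):
    """P(user has returned by time t) when a fraction p_forever never returns."""
    return (1.0 - p_forever) * weibull_cdf(t, shape, scale)

# With 30% forever churners, the observed return fraction saturates at 0.7
# as the censoring horizon t grows; illustrative parameters only.
late = prob_returned_by(1000.0, 0.3, shape=1.2, scale=5.0)
```

The censoring in the paper corresponds to only observing `prob_returned_by` up to a finite horizon, which is why the mixture weight and the return-time parameters must be inferred jointly.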
[2195] vixra:1905.0208 [pdf]
Do We Really Need de Broglie’s Waves?
In this article we refer to what was already stated in [1, pp. 153-156] and, moreover, propose a particle model that excludes the de Broglie model associating a wave with each particle. Instead, we claim that each particle, and in particular the electron, is a corpuscle that is not perfectly spherical (but which could potentially have a wavy surface), to justify the same results obtained with the classic experiments brought forward to confirm de Broglie's model. There is no need, therefore, to refer to the indeterminacy principle or to that of complementarity. What would now be required is the translation of this proposal into a solid mathematical model that can make quantitative predictions matching the experimental data, provided that young physicists capable of doing so do not encounter the same ostracism encountered by others for 90 years. Finally, we propose to repeat the experiment of Merli, Missiroli and Pozzi [3] in a fog chamber or similar, to confirm the hypothesis that a single electron follows a very precise trajectory, against the widespread Copenhagen interpretation.
[2196] vixra:1905.0200 [pdf]
Reference Systems, Projection Systems
Many geodetic networks currently exist on the surface of the globe, developed as regional networks, usually each having a fundamental point where the astronomical data ($\phi$ = latitude, $\lambda$ = longitude, $Az$ = azimuth) of a reference are identified with their geodetic counterparts. The comparison of two networks and, step by step, of all connectable networks, can be done by analysing the coordinates of their common points. To this end, we can use three types of coordinates: geographical coordinates, a simple method but not very convenient for different ellipsoids; three-dimensional Cartesian coordinates, the most rigorous method in the case where the so-called geoid correction has been made; and coordinates in conformal projection. An analysis of the main usable formulas is given by the first author in this article.
[2197] vixra:1905.0196 [pdf]
Solution to the Poisson Boltzmann Equation Involving Various Spherical Geometries
The distribution of free charges within fluids or plasma is often modeled using the linearized Poisson-Boltzmann equation (PBE). However, this author has recently shown that the usual boundary conditions (BC), namely the Dirichlet condition and the Neumann condition, cannot be used to solve the PBE for certain physical reasons. This author has previously used a BC of `mixed' type to obtain the physical solution of the 1-D PBE and derived the charge density distribution $\rho_e$ within {\it rectangular} and {\it cylindrical} geometries. Here the 1-D formulae for $\rho_e$ (i) within, (ii) between and (iii) outside {\it spherical} geometries are derived. The result shows that the electric field is high at the surface of small objects immersed in an electrolyte solution. These formulae could be very useful in explaining similar physical situations found in nature or created in laboratories.
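For context, in spherical symmetry the linearized PBE and its textbook general solution read as follows (the paper's mixed boundary conditions then fix the integration constants $A$ and $B$ in each region):

```latex
\frac{1}{r^{2}}\frac{d}{dr}\!\left(r^{2}\frac{d\psi}{dr}\right)=\kappa^{2}\psi
\quad\Longrightarrow\quad
\psi(r)=A\,\frac{e^{-\kappa r}}{r}+B\,\frac{e^{\kappa r}}{r}
```

with the charge density following as $\rho_e = -\varepsilon\kappa^{2}\psi$ in the linearized (Debye-Hückel) regime.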
[2198] vixra:1905.0192 [pdf]
Stochastic, Granular, Space-Time, and a New Interpretation of (Complex) Time: a Root Model for Both Relativity and Quantum Mechanics
A stochastic model is presented for the Planck-scale nature of space-time. From it, many features of quantum mechanics and relativity are derived. As mathematical points have no extent, the stochastic manifold cannot be tessellated with points and so a granular model is required. A constant grain size is not Lorentz invariant but, since volume is a Lorentz invariant, we posit grains with constant volumes. We treat both space and time stochastically and thus require a new interpretation of time to prevent an object being in multiple places at the same time. As the grains do have a definite volume, a mechanism is required to create and annihilate grains (without leaving gaps in space-time) as the universe, or parts thereof, expands or contracts. Making the time coordinate complex provides a mechanism. As this is a 'root' model, it attempts to explicate phenomena usually taken for granted, such as gravity and the nature of time. Both the General Relativity field equations (the master equations of Relativity) and the Schrödinger equation (the master equation of quantum mechanics) are produced.
[2199] vixra:1905.0151 [pdf]
Speed Of Light From Motion Of Light Source
The relative motion between a pair of observers exhibits a reflection symmetry. In the rest frame of the first observer, the second observer moves at a distance away. In the rest frame of the second observer, the first observer moves with the same speed at an identical distance away, but in the opposite direction. The reflection symmetry shows that the elapsed time in each observer's rest frame is conserved in both rest frames. The light takes the same elapsed time to move between the two observers in both rest frames. However, the distance traveled by the light is different in different rest frames. With identical elapsed time but different distances, the speed of light is shown to be different in different reference frames.
[2200] vixra:1905.0143 [pdf]
The Finite Speed of Gravity and Heat from the Big Bang Era can Explain Dark Energy
The standard cosmological model calculates the gravitational mass-energy contribution of the Cosmic Background Radiation (CBR) to the mass of the Universe from the single energy density currently observed. The model assumes a homogeneous energy distribution at zero red shift and applies the energy density across the Universe. After reviewing the Friedmann equations for a matter dominated universe and Lemaître's extension of the space time metric to relativistic energy, we offer an alternative mathematical calculation of the gravitational mass-energy of the CBR component. A complete propagation history of the photons comprising the CBR is used rather than only the current energy density. Because the effects of gravity travel at the speed of light (according to general relativity), and using hot big bang cosmology, we suggest that the higher energy states of the CBR photons in the past also contribute to the currently observed gravitational effects. In our alternative calculation, the CBR energy density is integrated over a range of red shifts in order to account for the gravitational effects of the radiation energy density as it was in the past. By accounting for propagation effects, the resultant gravitational mass-energy calculated for the CBR radiation component almost exactly equals the amount attributed to dark energy. The calculation suggests an extension of the standard cosmological model in which co-moving distance at higher red shift increases with red shift more slowly than it does in the standard model. When compared with Type Ia supernova data (in an available range of red shift from z = 0.4 to z = 1.5), distance predictions from the extended model have reasonable agreement with observation. The predictions also compare closely with the standard model (up to z = 1.0). Further observations are needed for z > 1.5 to make a final comparison between the standard model and the proposed extended model.
[2201] vixra:1905.0129 [pdf]
Number of Microstates of Dark DNAs in Extra Dimensions for Normal and Cancerous Cells
Recently, Hargreaves (New Scientist, Volume 237, Issue 3168, March 2018, Pages 29-31) has argued that some animal genomes seem to be missing certain genes, ones that appear in other similar species and must be present to keep the animals alive. He called these apparently missing genes "dark DNA". On the other hand, Sepehri and his collaborators (Open Physics, 16(1), pp. 463-475) have discussed that some biological events like DNA teleportation and water memory may be due to the existence of extra genes in extra dimensions. Collecting these results, we conclude that the origin of some cancers may be the evolution of dark DNA in extra dimensions. To show this, we propose a model for calculating the number of microstates of a DNA for a chick embryo in extra dimensions and compare it with experimental data. We show that the number of microstates in extra dimensions for a normal chick embryo is less than the number of microstates for a cancerous chick embryo. In fact, extra microstates are transformed into four dimensions.
[2202] vixra:1905.0122 [pdf]
The Burnside Q-Algebras of a Monoid
To each monoid M we attach an inclusion A --> B of Q-algebras, and ask: Is B flat over A? If our monoid M is a group, A is von Neumann regular, and the answer is trivially Yes in this case.
[2203] vixra:1905.0088 [pdf]
Via Geometric (Clifford) Algebra: Equation for Line of Intersection of Two Planes
As a high-school-level example of solving a problem via Geometric Algebra (GA), we show how to derive an equation for the line of intersection between two given planes. The solution method that we use emphasizes GA's capabilities for expressing and manipulating projections and rotations of vectors.
[2204] vixra:1905.0080 [pdf]
A Fast Algorithm for Network Forecasting Time Series
Time series have a wide range of applications in various fields. Recently, a new mathematical tool, the visibility graph, was developed to transform time series into complex networks. One shortcoming of existing network-based time series prediction methods is that they are time-consuming. To address this issue, this paper proposes a new prediction algorithm based on visibility graphs and Markov chains. In existing network-based time series prediction methods, the main step is to determine the similarity degree between two nodes based on a link prediction algorithm. A new similarity measure between two nodes is presented that avoids the iteration process of classical link prediction algorithms. The prediction of the Construction Cost Index (CCI) shows that the proposed method achieves better accuracy with less computation time.
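For readers unfamiliar with the transform, here is a minimal sketch of the natural visibility graph construction (the generic Lacasa-style visibility criterion; the paper's similarity measure and Markov-chain step are not reproduced here):

```python
# Natural visibility graph: each data point is a node; two points are linked
# when the straight line between them passes above every intermediate point.
def visibility_edges(series):
    """Return the edge set of the natural visibility graph of a time series.

    Nodes are indices 0..n-1; (a, b) is an edge when every intermediate
    point (c, y_c) lies strictly below the line joining (a, y_a) and (b, y_b).
    """
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = True
            for c in range(a + 1, b):
                # height of the a-b sight line at position c
                line = series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                if series[c] >= line:
                    visible = False
                    break
            if visible:
                edges.add((a, b))
    return edges

edges = visibility_edges([3.0, 1.0, 2.0, 1.5, 4.0])
```

Adjacent points are always mutually visible, so the graph is connected; prediction methods then work on this network rather than on the raw series.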
[2205] vixra:1905.0078 [pdf]
The Universal Profinitization of a Topological Space
To a topological space X we attach in two equivalent ways a profinite space X' and a continuous map F: X --> X' such that, for any continuous map f: X --> Y, where Y is a profinite space, there is a unique continuous map f': X' --> Y such that f'oF = f.
[2206] vixra:1905.0041 [pdf]
A Final Attempt at the Proof of the ABC Conjecture - Case c=a+1
In this paper, we consider the abc conjecture in the case c=a+1. Firstly, we give the proof of the first conjecture, that c<rad^2(ac), using a polynomial function; it is the key to the proof of the abc conjecture. Secondly, the proof of the abc conjecture is given for \epsilon > 1, then for \epsilon \in ]0,1[ in the two cases c<rad(ac) and c>rad(ac). We choose the constant K(\epsilon) as K(\epsilon)=e^{1/\epsilon^2}. A numerical example is presented.
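As an illustration (our sketch, not part of the paper's proof), the claimed inequality c < rad^2(ac) for c = a+1 can be spot-checked numerically; rad(n) is the product of the distinct prime factors of n:

```python
# Spot-check of c < rad(ac)^2 for triples (a, 1, c) with c = a + 1.
def rad(n):
    """Radical of n: product of its distinct prime factors."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

# For c = a + 1 the abc triple is (a, 1, a + 1), so abc = ac.
checks = [(a, a + 1, rad(a * (a + 1))) for a in range(2, 50)]
all_hold = all(c < r * r for a, c, r in checks)
```

Even the "powerful" triples in this range, such as (8, 9) with rad(72) = 6, satisfy 9 < 36.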
[2207] vixra:1905.0030 [pdf]
Foundations of Conic Conformal Geometric Algebra and Compact Versors for Rotation, Translation and Scaling
This paper explains in algebraic detail how two-dimensional conics can be defined by the outer products of conformal geometric algebra (CGA) points in higher dimensions. These multivector expressions code all types of conics in arbitrary scale, location and orientation. Conformal geometric algebra of two-dimensional Euclidean geometry is fully embedded as an algebraic subset. With small model preserving modifications, it is possible to consistently define in conic CGA versors for rotation, translation and scaling, similar to Hrdina et al. (Adv. Appl Cliff. Algs. Vol. 28:66, pp. 1–21, https://doi.org/10.1007/s00006-018-0879-2,2018), but simpler, especially for translations.
[2208] vixra:1905.0026 [pdf]
Cubic Curves and Cubic Surfaces from Contact Points in Conformal Geometric Algebra
This work explains how to extend standard conformal geometric algebra of the Euclidean plane in a novel way to describe cubic curves in the Euclidean plane from nine contact points or from the ten coefficients of their implicit equations. The Clifford algebra Cl(9,7) over the real sixteen-dimensional vector space R^{9,7} serves as the algebraic framework. These cubic curves can be intersected using the outer product based meet operation of geometric algebra. An analogous approach is explained for the description of, and operations with, cubic surfaces in three Euclidean dimensions, using Cl(19,16) as the framework. Keywords: Clifford algebra, conformal geometric algebra, cubic curves, cubic surfaces, intersections
[2209] vixra:1905.0023 [pdf]
Fast Frame Rate Up-conversion Using Video Decomposition
Video is one of the most popular media in the world. However, the video standards followed by different broadcasting companies and devices differ in several parameters. This results in compatibility issues across hardware handling a particular video type. One such major and important parameter is the frame rate of a video. Though it is easy to reduce the frame rate of a video by dropping frames at a particular interval, frame rate up-conversion is a non-trivial yet important problem in video communication. In this paper, we apply a video decomposition algorithm to extract the moving regions in a video and interpolate the background and the sparse information separately for fast up-conversion. We test our algorithm on different video contents and establish that the proposed algorithm performs faster than the existing up-conversion method without producing any visual distortion.
[2210] vixra:1905.0021 [pdf]
Weights at the Gym and the Irrationality of Zeta(2)
This is an easy approach to proving that zeta(2) is irrational. The reasoning is by analogy with gym weights that are rational proportions of a unit: sometimes the sum of such weights is expressible as an integer multiple of a single term in the sum, and sometimes it isn't. The partial sums of zeta(2) are of the latter type. We use a result of real analysis and this fact to show the infinite sum has the same property and hence is irrational.
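To illustrate the weights analogy (our example, not the paper's): the second partial sum of zeta(2) is an integer multiple of one of its terms, while the third is not:

```latex
1 + \tfrac{1}{4} = \tfrac{5}{4} = 5\cdot\tfrac{1}{4},
\qquad\text{but}\qquad
1 + \tfrac{1}{4} + \tfrac{1}{9} = \tfrac{49}{36}
\notin \mathbb{Z}\cdot 1 \,\cup\, \mathbb{Z}\cdot\tfrac{1}{4} \,\cup\, \mathbb{Z}\cdot\tfrac{1}{9}
```

(49/36 divided by 1/4 gives 49/9, and divided by 1/9 gives 49/4, neither of which is an integer.)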
[2211] vixra:1905.0016 [pdf]
The Easy Way to Cherenkov-Compton Effect
The two-photon production by the motion of a charged particle in a medium is considered. This process is called the Cherenkov-Compton effect because the production of photons is calculated from the Feynman diagram for the Compton effect in a medium. This process is not forbidden in the quantum electrodynamics of dielectric media. The cross section of this process in Mandelstam variables is calculated for a pair of photons, with one moving in the direction opposite to the electron motion and the second inside the Cherenkov cone. The opposite motion is not caused by a collision with a particle of the medium. The relation of this process to the CERN experiments is considered.
[2212] vixra:1905.0008 [pdf]
An Interpretation of the Identity $0.999999\ldots = 1$
In this short paper, we give a very simple and important interpretation of the identity $0.999999\ldots = 1$, since we receive many questions about it from the general public. Furthermore, even mathematicians and mathematics teachers will find an interesting interpretation in this paper.
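The identity itself follows from the geometric series (the standard derivation, stated here independently of the interpretation proposed in the paper):

```latex
0.999999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^{k}}
= \frac{9}{10}\cdot\frac{1}{1-\tfrac{1}{10}} = 1
```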
[2213] vixra:1905.0006 [pdf]
Arguments that Prehistorical and Modern Humans Belong to the Same Species
I argue that the evidence of the Out-of-Africa hypothesis and the evidence of multiregional evolution of prehistorical humans can be understood if there has been interbreeding between Homo erectus, Homo neanderthalensis, and Homo sapiens at least during the preceding 700,000 years. These interbreedings require descendants who are capable of reproduction and therefore parents who belong to the same species. I suggest that a number of prehistorical humans who are at present regarded as belonging to different species belong in fact to one single species.
[2214] vixra:1904.0584 [pdf]
Fibonacci Motives
Motives are well connected to graphical techniques in quantum field theory. In motivic quantum gravity we consider categorical axioms, starting with the ribbon category of the Fibonacci anyon. Quantum logic dictates that the cardinality of a classical set is replaced by the dimension of a space. In this note we look at the geometry underlying Fibonacci numbers and apply it to the algebra of multiple zeta values and cyclic particle braids.
[2215] vixra:1904.0567 [pdf]
Theory of Gravitational Energy, Momentum and Stress
In general relativity, the density of gravitational energy is not well-defined: it is said to be 'nonlocalizable'. The new theory solves the problem of gravitational dynamics. It provides a tensor for the density of gravitational energy, momentum and stress. Moreover, it provides an unambiguous definition of gravitational force and power. The theory predicts both longitudinal and transverse gravitational waves. It is time to launch a search for longitudinal waves, in the data at LIGO and Virgo.
[2216] vixra:1904.0560 [pdf]
Collection of Exercises and Problems of Topography, Astronomy, Geodesy and Least Squares Theory
I have often received requests from geomatics students asking me to provide them with exercises or problems in geodesy, topography, astronomy, or the application of the theory of least squares. This paper is a collection of exercises and problems that fills this need.
[2217] vixra:1904.0551 [pdf]
Modified General Relativity and the Klein-Gordon Equation in Curved Spacetime
From the existence of a line element field $(A^{\beta},-A^{\beta}) $ on a four-dimensional time oriented Lorentzian manifold with metric, the Klein-Gordon equation in curved spacetime, $ \nabla_{\mu}\nabla^{\mu}\Psi=k^{2}\Psi $, can be constructed from one of the pair of regular vectors in the line element field, its covariant derivative and associated spinor-tensor; and scalar product for spins 1,1/2 and 0, respectively. The left side of the asymmetric wave equation can then be symmetrized. The symmetric part, $ \tilde{\varPsi}_{\alpha\beta}$, is the Lie derivative of the metric, which links the Klein-Gordon equation to modified general relativity for spins 1,1/2 and 0. Modified general relativity is intrinsically hidden in the Klein-Gordon equation for spins 2 and 3/2. Massless gravitons do not exist as force mediators of gravity in a four-dimensional time oriented Lorentzian spacetime. The diffeomorphism group Diff(M) is not restricted to the Lorentz group. $ \tilde{\varPsi}_{\alpha\beta}$ can instantaneously transmit information to, and quantum properties from, its antisymmetric partner $ K_{\alpha\beta} $ along $ A^{\beta} $. This establishes the concept of entanglement.
[2218] vixra:1904.0547 [pdf]
Finally a Unified Quantum Gravity Theory! Collision Space-Time: the Missing Piece of Matter! Gravity is Lorentz and Heisenberg Break Down at the Planck Scale. Gravity Without G
Based on a very simple model where mass, at the deepest level, is colliding indivisible particles and energy is indivisible particles not colliding, we get a new and simple model of matter that seems to be consistent with experiments. Gravity appears to be directly linked to collision time and also the space the collisions take up; we could call it collision space-time. This leads to a completely new quantum gravity theory that is able to explain and predict all major gravity phenomena without any knowledge of Newton's gravitational constant or the mass size in the traditional sense. In addition, the Planck constant is not needed. Our model, combined with experimental data, strongly indicates that matter is granular and consists of indivisible particles that are colliding. Further, from experiments it is clear that the diameter of the indivisible particle is the Planck length. Our theory even predicts that there can be no time dilation in quasars, something that is consistent with observations and yet is inconsistent with existing gravity theories. Several modern quantum gravity models indicate that Lorentz symmetry is broken at the Planck scale, but there have been no signs of this occurring, despite extensive efforts to look for Lorentz symmetry breakdowns. We show that Lorentz symmetry breakdowns indeed happen and, to our own surprise, this is actually very easy to detect. In our model, it is clear that Lorentz symmetry breakdown is gravity itself. This seems contradictory, as Planck energies are very high energy levels, but we show that this must be seen in a new perspective. We also introduce a new quantum wave equation that tells us that gravity is both Lorentz symmetry breakdown and Heisenberg uncertainty breakdown at the Planck scale. Our wave equation in this sense includes gravity. For masses smaller than a Planck mass, probability will also dominate gravity; it is then a probability for Heisenberg uncertainty breakdown.
At the Planck mass size and up, determinism dominates. For the first time, we have a quantum theory that unifies gravity with the quantum, all derived from a very simple model of the quantum. Our theory is simple, and we show that an indivisible particle is the fundamental unit of all mass and energy – a quantity that has been missing in physics all this time. Newton was one of the last great physicists who thought that such a particle was essential, but it was naturally impossible for one man to solve the entire problem. This paper stands on the shoulders of giants like Newton, Einstein, Planck, and Compton to explore these long-standing questions. The beauty of our theory is that it keeps almost all existing and well-tested equations completely intact (unchanged) all the way to the Planck scale. Anything else would be a surprise; after all, some areas of physics have been extremely successful in predictions and have also been well-tested. Still, in our work, the Planck scale and all equations are united into one simple and powerful theory. Unlike standard physics, there are no inconsistencies in our theory. QM is unified with gravity, and even a simplified version of the Minkowski space-time is consistent with QM and gravity. A long series of mysteries in QM vanish under our new interpretation.
[2219] vixra:1904.0542 [pdf]
Systems of Linear Dyson-Schwinger Equations
Systems of Dyson-Schwinger equation represent the equations of motion in quantum field theory. In this paper, we follow the combinatorial approach and consider Dyson-Schwinger equations as fixed point equations that determine the perturbation series by usage of graph insertion operators. We discuss their properties under the renormalization flow, prove that fixed points are scheme independent, and construct solutions for coupled systems with linearized arguments of the insertion operators.
[2220] vixra:1904.0525 [pdf]
An Analysis of Noise Folding for Low-Rank Matrix Recovery
Previous work regarding low-rank matrix recovery has concentrated on scenarios in which the matrix is noise-free and the measurements are corrupted by noise. However, in practical applications, the matrix itself is usually perturbed by random noise prior to measurement. This paper concisely investigates this scenario and shows that, for most measurement schemes utilized in compressed sensing, the two models are equivalent, with the central distinction that the noise associated with (\ref{eq.3}) is larger by a factor of $mn/M$, where $m,~n$ are the dimensions of the matrix and $M$ is the number of measurements. Additionally, this paper discusses the reconstruction of low-rank matrices in this setting, presents sufficient conditions based on the associated null space property to guarantee robust recovery, and obtains the required number of measurements. Furthermore, for the non-Gaussian noise scenario, we explore it further and give the corresponding result. The simulation experiments conducted show, on the one hand, the effect of noise variance on recovery performance and, on the other, demonstrate the verifiability of the proposed model.
[2221] vixra:1904.0489 [pdf]
Sums of Powers of the Terms of Lucas Sequences with Indices in Arithmetic Progression
We evaluate the sums $\sum_{j=0}^k{u_{rj+s}^{2n}\,z^j}$, $\sum_{j=0}^k{u_{rj+s}^{2n-1}\,z^j}$ and $\sum_{j=0}^k{v_{rj+s}^{n}\,z^j}$, where $r$, $s$ and $k$ are any integers, $n$ is any nonnegative integer, $z$ is arbitrary and $(u_n)$ and $(v_n)$ are the Lucas sequences of the first kind and of the second kind, respectively. As natural consequences we obtain explicit forms of the generating functions for the powers of the terms of Lucas sequences with indices in arithmetic progression. This paper therefore extends the results of P. Stănică, who evaluated $\sum_{j=0}^k{u_{j}^{2n}\,z^j}$ and $\sum_{j=0}^k{u_{j}^{2n-1}\,z^j}$, and those of B. S. Popov, who obtained generating functions for the powers of these sequences.
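As a quick numerical companion (our illustration, shown here with the Fibonacci parameters p=1, q=-1; the closed forms themselves are in the paper), the sums can be evaluated directly:

```python
# Lucas sequences with parameters (p, q):
#   u_n = p*u_{n-1} - q*u_{n-2}, u_0 = 0, u_1 = 1   (first kind)
#   v_n = p*v_{n-1} - q*v_{n-2}, v_0 = 2, v_1 = p   (second kind)
def lucas_u(n, p=1, q=-1):
    a, b = 0, 1          # u_0, u_1
    for _ in range(n):
        a, b = b, p * b - q * a
    return a

def lucas_v(n, p=1, q=-1):
    a, b = 2, p          # v_0, v_1
    for _ in range(n):
        a, b = b, p * b - q * a
    return a

def power_sum(kind, r, s, k, n, z, p=1, q=-1):
    """sum_{j=0}^{k} kind(r*j + s)^n * z^j, kind in {lucas_u, lucas_v}."""
    return sum(kind(r * j + s, p, q) ** n * z ** j for j in range(k + 1))

# Fibonacci case: sum_{j=0}^{3} F_{j+1}^2 = 1 + 1 + 4 + 9 = 15
s = power_sum(lucas_u, 1, 1, 3, 2, 1)
```

Evaluations like this make it easy to cross-check any closed form term by term before using it.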
[2222] vixra:1904.0471 [pdf]
On the In-Band Full-Duplex Gain Scalability in On-demand Spectrum Wireless Local Area Networks
The advent of Self-Interference Cancellation (SIC) techniques has turned in-band Full-Duplex (FD) radios into a reality. FD radios double the theoretical capacity of a half-duplex wireless link by enabling simultaneous transmission and reception in the same channel. A challenging question raised by this advent is whether it is possible to scale the FD gain in Wireless Local Area Networks (WLANs). Precisely, the question concerns how a random access Medium Access Control (MAC) protocol can sustain the FD gain over an increasing number of stations. Also, to ensure bandwidth resources match traffic demands, the MAC protocol design is expected to enable On-Demand Spectrum Allocation (ODSA) policies in the presence of the FD feature. In this sense, we survey the related literature and find that a coupled FD-ODSA MAC solution is lacking. Also, we identify a prevailing practice in the design of FD MAC protocols, which we refer to as the 1:1 FD MAC guideline. Under this guideline, an FD MAC protocol ‘sees’ the whole FD bandwidth through a single FD PHYsical (PHY) layer. The protocol attempts to occupy the entire available bandwidth with up to two arbitrary simultaneous transmissions. With this, the resulting communication range impairs the spatial reuse offer, which penalizes network throughput. Also, modulating each data frame across the entire wireless bandwidth demands stronger Received Signal Strength Indication (RSSI) (in comparison to narrower bandwidths). These drawbacks can prevent 1:1 FD MAC protocols from scaling the FD gain. To face these drawbacks, we propose the 1:N FD MAC design guideline. Under the 1:N guideline, FD MAC protocols ‘see’ the FD bandwidth through N > 1 orthogonal narrow-channel PHY layers. Channel orthogonality increases the spatial reuse offer, and narrow channels relax the RSSI requisites. Also, the multi-channel arrangement we adopt facilitates the development of ODSA policies at the MAC layer.
To demonstrate how an FD MAC protocol can operate under the 1:N design guideline, we propose two case studies. One case study consists of a novel random access protocol under the 1:N design guideline, called the Piece by Piece Enhanced Distributed Channel Access (PbP-EDCA). The other consists of adapting an existing FD Wi-Fi MAC protocol [Jain et al., 2011] – which we name the 1:1 FD Busy Tone MAC protocol (FDBT) – to the 1:N design guideline. Through analytical performance evaluation studies, we verify that the 1:N MAC protocols can outperform the 1:1 FDBT MAC protocol's saturation throughput even in scenarios where 1:1 FDBT is expected to maximize the FD gain. Our results indicate that the capacity upper bound of an arbitrary 1:1 FD MAC protocol improves if the protocol's functioning can be adapted to work under the 1:N MAC design guideline. To check whether this assertion is valid, we propose an analytical study and a proof-of-concept software-defined radio experiment. Our results show that the capacity upper-bound gains of the 1:1 and 1:N design guidelines correspond to 2× and 2.2×, respectively, the capacity upper bound achieved by a standard half-duplex WLAN at the MAC layer. With these results, we believe our proposal can inspire a new generation of MAC protocols that can scale the FD gain in WLANs.
[2223] vixra:1904.0443 [pdf]
Conservation of Wavelength In Reference Frame
Parity symmetry maps one object to another as its inverse image. It shows that a displacement and its inverse image are of the same length. The length of a displacement is conserved in all reference frames. The wavelength of a wave is the length of the displacement between two adjacent crests. Therefore, the wavelength is conserved in all reference frames. However, the Doppler effect shows that the frequency of light is not conserved in all inertial reference frames. As a result, the speed of light is not conserved in all inertial reference frames.
[2224] vixra:1904.0441 [pdf]
$AdS_{3}$-Kerr, Moving Brane, Cardy-Verlinde Formula
A relation analogous to the Hubble equation is derived on a $(1+1)$-dimensional brane moving in the $AdS_{1+2}$-Kerr spacetime. The relation is used to obtain a Cardy-Verlinde formula on the brane.
[2225] vixra:1904.0432 [pdf]
The Detected Direction of the Force Onto a Permanent Magnet, Caused by the Displacement Current in a Wire Gap, Supports Weber Electrodynamics
This article compares Maxwell's electrodynamics with the almost forgotten Weber electrodynamics, taken as a test theory, by means of an easily reproducible and simple experiment. For this purpose, it is first inferred theoretically, in two different ways, that when a specially designed capacitor is charged with a current source, the displacement current should exert a force on a permanent magnet between the plates, and that the direction of this force is diametrically different in the two theories. Subsequently, the experimental setup is described and, based on the results, it is determined that nature seems to follow Weber's force law in this case. The result furthermore shows that Maxwell's magnetostatics can lead to false predictions under specific everyday conditions.
[2226] vixra:1904.0429 [pdf]
MidcurveNN: Encoder-Decoder Neural Network for Computing Midcurve of a Thin Polygon
Various applications need lower-dimensional representations of shapes. A midcurve is a one-dimensional (1D) representation of a two-dimensional (2D) planar shape. It is used in applications such as animation, shape matching, retrieval, finite element analysis, etc. Methods available to compute midcurves vary based on the type of the input shape (images, sketches, etc.) and the processing used (thinning, Medial Axis Transform (MAT), Chordal Axis Transform (CAT), straight skeletons, etc.). This paper presents a novel method called MidcurveNN, which uses an encoder-decoder neural network for computing the midcurve from images of 2D thin polygons in a supervised learning manner. This dimension-reduction transformation from an input 2D thin polygon image to an output 1D midcurve image is learnt by the neural network, which can then be used to compute the midcurve of an unseen 2D thin polygonal shape.
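The supervised image-to-image formulation can be illustrated with a deliberately tiny NumPy sketch. This is a stand-in, not the authors' MidcurveNN architecture: the 8x8 grid, the dense layers, and the training setup are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised pairs on an 8x8 grid: a horizontal bar of thickness 2
# (the "thin polygon") maps to its 1-pixel centerline (the "midcurve").
def make_pair(row):
    x = np.zeros((8, 8)); x[row:row + 2, 1:7] = 1.0   # input: thin rectangle
    y = np.zeros((8, 8)); y[row, 1:7] = 1.0           # target: its midcurve
    return x.ravel(), y.ravel()

X, Y = map(np.array, zip(*[make_pair(r) for r in range(6)]))

# Minimal dense encoder-decoder: 64 -> 16 (code) -> 64, sigmoid output.
W1 = rng.normal(0, 0.1, (64, 16))
W2 = rng.normal(0, 0.1, (16, 64))

losses = []
for _ in range(2000):                     # plain gradient descent on MSE
    H = np.tanh(X @ W1)                   # encoder
    P = 1 / (1 + np.exp(-(H @ W2)))       # decoder
    G = (P - Y) * P * (1 - P) / len(X)    # gradient of MSE through the sigmoid
    W1 -= 1.0 * X.T @ ((G @ W2.T) * (1 - H**2))
    W2 -= 1.0 * H.T @ G
    losses.append(((P - Y) ** 2).mean())

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The same input-image-to-output-image formulation scales up to the convolutional encoder-decoder the paper describes.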
[2227] vixra:1904.0418 [pdf]
Expanding Polynomials with Regular Polygons
Expanding the root form of a polynomial for large numbers of roots can be complicated. Such polynomials can be used to prove the irrationality of powers of pi, so a technique for arriving at expanded forms is needed. We show here how roots of polynomials can generate regular polygons whose vertices considered as roots form known expanded polynomials. The product of these polynomials can be simple enough to yield the desired expanded form.
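As a quick numerical illustration of the claim (an example, not the paper's derivation): the vertices of a regular pentagon of radius 2 in the complex plane, taken as roots, expand to the already-known polynomial $x^5 - 32$.

```python
import numpy as np

# Vertices of a regular pentagon of radius 2, taken as polynomial roots;
# their expanded product is simply x^5 - 2^5.
n, r = 5, 2.0
roots = r * np.exp(2j * np.pi * np.arange(n) / n)
coeffs = np.round(np.poly(roots).real, 8)   # monic coefficients, highest power first
print(coeffs)   # [  1.   0.   0.   0.   0. -32.]
```

All intermediate elementary symmetric sums of the vertices vanish, which is exactly the simplicity the abstract exploits.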
[2228] vixra:1904.0414 [pdf]
Unitary Quantum Groups vs Quantum Reflection Groups
We study the intermediate liberation problem for the real and complex unitary and reflection groups, namely $O_N,U_N,H_N,K_N$. For any of these groups $G_N$, the problem is that of understanding the structure of the intermediate quantum groups $G_N\subset G_N^\times\subset G_N^+$, in terms of the recently introduced notions of ``soft'' and ``hard'' liberation. We solve here some of these questions, our key ingredient being the generation formula $H_N^{[\infty]}=<H_N,T_N^+>$, coming via crossed product methods. Also, we conjecture the existence of a ``contravariant duality'' between the liberations of $H_N$ and of $U_N$, as a solution to the lack of a covariant duality between these liberations.
[2229] vixra:1904.0408 [pdf]
What Was Division by Zero?; Division by Zero Calculus and New World
In this survey paper, we will introduce the importance of division by zero and its great impact on elementary mathematics and the mathematical sciences for a general audience. For this purpose, we will give its global viewpoint in a self-contained manner by using the related references.
[2230] vixra:1904.0398 [pdf]
Computing a Well-Connected Midsurface
Computer-aided Design (CAD) models of thin-walled parts such as sheet metal or plastics are often reduced dimensionally to their corresponding midsurfaces for quicker and fairly accurate results of Computer-aided Engineering (CAE) analysis. Generation of the midsurface is still a time-consuming and mostly manual task due to a lack of robust and automated techniques. Midsurface failures manifest in the form of gaps, overlaps, not-lying-halfway, etc., which can take hours or even days to correct. Most of the existing techniques work on the complex final shape of the model, forcing the usage of hard-coded heuristic rules developed on a case-by-case basis. The research presented here proposes to address these problems by leveraging the feature-parameters made available by modern feature-based CAD applications, using them effectively for sub-processes such as simplification, abstraction and decomposition. In the proposed system, at first, features which are not part of the gross shape are removed from the input sheet metal feature-based CAD model. Features of the gross-shape model are then transformed into their corresponding generic feature equivalents, each having a profile and a guide curve. The abstracted model is then decomposed into non-overlapping cellular bodies. The cells are classified into midsurface-patch generating cells, called ‘solid cells’, and patch-connecting cells, called ‘interface cells’. In solid cells, midsurface patches are generated either by offset or by sweeping the midcurve generated from the owner-feature’s profile. Interface cells join all the midsurface patches incident upon them. The output midsurface is then validated for correctness. At the end, real-life parts are used to demonstrate the efficacy of the approach.
[2231] vixra:1904.0376 [pdf]
Nature Works the Way Number Works
Based on Euler's formula, a concept of a duality unit, or dunit, circle is discovered. Continuing, the Riemann hypothesis is proved from different angles, zeta values are renormalised to remove the poles of the zeta function, and relationships between numbers and primes are discovered. Other unsolved prime conjectures are also proved with the help of theorems of numbers and number theory. The imaginary number i can be defined in such a way that it eases the complex logarithm and accounts for the scale difference between the very big and the very small. Pi can also be a base to the natural logarithm and complement the scale gap. 96 complex constants derived from the complex logarithm can explain everything in the universe.
[2232] vixra:1904.0333 [pdf]
The Theorems of Rao--Blackwell and Lehmann--Scheffe, Revisited
It has been stated in the literature that, for finding the uniformly minimum-variance unbiased estimator through the theorems of Rao-Blackwell and Lehmann-Scheffe, the sufficient statistic should be complete; otherwise the discussion and the way of finding the uniformly minimum-variance unbiased estimator must change, since the sufficiency assumption in the Rao-Blackwell and Lehmann-Scheffe theorems limits their applicability. It therefore seems that the sufficiency assumptions should be expressed in such a way that the uniformly minimum-variance unbiased estimator is derivable via the Rao-Blackwell and Lehmann-Scheffe theorems.
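The Rao-Blackwell mechanism the abstract refers to is easy to exhibit numerically. In this sketch (a standard Bernoulli illustration, not taken from the paper), the crude unbiased estimator $X_1$ is replaced by its conditional expectation given the complete sufficient statistic $T = \sum X_i$, i.e. the sample mean, with a roughly $n$-fold variance reduction:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 0.3, 10, 20_000
X = rng.binomial(1, p, size=(reps, n))   # reps independent Bernoulli(p) samples

crude = X[:, 0].astype(float)   # unbiased estimator of p, but high variance
T = X.sum(axis=1)               # complete sufficient statistic for p
rb = T / n                      # Rao-Blackwellized: E[X1 | T] = T/n (sample mean)

print(f"var(crude)={crude.var():.4f}  var(RB)={rb.var():.4f}")
```

Both estimators have mean p; conditioning on T shrinks the variance from about p(1-p) to about p(1-p)/n, and completeness of T is what guarantees (via Lehmann-Scheffe) that the result is the UMVUE.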
[2233] vixra:1904.0328 [pdf]
A Special Geometry - and its Consequences
It is explained why the geometry of space-time, first found by Rainich, is generally valid. The equations of this geometry, the known Einstein-Maxwell equations, are discussed, and results are listed. We shall see how these tensor equations can be solved. As well, neutrosophics is more supported than dialectics. We shall find even more categories than described in neutrosophics.
[2234] vixra:1904.0323 [pdf]
Attractions of the Sun and the Moon on The Earth
This note gives the elements of the attraction of the Sun and the Moon on the Earth. It was inspired by a reading of the book by Helmut Moritz and Ivan I. Muller entitled Earth Rotation: Theory and Observation, which can serve as an introductory course on tides. It includes the following chapters: the lunar-solar tidal potential; zonal, sectoral and tesseral terms.
[2235] vixra:1904.0321 [pdf]
The Unprecedented Decade
In response to various reports of ongoing crises throughout the world, this essay has been written with the aim of proposing a radical transition in the way the world currently operates. Through general observation, the case presented below posits that human labour is insufficient to provide the means of modern lifestyles, and that current economic systems are incompatible with a sustainable and decent human lifestyle due to this insufficient productivity. To compensate, mechanised labour has been produced and implemented to offset this insufficiency, but at the cost of the environment and a growing human insolvency. To avoid economic and ecological disaster, this essay posits that human labour must be abandoned and replaced by sustainably powered and automated labour worldwide, simultaneously fulfilling the various global demands freely and rendering emissions-intensive mechanised labour obsolete. Doing so would eliminate economic contentions that prevent many from attaining a decent quality of life while also addressing the issue of heavily polluting industries.
[2236] vixra:1904.0310 [pdf]
The CMB Energy Equivalence Principle : A Correlation to Planck and Cosmic Horizon Energy
According to the Cosmic Microwave Background (CMB) temperature and Wien's displacement law, the CMB's energy value is equivalent to that of the measured and determined neutrino energy. The resulting CMB/neutrino mass is used to determine a ratio by correlating the accelerative work of two forces which corresponds to the cosmic particle horizon and Planck length. Planck's constant is shown to be proportional to the cosmic particle horizon and the CMB mass/energy and the speed of light in vacuum. Planck's constant, the cosmic horizon, the CMB energy and speed of light all appear to be interconnected and their correlations provide an amending perspective on the concepts of the fundamental laws and theories of the cosmos. Specifically, the squared energy of a CMB/neutrino is equal to the product of the energy of the maximum cosmic Rindler horizon, cosmic diameter, and the Schwarzschild radius for a Planck mass.
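The starting point the abstract describes, converting the CMB temperature into an energy scale via Wien's displacement law, can be checked directly. This is a back-of-envelope script assuming $T_{CMB} = 2.725$ K; the paper's further identifications are not reproduced here.

```python
b = 2.897771955e-3          # Wien's displacement constant, m*K
h = 6.62607015e-34          # Planck constant, J*s
c = 299_792_458.0           # speed of light, m/s
eV = 1.602176634e-19        # joules per electronvolt
T = 2.725                   # CMB temperature, K (assumed)

lam_peak = b / T            # peak wavelength of the CMB blackbody spectrum
E_peak = h * c / lam_peak   # photon energy at that wavelength
print(f"{lam_peak * 1e3:.3f} mm, {E_peak / eV * 1e3:.3f} meV")
```

The peak lands near 1.06 mm and about 1.2 meV, the millielectronvolt scale the abstract compares to neutrino energies.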
[2237] vixra:1904.0272 [pdf]
Heuristic Thoughts about Classical Physics
Classical physics has been obsolete for more than a century, if the question at stake is an accurate and complete description of Nature. In that light, if an alternative bundle of classical assumptions may be capable of withstanding some of the major limitations of classical physics, it is reasonable to say that further discussion is required. This manuscript presents very specific and minimal assumptions in the domain of classical physics and extends them with bold, ambitious conjectures. Although a relation to nature is not shown, the main purpose of this manuscript is to inspire the possibility of such a relation.
[2238] vixra:1904.0259 [pdf]
Some Hereditary Properties of the E-J Generalized Cesàro Matrices
A countable subcollection of the Endl-Jakimovski generalized Ces\`{a}ro matrices of positive order is seen to inherit posinormality, coposinormality, and hyponormality from the Ces\`{a}ro matrix of the same order.
[2239] vixra:1904.0221 [pdf]
Report of Spatial Geodesy on the Rapid Implementation of a Unified Geodetic System in Africa
Having a unified reference system, established on a universally accepted and utilitarian basis, is an important contribution to the judicious use of geographic information to promote Africa's economic development at the national, regional and continental levels. To do so, it is important today to make the best use of space technologies and to assimilate them, especially through national cartographic institutions. This paper: - is written for small African mapping institutions, - defines the organizational prerequisites for rapidly implementing the Unified Reference System at the continental level, - serves as a framework to guide the debate on the real questions to be asked in this area.
[2240] vixra:1904.0192 [pdf]
Dark Gravity
Dark Gravity (DG) is a background dependent bimetric and semi-classical extension of General Relativity with an anti-gravitational sector. The foundations of the theory are reviewed. The main theoretical achievement of DG is the avoidance of any singularities (both black hole horizon and cosmic initial singularity) and an ideal framework to understand the cancellation of vacuum energy contributions to gravity and hopefully solve the old cosmological constant problem. The main testable predictions of DG against GR are on large scales as it provides an acceleration mechanism alternative to the cosmological constant. The detailed confrontation of the theory to SN-Cepheids, CMB and BAO data is presented. The Pioneer effect, MOND phenomenology and Dark Matter are also investigated in the context of this new framework.
[2241] vixra:1904.0184 [pdf]
Interpolating Values in Code Space
A method is described for interpolating unsampled values attributed to points in code space. A metric is used which counts the number of non-equal corresponding indices shared by two given points. A generalised interpolation equation is derived for values ascribed to nodes on undirected graphs. The equation is then applied specifically to values at points in code space. This interpolation equation is then solved in general for a set of given sampled values in the space.
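Read as a graph problem, the interpolation can be sketched concretely: fix the sampled values and require each unsampled node of the Hamming graph to equal the mean of its neighbours, a discrete harmonic condition solved here by Jacobi iteration. This is an illustrative reading, not necessarily the paper's exact equation.

```python
import itertools
import numpy as np

# Nodes: binary codes of length 4; two nodes are adjacent when their
# Hamming distance (number of non-equal indices) is 1.
nodes = list(itertools.product([0, 1], repeat=4))
idx = {v: i for i, v in enumerate(nodes)}

def neighbors(v):
    return [v[:k] + (1 - v[k],) + v[k + 1:] for k in range(4)]

known = {(0, 0, 0, 0): 0.0, (1, 1, 1, 1): 1.0}   # the sampled values
f = np.full(len(nodes), 0.5)
for v, val in known.items():
    f[idx[v]] = val

for _ in range(200):   # Jacobi iteration toward the discrete harmonic solution
    g = f.copy()
    for v in nodes:
        if v not in known:
            g[idx[v]] = np.mean([f[idx[u]] for u in neighbors(v)])
    f = g

# By symmetry the converged value at a Hamming-weight-1 node is 3/8.
print(f[idx[(1, 0, 0, 0)]])
```

The converged values depend only on Hamming weight here, interpolating smoothly between the two sampled codes.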
[2242] vixra:1904.0158 [pdf]
From Turbulence to the Unification of Maxwell Field and Gravitational Field (Back to the Roots)
The central focus of the theory lies on the solution of the problem of turbulence, unsolved for more than 165 years. To achieve this aim, the following intermediate stations are reached successfully: 1. definition of a pure continuum corresponding uniquely to a fluid, 2. stochastic turbulent particle transport by a fluctuating continuum, 3. the context of deterministic turbulence and its stochastic counterpart in the sense of an ensemble theory. The result turns out to be geometrodynamics: 1. a pure geometrodynamics of turbulence in a 1+3-dimensional Euclidean space, 2. a pure geometrodynamics of deformation. Both geometrodynamics lead to 1. the evolution equations of General Relativity, 2. the quantitative unification of the Maxwell field and the gravitational field, 3. the facilitation of quantizing gravitational fields, 4. considerations of general gravitational waves from a new perspective. The importance of the Einstein equations for microphysics is proved.
[2243] vixra:1904.0146 [pdf]
A Tentative of The Proof of The ABC Conjecture - Case c=a+1
In this paper, we consider the $abc$ conjecture in the case $c=a+1$. Firstly, we give the proof of the first conjecture that $c<rad^2(ac)$ using polynomial functions. It is the key to the proof of the $abc$ conjecture. Secondly, the proof of the $abc$ conjecture is given for $\epsilon \geq 1$, then for $\epsilon \in ]0,1[$ in the two cases $c\leq rad(ac)$ and $c> rad(ac)$. We choose the constant $K(\epsilon)$ as $K(\epsilon)=e^{1/\epsilon^2}$. A numerical example is presented.
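The radical and the first claim, $c < rad^2(ac)$ for $c = a+1$, are easy to probe numerically. This small script checks examples; of course, checking examples proves nothing.

```python
def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

# Probe the paper's first claim, c < rad(ac)^2, in the case c = a + 1.
violations = [a for a in range(2, 200) if not (a + 1) < rad(a * (a + 1)) ** 2]
print(f"rad(72) = {rad(72)}, violations below 200: {len(violations)}")
```

Since a and a+1 are coprime, rad(a(a+1)) = rad(a)rad(a+1), and no counterexample appears in this range.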
[2244] vixra:1904.0142 [pdf]
Fresnel Law Awesome Consequences
Not only has the Relative-Velocity Completion of Newton's Gravity Law replaced the General Theory of Relativity (without pointing to any inconsistency or disagreement with experiment), but here it also removes the Special Theory of Relativity, via the Fresnel law on his hypothetical ether. We extend Fresnel's law to vector form, by postulation, and using the gravitational index of refraction apply it to account for Michelson-type experiments (including Miller's). Reasons to resort to the Lorentz transformations, i.e., to the special theory of relativity, to account for the Michelson-Morley experiment no longer exist. Keywords: Fresnel law; Fresnel law vector generalized; Michelson-Morley experiment; Miller experiment.
[2245] vixra:1904.0129 [pdf]
The Surprise Exam Paradox: Students Should be Surprised on Wednesday or Tuesday.
The students in the surprise exam story reasoned that no surprise exam could take place on any day of the week. Actually, however, the students were surprised on Wednesday by the teacher's surprise exam. In this paper, we show where the students' reasoning went wrong and that students should be surprised on Wednesday or Tuesday.
[2246] vixra:1904.0103 [pdf]
Mass Deficit and Topology of Nucleons
The nucleus is identical with the lower inverse electric-nuclear field, where a rapid increase of its potential occurs with a corresponding reduction of the space cohesive pressure, resulting in the mass deficit of the neutron entering the nucleus and in the finding of its location potential. Therefore, the so-called topology of the nucleons can now be found. So, the neutrons are stable in the lower inverse nuclear field, where a reduced cohesive pressure prevails. Moreover, there would be no nuclei without the presence of neutrons, which reduce the negativity of the protons' field, while the neutrons are those that move in the nuclei (with the remaining half of their kinetic energy) on circular orbits around immobilized protons, which have spin only.
[2247] vixra:1904.0095 [pdf]
Propositional Logic Without The Deduction Theorem
In propositional logic, given a set of axioms, we can derive formulas. Here we present the derivations of some formulas without the use of the Deduction Theorem. The derivations are presented compactly, with only a few referrals to other theorems. Most textbooks in this subject avoid this kind of approach.
[2248] vixra:1904.0091 [pdf]
Faux Proton Charge Smearing in Dirac Hydrogen by "Electron Zitterbewegung"
The commutator of the Dirac free-particle's velocity operator with its Hamiltonian operator is nonzero and independent of Planck's constant, which violates the quantum correspondence-principle requirement that commutators of observables must vanish when Planck's constant vanishes, as well as violating the absence of spontaneous acceleration of relativistic free particles. The consequent physically pathological "zitterbewegung" is of course completely absent when the natural relativistic square-root free-particle Hamiltonian operator is used; nevertheless the energy spectrum of that pathology-free natural relativistic square-root free-particle Hamiltonian is exactly matched by the positive-energy sector of the Dirac free-particle Hamiltonian's energy spectrum. Contrariwise, however, Foldy-Wouthuysen unitary transformation of the positive-energy sector of any hydrogen-type Dirac 4 x 4 Hamiltonian to 2 x 2 form reveals a "zitterbewegung"-induced "Darwin-term" smearing of the proton charge density which is completely absent in the straightforward relativistic extension of the corresponding hydrogen-type nonrelativistic Pauli 2 x 2 Hamiltonian. Compensating for an atomic proton's physically absent "electron zitterbewegung"-induced charge smearing would result in a misleadingly contracted impression of its charge radius.
[2249] vixra:1904.0080 [pdf]
Special Relativity: the Old and the New Theory
By adjoining the local time to Newtonian mechanics together with the constancy of the speed of light, a new, unprecedented and insightful derivation of the Lorentz transformation (LT) is proposed. The procedure consists of elementary arguments and simple but rigorous mathematical techniques. The usual assumptions concerning linearity and homogeneity in the standard derivations of the LT are obtained as results. Moreover, another, entirely new, transformation is established. As expected, a new special relativity theory ensues from this new transformation. Unlike the special relativity theory (SRT), with this new theory we can tame superluminal velocities.
[2250] vixra:1904.0073 [pdf]
Non-commutativity: Unusual View
Some ambiguities have recently been found in the definition of the partial derivative (in the case of the presence of both explicit and implicit dependencies of the function subjected to differentiation). We investigate the possible influence of this subject on quantum mechanics and classical/quantum field theory. Surprisingly, some commutators of the operators of space-time 4-coordinates and those of 4-momenta are not equal to zero. We postulate the non-commutativity of 4-momenta and we derive mass splitting in the Dirac equation. Moreover, two iterated limits may not commute with each other, in general. Thus, we present an example in which the massless limit of a function of E, p, m does not exist in some calculations within quantum field theory.
[2251] vixra:1904.0052 [pdf]
D\"aumler's Horn Torus Model and Division by Zero - Absolute Function Theory - New World
In this paper, we will introduce a beautiful horn torus model by Puha and D\"aumler for the Riemann sphere in complex analysis, attaching the zero point and the point at infinity. Surprisingly enough, we can introduce a conformal analytical structure on the model. Here, some basic opinions on D\"aumler's horn torus model will be stated as basic ones in mathematics.
[2252] vixra:1904.0048 [pdf]
Quantum String with Periodic Boundary Condition
We consider the string, the left end of which is fixed and the right end of the string is in periodic motion. We derive the quantum internal motion of this system.
[2253] vixra:1904.0007 [pdf]
Quantum Gravity in the Fano Plane
The argument that modern string theory has become lost in math is compelling, controversial, and ever more timely. A perspective arguably of equal compulsion takes the view that the math is just fine, that string theory has become lost in physics. It's all about the wavefunction. Almost a century after Bohr and Copenhagen, ongoing proliferation of conflicting quantum interpretations of the unobservable wavefunction and its interactions attests to the profound confusion in philosophical foundations of basic quantum physics. Taking the octonion wavefunction to be comprised not of one-dimensional oscillators in eight-dimensional space, but rather the eight fundamental geometric objects of the Pauli algebra of 3D space - one scalar, three vectors, three bivector pseudovectors, and one trivector pseudoscalar - yields a long overdue and much needed coherent phenomenology.
[2254] vixra:1904.0002 [pdf]
Model of Quarks and Leptons Based on Spacetime Symmetries
The experimental search for standard model superpartners, and the derivation of the standard model from higher-dimensional theories, have been challenging for some time now. In this article these technologies are kept but applied to a simpler environment. A coherent scenario of particles based on Kaluza-Klein theory and unbroken supersymmetry is proposed. It offers an economic basis for constructing the standard model particles without the superpartner problem of the minimal supersymmetric standard model. With local supersymmetry one arrives at supergravity without Yang-Mills fields. A number of results in the literature would have to be reconsidered according to this model.
[2255] vixra:1903.0566 [pdf]
Division by Zero Calculus in Trigonometric Functions
In this paper, we will introduce the division by zero calculus in triangles and trigonometric functions as the first stage in order to see the elementary properties.
[2256] vixra:1903.0541 [pdf]
Uniquely Distinguishing an Electron’s Spin from Two Quantum States via Riemann Surface Guidance
In this study, we describe how one electron could consist of a two-state spin system, on the basis of a previous study in which we obtained a model where two spinor particles could exist in one electron. The previously reported electronic model used equations to show the energy conservation law of an electron system which included two spinors. Herein, we consider these two oscillators as two bases and start the discussion from the viewpoint that one electron can be considered two-bitwise. For this purpose, we apply the two-bitwise system to a Riemann surface via analytic continuation. This trial could explain the mixed state of up and down spin states. Furthermore, the two states, of which the electron can take either, can be selected by the disconnection of the analytic continuation of the complex analysis. Considering that the magnetic gradient field would have a force that disconnects the analytic continuation and separates the two domains, it is possible to explain how the spin becomes fixed in the abovementioned states.
[2257] vixra:1903.0531 [pdf]
Quantum Mechanics Where Spin is SO(3)
In 2014 Steven Weinberg noted that quantum mechanics can avoid various difficulties, such as the many worlds hypothesis, by taking the quantum states to be density matrices without reference to state vectors. An immediate consequence of Weinberg's idea is that electron spin can be taken to follow SO(3) instead of SU(2). This radical departure from present understanding motivates our exploration of density matrices as a method of going beyond the Standard Model. An important tool for Standard Model calculations is the Feynman path integral formulation of quantum field theory. When the path integral is Wick rotated from time to imaginary time or temperature it becomes a method of cooling down density matrices. While this does not show that one goes beyond the Standard Model by Wick rotation, it does show a close relation between this method of cooling density matrices and quantum field theory of the Standard Model. We explore these ideas and exhibit toy models with particle content and symmetry similar to the Standard Model.
[2258] vixra:1903.0530 [pdf]
Fast Radio Bursts from Terraformation
Fast radio bursts (FRBs) are, as the name implies, short and intense pulses of radiation at wavelengths of roughly one metre. FRBs have extremely high brightness temperatures, which points to a coherent source of radiation. The energy of a single burst ranges from $10^{36}$ to $10^{39}$ erg. At the high end of the energy range, FRBs have enough energy to unbind an earth-sized planet, and even at the low end, there is enough energy to vaporise and unbind the atmosphere and the oceans. We therefore propose that FRBs are signatures of an artificial terraformer, capable of eradicating life on another planet, or even of destroying the planet entirely. The necessary energy can be harvested from Wolf-Rayet stars with a Dyson sphere ($\sim 10^{38}$ erg s$^{-1}$), and the radiation can be readily produced by astrophysical masers. We refer to this mechanism as Volatile Amplification of a Destructive Emission of Radiation (VADER). We use the observational information to constrain the properties of the apparatus. We speculate that the non-repeating FRBs are low-energy pulses used to exterminate life on a single planet, but leaving it otherwise intact, and that the stronger repeating FRB is part of an effort to destroy multiple objects in the same solar system, perhaps as a preventative measure against panspermia. In this picture, the persistent synchrotron source associated with the first repeating FRB arises from the energy harvesting process. Finally we propose that Oumuamua might have resulted from the destruction of a planet in this manner.
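The energetic claim is simple to verify: for a uniform sphere the gravitational binding energy is $U = 3GM^2/(5R)$, and for Earth-like values this lands near the top of the quoted FRB energy range (standard constants, assumed Earth parameters):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # Earth mass, kg
R = 6.371e6         # Earth radius, m

U = 3 * G * M**2 / (5 * R)                # uniform-sphere binding energy, joules
print(f"{U:.2e} J = {U * 1e7:.2e} erg")   # about 2.2e32 J, i.e. about 2.2e39 erg
```

That is comparable to the $10^{39}$ erg upper end of the burst energies quoted in the abstract.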
[2259] vixra:1903.0525 [pdf]
Fusion of Halo Nucleus 6He on 238U: Evidence for Tennis-Ball (Bubble) Structure of the Core of the Halo (Even the Giant-Halo) Nucleus
In a decade-and-a-half old experiment, Raabe et al. (Nature 431 (2004) 823) studied the fusion of an incoming beam of the halo nucleus 6He with the target nucleus 238U. We extract a new interpretation of the experiment, different from the one that has been inferred so far. We show that their experiment is actually able to discriminate between the structures of the target nucleus (behaving as a standard nucleus with a density distribution described by the canonical RMS radius $r = r_0 A^{1/3}$ with $r_0 = 1.2$ fm) and the "core" of the halo nucleus, which, surprisingly, does not follow the standard density distribution with the above RMS radius. In fact the core has the structure of a tennis-ball (bubble) like nucleus, with a "hole" at the centre of the density distribution. This novel interpretation of the fusion experiment provides unambiguous support to an almost two decades old model (Abbas, Mod. Phys. Lett. A 16 (2001) 755) of halo nuclei. This Quantum Chromodynamics based model succeeds in identifying all known halo nuclei and makes clear-cut and unique predictions for new and heavier halo nuclei. This model supports the existence of a tennis-ball (bubble) like core of even the giant-neutron halo nuclei. This should prove beneficial to experimentalists, to go forward more confidently in their study of exotic nuclei.
[2260] vixra:1903.0503 [pdf]
Extending an Irrationality Proof of Sondow: from e to Zeta(n)
We modify Sondow's geometric proof of the irrationality of e. The modification uses sector areas on circles, rather than closed intervals. Using this circular version of Sondow's proof, we see a way to understand the irrationality of a series. We develop the idea that all possible rational convergence points of a series are excluded because the partials are not expressible as fractions with the denominators of their terms. If such fractions cover the rationals, then the series should be irrational. Both the irrationality of e and that of zeta(n>=2) are proven using these criteria: the terms cover the rationals and the partials escape the terms.
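The "partials not expressible as fractions with the denominators of their terms" criterion can be checked for e with exact rational arithmetic: each partial sum $s_n$ is an integer over $n!$, and the tail is strictly between 0 and $1/n!$, so e itself can never be a fraction with denominator $n!$. This is a small illustration of the interval argument, not the paper's circular-sector version, and it bounds only a truncated tail (the full geometric-series estimate closes the gap).

```python
from fractions import Fraction
from math import factorial

# Sondow-style interval argument for e = sum 1/k!: every partial sum s_n is an
# integer m over n!, and the tail is strictly between 0 and 1/n!, so e lies
# strictly between m/n! and (m+1)/n! and is never a fraction with denominator n!.
for n in range(1, 10):
    s = sum(Fraction(1, factorial(k)) for k in range(n + 1))
    m = s * factorial(n)
    assert m.denominator == 1                        # s_n = m / n! exactly
    tail = sum(Fraction(1, factorial(k)) for k in range(n + 1, n + 30))
    assert 0 < tail < Fraction(1, factorial(n))      # truncated-tail bound
print("e avoids every fraction p/n! for n = 1..9")
```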
[2261] vixra:1903.0488 [pdf]
Division by Zero Calculus in Complex Analysis
In this paper, we will introduce the division by zero calculus in complex analysis for one variable at the first stage in order to see the elementary properties.
[2262] vixra:1903.0486 [pdf]
Determination of the Fundamental Impedance of Free Space due to Radiation Energy from the Cosmic Microwave Background and Information Horizons
Here, the fundamental constants, namely the vacuum permeability and permittivity, which comprise the numerical definition of the speed of light in vacuum, are determined. They are found to be composites correlated to Planck's constant, Wien's constant and the mass energy of the cosmic microwave background. Derivations of both a new fundamental composite speed of light in vacuum and the vacuum impedance are performed. Furthermore, this newly suggested definition is correlated to a confined quantized radiation spectrum of the cosmic particle horizon.
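For reference, the conventional value of the vacuum impedance the paper re-derives follows from the textbook relations $\varepsilon_0 = 1/(\mu_0 c^2)$ and $Z_0 = \sqrt{\mu_0/\varepsilon_0} = \mu_0 c$ (using the pre-2019 exact $\mu_0 = 4\pi \times 10^{-7}$ H/m; the paper's composite CMB expressions are not reproduced here):

```python
import math

mu0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m (pre-2019 exact value)
c = 299_792_458.0               # speed of light in vacuum, m/s
eps0 = 1 / (mu0 * c**2)         # vacuum permittivity, F/m
Z0 = math.sqrt(mu0 / eps0)      # vacuum impedance, ohms

print(f"{Z0:.4f} ohm")          # 376.7303 ohm
```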
[2263] vixra:1903.0440 [pdf]
Onium Hamiltonian & Quantum Separability
The existence of a disentangling mathematical transformation of the wave function in a Coulomb entangled state of charged molecular radicals reveals a new chapter in the Einstein-Schrodinger discussion about entanglement.
[2264] vixra:1903.0432 [pdf]
Division by Zero Calculus and Singular Integrals
What are the singular integrals? Singular integral equations are presently encountered in a wide range of mathematical models, for instance in acoustics, fluid dynamics, elasticity and fracture mechanics. Together with these models, a variety of methods and applications for these integral equations has been developed. In this paper, we will give the interpretation for the Hadamard finite part of singular integrals by means of the division by zero calculus.
[2265] vixra:1903.0424 [pdf]
Contextual Transformation of Short Text for Improved Classifiability
Text classification is the task of automatically sorting a set of documents into predefined set of categories. This task has several applications including separating positive and negative product reviews by customers, automated indexing of scientific articles, spam filtering and many more. What lies at the core of this problem is to extract features from text data which can be used for classification. One of the common techniques to address this problem is to represent text data as low dimensional continuous vectors such that the semantically unrelated data are well separated from each other. However, sometimes the variability along various dimensions of these vectors is irrelevant as they are dominated by various global factors which are not specific to the classes we are interested in. This irrelevant variability often causes difficulty in classification. In this paper, we propose a technique which takes the initial vectorized representation of the text data through a process of transformation which amplifies relevant variability and suppresses irrelevant variability and then employs a classifier on the transformed data for the classification task. The results show that the same classifier exhibits better accuracy on the transformed data than the initial vectorized representation of text data.
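The proposed amplify/suppress transformation can be caricatured with a Fisher-style discriminant on toy embeddings: one dominant axis of class-independent "global" variability and one small class-relevant axis. This is an illustrative stand-in; the paper's actual transformation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy embeddings: a large class-independent axis (e.g. document length)
# dominates a small class-relevant axis.
n = 200
irrelevant = rng.normal(0, 5.0, (n, 1))                       # global factor
relevant = np.r_[rng.normal(-1, 0.5, (n // 2, 1)),
                 rng.normal(+1, 0.5, (n // 2, 1))]            # class signal
X = np.hstack([irrelevant, relevant])
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]

# Fisher-style transform: whiten by within-class scatter, then project
# onto the class-mean difference.
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)               # discriminant direction
Z = X @ w                                        # transformed 1-D representation

# A trivial threshold classifier on the raw dominant axis vs. on Z:
acc_raw = max(((X[:, 0] > 0) == y).mean(), ((X[:, 0] < 0) == y).mean())
acc_z = max(((Z > Z.mean()) == y).mean(), ((Z < Z.mean()) == y).mean())
print(acc_raw, acc_z)
```

Whitening by the within-class scatter suppresses the irrelevant axis, and projecting on the class-mean gap amplifies the relevant one, so the same trivial classifier performs far better on the transformed data.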
[2266] vixra:1903.0409 [pdf]
Soft and Hard Liberation of Compact Lie Groups
We investigate the liberation question for the compact Lie groups, by using various ``soft'' and ``hard'' methods, based respectively on joint generation with a free quantum group, and joint generation with a free torus. The soft methods extend the ``easy'' methods, notably by covering groups like $SO_N,SU_N$, and the hard methods partly extend the soft methods, notably by covering the real and complex tori themselves.
[2267] vixra:1903.0407 [pdf]
Gravitational Index of Refraction
Henceforth the fact is admitted as an axiom that all bodies in the universe set up gravitationally the universal optical medium, named gravitational ether, whose strength—from which we derive its index of refraction—is the sum of all relative-velocity dependent gravitational potentials, hence both nonuniform in space and changing in time as bodies move. Besides, Einstein's $E = mc^2$ is of gravitational cosmological nature, including pure Newtonian.
[2268] vixra:1903.0389 [pdf]
Violation of Conservation of Momentum by Lorentz Transformation
An isolated physical system of elastic collision between two identical objects is chosen to verify the conservation of momentum in two inertial reference frames. In the first reference frame, the center of mass (COM) is stationary. In the second reference frame, the center of mass moves at a constant velocity. By applying Lorentz transformation to the velocities of both objects, total momentum before and during the collision in the second reference frame can be compared. The comparison shows that conservation of momentum fails to hold when both objects move together at the same velocity.
[2269] vixra:1903.0386 [pdf]
On Thermal Relativity, Modified Hawking Radiation, and the Generalized Uncertainty Principle
After a brief review of the thermal relativistic $corrections$ to the Schwarzschild black hole entropy, it is shown how the Stefan-Boltzman law furnishes large modifications to the evaporation times of Planck-size mini-black holes, and which might furnish important clues to the nature of dark matter and dark energy since one of the novel consequences of thermal relativity is that black holes do $not$ completely evaporate but leave a Planck size remnant. Equating the expression for the modified entropy (due to thermal relativity corrections) with Wald's entropy should in principle determine the functional form of the modified gravitational Lagrangian $ {\cal L } (R_{abcd}) $. We proceed to derive the generalized uncertainty relation which corresponds to the effective temperature $ T_{eff} = T_H ( 1 - { T^2_H \over T^2_P } )^{ - 1/2} $ associated with thermal relativity and given in terms of the Hawking ($T_H$) and Planck ($T_P$) temperature, respectively. Such modified uncertainty relation agrees with the one provided by string theory up to first order in the expansion in powers of $ { (\delta p)^2 \over M^2_P} $. Both lead to a minimal length (Planck size) uncertainty. Finally, an explicit analytical expression is found for the modifications to the purely thermal spectrum of Hawking radiation which could cast some light into the resolution of the black hole information paradox.
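For orientation, the quoted effective temperature admits a small $T_H/T_P$ expansion, and the string-type generalized uncertainty relation it matches at first order has the familiar form (an illustrative sketch consistent with the abstract; $\beta$ is a model-dependent constant not specified there):

```latex
T_{eff} = T_H \left( 1 - \frac{T_H^2}{T_P^2} \right)^{-1/2}
        \simeq T_H \left( 1 + \frac{1}{2}\,\frac{T_H^2}{T_P^2} + \cdots \right),
\qquad
\delta x \, \delta p \;\gtrsim\; \frac{\hbar}{2}
  \left( 1 + \beta\,\frac{(\delta p)^2}{M_P^2} + \cdots \right).
```

Minimizing the right-hand side of such a relation over $\delta p$ is what yields the Planck-size minimal length uncertainty mentioned in the abstract.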
[2270] vixra:1903.0373 [pdf]
Mass and the Fifth Dimension
A correlation between mass and a compact 5th dimension is proposed. The 5th coordinate appears to represent the gravitational radius of the black hole. The source of curvature of the spacetime turns out to be an anisotropic null fluid with no energy density and isotropic pressure but nonzero energy flux and anisotropic pressures.
[2271] vixra:1903.0372 [pdf]
An Introduction to the Theory of Everything Using Energy Gradients and Information Horizons
The quest to unify the four fundamental forces has been sought after for decades but has remained elusive to all physicists. The first clues to unification were given when information horizons were associated to radiation by Unruh and Hawking. This was then extended to be a discrete spectrum in nature by McCulloch. Here, it is suggested that the limitation, or confinement, of an allowed spectrum is relevant in order to compute all the fundamental forces. The maximum spectrum is defined by the size of the cosmic particle horizon and the Planck length. Notably, all fundamental forces can be computed by using the same core equation and can be extended to reflect the different information horizons and particle interaction scenarios. This result suggests that for unification, the radiation spectrum provides momentum space alterations to generate energy gradients. The force derivatives of the energy fields indicate numerical convergence to the observed fundamental forces.
[2272] vixra:1903.0371 [pdf]
Division by Zero Calculus in Multiple Dimensions and Open Problems
In this paper, we will introduce the division by zero calculus in multiple dimensions in order to show some wide and new open problems, as we see from the one-dimensional case.
[2273] vixra:1903.0348 [pdf]
The Effects of Astronomical Bodies on Imouto’s Local Solutions to Rankine–Hugoniot Equations.
We herein present a proof of the existence and smoothness of the Navier-Stokes equations via a new method of manipulating Calabi-Yau manifolds, which in turn leads us to a disproof by contradiction of the Collatz conjecture; four separate, independent proofs of the Jacobian conjecture; a complete decipherment of Linear A; a method of determining whether or not a book is worth reading based on its cover alone; and an entire new field of mathematics which we hereby name "Weird Calculus". We completely and utterly fail to present any convincing arguments, but at least there's some nice text art. We make no attempt to clarify anything in the field. Astute readers may notice the complete lack of content and coherence in this paper.
[2274] vixra:1903.0338 [pdf]
Emergence of a New Type of Life and Alive Creature from Mixing Cells of Plants and Animals
In this research, we show that by mixing cells of plants and animals, a new type of living creature or life emerges. To this aim, we cut the skin of some quails and create a hole between the skin and the skeleton. We put some beans and lentils in this hole and cover it with black glue. We open the hole after a week and observe that a bridge of quail cells has been produced between the beans and lentils. This bridge has the genus of the periosteum that covers the outer surface of all bones. In this periosteum, there are some stem cells that produce a collection of neuronal circuits. These circuits could join to each other and form a little brain. This brain can control all voluntary and non-voluntary actions of this new living creature.
[2275] vixra:1903.0290 [pdf]
Specifications for Elementary Particles, Dark Matter, Dark Energy, and Unifying Physics Theories
We suggest united models and specific predictions regarding elementary particles, dark matter, aspects of galaxy evolution, dark energy, and aspects of the cosmology timeline. Results include specific predictions for new elementary particles and specific descriptions of dark matter and dark energy. Some of our modeling matches known elementary particles and extrapolates to predict other elementary particles, including bases for dark matter. Some modeling explains observed ratios of effects of dark matter to effects of ordinary matter. Some models suggest aspects of galaxy formation and evolution. Some modeling correlates with eras of increases or decreases in the observed rate of expansion of the universe. Our modeling framework features mathematics for isotropic quantum harmonic oscillators and provides a framework for creating and unifying physics theories. Aspects of our approach emphasize existence of elementary particles and de-emphasize motion. Some of our models complement traditional quantum field theory and, for example, traditional calculations of anomalous magnetic dipole moments.
[2276] vixra:1903.0280 [pdf]
Investigation of the Characteristics of the Zeros of the Riemann Zeta Function in the Critical Strip Using Implicit Function Properties of the Real and Imaginary Components of the Dirichlet Eta Function (v3 Poster)
This poster investigates the characteristics of the zeros of the Riemann zeta function (of s) in the critical strip by using the Dirichlet eta function, which has the same zeros. The characteristics of the implicit functions for the real and imaginary components when those components are equal are investigated. It is shown that the function describing the value of the real component when the real and imaginary components are equal has a derivative that does not change sign along any of its individual curves, meaning that each value of the imaginary part of s produces at most one zero. Combined with the fact that the zeros of the Riemann xi function are also the zeros of the zeta function and xi(s) = xi(1-s), this leads to the conclusion that the Riemann Hypothesis is true.
[2277] vixra:1903.0274 [pdf]
The Wave Function of the Universe near the Big-Bang Singularity and the Generalized Uncertainty Principle
We investigate the Friedmann equations and the Wheeler-DeWitt equations in three dimensional pure gravity under the Generalized Uncertainty Principle (GUP) effects. In addition we study the wave functions near the Big-Bang singularity as the solutions of the deformed Wheeler-DeWitt equation in momentum space. The resulting wave functions are represented as the Mathieu functions. The GUP is considered in the context of the Snyder non-commutative space.
[2278] vixra:1903.0272 [pdf]
Organic Network Control Systems Challenges in Building a Generic Solution for Network Protocol Optimisation
In recent years, many approaches to dynamic protocol adaptation in networks have been proposed. Most of them deal with a particular environment, but a much more desirable approach would be to design a generic solution to this problem. Creating a system that is independent of the network type it operates in, and therefore of the protocol type that needs to be adapted, is a big issue. In this paper we discuss certain problems that come with this task and why they have to be taken into account when designing such a generic system. First we present a generic architecture approach for such a system, followed by a comparison of currently existing Organic Network Control systems for adapting protocols in a mobile ad-hoc network and a peer-to-peer network. After identifying the major problems, we summarize and evaluate the achieved results.
[2279] vixra:1903.0264 [pdf]
Clever Battery Bio-Computer and Little Brain in Shell-Less Culture Systems of Chick Embryos
A shell-less culture system for chick embryos could be used to produce a clever battery or bio-computer. In this system, about 50 hours after incubation, a heart emerges which sends blood molecules to each side and produces a biological current. This current carries charged particles and molecules and creates an electrical current and a potential difference between the center of the heart and the sides of the shell-less vessel. If we put two metal bars in the center and at the side of the vessel, we can draw an electrical current and use it in industry. Thus the shell-less culture system plays the role of a battery. This battery is clever, and bringing another shell-less culture system close to it produces different currents. This is because a collection of neurons emerges on the heart in the shell-less culture system, which plays the role of a little brain. This little brain exchanges information with the medium and controls voluntary actions of the system. For this reason, the system not only plays the role of a battery but could also be used as a bio-computer.
[2280] vixra:1903.0260 [pdf]
Current Trends in Extended Classifier System
Learning is a way to improve our ability to solve problems related to the environment surrounding us. The Extended Classifier System (XCS) is a learning classifier system that uses a reinforcement learning mechanism to solve complex problems with robust performance. It is an accuracy-based system that works by observing the environment, taking input from it and applying suitable actions. Every action of XCS gets feedback in return from the environment, which is used to improve its performance. It also has the ability to apply a genetic algorithm (GA) to existing classifiers and create new ones with better performance through crossover and mutation. XCS handles single-step and multi-step problems by using different methods, such as the Q-learning mechanism. The ultimate challenge of XCS is to design an implementation which arranges multiple components in a unique way to produce a compact and comprehensive solution in the least amount of time. Real-time implementation requires flexibility for modifications and uniqueness to cover all aspects. XCS has recently been modified for real input values, and a memory management system has also been introduced, which enhances its ability in different kinds of applications such as data mining and stock exchange control. In this article, there will be a brief discussion of the parameters and components of XCS. The main part of this article will cover the extended versions of XCS with further improvements and focus on applications, usage in real environments and the relationship with organic computing.
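The accuracy-based character of XCS rests on a few Widrow-Hoff style parameter updates. A minimal sketch follows (the parameter names `beta`, `eps0`, `alpha`, `nu` follow common XCS notation; the dict layout of a classifier is this example's own assumption):

```python
# Sketch of the core XCS classifier updates (after Wilson's XCS).

def update_classifier(cl, payoff, beta=0.2, eps0=10.0, alpha=0.1, nu=5.0):
    """Update one classifier's prediction, error and accuracy given the
    payoff P received from the environment."""
    # error moves toward |P - p| using the OLD prediction p
    cl["error"] += beta * (abs(payoff - cl["prediction"]) - cl["error"])
    # prediction moves toward the payoff (Widrow-Hoff rule)
    cl["prediction"] += beta * (payoff - cl["prediction"])
    # accuracy: maximal below the error threshold eps0, decaying above it
    if cl["error"] < eps0:
        cl["accuracy"] = 1.0
    else:
        cl["accuracy"] = alpha * (cl["error"] / eps0) ** (-nu)
    return cl
```

The GA then reproduces classifiers in proportion to their relative accuracy within an action set, which is what makes XCS accuracy-based rather than strength-based.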
[2281] vixra:1903.0248 [pdf]
Universe Expansion Black Holes Nuclear Forces
The accelerated expansion of Universe is caused by the Universal antigravity force, with which the Hubble’s Law is proved. The black holes are sustainable matter forms of the dynamic space that cannot disappear, because of the particulate antigravity force that prevents the further gravitational collapse. The inverse electric-nuclear field causes the nuclear forces, namely the strong nuclear force and the nuclear antigravity one, on which the architecture of the nuclei model is based.
[2282] vixra:1903.0243 [pdf]
Standard Model Fermions and Higgs Scalar Field from PCCR Operator Algebra
A fundamental theory of fermions is proposed by constructing the Dirac gamma matrices using pCCR operators. The result is the fermions of the Standard Model, 3 generations of leptons and quarks, and the chiral nature of su(2) is revealed. In addition, the pCCR operators generate a Higgs complex scalar doublet, and space-time is found to be four-dimensional.
[2283] vixra:1903.0242 [pdf]
System of Particles and Field in a Unified Field Theory
In previous contributions I have presented a unified theory of particles and field, in the geometry of General Relativity, which accounts for all the known force fields, as well as the properties of elementary particles, without the need to invoke additional dimensions or special physical phenomena. In this paper the theory is fully detailed, and its focus is on models of systems of elementary particles interacting with the field. The equations are established for continuous systems, and solutions, as well as methods to solve the usual cases, are presented for the two-particle model. It is then possible to build clear models of systems such as nuclei and atoms and study the conditions for their stability. It also gives another vision of the special behavior of the nuclear forces. Discontinuous processes involve discontinuities in the field, and I show that they can be represented by particle-like objects, the bosons. Their interaction with particles is formalized in a rigorous but simple way.
[2284] vixra:1903.0241 [pdf]
A Note of Differential Geometry
In this note, we give an application of the Method of the Repère Mobile to the Ellipsoid of Reference in Geodesy using a symplectic approach.
[2285] vixra:1903.0236 [pdf]
Resolving Limits of Organic Systems in Large Scale Environments: Evaluate Benefits of Holonic Systems Over Classical Approaches
With the rapidly increasing number of devices and application components interacting with each other within larger complex systems, classical system hierarchies increasingly hit their limits when it comes to highly scalable and possibly fluctuating organic systems. The holonic approach for self-* systems claims to solve some of these problems. In this paper, limits of different state-of-the-art technologies and possible solutions to those will be identified and ranked for scalability, privacy, reliability and performance under fluctuating conditions. Subsequently, the idea and structure of holonic systems will be outlined, along with how to utilize the previously described solutions combined in a holonic environment to resolve those limits. Furthermore, they will be classified in the context of current multi-agent systems (MAS). The focus of this work is located in the area of smart energy grids and similar structures; however, an outlook sketches a few further application scenarios for holonic structures.
[2286] vixra:1903.0224 [pdf]
A Model of Lepton and Quark Structure
We propose a model of particles with two very massive fundamental constituents, maxons. One of them is a fractionally charged color triplet and the other is a neutral color singlet. Leptons, quarks and the weak bosons are quasiparticles in the system of interacting maxons. Some implications of the model are discussed.
[2287] vixra:1903.0223 [pdf]
Comparing Anytime Learning to Organic Computing
In environments where finding the best solution to a given problem is computationally infeasible or undesirable due to other restrictions, the approach of anytime learning has become the de facto standard. Anytime learning allows intelligent systems to adapt and remain operational in a constantly changing environment. Based on observation of the environment, the underlying simulation model is changed to fit the task and the learning process begins anew. This process is expected to never terminate, therefore continually improving the set of available strategies. Optimal management of uncertainty in tasks, which require a solution in real time, can be achieved by assuming faulty yet improving output. Properties of such a system are not unlike those present in organic systems. This article aims to give an introduction to anytime learning in general as well as to show the similarities to organic computing in regards to the methods and strategies used in both domains.
[2288] vixra:1903.0207 [pdf]
Cellular Automaton Graphics(6)
Developing a regular polyhedron on a plane, setting discrete coordinates on the development and applying a boundary condition of the regular polyhedron to it, we realize symmetrical graphics.
[2289] vixra:1903.0190 [pdf]
Events Simultaneity and Light Propagation in the Context of the Galilean Principle of Relativity
The intent of this work is to present a discussion of the Galilean Principle of Relativity and of its implications for what concerns the nature of simultaneity of events and the characteristics of light propagation. It is shown that by using a clock synchronization procedure that makes use of isotropically propagating signals of generic nature, the simultaneity of distinct events can be established in a unique way by different observers, also when such observers are in relative motion between themselves. Such absolute nature of simultaneity is preserved in the passage from a stationary to a moving reference frame also when a set of generalized space-time coordinates is introduced. The corresponding transformations of coordinates between the two moving frames can be considered as a generalization of the Lorentz transformations to the case of synchronization signals having characteristic speed different from the speed of light in vacuum. The specific invariance properties of these coordinate transformations with respect to the characteristic speed of propagation of the synchronization signals and of the corresponding constitutive laws of the underlying physical phenomenon are also presented, leading to a different interpretation of their physical meaning with respect to the commonly accepted interpretation of the Lorentz transformations. On the basis of these results, the emission hypothesis of W. Ritz, that assumes that light is always emitted with the same relative speed with respect to its source and that is therefore fully consistent with the Galilean Principle of Relativity, is then applied to justify the outcomes of the Michelson-Morley and Fizeau interferometric experiments by introducing, for the latter case, an additional hypothesis regarding the possible influence of turbulence on the refractive index of the fluid. Finally, a test case to verify the validity of either the Galilean or the Relativistic velocity composition rule is presented. 
The test relies on the aberration of the light coming from celestial objects and on the analysis of the results obtained by applying the two different formulas for the resultant velocity vector to process the data of the observed positions, as measured by a moving observer, in order to determine the actual un-aberrated location of the source.
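A sketch of the form such generalized transformations take, with $w$ the characteristic speed of the synchronization signals (an illustrative reconstruction from the abstract's description, not quoted from the paper):

```latex
x' = \gamma_w \,(x - v t), \qquad
t' = \gamma_w \left( t - \frac{v\,x}{w^2} \right), \qquad
\gamma_w = \left( 1 - \frac{v^2}{w^2} \right)^{-1/2},
```

which reduce to the Lorentz transformations for $w = c$ and to the Galilean transformations in the limit $w \to \infty$.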
[2290] vixra:1903.0186 [pdf]
Advancements of Deep Q-Networks
Deep Q-Networks first introduced a combination of Reinforcement Learning and Deep Neural Networks at a large scale. These networks are capable of learning their interactions within an environment in a self-sufficient manner for a wide range of applications. Over the following years, several extensions and improvements have been developed for Deep Q-Networks. In the following paper, we present the most notable developments for Deep Q-Networks since the initially proposed algorithm in 2013.
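Two ingredients of the original DQN that most later extensions refine are experience replay and a periodically synchronised target network. A dependency-free sketch (a dict stands in for the deep network, and the two actions `(0, 1)` are this toy example's assumption):

```python
import random

# Toy sketch of DQN-style updates on a tabular "network" (a dict).

def dqn_step(q, q_target, buffer, batch_size=4, alpha=0.5, gamma=0.9):
    """Sample a minibatch of (s, a, r, s2) transitions and move Q(s, a)
    toward the bootstrapped target r + gamma * max_b Q_target(s2, b)."""
    batch = random.sample(buffer, min(batch_size, len(buffer)))
    for s, a, r, s2 in batch:
        target = r + gamma * max(q_target.get((s2, b), 0.0) for b in (0, 1))
        q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))

def sync(q, q_target):
    """Target-network update: copy the online parameters (here, a table)."""
    q_target.clear()
    q_target.update(q)
```

In a real DQN the table is a neural network and the update is a gradient step on the squared temporal-difference error; extensions such as Double DQN, prioritised replay and duelling architectures modify exactly these components.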
[2291] vixra:1903.0184 [pdf]
Who Did Derive First the Division by Zero $1/0$ and the Division by Zero Calculus $\tan(\pi/2)=0, \log 0=0$ as the Outputs of a Computer?
In this short paper, we will introduce an essence of the division by zero calculus and the situation from the viewpoint of computers that will contain a surprising news on the division by zero calculus.
[2292] vixra:1903.0177 [pdf]
Generalized Deng Entropy
Dempster-Shafer evidence theory, as an extension of probability, has wide applications in many fields. Recently, a new entropy called Deng entropy was proposed as an uncertainty measure in evidence theory. Some scholars have pointed out that Deng entropy does not satisfy additivity in uncertainty measurements; this non-additivity can have a huge effect, and in more complex systems the derived entropy is often unusable. Inspired by this, a generalized entropy is proposed, and this entropy implies the relationship between Deng entropy, Rényi entropy and Tsallis entropy.
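The quantity at issue is compact enough to state and compute directly. Assuming the standard definition of Deng entropy for a mass function m over focal elements A, E_d(m) = -Σ_A m(A) log2( m(A) / (2^|A| - 1) ), it reduces to Shannon entropy when all focal elements are singletons, while larger focal elements contribute the extra, non-additive term:

```python
from math import log2

# Deng entropy of a Dempster-Shafer mass function, following the usual
# definition E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ),
# where |A| is the cardinality of the focal element A.

def deng_entropy(masses):
    """masses: dict mapping frozenset focal elements to mass values."""
    return -sum(m * log2(m / (2 ** len(a) - 1))
                for a, m in masses.items() if m > 0)
```

For example, the vacuous assignment m({a, b}) = 1 has Deng entropy log2(3), not 0, which is one way of seeing why the measure fails the additivity expected of classical entropies.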
[2293] vixra:1903.0168 [pdf]
Organic Traffic Control with Dynamic Route Guidance as a Measure to Reduce Exhaust Emissions in Comparison
In this paper an Organic Traffic Control system with Dynamic Route Guidance functionality is examined regarding its emission-reducing effect on road traffic. This system is compared to other environmental measures, namely low emission zones, driving bans and hardware upgrades, with respect to its effect on emissions and other criteria. Results from existing literature and a few calculations are used for this comparison. The sparse data allows for only a few quantitative comparisons. Qualitative comparisons show that this system has the potential to effectively lower emissions in its area of effect. It reduces the quantity of all exhaust gases and additionally fuel consumption, without disadvantages for certain road users. This is not the case with the comparative measures.
[2294] vixra:1903.0162 [pdf]
Statement of Quantum Indeterminacy
This article is a concise statement of the machinery of quantum indeterminacy — in response to the question: What is indeterminacy; is it something that can be written down? Keywords: foundations of quantum theory, quantum randomness, quantum indeterminacy, logical independence, self-reference, logical circularity, mathematical undecidability, Kurt Gödel.
[2295] vixra:1903.0158 [pdf]
Quantum Chromodynamics Based Model: a New Perspective on Halo-Structure and New-Magicity in Exotic Nuclei
A quite recent, ingenious experimental paper (Raabe et al., Nature 431 (2004) 823) studied fusion of an incoming beam of the halo nucleus 6-He with the target nucleus 238-U. They managed to extract information which could make a basic discrimination between the structure of the target nucleus (behaving as a standard nucleus with density distribution described by the canonical RMS radius r = r0 A^(1/3) with r0 = 1.2 fm) and the "core" of the halo nucleus, which, surprisingly, does not follow the standard density distribution with the above RMS radius. This provides unambiguous and strong support for a Quantum Chromodynamics based model structure, which shows how and why the halo structure arises. This model succeeds in identifying all known halo nuclei and also makes clear-cut and unique predictions for new halo nuclei. It also provides a consistent and unified understanding of what is implied for the emergence of new magic numbers in the study of exotic nuclei. It is triton clustering, as apparent from experimental data on neutron-rich nuclei, which guides us to this new model. It provides a new perspective on how QCD leads to a consistent understanding of the nuclear phenomenon, both for N ∼ Z nuclei and for those which are far away from this limit.
[2296] vixra:1903.0157 [pdf]
Consideration of the Riemann Hypothesis: 43 Counterexamples
I found zero points which seem to deviate from the 0.5 line. I had thought that zero points off the 0.5 line could not easily be found in regions that cannot be shown in a figure, but in such regions they can be found one after another. It is completely unknown whether the critical axis is distorted around 0.5 in these regions or whether this is mere coincidence. The number of zero points found in such regions is now 43. No matter how I looked, they were not found in other areas, so there seems to be no interpretation other than that the 0.5 axis is distorted in these regions. Somewhere on the net I recall reading a mathematician's view that "there are countless zero points in the vicinity of 0.5 in high regions". We report the search for zero points in the high-value area of the imaginary part, which had been given up as no longer possible even with a supercomputer; 43 zero-point searches in this area were thus successful, and they are written up here. There may be many counterexamples far beyond 0.5, past this limit, but an ordinary computer cannot calculate them. Moreover, I believe that whether these are really counterexamples can only be confirmed on a supercomputer, where the necessary corrections would also have to be made.
[2297] vixra:1903.0138 [pdf]
A Survey on Reinforcement Learning for Dialogue Systems
Dialogue systems are computer systems which communicate with humans using natural language. The goal is not just to imitate human communication but to learn from these interactions and improve the system's behaviour over time. Therefore, different machine learning approaches can be implemented, with Reinforcement Learning being one of the most promising techniques to generate a contextually and semantically appropriate response. This paper outlines the current state-of-the-art methods and algorithms for integration of Reinforcement Learning techniques into dialogue systems.
[2298] vixra:1903.0135 [pdf]
A Survey on Classification of Concept Drift with Stream Data
Concept drift occurs in many applications of machine learning. Detecting concept drift is the main challenge in data streams because of their high speed and large size, which prevent them from fitting in main memory. We take a brief look at the types of changes in concept drift. This paper discusses methods for detecting concept drift and focuses on the problems with existing approaches, covering STAGGER, the FLORA family, decision tree methods, meta-learning methods and CD algorithms. Furthermore, classifier ensembles for change detection are discussed.
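One classical detector in this family is the Drift Detection Method (DDM) of Gama et al., which monitors a classifier's running error rate. A simplified sketch (the class layout and the three-deviation threshold follow the usual description; only the drift signal is modelled, not the warning level):

```python
# Simplified DDM-style drift detector: track the error rate p_i of a
# classifier and its deviation s_i = sqrt(p_i * (1 - p_i) / i); signal
# drift when p_i + s_i exceeds the best level seen by three deviations.

class DDM:
    def __init__(self):
        self.n = 0
        self.p = 0.0
        self.p_min = float("inf")
        self.s_min = float("inf")

    def add(self, error):  # error: 1 if the prediction was wrong, else 0
        self.n += 1
        self.p += (error - self.p) / self.n   # incremental error rate
        s = (self.p * (1 - self.p) / self.n) ** 0.5
        if self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s  # remember the best level
        return self.p + s > self.p_min + 3 * self.s_min  # drift signal
```

After a drift signal, the usual reaction is to discard the model and retrain on recent examples only, since the concept the old model learned no longer holds.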
[2299] vixra:1903.0121 [pdf]
Online Transfer Learning and Organic Computing for Deep Space Research and Astronomy
Deep space exploration is one of the pillars of outer space analysis and physical science. The amount of data from the numerous spacecraft and satellites orbiting the objects of study is increasing day by day. The information collected from the various advanced space missions is huge; it helps us enhance current space knowledge, and the experiences can be converted and transformed into segregated knowledge which helps us explore and understand the realms of deep space. Online Transfer Learning (OTL) is a machine learning concept in which knowledge is transferred between a source domain and a target domain in real time, in order to help train a classifier for the target domain. Online transfer learning can be an efficient method for transferring experiences and data gained from space analysis to a new learning task, and can also routinely update the knowledge as the task evolves.
[2300] vixra:1903.0120 [pdf]
A Discussion of Detection of Mutual Influences Between Socialbots in Online (Social) Networks
Many people organise themselves online in social networks or share knowledge in open encyclopaedias. However, these networks do not only belong to humans: a huge variety of socialbots that imitate humans inhabit them and are connected to each other. The connections between socialbots lead to mutual influences between them. If the influence socialbots have on each other is too big, they adopt the behaviour of the other socialbot and get worse at imitating humans. Therefore, it is necessary to detect when socialbots are mutually influencing each other. For a better overview, socialbots in the social networks Facebook and Twitter and in the open encyclopaedia Wikipedia are observed and the mutual influences between them detected. Furthermore, this paper discusses how socialbots could handle the detected influences.
[2301] vixra:1903.0117 [pdf]
A Survey on Different Mechanisms to Classify Agent Behavior in a Trust Based Organic Computing Systems
Organic Computing (OC) systems differ from traditional software systems, as they are composed of a large number of highly interconnected and distributed subsystems. In systems like this, it is not possible to predict all possible system configurations and to plan an adequate system behavior entirely at design time. An open/decentralized desktop grid is one example: trust mechanisms are applied to agents that show self-* properties (self-organization, self-healing, self-configuration and so on). In this article, some mechanisms that could help in the classification of agent behavior at run time in trust-based organic computing systems are illustrated. In doing so, isolation of agents that reduce the overall system's performance becomes possible. The trust concept can be applied to agents, and the agents will then know whether their interacting agents belong to the same trust community and how trustworthy they are. Trust is a significant concern in large-scale open distributed systems; it lies at the core of all interactions between agents which operate in continuously varying environments. Current research leads in the area of trust in computing systems are evaluated and addressed. This article shows that the mechanisms discussed can successfully identify and classify groups of systems with undesired behavior.
[2302] vixra:1903.0089 [pdf]
Deep Meta-Learning and Dynamic Runtime Exploitation of Knowledge Sources for Traffic Control
In the field of machine learning and artificial intelligence, meta-learning describes how previous learning experiences can be used to increase the performance on a new task. For this purpose, it can be investigated how prior (similar) tasks have been approached and improved, and knowledge can be obtained about achieving the same goal for the new task. This paper outlines the basic meta-learning process which consists of learning meta-models from meta-data of tasks, algorithms and how these algorithms perform on the respective tasks. Further, a focus is set on how this approach can be applied and is already used in the context of deep learning. Here, meta-learning is concerned with the respective machine learning models themselves, for example how their parameters are initialised or adapted during training. Also, meta-learning is assessed from the viewpoint of Organic Computing (OC), where finding effective learning techniques that are able to handle sparse and unseen data is of importance. An alternative perspective on meta-learning from this domain, which focuses on how an OC system can improve its behaviour with the help of external knowledge sources, is highlighted. To bridge the gap between those two perspectives, a model is proposed that integrates a deep, meta-learned traffic flow predictor into an organic traffic control (OTC) system that dynamically exploits knowledge sources during runtime.
[2303] vixra:1903.0086 [pdf]
Novelty Detection Algorithms and Their Application in Industry 4.0
Novelty detection is a very important part of intelligent systems. Its task is to classify the data produced by the system and identify any new or unknown patterns that were not present during the training of the model. Different algorithms have been proposed over the years using a wide variety of technologies like probabilistic models and neural networks. Novelty detection and reaction is used to enable self*-properties in technical systems to cope with increasingly complex processes. Using the notion of Organic Computing, industrial factories are getting more and more advanced and intelligent. Machines gain the capability of self-organization, self-configuration and self-adaptation to react to outside influences. This survey paper looks at the state-of-the-art technologies used in Industry 4.0 and assesses different novelty detection algorithms and their usage in such systems. To this end, different data sources, and consequently applications for potential novelty detection, are analyzed. Three different novelty detection algorithms are then presented using different underlying technologies, and the applicability of these algorithms in combination with the defined scenarios is analyzed.
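As an illustrative aside (not taken from the surveyed paper), the simplest probabilistic novelty detector fits a Gaussian to normal operating data and flags readings whose z-score exceeds a threshold; a minimal sketch:

```python
import math

def fit_gaussian(samples):
    """Estimate mean and variance of normal training data."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

def is_novel(x, mean, var, threshold=3.0):
    """Flag a reading whose z-score exceeds the threshold."""
    return abs(x - mean) / math.sqrt(var) > threshold

# Hypothetical machine-sensor readings during normal operation
normal = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
mean, var = fit_gaussian(normal)
print(is_novel(10.1, mean, var))  # False
print(is_novel(14.0, mean, var))  # True
```

Real industrial systems would use richer models (mixture densities, autoencoders), but the thresholding structure is the same.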
[2304] vixra:1903.0066 [pdf]
Observations of a Possible Unification Algebra
A C-loop algebra is assembled as the product of a Clifford algebra and a Cayley-Dickson algebra. Once the principle of spatial equivalence is invoked, a sub-algebra is identified with features that suggest it could provide an underlying basis for the standard model of fundamental particles.
[2305] vixra:1903.0059 [pdf]
Are Imaginary Numbers Rooted in an Asymmetric Number System? The Alternative is a Symmetric Number System!
In this paper, we point out an interesting asymmetry in the rules of fundamental mathematics between positive and negative numbers. Further, we show that there exists an alternative numerical system that is basically identical to today’s system, but where positive numbers dominate over negative numbers. This is like a mirror symmetry of the existing number system. The asymmetry in both of these systems leads to imaginary and complex numbers. We also suggest an alternative number system with perfectly symmetrical rules – that is, where there is no dominance of negative numbers over positive numbers, or vice versa, and where imaginary and complex numbers are no longer needed. This number system seems to be superior to other numerical systems, as it brings simplicity and logic back to areas that have been dominated by complex rules for much of the history of mathematics. We also briefly discuss how the Riemann hypothesis may be linked to the asymmetry in the current number system. The foundation rules of a number system cannot, in general, be proven incorrect or correct inside the number system itself. However, the ultimate goal of a number system is, in our view, to be able to describe nature accurately. The optimal number system should therefore be developed with feedback from nature. If nature, at a very fundamental level, is ruled by symmetry, then a symmetric number system should make it easier to understand nature than an asymmetric number system would. We hypothesize that a symmetric number system may thus be better suited to describing nature. Such a number system should be able to get rid of imaginary numbers in space-time and quantum mechanics, for example, two areas of physics that to this day are clouded in mystery.
[2306] vixra:1903.0055 [pdf]
Using Spiral Waves of the Little Brain on the Heart for Designing New Neuronal Circuits in Brain
Recently, some authors have considered spiral waves in neuronal systems and proposed a model for them. We generalize their considerations to waves radiated from the little brain on the heart of birds to design new neuronal circuits in the brain. First, we put two chick embryos in an inductor and send a current through it with a generator. The radiated waves of the chick embryos change the initial current and produce an oscillating current which can be observed on a scope. Using this system, we consider the spiral waves exchanged between the two little brains on the hearts of two chick embryos and show that they have direct effects on the life, death and other activities of each other. We put two chick embryos of two different types in this inductor and control the process of formation of neuronal circuits. Each type has its own circuits, and thus the spiral waves exchanged between the two chick embryos may change the shape of the neuronal circuits in each type. Comparing the radiated signals of a chick embryo which was under radiation in this inductor with those of a normal chick embryo that did not experience external waves, we can consider the differences between their neuronal circuits.
[2307] vixra:1903.0047 [pdf]
Supersymmetry Entière: Preons, Particles and Inflation
A scenario of particles with unbroken supersymmetry has been proposed recently, a supersymmetric preon model. It offers an economic basis for constructing the standard model particles and going beyond it to supergravity. The model predicts that the standard model's superpartners do not exist in nature. The article is largely a review of selected papers. The model is tentatively explored towards quark and lepton structure. The supersymmetric Wess-Zumino and Starobinsky types of inflation models are discussed. Both are found to agree well with the Planck 2018 CMB data, thus giving experimental support to supersymmetry on an energy scale of $10^{13}$ GeV. Some future directions are hinted at.
[2308] vixra:1903.0043 [pdf]
Lorentz Magnetic Force Law Not Precisely Verified.
The Lorentz magnetic force law has not been precisely verified. Its experimental basis lies in the early experiments done by the pioneers in the 1840s and 1850s; no new experiment has been done since Hendrik Lorentz presented the law in 1895 in its current form: F = q(v × B). The NIST database of atomic masses of the various nuclides is effectively the experimental data collected in an internationally distributed experiment to verify the Lorentz magnetic force law, by using it to predict the atomic masses of nuclides. By comparing the predicted values with actual values measured using chemical methods, we could indirectly confirm the correctness of the law quantitatively to as much as 1 part in 10^7.
[2309] vixra:1903.0036 [pdf]
Stochastic Causality and Quantization of Gravity
Causality is a fundamental principle in Einstein's theory of general relativity. We consider a theory of broken causality obtained by assuming a stochastic nature for the causal process. We show that this stochastic breaking of causality yields a general relativity with broken causality, and that this is equivalent to a theory of quantum gravity. We investigate some properties of quantum gravity in relation to the holographic principle and calculate the Shannon entropy. In these investigations, we see the appearance of the holographic principle at first order in the expansion of perturbation theory. The result indicates that the theory of stochastic causality that we established is a non-perturbative theory of quantum gravity.
[2310] vixra:1903.0034 [pdf]
Anisotropic Gravity that Gives an Anisotropic Big G Inside the CODATA Error Range
At least one observational study has claimed that Newton's gravitational constant seems to vary with the direction relative to the fixed stars, see [1]. We think this is unlikely, but such experiments should be repeated or at least investigated further. If it is the case that gravity is directionally dependent, then how could this be explained, and how could/should our gravity formulas be modified? In this paper, we introduce an anisotropic big G that is dependent on the direction relative to the fixed stars, and therefore on a given location on Earth, dependent on the Earth's rotation. A series of experiments claim to have found an anisotropic one-way speed of light by getting around Einstein-Poincare synchronization, although they have not received a great deal of attention. We do not question that the one-way speed of light is isotropic when measured with Einstein-Poincare synchronized clocks. We hypothesize here that gravity moves with the speed of light and that the true one-way speed of gravity is anisotropic. Based on this, we get an anisotropic gravitational ``constant," which, if calibrated to one-way light experiments, is inside two standard deviations of error as given by CODATA.
[2311] vixra:1903.0025 [pdf]
Is There a Missing Lorentz Shift for Mass?
In special relativity, we operate with length contraction and length transformation; these are not the same thing even though they are related. In addition, we have time dilation and time transformation. However, when it comes to mass, we have only relativistic mass and no mass transformation. We will suggest here, based on a better understanding of mass at the quantum level, that there must also be a Lorentz mass transformation. Recent research strongly indicates that mass is directly linked to the Compton wavelength of the particle in question and since we can operate with both length contraction and length transformation, this means we should have corresponding masses. Length contraction of the Compton wavelength corresponds to what is known today as relativistic mass, while length transformation means we also need mass transformation.
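The abstract's argument can be made concrete with the standard relations (a sketch of the reasoning, not the paper's own derivation): if mass is tied to the Compton wavelength $\lambda_C = h/(mc)$, then contracting the wavelength reproduces relativistic mass:

```latex
m = \frac{h}{\lambda_C \, c}, \qquad
\lambda_C' = \lambda_C \sqrt{1 - v^2/c^2}
\;\;\Longrightarrow\;\;
m' = \frac{h}{\lambda_C' \, c} = \frac{m}{\sqrt{1 - v^2/c^2}}
```

By the same logic, a Lorentz *transformation* (rather than contraction) of the Compton wavelength would yield a corresponding transformed mass, which is the paper's proposal.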
[2312] vixra:1903.0012 [pdf]
A Survey for Testing Self-organizing, Adaptive Systems in Industry 4.0
Complexity in technical development increases rapidly. Regular systems are no longer able to fulfill all the requirements. Organic computing systems are inspired by how complexity is mastered in nature. This leads to a fundamental change in software engineering for complex systems. Based on machine learning techniques, a system develops self*-properties which allow it to make decisions at runtime and to operate with nearly no human interaction. Testing is a part of the software engineering process to ensure the functionality and the quality of a system. But when using self-organizing, adaptive systems, traditional testing approaches reach their limits. Therefore, new methods for testing such systems have to be developed. Many different testing approaches already exist, most of them developed within research groups. Nevertheless, there is still a need for further discussion and action on this topic. In this paper the challenges for testing self-organizing, adaptive systems are specified. Three different testing approaches are reviewed in detail. In light of the ongoing fourth industrial revolution, it is discussed which of these approaches would fit best for testing industrial manufacturing robots.
[2313] vixra:1903.0006 [pdf]
Multi-Agent Reinforcement Learning - From Game Theory to Organic Computing
Complex systems consisting of multiple agents that interact both with each other and with their environment can often be found in nature and in technical applications. This paper gives an overview of important Multi-Agent Reinforcement Learning (MARL) concepts, challenges and current research directions. It briefly introduces traditional reinforcement learning and then shows how MARL problems can be modelled as stochastic games. Here, the type of problem and the system configuration can lead to different algorithms and training goals. Key challenges such as the curse of dimensionality, choosing the right learning goal and the coordination problem are outlined. In particular, aspects of MARL that have previously been considered from a critical point of view are discussed with regard to whether and how current research has addressed these criticisms or shifted its focus. The wide range of possible MARL applications is hinted at by examples from recent research. Further, MARL is assessed from an Organic Computing point of view, where it takes a central role in the context of self-learning and self-adapting systems.
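As a hedged illustration of the coordination problem mentioned above (not an algorithm from the paper), two independent Q-learners in a stateless identical-payoff coordination game can be sketched in a few lines:

```python
import random

random.seed(0)
ACTIONS = [0, 1]

# Identical-payoff coordination game: both agents score 1 when matching.
def payoff(a1, a2):
    return 1.0 if a1 == a2 else 0.0

q1 = [0.0, 0.0]   # agent 1's action values
q2 = [0.0, 0.0]   # agent 2's action values
alpha, eps = 0.1, 0.2

def choose(q):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

for _ in range(5000):
    a1, a2 = choose(q1), choose(q2)
    r = payoff(a1, a2)
    q1[a1] += alpha * (r - q1[a1])
    q2[a2] += alpha * (r - q2[a2])

best1 = max(ACTIONS, key=lambda a: q1[a])
best2 = max(ACTIONS, key=lambda a: q2[a])
print(best1, best2)   # the agents typically lock onto a matched action
```

Each agent treats the other as part of the environment, which already hints at the non-stationarity and equilibrium-selection issues the survey discusses.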
[2314] vixra:1902.0497 [pdf]
Physical Mathematics and the Fine-Structure Constant
Research into ancient physical structures, some having been known as the seven wonders of the ancient world, inspired new developments in the early history of mathematics. At the other end of this spectrum of inquiry the research is concerned with the minimum of observations from physical data as exemplified by Eddington’s Principle. Current discussions of the interplay between physics and mathematics revive some of this early history of mathematics and offer insight into the fine-structure constant. Arthur Eddington’s work leads to a new calculation of the inverse fine-structure constant giving the same approximate value as ancient geometry combined with the golden ratio structure of the hydrogen atom. The hyperbolic function suggested by Alfred Landé leads to another result, involving the Laplace limit of Kepler’s equation, with the same approximate value and related to the aforementioned results. The accuracy of these results is consistent with the standard reference. Relationships between the four fundamental coupling constants are also found.
[2315] vixra:1902.0484 [pdf]
Shox96 - Guaranteed Compression for Short Strings
None of the lossless entropy encoding methods so far have addressed compression of small strings of arbitrary lengths. Although it appears inconsequential, the space occupied by several independent small strings becomes significant in memory-constrained environments. It is also significant when attempting efficient storage of such small strings in a database where, while block compression is most efficient, retrieval efficiency could be improved if the strings are individually compressed. This paper formulates a hybrid encoding method with which small strings can be compressed using context-aware static codes, resulting in surprisingly good ratios, and which can also be used in constrained environments like Arduino. We also go on to prove that this technique can guarantee compression for any English-language sentence of at least three words.
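To illustrate the general idea of context-aware static codes for short strings (a toy sketch, not the actual Shox96 scheme), frequent characters can be given short fixed codes with an escape for everything else:

```python
# 15 frequent characters get 4-bit codes; anything else is escaped
# with the 4-bit marker 1111 followed by the raw 8-bit byte.
# The frequency ranking below is a hypothetical choice for the demo.
COMMON = " etaoinshrdlucm"
ESCAPE = 15

def compress(text):
    bits = []
    for ch in text:
        idx = COMMON.find(ch)
        if idx >= 0:
            bits.append(format(idx, "04b"))
        else:
            bits.append(format(ESCAPE, "04b") + format(ord(ch), "08b"))
    return "".join(bits)

def decompress(bits):
    out, i = [], 0
    while i < len(bits):
        idx = int(bits[i:i+4], 2)
        i += 4
        if idx == ESCAPE:
            out.append(chr(int(bits[i:i+8], 2)))
            i += 8
        else:
            out.append(COMMON[idx])
    return "".join(out)

s = "the rain in spain"
packed = compress(s)
print(len(packed), "bits vs", 8 * len(s))   # shorter for typical English
```

The real scheme uses richer context-dependent codes and proves a compression guarantee; this sketch only shows why static codes already shrink typical English text.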
[2316] vixra:1902.0473 [pdf]
A Possibility of CPT Violation in the Standard Model
It is shown that there is a possibility of violation of CPT symmetry in the Standard Model which does not contradict the famous CPT theorem. To check this possibility experimentally it is necessary to increase the precision of measurements of the proton and antiproton mass difference by an order of magnitude.
[2317] vixra:1902.0452 [pdf]
Displacement And Wavelength In Non-Inertial Reference Frame
The parity symmetry in physics connects the motions in two different reference frames. By examining the displacements in both reference frames, the length of the displacement can be shown to be conserved in both. For two frames in relative inertial motion, the displacement is conserved in all inertial reference frames. For two frames free to accelerate, the displacement is conserved in all non-inertial reference frames. The length of a displacement is conserved in all reference frames, and the wavelength of a wave is conserved in all reference frames. The frequency varies with reference frame in the Doppler effect. Therefore, the speed of light varies with reference frame: light travels at different speeds in different reference frames.
[2318] vixra:1902.0448 [pdf]
On The Model Of Hyperrational Numbers With Selective Ultrafilter
In the standard construction of hyperrational numbers using an ultrapower, we assume that the ultrafilter is selective. This makes it possible to assign a real value to any finite hyperrational number. So we can consider hyperrational numbers with a selective ultrafilter as an extension of the traditional real numbers. We also prove the existence of a strictly monotonic or stationary representing sequence for any hyperrational number.
[2319] vixra:1902.0429 [pdf]
The Richardson Nobelian Experiment in Magnetic Field
The Richardson thermal effect is considered for the situation where the thermal electrons are inserted into a homogeneous magnetic field. The electron flow in the magnetic field produces synchrotron radiation. We calculate the spectral distribution of the synchrotron photons.
[2320] vixra:1902.0401 [pdf]
Three-Dimensional Quadrics in Conformal Geometric Algebras and Their Versor Transformations
This work explains how three-dimensional quadrics can be defined by the outer products of conformal geometric algebra points in higher dimensions. These multivector expressions encode all types of quadrics in arbitrary scale, location and orientation. Furthermore, a newly modified approach (compared to Breuils et al., 2018, https://doi.org/10.1007/s00006-018-0851-1) now allows not only the use of the standard intersection operations, but also of versor operators (scaling, rotation, translation). The new algebraic form of the theory is explained in detail.
[2321] vixra:1902.0386 [pdf]
Diversity of Ensembles for Data Stream Classification
When constructing a classifier ensemble, diversity among the base classifiers is one of the important characteristics. Several studies have been made in the context of standard static data, in particular when analyzing the relationship between a high ensemble predictive performance and the diversity of its components. Besides, ensembles of learning machines have been designed to learn in the presence of concept drift and adapt to it. However, diversity measures have not received much research interest for evolving data streams. Only a few researchers directly consider promoting diversity while constructing an ensemble, or when rebuilding it at the moment a drift is detected. In this paper, we present a theoretical analysis of different diversity measures and relate them to the success of ensemble learning algorithms for streaming data. The analysis provides a deeper understanding of the concept of diversity and its impact on online ensemble learning in the presence of concept drift. More precisely, we are interested in answering the following research question: which commonly used diversity measures are used in the context of static-data ensembles, and how far are they applicable in the context of streaming-data ensembles?
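Two of the commonly used pairwise diversity measures in this line of work, the disagreement measure and the Q-statistic, can be sketched as follows (an illustration, not code from the paper):

```python
def pairwise_stats(pred_a, pred_b, truth):
    """Count joint correct/incorrect decisions of two classifiers."""
    n11 = n10 = n01 = n00 = 0
    for a, b, y in zip(pred_a, pred_b, truth):
        ca, cb = a == y, b == y
        if ca and cb:       n11 += 1   # both correct
        elif ca and not cb: n10 += 1   # only A correct
        elif cb:            n01 += 1   # only B correct
        else:               n00 += 1   # both wrong
    return n11, n10, n01, n00

def disagreement(pred_a, pred_b, truth):
    """Fraction of samples where exactly one classifier is correct."""
    n11, n10, n01, n00 = pairwise_stats(pred_a, pred_b, truth)
    return (n10 + n01) / (n11 + n10 + n01 + n00)

def q_statistic(pred_a, pred_b, truth):
    """Yule's Q: -1 (maximally diverse) .. +1 (identical behaviour)."""
    n11, n10, n01, n00 = pairwise_stats(pred_a, pred_b, truth)
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

truth  = [0, 0, 1, 1, 0, 1]
pred_a = [0, 0, 1, 0, 0, 1]
pred_b = [0, 1, 1, 1, 1, 1]
print(disagreement(pred_a, pred_b, truth))  # 0.5
print(q_statistic(pred_a, pred_b, truth))   # -1.0
```

In the streaming setting the open question is how such window-based counts behave when the underlying concept drifts.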
[2322] vixra:1902.0371 [pdf]
The Role of Radiated Non-Linear Electromagnetic Waves from Initial DNAs in Formation of the Little Brain, Neural Circuits and Other Decision Centers: Determining Time of Death by Considering Evolutions of Waves of Death
In this research, we find that after the formation of the initial DNAs of a chick embryo, they exchange electromagnetic signals with the medium and with each other. In fact, each gene of these DNAs acts like a receiver or sender of radio waves. To control the process of transferring information, some circuits emerge, where each circuit depends on the activity of one gene. Because of the various types of emitted waves, each neuron has several types of terminals in its dendrite and axon to receive and send these waves. Before the formation of neurons, initial information is transmitted by blood molecules. Because of the role of blood molecules in transferring initial information, the first circuits are formed on the heart and build the little brain. Eventually, however, some circuits emerge on the head and construct the brain. We give some reasons for the existence of the little brain. We connect a chick embryo to a scope and observe the evolution of waves during the formation of the little brain and the main brain. In the absence of the brain, the little brain emits some signals, while as time passes and the brain forms, the interaction between these systems leads to an increase in radiated waves. Then, we put two shell-less cultures of chick embryos in an inductor, apply a magnetic field to both of them and show that the little brains of these systems interact with each other and change the value of the output magnetic field and currents. Next, we consider the signals of the little brain of birds during removal of their heads. We find that the brain always sends some signals to the little brain and informs it of the end of life. However, if removal is done suddenly, these waves cannot be exchanged and the heart continues to work. This offers some hope for curing patients.
[2323] vixra:1902.0345 [pdf]
Memristor Circuit Equations with Periodic Forcing
In this paper, we show that the dynamics of a wide variety of nonlinear systems, such as engineering, physical, chemical, biological, and ecological systems, can be simulated or modeled by the dynamics of memristor circuits. This has the advantage that we can apply nonlinear circuit theory to analyze the dynamics of memristor circuits. Applying an external source to these memristor circuits, they exhibit complex behavior, such as chaos and non-periodic oscillation. If the memristor circuits have an integral invariant, they can exhibit quasi-periodic or non-periodic behavior under sinusoidal forcing. Their behavior greatly depends on the initial conditions, the parameters, and the maximum step size of the numerical integration. Furthermore, an overflow is likely to occur due to numerical instability in long-time simulations. In order to generate a non-periodic oscillation, we have to choose the initial conditions, the parameters, and the maximum step size carefully. We also show that we can reconstruct chaotic attractors by using the terminal voltage and current of the memristor. Furthermore, in many memristor circuits, the active memristor switches between passive and active modes of operation, depending on its terminal voltage. We can measure its complexity order by defining a binary coding for the operation modes. By using this coding, we show that the memristor's operation modes exhibit higher complexity in the forced memristor Toda lattice equations and the forced memristor Van der Pol equations. Furthermore, the memristor has special operation modes in the memristor Chua circuit.
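As a minimal illustration of sinusoidally forced memristor dynamics (an HP-style toy model with arbitrary parameters, not one of the circuits analyzed in the paper), simple Euler integration already shows the pinched-hysteresis behaviour:

```python
import math

# Illustrative HP-style memristor driven by a sinusoidal voltage;
# all parameter values below are arbitrary choices for the demo.
R_ON, R_OFF, K = 100.0, 16000.0, 1e4
A, OMEGA, DT = 1.0, 2 * math.pi, 1e-4

w = 0.1            # internal state, clamped to [0, 1]
trace = []
t = 0.0
while t < 2.0:     # two forcing periods
    v = A * math.sin(OMEGA * t)
    m = R_ON * w + R_OFF * (1.0 - w)       # memristance
    i = v / m
    w = min(1.0, max(0.0, w + K * i * DT)) # Euler step for the state
    trace.append((v, i))
    t += DT

# A pinched hysteresis loop passes through the origin: the current is
# (near) zero whenever the driving voltage is (near) zero.
print(max(abs(i) for v, i in trace if abs(v) < 1e-2))
```

As the abstract warns, the qualitative behaviour of such integrations can depend strongly on the step size and initial state, so DT should be varied to check robustness.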
[2324] vixra:1902.0322 [pdf]
Cross Entropy of Belief Function
Dempster-Shafer evidence theory, as an extension of probability theory, has wide applications in many fields. Recently, a new entropy called Deng entropy was proposed in evidence theory. There have been many discussions and applications of Deng entropy. However, there is no discussion of how to apply Deng entropy to measure the correlation between two bodies of evidence. In this article, we first review and analyze some of the work related to mutual information. Then we propose extensions of Deng entropy: joint Deng entropy, conditional Deng entropy and cross Deng entropy. In addition, we prove the relevant properties of these entropies. Finally, we also propose a method to obtain joint evidence.
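For reference, Deng entropy of a basic probability assignment is commonly written as $E_d(m) = -\sum_A m(A)\,\log_2\frac{m(A)}{2^{|A|}-1}$, where $A$ ranges over the focal elements; a minimal sketch (not the paper's code):

```python
import math

def deng_entropy(bpa):
    """bpa maps frozenset focal elements to masses summing to 1."""
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return total

# With singleton focal elements, Deng entropy reduces to Shannon entropy:
m1 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(deng_entropy(m1))   # 1.0 bit

# A vacuous mass on {a, b} spreads belief over 2^2 - 1 subsets:
m2 = {frozenset({"a", "b"}): 1.0}
print(deng_entropy(m2))   # log2(3) ≈ 1.585
```

The joint and conditional variants proposed in the paper build on this base definition.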
[2325] vixra:1902.0315 [pdf]
Constant $e \cdot c / 2\pi\alpha$ Determines Magnetic Flux Quantum in Charged Leptons
The constant $e \cdot c / 2\pi\alpha$ is a common characteristic of charged leptons ($e, \mu, \tau$), resulting from their identical fraction $\hat{m}/\lambda_{C}$ of magnetons $\hat{m}$ to Compton wavelengths $\lambda_{C}$, in spite of their largely differing $\hat{m}$ and $\lambda_{C}$. However, the physical interpretation of this constant had remained uncertain; it is now clarified: it is proven that $e \cdot c / 2\pi\alpha$ is an alternative and equivalent definition of the magnetic flux quantum $h/2e$ which makes up the dipole fields of charged leptons.
[2326] vixra:1902.0295 [pdf]
Black Hole Mass Lost By Both Thermodynamics and Hydrodynamics Effects
We propose a new approach to black hole mass decrease which takes into account thermodynamic and hydrodynamic processes, in the presence of quintessence and phantom dark energies. Accordingly, we insert some terms into the Schwarzschild metric in order to obtain a new expression for the black hole mass. Further, we show that in the thermodynamic process, the second-order phase transition does not occur when we take phantom energy into account, except for complex entropies or relativistic time. Finally, we show a new principle to analyze black hole coalescence in a space-time dilated by dark energy.
[2327] vixra:1902.0285 [pdf]
Error in Michelson and Morley Experiment
Michelson and Morley carried out an experiment in 1887 to determine if the theory of ether is correct. The experiment shows that the speed of light is constant in all directions. However, an error in this experiment was introduced by the calculation of the elapsed time for light to travel between two moving mirrors in the rest frame of ether. Both Michelson and Morley assumed that the speed of light is not altered upon reflection by a moving mirror. This critical error produced a small variation in the distance traveled by the light between mirrors in the rest frame of ether. As a result, Lorentz transformation was proposed to explain the concept of length contraction.
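For reference, the standard ether-frame travel-time calculation that the paper disputes assumes light moves at speed $c$ in the ether frame regardless of reflection, giving different round-trip times for the arm parallel and perpendicular to the apparatus's motion at speed $v$:

```latex
t_{\parallel} = \frac{L}{c-v} + \frac{L}{c+v} = \frac{2L/c}{1-v^2/c^2},
\qquad
t_{\perp} = \frac{2L}{\sqrt{c^2-v^2}} = \frac{2L/c}{\sqrt{1-v^2/c^2}}
```

The factor $\sqrt{1-v^2/c^2}$ between these two times is exactly what length contraction was introduced to cancel; the paper's claim is that the premise of reflection-independent light speed in this computation is in error.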
[2328] vixra:1902.0279 [pdf]
Divergence Measure of Belief Function
It is important to measure the degree of divergence or conflict among pieces of information during preprocessing, because unreliable results can come from combining conflicting bodies of evidence with Dempster's combination rule. However, how to measure the divergence of different evidence is still an open issue. In this paper, a new divergence measure of belief functions based on Deng entropy is proposed in order to measure the divergence of different belief functions. The divergence measure is a generalization of the Kullback-Leibler divergence for probability, since when the basic probability assignment (BPA) degenerates to a probability distribution, the divergence measure equals the Kullback-Leibler divergence. Numerical examples are used to illustrate the effectiveness of the proposed divergence measure.
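The degenerate case described in the abstract, where a BPA with only singleton focal elements reduces the measure to Kullback-Leibler divergence, can be sketched as follows (the paper's exact definition may differ; this illustrates only the probability limit):

```python
import math

def belief_divergence(m1, m2):
    """KL-style divergence between two BPAs over the same focal
    elements (masses of m2 assumed positive wherever m1 is)."""
    total = 0.0
    for focal, mass in m1.items():
        if mass > 0:
            total += mass * math.log2(mass / m2[focal])
    return total

# With singleton-only BPAs both are ordinary probability
# distributions, and the result is the plain KL divergence.
p = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
q = {frozenset({"a"}): 0.9, frozenset({"b"}): 0.1}
print(belief_divergence(p, q))   # ≈ 0.737
```
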
[2329] vixra:1902.0268 [pdf]
Neutrino Mass Replaces Planck Mass as Fundamental Particle
The following derivation shows that the neutrino mass effectively replaces the Planck mass as a fundamental particle associated with Newton's gravity law. The neutrino mass is deduced from the cosmic microwave background and matches a previously obtained experimental value. Using a ratio of forces between two Planck mass pairs in comparison to two neutrino pairs, a proportion to the dimension of the Planck length and a Rindler horizon is formed. The work done on the two pairs is equivalent using this proportion. Additionally, it is concluded that the cosmic diameter, as a particle horizon, can be written in terms of fundamental constants using Wien's displacement law and the cosmic microwave background temperature.
[2330] vixra:1902.0266 [pdf]
Inverse Electric-Nuclear Field
The electrically charged particles cause electrical induction of positive and negative units in the dynamic space, with the result that inverse electric fields are created. Thus, the nuclear force is interpreted as an electric force, 100 times stronger than the maximum electric force of the outer electric field that extends beyond the potential barrier. Moreover, the braking radiation emitted from rapidly moving electrons as they pass close to the nucleus is confirmed.
[2331] vixra:1902.0263 [pdf]
Next Infinite is Zero
Have you ever received emails or letters from the future? The author is seriously thinking of building a time machine. If you put a letter addressed to the past into a Tesla coil (the space inside the coil must be reversed) and attach a stamp, in Japan it would surely be delivered along the way (in the once-peaceful Showa era). In that spirit, I would like to write a warning letter to the past.
[2332] vixra:1902.0240 [pdf]
Zero and Infinity; Their Interrelation by Means of Division by Zero
In this paper, we first fix the definitions of zero and infinity in very general senses, and we give a simple and definite relation between them by means of division by zero. This problem and relation have been considered throughout a long history, beyond mathematics. Within our mathematics, we are able to obtain a definite result for the relation, with a new concept and model, clarifying ideas considered since Aristotle and Euclid.
[2333] vixra:1902.0235 [pdf]
The Proof of The ABC Conjecture - Part I: The Case c=a+1
In this paper, we consider the abc conjecture in the case c=a+1. Firstly, we give the proof of the first conjecture that c < rad^2(ac); it is the key to the proof of the abc conjecture. Secondly, the proof of the abc conjecture is given for \epsilon \geq 1, then for \epsilon \in ]0,1[ in the two cases c \leq rad(ac) and c > rad(ac). We choose the constant K(\epsilon) as K(\epsilon)=e^{(1/\epsilon^2)}. A numerical example is presented.
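The inequality c < rad^2(ac) for c = a+1 can be spot-checked numerically (an empirical illustration, not part of the proof):

```python
def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

# Check c < rad(a*c)^2 for triples (a, b=1, c=a+1), including
# high-power cases like a = 4374 = 2*3^7 with small radicals:
for a in [8, 48, 2400, 4374]:
    c = a + 1
    print(a, c, rad(a * c) ** 2, c < rad(a * c) ** 2)
```
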
[2334] vixra:1902.0223 [pdf]
Horn Torus Models for the Riemann Sphere and Division by Zero
In this paper, we introduce a beautiful horn torus model by Puha and D\"aumler for the Riemann sphere in complex analysis, attaching the zero point and the point at infinity. Surprisingly enough, we can introduce an analytical structure of conformality to the model.
[2335] vixra:1902.0222 [pdf]
Anomaly in Sign Function Probability Function Integration III
In the paper it is demonstrated that integration of products of sign functions and probability density functions, such as in Bell’s formula for ±1 measurement functions, leads to inconsistencies. Keywords: inconsistency, Bell’s theorem.
[2336] vixra:1902.0220 [pdf]
Comments on the Book "Architects of Intelligence" by Martin Ford in the Light of the SP Theory of Intelligence
The book "Architects of Intelligence" by Martin Ford presents conversations about AI between the author and influential researchers. Issues considered in the book are described in relation to features of the "SP System", meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model", both outlined in an appendix. The SP System has the potential to solve most of the problems in AI described in the book, and some others. Strengths and potential of the SP System, which in many cases contrast with weaknesses of deep neural networks (DNNs), include the following: a top-down research strategy has yielded a system with a favourable combination of conceptual "Simplicity" with descriptive or explanatory "Power"; the SP System has strengths and potential with both symbolic and non-symbolic kinds of knowledge and processing; the system has strengths and long-term potential in pattern recognition; it is free of the tendency of DNNs to make large and unexpected errors in recognition; the system has strengths and potential in unsupervised learning, including grammatical inference; the SP Theory of Intelligence provides a theoretically coherent basis for generalisation and the avoidance of under- or over-generalisations; that theory of generalisation may help improve the safety of driverless cars; the SP System, unlike DNNs, can achieve learning from a single occurrence or experience; it has relatively tiny demands for computational resources and volumes of data, with potential for much higher speeds in learning; the system, unlike most DNNs, has strengths in transfer learning; unlike DNNs, it provides transparency in the representation of knowledge and an audit trail for all its processing; the system has strengths and potential in the processing of natural language; it exhibits several different kinds of probabilistic reasoning; the system has strengths and potential in commonsense reasoning and the representation of commonsense knowledge; other strengths include information compression, biological validity, scope for adaptation, and freedom from catastrophic forgetting. Despite the importance of motivations and emotions, no attempt has been made in the SP research to investigate these areas.
[2337] vixra:1902.0189 [pdf]
The Philosophy of Mathematics
What is mathematics? Why does it exist? Is it consistent? Is it complete? We answer these questions, as well as resolve Russell's Paradox and debunk Gödel's Incompleteness Theorem.
[2338] vixra:1902.0187 [pdf]
The Simple and Typical Physical Examples of the Division by Zero 1/0=0 by Ctesibius (286–222 BC) and E. Torricelli (1608–1646)
The division by zero 1/0=0 was discovered on 2014.2.2; however, the result may still not be widely accepted, owing to old and mistaken intuitions. Since we have already given a logical mathematical basis for the division by zero, here we give very good examples in order to see the division by zero 1/0=0 clearly. By these examples, we will be able to understand the division by zero as a trivial matter.
[2339] vixra:1902.0184 [pdf]
Computation, Complexity, and P!=NP Proof
If we refer to a string for Turing machines as a guess and a rejectable substring as a flaw, then all algorithms reject similarly flawed guesses flaw by flaw until they chance on an unflawed guess, settle with a flawed guess, or return the unflawed guesses. Deterministic algorithms therefore must identify all flaws before guessing flawlessly in the worst case. Time complexity is then bounded below by the order of the product of the least number of flaws to cover all flawed guesses and the least time to identify a flaw. Since there exist 3-SAT problems with an exponential number of flaws, 3-SAT is not in P, and therefore P!=NP.
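As a concrete, minimal illustration of the abstract's "guesses" (assignments) and "flaws" (falsified clauses), here is a brute-force 3-SAT search. This sketch is not taken from the paper; the encoding of literals as signed integers is an assumption made for the example.

```python
from itertools import product

def brute_force_3sat(clauses, n_vars):
    """Scan all 2^n assignments ("guesses"); a clause that evaluates
    false under an assignment is a "flaw" that rejects the guess.
    A literal is a signed integer: +v means x_v, -v means NOT x_v."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits  # first unflawed guess found
    return None  # every guess was flawed: unsatisfiable

# (x1 or x2 or x3) and (not x1 or not x2 or x3)
print(brute_force_3sat([(1, 2, 3), (-1, -2, 3)], 3))
```

The exhaustive 2^n scan is exactly the exponential worst case that the abstract's lower-bound argument concerns.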
[2340] vixra:1902.0183 [pdf]
The Rise and Fall of Evolution
Jeremy England proposed in his "Statistical physics of self-replication" that energy dispersion drives evolution. Such is the explanatory power of his theory that we build on it to rethink the relationship between life and entropy as handed to us by Schrödinger, to find a place for the origin and evolution of life within the cosmos, to explain the Cambrian Explosion and the Mass Extinctions from an entropic perspective, hence the title, and finally, to find a way out of the gloom and doom of global warming.
[2341] vixra:1902.0147 [pdf]
Definitive Proof of the Near-Square Prime Conjecture, Landau’s Fourth Problem
The Near-Square Prime conjecture states that there are infinitely many prime numbers of the form x^2 + 1. In this paper, a function is derived that determines the number of primes of the form x^2 + 1 that are less than n^2 + 1 for large values of n. Then, by mathematical induction, it is proven that as n goes to infinity the function goes to infinity, thus proving the Near-Square Prime conjecture.
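The counting function itself is not reproduced in the abstract; as a quick empirical companion (mine, not the paper's), this sketch simply lists the primes of the form x^2 + 1:

```python
from math import isqrt

def is_prime(k):
    """Trial-division primality test."""
    if k < 2:
        return False
    return all(k % d for d in range(2, isqrt(k) + 1))

def near_square_primes(n):
    """Primes of the form x^2 + 1 with x <= n."""
    return [x * x + 1 for x in range(1, n + 1) if is_prime(x * x + 1)]

print(near_square_primes(10))  # [2, 5, 17, 37, 101]
```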
[2342] vixra:1902.0135 [pdf]
Emerging NUI-based Methods for User Authentication: A New Taxonomy and Survey
As the convenience and cost benefits of Natural User Interface (NUI) technologies are hastening their wide adoption, computing devices equipped with such interfaces are becoming ubiquitous. Used for a broad range of applications, from accessing email and bank accounts to home automation and interacting with a healthcare provider, such devices require, more than ever before, a secure yet convenient user authentication mechanism. This paper introduces a new taxonomy and presents a survey of “point-of-entry” user-device authentication mechanisms that employ a natural user interaction. The taxonomy allows a grouping of the surveyed techniques based on the sensor type used to capture user input, the actuator a user applies during interaction, and the credential type used for authentication. A set of security and usability evaluation criteria is then proposed, based on the Bonneau, Herley, Van Oorschot and Stajano framework. An analysis of a selection of techniques and, more importantly, of the broader taxonomy elements they belong to, based on these evaluation criteria, is provided. This analysis and taxonomy provide a framework for the comparison of different authentication alternatives given an application and a targeted threat model. The taxonomy and analysis also offer insights into possibly unexplored, yet potentially rewarding, research avenues for NUI-based user authentication.
[2343] vixra:1902.0118 [pdf]
Correlation of the Fine-Structure Constant to the Cosmic Horizon and Planck Length
The following paper derives the fine-structure constant. This derivation suggests that the fine-structure constant can be theoretically determined as the spectrum range of all the energy modes fitting inside the observable universe. This corresponds to the number of allowed radiation modes of a particle from the cosmic horizon down to Planck length. Additionally, an association between Newton's Law of Gravity and Coulomb's Law suggests there is a connection between mass and charge via the fine-structure constant.
[2344] vixra:1902.0058 [pdf]
We Can Divide the Numbers and Analytic Functions by Zero with a Natural Sense.
It is a famous word that we are not permitted to divide numbers and functions by zero. In our mathematics, {\bf prohibition} is the word attached to the division by zero. For this old and general concept, we will give a simple and affirmative answer. Certainly, we have already given several generalizations of division, as referred to above; however, we wish to understand {\bf the division by zero} in a natural way, and in this paper we give it a clear and good meaning.
[2345] vixra:1902.0040 [pdf]
A Complete Proof of the abc Conjecture: The End of The Mystery
In this paper, we consider the abc conjecture. Firstly, we give a proof of a first conjecture that c<rad^2(abc); it is the key to the proof of the abc conjecture. Secondly, a proof of the abc conjecture is given for \epsilon \geq 1, then for \epsilon \in ]0,1[ in the two cases c\leq rad(abc) and c>rad(abc). We choose the constant K(\epsilon) as K(\epsilon)=6^{1+\epsilon}e^{\left(\frac{1}{\epsilon^2}-\epsilon\right)}. Five numerical examples are presented.
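The inequality c < rad^2(abc) is easy to check numerically for small coprime triples a + b = c; this illustrative sketch (not from the paper) computes the radical by trial division:

```python
from math import gcd

def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        r *= n
    return r

# coprime triples a + b = c, e.g. the classic 1 + 8 = 9
for a, b in [(1, 8), (5, 27), (1, 48)]:
    c = a + b
    assert gcd(a, b) == 1
    print(f"{a}+{b}={c}: rad(abc)={rad(a*b*c)}, c < rad^2? {c < rad(a*b*c)**2}")
```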
[2346] vixra:1902.0016 [pdf]
The Primary Factors of Biological Evolution
The article discusses the theory of biological evolution. The concepts "primary factors of evolution" (natural factors) and "secondary factors of evolution" (the result of evolution itself) are defined. The terms of the theory of evolution "struggle for existence", "selection" and others are considered from the point of view of the interpretation of facts. In order for the theory of evolution to be complete and as objective as possible, it must be based on primary factors, and interpretations should be kept to a minimum. The article discusses Darwin's theory and the modern theory of evolution in the context of these problems. An attempt is made to eliminate the concept of "the struggle for existence", which leads to the following results. A new concept of "realizing the purpose to exist" has been obtained, which is an analogue of the concept of "the struggle for existence (in a wide sense)". It is substantiated that the realization of the purpose to exist is the result of evolution (that is, the secondary factor), is the main characteristic of all living organisms (can be considered as the primary factor in the context of the living), that is, distinguishes the living from the nonliving. Realization of the purpose to exist in some conditions may take the form of a struggle, which in everyday life is usually understood as a struggle for existence (in the narrow sense). It is substantiated that such behavior is an adaptation that appeared in the process of evolution and can be regulated by means of more complex behavior, which is also an adaptation. The physical bases of biological evolution are also considered on the basis of external measures of existence.
[2347] vixra:1902.0002 [pdf]
Time In Non-Inertial Reference Frame
The parity symmetry in physics shows that the motions in two different reference frames are related to each other. By comparing the displacement and the velocity from two reference frames, the elapsed time can be shown to be conserved in both reference frames. For two frames in relative inertial motion, the elapsed time is conserved in all inertial reference frames. If both frames are in circular motion, then the elapsed time is conserved in all local reference frames on the same circle. If both frames are free to move in any direction at any speed, then the elapsed time is conserved in all non-inertial reference frames. Two simultaneous events are simultaneous in all reference frames.
[2348] vixra:1901.0474 [pdf]
Dual-band Dielectric Light-harvesting Nanoantennae Made by Nature
Mechanisms to use nanoparticles to separate sunlight into a photovoltaically useful range and a thermally useful range, to increase the efficiency of solar cells and to dissipate heat radiatively, are discussed based upon lessons we learnt from photosynthesis. We show that the dual-band maxima in the absorption spectrum of bacterial light harvesters are due not only to the bacteriochlorophylls involved but also to the geometry of the light harvester. Being able to manipulate these two bands arbitrarily enables us to fabricate the nanoparticles required. Such mechanisms are also useful for the design of remote power charging and light sensors.
[2349] vixra:1901.0436 [pdf]
Definitive Proof of Legendre's Conjecture
Legendre's conjecture states that there is a prime number between n^2 and (n + 1)^2 for every positive integer n. In this paper, an equation is derived that accurately determines the number of prime numbers less than n for large values of n. Then, using this equation, it is proven by induction that there is at least one prime number between n^2 and (n + 1)^2 for all positive integers n, thus proving Legendre's conjecture for sufficiently large values of n. The error between the derived equation and the actual number of primes less than n was empirically shown to be very small (0.291% at n = 50,000), and it is proven that the error declines as n increases, thus validating the proof.
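Independently of the paper's argument, the conjecture is easy to verify by brute force for small n (an illustrative sketch, not the paper's derivation):

```python
from math import isqrt

def is_prime(k):
    """Trial-division primality test."""
    return k > 1 and all(k % d for d in range(2, isqrt(k) + 1))

def prime_between_squares(n):
    """Smallest prime p with n^2 < p < (n+1)^2, or None if there is none."""
    return next((p for p in range(n * n + 1, (n + 1) ** 2) if is_prime(p)),
                None)

# no counterexample below n = 500
print(all(prime_between_squares(n) is not None for n in range(1, 500)))
```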
[2350] vixra:1901.0431 [pdf]
Generalized Fibonacci Numbers and 4k+1-Fold Symmetric Quasicrystals
Given that the two-parameter $ p, q$ quantum-calculus deformations of the integers $ [ n ]_{ p, q} = (p^n - q^n)/ ( p - q) = F_n $ coincide precisely with the Fibonacci numbers (integers), as a result of Binet's formula when $ p = \tau = { 1 + \sqrt 5 \over 2}$, $ q = { \tilde \tau} = { 1 - \sqrt 5 \over 2 }$ (Galois-conjugate pairs), we extend this result to the $generalized$ Binet's formula (corresponding to generalized Fibonacci sequences) studied by Whitford. Consequently, the Galois-conjugate pairs $ (p, q = \tilde p ) = { 1\over 2} ( 1 \pm \sqrt m ) $, in the very special case when $ m = 4 k + 1$ and square-free, generalize Binet's formula $ [ n ]_{ p, q} = G_n$ generating integer-values for the generalized Fibonacci numbers $G_n$'s. For these reasons, we expect that the two-parameter $ (p, q = \tilde p)$ quantum calculus should play an important role in the physics of quasicrystals with $4k+1$-fold rotational symmetry.
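Under the abstract's definitions, the $p,q$-integers can be checked numerically. Since $p + q = 1$ and $pq = (1-m)/4$, they satisfy an integer recurrence; the sketch below (mine, not the paper's) uses it to generate the generalized Fibonacci integers exactly:

```python
def pq_integer(n, m):
    """[n]_{p,q} = (p^n - q^n)/(p - q) for p, q = (1 +/- sqrt(m))/2.
    Since p + q = 1 and p*q = (1 - m)/4, these satisfy the integer
    recurrence G_n = G_{n-1} + ((m - 1)/4) * G_{n-2} when m = 4k + 1."""
    assert m % 4 == 1, "m = 4k + 1 as in the abstract"
    t = (m - 1) // 4
    a, b = 0, 1  # G_0, G_1
    for _ in range(n):
        a, b = b, b + t * a
    return a

print([pq_integer(n, 5) for n in range(8)])   # Fibonacci: [0, 1, 1, 2, 3, 5, 8, 13]
print([pq_integer(n, 13) for n in range(8)])  # m = 13:    [0, 1, 1, 4, 7, 19, 40, 97]
```

For m = 5 this reproduces Binet's formula for the ordinary Fibonacci numbers, as stated in the abstract.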
[2351] vixra:1901.0430 [pdf]
A Note About the abc Conjecture: A Proof of the Conjecture c<rad^2(abc)
In this paper, we consider the abc conjecture and give a proof of the conjecture c<rad^2(abc), which will be the key to the proof of the abc conjecture.
[2352] vixra:1901.0409 [pdf]
Einstein-Classicality Explains Aspect's Experiment and Refutes Bell's Theorem
With Bell's inequality refuted and his error identified [see References], we now explain Aspect's experiment and refute Bell's theorem via what we call Einstein-classicality: the union of true locality (no influence propagates superluminally) and true realism (some beables change interactively). We also remedy many of Aspect's pro-Bell comments; e.g., inability to picture in 3-space, hopeless searching, vindicated nonlocality.
[2353] vixra:1901.0401 [pdf]
Cosmological Acceleration as a Consequence of Quantum de Sitter Symmetry
Physicists usually understand that physics cannot (and should not) derive that $c\approx 3\cdot 10^8m/s$ and $\hbar \approx 1.054\cdot 10^{-34}kg\cdot m^2/s$. At the same time they usually believe that physics should derive the value of the cosmological constant $\Lambda$ and that the solution of the dark energy problem depends on this value. However, background space in General Relativity (GR) is only a classical notion while on quantum level symmetry is defined by a Lie algebra of basic operators. We prove that the theory based on Poincare Lie algebra is a special degenerate case of the theories based on de Sitter (dS) or anti-de Sitter (AdS) Lie algebras in the formal limit $R\to\infty$ where R is the parameter of contraction from the latter algebras to the former one, and $R$ has nothing to do with the radius of background space. As a consequence, $R$ is necessarily finite, is fundamental to the same extent as $c$ and $\hbar$, and a question why $R$ is as is does not arise. Following our previous publications, we consider a system of two free bodies in dS quantum mechanics and show that in semiclassical approximation the cosmological dS acceleration is necessarily nonzero and is the same as in GR if the radius of dS space equals $R$ and $\Lambda=3/R^2$. This result follows from basic principles of quantum theory. It has nothing to do with existence or nonexistence of dark energy and therefore for explaining cosmological acceleration dark energy is not needed. The result is obtained without using the notion of dS background space (in particular, its metric and connection) but simply as a consequence of quantum mechanics based on the dS Lie algebra. Therefore, $\Lambda$ has a physical meaning only on classical level and the cosmological constant problem and the dark energy problem do not arise. In the case of dS and AdS symmetries all physical quantities are dimensionless and no system of units is needed. 
In particular, the quantities $(c,\hbar,s)$, which are the basic quantities in the modern system of units, are not so fundamental as in relativistic quantum theory. "Continuous time" is a part of classical notion of space-time continuum and makes no sense beyond this notion. In particular, description of the inflationary stage of the Universe by times $(10^{-36}s,10^{-32}s)$ has no physical meaning.
[2354] vixra:1901.0391 [pdf]
N-Dimensional Ads Related Spacetime and Its Transformation
Recently, anti-de Sitter spaces are used in promising theories of quantum gravity like the anti-de Sitter/conformal field theory correspondence. The latter provides an approach to string theory, which includes more than four dimensions. Unfortunately, the anti-de Sitter model contains no mass and is not able to describe our universe adequately. Nevertheless, the rising interest in higher-dimensional theories motivates a deeper look at the n-dimensional AdS spacetime. In this paper, a solution of Einstein's field equations is constructed from a modified anti-de Sitter metric in n dimensions. The idea is based on the connection between the Schwarzschild and McVittie metrics: McVittie's model, which interpolates between a Schwarzschild black hole and an expanding global Friedmann–Lemaître–Robertson–Walker spacetime, can be constructed by a simple coordinate replacement in Schwarzschild's isotropic interval, where the radial coordinate and its differential are multiplied by a time-dependent scale factor a(t). In a previous work I showed that an exact solution of Einstein's equations can analogously be generated from a static transformation of de Sitter's metric. The present article is concerned with the application of this method to an AdS (anti-de Sitter) related spacetime in n dimensions. It is shown that the resulting isotropic interval is a solution of the n-dimensional Einstein equations. Further, it is transformed into a spherically symmetric but anisotropic form, analogously to the transformation found by Kaloper, Kleban and Martin for McVittie's metric.
[2355] vixra:1901.0361 [pdf]
A New Divergence Measure of Belief Function in D-S Evidence Theory
Dempster-Shafer (D-S) evidence theory is useful for handling uncertainty problems. In D-S evidence theory, however, how to handle highly conflicting evidence is still an open issue. In this paper, a new reinforced belief divergence measure, called RB, is developed to measure the discrepancy between basic belief assignments (BBAs) in D-S evidence theory. The proposed RB divergence is the first work to consider both the correlations between belief functions and the subsets of the sets of belief functions. Additionally, the RB divergence has merits for measurement: it can provide a more convincing and effective solution for measuring the discrepancy between BBAs in D-S evidence theory.
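The abstract does not reproduce the RB formula, so no attempt is made to implement it here. As background, this sketch shows Dempster's classical combination rule over BBAs (focal elements as frozensets) and the high-conflict behaviour of Zadeh's well-known example, the kind of situation divergence measures are designed to diagnose:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two BBAs with frozenset focal elements.
    K is the total mass of conflict (pairs with empty intersection)."""
    combined, K = {}, 0.0
    for (A, x), (B, y) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + x * y
        else:
            K += x * y
    # normalize by the non-conflicting mass 1 - K
    return {C: v / (1 - K) for C, v in combined.items()}, K

# Zadeh's classic high-conflict example: almost all mass conflicts,
# yet the tiny shared hypothesis {b} receives all the combined belief
m1 = {frozenset("a"): 0.99, frozenset("b"): 0.01}
m2 = {frozenset("c"): 0.99, frozenset("b"): 0.01}
fused, K = dempster_combine(m1, m2)
print(fused, K)
```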
[2356] vixra:1901.0359 [pdf]
Note on the Golden Mean, Nonlocality in Quantum Mechanics and Fractal Cantorian Spacetime
Given the inverse of the Golden Mean $ \tau^{ -1} = \phi = { 1\over 2} (\sqrt 5 - 1)$, it is known that the continuous fraction expansion of $ \phi^{ -1} = 1 + \phi = \tau$ is $ ( 1, 1, 1, \cdots )$. Integer solutions for the pairs of numbers $ ( d_i, n_i ), i = 1, 2, 3, \cdots $ are found obeying the equation $ ( 1 + \phi)^n = d + \phi^n$. The latter equation was inspired from El Naschie's formulation of fractal Cantorian space time $ {\cal E}_\infty$, and such that it furnishes the continuous fraction expansion of $ ( 1 + \phi )^n ~= ~ (d, d, d, d, \cdots )$, generalizing the original expression for the Golden Mean. Hardy showed that it is possible to demonstrate nonlocality without using Bell inequalities for two particles prepared in $nonmaximally$ entangled states. The maximal probability of obtaining his nonlocality proof was found to be precisely $\phi^5$. Zheng showed that three-particle nonmaximally entangled states revealed quantum nonlocality without using inequalities, and the maximal probability of obtaining the nonlocality proof was found to be $ 0.25 \sim \phi^3 = 0.236$. Given that the two-parameter $ p, q$ quantum-calculus deformations of the integers $ [ n ]_{ p, q} = F_n $ $coincide$ precisely with the Fibonacci numbers, as a result of Binet's formula when $ p = ( 1 + \phi) = \tau, q = - \phi = - \tau^{ -1} $, we explore further the implications of these results in the quantum entanglement of two-particle spin-$s$ states. We finalize with some remarks on the generalized Binet's formula corresponding to generalized Fibonacci sequences.
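As a numeric companion to the equation $(1+\phi)^n = d + \phi^n$, a brute-force search for the integer pairs $(d, n)$ is straightforward (my sketch, not the paper's):

```python
phi = (5 ** 0.5 - 1) / 2   # inverse Golden Mean, 0.618...
tau = 1 + phi              # Golden Mean, 1.618...

# search for integer d with tau**n == d + phi**n, i.e. tau**n - phi**n integral
pairs = [(round(tau ** n - phi ** n), n) for n in range(1, 12)
         if abs((tau ** n - phi ** n) - round(tau ** n - phi ** n)) < 1e-9]
print(pairs)  # only odd n qualify; d runs over the Lucas numbers 1, 4, 11, 29, ...
```

The search finds solutions exactly at odd n, where d equals the Lucas number L_n (for even n, tau^n - phi^n is an irrational multiple of sqrt(5)).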
[2357] vixra:1901.0351 [pdf]
Introducing: Second-Order Permutation
In this study we answer questions that have to do with finding out the total number of ways of arranging a finite set of symbols or objects directly under a single line constraint set of finite symbols such that common symbols between the two sets do not repeat on the vertical positions. We go further to study the outcomes when both sets consist of the same symbols and when they consist of different symbols. We identify this form of permutation as 'second-order permutation' and show that it has a corresponding unique factorial which plays a prominent role in most of the results obtained. This has been discovered by examining many practical problems which give rise to the emergence of second-order permutation. Upon examination of these problems, it becomes clear that a directly higher order of permutation exist. Hence this study aims at equipping mathematics scholars, educators and researchers with the necessary background knowledge and framework for incorporating second-order permutation into the field of combinatorial mathematics.
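On one reading of the constraint described above (a fixed top row of symbols, with the second set permuted beneath it so that no column repeats a shared symbol), the counts are easy to reproduce by brute force. The function name and the small examples below are illustrative, not the paper's:

```python
from itertools import permutations

def second_order_count(top, bottom):
    """Count arrangements of `bottom` under the fixed row `top` such that
    no column holds the same symbol twice (only shared symbols can clash)."""
    return sum(all(t != b for t, b in zip(top, perm))
               for perm in permutations(bottom))

print(second_order_count("ABCD", "ABCD"))  # identical sets: the 9 derangements of 4
print(second_order_count("ABC", "XYZ"))    # disjoint sets: all 3! = 6 arrangements
print(second_order_count("ABCD", "ABXY"))  # partial overlap: 14
```

When both sets are identical this reduces to counting derangements, which matches the special case discussed in the abstract.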
[2358] vixra:1901.0341 [pdf]
Necessary and Sufficient Conditions for a Factorable Matrix to be Hyponormal
Necessary and sufficient conditions are given for a special subclass of the factorable matrices to be hyponormal operators on $\ell^2$. Three examples are then given of polynomials that generate hyponormal weighted mean operators, and one example that does not. Paired with the main result presented here, various computer software programs may then be used as an aid for classifying operators in that subclass as hyponormal or not.
[2359] vixra:1901.0322 [pdf]
The Dirac Experiment Test of the Maximal Acceleration Constant
We determine the nonlinear group of transformations between coordinate systems which are mutually in constant symmetrical uniform acceleration. The maximal acceleration limit is a constant which follows from the logical and kinematical necessity of the system motion, and it is an analogue of the maximal velocity in special relativity. The Pardy acceleration constant is not the same as the Caianiello acceleration constant in quantum mechanics or the Lambiase acceleration constant in Riemann space-time, and this situation forms a serious puzzle in physics, after the theta-tau puzzle in particle physics and the Hawking black-hole puzzle in cosmology. The author's transformations of the accelerated systems are related to the Orlov transformations. The DIRAC experiment at CERN with pionium in a strong electric field is discussed.
[2360] vixra:1901.0267 [pdf]
A Perfect Regression Problem for Algebra 2
The full potential of elementary algebra to precipitate a human quantum leap is presented. A simple regression problem demonstrates how programming can be combined with linear regression. The math and programming are simple enough for any algebra class that uses a TI-83 family calculator. The problem fully considered might enable students to see the picture and evolve to a better place.
[2361] vixra:1901.0249 [pdf]
Is There a Flaw in the Traditional FLRW Metric?
A proper derivation of the FLRW metric shows the Ricci curvature scalar to be embedded in the metric's constant k (0, -1 or +1) that largely determines the fate of the universe. But in general R is not a constant, and the consequence of this issue is briefly presented.
[2362] vixra:1901.0247 [pdf]
Nuclear Decay Rate Oscillations and a Gravity-Quantum Connection
This paper construct a model to evaluate the hypothesis that the in- compatibility between General Relativity and Quantum Mechanics is due to the different space-time geometries upon which each respective theory was built. The model is then applied to an unexplained phenomena observed in the decay rate of unstable nuclei, whose decay rate has superimposed on them oscillations that match the yearly cycle of the Earth’s orbit. The gravity-quantum connection model will show that gravitation is likely the ”unknown field” responsible for the nu- clear decay oscillations.
[2363] vixra:1901.0246 [pdf]
Construction of Multivector Inverse for Clifford Algebras Over 2m+1-Dimensional Vector Spaces from Multivector Inverse for Clifford Algebras Over 2m-Dimensional Vector Spaces
Assuming known algebraic expressions for multivector inverses in any Clifford algebra over an even-dimensional vector space R^{p',q'}, n' = p'+q' = 2m, we derive a closed algebraic expression for the multivector inverse over vector spaces one dimension higher, namely over R^{p,q}, n = p+q = p'+q'+1 = 2m+1. Explicit examples are provided for dimensions n' = 2,4,6, and the resulting inverses for n = n'+1 = 3,5,7. The general result for n = 7 appears to be the first ever reported closed algebraic expression for a multivector inverse in Clifford algebras Cl(p,q), n = p+q = 7, only involving a single addition of multivector products in forming the determinant.
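The paper's closed-form expressions are not reproduced in the abstract. As a minimal illustration of the even-dimensional starting point n' = 2, this sketch implements the standard multivector inverse in Cl(2,0), M^{-1} = conj(M)/(M conj(M)); the representation of multivectors as (scalar, e1, e2, e12) tuples is my own convention, not the paper's:

```python
def cl2_mul(u, v):
    """Geometric product in Cl(2,0); multivectors as (1, e1, e2, e12) tuples,
    with e1^2 = e2^2 = 1 and e12 = e1 e2 (so e12^2 = -1)."""
    a0, a1, a2, a3 = u
    b0, b1, b2, b3 = v
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a0*b1 + a1*b0 - a2*b3 + a3*b2,
            a0*b2 + a2*b0 + a1*b3 - a3*b1,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

def cl2_inv(u):
    """M^{-1} = conj(M) / (M conj(M)): the Clifford conjugate negates
    grades 1 and 2, and M conj(M) = a0^2 - a1^2 - a2^2 + a3^2 is a scalar.
    Raises ZeroDivisionError for the non-invertible (null) multivectors."""
    a0, a1, a2, a3 = u
    conj = (a0, -a1, -a2, -a3)
    norm = a0*a0 - a1*a1 - a2*a2 + a3*a3
    return tuple(c / norm for c in conj)

M = (1.0, 2.0, 3.0, 4.0)
print(cl2_mul(M, cl2_inv(M)))  # (1.0, 0.0, 0.0, 0.0)
```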
[2364] vixra:1901.0228 [pdf]
Fixing Dirac Theory's Relativity and Correspondence Errors
Dirac sought a relativistic quantum free-particle Hamiltonian that imposes space-time symmetry on the Schroedinger equation in configuration representation; he ignored the Lorentz covariance of energy-momentum. Dirac free-particle velocity therefore is momentum-independent, breaching relativity basics. Dirac also made solutions of his equation satisfy the Klein-Gordon equation via requirements imposed on its operators. Dirac particle speed is thereby fixed to the unphysical value of c times the square root of three, and anticommutation requirements prevent four observables, including the components of velocity, from commuting when Planck's constant vanishes, a correspondence-principle breach responsible for Dirac free-particle spontaneous acceleration (zitterbewegung) that diverges in the classical limit. Nonrelativistic Pauli theory contrariwise is physically sensible, and its particle rest-frame action can be extended to become Lorentz invariant. The consequent Lagrangian yields the corresponding closed-form relativistic Hamiltonian when magnetic field is absent, otherwise a successive-approximation regime applies.
[2365] vixra:1901.0218 [pdf]
Thermal Relativity, Corrections to Black-Hole Entropy, Born's Reciprocal Relativity Theory and Quantum Gravity
Starting with a brief description of Born's Reciprocal Relativity Theory (BRRT), based on a maximal proper-force, maximal speed of light, inertial and non-inertial observers, we derive the $exact$ Thermal Relativistic $corrections$ to the Schwarzschild, Reissner-Nordstrom, Kerr-Newman black hole entropies, and provide a detailed analysis of the many $novel$ applications and consequences to the physics of black holes, quantum gravity, minimal area, minimal mass, Yang-Mills mass gap, information paradox, arrow of time, dark matter, and dark energy. We finalize by outlining our proposal towards a Space-Time-Matter Unification program where matter can be converted into spacetime quanta, and vice versa.
[2366] vixra:1901.0176 [pdf]
What is the Magnetic Moment of Electron Spin?
According to the unified theory of dynamic space, the inductive-inertial phenomenon has been developed, forming the grouping units (namely electric charges, or forms of the electric field). Moreover, with the surface electric charges of the electron cortex, its inverse electric fields are formed. By the above phenomena, the actual theoretical value of the magnetic dipole moment of electron spin is shown to equal the experimental measurement.
[2367] vixra:1901.0167 [pdf]
Topological Skyrme Model with Wess-Zumino Anomaly Term Has Colour Dependence in Quark Charges and Indicates Incompleteness of the Pure Skyrme Model
The topological Skyrme model has been actively studied in recent times (e.g. see Manton and Sutcliffe, Topological Solitons, Cambridge University Press, 2004) to understand the structure of the nucleons and the nucleus. Here, through a consistent study of the electric charge, it is shown that the Skyrme lagrangian by itself gives the charges Qp = 1/2 and Qn = -1/2, shockingly missing their empirical values. This devastating problem is rectified only by including an extra term (not available in Skyrme's time) arising from the Wess-Zumino anomaly. One then obtains Qu = 2/3 and Qd = -1/3, and thus the correct charges of the nucleon. It is also shown here (for the first time) that the combined Skyrme-Wess-Zumino lagrangian predicts colour-number dependence of the electric charges: Q(u) = 1/2(1 + 1/Nc); Q(d) = 1/2(-1 + 1/Nc) for an arbitrary colour number of the QCD group SU(Nc). This gives the 2/3 and -1/3 charges for Nc = 3. Thus it is not good enough just to have charge values of 2/3 and -1/3; we show that it is important to have a proper colour dependence existing within the guts of the quark charges. Though the quarks have colour built into their guts, composite protons and neutrons built up of odd numbers of colours of quarks turn out to be colour-free, with fixed charges of 1 and 0 respectively (which is good for the self-consistency of QCD+QED); while a proton and neutron built up of static (colour-independent) charges 2/3 and -1/3 develop explicit colour dependence (which is disastrous for those models).
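The colour dependence quoted above is easy to check with exact rational arithmetic. The baryon composition used here, (Nc+1)/2 up-quarks and (Nc-1)/2 down-quarks for the proton (and the reverse for the neutron), is my assumption, the standard large-Nc assignment consistent with the abstract's claim of colour-free nucleon charges:

```python
from fractions import Fraction

def quark_charges(Nc):
    """Q(u) = (1 + 1/Nc)/2 and Q(d) = (-1 + 1/Nc)/2, as quoted above."""
    return (Fraction(Nc + 1, 2 * Nc), Fraction(1 - Nc, 2 * Nc))

for Nc in (3, 5, 7):
    Qu, Qd = quark_charges(Nc)
    proton = Fraction(Nc + 1, 2) * Qu + Fraction(Nc - 1, 2) * Qd
    neutron = Fraction(Nc - 1, 2) * Qu + Fraction(Nc + 1, 2) * Qd
    print(Nc, Qu, Qd, proton, neutron)  # proton 1 and neutron 0 for every odd Nc
```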
[2368] vixra:1901.0166 [pdf]
Automated Brain Disorders Diagnosis Through Deep Neural Networks
In most cases, the diagnosis of brain disorders such as epilepsy is slow and requires endless visits to doctors and EEG technicians. This project aims to automate brain-disorder diagnosis by using Artificial Intelligence and deep learning. The brain can have many disorders that are detectable by reading an electroencephalogram (EEG). Using an EEG device and collecting the electrical signals directly from the brain with a non-invasive procedure gives significant information about its health. Classifying and detecting anomalies in these signals is what doctors currently do when reading an EEG. With the right amount of data and the use of Artificial Intelligence, it could be possible to learn to classify these signals into groups (e.g., anxiety, epileptic spikes, etc.), and then to train a neural network to interpret those signals and identify evidence of a disorder, finally automating the detection and classification of the disorders found.
[2369] vixra:1901.0161 [pdf]
Reflection of Light From Moving Mirror
The application of reflection symmetry to two inertial reference frames shows that the elapsed time is conserved in all inertial reference frames. The conservation of the elapsed time indicates that the reflection of light between a pair of stationary mirrors should take the same elapsed time in all inertial reference frames. In one reference frame, both mirrors are stationary. In other reference frames, both mirrors are moving. The distance traveled by the light between the moving mirrors depends on the direction. The conservation law shows that the light travels at a different speed upon reflection by a moving mirror.
[2370] vixra:1901.0151 [pdf]
Algebraic Invariants of Gravity
Newton's mechanics is simple. His equivalence principle is simple, as is the inverse square law of gravitational force. A simple theory should have simple solutions to simple models. A system of n particles, given their initial speed and positions along with their masses, is such a simple model. Yet, solving for n>2 is not simple. This paper discusses what could be done to overcome that problem.
[2371] vixra:1901.0148 [pdf]
Feynman Diagrams of the QED Vacuum Polarization
The Feynman diagrams of Quantum Electrodynamics are assembled from vertices where three edges meet: an incoming fermion, an outgoing fermion and an interaction line. If all vertices are of degree 3, the graphs are 3-regular (cubic), defining the vacuum polarization diagrams. Cutting an edge -- a fermion line or an interaction line -- generates fairly cubic graphs where two vertices have degree 1. These emerge in the perturbation theory for the Green's function (self energy) and for the effective interaction (polarization). The manuscript plots these graphs for up to 8 internal vertices.
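The counting structure is easy to reproduce for tiny cases. This brute-force enumeration of labeled 3-regular (cubic) simple graphs is an illustrative sketch only; it is not the paper's method, which also covers the multigraphs that arise from distinct fermion and interaction lines:

```python
from itertools import combinations

def cubic_graphs(n):
    """All labeled simple 3-regular graphs on n vertices (brute force)."""
    all_edges = list(combinations(range(n), 2))
    found = []
    # a 3-regular graph on n vertices has exactly 3n/2 edges, so n must be even
    if (3 * n) % 2:
        return found
    for edges in combinations(all_edges, 3 * n // 2):
        deg = [0] * n
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        if all(d == 3 for d in deg):
            found.append(edges)
    return found

print(len(cubic_graphs(4)))  # 1: K4 is the only simple cubic graph on 4 vertices
print(len(cubic_graphs(6)))  # 70 labeled cubic graphs on 6 vertices
```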
[2372] vixra:1901.0120 [pdf]
Toward Unification
A universe based on a fully deterministic, Euclidean, 4-torus cellular automaton is presented using a constructive approach. Each cell contains one integer, forming bubble-like patterns that propagate at the speed of light, interact, and are constantly reissued. The collective behavior of these integers is conjectured to form patterns similar to classical and quantum physics, including the mass spectrum, quantum correlations and relativistic effects. Although essentially non-local, the model preserves the non-signaling principle. This flexible model predicts that gravity is not quantized, as well as the appearance of an arrow of time. Being a causal theory, it can potentially explain the emergence of the classical world and macroscopic observers.
[2373] vixra:1901.0117 [pdf]
A New Look Inside the Theory of the Linear Approximation: Gravity Assists and Flybys
Whitehead's description of the gravitational field of a static point-mass source is equivalent to Schwarzschild's solution of Einstein's equations. Conveniently generalized in the framework of Special Relativity, I proved that it leads to a new description of the linear approximation of General Relativity with a color gauge group symmetry. Here I introduce a new line of thought to discuss the problem of spacecraft orbiting a planet, taking into account its motion around the Sun or its proper rotation.
[2374] vixra:1901.0113 [pdf]
Geodesic Curve of a Gravitational Plane Wave Pulse and Curve of Particle
We consider a system in which a gravitational wave coming from infinity collides with a mass M. The metric of the system approaches the metric of a gravitational plane wave pulse, of a specific form, as the mass of M goes to zero. We show there is a limiting curve of M, as the mass and size of M go to zero, that is not a geodesic curve. We show that conservation of energy-momentum does not hold and that there is no solution to the Einstein field equations for this system.
[2375] vixra:1901.0108 [pdf]
Assuming ABC Conjecture is True Implies Beal Conjecture is True
In this paper, we assume that the ABC conjecture is true and give a proof that the Beal conjecture is true. We suppose that the Beal conjecture is false and arrive at a contradiction, from which we deduce that the Beal conjecture is true.
[2376] vixra:1901.0101 [pdf]
A Resolution Of The Brocard-Ramanujan Problem
We identify equivalent restatements of the Brocard-Ramanujan diophantine equation, $(n! + 1) = m^2$; and, employing the properties and implications of these equivalencies, prove that for all $n > 7$, $(n! + 1)$ cannot be a perfect square.
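The equation can be probed directly by computer; the three known solutions (the Brown numbers) occur at n = 4, 5, 7. A minimal brute-force sketch, not part of the paper, for illustration only:

```python
import math

def brocard_solutions(limit):
    """Return all n <= limit for which n! + 1 is a perfect square."""
    sols, fact = [], 1
    for n in range(1, limit + 1):
        fact *= n  # running factorial avoids recomputing n! each step
        r = math.isqrt(fact + 1)
        if r * r == fact + 1:
            sols.append(n)
    return sols

print(brocard_solutions(100))  # [4, 5, 7] -- the Brown numbers
```

The claim above is that this list never grows beyond n = 7, however large the limit.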
[2377] vixra:1901.0100 [pdf]
The Perturbation Analysis of Low-Rank Matrix Stable Recovery
In this paper, we bring forward a completely perturbed nuclear norm minimization method to tackle a formulation of completely perturbed low-rank matrix recovery. In view of the matrix version of the restricted isometry property (RIP) and the Frobenius-robust rank null space property (FRNSP), this paper extends the investigation to a completely perturbed model taking into consideration not only noise but also perturbation, derives sufficient conditions guaranteeing that low-rank matrices can be robustly and stably reconstructed under the completely perturbed scenario, and finally presents an upper bound estimate of the recovery error. The upper bound can be described by two terms, one concerning the total noise and another regarding the best $r$-approximation error. Specifically, we not only improve the condition corresponding to the RIP, but also sharpen the upper bound estimate when the results reduce to the general case. Furthermore, in the case $\mathcal{E}=0$, the conditions obtained are optimal.
[2378] vixra:1901.0083 [pdf]
A Test of the Superposition Principle in Intense Laser Beams
A test of a nonlinear effect hitherto unknown in classical electrodynamics is proposed. For the possible nonlinearity to be observed, a high-intensity standing wave with circular polarization in a resonator is required. If the effect exists, an electric voltage should be induced between the mirrors and an electric current can be measured. Motivation, quantitative expectations and the design of the experiment are discussed.
[2379] vixra:1901.0074 [pdf]
Up to SO(32) via Supersymmetry "Bootstrap"
A simple self-referencing postulate in the Chan-Paton factors of a string allows one to fix some of the freedom in the landscape of SO(32) models aspiring to be standard-model-like.
[2380] vixra:1901.0056 [pdf]
This Contagious Error Voids Bell-1964, CHSH-1969, Etc.
Elementary instance-tracking identifies a contagious error in Bell (1964). To wit, and against his own advice: in failing to match instances, Bell voids his own conclusions. The contagion extends to Aspect, Griffiths, Levanto, Motl, Peres and each of CHSH.
[2381] vixra:1901.0051 [pdf]
Commonsense Reasoning, Commonsense Knowledge, and the SP Theory of Intelligence
Commonsense reasoning (CSR) and commonsense knowledge (CSK) (together abbreviated as CSRK) are areas of study concerned with problems which are trivially easy for adults but which are challenging for artificial systems. This paper describes how the "SP System" -- meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model" -- has strengths and potential in several aspects of CSRK. A particular strength of the SP System is that it shows promise as an overarching theory for four areas of relative success with CSRK problems -- described by other authors -- which have been developed without any integrative theory. How the SP System may help to solve four other kinds of CSRK problem is described: 1) how the strength of evidence for a murder may be influenced by the level of lighting of the murder as it was witnessed; 2) how people may arrive at the commonly-accepted interpretation of phrases like ``water bird’’; 3) interpretation of the horse's head scene in ``The Godfather’’ film; and 4) how the SP System may help to resolve the reference of an ambiguous pronoun in sentences in the format of a `Winograd schema’. Also described is why a fifth CSRK problem -- modelling how a cook may crack an egg into a bowl -- is beyond the capabilities of the SP System as it is now and how those deficiencies may be overcome via planned developments of the system.
[2382] vixra:1901.0042 [pdf]
Smoke Detection: Revisit the PCA Matting Approach
This paper revisits a novel approach to smoke detection, PCA matting, which removes the effect of the background image and extracts textural features. The article considers an image as a linear blend of a smoke component and a background component. Under this assumption, the paper discusses a model and its solution using the concept of PCA.
[2383] vixra:1901.0038 [pdf]
Two Applications of Data Analysis Methods with R
In this project, we apply two data analysis methods (hierarchical classification and PCA) to study two data samples. We begin with a short presentation of the theoretical tools, then present our analysis with these two methods using the R language. The theoretical part is based mainly on the Wikistat course notes.
[2384] vixra:1901.0032 [pdf]
Extraction of the Speed of Gravity (Light) from Gravity Observations Only
We show how one can measure the speed of gravity only using gravitational phenomena. Our approach offers several ways to measure the speed of gravity (light) and checks existing assumptions about light (gravity) in new types of experiments. The speed of light is included in several well-known gravitational formulas. However, if we can measure this speed from gravitational phenomena alone, then is it the speed of light or the speed of gravity we are measuring? We think it is more than a mere coincidence that they are the same. In addition, even if it is not possible to draw strong conclusions now, our formulations support the view that there is a link between electromagnetism and gravity. This paper also shows that all major gravity phenomena can be predicted from only performing two to three light observations. There is no need for knowledge of Newton’s gravitational constant G or the mass size to complete a series of major gravity predictions.
[2385] vixra:1901.0028 [pdf]
Algorithm for Evaluating Bivariate Kolmogorov Statistics in O(N Log N) Time.
We propose an O(n log n) algorithm for the evaluation of the bivariate Kolmogorov-Smirnov statistic, where n is the number of samples. It offers a few orders of magnitude of speedup over existing implementations for inputs of n > 100k samples. The algorithm is based on static binary search trees and a sweep algorithm. We share a C++ implementation with Python bindings.
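For contrast with the paper's tree-based sweep, the quadratic baseline it speeds up can be sketched as follows. This is a simplified single-quadrant variant of my own, not the authors' implementation; the full bivariate statistic maximizes over all four quadrant orientations:

```python
def ks2d_naive(a, b):
    """Naive O((n+m)^2) bivariate KS distance over the lower-left quadrant.

    a, b are lists of (x, y) points. The paper's static BST + sweep
    replaces the inner empirical-CDF scan to reach O(n log n).
    """
    def ecdf(sample, px, py):
        # fraction of points in the quadrant {x <= px, y <= py}
        return sum(x <= px and y <= py for x, y in sample) / len(sample)

    return max(abs(ecdf(a, px, py) - ecdf(b, px, py)) for px, py in a + b)
```

Identical samples give a distance of 0; fully separated samples give 1.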
[2386] vixra:1901.0007 [pdf]
On the Prime Decomposition of Integers of the Form (Z^n-Y^n)/(z-y)
In this work, the author shows a sufficient and necessary condition for an integer of the form (z^n-y^n)/(z-y) to be divisible by some perfect mth power p^m, where p is an odd prime and m is a positive integer. A constructive method for this type of integer is explained with details and examples. Links between the main result and known ideas such as Fermat's last theorem, the Goormaghtigh conjecture and Mersenne numbers are discussed. Other related ideas, examples and applications are provided.
[2387] vixra:1901.0001 [pdf]
Time Coordinate Transformation From Reflection Symmetry
The application of symmetry to physics leads to conservation laws and conserved quantities. For inertial reference frames, the reflection symmetry generates not only conservation but also transformation. Under reflection symmetry, the elapsed time is conserved in all inertial reference frames. The displacement in space is also conserved in all inertial reference frames. From the conservation of the elapsed time and the displacement, the coordinate transformation between inertial reference frames is derived. Based on the coordinate transformation, both the time transformation and the velocity transformation are also derived. The derivation shows that all three transformations depend exclusively on the relative motion between inertial reference frames.
[2388] vixra:1812.0483 [pdf]
Beginnings of the Helicity Basis in the (S,0)+(0,S) Representations of the Lorentz Group
We write solutions of relativistic quantum equations explicitly in the helicity basis for S=1/2 and S=1. We present the analyses of relations between Dirac-like and Majorana-like field operators. Several interesting features of bradyonic and tachyonic solutions are presented.
[2389] vixra:1812.0482 [pdf]
Even FibBinary Numbers and the Golden Ratio
Previously, a determination of the relationship between the Natural numbers (N) and the n'th odd fibbinary number has been made using a relationship with the Golden ratio $\phi=(\sqrt{5}+1)/2$ and $\tau=1/\phi$. Specifically, if the n'th odd fibbinary equates to the j'th N, then j=Floor[n(\phi+1) - 1]. This note documents the completion of the relationship for the even fibbinary numbers, such that if the n'th even fibbinary equates to the j'th N, then j=Floor[n(\tau+1) + \tau].
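The stated relation can be checked numerically. The sketch below reflects my own reading of the indexing (the fibbinary sequence 0, 1, 2, 4, 5, 8, ... indexed from 0, with the n-th even fibbinary counted from 2), which may differ from the note's conventions:

```python
from math import floor, sqrt

def fibbinary(count):
    """First `count` fibbinary numbers: binary expansions with no two adjacent 1s."""
    out, k = [], 0
    while len(out) < count:
        if k & (k >> 1) == 0:  # no pair of adjacent set bits
            out.append(k)
        k += 1
    return out

phi = (sqrt(5) + 1) / 2  # golden ratio; note tau + 1 = phi
tau = 1 / phi
fib = fibbinary(2000)
evens = [x for x in fib if x % 2 == 0 and x > 0]
# claim: the n-th even fibbinary sits at index floor(n*(tau+1) + tau)
assert all(fib[floor(n * phi + tau)] == e for n, e in enumerate(evens[:300], 1))
print("verified for n up to 300")
```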
[2390] vixra:1812.0472 [pdf]
A New Representation of Spin Angular Momentum
This paper aims to present intuitive imagery of the angular momentum of electrons, which has not been attempted before. As electrons move similarly to a slinky spring, we first discuss the motions of a slinky progressing down an uneven stairway. The spin angular momentum under a magnetic field gradient is analogous to a slinky traveling down an uneven stairway inclined perpendicular to the advancing direction (i.e., every step inclined a bit to the left or right of the advancing direction). The study extends our previous work from a single virtual oscillating photon to a particle moving linearly in one direction. The entire mass energy of the electron is taken to be thermal potential energy. Particles (spinors) possessing this energy emit all of it by radiation, which is then absorbed by a paired spinor particle. This transfer of radiative energy is accomplished by a virtual photon enveloping the spinor particles. Although a spinor particle contains both the absorber and the emitter depending on its phase, it cannot exhibit both functions simultaneously; therefore, the spinor particle moves similarly to a slinky spring.
[2391] vixra:1812.0461 [pdf]
On the Nature of Dark Energy, Cosmological Constant and Dark Matter
In the present essay, we consider the origin of the dark energy, the cosmological constant and the dark matter. The dark energy is a consequence of matter-antimatter annihilation at the very beginning of the big bang. This dark energy was the origin and cause of the cosmological expansion and of the past and present creation of the whole of space. We account for the presence of the so-called "dark" matter as a consequence of highly excited hydrogen and helium Rydberg atoms in perfect equilibrium with the CMB radiation. The cosmological constant is considered an arbitrary, ad hoc anti-gravitational entity. Finally, we note that the Casimir effect is a suitable and truly efficient physical method and a reliable resource for the experimental determination of the dark energy.
[2392] vixra:1812.0443 [pdf]
Review: Generic Multi-Objective Deep Reinforcement Learning (MODRL)
In this paper, the author reviews the existing survey of MODRL published in March 2018 by Thanh Thi Nguyen, and discusses the variety of reinforcement learning approaches in the multi-objective problem setting.
[2393] vixra:1812.0437 [pdf]
Bell's Inequality Refuted Via Elementary Algebra in Spacetime
Bell’s inequality is widely regarded as a profound impediment to any intuitive understanding of physical reality. We disagree. So here, via elementary algebra - backed by experiments; and thus with certainty - we refute his famous inequality, correct his key error, resolve his locality-dilemma: all in accord with the antiBellian true-local-realism that we’ve advanced since 1989. We thus restore commonsense/intuitive ideas to physics - thereby making physical reality more intelligible - by completing the quantum mechanical account of EPR-Bohm correlations via a wholistic/Einsteinian approach in spacetime.
[2394] vixra:1812.0431 [pdf]
Motion of a Spinning Symmetric Top
We first review the symmetric top problem, then solve different possible motions numerically. We explain the rise of the symmetric top during nutation in terms of torque and angular momentum. We have encountered previously unnoticed properties of the motion and studied them. During the study, calculations gave the surprising result that the symmetric top can change its spin direction.
[2395] vixra:1812.0430 [pdf]
Better and Deeper Quantum Mechanics! Thoughts on a New Definition of Momentum That Makes Physics Simpler and More Consistent
We suggest that momentum should be redefined in order to help make physics more consistent and more logical. In this paper, we propose that there is a rest-mass momentum, a kinetic momentum, and a total momentum. This leads directly to a simpler relativistic energy–momentum relation. As we point out, it is the Compton wavelength that is the true wavelength for matter; the de Broglie wavelength is mostly a mathematical artifact. This observation leads us to a new relativistic wave equation and a new and likely better QM. Further, we show that Minkowski space-time is unnecessarily complex and that a simplified, special case of Minkowski space-time is more consistent with the quantum world. Also, we show how the Heisenberg principle breaks down at the Planck scale, which opens this area of physics up for hidden variable theories once again. Many of the mystical interpretations in modern QM are rooted in the development of an unnecessarily complex theory that drives much speculation and is therefore subject to many different and even conflicting interpretations.
[2396] vixra:1812.0423 [pdf]
The Curvature and Dimension of a Closed Surface
In this short memorandum, the curvature and dimension properties of the $2$-sphere surface of a 3-dimensional ball and the $2.x$-dimensional surface of a 3-dimensional fractal set are considered. Tessellation is used to approximate each surface, primarily because the $2.x$-dimensional surface of a 3-dimensional fractal set is otherwise non-differentiable (having no well-defined surface normals). It is found that the curvature of a closed surface {\it must} lead to fractional dimension. Notes are then given, including how this tessellation model applies to a toy Universe.
[2397] vixra:1812.0416 [pdf]
Remark on Vacuum fluctuation as the Cause of Universe Creation: or How Neutrosophic Logic and Material Point Method May Resolve Dispute on the Origin of the Universe Through re-Reading Gen. 1:1-2
Questions regarding the formation of the Universe and what was there before the existence of the Early Universe have been of great interest to mankind at all times. In recent decades, the Big Bang as described by the Lambda CDM Standard Model of Cosmology has become widely accepted by the majority of the physics and cosmology communities. Among other things, we can cite A.A. Grib & Pavlov, who pointed out some problems of heavy particle creation out of the vacuum, and also other proposals of Creatio ex nihilo theory (CET). But the philosophical problems remain, as Vaas pointed out: Did the universe have a beginning or does it exist forever, i.e. is it eternal, at least in relation to the past? This fundamental question was a main topic in ancient philosophy of nature and the Middle Ages, and still has its revival in modern physical cosmology, both in the controversy between the big bang and steady state models some decades ago and in the contemporary attempts to explain the big bang within a quantum cosmological (vacuum fluctuation) framework. In this paper we argue that Neutrosophic Logic offers a resolution to the long-standing dispute between the beginning and the eternity of the Universe. In other words, in this respect we agree with Vaas, i.e. it can be shown "how a conceptual and perhaps physical solution of the temporal aspect of Immanuel Kant's 'first antinomy of pure reason' is possible, i.e. how our universe in some respect could have both a beginning and an eternal existence. Therefore, paradoxically, there might have been a time before time or a beginning of time in time." With the help of computational simulation, we also show how a model of the early Universe with rotation can fit this new picture. Further observations are recommended.
[2398] vixra:1812.0415 [pdf]
The Friedmann-Lemaitre-Robertson-Walker Metric in de Sitter Spacetime
The FLRW metric is derived for a pure de Sitter universe, showing that the cosmological constant is proportional to the Ricci scalar. As in the standard FLRW model, the fate of such a universe depends on whether the cosmological constant is positive, negative or zero.
[2399] vixra:1812.0405 [pdf]
Three Dimensions in Motivic Quantum Gravity
Important polytope sequences, like the associahedra and permutohedra, contain one object in each dimension. The more particles labelling the leaves of a tree, the higher the dimension required to compute physical amplitudes, where we speak of an abstract categorical dimension. Yet for most purposes, we care only about low dimensional arrows, particularly associators and braiding arrows. In the emerging theory of motivic quantum gravity, the structure of three dimensions explains why we perceive three dimensions. The Leech lattice is a simple consequence of quantum mechanics. Higher dimensional data, like the e8 lattice, is encoded in three dimensions. Here we give a very elementary overview of key data from an axiomatic perspective, focusing on the permutoassociahedra of Kapranov.
[2400] vixra:1812.0381 [pdf]
Acceleration Of Radio Wave From Reflection Symmetry
The reflection symmetry is a physical property for inertial reference frames. It shows that the elapsed time in one inertial reference frame is identical to the elapsed time in another inertial reference frame. The same symmetry also leads to the conservation of the wavelength across inertial reference frames. The velocity of a wave is proportional to its frequency. Doppler effect shows that both the velocity and the frequency depend on the reference frame. The higher the detected frequency is, the faster the wave travels toward the detector. One example is the radar gun used by the traffic police. The reflected radio wave travels faster than the emitted radio wave. This results in frequency difference between two waves. This difference is used to calculate the velocity of a vehicle.
[2401] vixra:1812.0374 [pdf]
New Equations of Motion of an Electron. I. A Classical Point Particle
A new formula for the energy density of electrostatic field is derived. Based on the conservation of energy and momentum, the classical equations of motion of an electron, which is considered as a point particle, are then obtained by establishing a delay coordinate system. The resulting equations are exact but not covariant. Finally we calculate the self-energy of a free electron in quantum electrodynamics using a new cut-off procedure.
[2402] vixra:1812.0345 [pdf]
New Equations of the Resolution of The Navier-Stokes Equations
This paper represents an attempt to give a solution of the Navier-Stokes equations under the assumptions (A) of the problem as described by the Clay Mathematics Institute. After elimination of the pressure, we obtain the fundamental equations as functions of the velocity vector u and the vorticity vector $\Omega=\mathrm{curl}(u)$; we then deduce the new equations for the description of the motion of viscous incompressible fluids, derived from the Navier-Stokes equations: $$\nu \Delta \Omega -\frac{\partial \Omega}{\partial t}=0, \qquad \Delta p=-\sum^{3}_{i=1}\sum^{3}_{j=1}\frac{\partial u_i}{\partial x_j}\frac{\partial u_j}{\partial x_i}.$$ Then, we give a proof that the solutions u and p of the Navier-Stokes equations are smooth functions and that u satisfies the bounded-energy condition.
[2403] vixra:1812.0340 [pdf]
Modular Logarithms Unequal
The main idea of this article is simply the calculation of integer functions in a module. The algebra of integer modules is studied in a completely new style. By a careful construction, it is proven that two finite numbers have unequal logarithms in a corresponding module; this result is applied to solving a kind of high-degree diophantine equation.
[2404] vixra:1812.0321 [pdf]
Positivity of the Fourier Transform of the Shortest Maximal Order Convolution Mask for Cardinal B-splines
Positivity of the Fourier transform of a convolution mask can be used to define an inverse convolution and show that the spatial dependency decays exponentially. In this document, we consider, for an arbitrary order, the shortest possible convolution mask which transforms samples of a function to Cardinal B-spline coefficients and show that it is unique and has indeed a positive Fourier transform. We also describe how the convolution mask can be computed including some code.
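As a concrete low-order instance (my own illustration, not taken from the document), the cubic B-spline case uses the well-known mask [1/6, 4/6, 1/6], obtained from the integer samples of the cubic B-spline; its Fourier transform (4 + 2cos w)/6 is strictly positive, so the samples-to-coefficients map is invertible:

```python
import math

def mask_ft(mask, w):
    """Fourier transform of a symmetric, odd-length real convolution mask at frequency w."""
    h = len(mask) // 2
    return sum(c * math.cos((k - h) * w) for k, c in enumerate(mask))

# shortest mask mapping samples of a cubic spline to its B-spline coefficients
cubic = [1 / 6, 4 / 6, 1 / 6]
vals = [mask_ft(cubic, k * math.pi / 100) for k in range(101)]
print(min(vals))  # minimum is 1/3 at w = pi: strictly positive on [0, pi]
```

Positivity across the whole frequency band is what allows the inverse filter to exist and its spatial weights to decay exponentially, as the abstract states.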
[2405] vixra:1812.0313 [pdf]
The Alternative Schrödinger's Equation
According to the unified theory of dynamic space, the inductive-inertial phenomenon and its forces have been developed. These forces act on the electric units of the dynamic space, forming the grouping units (namely electric charges, or forms of the electric field). So, from this inductive phenomenon and the phenomenon of motion, the wave function is calculated. This wave function, which essentially interprets the phenomena of motion waves, replaces the Schrödinger equation.
[2406] vixra:1812.0312 [pdf]
Another Proof for Catalan's Conjecture
In 2002 Preda Mihailescu used the theory of cyclotomic fields and Galois modules to prove Catalan's Conjecture. In this short paper, we give a very simple proof. We first prove that no solutions exist for a^x-b^y=1 for a,b>0 and x,y>2. Then we prove that when x=2 the only solution for a is a=3 and the only solution for y is y=3.
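The statement that $3^2 - 2^3 = 1$ is the only solution can be checked by brute force over small bases and exponents. A sketch of my own, illustrating the claim rather than the paper's argument:

```python
def consecutive_powers(max_base, max_exp):
    """Find pairs of perfect powers (a^x, b^y) with a^x - b^y = 1,
    for bases in [2, max_base) and exponents in [2, max_exp)."""
    powers = {a ** x for a in range(2, max_base) for x in range(2, max_exp)}
    return sorted((p + 1, p) for p in powers if p + 1 in powers)

print(consecutive_powers(100, 10))  # [(9, 8)] -- i.e. 3^2 - 2^3 = 1
```

Mihailescu's theorem says this list stays [(9, 8)] no matter how far the search is extended.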
[2407] vixra:1812.0306 [pdf]
Power Law and Dimension of the Maximum Value for Belief Distribution with the Max Deng Entropy
Deng entropy is a novel and efficient uncertainty measure for dealing with imprecise phenomena, and an extension of Shannon entropy. In this paper, the power law and the dimension of the maximum value for belief distributions with the max Deng entropy are presented, which partially uncovers the inherent physical meaning of Deng entropy from the perspective of statistics. This indicates that some work related to power laws or scale-free behavior can be analyzed using Deng entropy. The results of some numerical simulations are used to support the new views.
[2408] vixra:1812.0305 [pdf]
On the Distribution of Prime Numbers
In this paper an exact formula for the prime-counting function is proposed and proved, yielding an expression of Legendre's formula. As corollaries, some important conjectures regarding the distribution of prime numbers are proved.
[2409] vixra:1812.0278 [pdf]
Computer Models of Brain Tumor Metastasis
A computer model of brain tumor metastasis is developed and simulated using the language Mathematica. Diffusion of cancer cells through regions of gray and white matter is differentiated resulting in realistic asymmetric tumor growth. Applications include the precise treatment of a patient’s “future tumor” with focused radiation, and modelling the effects of chemotherapy.
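The core of such a model is diffusion with a spatially varying coefficient (higher in white matter than in gray matter, producing the asymmetric growth). The Mathematica code is not reproduced here; as an illustration only, a minimal NumPy analogue of one conservative finite-difference step might look like this:

```python
import numpy as np

def diffuse_step(c, D, dt=0.1, dx=1.0):
    """One explicit step of dc/dt = div(D grad c) with zero-flux boundaries.

    c : 2D array of tumor cell density.
    D : 2D array of diffusivity, e.g. larger in white matter than gray matter.
    Flux form: interface fluxes are added to one cell and subtracted from
    its neighbor, so total cell count is conserved exactly.
    """
    fx = 0.5 * (D[1:, :] + D[:-1, :]) * (c[1:, :] - c[:-1, :]) / dx  # x-interface fluxes
    fy = 0.5 * (D[:, 1:] + D[:, :-1]) * (c[:, 1:] - c[:, :-1]) / dx  # y-interface fluxes
    out = c.copy()
    out[:-1, :] += dt / dx * fx
    out[1:, :] -= dt / dx * fx
    out[:, :-1] += dt / dx * fy
    out[:, 1:] -= dt / dx * fy
    return out
```

Exact conservation of the total density under zero-flux boundaries is a convenient sanity check for any such scheme.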
[2410] vixra:1812.0274 [pdf]
Generalized Ising Model, Approximate Calculation Method
An approximate solution for the expected value of the direction of an arbitrary electron in the generalized Ising model (an Ising model in which the energy of the external magnetic field and the energy of the interaction between electrons take arbitrary values) was obtained. With application to information systems in mind, we calculated each electron spin state as 0 or 1 instead of -1 or 1. As a result, even as the number of spins increased, the error did not increase and stayed within 2%. If the expected value of the spin state is between 0.1 and 0.9, it can be obtained within a 2% error range regardless of whether the energy value is positive or negative. The computational cost of the approximate solution is about 10 N^2 operations. Applications such as network analysis can be expected. We also include Python source code.
[2411] vixra:1812.0250 [pdf]
Aspie96 at IronITA (EVALITA 2018): Irony Detection in Italian Tweets with Character-Level Convolutional RNN
Irony is characterized by a strong contrast between what is said and what is meant: this makes its detection an important task in sentiment analysis. In recent years, neural networks have given promising results in different areas, including irony detection. In this report, I describe the system used by the Aspie96 team in the IronITA competition (part of EVALITA 2018) for irony and sarcasm detection in Italian tweets.
[2412] vixra:1812.0248 [pdf]
Time and Relative Reflection Symmetry
The relative reflection symmetry exists for an isolated system of two stationary persons. The first person sees the second person in a distance away. The second person sees the first person in the same distance away but in the opposite direction. Such symmetry also exists for two mobile persons. Both persons see each other moving at the same speed but in opposite direction in their own rest frames. From the definition of velocity, the time in the rest frame of the first person can be compared to the time in the rest frame of the second person. The result shows that the time in the rest frame of the first person differs from the time in the rest frame of the second person by a constant. Two simultaneous events in one inertial reference frame are also simultaneous in another inertial reference frame.
[2413] vixra:1812.0240 [pdf]
Reversing Teerac
2016 is filled with what seems like a new ransomware variant every day. Whether the influx is due to the recent sale of the CryptoWall source code, or because the actors involved in Dyre have moved on to something profitable after the reported takedown, it would appear that for the time being pushing ransomware is the new hip thing in the malware world. Most of the big names in ransomware have had plenty of papers and research published, but many of the newer variants, while possibly based on leaked or sold code, will more often than not make changes in order to make themselves unique. Teerac, a variant of TorrentLocker with a subdomain-generation feature added to the hardcoded domain, is no exception: the malware matches multiple reports on TorrentLocker, with the exception of the added subdomain generation.
[2414] vixra:1812.0224 [pdf]
About the Universe
A time scale of the Universe will be shown in all details. It starts with the existence of the first information bit (0/1) and the probability of God or the Creator of the Universe. Step by step, dimensions come into being within the Universe until the 4-dimensional Universe of the General Relativity Theory (GRT) is created.
[2415] vixra:1812.0208 [pdf]
Definitive Proof of the Twin-Prime Conjecture
A twin prime is defined as a pair of prime numbers (p1,p2) such that p1 + 2 = p2. The Twin Prime Conjecture states that there are an infinite number of twin primes. A more general conjecture by de Polignac states that for every natural number k, there are infinitely many primes p such that p + 2k is also prime. The case where k = 1 is the Twin Prime Conjecture. In this document, a function is derived that corresponds to the number of twin primes less than n for large values of n. Then by proof by induction, it is shown that as n increases indefinitely, the function also increases indefinitely thus proving the Twin Prime Conjecture. Using this same methodology, the de Polignac Conjecture is also shown to be true.
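Whatever one makes of the proof claim, the counting function itself is easy to compute. A sieve-based sketch of my own, for illustration:

```python
def twin_primes_below(n):
    """List twin prime pairs (p, p + 2) with p + 2 < n, via the sieve of Eratosthenes."""
    is_prime = [False, False] + [True] * (n - 2)
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n, i):
                is_prime[j] = False
    return [(p, p + 2) for p in range(2, n - 2) if is_prime[p] and is_prime[p + 2]]

print(twin_primes_below(50))  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]
```

The Twin Prime Conjecture asserts that this list grows without bound as n increases; the de Polignac generalization replaces the gap 2 with 2k.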
[2416] vixra:1812.0206 [pdf]
Solution of a Sangaku "Tangency" Problem via Geometric Algebra
Because the shortage of worked-out examples at introductory levels is an obstacle to widespread adoption of Geometric Algebra (GA), we use GA to solve one of the beautiful sangaku problems from 19th-Century Japan. Among the GA operations that prove useful is the rotation of vectors via the unit bivector, i.
[2417] vixra:1812.0203 [pdf]
Review on Rationality Problems of Algebraic K-Tori
Rationality problems of algebraic k-tori are closely related to rationality problems of the invariant field, also known as Noether's Problem. We describe how the function field of an algebraic k-torus can be identified as an invariant field under a group action, and that a k-torus is rational if and only if its function field is rational over k. We also introduce the character group of k-tori and a numerical approach to determining the rationality of k-tori.
[2418] vixra:1812.0182 [pdf]
A Proposed Proof of The ABC Conjecture
In this paper, from a, b, c positive integers relatively prime with c=a+b, we consider a bound on c depending on a, b. Then we make a choice of K(\epsilon) and finally obtain that the ABC conjecture is true. Four numerical examples confirm our proof.
[2419] vixra:1812.0178 [pdf]
The Zeta Induction Theorem: The Simplest Equivalent to the Riemann Hypothesis?
This paper presents an uncommon variation of proof by induction. We call it deferred induction by recursion. To set up our proof, we state (but do not prove) the Zeta Induction Theorem. We then assume that theorem is true and provide an elementary proof of the Riemann Hypothesis (showing their equivalence).
[2420] vixra:1812.0161 [pdf]
An Application for Medical Decision Making with the Fuzzy Soft Sets
In the present study, for the medical decision making problem, the technique related to fuzzy soft sets proposed by Celik-Yamak through Sanchez's method was used. The real dataset called the Cleveland heart disease dataset was applied in this problem.
[2421] vixra:1812.0154 [pdf]
The Attraction of Numbers by the Syracuse Force
I study in which cases $ x \in \mathbb{N}^*$ and $1 \in \mathcal{O}_S (x)= \{ S^n(x), n \in \mathbb{N}^* \} $, where $ \mathcal{O}_S (x)$ is the orbit of the function S defined on $\mathbb{R}^+$ by $S(x)= \frac{x}{2} + (\frac{q-1}{2} x+\frac{1}{2}) \sin^2(x\frac{\pi}{2})$, $ q \in 2\mathbb{N}^*+1$. From this I deduce a proof of the Syracuse conjecture.
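On positive integers the map reduces to a Collatz-style iteration, since sin^2(x*pi/2) is 0 for even x and 1 for odd x; with q = 3 it is the accelerated Collatz map. A small numerical check of the orbit claim (illustrative only, not the paper's proof):

```python
def S(x, q=3):
    """The abstract's map restricted to positive integers:
    x/2 for even x (the sin^2 term vanishes), (q*x + 1)/2 for odd x."""
    return x // 2 if x % 2 == 0 else (q * x + 1) // 2

def reaches_one(x, max_steps=10_000):
    """Does the orbit of x under S hit 1 within max_steps iterations?"""
    for _ in range(max_steps):
        if x == 1:
            return True
        x = S(x)
    return False

print(all(reaches_one(x) for x in range(1, 10_000)))  # True over this range
```

For q = 3 this is exactly the (accelerated) Syracuse iteration, so the check above exercises the conjecture on a finite range.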
[2422] vixra:1812.0137 [pdf]
Electroweak Physics Reconstrued Using Null Cone Integrals
We present a novel formulation of particle physics that dispenses with space-time derivative operators in favour of null cone integrations. It is shown that the loss of locality incurred is compensated by gains in conceptual and mathematical simplicity, the absence of non-physical gauge degrees of freedom and the concomitant complications of ghosts, etc. Central to the formulation is a dimensionless homologue of the Lagrangian density, formed from integrals of scalar product terms over null cones. Instead of covariant derivatives, the gauge fields are represented by rotations over the simple product of the internal and Lorentz symmetry groups. We demonstrate that application of a variational principle to this quasi-action functional yields essentially the same equations of motion as the SM. As a consequence of the enlarged symmetry group, the primordial electroweak Higgs field is shown to be the origin of all bosonic degrees of freedom, not just the Goldstone modes, prior to the symmetry breaking that reduces it to an isospin-carrying scalar. Although this paper is restricted to considerations of leptons and the electroweak SU(2)_L × U(1)_Y symmetry group, the extension of the method to quarks and SU(3)_C ⊂ SO(10) would appear to be straightforward and will be the subject of a subsequent paper.
[2423] vixra:1812.0108 [pdf]
Accelerated Frames of Reference Without the Clock Hypothesis: Fundamental Disagreements with Rindler
Based on an intuitive generalization of the Lorentz transformations to non-inertial frames, this study presents new coordinates for a hyperbolically accelerated reference frame. These coordinates are equivalent to the Rindler coordinates only at small times, owing to the loss of the clock hypothesis, which is considered an excellent but fundamentally incorrect approximation for longitudinal motion. The proper acceleration of a hyperbolically accelerated particle is no longer constant, and its proper time progressively slows down, becoming constant at the speed of light. This is in agreement with the timeless nature of photons. An event horizon beyond which no information can reach the particle is still present and is identical to the Rindler horizon. More importantly, a time-dependent factor appears in the metric that could profoundly change our understanding of space-time dynamics.
[2424] vixra:1812.0107 [pdf]
A Note About the Determination of the Integer Coordinates of an Elliptic Curve: Part I
In this paper, we consider the elliptic curve $(E)$ given by the equation $y^2=x^3+px+q$ with $p,q \in \mathbb{Z}$ not both zero. We study some of the conditions satisfied by $(p,q)$ so that there exists $(x,y) \in \mathbb{Z}^2$ giving the coordinates of a point of the elliptic curve $(E)$ given by the equation above.
[2425] vixra:1812.0105 [pdf]
Modified General Relativity
A modified Einstein equation of general relativity is obtained by using the principle of least action, a decomposition of symmetric tensors on a time oriented Lorentzian manifold, and a fundamental postulate of general relativity. The decomposition introduces a new symmetric tensor $ \varPhi_{\alpha\beta} $ which describes the energy-momentum of the gravitational field itself. It completes Einstein's equation and addresses the energy localization problem. The positive part of $ \Phi $, the trace of the new tensor with respect to the metric, describes dark energy. The cosmological constant must vanish and is dynamically replaced by $ \Phi $. A cyclic universe which developed after the Big Bang is described. The dark energy density provides a natural explanation of why the vacuum energy density is so small, and why it dominates the present epoch of the universe. The negative part of $ \Phi $ describes the attractive self-gravitating energy of the gravitational field. $\varPhi_{\alpha\beta} $ introduces two additional terms into the Newtonian radial force equation: the force due to dark energy and the $\frac{1}{r}$ "dark matter" force. When the dark energy force balances the Newtonian force, the flat rotation curves and the baryonic Tully-Fisher relation are obtained. The Newtonian rotation curves for galaxies with no flat orbital curves, and those with rising rotation curves for large radii are described as examples of the flexibility of the orbital rotation curve equation.
[2426] vixra:1812.0069 [pdf]
Divergence Measure of Intuitionistic Fuzzy Sets
As a generalization of fuzzy sets, intuitionistic fuzzy sets (IFSs) have a more powerful ability to represent and deal with the uncertainty of information. The distance measure between IFSs is still an open question. In this paper, we propose a new distance measure between IFSs on the basis of the Jensen-Shannon divergence. The new distance measure of IFSs not only satisfies the axiomatic definition of a distance measure, but also discriminates the difference between IFSs better. As a result, the new distance measure can generate more reasonable results.
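The abstract does not state the exact formula, but a Jensen-Shannon based distance between IFS elements can be sketched as follows: treat the membership, non-membership, and hesitancy degrees $(\mu, \nu, \pi)$ with $\pi = 1-\mu-\nu$ as a probability triple and take the square root of the JS divergence. This is an illustrative assumption, not necessarily the paper's construction; the function names are chosen here.

```python
import math

def js_divergence(p, q):
    # Jensen-Shannon divergence (base 2) between two discrete distributions.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def ifs_distance(a, b):
    # a, b: IFS elements (mu, nu); pi = 1 - mu - nu is the hesitancy degree.
    # sqrt(JS) is symmetric, vanishes only for identical triples, and is
    # bounded in [0, 1], matching the usual axioms of a distance measure.
    pa = [a[0], a[1], 1 - a[0] - a[1]]
    pb = [b[0], b[1], 1 - b[0] - b[1]]
    return math.sqrt(js_divergence(pa, pb))
```

With this sketch, identical elements are at distance 0, and the maximally distinct elements $(1,0)$ and $(0,1)$ are at distance 1.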
[2427] vixra:1812.0024 [pdf]
Stochastic Space-Time and Quantum Theory: Part B: Granular Space-Time
A previous publication in Phys. Rev. D (Part A of this paper) pointed out that vacuum energy fluctuations imply mass fluctuations, which imply curvature fluctuations, which in turn imply fluctuations of the metric tensor. The metric fluctuations were then taken as fundamental and a stochastic space-time was theorized, from which a number of results from quantum mechanics were derived. This paper (Part B), in addressing some of the difficulties of Part A, required an extension of the model: insofar as the fluctuations are not in space-time but of space-time, a granular model was deemed necessary. For Lorentz invariance, the grains have constant 4-volume. Further, as we wish to treat time and space similarly, we propose fluctuations in time. In order that a particle not appear at different points in space at the same time, we find it necessary to introduce a new model for time, where time as we know it is emergent from an analogous coordinate, tau-time, τ, where 'τ-time leaves no tracks' (that is to say, in the sub-quantum domain, there is no 'history'). The model provides a 'meaning' of curvature as well as a (loose) derivation of the Schwarzschild metric without need for the General Relativity field equations. The purpose is to fold the seemingly incomprehensible behaviors of quantum mechanics into the (one hopes) less incomprehensible properties of space-time.
[2428] vixra:1812.0020 [pdf]
A Complete Proof of the ABC Conjecture
In this paper, assuming that the Beal conjecture is true, we give a complete proof of the ABC conjecture. We show that if the Beal conjecture is false, then the ABC conjecture is false. Taking the contrapositive of this statement, we obtain: if the ABC conjecture is true, then the Beal conjecture is true. But if the Beal conjecture is true, then we deduce that the ABC conjecture is true.
[2429] vixra:1812.0018 [pdf]
Zeros of the Riemann Zeta Function can be Found Arbitrarily Close to the Line $\Re(s) = 1$
In this paper, we not only disprove the Riemann Hypothesis (RH) but also show that zeros of the Riemann zeta function $\zeta (s)$ can be found arbitrarily close to the line $\Re (s) =1$. Our method for reaching this conclusion is based on analyzing the fine behavior of the partial sum of the Dirichlet series with the Möbius function, $M (s) = \sum_n \mu (n) /n^s$, defined over $p_r$-rough numbers (i.e., numbers whose prime factors are all greater than or equal to $p_r$). Two methods to analyze the fine behavior of the partial sum are presented and compared. The first is based on establishing a connection between the Dirichlet series with the Möbius function $M (s)$ and a functional representation of the zeta function $\zeta (s)$ in terms of its partial Euler product. Complex analysis methods (specifically, Fourier and Laplace transforms) were then used to analyze the fine behavior of the partial sum of the Dirichlet series. The second method to estimate the fine behavior of the partial sum was based on integration methods to add the different co-prime partial sum terms with prime numbers greater than or equal to $p_r$. Comparing the results of these two methods leads to a contradiction when we assume that $\zeta (s)$ has no zeros for $\Re (s) > c$ with $c <1$.
[2430] vixra:1812.0017 [pdf]
Special Relativity Leads to a Trans-Planckian Crisis that is Solved by Haug’s Maximum Velocity for Matter
In gravity theory there is a well-known trans-Planckian problem: general relativity leads to lengths shorter than the Planck length and times shorter than the Planck time in relation to so-called black holes. However, there has been little focus on the fact that special relativity also leads to a trans-Planckian problem, which we demonstrate here. According to special relativity, an object with mass must move slower than light, but special relativity places no limit on how close to the speed of light something with mass can move. This leads to a scenario where objects can undergo so much length contraction that they become shorter than the Planck length as measured from another frame, and we can also have time intervals shorter than the Planck time. The trans-Planckian problem is easily solved by a small modification that assumes Haug's maximum velocity for matter is the ultimate speed limit for anything with mass. This speed limit depends on the Planck length, which can be measured without any knowledge of Newton's gravitational constant or the Planck constant. After a long period of slow progress in theoretical physics, we are now in a Klondike "gold rush" period where many of the essential pieces are falling into place.
[2431] vixra:1811.0505 [pdf]
Compressed Monte Carlo for Distributed Bayesian Inference
Bayesian models have become very popular over the last years in several fields such as signal processing, statistics and machine learning. Bayesian inference requires the approximation of complicated integrals involving the posterior distribution. For this purpose, Monte Carlo (MC) methods, such as Markov Chain Monte Carlo (MCMC) and Importance Sampling (IS) algorithms, are often employed. In this work, we introduce the theory and practice of a Compressed MC (C-MC) scheme for compressing the information contained in a cloud of samples. C-MC is particularly useful in a distributed Bayesian inference framework, when cheap and fast communications with a central processor are required. In its basic version, C-MC is strictly related to the stratification technique, a well-known method used for variance reduction. Deterministic C-MC schemes, which provide very good performance, are also presented. The compression problem is strictly related to the moment-matching approach applied in different filtering methods, often known as Gaussian quadrature rules or sigma-point methods. Connections to herding algorithms and the quasi-Monte Carlo perspective are also discussed. Numerical results confirm the benefit of the introduced schemes, which outperform the corresponding benchmark methods.
[2432] vixra:1811.0502 [pdf]
Stochastic Space-Time and Quantum Theory: Part A
Much of quantum mechanics may be derived if one adopts a very strong form of Mach's principle, requiring that in the absence of mass, space-time becomes not flat but stochastic. This is manifested in the metric tensor, which is considered to be a collection of stochastic variables. The stochastic metric assumption is sufficient to generate the spread of the wave packet in empty space. If one further notes that all observations of dynamical variables in the laboratory frame are contravariant components of tensors, and if one assumes that a Lagrangian can be constructed, then one can derive the uncertainty principle. Finally, the superposition of stochastic metrics and the identification of the square root of minus the determinant of the metric tensor as the indicator of relative probability yields the phenomenon of interference, as will be described for the two-slit experiment.
[2433] vixra:1811.0496 [pdf]
Dieudonné-Type Theorems for Lattice Group-Valued K-Triangular Set Functions
Some versions of Dieudonné-type convergence and uniform boundedness theorems are proved for k-triangular and regular lattice group-valued set functions. We use sliding-hump techniques and direct methods. We extend earlier results proved in the real case.
[2434] vixra:1811.0495 [pdf]
The Hawking Temperature Intensive Crisis and a Possible Solution that Leads to an Intensive Schwarzschild Radius Temperature
Crothers and Robitaille have recently pointed out that the Hawking temperature and the Unruh temperature are not intensive, and that this is inconsistent with thermodynamics, which suggests that the theory around the temperature of black holes is flawed, incomplete, or at least not fully understood. Here we offer a modified Newtonian-type acceleration field linked to the Planck scale that leads to a new, modified, intensive Schwarzschild surface temperature for so-called black holes.
[2435] vixra:1811.0487 [pdf]
Dirac and Majorana Field Operators with Self/Anti-Self Charge Conjugate States
We discuss relations between Dirac and Majorana-like field operators with self/anti-self charge conjugate states. The connections with recent models of several authors have been found.
[2436] vixra:1811.0463 [pdf]
Stochastic Space-Time and Quantum Theory: Part C: Five-Dimensional Space-Time
This is a continuation of Parts A and B which describe a stochastic, granular space-time model. In this, Part C, in order to tessellate the space-time manifold, it was necessary to introduce a fifth dimension which is 'rolled up' at the Planck scale. The dimension is associated with mass and energy (in a non-trivial way). Further, it addresses other problems in the granular space-time model.
[2437] vixra:1811.0462 [pdf]
Clarification of “Overall Relativistic Energy” According to Yarman’s Approach.
In this essay, we attempt to clarify the concept of “overall relativistic energy” according to Yarman's approach, which is the underlying framework of the Yarman-Arik-Kholmetskii (YARK) gravitation theory. The reformed meaning of this key concept is shown, in juxtaposition to the general theory of relativity (GTR), to differ subtly from, in particular, the Newtonian understanding of the “total energy of a system” as simply the “sum of constituent kinetic and potential energies”.
[2438] vixra:1811.0439 [pdf]
Remark on Seven Applications of Neutrosophic Logic: in Cultural Psychology, Economics Theorizing, Conflict Resolution, Philosophy of Science, Etc.
In this short communication, we review seven applications of Neutrosophic Logic (NL) which we have explored in a number of papers: 1) Background: the purpose of this study is to review how Neutrosophic Logic can be found useful in a number of diverse areas of interest; 2) Methods: we use logical analysis based on NL; 3) Results: some fields of study may be found elevated after being analyzed by NL theory; and 4) Conclusions: we can expect NL theory to be applicable in many areas of research, in applied mathematics, economics, and also physics. Hopefully the readers will find a continuing line of thought in our research over the last few years.
[2439] vixra:1811.0417 [pdf]
Distance Measure of Pythagorean Fuzzy Sets
The Pythagorean fuzzy set (PFS), as an extension of the intuitionistic fuzzy set, is more capable of expressing and handling uncertainty under uncertain environments. However, how to measure the distance between Pythagorean fuzzy sets appropriately is still an open issue. Therefore, a novel distance measure between Pythagorean fuzzy sets is proposed in this paper based on the Jensen-Shannon divergence. The new distance measure has the following merits: i) it meets the axiomatic definition of a distance measure; ii) it better indicates the discrimination degree of PFSs. Numerical examples then demonstrate that the PFSJS distance measure is feasible and reasonable.
[2440] vixra:1811.0414 [pdf]
An Information Theoretic Formulation of Game Theory, II
This short article follows an earlier document, wherein I indicated how the foundations of game theory could be reformulated through the lens of a more information-theoretic and topological approach. Building on said work, I intend herein to generalise this to meta-games, where one game (the meta-game) is built on top of another game, and then to meta-meta-games. Finally, I indicate how one might take these ideas further, in terms of constructing frameworks to study policies, which relate to the solution of various algebraic invariants.
[2441] vixra:1811.0412 [pdf]
On the Distributional Expansions of Powered Extremes from Maxwell Distribution
In this paper, asymptotic expansions of the distributions and densities of powered extremes from Maxwell samples are considered. The results show that the convergence speed of the normalized partial maxima depends on the power index. Additionally, compared with previous results, the convergence rate of the distribution of a powered extreme from Maxwell samples is faster than that of the extreme itself. Finally, numerical analysis is conducted to illustrate our findings.
[2442] vixra:1811.0396 [pdf]
Five Different Superposition Principles With/without Test Charge, Retarded Waves/advanced Waves Applied to Electromagnetic Fields or the Photons
In electromagnetic theory and in quantum theory there is a superposition principle. Traditionally there is only one kind of superposition principle. However, in this author's theory both the retarded wave and the advanced wave are involved, so the superposition becomes multiple: for example, the author must deal with the problem of how to superpose the retarded wave and the advanced wave. This author has found that there are five different kinds of superposition, and the superposition principles differ in some respects. Studying these differences is a key to many physical difficulties, for example the particle-wave duality problem, and to judging which interpretation of quantum mechanics is correct. The first two superposition principles are the superpositions with and without test charges. Two slightly different superposition principles are the superposition with the retarded wave alone and the superposition with the advanced wave alone. According to this author's theory, the emitter sends the retarded wave and the absorber sends the advanced wave; hence a normal electromagnetic field actually consists of a retarded wave and an advanced wave. These two waves together constitute the traditional electromagnetic field. Such a field can approximately be seen as a retarded wave only, and this kind of wave also has its own superposition, which differs from the superposition obtained when we consider the retarded waves alone or the advanced waves alone. In this article the author discusses the differences between these superpositions and the different physical results that follow from them. It is proved that the superposition with a test charge and the superposition with retarded waves alone or advanced waves alone can be seen as one kind of superposition; this superposition is correct and can be derived from the mutual energy principle.
This will be referred to as the first kind of superposition. The superposition without a test charge and the superposition with both retarded and advanced waves can be seen as another kind of superposition, which is not naturally correct. This kind of superposition is applied to the N-charge Poynting theorem. In order to make it work, the self-energy principle has to be applied; without the self-energy principle this kind of superposition violates the energy conservation law. Taking self-energy conservation into account, this kind of superposition becomes the superposition of the first kind. This is referred to as the second kind of superposition. The third kind of superposition is for the traditional electromagnetic field, meaning the electromagnetic field in which the advanced waves are omitted. That the advanced waves are omitted does not mean they do not exist; it only means that the retarded and advanced waves have nearly the same intensity and can be treated together in some situations, for example in a waveguide, or in free space where the absorbers are uniformly distributed on an infinitely large sphere. Only when the self-energy principle is accepted can all kinds of superposition be accepted, although different superpositions have different physical meanings; otherwise only the superposition with a test charge, or the superposition with only one kind of wave (either retarded or advanced), can be accepted. Hence the discussion of superposition also supports the concept of the self-energy principle, which implies that there must exist time-reversal waves. This also means that the waves do not collapse but collapse back. Wave collapse means collapse to a target: for example, the retarded wave collapses to an absorber and the advanced wave collapses to an emitter.
Wave collapse-back means that the retarded wave sent from an emitter collapses back to the emitter, and the advanced wave sent from an absorber collapses back to the absorber. Hence one purpose of this article is to clarify the superposition principles, and another is to support this author's electromagnetic field theory, which starts from two new axioms: the self-energy principle and the mutual energy principle.
[2443] vixra:1811.0391 [pdf]
Intention Physics
A theory of everything must spring from a metaphysics but must necessarily rest on logic. The aim of this paper is to show that the theory of everything needs a new conceptual framework, a new geometry, a new mind, a new language. A critical examination of the conceptions of memory, movement, time and space brings to light the primitive space where relativistic physics and quantum mechanics can meet. Furthermore, it highlights the distinction between the true time, which opens in the decision between the previous moment and the successive one, and the mnemonic time, a trace spatialised in the moment, in which the evolution of emergent phenomena is reflected. Finally, recognizing space, time, electricity and gravitation as four different aspects of one sole substance, we arrive at a single unit of measure and a single equation that, devoid of singularity, unifies all the natural interactions, without disagreement with any experimental result, and throws light on the shape and origin of the universe and on the organization of matter.
[2444] vixra:1811.0390 [pdf]
Deng Entropy in Thermodynamics
Dempster-Shafer theory (D-S theory) has been widely used in many fields. Recently, a new entropy called Deng entropy was proposed in D-S theory. As an extension of Shannon entropy, it can deal with uncertainty in D-S theory. Entropy originated in physics and was later widely used in many fields. A natural question is: what is the form of Deng entropy in physics? In this paper, we propose Deng entropy in thermodynamics and, under the conditions of a given system, deduce its form. In addition, we discuss its properties. First, Deng entropy in thermodynamics is an extension of Gibbs entropy, just as Deng entropy is an extension of Shannon entropy. Similarly, Deng entropy in thermodynamics is also a measure of uncertainty: given the state distribution of particles in a system, we can describe the uncertainty of the particle states through Deng entropy in thermodynamics. Then, by proof, we find that Deng entropy in thermodynamics does not satisfy additivity. Finally, we derive the probability distribution of the system when Deng entropy in thermodynamics reaches its extreme value.
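For readers unfamiliar with Deng entropy, the standard D-S-theory definition (not restated in the abstract, so included here as background) is $E_d(m) = -\sum_A m(A)\,\log_2\!\big(m(A)/(2^{|A|}-1)\big)$ over the focal sets $A$ of a mass function $m$; a minimal sketch:

```python
import math

def deng_entropy(m):
    # Deng entropy of a basic mass assignment m: {frozenset: mass}:
    #   E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) )
    # When all mass sits on singletons, it reduces to Shannon entropy.
    return -sum(mass * math.log2(mass / (2 ** len(A) - 1))
                for A, mass in m.items() if mass > 0)

# Singletons only -> Shannon entropy:
m1 = {frozenset('a'): 0.5, frozenset('b'): 0.5}   # E_d = 1 bit
# All mass on a two-element focal set:
m2 = {frozenset('ab'): 1.0}                       # E_d = log2(3)
```

The `2^|A| - 1` factor counts the non-empty subsets of a focal set, which is what makes Deng entropy larger than Shannon entropy when mass sits on non-singleton sets.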
[2445] vixra:1811.0381 [pdf]
A Combined Poincare and Conformal Lie Algebra
The Poincare and conformal groups are contenders for the most fundamental spacetime symmetry group. An 8-dimensional rep, putting two 4-spinors together, makes a suitable platform to install matrix representations of these two fundamental groups. But some of their generators do not commute, so new generators are introduced to keep the algebra closed. The combined algebra then has 37 basis generators, a dozen more than needed for the Poincare and conformal algebras. Interestingly, with two Lorentz subalgebras, one finds two distinct definitions of spin. For the adjoint representation, one set of Lorentz generators reduces to irreducible representations, all with integer spin. The other Lorentz group reduces to both integer and `half-integer' spin irreducible representations. Also, one finds that the various representations confirm the spin rules for matrix translation generators with the spins of both Lorentz subgroups.
[2446] vixra:1811.0373 [pdf]
A Method for Detecting Lagrangian Coherent Structures (LCSs) using Fixed Wing Unmanned Aircraft System (UAS)
The transport of material through the atmosphere is an issue with wide ranging implications for fields as diverse as agriculture, aviation, and human health. Due to the unsteady nature of the atmosphere, predicting how material will be transported via Earth's wind field is challenging. Lagrangian diagnostics, such as Lagrangian coherent structures (LCSs), have been used to discover the most significant regions of material collection or dispersion. However, Lagrangian diagnostics can be time consuming to calculate and often rely on weather forecasts that may not be completely accurate. Recently, Eulerian diagnostics have been developed which can provide indications of LCS and have computational advantages over their Lagrangian counterparts. In this paper, a methodology is developed for estimating local Eulerian diagnostics from wind velocity data measured by a fixed wing unmanned aircraft system (UAS) flying in circular arcs. Using a simulation environment, it is shown that the Eulerian diagnostic estimates from UAS measurements approximate the true local Eulerian diagnostics, therefore also predicting the passage of LCSs. This methodology requires only a single flying UAS, making it easier to implement in the field than existing alternatives.
[2447] vixra:1811.0367 [pdf]
The Semi-Pascal Triangle of Maximum Deng Entropy
In D-S theory, measuring uncertainty has attracted much attention. Deng proposed the interesting Deng entropy, which can measure non-specificity and discord. Hence, exploring the physical meaning of Deng entropy is an essential issue. Based on the maximum Deng entropy and fractals, this paper discusses the relation between them.
[2448] vixra:1811.0365 [pdf]
On Making Books More Dynamic
The emergence of dynamic learning systems is inevitable. In a future not so distant from today, reading tools will have incorporated interactive elements that render the process of learning very rewarding. These systems are radically different from current reading tools in that they provide users with functionalities that act as extensions of the human brain. This note is an attempt to describe some elements of a (fictional) proto-humanity-first reading tool.
[2449] vixra:1811.0353 [pdf]
Reflection and Acceleration of Radio Waves
A radio wave changes direction upon reflection. It also changes frequency if it is reflected by a moving surface. In the standing wave formed by the incident wave and the reflected wave, the formation of the nodes requires both waves to have the same wavelength. The nodes exist in all reference frames. This requires both the incident wave and the reflected wave to have the same wavelength in all reference frames. However, these two waves have different frequencies due to the Doppler effect. Therefore, these two waves travel at different speeds. Doppler radar is a good example. With two moving surfaces reflecting the radio wave between them, the radio wave can be accelerated if the distance between the two reflective surfaces decreases with time.
[2450] vixra:1811.0352 [pdf]
Failure of Complex Systems, Cascading Disasters, and the Onset of Disease
Complex systems can fail through different routes, often progressing through a series of (rate-limiting) steps and modified by environmental exposures. The onset of disease, cancer in particular, is no different. A simple but very general mathematical framework is described for studying the failure of complex systems, or equivalently, the onset of disease. It includes the Armitage-Doll multi-stage cancer model as a particular case, and has the potential to provide new insights into how diseases arise and progress. A method described by E.T. Jaynes is developed to provide an analytical solution for the models, and highlights connections between the convolution of Laplace transforms, sums of random samples, and Schwinger/Feynman parameterisations. Examples include: exact solutions to the Armitage-Doll model, the sum of Gamma-distributed variables with integer-valued shape parameters, a clonal-growth cancer model, and a model for cascading disasters. The approach is sufficiently general to be used in many contexts, such as engineering, project management, disease progression, and disaster risk, allowing the estimation of failure rates in complex systems and projects. The intended result is a mathematical toolkit for the study of failure rates in complex systems and the onset of disease, cancer in particular.
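The multi-stage picture can be made concrete with a small Monte Carlo sketch (an illustration of the general idea, not the paper's analytical Jaynes-style method; the function name is chosen here): failure time is the sum of independent exponential waiting times for the sequential rate-limiting steps, which for equal rates is a Gamma (Erlang) variable.

```python
import random

def multistage_failure_time(rates, rng=random):
    # Time to failure of a system that must pass through a sequence of
    # independent rate-limiting steps: the sum of exponential waiting
    # times (hypoexponential; Gamma/Erlang when all rates are equal).
    return sum(rng.expovariate(lam) for lam in rates)

# Armitage-Doll-style illustration: n equal stages of rate lam give a
# Gamma(n, 1/lam) failure time with mean n/lam.
random.seed(0)
n, lam = 5, 2.0
times = [multistage_failure_time([lam] * n) for _ in range(20_000)]
mean_time = sum(times) / len(times)   # close to n/lam = 2.5
```

For small times the corresponding hazard grows like $t^{n-1}$, which is the classic multi-stage signature of cancer incidence curves.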
[2451] vixra:1811.0342 [pdf]
Quantum Theory of Dispersion of Light
We derive the index of refraction of light from the quantum theory of atoms and from the Dirac equation with a plane wave. The result is an integral part of mainstream quantum optics. The article also discusses the possibility of creating electron-positron pairs during the Cherenkov process with an adequate index of refraction.
[2452] vixra:1811.0312 [pdf]
A Model of an Electron Including Two Perfect Black Bodies
This paper modifies two significant points of existing quantum electrodynamics. First, the image of a virtual photon is replaced with a real one. To date, we have considered the virtual photon as being capable of exchanging its energy between two particles along with self-interaction, and as a transient fluctuation. We change this definition such that what we call "an electron" includes two bare electrons which interact within a real photon. The virtual photon in this study is the same as a real photon that is not observed, but it differs from the traditional virtual photon because the re-imaged virtual photon exists continuously, not temporarily. Second, it is assumed that the bare electron is a perfect black body. To meet the constraints of charge conservation, a virtual photon must include two bare electrons. There is a temperature gradient between the two because the two particles alternate between behaving as emitters and absorbers. The proposed study extends this model by considering that an electron comprises two blinking bare electrons and at least one real photon, with energy exchanged among the three. Consequently, we attempt to create an electron model that exhibits spinor behavior by setting and modifying a trigonometric function which can periodically achieve the value of the zero-point energy.
[2453] vixra:1811.0311 [pdf]
Astrophysical Applications of Yilmaz Gravity Theory
Huseyin Yilmaz proposed the form of a theory of gravitation (Yilmaz 1958, 1971) that has later been shown to present only a minor conceptual change to Einstein's General Relativity. The primary effect of the change is to modify terms of second order in the gravitational potential or its derivatives. Since most of the weak field tests that have been taken as confirmation of General Relativity are of first order, the Yilmaz theory continues to pass all of these tests, but there are some interesting effects of the higher order terms that arise in the Yilmaz theory. These corrections move the metric singularity back to the location of a point particle source. This eliminates the black hole event horizon and permits the existence of intrinsic magnetic moments for stellar mass black hole candidates and supermassive AGN. It is shown here that the same second order corrections also eliminate the need for cosmological "dark energy". Additional considerations discussed here show that the Yilmaz theory correctly encompasses all of the major observational tests that must be satisfied by an acceptable relativistic gravity theory.
[2454] vixra:1811.0279 [pdf]
Anomaly in Sign Function Probability Function Integration
In this paper it is demonstrated that integration of products of sign functions and probability density functions, such as in Bell's formula for ±1 measurement functions, leads to inconsistencies.
[2455] vixra:1811.0250 [pdf]
A Proof Minus Epsilon of the Abc Conjecture
In this paper, we give a proof minus $\epsilon$ of the $ABC$ conjecture, assuming that the Beal conjecture is true. Some conditions are proposed for the proof; perhaps it needs some justification, which is why we give the paper the title "a proof minus $\epsilon$ of the $ABC$ conjecture".
[2456] vixra:1811.0247 [pdf]
On Bell's Experiment
With the use of tropical algebra operators and a d=2 parameter vector space, Bell's theorem does not forbid a physically valid reproduction of the quantum correlation.
[2457] vixra:1811.0245 [pdf]
An Inherently Relativistic Force Carrier Mechanism for Unifying Electromagnetism and Gravity Based on Weber Electrodynamics
The report summarizes the current state of research on quantino-theory, an approach to explaining fundamental physical phenomena based on Weber electrodynamics. The core concept of quantino-theory is a force-carrier model that fulfils the postulates of special relativity without spacetime. For slow charge-carrier velocities, the field theory that follows from this model reduces not to Maxwell electrodynamics but to Weber electrodynamics. Based on the quantino mechanism, numerous physical effects can be reinterpreted and explained. For example, it is possible to show that gravity is a thermodynamic electromagnetic relativistic effect of fourth order. Furthermore, quantino-theory seems to be an interesting approach for explaining the origin of quantum mechanical effects, inertia and anti-matter. The report also discusses the perihelion precession of the planet Mercury, the mass-energy relation, photons and electromagnetic waves.
[2458] vixra:1811.0244 [pdf]
Remark on the paper of Zheng Jie Sun and Ling Zhu
In this short review note we show that the new proof of Theorem 1.1 given by Zheng Jie Sun and Ling Zhu in the paper "Simple proofs of the Cusa-Huygens-type and Becker-Stark-type inequalities" is logically incorrect, and we present another simple proof of the same theorem.
[2459] vixra:1811.0222 [pdf]
Real Numbers in the Neighborhood of Infinity
We demonstrate the existence of a broad class of real numbers which are not elements of any number field: those in the neighborhood of infinity. After considering the reals and the affinely extended reals, we prove that numbers in the neighborhood of infinity are ordinary real numbers of the type detailed in Euclid's Elements. We show that real numbers in the neighborhood of infinity obey the Archimedean property of the real numbers. The main result is an application in complex analysis: we show that the Riemann zeta function has infinitely many non-trivial zeros off the critical line in the neighborhood of infinity.
[2460] vixra:1811.0186 [pdf]
The Volkov Solution of the Dirac Equation with the Higgs Field
We determine the power radiation formula for an electron moving in the plane-wave Higgs potential from the Volkov solution of the Dirac equation. The Higgs potential here is the vector extension of the scalar Higgs potential. The Higgs boson mass is involved in the power radiation formula. The article represents a unification of particle physics and laser physics.
[2461] vixra:1811.0185 [pdf]
What is a Neutrino?
The emission of antineutrinos is interpreted via the inductive-inertial phenomenon as independent E/M formations, which are created when a neutron decays into a proton and an electron (beta decay). Specifically, at the contact limits of the neutron quarks, the acceleration of the surface charges of the neutron cortex strongly accelerates the adjacent opposite units, causing units to group outside the neutron cortex as independent E/M formations of one spindle.
[2462] vixra:1811.0173 [pdf]
Calculation of the Atomic Masses
According to the generally accepted physical theory, the synthesis of the elements may happen at very high temperature in supernova explosions. In consequence of nuclear fusion, supernova stars emit very strong electromagnetic (EM) radiation, predominantly in the form of X-rays and gamma rays. The intensive EM radiation drastically decreases the masses of the exploding stars, directly causing the mass defects of the resulting atoms. The description of black body EM radiation is based on the famous Planck radiation theory, which supposes the existence of independent quantum oscillators inside the black body. In this paper, it is supposed that in exploding supernova stars, the EM radiating oscillators can be identified with the nascent elements, which lose specific yields of their own rest masses in consequence of the radiation process. The final binding energy of the atoms (nuclei) is additionally determined by the strong neutrino radiation, which also follows the Maxwell-Boltzmann distribution at the extremely high temperature. Extending Planck's radiation law to discrete radiation energies, a very simple formula is obtained for the theoretical determination of the atomic masses. In addition, the newly introduced theoretical model gives the fusion temperature that is necessary for the generation of the atoms of the Periodic Table. Keywords: nuclear binding energy, Planck radiation law, generation of atoms, the origin of the elements, new theoretical model of nuclear synthesis, fusion temperature.
[2463] vixra:1811.0172 [pdf]
Dynamic Gravity Experiment with Accelerated Masses
The nearly 300-year success of Newtonian gravitation theory has always rested on the implicit assumption that the gravitational force is equally strong between standing and moving masses. In the 1990s, gravitational experiments were carried out in Hungary in which gravitational effects were studied between moved masses. Surprisingly, source masses moved by an outer force generated a stronger gravitational force than Newtonian gravity predicts. In addition, depending on the direction of the moved masses, gravitational repulsion was observed. Theoretical studies have shown that the newly discovered gravitational phenomenon appears when the interacting masses are moved artificially by inner and/or outer forces.
[2464] vixra:1811.0165 [pdf]
Accelerating Hubble Redshift
Understanding the “acceleration” of modern Hubble redshift measurements begins with Schrödinger. In 1939 he proved that all quantum wave functions coevolve with the curved spacetime of a closed Friedmann universe. While both photon wavelengths and atomic radii are proportional to the Friedmann radius, the wavelengths of photons that an atom emits are proportional to the square of the radius. This larger shift in atomic emissions changes the current paradigm that redshift implies expansion. Instead, redshift implies the contraction of a closed Friedmann universe. Hubble redshifts are observed only when old blueshifted photons are compared to current atomic emissions that have blueshifted even more. This theoretical prediction is confirmed by modern Hubble redshift measurements. The Pantheon redshift data set of 1048 supernovas was analyzed assuming that atoms change as Schrödinger predicted. The Hubble constant and deceleration parameter are the only variables. The fit, Ho = −72.03 ± 0.25 km s^{-1} Mpc^{-1} and 1/2 < qo < 0.501, has a standard deviation of 0.1516 compared to the average data error of 0.1418. No modifications to general relativity or to Friedmann's 1922 solution are necessary to explain accelerating Hubble redshifts. A nearly flat Friedmann universe accelerating in collapse is enough.
[2465] vixra:1811.0127 [pdf]
On the Fundamental Role of the Massless Form of Matter in Particle Physics
In the article, with the help of various models, the thesis about the fundamental nature of the field form of matter as the basis of elementary particles is considered. In the first chapter a model of special relativity is constructed, on the basis of which the priority of the massless form of matter is revealed. In the second chapter a field model of inert and heavy mass is constructed and the mechanism of inertness and gravity of weighty bodies is revealed. In the third chapter, the example of geons shows the fundamental nature of a massless form of matter on the Planck scale. The three-dimensionality of the observable space is substantiated. In the fourth chapter, we consider a variant of solving the problem of singularities in general relativity using the example of multidimensional spaces. The last chapter examines the author's approach to quantum gravity. The conclusions do not contradict the main thesis of the article on the fundamental nature of the massless form of matter.
[2466] vixra:1811.0112 [pdf]
A New Proof of the Strong Goldbach Conjecture
The Goldbach conjecture dates back to 1742; we refer the reader to [1]-[2] for a history of the conjecture. Christian Goldbach stated that every odd integer greater than seven can be written as the sum of at most three prime numbers. Leonhard Euler then made a stronger conjecture that every even integer greater than four can be written as the sum of two primes. Since then, no one has been able to prove the Strong Goldbach Conjecture. The best known result so far is that of Chen [3], who proved that every sufficiently large even integer N can be written as the sum of a prime number and the product of at most two prime numbers. Additionally, the conjecture had been verified to be true for all even integers up to $4\times 10^{18}$ as of 2014 (J\"erg [4] and Tom\'as [5]). In this paper, we prove that the conjecture is true for all even integers greater than 8.
[2467] vixra:1811.0100 [pdf]
Time-Resolved Imaging of Model Astrophysical Jets
An approximate, time-delayed imaging algorithm is implemented within existing line-of-sight code. The resulting program acts on hydrocode output data, producing synthetic images depicting what a model relativistic astrophysical jet looks like to a stationary observer. As part of a suite of imaging and simulation tools, the software is able to visualize a variety of dynamical astrophysical phenomena. A number of tests are performed, in order to confirm code integrity and to present features of the software. The above demonstrate the potential of the computer program to help interpret astrophysical jet observations.
[2468] vixra:1811.0097 [pdf]
Black Hole Mass Decreasing, The Power and The Time of Two Black Holes in Coalescence, in The Quintessence Field
In this paper, we investigate some consequences of the stabilization of the Schwarzschild black hole in the presence of a quintessence type of dark energy, which opens the way to black hole mass decrease other than through the Hawking radiation process. The results show that in the quintessence field the black hole undergoes a second-order phase transition, implying the existence of a stable phase. However, this stabilization has some paradoxical effects on the black hole, which give us a new perspective on black holes: precisely, we obtain a negative absolute temperature, and we propose a process permitting us to appreciate the likely cause of this phenomenon. These results allow us to give a new definition of the surface gravity of the Schwarzschild black hole in the quintessence field, which depends on the flux of dilatation produced by the quintessence type of dark energy. Afterward, we analyze the impact of dark energy on the power and the time needed for two black holes to coalesce. Keywords: quintessence, black hole, second-order phase transition, negative absolute temperature, gravitational waves
[2469] vixra:1811.0091 [pdf]
C x H x O-valued Gravity, [SU(4)]$^4$ Unification, Hermitian Matrix Geometry and Nonsymmetric Kaluza-Klein Theory
We review briefly how {\bf R} $\otimes$ {\bf C} $\otimes$ {\bf H} $\otimes$ {\bf O}-valued Gravity (real-complex-quaterno-octonionic Gravity) naturally can describe a grand unified field theory of Einstein's gravity with a Yang-Mills theory containing the Standard Model group $SU(3) \times SU(2) \times U(1)$. In particular, the $ C \otimes H \otimes O$ algebra is explored deeper. It is found that it can furnish the gauge group {\bf [SU(4)]}$^4$ revealing the possibility of extending the Standard Model by introducing additional gauge bosons, heavy quarks and leptons, and a $fourth$ family of fermions with profound physical implications. An analysis of $ C \otimes H \otimes O$-valued gravity reveals that it bears a connection to Nonsymmetric Kaluza-Klein theories and complex Hermitian Matrix Geometry. The key behind these connections is in finding the relation between $ C \otimes H \otimes O$-valued metrics in $two$ $complex$ dimensions with metrics in $higher$ dimensional $real$ manifolds ($ D = 32 $ real dimensions in particular). It is desirable to extend these results to hypercomplex, quaternionic manifolds and Exceptional Jordan Matrix Models.
[2470] vixra:1811.0081 [pdf]
Saving Physicalism/materialism: the Chalmers Test
Not a full-blown article, just a quick sketch on an alternative take on the philosophical zombie problem that neglects vague notions of "conceivability" or "metaphysical possibility" in favor of tentatively more rigorous language. The implication seems to be that some form of monism is logically necessary.
[2471] vixra:1811.0079 [pdf]
Quantum Ontology Suggested by a Kochen-Specker Loophole
We discuss a specific way in which the conclusions of the Kochen-Specker theorem may be avoided while, at the same time, closing the gap in a practical but usually neglected matter regarding scientific methodology in general. Implications of the possibilities of hidden variables thus defined are discussed, and a tentative connexion with cosmology is delineated.
[2472] vixra:1811.0073 [pdf]
Variable Polytropic Gas Cosmology
We mainly study a cosmological scenario defined by the variable Polytropic gas (VPG) unified energy density proposal. To reach this aim, we start by reconstructing a generalized form of the original Polytropic gas (OPG) definition. Later, we fit the auxiliary parameters given in the model and discuss essential cosmological features of the VPG proposal. Besides, we compare the VPG with the OPG, focusing on recent observational datasets given in the literature, including the Planck 2018 results. We see that the VPG model yields better results than the OPG description and fits very well with the recent experimental data. Moreover, we discuss some thermodynamical features of the VPG and conclude that the model describes a thermodynamically stable system.
[2473] vixra:1811.0058 [pdf]
Gravitational Angels
Based on the quantum modification of general relativity (Qmoger), the gravitational angel (gravitangel) is introduced as a cloud of background gravitons hovering over ordinary matter (OM). According to Qmoger, the background gravitons are ultralight and form a quantum condensate even at high temperature. The quantum entanglement of OM particles is explained in terms of splitting gravitangels. A hierarchy of gravitangels of different scales is considered. One of the simplest gravitangels hovers over the neutrino, which explains the neutrino oscillations. Larger-scale gravitangels hover over the neuron clusters in the brain, which explains subjective experiences (qualia). The global gravitangel (GG) is connected to all processes happening with OM in the universe. GG can be considered as a gigantic quantum supercomputer.
[2474] vixra:1811.0057 [pdf]
Spiral Galaxy Rotation Curves and Arm Formation Without Dark Matter or MOND
Usual explanations of spiral galaxy rotation curves assume circular orbits of stars. The consequences of giving up this assumption were investigated through a couple of models in an earlier communication. Here, further investigations of one of these models (the spinner model) shows that it can explain the formation of the spiral arms as well. It is also shown that the behavior of the tail of the rotation curve is related to the age of the galaxy. The spinner model conjectures the existence of a spinning hot disk around a spherical galactic core. The disk is held together by local gravity and electromagnetic scattering forces. However, it disintegrates at the edge producing fragments that form stars. Once separated from the disk, the stars experience only the centrally directed gravitational force due to the massive core and remaining disk. A numerical simulation shows that a high enough angular velocity of the disk produces hyperbolic stellar trajectories that agree with the observed rotation curves. Besides the rotation curves, the simulation generates two other observable features of spiral galaxies. First, it shows the formation of spiral arms and their nearly equal angular separations. Second, it determines that, for large radial distances, younger galaxies have rotation curves that dip downwards and older galaxies have a rising trend. The strength of this model lies in the fact that it does not require the postulation of dark matter or MOND. This model also revisits the method of estimation of star age. As the stars are formed from an already hot disk, they do not start off as cold collections of dust and gas. Hence, their ages are expected to be significantly less than what current models estimate. This explains why they have not escaped the galaxy in spite of their hyperbolic trajectories.
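The hyperbolic-trajectory claim above can be illustrated with a generic toy integration (not the paper's own simulation: a single test star around a fixed central point mass, in dimensionless units with GM = 1, and an assumed tangential launch speed above the escape value):

```python
import math

def trajectory(r0, v0, dt=1e-3, steps=20_000):
    """Leapfrog (kick-drift-kick) integration of a test star
    around a fixed central point mass, in units where GM = 1."""
    (x, y), (vx, vy) = r0, v0

    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3

    ax, ay = acc(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return (x, y), (vx, vy)

# Launch at r = 1 with tangential speed 1.6 > sqrt(2) (the escape value),
# so the total energy 0.5*v^2 - 1/r = 0.28 is positive: the orbit is unbound.
(x, y), (vx, vy) = trajectory((1.0, 0.0), (0.0, 1.6))
print(math.hypot(x, y) > 5.0)  # True: the star recedes on a hyperbolic path
```

Any speed above the escape value produces the same qualitative behavior; the spinner model's point is that a fast-spinning disk edge naturally supplies such speeds.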
[2475] vixra:1811.0044 [pdf]
A Simple Proof That Finite Quantum Theory And Finite Mathematics Are More Fundamental Than Standard Quantum Theory And Classical Mathematics, Respectively
Standard quantum theory is based on classical mathematics involving such notions as infinitely small/large and continuity. Those notions were proposed by Newton and Leibniz more than 300 years ago when people believed that every object can be divided by an arbitrarily large number of arbitrarily small parts. However, now it is obvious that when we reach the level of atoms and elementary particles then standard division loses its meaning and in nature there are no infinitely small objects and no continuity. In our previous publications we proposed a version of finite quantum theory (FQT) based on a finite ring or field with characteristic $p$. In the present paper we first define the notion when theory A is more general than theory B and theory B is a special degenerate case of theory A. Then we prove that standard quantum theory is a special degenerate case of FQT in the formal limit $p\to\infty$. Since quantum theory is the most general physics theory, this implies that classical mathematics itself is a special degenerate case of finite mathematics in the formal limit when the characteristic of the ring or field in the latter goes to infinity. In general, introducing infinity automatically implies transition to a degenerate theory because in that case all operations modulo a number are lost. So, {\it even from the pure mathematical point of view}, the very notion of infinity cannot be fundamental, and theories involving infinities can be only approximations to more general theories. Motivation and implications are discussed.
[2476] vixra:1811.0036 [pdf]
Incompatibility of the Dirac-Like Field Operators with the Majorana Anzatzen
In the present article we investigate the spin-1/2 and spin-1 cases in different bases. Next, we look for relations with the Majorana-like field operator. We show explicitly the incompatibility of the Majorana anzatzen with the Dirac-like field operators in both the original Majorana theory and its generalizations. Several explicit examples are presented for higher spins too. It seems that only the calculations in the helicity basis give mathematically and physically reasonable results.
[2477] vixra:1810.0502 [pdf]
Short Note on Unification of Field Equations and Probability
Is math in harmony with existence? Is it possible to calculate any property of existence through math? Is an exact proof of something possible without pre-acceptance of some physical properties? This work analyzes these questions as simply as possible via shortcuts, and it arrives at some compatible results. It seems that both free space and bodies moving in this space obey the same rule, as there is no alternative, and the rule is determined by mathematics.
[2478] vixra:1810.0500 [pdf]
Can Stabilization and Symmetry Breakings Give Rise to Life in the Process of the Universe Evolution?
Bio-genesis can be understood as the final process of the Universe's evolution, from Planck scale down to nuclear scale to atomic scale to molecular scale, then finally to bio-scale, with the breaking of relevant symmetries at every step. By assuming the simplest definition of life, that life is just a molecular system which can reproduce itself (auto-reproducing molecular system -- ARMS) and has such kinetic ability (kineto-molecular system -- KMS), at least for its microscopic level, as to respond actively to its surrounding environments, we tried to explain the origin of life, taking the final step of the Universe evolution. We found a few clues for the origin of life, such as: (1) As the Universe expands and gets extremely cold, bio-genesis can take place by auto-reproducing molecular system, new level of stabilization may be achievable only at `locally cold places', such as comets. (2) There must be the parity breaking in the bio-scale stabilization process, which can be violated spontaneously, or dynamically by the van der Waals forces possible only at locally cold places. (3) And the rule of bio-parity breaking is universal within the bio-horizon. So we will find, for example, only left-handed amino acids in all living beings dwelling within our galaxy. (4) The idea of bio-genesis through the bio-scale stabilization in the evolution of the Universe looks very consistent with Panspermia hypothesis, and supports it by providing a viable answer for life's origin at such locally cold places.
[2479] vixra:1810.0468 [pdf]
Inconsistencies in EM Theory - the Kelvin Polarization Force Density Contradiction
Calculations of the resultant electrostatic force on a charged spherical or cylindrical capacitor with two sectors of different dielectrics, based on the classical formulas of electrostatic pressure, Kelvin polarization force density, and the Maxwell stress tensor, predict a reactionless force that violates Newton's 3rd law. Measurements did not confirm the existence of such a reactionless thrust; thus there is an apparent inconsistency in classical EM theory that leads to wrong results.
[2480] vixra:1810.0459 [pdf]
Twin Primes
This paper gives an application of the Eratosthenes sieve to the distribution of mean distances between primes, using the first and upper orders of the Gauss integral logarithm Li(x). We define the function Υ in Section 5. Sections 1-4 give an introduction to the terminology and a clarification of the Υ terms. Section 6 summarizes the foregoing explanations and gives two theorems using first and upper integral logarithm orders.
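As a small, generic illustration of the sieve the paper starts from (plain Eratosthenes code, not the paper's Υ construction), one can count primes and twin-prime pairs up to a bound:

```python
def sieve(n):
    """Sieve of Eratosthenes: boolean table is_prime[0..n]."""
    is_prime = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross off all multiples of p, starting at p*p.
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return is_prime

def twin_pairs(n):
    """All pairs (p, p+2) with both entries prime and p+2 <= n."""
    is_prime = sieve(n)
    return [(p, p + 2) for p in range(2, n - 1)
            if is_prime[p] and is_prime[p + 2]]

print(len(twin_pairs(100)))  # 8 twin-prime pairs below 100
```

Tabulating the gaps between consecutive sieved primes is the raw material for the kind of mean-distance statistics the paper compares against Li(x).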
[2481] vixra:1810.0458 [pdf]
Methods for Derivation of Generalized Equations in the (S,0)+(0,S) Representations of the Lorentz Group
We continue the discussion of several explicit examples of generalizations in relativistic quantum mechanics. We discuss the generalized spin-1/2 equations for neutrinos and the spin-1 equations for the photon. The equations obtained by means of the Gersten-Sakurai method and those of Weinberg for spin-1 particles are mentioned. Thus, we generalize the Maxwell and Weyl equations. In particular, we found connections between the well-known solutions and the dark 4-spinors in the Ahluwalia-Grumiller elko model. They are also not eigenstates of chirality and helicity. The equations may lead to dynamics different from those accepted at present. For instance, the photon may have non-transverse components, and the neutrino may be {\it not} in the energy states and in the chirality states. Second-order equations are considered too; they have been obtained by the Ryder method.
[2482] vixra:1810.0444 [pdf]
Is Time Misconception of the Transformations Possible?
Time is a quite interesting phenomenon in physics, and it seems to be relative; but what does it mean for time to be relative? What does it mean for the speed of light to be fixed? Does a fixed light speed require observation at light speed? What if we could observe faster than light because of an increased frame number? Is time responsible for this imaginary effect, or is time itself dependent on another actual causative phenomenon? Is it possible to form a wrong conception of time and speed even if the phenomenon we advocate is actually true?
[2483] vixra:1810.0437 [pdf]
Maximum Velocity for Matter in Relation to the Schwarzschild Radius Predicts Zero Time Dilation for Quasars
This is a short note on a new way to describe Haug's newly introduced maximum velocity for matter in relation to the Schwarzschild radius. This leads to a probabilistic Schwarzschild radius for elementary particles with mass smaller than the Planck mass. In addition, our maximum velocity, when linked to the Schwarzschild radius, seems to predict that particles just at that radius cannot move. This implies that radiation from the Schwarzschild radius cannot undergo velocity time dilation. Our maximum velocity of matter therefore seems to predict no time dilation, even in high-Z quasars, as has surprisingly been observed recently.
[2484] vixra:1810.0428 [pdf]
From Cosmological Constant to Cosmological Matrix
The starting point of modern theoretical cosmology was the Einstein equations with the cosmological constant Λ, which was introduced by Einstein. The Einstein equations with a cosmological matrix are introduced here.
[2485] vixra:1810.0423 [pdf]
The Formula of Zeta Odd Number
I calculated ζ(3), ζ(5), ζ(7), ζ(9), ..., ζ(23) and exhibit formulas for them, for example for ζ(3) and for ζ(5). Ultimately, a general formula is obtained, where n and m are positive integers.
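The closed forms themselves are not reproduced in this listing, but the odd zeta values under discussion are easy to check numerically. A minimal partial-sum sketch (generic code, not from the paper; the tail of the series beyond N terms is below $N^{1-s}/(s-1)$, so 100,000 terms are ample here):

```python
def zeta(s, terms=100_000):
    """Partial sum of the Riemann zeta series, valid for s > 1."""
    return sum(1.0 / k ** s for k in range(1, terms + 1))

print(round(zeta(3), 6))  # 1.202057, Apery's constant
print(round(zeta(5), 6))  # 1.036928
```

Any candidate closed-form formula for an odd zeta value can be sanity-checked against such numerical values.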
[2486] vixra:1810.0416 [pdf]
Self-Gravitating Gaseous Spheres in 5D Framework
One suitable theoretical idea for the polytrope in Kaluza-Klein cosmology is discussed. Assuming a 5-dimensional (5D) spacetime model described by the Kaluza-Klein theory of gravity, we implement the energy density and pressure of the polytrope, a self-gravitating gaseous sphere that is still very useful as a crude approximation to more realistic stellar models. Next, we obtain the best-fit values of the auxiliary parameters given in the model according to the recent observational dataset. Finally, we study some cosmological features and the thermodynamical stability of the model.
[2487] vixra:1810.0408 [pdf]
Homotopy Analysis Method for Solving a Class of Nonlinear Mixed Volterra-Fredholm Integro-Differential Equations of Fractional Order
In this paper, we describe solution approaches based on the Homotopy Analysis Method for the following nonlinear mixed Volterra-Fredholm integro-differential equation of fractional order $$^{C}D^{\alpha }u(t)=\varphi (t)+\lambda \int_{0}^{t}\int_{0}^{T}k(x,s)F\left( u(s\right) )dxds,$$ $$u^{(i)}(0)=c_{i},i=0,...,n-1,$$ where $t\in \Omega =\left[ 0;T\right] ,\ k:\Omega \times \Omega \longrightarrow \mathbb{R},$ $\varphi :\Omega \longrightarrow \mathbb{R},$ are known functions,\ $F:C\left(\Omega, \mathbb{R}\right) \longrightarrow \mathbb{R}$ is a nonlinear function, $c_{i} (i=0,...,n-1),$ and $\lambda $ are constants, and $^{C}D^{\alpha }$ is the Caputo derivative of order $\alpha $ with $n-1<\alpha \leq n.$ In addition, some examples are used to illustrate the accuracy and validity of this approach.
[2488] vixra:1810.0403 [pdf]
The Formulation Of Thermodynamical Path Integral
In non-equilibrium thermodynamical physics, there has been almost no universal theory for representing far-from-equilibrium systems. In this work, I formulated the thermodynamical path integral from a macroscopic view, using the analogy of optimal transport and large deviations to calculate the non-equilibrium indicators quantitatively. As a result, I derived the Jarzynski equality, the fluctuation theorem, and the second law of thermodynamics as corollaries of this formula. In addition, the latter result implies a connection between non-equilibrium thermodynamics and Riemannian geometry via entropic flow.
[2489] vixra:1810.0366 [pdf]
Gravitational Catastrophe and Dark Matter
I have been working for a long time on the basic laws of physics. During this time I noticed that gravity does not work as Newtonian theory predicts: the relation between distance and gravitational force changes over distance. The attraction properties change for each point of free space and have some limits. The attraction varies between a 1/r and a 1/r^2 dependence, even for the furthest existing distance. This work aims to analyze and discuss this phenomenon.
[2490] vixra:1810.0352 [pdf]
A Resolution Of The Catt Anomaly.
The Catt anomaly, more correctly the ‘Catt Question’, has been posed by Ivor Catt since 1982. It was initially put to two Cambridge professors of electromagnetism, who gave contradictory resolutions, and neither explained why two authorities gave contradictory answers. Over the years, some attempts have been made, but none seems to have been satisfactory. As recently as 2013, two Italian professors, M. Pieraccini and S. Selleri, attempted yet another resolution, published in the journal Physics Education. Judging from the critique by Stephen J. Crothers, this attempt too may be unsatisfactory. The author here gives an answer to the Catt question that is based only on classical electromagnetic theory. Through this investigation of the Catt question, a concomitant observation has been made: electrical interactions and electric power transfer over conducting conductors are all based on instantaneous action at a distance.
[2491] vixra:1810.0349 [pdf]
A Generalized Klein Gordon Equation with a Closed System Condition for the Dirac-Current Probability Tensor
By taking spin away from particles and putting it in the metric, thus following Dirac's vision, I start my attempt to formulate an alternative math-phys language, biquaternion based and incorporating Clifford algebra. At the Pauli level of two-by-two matrix representation of biquaternion space, a dual basis is applied: a space-time basis and a spin-norm basis. The chosen space-time basis comprises what Synge called the minquats, and in the same spirit I call their spin-norm dual the pauliquats. Relativistic mechanics, electrodynamics and quantum mechanics are analyzed using this approach, with a generalized Poynting theorem as the most interesting result. Then, moving onward to the Dirac level, the M{\"o}bius doubling of the minquat/pauliquat basis allows me to formulate a generalization of the Dirac current into a Dirac probability/field tensor with a connected closed system condition. This closed system condition includes the Dirac current continuity equation as its time-like part. A generalized Klein Gordon equation that includes this Dirac current probability tensor is formulated and analyzed. The usual Dirac current based Lagrangians of relativistic quantum mechanics are generalized using this Dirac probability/field tensor. The Lorentz transformation properties of the generalized equation and Lagrangian are analyzed.
[2492] vixra:1810.0335 [pdf]
Using Cantor's Diagonal Method to Show Zeta(2) is Irrational
We look at some of the details of Cantor's Diagonal Method and argue that the swap function given does not have to exclude 9 and 0 in base 10. We also puzzle out why the convergence of the constructed number, its value, is of no concern. We next review general properties of decimals and prove the existence of an irrational number with a modified version of Cantor's diagonal method. Finally, we show, with yet another modification of the method, that Zeta(2) is irrational.
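A minimal sketch of the diagonal step under discussion (toy code over equal-length digit strings; it sidesteps the 0.999... = 1.000... representation issue that motivates the swap-function discussion):

```python
def diagonal(rows):
    """Given digit strings, build a digit string that differs from the
    k-th row at position k: digit d is swapped for (d + 1) mod 10."""
    return "".join(str((int(row[k]) + 1) % 10)
                   for k, row in enumerate(rows))

# Hypothetical finite "list of decimals" for illustration only.
rows = ["141592", "718281", "414213", "302585", "577215", "693147"]
d = diagonal(rows)
print(d)  # 225628: differs from rows[k] at index k for every k
```

The note's point is that this particular swap, (d + 1) mod 10, is one of many that work; the construction only needs the new digit to differ from the diagonal digit.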
[2493] vixra:1810.0312 [pdf]
Riemann Integration On R^n
Throughout these discussions the numbers epsilon > 0 and delta > 0 should be thought of as very small numbers. The aim of this part is to provide a working definition for the integral of a bounded function f(x) on the interval [a, b]. We will see that the real number $\int_a^b f(x)\,dx$ is really the limit of sums of areas of rectangles.
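The limit-of-rectangles idea can be sketched numerically. A minimal midpoint Riemann sum (generic code, not taken from the text), applied to f(x) = x^2 on [0, 1], where the exact value is 1/3:

```python
def riemann_sum(f, a, b, n):
    """Midpoint Riemann sum: n rectangles of width (b - a) / n,
    each sampled at the midpoint of its base."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
print(approx)  # close to 1/3; the midpoint-rule error shrinks like O(1/n^2)
```

Increasing n drives the sum toward the integral, which is exactly the limit the definition formalizes via epsilon and delta.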
[2494] vixra:1810.0308 [pdf]
Existence of Solutions for a Nonlinear Fractional Langevin Equations with Multi-Point Boundary Conditions on an Unbounded Domain
In this work, we apply fixed point theorems to study the existence and uniqueness of solutions for Langevin differential equations involving two fractional orders with multi-point boundary conditions on the half-line.
[2495] vixra:1810.0304 [pdf]
The Black Hole Binary Gravitons and Thermodynamics
The energy spectrum of the graviton emitted by the black hole binary is calculated in the first part of the chapter. Then the total quantum loss of energy is calculated in the Schwinger theory of gravity. In the next part we determine the electromagnetic shift of energy levels of H-atom electrons by calculating an electron coupling to the black hole thermal bath. The energy shift of electrons in the H-atom is determined in the framework of nonrelativistic quantum mechanics. In the last section we determine the velocity of sound in the black hole atmosphere, which is here considered as the black hole photon sea. The derivation is based on the thermodynamic theory of the black hole photon gas.
[2496] vixra:1810.0291 [pdf]
Gravity Differential And Applications Essay / The Potential of Water Cycle Management
Earth is a complex structure of different densities baked in the rotisserie of the Sun. This alters the densities of air, water, and ground. And gravity is our natural home energy. Working with the centrifugal force of the spiraling Earth, it drives the circulation of particles, gas, liquid, and matter in the spherical force field of Earth. Mimicking nature’s circulation of water and energy would benefit all life greatly. It could be easier than we think.
[2497] vixra:1810.0289 [pdf]
An Information Theoretic Formulation of Game Theory, I
Within this paper, I combine ideas from information theory, topology and game theory, to develop a framework for the determination of optimal strategies within iterated cooperative games of incomplete information.
[2498] vixra:1810.0281 [pdf]
What is the Spin of the Particles?
According to the unified theory of dynamic space, the first (universal) and the second (local) space deformations are described; these change the geometric structure of the isotropic space. These geometric deformations created the dynamic space, the Universe, and the space holes (bubbles of empty space), the early form of matter. The neutron cortex is structured around these space holes by the electrically opposite elementary units (in short: units) moving at the speed of light. Thus an electrical and geometric deformation of the neutron cortex occurs, as the third space deformation, resulting in the creation of surface electric charges (quarks), to which the particles' spin is due. Additionally, the "paradoxical" magnetic dipole moment of the neutron is interpreted.
[2499] vixra:1810.0263 [pdf]
Semistable Holomorphic Bundles Over Compact bi-Hermitian Manifolds
In this paper, by using Uhlenbeck-Yau's continuity method, we prove that the existence of an approximate $\alpha$-Hermitian-Einstein structure and $\alpha$-semi-stability on $I_{\pm}$-holomorphic bundles over compact bi-Hermitian manifolds are equivalent.
[2500] vixra:1810.0254 [pdf]
Energy Of Light And Radius of Electron
Based on the conservation law of energy, the Poynting vector describes the power per unit area in an electromagnetic wave. The time-averaged power per unit area is independent of the wavelength and the frequency of the wave; one example is an FM radio signal. In the photoelectric effect, the incident light wave transfers energy to the electron. A light wave of higher frequency takes longer to transfer more energy to the electron. The total energy absorbed by the electron is proportional to the area facing the incident light. From this area, the radius of the electron can be calculated.
[2501] vixra:1810.0199 [pdf]
A Generalized Klein Gordon Equation with a Closed System Condition for the Dirac-Current Probability/field Tensor
I begin with a short historical analysis of the problem of the electron from Lorentz to Dirac. It is my opinion that this problem has been quasi frozen in time because it has always been formulated within the paradigm of the Minkowski-Laue consensus, the relativistic version of the Maxwell-Lorentz theory. By taking spin away from particles and putting it in the metric, thus following Dirac's vision, I start my attempt to formulate an alternative math-phys language. In the created non-commutative math-phys environment, biquaternion and Clifford algebra related, I formulate an alternative for the Minkowski-Laue consensus. This math-phys environment allows me to formulate a generalization of the Dirac current into a Dirac probability/field tensor with connected closed system condition. This closed system condition includes the Dirac current continuity equation as its time-like part. A generalized Klein Gordon equation that includes this Dirac current probability tensor is formulated and analyzed. The Standard Model's Dirac current based Lagrangians are generalized using this Dirac probability/field tensor. The Lorentz invariance or covariance of the generalized equations and Lagrangians is proven. It is indicated that the Dirac probability/field tensor and its closed system condition closes the gap with General Relativity quite a bit.
[2502] vixra:1810.0175 [pdf]
New Abelian Groups for Primes of Type 4K-1 and 4K+1.
p is prime. The article describes new Abelian groups of type p = 4k+1 and p = 4k-1, for which a theorem similar to Fermat's little theorem applies. The multiplicative group (Z/pZ)* is in some sense similar to the Abelian group of type p = 4k+1. The Abelian group of type p = 4k-1 has a different structure compared to the group (Z/pZ)*. This fact is used for the primality test of an integer N = 4k-1. The primality test was verified up to N = 2^(64).
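For comparison, the classical Fermat test over (Z/pZ)*, which the abstract uses as its point of reference, can be sketched as follows (a minimal illustration of the standard test, not the paper's group construction):

```python
def fermat_probable_prime(n: int, bases=(2, 3, 5, 7)) -> bool:
    """Classical Fermat test over the multiplicative group (Z/nZ)*:
    for prime n, a^(n-1) ≡ 1 (mod n) for every base a coprime to n."""
    if n < 2:
        return False
    for a in bases:
        a %= n
        if a == 0:
            continue  # base shares a factor with n; skip it
        if pow(a, n - 1, n) != 1:
            return False  # witness found: n is definitely composite
    return True  # n is prime or a Fermat pseudoprime to all bases
```

Primes of both residue classes pass (e.g. 13 = 4·3+1 and 19 = 4·5-1), while most composites fail; the paper's contribution is to replace (Z/pZ)* with group structures tailored to each residue class.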
[2503] vixra:1810.0171 [pdf]
The Seiberg-Witten Equations for Vector Fields
By analogy with the Seiberg-Witten equations, we propose two differential equations depending on a spinor and a vector field, instead of a connection. Good moduli spaces are expected as a consequence of commutativity.
[2504] vixra:1810.0170 [pdf]
Existence of Solutions for Langevin Differential Equations Involving Two Fractional Orders on the Half-Line
In this paper, we study the existence and uniqueness of solutions for Langevin differential equations with the Riemann-Liouville fractional derivative and boundary value conditions on the half-line. By classical fixed point theorems, several new existence results for solutions are obtained.
[2505] vixra:1810.0169 [pdf]
Existence of Solutions for Fractional Langevin Equations with Boundary Conditions on an Infinite Interval
In this paper, we investigate the existence and uniqueness of solutions for the following fractional Langevin equations with boundary conditions $$\left\{\begin{array}{l}D^{\alpha}( D^{\beta}+\lambda)u(t)=f(t,u(t)),\text{ \ \ \ }t\in(0,+\infty),\\ \\u(0)=D^{\beta}u(0)=0,\\ \\ \underset{t\rightarrow+\infty}{\lim}D^{\alpha-1}u(t)=\underset{t\rightarrow+\infty}{\lim}D^{\alpha +\beta-1}u(t)=au(\xi),\end{array}\right.$$ where $1<\alpha \leq2$ and $0<\beta \leq1,$ such that $1<\alpha +\beta \leq2,$ with $a,b\in\mathbb{R},$ $\xi \in\mathbb{R}^{+},$ and $D^{\alpha}$, $D^{\beta}$ are the Riemann-Liouville fractional derivatives. Some new results are obtained by applying standard fixed point theorems.
[2506] vixra:1810.0168 [pdf]
Existence of Solutions for a Class of Nonlinear Fractional Langevin Equations with Boundary Conditions on the Half-Line
In this work, using fixed point theorems, we investigate the existence and uniqueness of solutions for a class of fractional Langevin equations with boundary value conditions on an infinite interval.
[2507] vixra:1810.0165 [pdf]
Open Letter - The Failure Of Einstein's E=mc².
The author discovered very recently (April 2016) that the formula E = mc^2 is invalid; energy is fictitious in the formula. The proof is simple and involves no advanced mathematics. Any good high school student taking physics could easily come to a definite understanding of the analysis and decide for themselves whether the author’s claim is correct; there is no need to rely on the words of any physics professor to know whether the formula E = mc^2 is valid or invalid.
[2508] vixra:1810.0164 [pdf]
Newton's Invariant Mass Has Remained Invariant.
Contemporary mainstream physics has accepted special relativity as a fully tested and verified theory. The internet is full of references to experiments that purportedly verified special relativity. This article argues that many of these experiments are irrelevant as evidence; commonly quoted examples are the Kaufmann, Bucherer and Neumann experiments. On the contrary, there is only one lone uncorroborated experiment that shows some evidence for the validity of special relativity - the 1964 experiment of William Bertozzi of MIT; for that matter, the experiment provides only weak evidence, with 10% accuracy. If a lone experiment were sufficient as evidence in science, then the 1989 Pons & Fleischmann experiment could have won the experimenters a Nobel Prize in physics - it did not. The author proposes a simple experiment that could decide incontrovertibly between the two competing mechanics, the old Newtonian mechanics or the “newer” special relativity - by just directly measuring the velocity of electrons ejected in natural beta decay. To date, despite the simplicity of the experiment, no one has performed it.
[2509] vixra:1810.0163 [pdf]
Dishonesty In Academia To Promote Special Relativity.
A central feature of special relativity is the increase of mass with velocity, with mass going to infinity as a body approaches the speed limit of light. This feature is of the utmost importance, as special relativity has been accepted by modern physics as having clearly proven Newtonian mechanics to be fundamentally wrong; Newton’s mechanics takes mass to be an invariant property of matter. As it is expected that students would not easily accept a dismissal of Newton’s monumental work, the Principia, the physics academia tries to convince students that this central feature of mass increasing with velocity can even be verified through experiments done in the usual laboratory of a university. The fact of the matter is otherwise: even the original experiments by Kaufmann (1901) and Bucherer (1908) that attempted to show mass increasing with velocity are flawed, as the author has shown. The proposed simplified experiments are tantamount to fraud propagated on unsuspecting students who may not have the time to delve into the issues more thoroughly.
[2510] vixra:1810.0157 [pdf]
Dirichlet Problem for Hermitian-Einstein Equations Over bi-Hermitian Manifolds
In this paper, we solve the Dirichlet problem for $\alpha$-Hermitian-Einstein equations on $I_{\pm}$-holomorphic bundles over bi-Hermitian manifolds. As a corollary, we obtain an analogue result about generalized holomorphic bundles on generalized K\"{a}hler manifolds.
[2511] vixra:1810.0122 [pdf]
Low-Rank Matrix Recovery Via Regularized Nuclear Norm Minimization
In this paper, we theoretically investigate the low-rank matrix recovery problem in the context of the unconstrained regularized nuclear norm minimization (RNNM) framework. Our theoretical findings show that one can robustly recover any matrix X from its few noisy measurements b=A(X)+n with a bounded constraint ||n||_{2}<ε via the RNNM, if the linear map A satisfies the restricted isometry property (RIP) with δ_{tk}<√((t-1)/t) for certain fixed t>1. Recently, this condition with t≥4/3 was proved by Cai and Zhang (2014) to be sharp for exactly recovering any rank-k matrix via the constrained nuclear norm minimization (NNM). To the best of our knowledge, our work is the first to nontrivially extend this recovery condition for the constrained NNM to its unconstrained counterpart. Furthermore, it will be shown that a similar recovery condition also holds for regularized l_{1}-norm minimization, which is sometimes called Basis Pursuit DeNoising (BPDN).
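Regularized nuclear norm objectives of this kind are typically minimized with proximal-gradient iterations whose core step is singular value soft-thresholding. A minimal numpy sketch of that standard step (an illustration, not the paper's own algorithm):

```python
import numpy as np

def svt(X: np.ndarray, tau: float) -> np.ndarray:
    """Singular value soft-thresholding: the proximal operator of tau*||.||_*,
    the workhorse step in iterative solvers for nuclear norm regularization."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # shrink each singular value toward 0
    return (U * s_shrunk) @ Vt
```

Applied to a matrix with singular values (3, 1) and tau = 1.5, the result has singular values (1.5, 0): the threshold both denoises the measurements and reduces rank.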
[2512] vixra:1810.0121 [pdf]
RIP-Based Performance Guarantee for Low Tubal Rank Tensor Recovery
The essential task of multi-dimensional data analysis focuses on tensor decomposition and the corresponding notion of rank. However, most tensor ranks are not well defined with a tight convex relaxation. In this paper, by introducing the notion of tensor singular value decomposition (t-SVD), we establish a regularized tensor nuclear norm minimization (RTNNM) model for low tubal rank tensor recovery. In addition, the tensor nuclear norm within the unit ball of the tensor spectral norm is shown to be a convex envelope of the tensor average rank. On the other hand, many variants of the restricted isometry property (RIP) have proven to be crucial frameworks and analysis tools for the recovery of sparse vectors and low-rank tensors. We therefore define a novel tensor restricted isometry property (t-RIP) based on the t-SVD. Moreover, our theoretical results show that any third-order tensor X∈R^{n_{1}× n_{2}× n_{3}} whose tubal rank is at most r can be stably recovered from its few measurements y = M(X)+w with a bounded noise constraint ||w||_{2}≤ε via the RTNNM model, if the linear map M obeys the t-RIP with δ_{tr}^{M}<√((t-1)/(n_{3}^{2}+t-1)) for certain fixed t>1. Surprisingly, when n_{3}=1, our conditions coincide with T. Cai and A. Zhang's sharp work in 2013 for low-rank matrix recovery via the constrained nuclear norm minimization. As far as the authors are aware, no such result has previously been reported in the literature.
[2513] vixra:1810.0114 [pdf]
Dirac Theory's Breaches of Quantum Correspondence and Relativity; Nonrelativistic Pauli Theory's Unique Relativistic Extension
A single-particle Hamiltonian independent of the particle's coordinate ensures the particle conserves momentum, i.e., is free. This free-particle Hamiltonian is completely determined by Lorentz covariance of its energy-momentum and the particle's rest-energy value; such a free particle has velocity which vanishes when its momentum vanishes. Dirac required his free-particle Hamiltonian to be inhomogeneously linear in momentum, which contrariwise produces velocity that is independent of momentum; he also required his Hamiltonian's square to equal the above relativistic Hamiltonian's square, forcing many observables to anticommute and breach the quantum correspondence principle, as well as forcing the speed of any Dirac "free particle" to be c times the square root of three, which remains true when the particle interacts electromagnetically. The quantum correspondence principle breach causes a Dirac "free particle" to exhibit spontaneous acceleration that becomes unbounded in the classical limit; an artificial "spin" is also made available. Unlike the Dirac Hamiltonian, the nonrelativistic Pauli Hamiltonian is free of unphysical anomalies. Its relativistic extension is worked out via Lorentz-invariant upgrade of its associated action functional at zero particle velocity, and is obtained in closed form when there is no applied magnetic field; when there is, a successive approximation scheme must be used.
[2514] vixra:1810.0096 [pdf]
Developing a New Cryptic Communication Protocol by Quantum Tunnelling over Classic Computer Logic
I have been working for some time on the basic laws directing the universe [1,2]. It seems that the most basic and impressive principle causing any physical phenomenon is Heisenberg's Uncertainty Principle [3]: existence has any property because of the uncertainty. During this process, while thinking about the conservation of information, I noticed that information cannot be lost, but at a point it becomes completely unrecognizable to us, as there is no alternative. Any information and the information searched for become the same after a point, relative to us. The sensitivity increases forever, but so does its loss. Each sensitivity level also has a higher level; so an absolute protection actually seems possible.
[2515] vixra:1810.0046 [pdf]
Riemann Conjecture Proof
The main contribution of this paper is a proof of the Riemann hypothesis. The key idea is based on a new formulation of the problem: $$\zeta(s)=\zeta(1-s) \Leftrightarrow \mathrm{Re}(s)=\frac{1}{2}$$. This proof is considered a great discovery in mathematics.
[2516] vixra:1810.0023 [pdf]
Finite-Time Lyapunov Exponents and Lagrangian Coherent Structures in the Infinitesimal Integration Time Limit
Lagrangian diagnostics, such as the finite-time Lyapunov exponent and Lagrangian coherent structures, have become popular tools for analyzing unsteady fluid flows. These diagnostics can help illuminate regions where particles transported by a flow will converge to and diverge from, even in a divergence-free flow. Unfortunately, calculating Lagrangian diagnostics can be time consuming and computationally expensive. Recently, new Eulerian diagnostics have been developed which provide similar insights into the Lagrangian transport properties of fluid flows. These new diagnostics are faster and less expensive to compute than their Lagrangian counterparts. Because Eulerian diagnostics of Lagrangian transport structure are relatively new, there is still much about their connection to Lagrangian diagnostics that is unknown. This paper provides a mathematical bridge between Lagrangian and Eulerian diagnostics. It rigorously explores the mathematical relationship that exists between invariants of the right Cauchy-Green deformation tensor and the Rivlin-Ericksen tensors, primarily the Eulerian rate-of-strain tensor, in the infinitesimal integration time limit. Additionally, this paper develops the infinitesimal-time Lagrangian coherent structures (iLCSs) and demonstrates their efficacy in predicting the Lagrangian transport of particles even in realistic geophysical fluid flows generated by numerical models.
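The flow-map and Cauchy-Green machinery the abstract builds on can be sketched numerically. Below is a minimal finite-difference FTLE computation, checked against a linear saddle flow whose flow map is known in closed form (an illustrative sketch, not the paper's iLCS diagnostic):

```python
import numpy as np

def ftle(flow_map, x0, T, h=1e-6):
    """Finite-time Lyapunov exponent at x0 over horizon T:
    (1/(2|T|)) * log(largest eigenvalue of the right Cauchy-Green tensor)."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    J = np.zeros((n, n))
    for j in range(n):  # central differences approximate the flow-map gradient
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (flow_map(x0 + e, T) - flow_map(x0 - e, T)) / (2 * h)
    C = J.T @ J  # right Cauchy-Green deformation tensor
    return np.log(np.linalg.eigvalsh(C)[-1]) / (2 * abs(T))

# Linear saddle dx/dt = x, dy/dt = -y: flow map (x e^T, y e^{-T}), FTLE = 1.
saddle = lambda p, T: np.array([p[0] * np.exp(T), p[1] * np.exp(-T)])
```

In practice the flow map comes from integrating trajectories of the velocity field, which is the expensive step the Eulerian diagnostics discussed in the abstract aim to avoid.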
[2517] vixra:1810.0015 [pdf]
A Minimum Rindler Horizon When Accelerating?
When a particle undergoes constant acceleration, it has been suggested that it has a Rindler horizon given by c^2/a, where a is the proper acceleration. The Rindler event horizon tells us that we cannot receive information from outside the horizon during the period in which we are accelerating at this uniform rate. If we accelerate uniformly, sooner or later we will reach the speed of light, or at least get very close to it. In this paper, we look more closely at the Rindler horizon in relation to Haug's newly-suggested maximum velocity for matter and see that there is likely a minimum Rindler horizon for an accelerating particle with mass; this minimum Rindler horizon may, in fact, be the Planck length.
[2518] vixra:1810.0005 [pdf]
The Relativistic Mechanics of E=mc2 Fails
The relativistic mechanics of contemporary physics does not have a defined unit of force. Its definition of force as F=d/dt(mv/√(1-v²/c²)) does not define a real standard unit of force. A Newtonian unit of force, e.g. the SI newton, may not be used in any of the relativistic formulas; it is a real unit of force only within Newtonian mechanics, which observes Newton’s second law of motion as an axiom defining a unit of force as mass × acceleration. Without a unit of force, the application of the work-energy theorem produces a formula that evaluates only to a pure number with no association to any real unit of energy. All values of energy from relativistic mechanics are, therefore, fictitious. The implication is grave. The well-known equation E = mc² and the central identity of relativistic mechanics, E² = (pc)² + m²c⁴, are now invalidated. Quantum electrodynamics and the Standard Model of particle physics are now highly questionable. At the Large Hadron Collider (LHC) of CERN, where protons are propelled to near the speed of light, the purported energy of the relativistic protons is 6.5 TeV, but the real value is only 470 MeV - the reported energy being inflated by a factor of nearly 14,000. The Kaufmann-Bucherer-Neumann experiments were not evidence for a mass varying with speed; they showed only a contradiction between the Lorentz force law and Newton’s force law. The correct conclusion is not a failure of the invariant mass of Newtonian mechanics, but evidence of the failure of the Lorentz force law at relativistic speeds. Nature does not seem to favor any relativistic mechanics. We may have to fall back on our old Newtonian mechanics.
[2519] vixra:1810.0004 [pdf]
1908 Bucherer Experiment And The Lorentz Force Law
The Bucherer experiment of 1908 was not experimental proof of a relativistic mass varying with speed, but proof that electromagnetism and the Lorentz force law fail under relativistic speed conditions. Our conclusions come from a novel re-examination of the experiment based on three different interpretations of Newton’s second law as applied to the experiment, analyzing the implications of each of: (1) force∝dp/dt (2) force=relativistic_mass×acceleration (3) the classical f = ma. The new interpretation shows a constant charge-mass ratio for all relativistic speeds; both charge and mass would be speed invariant. New relativistic force laws had to be proposed to be consistent with the experimental findings; the Lorentz force law is now: F = q((1+v²/c²)√(1-v²/c²) E + √(1-v⁴/c⁴) (v x B)); Coulomb’s law is: F = (1+v²/c²)√(1-v²/c²) (1/4πε₀)q₁q₂R/r². Coulomb’s law has an additional scalar factor dependent on the relative velocity between the charges; for small speeds, the form is: F = (1+½v²/c²)(1/4πε₀)q₁q₂R/r². This enables the formula for the force between parallel current-carrying conductors, F_dl = μ₀/(2πR)i₁i₂dl, to be derived free of the concept of the magnetic field. A real possibility exists for the formulation of a revolutionary Newtonian electric theory free of magnetism and the Biot-Savart law. Also, the Bucherer experiment could have been an experimental verification of the relativistic Lorentz force law if the predicted speeds of the electrons had been verified through direct time-of-flight measurements.
[2520] vixra:1810.0003 [pdf]
Mass Energy Equivalence Not Experimentally Verified
The notion of mass-energy equivalence and its mathematical expression through the famous equation E = mc² predate Einstein's introduction of special relativity in 1905. It has to be noted that E = mc² has no rigorous theoretical basis; it is only a pure hypothesis not related to any physical theory. The thesis of this paper is that there is no incontrovertible experimental verification of mass-energy equivalence. The Year_Of_Physics_2005 ‘Direct Test Of E = mc²’ published in Nature in 2005 claims a verification of the equation to an accuracy of 0.00004%. The experimenters misunderstood the very nature of the experiment that they carried out. It was not a verification of E = mc², but just another experiment to deduce the mass of the neutron. To date, we have not measured the true mass of the neutron to any degree of accuracy; we only have a deduced estimate of the neutron mass based on the mass-energy equivalence of E = mc².
[2521] vixra:1809.0582 [pdf]
Refraction
Refraction is usually treated classically, which is not physically realistic. Unlike optical reflection, which is well understood, refraction is a more difficult problem, exposing a major missing piece of quantum mechanics. Refraction is normally treated either classically or as a non-relativistic perturbation response. Recently it became apparent where this property finds its quantum origin in a fully relativistic quantum description.
[2522] vixra:1809.0580 [pdf]
A New Mass Measure and a Simplification and Extension of Modern Physics
Recent experimental research has shown that mass is linked to Compton periodicity. We suggest a new way to look at mass: Namely that mass at its most fundamental level can simply be seen as reduced Compton frequency over the Planck time. In this way, surprisingly, neither the Planck constant nor Newton’s gravitational constant are needed to observe the Planck length, nor in any type of calculation, except when we want to convert back to old and less informative mass measures such as kg. The theory gives the same predictions as Einstein’s special relativity theory, with one very important exception: anything with mass must have a maximum velocity that is a function of the Planck length and the reduced Compton wavelength. For all observed subatomic particles, such as the electron, this velocity is considerably above what is achieved in particle accelerators, but always below the speed of light. This removes a series of infinity challenges in physics. The theory also offers a way to look at a new type of quantum probabilities. As we will show, a long series of equations become simplified in this way.
[2523] vixra:1809.0579 [pdf]
Electrostatic Accelerated Electrons Within Information Horizons Exert Bidirectional Propellant-Less Thrust
During internal discharge (electrical breakdown or field emission transmission), thin symmetric capacitors accelerate slightly towards the anode; an anomaly that does not appear obvious using standard physics. The effect can be predicted by core concepts of a model called quantised inertia (MiHsC) which assumes inertia of accelerated particles, such as electrons, is caused by Unruh radiation. This discrete Unruh radiation forms standing waves between the particle’s boundaries from the Rindler horizon to the confinement horizon. These waves are established based on special relativity in concert with quantum mechanics. Electrons accelerate toward the anode and are assumed to encounter an inhomogeneous Unruh radiation condition causing a force to modify their inertial mass. To conserve momentum, the overall mechanical system moves in the direction of the anode. This resulting force is assumed to be caused by an energy gradient in between the confinement and the Rindler zone and its equation is derived directly from the uncertainty principle. Discharging capacitors with various thicknesses are compared and show agreement between the experimental findings and a virtual particle oscillation associated with a standing wave energy gradient hypothesis. The preliminary results are encouraging.
[2524] vixra:1809.0575 [pdf]
Electron Structure, Ultra-dense Hydrogen and Low Energy Nuclear Reactions
In this paper, a simple Zitterbewegung electron model, proposed in a previous work, is presented from a different perspective that does not require advanced mathematical concepts. A geometric-electromagnetic interpretation of mass, relativistic mass, the De Broglie wavelength, and the Proca, Klein-Gordon and Aharonov-Bohm equations in agreement with the model is proposed. Starting from the key concept of mass-frequency equivalence, a non-relativistic interpretation of the 3.7 keV deep hydrogen level found by J. Naudts is presented. According to this perspective, ultra-dense hydrogen can be conceived as a coherent chain of bosonic electrons with protons or deuterons at the center of their Zitterbewegung orbits. The paper ends with some examples of the possible role of ultra-dense hydrogen in some aneutronic low energy nuclear reactions.
[2525] vixra:1809.0567 [pdf]
Helical Solenoid Model of the Electron
A new semiclassical model of the electron with helical solenoid geometry is presented. This new model is an extension of both the Parson Ring Model and the Hestenes Zitterbewegung Model. This model interprets the Zitterbewegung as a real motion that generates the electron’s rotation (spin) and its magnetic moment. In this new model, the g-factor appears as a consequence of the electron’s geometry while the quantum of magnetic flux and the quantum Hall resistance are obtained as model parameters. The Helical Solenoid Electron Model necessarily implies that the electron has a toroidal moment, a feature that is not predicted by Quantum Mechanics. The predicted toroidal moment can be tested experimentally to validate or discard this proposed model.
[2526] vixra:1809.0566 [pdf]
The Helicon: A New Preon Model
A new preon model is presented as an extension of the semiclassical Helical Solenoid Electron Model that was previously proposed by the author. This helicon model assumes as postulates both the Atomic Principle and the equality between matter and electric charge. These postulates lead us to a radical reinterpretation of the concepts of antimatter and dark matter and form a new framework for future preon theories.
[2527] vixra:1809.0551 [pdf]
Differentiation Under the Loop Integral: a New Method of Renormalization in Quantum Field Theory
In the conventional approach of renormalization, divergent loop integrals are regulated and combined with counterterms to satisfy a set of renormalization conditions. While successful, the process of regularization is tedious and must be applied judiciously to obtain gauge-invariant results. In this Letter, I show that by recasting the renormalization conditions as the initial conditions of momentum-space differential equations for the loop amplitudes, the need for regularization disappears because the process of differentiating under the loop integrals renders them finite. I apply this approach to successfully renormalize scalar $\phi^4$ theory and QED to one-loop order without requiring regularization or counterterms. Beyond considerable technical simplifications, the ability to perform renormalization without introducing a regulator or counterterms can lead to a more fundamental description of quantum field theory free of ultraviolet divergences.
[2528] vixra:1809.0547 [pdf]
Grangels
Based on the quantum modification of general relativity (Qmoger), gravitational angels (grangels) are introduced as areas of the background graviton condensate surrounding the interfaces between gravitons and ordinary matter. Quantum entanglement is interpreted as interaction between splitting grangels. Our subjective experiences (qualia) are described in terms of grangels surrounding neuron clusters. A hierarchy of grangels is considered, including cosmological grangels.
[2529] vixra:1809.0536 [pdf]
Supermassive Black Holes
In the following black hole model, electrons and positrons form a neutral gas which is confined by gravitation. The smaller masses are supported against gravity by electron degeneracy pressure. Larger masses are supported by ideal gas and radiation pressure. In each case, the gas is a polytrope which satisfies the Lane-Emden equation. Solutions are found that yield the physical properties of black holes, for the range 1000 to 100 billion solar masses.
[2530] vixra:1809.0528 [pdf]
Conformal Symmetry Breaking in Einstein-Cartan Gravity Coupled to the Electroweak Theory
We develop an alternative to the Higgs mechanism for spontaneously breaking the local SU(2)xU(1) gauge invariance of the Electroweak Theory by coupling to Einstein-Cartan gravity in curved spacetime. The theory exhibits a local scale invariance in the unbroken phase, while the gravitational sector does not propagate according to the conventional quantum field theory definition. We define a unitary gauge for the local SU(2) invariance which results in a complex Higgs scalar field. This approach fixes the local SU(2) gauge without directly breaking the local U(1). We show how the electroweak symmetry can be spontaneously broken by choosing a reference mass scale to fix the local scale invariance. The mass terms for the quantum fields are then generated without adding any additional symmetry breaking terms to the theory. We point out subtle differences of the quantum field interactions in the broken phase.
[2531] vixra:1809.0491 [pdf]
Nuclear Binding Energy Fails (Is Mass Spectrometry Accurate?)
Mass spectrometry measures atomic masses with a precision of 10^{-10}, but its accuracy has not been verified - precision and accuracy are two independent aspects. The Lorentz force law underlying mass spectrometry has not been verified. In the 1920's, the atomic masses of some elements measured with the early mass spectrometers showed some discrepancies from the `whole-number rule' of atomic weights. The physics community accepted the discrepancies from whole numbers to be correct; they proposed the concept of `mass defects'. This, together with the mass-energy equivalence of E = mc^2, allowed Arthur Eddington to propose a new `sub-atomic' energy to account for the source of the energy of the sun, in line with the 15-billion-year age of the sun in their theory. They never entertained the other, simpler option - that their mass spectrometers were only approximately good. If the atomic masses of nuclides were just whole numbers equal to the mass number in atomic mass units, it would be a confirmation of the law of mass conservation in the atomic and subatomic world. The key to deciding the fate of nuclear physics is in sodium fluoride, NaF. Sodium and fluorine occur in nature only as single stable isotopes. A chemical analysis of NaF with the current analytical balance to determine the relative atomic mass ratio Na/F would decide conclusively if mass spectrometry is accurate. The current relative atomic mass ratio Na/F is 22.989769/18.998403, or 1.210089; the ratio of the mass numbers Na/F is 23/19, or 1.210526. The accuracy of mass spectrometry would be confirmed if the value is 1.210089 ± 0.000012. Otherwise, if the value is 1.210526 ± 0.000012, it would mean a confirmation of the law of conservation of mass. The implications of such a scenario are beyond imagination - the whole world of nuclear physics would collapse.
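The two candidate ratios quoted in this abstract are easy to reproduce (simple arithmetic on the abstract's own numbers):

```python
# Relative atomic masses quoted in the abstract
ratio_measured = 22.989769 / 18.998403  # mass-spectrometry-based values
ratio_integer = 23 / 19                 # whole-number (mass-number) rule

# The gap between the two hypotheses is about 4.4e-4, roughly 36 times the
# stated +-0.000012 resolution, so the proposed measurement could in
# principle distinguish them.
gap = ratio_integer - ratio_measured
```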
[2532] vixra:1809.0485 [pdf]
On The Non-Real Nature of x.0 (x in R*): The Set of Null Imaginary Numbers
In this work I axiomatize the result of $x \cdot 0$ ($x\in\real_{\ne 0}$) as a number $\inull(x)$ that has a null real part (denoted $\Re(\inull(x))=0$) but that is not real. This implies that $y+\Re(\inull(x)) = y$ but $y+\inull(x) = y + x\cdot 0 \not=y$, for $y\in\real_{\ne 0}$. From this I define the set of null imaginary numbers $\nullset=\{\inull(x)=x\cdot 0 \mid x\in\real_{\ne 0}\}$ and present its elementary algebra, taking the axiom of uniqueness as a basis (i.e., $x\ne y\Leftrightarrow \inull(x)\ne \inull(y)$). Under the condition of existence of $\nullset$ I show that division by zero can be defined without causing inconsistencies in elementary algebra.
[2533] vixra:1809.0481 [pdf]
The Riemann Hypothesis
The Riemann Hypothesis is a famous unsolved problem dating from 1859. This paper will present a simple proof using a radically new approach. It is based on work of von Neumann (1936), Hirzebruch (1954) and Dirac (1928).
[2534] vixra:1809.0472 [pdf]
A Generalization of the Levi-Civita Connection
We define here a generalization of the well-known Levi-Civita connection. We choose an automorphism and define a connection with the help of a (non-symmetric) bilinear form.
[2535] vixra:1809.0454 [pdf]
Unfalsifiable Conjectures in Mathematics
It is generally accepted among scientists that an unfalsifiable theory, a theory which can never conceivably be proven false, can never have any use in science. In this paper, we shall address the question, “Can an unfalsifiable conjecture ever have any use in mathematics?”
[2536] vixra:1809.0451 [pdf]
Partial Quantum Tensors of Input and Output Connections
I show how many connections of Γ presently exist from R to β as they are input simultaneously through tensor products. I address the quantum state of this tensor connection step by step throughout the application presented. I also show how to prove that the tensor connection is true through its output method, using a small but varied set of tensor-calculus methods and number theory. The formation of operator functions will be plainly visible throughout the application as these two mathematical methods work together.
[2537] vixra:1809.0450 [pdf]
How Gas and Force Work Together to Create Geometrical Dispersed Patterns Based On an Object's Shape
This unique mathematical method for understanding the flow of gas through each individual object's shape shows how we can produce physical functions for each object, based on the dissemination of gas particles in accordance with its shape. We analyze the continuum per shape of the object and the forces acting on the gas, which in turn produces a unique function for the given object, owing to the rate at which the forces were applied to the gas. We also examine the different changes in the working rate due to the effects of volume and mass of the given object's shape, with our working equation derived through Green's and Gaussian functions.
[2538] vixra:1809.0374 [pdf]
One Way Speed Of Light Based on Wineland's Laser Cooling Experiment
Based on David Wineland's experiment in 1978, a laser beam points at an electromagnetically trapped magnesium ion. The frequency of the laser light in the rest frame of the laser becomes a different frequency in the rest frame of the ion. If this new frequency matches the absorption frequency of the ion, the light will be absorbed by the ion. The wavelength is independent of the reference frame. Therefore, the faster the ion moves toward the laser, the higher the frequency detected by the ion will be.
[2539] vixra:1809.0366 [pdf]
The World Lines of a Dust Collapsar
In a previous article it was shown that the end state for the dust metric of Oppenheimer and Snyder has most of its mass concentrated just inside the gravitational radius; it is proposed that the resulting object be considered as an idealized shell collapsar. Here the treatment is extended to include the family of interior metrics described by Choquet-Bruhat. The end state is again a shell collapsar, and its structure depends on the density profile at the beginning of the collapse. What is lacking in most previous commentaries on the Oppenheimer-Snyder article is the recognition that Oppenheimer and Snyder matched the time coordinate at the surface, and that implies a finite upper limit for the comoving time coordinate inside the collapsar. A collapse process having all the matter going inside the gravitational radius would require comoving times which go outside that upper limit.
[2540] vixra:1809.0351 [pdf]
The Cordiality for the Conjunction of Two Paths
A graph is called cordial if it has a 0-1 labeling such that the numbers of vertices (edges) labeled with ones and zeros differ by at most one. The conjunction of two graphs (V1, E1) and (V2, E2) is the graph G = (V, E), where V = V1 x V2, and two vertices u = (a1, a2), v = (b1, b2) are joined by an edge uv in E if a_i b_i belongs to E_i for i = 1, 2. In this paper, we present a necessary and sufficient condition for cordial labeling of the conjunction of two paths, denoted by Pn ^ Pm. We also derive an algorithm to generate cordial labelings of the conjunction Pn ^ Pm.
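As an illustration (not the paper's algorithm), the conjunction of two paths and the cordiality condition can be checked by brute force for small cases. Here the conjunction is read as the tensor product (adjacency required in both coordinates), and the induced edge label is the XOR of the endpoint labels:

```python
from itertools import product

def path_edges(n):
    """Edge set of the path P_n on vertices 0..n-1."""
    return {frozenset((i, i + 1)) for i in range(n - 1)}

def conjunction(n, m):
    """Conjunction (tensor product) of P_n and P_m:
    (a1,a2) ~ (b1,b2) iff a1~b1 in P_n and a2~b2 in P_m."""
    e1, e2 = path_edges(n), path_edges(m)
    verts = [(a, b) for a in range(n) for b in range(m)]
    edges = {frozenset((u, v)) for u in verts for v in verts
             if u != v
             and frozenset((u[0], v[0])) in e1
             and frozenset((u[1], v[1])) in e2}
    return verts, edges

def is_cordial(verts, edges, labels):
    """Cordial: vertex 0/1 counts and induced edge (XOR) counts
    each differ by at most one."""
    v_ones = sum(labels[v] for v in verts)
    e_ones = sum(labels[u] ^ labels[v] for u, v in (tuple(e) for e in edges))
    return abs(len(verts) - 2 * v_ones) <= 1 and abs(len(edges) - 2 * e_ones) <= 1

verts, edges = conjunction(2, 3)          # P_2 ^ P_3: 6 vertices, 4 edges
found = any(is_cordial(verts, edges, dict(zip(verts, bits)))
            for bits in product((0, 1), repeat=len(verts)))
print(found)  # True: a cordial labeling exists for this small case
```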
[2541] vixra:1809.0323 [pdf]
A Generalization of the Clifford Algebra
We propose here a generalization of the Clifford algebra by means of two endomorphisms. We deduce a generalized Lichnerowicz formula for the space of modified spinors.
[2542] vixra:1809.0318 [pdf]
The Dirac Hamiltonian's Egregious Violations of Special Relativity; the Nonrelativistic Pauli Hamiltonian's Unique Relativistic Extension
A single-particle Hamiltonian independent of the particle's coordinate ensures the particle conserves momentum, i.e., is free. Lorentz-covariance of that Hamiltonian's energy-momentum specifies it up to the particle's rest energy; the free particle it describes has speed below c and constant velocity parallel to its conserved momentum. Dirac took his free-particle Hamiltonian to have the same squared value as that relativistic one, but unwittingly blocked Lorentz-covariance of his Hamiltonian's energy-momentum by requiring it to be inhomogeneously linear in momentum. The Dirac "free particle" badly flouts relativity and even physical cogency; its velocity direction is extremely nonconstant, while its speed is fixed to c times the square root of three even when it interacts electromagnetically. Both its rest energy and total energy can be negative, and its velocity components and rest energy are artificially correlated by being mutually anticommuting; its alleged "spin" is an artifact of the anticommutation of its velocity components. Unlike the Dirac Hamiltonian, the nonrelativistic Pauli Hamiltonian is apparently physically sensible for particle speed far below c. Its relativistic extension is worked out via Lorentz-invariant upgrade of its associated action functional at zero particle velocity, and is obtained in closed form if there is no applied magnetic field; a successive approximation scheme must otherwise be used.
[2543] vixra:1809.0277 [pdf]
Constraining the Standard Model in Motivic Quantum Gravity
A physical approach to a category of motives must account for the emergent nature of spacetime, where real and complex numbers play a secondary role to discrete operations in quantum computation. In quantum logic, the cardinality of a set is initially replaced by a dimension of a linear space, making contact with the increasing dimensions in an operad. The operad of associahedra governs tree level scattering, and is closely related to the permutohedra and cube tiles, where cube vertices can encode components of a spinor in higher dimensional octonionic approaches. A study of rest mass generation begins with the cosmological infrared scale, set by the neutrino masses, and its related see-saw mechanism. We employ the anyonic ribbon spectrum for Standard Model states, and consider its relation to magic star algebras, giving a context for the Koide rest mass phenomenology of charged leptons and quarks.
[2544] vixra:1809.0240 [pdf]
Question on Iterated Limits in Relativity
Two iterated limits are not, in general, equal to each other. Thus, we present an example in which the massless limit of a function of $E$, $\vec{p}$, $m$ does not exist in some calculations within quantum field theory.
[2545] vixra:1809.0234 [pdf]
Proof of the Limits of Sine and Cosine at Infinity
We develop a representation of complex numbers separate from the Cartesian and polar representations and define a representing functional for converting between representations. We define the derivative of a function of a complex variable with respect to each representation and then we examine the variation within the definition of the derivative. After studying the transformation law for the variation between representations of complex numbers, we will show that the new representation has special properties which allow for a modification to the transformation law for the variation which preserves, in certain cases, the definition of the derivative. We refute a common proof that the limits of sine and cosine at infinity cannot exist. We use the modified variation in the definition of the derivative to compute the limits of sine and cosine at infinity.
[2546] vixra:1809.0231 [pdf]
Finding the Planck Length Independent of Newton's Gravitational Constant and the Planck Constant
In modern physics, it is assumed that the Planck length is a constant derived from Newton's gravitational constant, the Planck constant and the speed of light, $l_p=\sqrt{G\hbar/c^3}$. This was first discovered by Max Planck in 1899. We suggest a way to find the Planck length independently of any knowledge of Newton's gravitational constant or the Planck constant, but still dependent on the speed of light (directly or indirectly).
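For reference, the conventional derived value that the abstract starts from can be evaluated directly (constant values assumed from standard CODATA-style references):

```python
import math

# Standard 1899 definition the abstract refers to: l_p = sqrt(G*hbar/c^3).
G = 6.67430e-11         # m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s

l_p = math.sqrt(G * hbar / c**3)
print(f"{l_p:.4e} m")   # ~1.616e-35 m
```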
[2547] vixra:1809.0211 [pdf]
The Cosmological Rotation Reversal and the G\"{o}del-Brahe Model: the Modifications of the G\"{o}del Metric
The General Relativistic G\"{o}del-Brahe model visualizes the universe rotating with angular velocity $2 \pi$ radians/day - around a stationary earth. The wave function of this model of the universe, $\psi_{Univ}$, has two chiral states - clockwise and anticlockwise. Due to instabilities in the electromagnetic fields, the wave function can tunnel between the two states. A G\"{o}del-Rindler model with a height-varying acceleration gives the gravitational field of the earth. A G\"{o}del-Obukhov model with a sinusoidally varying scale factor gives the yearly north-south motion of the sun. A G\"{o}del-Randall-Sundrum model with an angular velocity varying with height gives the yearly rotation of the sun with respect to the background of the fixed stars. Confinement of light rays due to rotation in the G\"{o}del universe, coupled with an appropriate mapping, generates the illusion of sphericity over a flat earth - with half of the earth lit by sunlight and the other half in darkness. Finally, a metric combining all these properties is given. Further work is discussed, namely: (1) the origin of the earth's magnetic field due to a charged G\"{o}del universe - with a relation to the Van Allen radiation belt; (2) geomagnetic reversals due to reversals of cosmological rotation; (3) the Casimir energy in the charged G\"{o}del-type universe and the energy density required for the G\"{o}del-Brahe model; and (4) the behaviour of causality in the G\"{o}del universe and the Einstein-Podolsky-Rosen (EPR) paradox.
[2548] vixra:1809.0172 [pdf]
Dynamic Gravity Experiment with Physical Pendulum
In recent decades, several methods significantly different from the classic Cavendish torsion-balance method have been developed and used for measuring the gravitational constant G. Unfortunately, the new determinations of G have not significantly reduced its uncertainty. It seems that in recent times, the accuracy problem for the gravitational constant has not been the focus. This paper presents a new type of gravity experiment using a big and heavy physical pendulum, not for a new measurement of the gravitational constant, but for the study of special gravitational effects encountered accidentally. Surprisingly strong gravitational effects have been observed between moving masses. We have named this whole new group of gravitational phenomena "dynamic gravity". Despite the simplicity of our gravity experiment, the observed extraordinary results could lead to an unexpected revolution in gravity science.
[2549] vixra:1809.0170 [pdf]
One Way Speed Of Light Based on Anderson's Experiment
Based on Wilmer Anderson's experiment in 1937, the light detector is put in motion relative to the mirror. Two light pulses are emitted from the mirror toward the detector. The elapsed time between the two emissions is recorded on the oscilloscope. This elapsed time is larger if the detector moves away from the mirror faster. By comparing the elapsed time in the rest frame of the mirror to the elapsed time in the rest frame of the detector, the speed of the light pulse in the rest frame of the mirror is found to be different from the speed of the light pulse in the rest frame of the detector.
[2550] vixra:1809.0164 [pdf]
A Fully Relativistic Description of Stellar or Planetary Objects Orbiting a Very Heavy Central Mass
A fully relativistic numerical program, used to calculate the advance of the perihelion of Mercury or the deflection of light by the Sun, is here used also to discuss the case of S2, a star orbiting a very heavy central mass of the order of $4.3\times10^6$ solar masses.
[2551] vixra:1809.0156 [pdf]
Finding the Planck Length From Electron and Proton Fundamentals
We suggest a way to find the Planck length by finding the Compton wavelength of the electron from Compton scattering, and then measuring the proton-electron ratio using cyclotron frequency. This gives us the Planck length using a Cavendish apparatus with no knowledge of Newton's gravitational constant. The Planck length is indeed important for gravity, but Newton's gravitational constant is likely a composite constant.
[2552] vixra:1809.0086 [pdf]
An Identity for Horadam Sequences
We derive an identity connecting any two Horadam sequences having the same recurrence relation but whose initial terms may be different. Binomial and ordinary summation identities arising from the identity are developed.
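The abstract does not state its identity explicitly, so as a hedged illustration here is a classical identity of the same flavor: any Horadam sequence can be expressed through the fundamental solution U_n sharing its recurrence, w_n = w_1 U_n + q w_0 U_{n-1}:

```python
def horadam(w0, w1, p, q, n):
    """First n terms of the Horadam recurrence w_k = p*w_{k-1} + q*w_{k-2}."""
    seq = [w0, w1]
    while len(seq) < n:
        seq.append(p * seq[-1] + q * seq[-2])
    return seq

p, q, N = 3, 2, 12
w = horadam(5, 7, p, q, N)   # arbitrary initial terms
u = horadam(0, 1, p, q, N)   # fundamental (Lucas-type) sequence U_n

# Classical identity connecting two sequences with the same recurrence:
# w_n = w_1*U_n + q*w_0*U_{n-1}
assert all(w[n] == w[1] * u[n] + q * w[0] * u[n - 1] for n in range(1, N))
print("identity verified for", N - 1, "terms")
```

This identity (easily proved by induction on the shared recurrence) is offered only as an example of the genre; the paper's own identity and its binomial corollaries may differ.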
[2553] vixra:1809.0084 [pdf]
Elementary Particles, Dark Matter, Dark Energy, Cosmology, and Galaxy Evolution
We suggest united models and specific predictions regarding elementary particles, dark matter, dark energy, aspects of the cosmology timeline, and aspects of galaxy evolution. Results include specific predictions for new elementary particles and specific descriptions of dark matter and dark energy. Some modeling matches known elementary particles and extrapolates to predict other elementary particles, including bases for dark matter. Some models complement traditional quantum field theory. Some modeling features Hamiltonian mathematics and originally de-emphasizes motion. We incorporate results from traditional motion-centric and action-based Lagrangian math into our Hamiltonian-centric framework. Our modeling framework features mathematics for isotropic quantum harmonic oscillators.
[2554] vixra:1809.0070 [pdf]
A Note on Rank Constrained Solutions to Linear Matrix Equations
This preliminary note presents a heuristic for determining rank constrained solutions to linear matrix equations (LME). The method proposed here is based on minimizing a non-convex quadratic functional, which will henceforth be termed the Low-Rank-Functional (LRF). Although this method lacks a formal proof/comprehensive analysis, for example in terms of a probabilistic guarantee for converging to a solution, the proposed idea is intuitive and has been seen to perform well in simulations. To that end, many numerical examples are provided to corroborate the idea.
[2555] vixra:1809.0066 [pdf]
Quantum Mechanics and the ``Emission & Regeneration'' Unified Field Theory
The origin of the limitations of our standard theoretical model is the assumption that the energy of a particle is concentrated in a small volume of space. The limitations are bridged by introducing artificial objects and constructions like particle waves, gluons, the strong force, the weak force, gravitons, dark matter, dark energy, etc. The proposed approach models subatomic particles such as electrons and positrons as focal points in space where fundamental particles are continuously emitted and absorbed - fundamental particles in which the energy of the electron or positron is stored as rotations defining longitudinal and transversal angular momenta (fields). Interaction laws between angular momenta of fundamental particles are postulated in such a way that the basic laws of physics (Coulomb, Ampere, Lorentz, Maxwell, gravitation, etc.) can be derived from the postulates. This methodology makes sure that the approach is in accordance with the basic laws of physics - in other words, with well-proven experimental data. Due to the dynamical description of the particles, the proposed approach does not have the limitations of the standard model and is not forced to introduce artificial concepts or constructions. All forces are the product of interactions between angular momenta of fundamental particles (electromagnetic interactions) described by QED. Interactions like QCD and gauge/gravity duality are simply products of the insufficiencies of the SM and are not required.
[2556] vixra:1809.0061 [pdf]
Physically Consistent Probability Density in Noncommutative Quantum Mechanics
We formulate and solve the problem of boundary values in non-relativistic quantum mechanics in noncommutative spacetimes. The formalism developed can be useful for the formulation of the boundary value problem in noncommutative quantum mechanics.
[2557] vixra:1809.0059 [pdf]
A Proof of Benfords Law in Geometric Series
We show in this paper another proof of Benford's Law. The idea starts with the problem of finding the first digit of a power. We then deduce a function, called the $L_f$ function, to calculate the first digit of any power $a^j$. Theorem 1.2 is a consequence of the periodicity of the $L_f$ function.
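The starting problem — the first digit of a power — has a standard logarithmic solution, sketched below. This is an illustration of the general technique; the paper's own $L_f$ function may be defined differently:

```python
import math
from collections import Counter

def first_digit(a, j):
    """Leading decimal digit of a**j, via the fractional part of
    j*log10(a) -- avoids computing the huge integer a**j."""
    frac = (j * math.log10(a)) % 1.0
    return int(10 ** frac)

# Leading digits of 2^1 .. 2^5000 follow Benford's Law closely:
digits = Counter(first_digit(2, j) for j in range(1, 5001))
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
for d in range(1, 10):
    print(d, digits[d] / 5000, round(benford[d], 4))
```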
[2558] vixra:1808.0680 [pdf]
High-Accuracy Inference in Neuromorphic Circuits using Hardware-Aware Training
Neuromorphic Multiply-And-Accumulate (MAC) circuits utilizing synaptic weight elements based on SRAM or novel Non-Volatile Memories (NVMs) provide a promising approach for highly efficient hardware representations of neural networks. NVM density and robustness requirements suggest that off-line training is the right choice for ``edge'' devices, since the requirements for synapse precision are much less stringent. However, off-line training using ideal mathematical weights and activations can result in significant loss of inference accuracy when applied to non-ideal hardware. Non-idealities such as multi-bit quantization of weights and activations, non-linearity of weights, finite max/min ratios of NVM elements, and asymmetry of positive and negative weight components all result in degraded inference accuracy. In this work, it is demonstrated that non-ideal Multi-Layer Perceptron (MLP) architectures using low bitwidth weights and activations can be trained with negligible loss of inference accuracy relative to their Floating Point-trained counterparts using a proposed off-line, continuously differentiable HW-aware training algorithm. The proposed algorithm is applicable to a wide range of hardware models, and uses only standard neural network training methods. The algorithm is demonstrated on the MNIST and EMNIST datasets, using standard MLPs.
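One of the non-idealities named above, multi-bit weight quantization, can be modeled in a few lines. This is a generic sketch of a uniform symmetric quantizer, not the paper's hardware-aware training algorithm:

```python
def quantize(w, bits, w_max):
    """Uniform symmetric quantizer with 2**bits - 1 levels on [-w_max, w_max]:
    a toy stand-in for the finite precision of an NVM synaptic weight."""
    half = (2 ** bits - 2) // 2                    # bits=4 -> integer levels -7..7
    step = w_max / half
    idx = max(-half, min(half, round(w / step)))   # quantize and clip
    return idx * step

weights = [0.73, -0.41, 0.05, 1.9, -1.2]
print([round(quantize(w, 4, 1.0), 3) for w in weights])
# -> [0.714, -0.429, 0.0, 1.0, -1.0]; out-of-range weights saturate,
# one source of the inference-accuracy loss discussed above.
```

Training that is "aware" of this hardware would apply such a quantizer in the forward pass while keeping full-precision weights for the gradient updates.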
[2559] vixra:1808.0679 [pdf]
Relativistic Newtonian Gravitation That Gives the Correct Prediction of Mercury Precession and Needs Less Matter for Galaxy Rotation Observations
In the past, there was an attempt to modify Newton’s gravitational theory, in a simple way, to consider relativistic effects. The approach was “abandoned” mainly because it predicted only half of Mercury’s precession. Here we will revisit this method and see how a small logical extension can lead to a relativistic Newtonian theory that predicts the perihelion precession of Mercury correctly. In addition, the theory requires much less mass to explain galaxy rotation than standard theories do, and is also interesting for this reason.
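For context, the target number such a theory must reproduce follows from the standard perihelion-shift formula, Δφ = 6πGM/(c²a(1−e²)) per orbit. A quick evaluation for Mercury (orbital elements assumed from standard references):

```python
import math

# Standard relativistic perihelion-shift formula (per orbit):
# dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
G = 6.67430e-11        # m^3 kg^-1 s^-2
M_sun = 1.98892e30     # kg
c = 2.99792458e8       # m/s
a = 5.7909e10          # Mercury semi-major axis, m
e = 0.2056             # Mercury eccentricity
T_mercury = 87.969     # orbital period, days

dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))   # rad per orbit
orbits_per_century = 100 * 365.25 / T_mercury
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcsec/century")  # ~43
```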
[2560] vixra:1808.0642 [pdf]
Where Einstein Got It Wrong
In Einstein’s General Relativity, gravitation is considered to be only an effect of spacetime geometry. Einstein considered the sources of gravitation to be all varieties of mass and energy excepting that of gravitational fields. This exclusion leaves General Relativity correct only to first order. By considering gravitational fields to be real entities that possess field energy densities and including them as source terms for the Einstein tensor, they contribute to spacetime curvature in a way that prevents the formation of event horizons and also has the effect of accelerating the expansion of the cosmos without any need for a cosmological constant (aka “dark energy”).
[2561] vixra:1808.0641 [pdf]
On Some Ser's Infinite Product
I derive Ser-type infinite products for the exponential function and for the exponential of the digamma function, as well as an integral representation for the digamma function.
[2562] vixra:1808.0610 [pdf]
The Complexity of Student-Project-Resource Matching-Allocation Problems
In this technical note, I settle the computational complexity of nonwastefulness and stability in student-project-resource matching-allocation problems, a model that was first proposed by \cite{pc2017}. I show that computing a nonwasteful matching is complete for class $\text{FP}^{\text{NP}}[\text{poly}]$ and computing a stable matching is complete for class $\Sigma_2^P$. These results involve the creation of two fundamental problems: \textsc{ParetoPartition}, shown complete for $\text{FP}^{\text{NP}}[\text{poly}]$, and \textsc{$\forall\exists$-4-Partition}, shown complete for $\Sigma_2^P$. Both are number problems that are hard in the strong sense.
[2563] vixra:1808.0601 [pdf]
Six Different Natario Warp Drive Spacetime Metric Equations.
Warp Drives are solutions of the Einstein Field Equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. Alcubierre used the so-called $3+1$ original Arnowitt-Deser-Misner ($ADM$) formalism, following the approach of Misner-Thorne-Wheeler, to develop his warp drive theory. As a matter of fact, the first equation in his warp drive paper is derived precisely from the original $3+1$ $ADM$ formalism, and we have strong reasons to believe that Natario, who followed Alcubierre's steps, also used the original $3+1$ $ADM$ formalism to develop the Natario warp drive spacetime. Several years ago some works appeared in the scientific literature advocating two new parallel $3+1$ $ADM$ formalisms. While the original $ADM$ formalism uses mixed contravariant and covariant scripts, one of the new parallel formalisms uses only contravariant scripts while the other uses only covariant scripts. Since the Natario vector is the generator of the Natario warp drive spacetime metric, in this work we expand the original Natario vector, including the coordinate time as a new canonical basis for the Hodge star, generating an expanded Natario vector, and we present $6$ equations for the Natario warp drive spacetime: $3$ of them with constant velocities, one for the original $ADM$ formalism and $2$ others for the parallel formalisms. We also present the corresponding $3$ extended versions of the Natario warp drive spacetime metric which encompass accelerations and variable velocities, both in the original and parallel $ADM$ formalisms.
[2564] vixra:1808.0579 [pdf]
Quantum Origin of Classical Poisson Distribution Universality
Scientists have discovered a mysterious pattern that somehow connects a bus system in Mexico and chicken eyes to quantum physics and number theory. The observed universality reveals properties for a large class of systems that are independent of the dynamical details, revealing underlying mathematical connections described by the classical Poisson distribution. This note suggests that their origin can be found in the wavefunction as modeled by the geometric interpretation of Clifford algebra.
[2565] vixra:1808.0576 [pdf]
A Joint Multifractal Analysis of Finitely Many Non Gibbs-Ahlfors Type Measures
In the present paper, a new multifractal analysis of vector valued Ahlfors type measures is developed. Mutual multifractal generalizations of fractal measures such as Hausdorff and packing measures have been introduced, with associated dimensions. Essential properties of these measures have been shown using convexity arguments.
[2566] vixra:1808.0567 [pdf]
A Proof For Beal's Conjecture
In the first part of this paper, we show how a^x - b^y can be expressed as a new non-standard binomial formula (to an indeterminate power, n). In the second part, by fixing n to the value of z we compare this binomial formula to the standard binomial formula for c^z to prove the Beal Conjecture.
[2567] vixra:1808.0531 [pdf]
Goldbach's Conjecture
I prove Goldbach's conjecture. Every even number is a sum of two primes, but whether this holds for every number, however huge, has not yet been proven. All prime numbers except 2 and 3 are contained in (6n - 1) or (6n + 1), where n is a positive integer. All numbers are written here in base-6 notation, and this does not change even for huge numbers. In the figure, 2 (6n + 2), 4 (6n - 2) and 6 (6n) are the even numbers; 1 (6n + 1), 3 (6n + 3) and 5 (6n - 1) are the odd numbers.
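The two elementary facts the argument leans on — that every prime above 3 has the form 6n ± 1, and that small even numbers split into two primes — are easy to verify finitely (such a check is, of course, not a proof):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Every prime > 3 lies in residue class 1 or 5 mod 6 (i.e., 6n +- 1):
assert all(p % 6 in (1, 5) for p in range(5, 10000) if is_prime(p))

def goldbach_pair(n):
    """First prime pair (p, n-p) summing to the even number n."""
    return next((p, n - p) for p in range(2, n) if is_prime(p) and is_prime(n - p))

# Finite Goldbach verification for even numbers up to 2000:
assert all(goldbach_pair(n) for n in range(4, 2000, 2))
print(goldbach_pair(100))  # (3, 97)
```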
[2568] vixra:1808.0521 [pdf]
Developing a New Space over Deterministic Imaginary Time
In a day from days, when the famous x is lengthened to x_2 and loses its virginity... Hey-o! Here comes the danger up in this club again. Listen up! Here's the story about a little guy that lives in a dark world and uses the power of wisdom as a torch to find his way in darkness; and all day and all night, everything he sees is just illusion. I have been working on the laws of existence for some time. I developed new formulas based on a strong mechanism over philosophical hypotheses. Nobody can answer easily, but I thought many times about a better mathematical infrastructure. Actually, at the beginning I noticed that a fixed observer sees moving bodies as performing circular motion, because of emerging and changing angles over time, even if the objects move in a parallel manner relative to the observer. This would not happen accidentally, even if abstract math says nothing is going to change. Eureka! Finally, while I was in a cafe today, I remembered and developed in a few hours a new method, on some note papers I requested from the cafe, to explain existence; thereupon I asked my friends for leave, and I am writing towards morning in the name of giving a shoulder to the tired giants. The ancients smile on me!
[2569] vixra:1808.0509 [pdf]
Gamma is Irrational
We introduce an unaccustomed number system, H±, and show how it can be used to prove that gamma is irrational. This number system consists of plus and minus multiples of the terms of the harmonic series. Using some properties of ln, this system can depict the harmonic series and the limit, as n goes to infinity, of ln n at the same time, giving gamma as an infinite decimal. The harmonic series diverges to infinity, so negative terms are forced. As all rationals can be given in H± without negative terms, it follows that gamma must be irrational.
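The quantity in question is Euler's constant γ = lim (H_n − ln n); a direct partial-sum evaluation (standard numerics, unrelated to the paper's H± system) shows the limit emerging:

```python
import math

def gamma_approx(n):
    """Partial sum H_n minus ln n; converges to gamma roughly like 1/(2n)."""
    h = sum(1.0 / k for k in range(1, n + 1))
    return h - math.log(n)

for n in (10, 1000, 100000):
    print(n, round(gamma_approx(n), 7))
# gamma = 0.5772156649...; the error shrinks roughly as 1/(2n)
```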
[2570] vixra:1808.0507 [pdf]
Neutrosophic Ideals in BCK/BCI-Algebras Based on Neutrosophic Points
The concept of neutrosophic set (NS) developed by Smarandache is a more general platform which extends the concepts of the classic set and fuzzy set, intuitionistic fuzzy set and interval valued intuitionistic fuzzy set.
[2571] vixra:1808.0501 [pdf]
Neutrosophic Q-Fuzzy Subgroups
In this paper, the notion of neutrosophy in Q-fuzzy sets is introduced. Further, some properties and results on neutrosophic Q-fuzzy subgroups are discussed.
[2572] vixra:1808.0499 [pdf]
Neutrosophic Topological Spaces (Neutrosophic Topolojik Uzaylar)
This thesis, "Neutrosophic Topological Spaces", prepared by Cemil KURU, a student of the Ordu University Institute of Science, and carried out under the supervision of Asst. Prof. Dr. Mehmet KORKMAZ, was accepted by our jury on 18/12/2017, unanimously / by majority vote, as a Master's thesis in the Department of Mathematics.
[2573] vixra:1808.0494 [pdf]
Compactness in Neutrosophic Topological Spaces (Neutrosophic Topolojik Uzaylarda Kompaktlık)
This thesis, "Compactness in Neutrosophic Topological Spaces", prepared by Burak KILIÇ, a student of the Ordu University Institute of Science, and carried out under the supervision of Asst. Prof. Dr. Yıldıray ÇELİK, was accepted by our jury on 18/12/2017, unanimously / by majority vote, as a Master's thesis in the Department of Mathematics.
[2574] vixra:1808.0493 [pdf]
Neutrosophic Triplet Cosets and Quotient Groups
In this paper, by utilizing the concept of a neutrosophic extended triplet (NET), we define the neutrosophic image, neutrosophic inverse-image, neutrosophic kernel, and the NET subgroup. The notion of the neutrosophic triplet coset and its relation with the classical coset are defined and the properties of the neutrosophic triplet cosets are given. Furthermore, the neutrosophic triplet normal subgroups, and neutrosophic triplet quotient groups are studied.
[2575] vixra:1808.0490 [pdf]
New Integrated Quality Function Deployment Approach Based on Interval Neutrosophic Set for Green Supplier Evaluation and Selection
Green supplier evaluation and selection plays a crucial role in the green supply chain management of any organization to reduce the purchasing cost of materials and increase the flexibility and quality of products.
[2576] vixra:1808.0489 [pdf]
New Multiple Attribute Decision Making Method Based on DEMATEL and TOPSIS for Multi-Valued Interval Neutrosophic Sets
Interval neutrosophic fuzzy decision making is an important part of decision making under uncertainty, which is based on preference order. In this study, a new multi-valued interval neutrosophic fuzzy multiple attribute decision making method has been developed by integrating the DEMATEL (decision making trial and evaluation laboratory) method and the TOPSIS (the technique for order preference by similarity to an ideal solution) method. Evaluation values are given in the form of multi-valued interval neutrosophic fuzzy values. By using DEMATEL, dependencies among attributes can be modeled, and attribute weights are determined.
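For readers unfamiliar with the crisp baseline, a minimal classical TOPSIS (without the DEMATEL weighting or the multi-valued interval neutrosophic extension developed in the paper; the alternatives and weights below are made-up illustration data) looks like this:

```python
import math

def topsis(matrix, weights, benefit):
    """Plain (crisp) TOPSIS ranking.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if criterion j is to be maximized."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Closeness coefficient: nearer to ideal and farther from anti-ideal is better.
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, True])
best = max(range(len(scores)), key=scores.__getitem__)
print(best, [round(s, 3) for s in scores])
```

The paper's method replaces these crisp entries with multi-valued interval neutrosophic values and derives the weights via DEMATEL rather than fixing them by hand.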
[2577] vixra:1808.0483 [pdf]
On Pseudohyperbolical Smarandache Curves in Minkowski 3-Space
We define pseudohyperbolical Smarandache curves according to the Sabban frame in Minkowski 3-space. We obtain the geodesic curvatures and the expressions for the Sabban frame vectors of special pseudohyperbolical Smarandache curves. Finally, we give some examples of such curves.
[2578] vixra:1808.0476 [pdf]
Primes of the Form P
A natural number is prime if its only factors are 1 and itself. There are, by Euclid's theorem (c. 300 BC), infinitely many primes.
[2579] vixra:1808.0475 [pdf]
Q-Neutrosophic Soft Relation and Its Application in Decision Making
Q-neutrosophic soft sets are essentially neutrosophic soft sets characterized by three independent two-dimensional membership functions which stand for uncertainty, indeterminacy and falsity. Thus, it can be applied to two-dimensional imprecise, indeterminate and inconsistent data which appear in most real life problems.
[2580] vixra:1808.0473 [pdf]
Recent Neutrosophic Models for KRP Systems
Knowledge Representation and Processing (KRP) plays an important role in the development of expert systems as engines for accelerating the processes of economic and social life development.
[2581] vixra:1808.0457 [pdf]
Single Valued Neutrosophic Clustering Algorithm Based on Tsallis Entropy Maximization
Data clustering is an important field in pattern recognition and machine learning. Fuzzy c-means is considered a useful tool in data clustering. The neutrosophic set, which is an extension of the fuzzy set, has received extensive attention in solving many real-life problems involving uncertainty, inaccuracy, incompleteness and inconsistency.
[2582] vixra:1808.0453 [pdf]
Single Valued Neutrosophic Relations
We introduce the concept of a single valued neutrosophic reflexive, symmetric and transitive relation, and we study single valued neutrosophic analogues of many results concerning relationships between ordinary reflexive, symmetric and transitive relations. Next, we define the concepts of a single valued neutrosophic equivalence class and a single valued neutrosophic partition, and we prove that the set of all single valued neutrosophic equivalence classes is a single valued neutrosophic partition and that the single valued neutrosophic equivalence relation is induced by a single valued neutrosophic partition.
[2583] vixra:1808.0450 [pdf]
Some Interval Neutrosophic Linguistic Maclaurin Symmetric Mean Operators and Their Application in Multiple Attribute Decision Making
There are many practical decision-making problems in people’s lives, but the information given by decision makers (DMs) is often unclear and how to describe this information is of critical importance.
[2584] vixra:1808.0447 [pdf]
Some Results on the Comaximal Ideal Graph of a Commutative Ring
The rings considered in this article are commutative with identity and admit at least two maximal ideals. Let R be such a ring.
[2585] vixra:1808.0446 [pdf]
Special Smarandache Curves with Respect to Darboux Frame in Galilean 3-Space
In the present paper, we investigate special Smarandache curves with Darboux apparatus with respect to Frenet and Darboux frame of an arbitrary curve on a surface in the three-dimensional Galilean space G3.
[2586] vixra:1808.0442 [pdf]
Symmetry Measures of Simplified Neutrosophic Sets for Multiple Attribute Decision-Making Problems
A simplified neutrosophic set (containing interval and single-valued neutrosophic sets) can be used for the expression and application of indeterminate decision-making problems, because each element of a simplified neutrosophic set is characterized by its truth, falsity, and indeterminacy degrees.
[2587] vixra:1808.0430 [pdf]
Failure Mode and Effects Analysis Considering Consensus and Preferences Interdependence
Failure mode and effects analysis is an effective and powerful risk evaluation technique in the field of risk management, and it has been extensively used in various industries for identifying and decreasing known and potential failure modes in systems, processes, products, and services. Traditionally, a risk priority number is applied to capture the ranking order of failure modes in failure mode and effects analysis.
[2588] vixra:1808.0429 [pdf]
Fault Diagnosis Method for a Mine Hoist in the Internet of Things Environment
To reduce the difficulty of acquiring and transmitting data in mining hoist fault diagnosis systems and to mitigate the low efficiency and unreasonable reasoning process problems, a fault diagnosis method for mine hoisting equipment based on the Internet of Things (IoT) is proposed in this study.
[2589] vixra:1808.0425 [pdf]
Fully-Automated Segmentation of Fluid/Cyst Regions in Optical Coherence Tomography Images with Diabetic Macular Edema using Neutrosophic Sets and Graph Algorithms
This paper presents a fully-automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema (DME).
[2590] vixra:1808.0411 [pdf]
Improved Symmetry Measures of Simplified Neutrosophic Sets and Their Decision-Making Method Based on a Sine Entropy Weight Model
This work indicates the insufficiency of existing symmetry measures (SMs) between asymmetry measures of simplified neutrosophic sets (SNSs) and proposes the improved normalized SMs of SNSs, including the improved SMs and weighted SMs in single-valued and interval neutrosophic settings.
[2591] vixra:1808.0401 [pdf]
Medical Diagnosis Based on Single-Valued Neutrosophic Probabilistic Rough Multisets over Two Universes
In real-world diagnostic procedures, due to the limitation of human cognitive competence, a medical expert may not conveniently use some crisp numbers to express the diagnostic information,and plenty of research has indicated that generalized fuzzy numbers play a significant role in describing complex diagnostic information.
[2592] vixra:1808.0398 [pdf]
M-N Anti Fuzzy Normal Soft Groups
In this paper, we discuss the concept of the M-N anti fuzzy normal soft group; we then define the M-N anti level subsets of a normal fuzzy soft subgroup, and some of its elementary properties are also discussed.
[2593] vixra:1808.0397 [pdf]
Models for Green Supplier Selection with Some 2-Tuple Linguistic Neutrosophic Number Bonferroni Mean Operators
In this paper, we extend the Bonferroni mean (BM) operator, generalized Bonferroni mean (GBM) operator, dual generalized Bonferroni mean (DGBM) operator and dual generalized geometric Bonferroni mean (DGGBM) operator with 2-tuple linguistic neutrosophic numbers (2TLNNs) to propose 2-tuple linguistic neutrosophic numbers weighted Bonferroni mean (2TLNNWBM) operator, 2-tuple linguistic neutrosophic numbers weighted geometric Bonferroni mean (2TLNNWGBM) operator, generalized 2-tuple linguistic neutrosophic numbers weighted Bonferroni mean (G2TLNNWBM) operator, generalized 2-tuple linguistic neutrosophic numbers weighted geometric Bonferroni mean (G2TLNNWGBM) operator, dual generalized 2-tuple linguistic neutrosophic numbers weighted Bonferroni mean (DG2TLNNWBM) operator, and dual generalized 2-tuple linguistic neutrosophic numbers weighted geometric Bonferroni mean (DG2TLNNWGBM) operator.
[2594] vixra:1808.0392 [pdf]
Multi-Criteria Decision Making Method Based on Similarity Measures Under Single-Valued Neutrosophic Refined and Interval Neutrosophic Refined Environments
In this paper, we propose three similarity measure methods for single-valued neutrosophic refined sets and interval neutrosophic refined sets based on the Jaccard, Dice and Cosine similarity measures of single-valued neutrosophic sets and interval neutrosophic sets. Furthermore, we suggest two multi-criteria decision making methods under single-valued neutrosophic refined and interval neutrosophic refined environments, and give applications of the proposed multi-criteria decision making methods. Finally, we suggest a consistency analysis method for the proposed similarity measures between interval neutrosophic refined sets and give an application to demonstrate the process of the method.
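Of the three measures named, the cosine measure has a particularly compact crisp form for single-valued neutrosophic sets (following Ye's widely used definition); the refined and interval variants in the paper extend this. A sketch under that assumption:

```python
import math

def svns_cosine(A, B):
    """Cosine similarity of two single-valued neutrosophic sets.

    A, B: lists of (T, I, F) membership triples over the same universe.
    Averages, element by element, the cosine of the angle between the
    (T, I, F) vectors.
    """
    total = 0.0
    for (t1, i1, f1), (t2, i2, f2) in zip(A, B):
        num = t1 * t2 + i1 * i2 + f1 * f2
        den = (math.sqrt(t1**2 + i1**2 + f1**2)
               * math.sqrt(t2**2 + i2**2 + f2**2))
        total += num / den
    return total / len(A)
```

Identical sets score 1, and "orthogonal" triples such as pure truth versus pure falsity score 0.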
[2595] vixra:1808.0391 [pdf]
Multiple Attribute Decision-Making Method Using Similarity Measures of Neutrosophic Cubic Sets
In inconsistent and indeterminate settings, as a usual tool, the neutrosophic cubic set (NCS) containing single-valued neutrosophic numbers and interval neutrosophic numbers can be applied in decision-making to present its partial indeterminate and partial determinate information.
[2596] vixra:1808.0382 [pdf]
A Novel Skin Lesion Detection Approach Using Neutrosophic Clustering and Adaptive Region Growing in Dermoscopy Images
This paper proposes novel skin lesion detection based on neutrosophic clustering and adaptive region growing algorithms applied to dermoscopic images, called NCARG. First, the dermoscopic images are mapped into a neutrosophic set domain using the shearlet transform results for the images.
[2597] vixra:1808.0368 [pdf]
A Study on Neutrosophic Cubic Graphs with Real Life Applications in Industries
Neutrosophic cubic sets are a more generalized tool by which one can handle imprecise information more effectively than with fuzzy sets and all other versions of fuzzy sets. Neutrosophic cubic sets have more flexibility, precision and compatibility with the system than previous fuzzy models. On the other hand, graphs represent a problem physically in the form of diagrams, matrices, etc., which are very easy to understand and handle.
[2598] vixra:1808.0362 [pdf]
Decision-Making Approach Based on Neutrosophic Rough Information
Rough set theory and neutrosophic set theory are mathematical models to deal with incomplete and vague information. These two theories can be combined into a framework for modeling and processing incomplete information in information systems. Thus, the neutrosophic rough set hybrid model gives more precision, flexibility and compatibility to the system as compared to the classic and fuzzy models. In this research study, we develop neutrosophic rough digraphs based on the neutrosophic rough hybrid model. Moreover, we discuss regular neutrosophic rough digraphs, and we solve decision-making problems by using our proposed hybrid model. Finally, we give a comparison analysis of two hybrid models, namely, neutrosophic rough digraphs and rough neutrosophic digraphs.
[2599] vixra:1808.0361 [pdf]
Decision-Making via Neutrosophic Support Soft Topological Spaces
The concept of interval neutrosophic sets has been studied, and the introduction of a new kind of set in topological spaces, called the interval valued neutrosophic support soft set, has been suggested. We study some of its basic properties. The main purpose of this paper is to give the optimum solution to decision-making in real life problems using the interval valued neutrosophic support soft set.
[2600] vixra:1808.0350 [pdf]
Inverse Properties in Neutrosophic Triplet Loop and Their Application to Cryptography
A generalized group is an algebraic structure which has a deep physical background in the unified gauge theory and has direct relation with isotopies. Mathematicians and physicists have been trying to construct a suitable unified theory for twistor theory, isotopies theory, and so on. It was known that generalized groups are tools for constructions in unified geometric theory and electroweak theory.
[2601] vixra:1808.0349 [pdf]
Left (Right)-Quasi Neutrosophic Triplet Loops (Groups) and Generalized BE-Algebras
The new notion of a neutrosophic triplet group (NTG) is proposed by Florentin Smarandache; it is a new algebraic structure different from the classical group. The aim of this paper is to further expand this new concept and to study its application in related logic algebra systems. Some new notions of left (right)-quasi neutrosophic triplet loops and left (right)-quasi neutrosophic triplet groups are introduced, and some properties are presented.
[2602] vixra:1808.0345 [pdf]
Multi-Attribute Decision-Making Method Based on Neutrosophic Soft Rough Information
Soft sets (SSs), neutrosophic sets (NSs), and rough sets (RSs) are different mathematical models for handling uncertainties, but they are mutually related. In this research paper, we introduce the notions of soft rough neutrosophic sets (SRNSs) and neutrosophic soft rough sets (NSRSs) as hybrid models for soft computing. We describe a mathematical approach to handle decision-making problems in view of NSRSs. We also present an efficient algorithm of our proposed hybrid model to solve decision-making problems.
[2603] vixra:1808.0340 [pdf]
Neutrosophic Duplet Semi-Group and Cancellable Neutrosophic Triplet Groups
The notions of the neutrosophic triplet and neutrosophic duplet were introduced by Florentin Smarandache. From the existing research results, the neutrosophic triplets and neutrosophic duplets are completely different from the classical algebra structures. In this paper, we further study neutrosophic duplet sets, neutrosophic duplet semi-groups, and cancellable neutrosophic triplet groups. First, some new properties of neutrosophic duplet semi-groups are found, and the following important result is proven: there is no finite neutrosophic duplet semi-group.
[2604] vixra:1808.0337 [pdf]
Neutrosophic Incidence Graphs With Application
In this research study, we introduce the notion of single-valued neutrosophic incidence graphs. We describe certain concepts, including bridges, cut vertex and blocks in single-valued neutrosophic incidence graphs. We present some properties of single-valued neutrosophic incidence graphs. We discuss the edge-connectivity, vertex-connectivity and pair-connectivity in neutrosophic incidence graphs. We also deal with a mathematical model of the situation of illegal migration from Pakistan to Europe.
[2605] vixra:1808.0332 [pdf]
Neutrosophic Nano Ideal Topological Structures
Neutrosophic nano topology and nano ideal topological spaces motivated the authors to propose this new concept. The aim of this paper is to introduce a new type of structural space called neutrosophic nano ideal topological spaces and to investigate the relation between neutrosophic nano topological spaces and neutrosophic nano ideal topological spaces. We define some closed sets in these spaces to establish their relationships. Basic properties and characterizations related to these sets are given.
[2606] vixra:1808.0330 [pdf]
Neutrosophic Quadruple BCK/BCI-Algebras
The notion of a neutrosophic quadruple BCK/BCI-number is considered, and a neutrosophic quadruple BCK/BCI-algebra, which consists of neutrosophic quadruple BCK/BCI-numbers, is constructed. Several properties are investigated, and a (positive implicative) ideal in a neutrosophic quadruple BCK-algebra and a closed ideal in a neutrosophic quadruple BCI-algebra are studied.
[2607] vixra:1808.0316 [pdf]
On Neutrosophic Closed Sets
The aim of this paper is to introduce the concept of closed sets in terms of neutrosophic topological spaces. We also study some of the properties of neutrosophic closed sets. Further, we introduce continuity and contra continuity for the introduced set. The two functions and their relations are studied via a neutrosophic point set.
[2608] vixra:1808.0314 [pdf]
On the Powers of Fuzzy Neutrosophic Soft Matrices
In some real life applications, one has to consider not only the truth membership supported by the evidence but also the falsity membership against the evidence, which is beyond the scope of fuzzy sets and IVFSs.
[2609] vixra:1808.0310 [pdf]
Single–Valued Neutrosophic Filters in EQ–algebras
This paper introduces the concept of single–valued neutrosophic EQ–subalgebras, single–valued neutrosophic EQ–prefilters and single–valued neutrosophic EQ–filters. We study some properties of single–valued neutrosophic EQ–prefilters and show how to construct single–valued neutrosophic EQ–filters. Finally, the relationship between single–valued neutrosophic EQ–filters and EQ–filters are studied.
[2610] vixra:1808.0304 [pdf]
Study on the Development of Neutrosophic Triplet Ring and Neutrosophic Triplet Field
Rings and fields are significant algebraic structures in algebra and both of them are based on the group structure. In this paper, we attempt to extend the notion of a neutrosophic triplet group to a neutrosophic triplet ring and a neutrosophic triplet field. We introduce a neutrosophic triplet ring and study some of its basic properties. Further, we define the zero divisor, neutrosophic triplet subring, neutrosophic triplet ideal, nilpotent integral neutrosophic triplet domain, and neutrosophic triplet ring homomorphism. Finally, we introduce a neutrosophic triplet field.
[2611] vixra:1808.0298 [pdf]
Achievable Single-Valued Neutrosophic Graphs in Wireless Sensor Networks
[2612] vixra:1808.0294 [pdf]
Nash Embedding and Equilibrium in Pure Quantum States
With respect to probabilistic mixtures of the strategies in non-cooperative games, quantum game theory provides guarantee of fixed-point stability, the so-called Nash equilibrium. This permits players to choose mixed quantum strategies that prepare mixed quantum states optimally under constraints. We show here that fixed-point stability of Nash equilibrium can also be guaranteed for pure quantum strategies via an application of the Nash embedding theorem, permitting players to prepare pure quantum states optimally under constraints.
[2613] vixra:1808.0293 [pdf]
Compiling Adiabatic Quantum Programs
We develop a non-cooperative game-theoretic model for the problem of graph minor-embedding to show that optimal compiling of adiabatic quantum programs in the sense of Nash equilibrium is possible.
[2614] vixra:1808.0287 [pdf]
Update the Path Integral in Quantum Mechanics by Using the Energy Pipe Streamline
The path integral in quantum mechanics is a very important mathematical tool, widely applied in quantum electrodynamics and quantum field theory, but its basic concepts remain confusing. The first issue is the propagation of probability. The second is that the path can be any path one can draw. How can this work? In this article, a new definition of the energy pipe streamline integral is introduced, in which the mutual energy theorem, the mutual energy flow theorem, the mutual energy principle, the self-energy principle, the Huygens principle, and the surface integral inner product of electromagnetic fields are applied to offer a meaningful, upgraded path integral. The mutual energy flow is the energy flow from the emitter to the absorber. This energy flow is built from the retarded wave radiated by the emitter and the advanced wave radiated by the absorber. The mutual energy flow theorem guarantees that the energy of the photon going through any surface between the emitter and the absorber is the same. This allows us to build many slender flow pipes to describe the energy flow. The path integral can be defined on these pipes; this updated path integral is referred to as the energy pipe streamline integral. The Huygens principle allows us to insert virtual current sources at any place along the pipes. The self-energy principle tells us that any particle consists of four waves: the retarded wave, the advanced wave, and two further time-reversal waves. All these waves cancel and, hence, do not carry or transfer any energy. Energy is only carried and transferred by the mutual energy flow; hence, the mutual energy flow theorem is actually the energy flow theorem. The wave looks like a probability wave, but the mutual energy flow is a real energy flow, not a probability flow. In this article the streamline integral is applied to the photon, which satisfies Maxwell's equations. However, this concept can easily be extended to other particles, for example electrons, which satisfy the Schrödinger or Dirac equation.
[2615] vixra:1808.0283 [pdf]
Analysis of Muon Count Variations According to Detector's Measuring Scale
I analyzed muon count variations according to the detector's measuring angle, calculated the declining tendency and analyzed the resulting values, inferring as an experimental hypothesis that muons reaching the surface tend toward vertical incidence, since vertical incidence minimizes the distance traveled. To support this hypothesis, I calculated a limiting travel distance based on the muon's lifespan and average velocity, and derived the muon's maximum incidence angle at the Earth's surface from the average altitude at which muons emerge. After performing the experiment, I analyzed the measured data by the least squares method (LSM). In conclusion, I found that for every 10 degrees added to the detector's measuring angle, the muons counted per 10 minutes decrease by about 9.07 on average, and that muon counts drop sharply between 30°–40° and between 50°–60°.
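The least-squares fit referenced above is an ordinary straight-line regression of counts against angle. A minimal sketch, with purely illustrative numbers (not the paper's data):

```python
def lsm_fit(xs, ys):
    """Ordinary least squares for y = a*x + b (LSM line fit)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx          # slope: counts lost per degree
    b = my - a * mx        # intercept: counts at 0 degrees
    return a, b

# hypothetical counts per 10 minutes in each 10-degree bin,
# chosen to decline by 9 counts per 10 degrees
angles = [0, 10, 20, 30, 40, 50, 60]
counts = [120, 111, 102, 93, 84, 75, 66]
a, b = lsm_fit(angles, counts)
```

On exactly linear data like this, the fit recovers the slope (-0.9 counts per degree, i.e. -9 per 10 degrees) and intercept exactly.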
[2616] vixra:1808.0281 [pdf]
A Classical Group of Neutrosophic Triplet Groups
Fuzzy set theory was introduced by Zadeh and was generalized to the Intuitionistic Fuzzy Set (IFS) by Atanassov. Real-world, uncertain, incomplete, indeterminate, and inconsistent data were presented philosophically as a neutrosophic set by Smarandache [3], who also studied the notion of neutralities that exist in all problems.
[2617] vixra:1808.0267 [pdf]
An Extension of Neutrosophic AHP–SWOT Analysis for Strategic Planning and Decision-Making
Every organization seeks to set strategies for its development and growth and to do this, it must take into account the factors that affect its success or failure. The most widely used technique in strategic planning is SWOT analysis. SWOT examines strengths (S), weaknesses (W), opportunities (O) and threats (T), to select and implement the best strategy to achieve organizational goals.
[2618] vixra:1808.0265 [pdf]
An Outline of Cellular Automaton Universe Via Cosmological KdV Equation
It has been known for long time that the cosmic sound wave was there since the early epoch of the Universe. Signatures of its existence are abound. However, such a sound wave model of cosmology is rarely developed fully into a complete framework.
[2619] vixra:1808.0264 [pdf]
Einstein’s Reply to Bell and Others? a Simple Constructive Classical Foundation for Quantum Theory
Having elsewhere refuted Bell’s theorem irrefutably with elementary mathematics, we here advance Einstein’s ideas similarly with a classical Lorentz-invariant theory, observationally-indistinguishable from quantum mechanics. Given that our elementary theory is straight-forward and non-mysterious, we provide an Einsteinian—a specifically local and truly realistic—advance toward understanding the classical nature of physical reality at the quantum level. We thus resolve Bell’s dilemma in Einstein’s favor: as Bell half-expected, he and his supporters were being rather silly.
[2620] vixra:1808.0189 [pdf]
RxCxHxO-valued Gravity as a Grand Unified Field Theory
It is argued how {\bf R} $\otimes$ {\bf C} $\otimes$ {\bf H} $\otimes$ {\bf O}-valued Gravity (real-complex-quaterno-octonionic Gravity) naturally can describe a Grand Unified Field theory of Einstein's gravity with a Yang-Mills theory containing the Standard Model group $SU(3) \times SU(2) \times U(1)$. In particular, it leads to a $[SU(4)]^4$ symmetry group revealing the possibility of extending the standard model by introducing additional gauge bosons, heavy quarks and leptons, and an extra $fourth$ family of fermions. We finalize by displaying the analog of the Einstein-Hilbert action for ${\bf R} \otimes {\bf C} \otimes {\bf H} \otimes {\bf O}$-valued gravity via the use of matrices, and which is based on ``coloring" the graviton; i.e. by attaching internal indices to the metric $g_{\mu \nu}$. In the most general case, $U(16)$ arises as the isometry group, while $U(8)$ is the isometry group in the split-octonion case.
[2621] vixra:1808.0179 [pdf]
Electron Carries "Hidden" 31,6 GW Field Energy Vortex
An electron is enveloped by a "hidden" electromagnetic field-energy circulation vortex of $\approx 31,6$ GW passive power, determined by the Poynting-vector field existing around an electron. The energy vortex is most intense in the proximity of the classical electron radius, with its maximum in the equatorial plane. A theoretical upper limit of such (non-usable) passive energy circulation is analytically determined by integration of the Poynting-vector field over a specific plane of reference. The result highlights a new singularity problem of classical electron theory.
[2622] vixra:1808.0157 [pdf]
New Research Shall be Initiated in Regard with the Effects of Mood Alternation During Neurological Shutdown
In 1994 the Death With Dignity Act was passed by Oregon voters, becoming the first law in American history permitting physician-assisted suicide. Pain, negative mental feelings, and unawareness of future events in the Universe are considered the only disadvantages of death. Aside from the issues of human cloning and mind uploading as progress toward eternal life, complementary research regarding the effects of mood alternation during neurological shutdown shall be initiated. Alternation of human feelings by stimulation of the brain's synapses would lead to a more peaceful mental shutdown. The pain circuit (the circuit of negative emotions during death) shall be identified and stimulated during death. The stimulation of the circuit would begin right after the act of suicide and the suppression of oxygen that follows, and continue for about 15 minutes. Since the BRAIN Initiative began in 2013, the NIH and the BRAIN Initiative have been responsible for further research on a variety of different brain circuits. The BRAIN Initiative is a collaborative, public-private research initiative announced by the Obama administration with the goal of supporting the development and application of innovative technologies that can create a dynamic understanding of brain function.
[2623] vixra:1808.0155 [pdf]
The Complexity of Robust and Resilient $k$-Partition Problems
In this paper, we study a $k$-partition problem where a set of agents must be partitioned into a fixed number of $k$ non-empty coalitions. The value of a partition is the sum of the pairwise synergies inside its coalitions. Firstly, we aim at computing a partition that is robust to failures from any set of agents with bounded size. Secondly, we focus on resiliency: when a set of agents fail, others can be moved to replace them. We settle the computational complexity of decision problem \textsc{Robust-$k$-Part} as complete for class $\Sigma_2^P$. We also conjecture that resilient $k$-partition is complete for class $\Sigma_3^P$ under simultaneous replacements, and for class PSPACE under sequential replacements.
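The value function in the abstract (sum of pairwise synergies inside coalitions) is easy to state in code. As a hedged illustration only, here is a brute-force evaluator for tiny instances; the paper's contribution is the complexity classification of the robust and resilient variants, not this naive search:

```python
from itertools import product

def partition_value(labels, syn):
    """Sum of pairwise synergies between agents in the same coalition.

    labels : coalition index per agent
    syn    : upper-triangular synergy matrix, syn[i][j] for i < j
    """
    n = len(labels)
    return sum(syn[i][j] for i in range(n) for j in range(i + 1, n)
               if labels[i] == labels[j])

def best_k_partition(n, k, syn):
    """Brute force over label assignments, keeping only partitions
    in which all k coalitions are non-empty."""
    best = None
    for labels in product(range(k), repeat=n):
        if len(set(labels)) != k:
            continue
        v = partition_value(labels, syn)
        if best is None or v > best[0]:
            best = (v, labels)
    return best

# 4 agents, synergy only within pairs (0,1) and (2,3), plus a weak (0,2) link
syn = [[0, 5, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 5],
       [0, 0, 0, 0]]
```

With k = 2 the optimum groups {0, 1} with {2, 3} for value 10; the robust variant in the paper must additionally guarantee value after any bounded set of agents fails.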
[2624] vixra:1808.0133 [pdf]
High-Level Task Planning in Robotics with Symbolic Model Checking
A robot control system contains a low-level motion planner and a high-level task planner. The motions are generated with keyframe-to-keyframe planning, while the tasks are described with primitive action names. A good starting point for formalizing task planning is a mindmap created manually for a motion capture recording. It contains the basic actions in natural language and is the blueprint for a formal ontology. The mocap annotations are extended with features into a dataset, which is used for training a neural network. The resulting model is a qualitative physics engine, which predicts future states of the system.
[2625] vixra:1808.0119 [pdf]
Topological Skyrme Model with Wess-Zumino Anomaly term and their Representations
Our model here, the Skyrme-Wess-Zumino model, is the Skyrme lagrangian supplemented with the Wess-Zumino anomaly term. It is commonly believed that the spin-half octet and spin three-half decuplet are the lowest dimensional representations that the three-flavour Skyrmions would correspond to. We study the effect of including the electric charges consistently in these analyses. We show that this indeed leads to significant improvement in our understanding of the proper two-flavour and three-flavour Skyrmionic representations.
[2626] vixra:1808.0117 [pdf]
Short Note on Space Wind Powered by Disorder: Dark Energy
After midnight, just before sleeping, I noticed in my bed that free space itself can cause a parachute effect on moving bodies, especially bodies moving in low gravitational fields, as in the Pioneer Anomaly. For this reason, while the speed of one satellite is decreasing, the speed of another that spins around the world on a different axis can increase; a satellite wandering in the interstellar medium can speed up just as it can slow down; low-frequency light and high-frequency light behave differently; galaxies with lower mass can spin faster, since mutual gravitation is not the only option. These are only a few examples.
[2627] vixra:1808.0116 [pdf]
Sine Function at Rational Argument, Finite Product of Gamma Functions and Infinite Product Representation
I corrected Theorem 21 of the previous paper, obtaining an identity for the sine function at rational argument involving a finite sum of gamma functions; hence, the infinite product representation arises.
[2628] vixra:1808.0107 [pdf]
Mapping the Fourfold H4 600-Cells Emerging from E8: a Mathematical and Visual Study
It is widely known that the E8 polytope can be folded into two Golden Ratio (Phi) scaled copies of the 4 dimensional (4D) 120 vertex 720 edge H4 600-cell. While folding an 8D object into a 4D one is done by applying the dot product of each vertex to a 4x8 folding matrix, we use an 8x8 rotation matrix to produce four 4D copies of H4 600-cells, with the original two left side scaled 4D copies related to the two right side 4D copies in a very specific way. This paper will describe and visualize in detail the specific symmetry relationships which emerge from that rotation of E8 and the emergent fourfold copies of H4. It will also introduce a projection basis using the Icosahedron found within the 8x8 rotation matrix. It will complete the detail for constructing E8 from the 3D Platonic solids, Icosians, and the 4D H4 600-cell. Eight pairs of Phi scaled concentric Platonic solids are identified directly using the sorted and grouped 3D projected vertex norms present within E8.
[2629] vixra:1808.0085 [pdf]
Hidden Truth of Free Fall Experiment
The free fall experiment has been performed countless times for centuries. However, many facts were overlooked. A simple experiment known since Galileo tells more than just gravity. An experiment everyone can do will not attract much serious attention, and dishing up the same old stuff of a few hundred years might seem child's play to some. However, many fundamentals of the universe are revealed if one looks closer. Significant facts can be overlooked, ignored, or misinterpreted when we rush forward. Nevertheless, free fall experiments reveal: 1. Gravity is a force. 2. Free fall is an action powered by gravity. 3. Gravity contradicts attraction force; it acts exactly like a contact force. 4. The infinite independence of space.
[2630] vixra:1808.0037 [pdf]
Generalization of Pollack Rule and Alternative Power Equation
After showing that only one of the different versions of Pollack's rule found in the literature agrees with the experimental behavior of a CPU running at stock frequency versus the same CPU overclocked, we introduce a formal simplified model of a CPU and derive a generalized Pollack's rule also valid for multithread architectures, caches, clusters of processors, and other computational devices described by this model. A companion equation for power consumption is also proposed.
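For orientation, the classic single-core form of Pollack's rule says performance grows roughly as the square root of the complexity (area) invested. The sketch below illustrates only that classic scaling, combined with a common Amdahl-style multicore assumption of my own; it is not the paper's generalized rule or its power equation:

```python
import math

def pollack_speedup(area_ratio):
    """Classic Pollack's rule: performance ~ sqrt(area/complexity ratio)."""
    return math.sqrt(area_ratio)

def multicore_perf(n_cores, core_area, parallel_fraction):
    """Amdahl-style extension (an assumption here, not the paper's model):
    each core delivers sqrt(core_area) performance; the serial fraction of
    the workload runs on one core, the rest is spread over all cores."""
    per_core = math.sqrt(core_area)
    serial = (1 - parallel_fraction) / per_core
    parallel = parallel_fraction / (per_core * n_cores)
    return 1.0 / (serial + parallel)
```

Quadrupling a single core's area thus only doubles its predicted performance, which is why the multicore trade-off the paper formalizes matters.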
[2631] vixra:1808.0033 [pdf]
Digital Quantum Simulation of Laser-Pulse Induced Tunneling Mechanism in Chemical Isomerization Reaction
Using quantum computers to simulate polyatomic reaction dynamics has an exponential advantage in the amount of resources needed over classical computers. Here we demonstrate an exact simulation of the dynamics of the laser-driven isomerization reaction of asymmetric malondialdehydes. We discretize space and time, decompose the Hamiltonian operator according to the number of qubits and use a Walsh-series approximation to implement the quantum circuit for diagonal operators. We observe that the reaction evolves by means of a tunneling mechanism through a potential barrier and that the final state is in close agreement with theoretical predictions. All quantum circuits are implemented through IBM's QISKit platform in an ideal quantum simulator.
[2632] vixra:1807.0510 [pdf]
On the Representation of Even Integers by the Sum of Prime Numbers
The main objective of this short note is to prove that some statements concerning the representation of even integers by sums of prime numbers are equivalent to a true trivial case. This implies that these statements are also true. The analysis is based on a new prime formula and some trigonometric expressions.
[2633] vixra:1807.0496 [pdf]
Bell's Inequality Refuted on Bell's Own Terms
Bell's famous inequality contradicts ours. But ours holds algebraically, experimentally, classically and quantum-mechanically: Bell's does not. So Bell's inequality is refuted on Bell's terms as we identify his naively-realistic error and correct it.
[2634] vixra:1807.0485 [pdf]
Intuitionistic Evidence Sets
Dempster-Shafer evidence theory can express and deal with uncertain and imprecise information well, and it satisfies a weaker condition than Bayesian probability theory. The traditional single basic probability assignment only considers the degree to which the evidence supports the subsets of the frame of discernment. In order to simulate human decision-making processes and any activities requiring human expertise and knowledge, intuitionistic evidence sets (IES) are proposed in this paper. They take into account not only the degree of support, but also the degree of non-support. The combination rule of intuitionistic basic probability assignments (IBPAs) is also investigated. The feasibility and effectiveness of the proposed method are illustrated with an application to multi-criteria group decision making.
[2635] vixra:1807.0454 [pdf]
Time Arrow Spinors for the Modified Cosmological Model
We construct time arrow spinor states and define for them a Stern--Gerlach analogue Hamiltonian. The dispersion relations of the allowed modes are derived in a few special cases. We examine experimental data regarding negative frequency resonant radiation and show that the energy shift of the negative frequency mode is on the characteristic scale of the energies of the new Hamiltonian. We describe the similitude of the modified cosmological model (MCM) and the Stern--Gerlach apparatus, and we also show how the Pauli matrices are well-suited to applications in MCM cosmology. Complex and quaternion phase are combined in the wavefunction to generate new multiplectic structures. The principles described in this paper are oriented toward a time circuit application so we briefly describe an electrical circuit whose constructive elements elucidate the requirements needed for a working time circuit. The algebraic graph representation of electrical nodes with different electric potentials is replaced with time nodes that have different times in the time circuit graph.
[2636] vixra:1807.0450 [pdf]
Explanation of Double-Slit Experiment
According to the unified theory of dynamic space, the inductive-inertial phenomenon and its forces have been developed. These forces act on the electric units of the dynamic space, forming the grouping units (namely electric charges, or forms of the electric field). The nature of the magnetic forces is explained: they are Coulomb electric forces between these grouping units, created by the accelerated electron. Thus, Coulomb's law for magnetism, the physical significance of the magnetic quantities, and the so-called strangeness of the fluctuation of the nucleon magnetic dipole moment are interpreted. Additionally, due to the in-phase motion of the grouping units, a parallel common course of their electrons and a superposition picture of their motion waves arise, as displayed in the double-slit experiment.
[2637] vixra:1807.0429 [pdf]
Time In Projectile Motion
In the projectile motion under vertical gravity, the horizontal speed remains constant while the vertical speed increases. The elapsed time to travel over a fixed length in the horizontal direction remains constant regardless of the vertical speed. Such elapsed time is independent of any reference frame in vertical motion. Therefore, the elapsed time is independent of the reference frame. Consequently, time is independent of the reference frame.
[2638] vixra:1807.0417 [pdf]
A Dual Identity Based Symbolic Understanding of the Gödel’s Incompleteness Theorems, P-NP Problem, Zeno’s Paradox and Continuum Hypothesis
A semantic analysis of formal systems is undertaken, wherein the duality of their symbolic definition based on the “State of Doing” and “State of Being” is brought out. We demonstrate that when these states are defined in a way that opposes each other, it leads to contradictions. This results in the incompleteness of formal systems as captured in the Gödel’s theorems. We then proceed to resolve the P-NP problem, which we show to be a manifestation of Gödel’s theorem itself. We then discuss the Zeno’s paradox and relate it to the same aforementioned duality, but as pertaining to discrete and continuous spaces. We prove an important theorem regarding representations of irrational numbers in continuous space. We extend the result to touch upon the Continuum Hypothesis and present a new symbolic conceptualization of space, which can address both discrete and continuous requirements. We term this new mathematical framework as “hybrid space”.
[2639] vixra:1807.0391 [pdf]
Eternal Sun
This paper seeks to explain three solar puzzles, namely: (1) the solar neutrino problem, (2) the coronal temperature problem, and (3) the ion and electron temperature discrepancy. It considers a hypertoroidal cosmological model, in which spacetime has the topology $S^3 \times S^1$, which has also been explored by Segal and Guillemin. The retarded fluxes emanating from the sun interact with the advanced fluxes returning to their origin, and this interaction provides the solution.
[2640] vixra:1807.0340 [pdf]
Entanglement Condition for W Type Multimode States and a Scheme for Experimental Realization
We derive a class of inequality relations, using both the sum uncertainty relations of su(2) algebra operators and the Schrodinger-Robertson uncertainty relation of partially transposed su(1, 1) algebra operators, to detect the three-mode entanglement of non-Gaussian states of electromagnetic field. These operators are quadratic in mode creation and annihilation operators. The inseparability condition obtained using su(2) algebra operators is shown to guarantee the violation of stronger separability condition provided by Schrodinger-Robertson uncertainty relation of partially transposed su(1, 1) algebra operators. The obtained inseparability condition is also shown to be a necessary condition for W type entangled states and it is used to derive the general form for a family of such inseparability conditions. An experimental scheme is proposed to test the violation of separability condition. The results derived for three-mode systems are generalized to multimode systems.
[2641] vixra:1807.0337 [pdf]
Neutron Clusters Explain the Distribution of Dark Matter
An atom passing through a photon sphere can lose all of its electrons, and its nucleus can then disintegrate into a neutron cluster. This explains the distribution of dark matter and why positrons exist in that region.
[2642] vixra:1807.0334 [pdf]
The Chronology Protection Conjecture and the Formation of the Kerr Black Hole Through Gravitational Collapse
The exterior of the Kerr black hole is considered the likely spacetime geometry around a black hole formed by gravitational collapse. However, the Kerr black hole has chronology-violating Closed Time-like Curves (CTCs) near the ring singularity in its interior. Hawking considered the formation of CTCs using a scalar field and showed that a back reaction of nature - divergence of the energy-momentum tensor - would prevent the formation of CTCs. The question being raised here is whether the physics behind Hawking's concept of "chronology protection" will actually prevent the formation of the Kerr black hole through gravitational collapse. On the other hand, if the Kerr black hole indeed gets formed by gravitational collapse, then one has a counterexample to the Chronology Protection Conjecture. Chronology protection thus imposes the constraint that the formation of a rotating black hole through gravitational collapse should be such that the interior metric of the black hole does not match that of the Kerr black hole, or any other black hole with CTCs. It is possible that quantum gravity effects rule out the formation of CTCs during gravitational collapse - as they are expected to remove singularities. There may however be classical solutions to the problem. Angular momentum originating from the rotation and the formation of CTCs are intimately related - through the off-diagonal term $g_{t \phi}$, i.e., the coefficient of the $dtd\phi$ term (corresponding to the temporal coordinate $t$ and the longitudinal angular coordinate $\phi$) - of the metric tensor. A possible solution is that during the gravitational collapse, all the angular momentum of the star is radiated away by gravitational waves - leaving behind as residual the Schwarzschild black hole.
[2643] vixra:1807.0332 [pdf]
Prospects for Digital Audio Broadcasting in South Africa
The authors submitted a comment to the Independent Communications Authority of South Africa (ICASA) in response to the call for comment in the Discussion Document on Digital Sound Broadcasting (DSB) that appeared in South Africa’s Government Gazette No. 41534 of 29 March 2018. Consequently, the second author made an oral submission at the public hearings on this issue at the ICASA premises in Sandton on 13 July 2018. In this document, we elaborate on the thinking in our submission in response to a request in this regard from Advocate Dimakatso Qocha, the Councillor who chaired the hearing. We still intend to address ourselves only to Question 1 of the Discussion Document: Is there a need for the introduction of DSB technologies in South Africa? We are convinced that there is no compelling argument for the introduction of the technology in the way apparently requested by many in the industry. We do not however oppose the liberalisation of signal distribution or a technology (and content) neutral approach to spectrum management and tradeable broadcasting and spectrum rights. If these were introduced, broadcasters would be able to decide on a purely commercial basis when and if to introduce digital broadcasts, and new entry to the market would be possible through the purchase of appropriate rights.
[2644] vixra:1807.0324 [pdf]
Lyapunov-Type Inequality for the Hadamard Fractional Boundary Value Problem on a General Interval [a;b], (1≤a<b)
In this paper we study an open problem: using two different methods, we obtain several Lyapunov-type and Hartman-Wintner-type inequalities for a Hadamard fractional differential equation on a general interval [a,b] (1≤a<b) with boundary value conditions.
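For orientation, the classical integer-order Lyapunov inequality that results of this type generalize states:

```latex
% Classical Lyapunov inequality: if y'' + q(t)\,y = 0 admits a
% nontrivial solution with y(a) = y(b) = 0 and a < b, then
\int_a^b |q(t)|\, dt > \frac{4}{b-a}.
```

Fractional versions replace the second derivative with a fractional (here, Hadamard) derivative and modify the right-hand side accordingly.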
[2645] vixra:1807.0318 [pdf]
Structural Damage Information Decision Based on Z-numbers
Structural health monitoring (SHM) has great economic and research value because of the application of finite element model technology, structural damage identification theory, intelligent sensing systems, signal processing technology, and so on. A typical SHM system involves three major subsystems: a sensor subsystem, a data processing subsystem, and a health evaluation subsystem. Sensor data fusion is of great significance for the data processing subsystem. In this paper, considering the fuzziness and reliability of the data, a method based on Z-numbers is proposed for damage information fusion at the decision level; it is a softer method and avoids a small amount of data having a severe effect on the fusion result. The result given by a simulation example of a space structure shows the effectiveness of this method.
[2646] vixra:1807.0313 [pdf]
Quark String of Elementary Particle
This paper suggests a quark combination of elementary particles based on the AdS/CFT correspondence. Through this, we can define a quark conservation law and the Majorana particle. The tension of a closed string, which diverges to infinity, confines quarks as in the color confinement of the strong interaction.
[2647] vixra:1807.0290 [pdf]
The Gravity Primer
It was shown in [1] that gravitational interaction can be expressed as an algebraic quadratic invariant form of energies. This allows the decomposition of the entire gravitational system into the sum of squares of the energies of its composing particles. Even so, we ran into serious problems when it came to figuring out the Hamiltonian and calculating the total energy of the system from it. (Equivalently put, the algebraic invariant above is not a Hamiltonian one.) The problem is: what goes wrong? This is what this article is about, and the answer is very simple.
[2648] vixra:1807.0278 [pdf]
One Way Speed Of Light Based on Fizeau's Experiment
Based on Fizeau's experiment, the single cogwheel is replaced with two rotating disks to measure the one-way speed of light. A single slit is cut out in the radial direction on each disk for the light to pass through the disk. With both disks rotating at the same angular speed, the light can pass through both disks only if the second slit is in a different radial direction from the first slit. The light takes time to travel from the first disk to the second disk. With both slits rotating into the straight path of light, the one-way speed of light can be calculated from the distance between two disks, angular speed of the disks and the angular difference between two slits.
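The geometry described reduces to a one-line formula: the light's travel time between the disks, d/c, must equal the time the second slit takes to rotate through the angular offset, Δθ/ω, giving c = ωd/Δθ. A small sketch with hypothetical apparatus values:

```python
def one_way_speed(distance_m: float, omega_rad_s: float,
                  delta_theta_rad: float) -> float:
    """One-way speed of light from the two-disk setup: the travel time
    d / c equals the slit rotation time delta_theta / omega, so
    c = omega * d / delta_theta."""
    return omega_rad_s * distance_m / delta_theta_rad

# hypothetical values: disks 30 m apart spinning at 1e4 rad/s, with
# the slits offset by 1e-3 rad
print(one_way_speed(30.0, 1e4, 1e-3))  # ≈ 3.0e8 m/s
```

The numbers above are illustrative only; any real apparatus would need far finer angular resolution than this sketch suggests.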
[2649] vixra:1807.0258 [pdf]
Fifa World Cup and Gini Coefficient
In this paper we find empirical evidence that a lower Gini coefficient is associated with a higher probability of a country winning a match in the FIFA World Cup. We also study the role of the HDI in different stages of the tournament.
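For readers unfamiliar with the statistic, the Gini coefficient can be computed directly from its mean-absolute-difference definition; a small self-contained sketch with illustrative data only:

```python
def gini(values):
    """Gini coefficient from the mean-absolute-difference definition:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(values)
    mean = sum(values) / n
    mad = sum(abs(a - b) for a in values for b in values)
    return mad / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))   # 0.0 : perfect equality
print(gini([0, 0, 0, 1]))   # 0.75: maximal inequality for n = 4
```

Values near 0 indicate an egalitarian income distribution; values near 1 indicate a highly unequal one.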
[2650] vixra:1807.0257 [pdf]
Generalized Ordered Propositions Fusion Based on Belief Entropy
A set of ordered propositions describes the different intensities of a characteristic of an object, where the intensities increase or decrease gradually. A basic support function is a set of truth-values of ordered propositions; it includes a determinate part and an indeterminate part. The indeterminate part of a basic support function indicates uncertainty about all ordered propositions. In this paper, we propose generalized ordered propositions by extending the basic support function to the power set of ordered propositions. We also present an entropy, based on belief entropy, which measures the uncertainty of a basic support function. A fusion method for generalized ordered propositions is also presented. Generalized ordered propositions degenerate to classical ordered propositions when the truth-values of the non-singleton subsets of ordered propositions are zero. Some numerical examples are used to illustrate the efficiency of generalized ordered propositions and their fusion.
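The belief entropy such work commonly builds on is Deng entropy; a minimal sketch of that measure (the mapping to ordered propositions is the paper's contribution and is not reproduced here):

```python
import math

def deng_entropy(bpa):
    """Deng (belief) entropy of a basic probability assignment:
    E = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ), where |A| is the
    cardinality of the focal element A."""
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return total

# two focal elements over the frame {a, b}
print(round(deng_entropy({("a",): 0.6, ("a", "b"): 0.4}), 3))  # ≈ 1.605
```

Note that mass on multi-element focal sets contributes extra entropy, reflecting the indeterminate part of a support function.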
[2651] vixra:1807.0253 [pdf]
Partition Into Triangles Revisited
We show that if one has ever loved reading Prasolov’s books, then one can move on to reading our recent article [3] and the few words that follow to deduce that partitioning a graph into triangles is not an easy problem.
[2652] vixra:1807.0245 [pdf]
Measuring Fuzziness of Z-numbers and Its Application in Sensor Data Fusion
Real-world information is often characterized by fuzziness due to uncertainty. A Z-number is an ordered pair of fuzzy numbers and is widely used as a flexible and efficient model for dealing with fuzzy information. This paper extends the fuzziness measure to continuous fuzzy numbers. Then, a new fuzziness measure for discrete and continuous Z-numbers is proposed: the simple addition of the fuzziness measures of the two fuzzy numbers of a Z-number. It can be used to obtain a fused Z-number with the best information quality in sensor fusion applications based on Z-numbers. Some numerical examples and an application to sensor fusion illustrate the efficiency of the proposed fuzziness measure of Z-numbers.
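The proposed measure, simple addition of the fuzziness of the two components of Z = (A, B), can be sketched for discrete Z-numbers. The per-set fuzziness measure below (sum of min(mu, 1 - mu), i.e. distance to the nearest crisp set) is one common choice and an assumption on our part, not necessarily the paper's exact definition:

```python
def fuzziness(membership):
    """Fuzziness of a discrete fuzzy set: distance from the nearest
    crisp set, sum over x of min(mu(x), 1 - mu(x))."""
    return sum(min(m, 1 - m) for m in membership)

def z_number_fuzziness(a, b):
    # the paper's proposal: add the fuzziness of the restriction part A
    # and of the reliability part B of Z = (A, B)
    return fuzziness(a) + fuzziness(b)

A = [0.0, 0.5, 1.0, 0.5, 0.0]   # triangular restriction
B = [0.2, 1.0, 0.2]             # reliability
print(round(z_number_fuzziness(A, B), 6))  # 1.4
```

In a fusion setting, the Z-number with the smallest combined fuzziness would be taken as the one with the best information quality.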
[2653] vixra:1807.0239 [pdf]
Using Textual Summaries to Describe a Set of Products
When customers are faced with the task of making a purchase in an unfamiliar product domain, it might be useful to provide them with an overview of the product set to help them understand what they can expect. In this paper we present and evaluate a method to summarise sets of products in natural language, focusing on the price range, common product features across the set, and product features that impact on price. In our study, participants reported that they found our summaries useful, but we found no evidence that the summaries influenced the selections made by participants.
[2654] vixra:1807.0234 [pdf]
Making Sense of Bivector Addition
As a demonstration of the coherence of Geometric Algebra's (GA's) geometric and algebraic concepts of bivectors, we add three geometric bivectors according to the procedure described by Hestenes and Macdonald, then use bivector identities to determine, from the result, two vectors whose outer product is equal to the initial sum. In this way, we show that the procedure that GA's inventors defined for adding geometric bivectors is precisely that which is needed to give results that coincide with those obtained by calculating outer products of vectors that are expressed in terms of a 3D basis. We explain that this accomplishment is no coincidence: it is a consequence of the attributes that GA's designers assigned (or didn't) to bivectors.
[2655] vixra:1807.0224 [pdf]
A Fast Algorithm for the Demosaicing Problem Concerning the Bayer Pattern
In this paper we deal with the demosaicing problem when the Bayer pattern is used. We propose a fast heuristic algorithm consisting of three parts. In the first one, we initialize the green channel by means of an edge-directed and weighted average technique. In the second part, the red and blue channels are updated thanks to an equality constraint on the second derivatives. The third part consists of a constant-hue-based interpolation. We show experimentally that the proposed algorithm gives, on average, better reconstructions than more computationally expensive algorithms.
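The first stage, an edge-directed average for the green channel, might look schematically like this (a simplified sketch of the general idea, not the authors' exact weights):

```python
def interp_green(img, r, c):
    """Estimate the missing green value at a red/blue Bayer site (r, c)
    by averaging the green neighbours along the direction of the weaker
    gradient (a simplified edge-directed interpolation)."""
    dh = abs(img[r][c - 1] - img[r][c + 1])   # horizontal gradient
    dv = abs(img[r - 1][c] - img[r + 1][c])   # vertical gradient
    if dh < dv:                               # smoother horizontally
        return (img[r][c - 1] + img[r][c + 1]) / 2
    if dv < dh:                               # smoother vertically
        return (img[r - 1][c] + img[r + 1][c]) / 2
    return (img[r][c - 1] + img[r][c + 1]
            + img[r - 1][c] + img[r + 1][c]) / 4

# a vertical edge: left columns dark, right column bright
mosaic = [[10, 10, 200],
          [10,  0, 200],   # green missing at the centre (0 placeholder)
          [10, 10, 200]]
print(interp_green(mosaic, 1, 1))  # 10.0, averaging along the edge
```

Interpolating along rather than across the edge is what avoids the "zipper" artifacts of naive bilinear demosaicing.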
[2656] vixra:1807.0197 [pdf]
Packing Triangles is Harder Than Previously Thought
In this work, we take one problem, namely Packing Triangles, as an example of a combinatorial optimization problem. We show that if one has ever loved reading Prasolov’s books, then one should not try to find an efficient algorithm for various restricted cases of this problem.
[2657] vixra:1807.0149 [pdf]
Gauss's Law of Gravity and Observational Evidence Reveal no Solar Lensing in Empty Vacuum Space
Findings show that the rays of star light are lensed primarily in the plasma rim of the sun and hardly in the vacuum space just slightly above the rim. Since the lower boundary of this vacuum space is only a fraction of a solar radius above the solar plasma rim, it is exposed to virtually the same gravitational field. The thin plasma atmosphere of the sun appears to represent an indirect interaction involving an interfering plasma medium between the gravitational field of the sun and the rays of star light. The very same light bending equation obtained by General Relativity was derived from classical assumptions of a minimum energy path of a light ray in the plasma rim, exposed to the gravitational gradient field of the sun. The resulting calculation was found to be independent of frequency. An intense search of the star filled skies reveals a clear lack of lensing among the countless numbers of stars, where there are many candidates for gravitational lensing according to the assumptions of General Relativity. Assuming the validity of the light bending rule of General Relativity, the sky should be filled with images of Einstein rings. Moreover, a lack of evidence for gravitational lensing is clearly revealed in the time resolved images of the rapidly moving stellar objects orbiting about Sagittarius A*. Subject headings: black hole – gravitational lensing – galaxy center – plasma atmosphere – Gauss’s law
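For reference, the general-relativistic light-bending rule under discussion, for a ray with impact parameter $b$ passing a mass $M$, is

```latex
% GR deflection angle for a light ray with impact parameter b
\delta\phi = \frac{4GM}{c^{2}b},
```

which evaluates to the classic 1.75 arcseconds for a ray grazing the solar rim ($b \approx R_\odot$). The abstract's claim is that this deflection is observed only within the plasma rim, not in the vacuum just above it.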
[2658] vixra:1807.0148 [pdf]
Löschverschiebungsprinzip (Extinction Shift Principle), a Replacement for Relativity
The observational evidence in astrophysics clearly shows that if the light bending rule of General Relativity were actually valid, then the following would be the consequences: The star-abundant skies would be filled with images of Einstein rings. At high impact parameters far above the plasma rims of the stars, astrophysical observations of deep space would be totally denied to all modern astronomy: deep-space images of the skies would be completely blurred by gravitational lensing effects. The sought-after Modified Newtonian Dynamics (MOND), or simply put, the modification of Newton's law and/or General Relativity in the attempt to satisfactorily explain the prevailing observations pertaining to gravitational lensing and so-called dark matter, would be totally unnecessary. These findings are clearly supported by the observational evidence.
[2659] vixra:1807.0136 [pdf]
The Golden Ratio in the Modified Cosmological Model
The golden ratio Phi is very important in the modified cosmological model (MCM). In previous work, we have inserted it artificially rather than showing where it comes from. Where the real numbers are extended to the complex numbers for routine physical applications, we extend the complex numbers to the hypercomplex numbers and show that Phi is inherent to the transfinite structure. We formalize the transfinite concept of continuation beyond infinity. We improve upon previous motivations for deriving general relativity and the fine structure constant in the MCM, and we propose an origin for the Yang-Mills mass gap.
[2660] vixra:1807.0091 [pdf]
Triple Conformal Geometric Algebra for Cubic Plane Curves (long CGI2017/ENGAGE2017 paper in SI of MMA)
The Triple Conformal Geometric Algebra (TCGA) for the Euclidean R^2-plane extends CGA as the product of three orthogonal CGAs, and thereby the representation of geometric entities to general cubic plane curves and certain cyclidic (or roulette) quartic, quintic, and sextic plane curves. The plane curve entities are 3-vectors that linearize the representation of non-linear curves, and the entities are inner product null spaces (IPNS) with respect to all points on the represented curves. Each IPNS entity also has a dual geometric outer product null space (OPNS) form. Orthogonal or conformal (angle-preserving) operations (as versors) are valid on all TCGA entities for inversions in circles, reflections in lines, and, by compositions thereof, isotropic dilations from a given center point, translations, and rotations around arbitrary points in the plane. A further dimensional extension of TCGA, also provides a method for anisotropic dilations. Intersections of any TCGA entity with a point, point pair, line or circle are possible. TCGA defines commutator-based differential operators in the coordinate directions that can be combined to yield a general n-directional derivative.
[2661] vixra:1807.0088 [pdf]
The Mesogranulation Convective Power Spectrum
Some authors have pointed out that the observed spectrum of long-lived solar horizontal velocity shows only a single peak at wavelength ≃35 Mm, ‘supergranulation’. However, the corresponding vertical-velocity spectrum looks very different, with power shifted to higher wavenumber in a broad divided peak representing a range of plume sizes or half wavelengths from 4–12 Mm, i.e. ‘mesogranulation’. Vertical-velocity spectra derived from the Hathaway et al. (2000) SOHO-MDI 62 day full-disk Doppler-velocity spectrum and based upon the Koutchmy (1994) granulation intensity spectrum show the expected Kolmogorov-inertial and eddy-noise power-law wavenumber subranges, giving evidence for energy injection into the vertical flow around three coherent length scales ≃1.01, 5.4, and 10.5 Mm, i.e. granulation and its ‘mesogranular’ and ‘supergranular’ subsurface counterparts. The three energy-injection scales correspond reasonably to interior-model convective mixing lengths for the H I, He I, and He II 50% ionization depths, respectively. The two larger scales are in a ratio of nearly one to two, also suggestive of a possible resonant overtone structure. The horizontal supergranulation flow seems evident as a distinct scale of eddy noise without energy injection around wavelength 45.3 Mm, consistent with the first subharmonic of the supergranular subsurface counterpart.
[2662] vixra:1807.0087 [pdf]
Classical Unified Field Theory of Gravitation and Electromagnetism
According to Noether's theorem, for every differentiable symmetry of action, there exists a corresponding conserved quantity. If we assume the stationary condition as a role of symmetry, there is a conserved quantity. By using the definition of the Komar mass, one can calculate the mass in a curved spacetime. If we consider charge as a conserved quantity, it means existence of symmetry, and we can consider one more axis. In this paper, we consider an extension to five dimensions by using results of the ADM formalism. In terms of (4+1) decomposition, with an alternative surface integral, we find the rotating and charged five-dimensional metric solution and check whether it gives the mass, charge, and angular momentum exactly.
[2663] vixra:1807.0049 [pdf]
Gravitational Effect on the Final Stellar to Planetary Mass Ratio
The discovery of exoplanets and the possible future detection of exomoons lead to the question of whether any relationship holds between the mass of a host star and the mass of its planets, that is, whether the mass of planets is constrained by the final mass of the hosting star. If such a relationship exists, then several key questions can be answered. First, based on the mass of the star, what is the upper-limit mass budget available for the creation of terrestrial planets? Second, if we know the mass budget available, what is the upper limit of the mass budget for water in the system? Third, what is the probability of the formation of exomoons with sizes comparable to Earth around Jovian planets? Fourth, can we explain the rare occurrence of Jupiter-sized planets around red dwarf systems?
[2664] vixra:1807.0030 [pdf]
Fibonacci Oscillators and (P, q)-Deformed Lorentz Transformations
The two-parameter quantum calculus used in the construction of Fibonacci oscillators is briefly reviewed before presenting the $ (p, q)$-deformed Lorentz transformations which leave invariant the Minkowski spacetime interval $ t^2 - x^2 - y^2 - z^2$. Such transformations require the introduction of three different types of exponential functions leading to the $(p, q)$-analogs of hyperbolic and trigonometric functions. The composition law of two successive Lorentz boosts (rotations) is $no$ longer additive $ \xi_3 \not= \xi_1 + \xi_2$ ( $ \theta_3 \not= \theta_1 + \theta_2$). We finalize with a discussion on quantum groups, noncommutative spacetimes, $\kappa$-deformed Poincare algebra and quasi-crystals.
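The (p, q)-basic numbers underlying the Fibonacci oscillators are $[n]_{p,q} = (p^n - q^n)/(p - q)$, which obey the Fibonacci-like recursion $[n+1]_{p,q} = (p+q)[n]_{p,q} - pq\,[n-1]_{p,q}$. A small numerical sketch:

```python
def pq_number(n: int, p: float, q: float) -> float:
    """(p,q)-basic number [n]_{p,q} = (p^n - q^n)/(p - q), the spectrum
    of the Fibonacci oscillator; it satisfies the recursion
    [n+1] = (p+q)[n] - p*q*[n-1]."""
    if p == q:
        return n * p ** (n - 1)   # limiting case p -> q
    return (p ** n - q ** n) / (p - q)

# the golden-ratio choice p = phi, q = -1/phi reproduces the Fibonacci
# numbers via Binet's formula:
phi = (1 + 5 ** 0.5) / 2
print([round(pq_number(n, phi, -1 / phi)) for n in range(1, 8)])
# [1, 1, 2, 3, 5, 8, 13]
```

The ordinary bosonic integer $n$ is recovered in the limit $p, q \to 1$, which is why these oscillators are a two-parameter deformation.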
[2665] vixra:1807.0027 [pdf]
Time and Reference Frame
Time in a reference frame can be represented by a rotating ring with constant angular velocity. The rotational motion of this ring is not affected by any acceleration parallel to its axis of rotation. The rotation period remains constant as the ring accelerates along the axis of rotation. The rotation period in the rest frame of the ring is always constant. Therefore, the rotation period is independent of reference frame. The rotation period represents the elapsed time in a reference frame. As a result, the elapsed time is also independent of reference frame. Consequently, time is independent of reference frame.
[2666] vixra:1806.0467 [pdf]
Clifford Algebras :New Results
The purpose of the paper is to present new results (exponential, real structure, Cartan algebra, ...), but, as the definitions still vary among authors, the paper covers the whole domain and can be read as a comprehensive presentation of Clifford algebras.
[2667] vixra:1806.0466 [pdf]
Great Unification Theory :A Solution
The paper presents a unified model representing the gravitational, electromagnetic, weak and strong fields, fermions and bosons, in the Geometry of General Relativity. It is based on a group belonging to the Clifford algebra Cl(C,4), acting on the algebra itself. It uses an original real structure on the Clifford algebra, accounting for the physical specificities of the geometry. An explicit expression of the group, its action and of the vector states and charges of the known fermions is given. Bosons are represented as discontinuities in the derivative of the potential of the force field. No additional dimension, physical object or exotic property are required. The model appears as the continuation and extension of the Spinor model of Mechanics which holds at any scale.
[2668] vixra:1806.0439 [pdf]
Critique of the Paper "A Note on Solid-State Maxwell Demon" by Germano D’Abramo
Since Dr. Sheehan published his discovery of the solid-state Maxwell demon (SSMD), many people have attacked the concept, simply because it violates the 2nd law of thermodynamics, which they consider to be sacrosanct and inviolable. One of the opponents is Dr. D’Abramo, who has attempted to debunk the principle in several papers. His main objection to the principle of the device is that, according to him, no electrostatic field can exist within the vacuum gap of the SSMD in equilibrium. In the present paper we refute the arguments he presented in the quoted publication.
[2669] vixra:1806.0438 [pdf]
Black Holes in the Present Tense.
General relativity does not offer a meaning to ‘at the same time in a different place’. This paper examines what might be implied or inferred from the use of the present tense. I start by looking at Schwarzschild space to show that the oft made statement that “There is a singularity at the centre of a black hole” is misleading. I use this as a starting point for exploring the notion of a moment in time.
[2670] vixra:1806.0413 [pdf]
N-Dimensional AdS Related Spacetime and Its Transformation
Recently, anti-de Sitter spaces have been used in promising theories of quantum gravity such as the anti-de Sitter/conformal field theory correspondence. The latter provides an approach to string theory, which includes more than four dimensions. Unfortunately, the anti-de Sitter model contains no mass and is not able to describe our universe adequately. Nevertheless, the rising interest in higher-dimensional theories motivates a deeper look at the n-dimensional AdS spacetime. In this paper, a solution of Einstein's field equations is constructed from a modified anti-de Sitter metric in n dimensions. The idea is based on the connection between the Schwarzschild and McVittie metrics: McVittie's model, which interpolates between a Schwarzschild black hole and an expanding global Friedmann–Lemaître–Robertson–Walker spacetime, can be constructed by a simple coordinate replacement in Schwarzschild's isotropic interval, where the radial coordinate and its differential are multiplied by a time-dependent scale factor a(t). In a previous work I showed that an exact solution of Einstein's equations can analogously be generated from a static transformation of de Sitter's metric. The present article is concerned with the application of this method to an AdS (anti-de Sitter) related spacetime in n dimensions. It is shown that the resulting isotropic interval is a solution of the n-dimensional Einstein equations. Further, it is transformed into a spherically symmetric but anisotropic form, analogously to the transformation found by Kaloper, Kleban and Martin for McVittie's metric.
[2671] vixra:1806.0402 [pdf]
New Sufficient Conditions of Robust Recovery for Low-Rank Matrices
In this paper we investigate the reconstruction conditions of nuclear norm minimization for low-rank matrix recovery from a given linear system of equality constraints. Sufficient conditions are derived to guarantee the robust reconstruction in bounded $l_2$ and Dantzig selector noise settings $(\epsilon\neq0)$ or exactly reconstruction in the noiseless context $(\epsilon=0)$ of all rank $r$ matrices $X\in\mathbb{R}^{m\times n}$ from $b=\mathcal{A}(X)+z$ via nuclear norm minimization. Furthermore, we not only show that when $t=1$, the upper bound of $\delta_r$ is the same as the result of Cai and Zhang \cite{Cai and Zhang}, but also demonstrate that the gained upper bounds concerning the recovery error are better. Finally, we prove that the restricted isometry property condition is sharp.
[2672] vixra:1806.0398 [pdf]
Dark Matter and the Energy-Momentum of the Gravitational Field
The $\Lambda $ Cold Dark Matter cosmological model assumes general relativity is correct. However, the Einstein equation does not contain a symmetric tensor which describes the energy-momentum of the gravitational field itself. Recently, a modified equation of general relativity was developed which contains the missing tensor and completes the Einstein equation. An exact static solution was obtained from the modified Einstein equation in a spheroidal metric describing the gravitational field outside of its source, which does not contain dark matter. The flat rotation curves for a class of galaxies were calculated and the baryonic Tully-Fisher relation followed directly from the gravitational energy-momentum tensor. The Newtonian rotation curves for galaxies with no flat orbital curves, and those with rising rotation curves for large radii were described as examples of the flexibility of the orbital rotation curve equation.
[2673] vixra:1806.0397 [pdf]
Gravity Without Newton’s Gravitational Constant and no Knowledge of Mass Size
In this paper, we show that the Schwarzschild radius can be extracted easily from any gravitationally-linked phenomena without having knowledge of Newton's gravitational constant or the mass size of the gravitational object. Further, the Schwarzschild radius can be used to predict any gravity phenomena accurately, again without knowledge of Newton's gravitational constant and also without knowledge of the size of the mass, although this may seem surprising at first. Hidden within the Schwarzschild radius are the mass of the gravitational object, the Planck mass, and the Planck length, which we will assert contain the secret essence related to gravity, in addition to the speed of light, (the speed of gravity). This seems to indicate that gravity is quantized, even at the cosmological scale, and this quantization is directly linked to the Planck units. This also supports our view that Newton's gravitational constant is a universal composite constant of the form G=l_p^2c^3/hbar, rather than relying on the Planck units as a function of G. This does not mean that Newton's gravitational constant is not a universal constant, but rather that it is a composite universal constant, which depends on the Planck length, the speed of light, and the Planck constant. This is, to our knowledge, the first paper that shows how a long series of major gravity predictions and measurements can be completed without any knowledge of the mass size of the object, or Newton's gravitational constant. As a minimum, we think it provides an interesting new angle for evaluating existing theories of gravitation, and it may even provide a hint on how to combine quantum gravity with Newton and Einstein gravity.
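The composite form $G = l_p^2 c^3/\hbar$ claimed above can be checked numerically against CODATA values. A minimal sketch (note that the tabulated Planck-length value is itself conventionally derived from a measured G, so this verifies internal consistency rather than an independent derivation):

```python
# Consistency check of G = l_p^2 * c^3 / hbar using CODATA 2018 values.
# The Planck length below presupposes a measured G, so this is a
# round-trip check, not an independent determination.
l_p = 1.616255e-35      # Planck length, m
c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J*s

G = l_p**2 * c**3 / hbar
print(G)  # ~6.674e-11 m^3 kg^-1 s^-2, Newton's constant
```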
[2674] vixra:1806.0389 [pdf]
Residual Annual and Diurnal Periodicities of the Pioneer 10 Acceleration Term Resolved in Absolute CMB Rest Frame
<html> The phenomenon of the residual, so far unexplained annual and diurnal tracking signal variations on top of the constant acceleration term <em>Anderson & Laing & Lau & et al. (2002)</em>, is resolved by applying the general, classical Doppler formula (CMB-Doppler formula) of first order for two-way radio Doppler signals in the fundamental rest frame of the isotropic cosmic microwave background radiation (CMB) between earthbound Deep Space Network stations (DSN), and the Pioneer 10 space probe (P 10). The anomalous annual and diurnal variations of the constant acceleration term vanish, if instead of the relativistic Standard-Doppler formula (SRT-Doppler formula) of first and second order the CMB-Doppler formula is used. That formula contains the absolute velocities <b>u</b><sub>e</sub> of Earth, and <b>u</b><sub>pio</sub> of P 10, derived from the absolute velocity <b>u</b><sub>sun</sub> of the solar system barycenter in the CMB, with <em>u<sub>sun</sub></em> = 369.0 ± 0.9 km/s, and the relative revolution velocity <b>v</b><sub>e</sub> of Earth, and the relative velocity <b>v</b><sub>pio</sub> of P 10 in the heliocentric frame from January 1987 until December 1996. The flyby radio Doppler and the proportional ranging data anomalies can be resolved as well by using the CMB-Doppler formula with the absolute, asymptotic velocities of the inbound and outbound flights during a gravity assist maneuver, which have usually slightly different magnitudes, inducing the so far unexplained frequency shift, and the unexplained difference in the ranging data, although the relative velocities are equal. </html>
[2675] vixra:1806.0378 [pdf]
On the Origin of Extraterrestrial Industrial Civilizations
The recent discovery of billions of habitable planets within the Milky Way alone, together with a practical route to nuclear fusion via the Project PACER approach, suggests that any habitable planet with intelligent life should be able to expand beyond its home planet and colonize the galaxy within a relatively short time. Given the absence of detection by SETI over the past few decades, we take it that no other industrial civilization exists within the galaxy, and we support the rare-earth and rare-intelligence hypotheses by applying rigorous astronomical and geological filters that reduce the pool of candidate civilization-hosting planets to < 1 per galaxy. The total number of habitable extraterrestrial planets within the Milky Way capable of supporting advanced, intelligent life within the next 500 Myr is then < 969, most of them Earth-like planets orbiting a single star with mass between 0.712 and 1 solar mass. No exomoons are capable of supporting advanced life, and a negligible number of low-mass binary systems (< 0.712 solar mass) are habitable. Even among these habitable planets, the emergence of intelligence remains rare and must be a relatively recent phenomenon. By specifying a species as a combination and permutation of traits acquired through evolutionary time, a multinomial distribution profile of species can be constructed; those with fewer traits are the most common. A particular multinomial distribution is built to model the emergence of civilization by treating Homo sapiens as an outlier, with the deviation calculated from the known cranial capacity of Homo sapiens and the explosive growth of angiosperms. The multinomial distribution is then transformed and approximated into a more tractable, generalized multivariate time-dependent exponential lognormal distribution to model biological evolution from the perspective of man.
Most surprisingly, given that the emergence chance of civilization decreases exponentially into the past, as predicted by the distribution model, a wall of semi-invisibility exists due to the relativistic time delay of signal arrival at cosmological distances, so that the universe appears empty even if a significant portion of space is already occupied. The nearest extraterrestrial industrial civilization lies at least 51.85 million light years away, and possibly at least 100 million light years or beyond. Based on the starting model, no extraterrestrial civilization arises before 119 Mya within the observable universe, and none before 138 Mya within the universe by comoving distance. Despite the great distances between the nearest civilizations and the low probability of emergence within our vicinity, given the sheer size of the universe the total number of intelligent extraterrestrial civilizations likely approaches infinity, or $\left(\frac{1}{4.4\cdot10^{7}}\right)^{3}\cdot3.621\cdot10^{6}\cdot10^{10^{10^{122}}}$ if the universe is finitely bounded. Based on incentives for economic growth, and assuming wormholes shorten cosmic distances, all civilizations tend to expand near the speed of light and will eventually connect with each other via wormhole networks. Within such a network, the farthest distance traversable from Earth is either infinite or $3.621\cdot10^{6}\cdot10^{10^{10^{122}}}$ light years in radius if the universe is finitely bounded. This work is distinguished from, and enhances, previous works on SETI by focusing on the biological and statistical aspects of the evolution of intelligence; statistical distributions can serve as indispensable tools for SETI to model the pattern and behavior of civilizations' emergence and development, bridging the interdisciplinary gap between the astrophysical, biological, and social aspects of extraterrestrial study.
[2676] vixra:1806.0373 [pdf]
Derivation of the Local Lorentz Gauge Transformation of a Dirac Spinor Field in Quantum Einstein-Cartan Theory
I examine the groups which underly classical mechanics, non-relativistic quantum mechanics, special relativity, relativistic quantum mechanics, quantum electrodynamics, quantum flavourdynamics, quantum chromodynamics, and general relativity. This examination includes the rotations SO(2) and SO(3), the Pauli algebra, the Lorentz transformations, the Dirac algebra, and the U(1), SU(2), and SU(3) gauge transformations. I argue that general relativity must be generalized to Einstein-Cartan theory, so that Dirac spinors can be described within the framework of gravitation theory.
[2677] vixra:1806.0343 [pdf]
Multi-barycenter Mechanics, N-body Problem and Rotation of Galaxies and Stars
In this paper, a systematic multi-barycenter mechanics is established on the basis of multi-particle mechanics. The new theory perfects the basic theoretical system of classical mechanics: it discovers the law of mutual interaction between particle groups, reveals the limitations of Newton's third law, finds the principle of the internal relationship between gravity and tidal force, and reasonably explains the origin and evolution of the rotational angular momentum of galaxies and stars. By applying the new theory, the N-body problem can be transformed into a special two-body problem, for which a simulation solution method is proposed; the motion law of each particle can then be roughly obtained.
[2678] vixra:1806.0339 [pdf]
Some Remarks on the Clique Problem
We survey some information about the clique problem, one of the cornerstones of research in the theory of algorithms. A short note at the end of the paper should be read with care, at the reader's own discretion.
[2679] vixra:1806.0323 [pdf]
Quake Tomato: Strange Electrical Signals from a Tomato Plant in Taiwan Five Days Before the 2008 Sichuan M8.0 Earthquake
Five days before the 2008 Sichuan M8.0 earthquake, I observed strange electrical signals from a tomato plant in Yilan, Taiwan. That observation opened my door to earthquake forecasting. Since then, I have observed electrical signals from plants, tofu, soil, water and air to predict earthquakes, and have successfully predicted many quakes. I now have about 30 quake forecast stations all over the world. I will publish a series of papers on my discoveries of the past 10 years; this paper is the start of the series. I am Founder and CEO of the Taiwan Quake Forecast Institute.
[2680] vixra:1806.0316 [pdf]
A Very Simple Single Electron Lamb Shift Approximation
The Lamb shift was discovered by Willis Lamb and measured for the first time in 1947 by Lamb and Retherford [1, 2, 3] on the hydrogen microwave spectrum. We suggest that the Lamb shift can be approximated by a very simple function that seems accurate enough for most experimenters working with elements where relativistic effects of the electron are minimal, that is, up to element 80 or so. Even if our new approximation does not show anything new in physics, we think it can be useful for experimenters and students of quantum physics and chemistry; now everyone can calculate the Lamb shift on the back of an envelope.
[2681] vixra:1806.0286 [pdf]
An End-to-End Model of Predicting Diverse Ranking on Heterogeneous Feeds
As an external aid for online shopping, multimedia content (feeds) plays an important role in the e-commerce field. Feeds in the formats of posts, item lists and videos bring in richer auxiliary information and more authentic assessments of commodities (items). At Alibaba, the largest Chinese online retailer, besides the traditional item search engine (ISE), a content search engine (CSE) is utilized for feed recommendation as well. However, the diversity of feed types raises a challenge for the CSE when ranking heterogeneous feeds. In this paper, a two-step end-to-end model comprising Heterogeneous Type Sorting and Homogeneous Feed Ranking is proposed to address this problem. In the first step, an independent Multi-Armed Bandit (iMAB) model is proposed first, and an improved personalized Markov Deep Neural Network (pMDNN) model is developed later on. In the second step, an existing Deep Structured Semantic Model (DSSM) is utilized for homogeneous feed ranking. A/B tests in the Alibaba production environment show that, by considering user preference and feed type dependency, the pMDNN model significantly outperforms the iMAB model in solving the heterogeneous feed ranking problem.
[2682] vixra:1806.0282 [pdf]
The Theory of Disappearance and Appearance
It is known that quantum mechanics is one of the most successful theories in the entire history of physics. Nevertheless, many believe that its foundations are still not really understood: wave-particle duality, interference, entanglement, quantum tunneling, the uncertainty principle, the vacuum catastrophe, wave collapse, the relation between classical mechanics and quantum mechanics, the classical limit, quantum chaos, etc. The continuing failures to unify relativity theory and quantum theory may be an indication of a problem in the foundations. This paper aims at a first small step on the path to solving and understanding these quantum puzzles; in fact, the key is to understand the reality of motion and how it occurs. This paper proposes a model of motion with a new action principle, similar to the principle of least action, called the "alike action principle". We have been able to deduce the principles of quantum mechanics from it, so that the oddity of the quantum becomes easier to understand and interpret; for example, this paper proposes a solution to the vacuum catastrophe, gives the origin of dark energy, and shows that the basic law of motion must be broader than both quantum mechanics and classical mechanics.
[2683] vixra:1806.0255 [pdf]
Stabilized QFT: Verification Supplement
This paper does two things: (1) it recaps the method of stabilized amplitudes that resolves divergence issues in QFT without infinite charge and mass renormalizations, and (2) it presents a detailed case study which verifies that stabilized amplitudes agree with renormalization for radiative corrections in Abelian and non-Abelian gauge theories.
[2684] vixra:1806.0250 [pdf]
The Pagerank Algorithm: Theory & Implementation in Scilab
Search engines are major power factors on the Web, guiding people to information and services. Google is the most successful search engine of recent years; its search results are very complete and precise. When Google was an early research project at Stanford, several articles were written describing the underlying algorithms. The dominant algorithm is called PageRank and is still the key to providing accurate rankings for search results. A key feature of web search engines is sorting the results associated with a query in order of importance or relevance. We present a model that defines a quantification of this a priori fuzzy concept (PageRank), and elements of formalization for the numerical resolution of the problem. We begin with a natural first approach that is unsatisfactory in some cases; a refinement of the algorithm is then introduced to improve the results.
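The paper above implements PageRank in Scilab; as an illustration of the underlying power-iteration idea, here is a minimal sketch in Python (not the paper's code) on a hypothetical four-page link graph:

```python
# Minimal PageRank by power iteration on a toy 4-page link graph.
# Damping d = 0.85 as in the original algorithm; a dangling page
# (no outlinks) is treated as linking to every page.
def pagerank(links, n, d=0.85, iters=100):
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for page in range(n):
            out = links.get(page, [])
            if not out:  # dangling node: spread its rank uniformly
                for q in range(n):
                    new[q] += d * rank[page] / n
            else:
                for q in out:
                    new[q] += d * rank[page] / len(out)
        rank = new
    return rank

# Toy web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
toy = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
scores = pagerank(toy, 4)
print(scores)  # page 2, with three inlinks, accumulates the highest rank
```

With the dangling-node correction the scores remain a probability distribution, so they sum to 1 at every iteration.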
[2685] vixra:1806.0248 [pdf]
What is de Broglie's Wave-Particle?
According to the theory of dynamic space, the inductive-inertial phenomenon and its forces are developed, which act on the electric units of the dynamic space, forming the grouping units (namely electric charges, or forms of the electric field). Thus the nature of the magnetic forces is explained: they are Coulomb electric forces between these grouping units, created by the accelerated electron. Additionally, it is proved that the so-called de Broglie wave-particle is a motion wave (wave-like form) resulting from the dynamics of the extremely fine texture of the particle's motion.
[2686] vixra:1806.0238 [pdf]
Linear Programming Solves Biclique Problems, Flaws in Literature Proof
The study of perebor dates back to Soviet-era mathematics, especially the 1980s [1]. Post-Soviet mathematicians have been working on many problems in combinatorial optimization. One of them is the Maximum Edge Biclique Problem (MBP). In [2], the author proves that MBP is NP-complete. In this note, we give a polynomial-time algorithm for MBP using linear programming (LP). Thus, some flaw needs to be found in Peeters's work. We leave this to the community.
[2687] vixra:1806.0179 [pdf]
Exact Weight Perfect Matching of Bipartite Graph Problem Simplified
The study of perebor dates back to Soviet-era mathematics, especially the 1980s [1]. Post-Soviet mathematicians have been working on many problems in combinatorial optimization. One of them is Exact Weight Perfect Matching of Bipartite Graph (EWPM). This particular problem has been thoroughly considered in [2], [3], [4]. In this note, we give a simpler proof of the solvability of EWPM.
[2688] vixra:1806.0119 [pdf]
The Pauli Objection Addressed in a Logical Way
One of the greatest unsolved problems in quantum mechanics relates to time operators. Since the Pauli objection was first raised in 1933, time has been treated only as a parameter in quantum mechanics, not as an operator. The Pauli objection asserts that a proper time operator would have to be Hermitian and self-adjoint, which Pauli pointed out is actually not possible. Some theorists have gone so far as to claim that time between events does not exist in the quantum world. Others have explored various ideas for establishing an acceptable type of time operator, such as a dynamic time operator, or an external clock standing just outside the framework of the Pauli objection. However, none of these methods seems completely sound. We think a better approach is to develop a deeper understanding of how elementary particles can themselves be seen as ticking clocks, and to examine more broadly how they relate to time.
[2689] vixra:1806.0082 [pdf]
Derivation of the Limits of Sine and Cosine at Infinity
This paper examines some familiar results from complex analysis in the framework of hypercomplex analysis. It is usually taught that the oscillatory behavior of sine waves means that they have no limit at infinity but here we derive definite limits. Where a central element in the foundations of complex analysis is that the complex conjugate of a C-number is not analytic at the origin, we introduce the tools of hypercomplex analysis to show that the complex conjugate of a *C-number is analytic at the origin.
[2690] vixra:1806.0079 [pdf]
Gravitational Collapse and the Virial Theorem: Insight from the Laws of Thermodynamics
Application of the virial theorem, when combined with results from the kinetic theory of gases, has been linked to gravitational collapse when the mass of the resulting assembly is greater than the Jeans mass, MJ. While the arguments appear straightforward, the incorporation of temperature into these equations, using kinetic theory, results in a conflict with the laws of thermodynamics. Temperature must always be viewed as an intensive property. However, it is readily demonstrated that this condition is violated when the gravitational collapse of a free gas is considered using these approaches. The result implies star formation cannot be based on the collapse of a self-gravitating gaseous mass.
[2691] vixra:1806.0078 [pdf]
Kirchhoff's Law of Thermal Emission: Blackbody and Cavity Radiation Reconsidered
Kirchhoff’s law of thermal emission asserts that, given sufficient dimensions to neglect diffraction, the radiation contained within arbitrary cavities must always be black, or normal, dependent only upon the frequency of observation and the temperature, while independent of the nature of the walls. In this regard, it is readily apparent that all cavities appear black at room temperature within the laboratory. However, two different causes are responsible: 1) cavities made from nearly ideal emitters self-generate the appropriate radiation, while 2) cavities made from nearly ideal reflectors are filled with radiation contained in their surroundings, completely independent of their own temperature. Unlike Kirchhoff’s claims, it can be demonstrated that the radiation contained within a cavity is absolutely dependent on the nature of its walls. Real blackbodies can do work, converting any incoming radiation or heat to an emission profile corresponding to the Planckian spectrum associated with the temperature of their walls. Conversely, rigid cavities made from perfect reflectors cannot do work. The radiation they contain will not be black but, rather, will reflect any radiation which was previously incident from the surroundings in a manner independent of the temperature of their walls.
[2692] vixra:1806.0073 [pdf]
Matter: How to Count It? and an Introduction to Quantum Different Phases of Matter
Today scientists believe that all "particles" also have a "wave nature" (and vice versa). This phenomenon has been verified not only for elementary particles, but also for compound particles like molecules and even atoms. Light (the photons of a light beam) can be considered a "wave-like energy": a wave-particle containing only elementary matter and speed. We can use the Einstein and Planck equations to determine the amount of energy that makes up a sample photon. But to date we cannot measure the matter itself; therefore we define a simple unit that lets us measure the matter.
[2693] vixra:1806.0066 [pdf]
Quantum Gravity Unification Model with Fundamental Conservation
A fundamental conservation and symmetry is proposed as a unification between General Relativity (GR) and Quantum Theory (QT). Unification is then demonstrated across multiple applications. First, it is applied to cosmological redshift z and energy density ρ. Then, a local-system galaxy rotation curve is examined. Next, it is applied to quantum mechanics' "time problem": absolute and relative notions of time are shown to be reconcilable, as are renormalization values between scales. Finally, it is applied to the Cosmological Constant: the discrepancy between the vacuum energy density in GR at critical density, $\rho_{cr} = 3H^2/8\pi G = 1.88\,h^2 \times 10^{-29}\,$g/cm$^3$, and the much greater zero-point energy value calculated in quantum field theory (QFT) with a Planck-scale ultraviolet cutoff, $\rho_{hep} = M^4c^3/h^3 = 2.44 \times 10^{91}\,$g/cm$^3$, is resolved to null orders of magnitude.
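The critical-density figure quoted in the abstract is straightforward to reproduce. A short sketch, assuming a Hubble constant of H0 = 70 km/s/Mpc (h = 0.7, a value not taken from the abstract):

```python
import math

# Critical density rho_cr = 3 H^2 / (8 pi G), evaluated for an
# assumed H0 = 70 km/s/Mpc (h = 0.7); this reproduces the familiar
# 1.88 h^2 x 10^-29 g/cm^3 figure, i.e. about 9.2e-30 g/cm^3.
G = 6.674e-11          # m^3 kg^-1 s^-2
Mpc = 3.0857e22        # metres per megaparsec
H0 = 70e3 / Mpc        # Hubble constant in s^-1
rho_cr = 3 * H0**2 / (8 * math.pi * G)  # kg/m^3
rho_cgs = rho_cr * 1e-3                 # kg/m^3 -> g/cm^3
```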
[2694] vixra:1806.0065 [pdf]
Proof of the Goldbach Conjecture
This paper aims to prove that any even number larger than 2 can be written as the sum of two prime numbers, known as the Goldbach conjecture (the "strong" Goldbach conjecture about even numbers). In testing that every even number greater than or equal to 6 conforms to the conjecture, we accidentally discovered an "additivity" property of the primes and verified it under further expansion. This article does not focus on functional expressions for the primes themselves, but takes a different approach to prove that all even numbers can be composed of two primes.
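The conjecture's statement is easy to test computationally. A brute-force sketch (not from the paper) verifying it for small even numbers:

```python
# Brute-force check of the Goldbach conjecture for small even numbers:
# every even n >= 4 up to the bound decomposes as p + q with p, q prime.
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return (p, q) with p + q == n and both prime, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

for n in range(4, 1000, 2):
    assert goldbach_pair(n) is not None
print(goldbach_pair(100))  # (3, 97)
```

Of course, no finite check settles the conjecture; this only illustrates the statement being claimed.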
[2695] vixra:1806.0052 [pdf]
Identities for Second Order Recurrence Sequences
We derive several identities for arbitrary homogeneous second order recurrence sequences with constant coefficients. The results are then applied to present a harmonized study of six well known integer sequences, namely the Fibonacci sequence, the sequence of Lucas numbers, the Jacobsthal sequence, the Jacobsthal-Lucas sequence, the Pell sequence and the Pell-Lucas sequence.
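The abstract does not list the identities themselves; as one standard sample of the kind of result available for such sequences, the generalized Cassini identity $W_{n-1}W_{n+1} - W_n^2 = (-q)^{n-1}(W_0W_2 - W_1^2)$ for $W_n = pW_{n-1} + qW_{n-2}$ can be checked numerically across several of the six sequences named:

```python
# Generic homogeneous second-order recurrence W_n = p*W_{n-1} + q*W_{n-2}.
# The Fibonacci, Lucas, Pell and Jacobsthal sequences are instances;
# we check the generalized Cassini identity
#   W_{n-1}*W_{n+1} - W_n^2 == (-q)^(n-1) * (W_0*W_2 - W_1^2)
def seq(w0, w1, p, q, length):
    w = [w0, w1]
    while len(w) < length:
        w.append(p * w[-1] + q * w[-2])
    return w

fib = seq(0, 1, 1, 1, 20)    # Fibonacci
lucas = seq(2, 1, 1, 1, 20)  # Lucas numbers
pell = seq(0, 1, 2, 1, 20)   # Pell
jac = seq(0, 1, 1, 2, 20)    # Jacobsthal

for w, p, q in [(fib, 1, 1), (lucas, 1, 1), (pell, 2, 1), (jac, 1, 2)]:
    c0 = w[0] * w[2] - w[1] ** 2
    for n in range(1, 19):
        assert w[n - 1] * w[n + 1] - w[n] ** 2 == (-q) ** (n - 1) * c0
```

For Fibonacci ($W_0=0$, $W_1=1$, $q=1$) this reduces to the familiar $F_{n-1}F_{n+1} - F_n^2 = (-1)^n$.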
[2696] vixra:1806.0051 [pdf]
Weighted Tribonacci sums
We derive various weighted summation identities, including binomial and double binomial identities, for Tribonacci numbers. Our results contain some previously known results as special cases.
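The weighted identities themselves are not reproduced in the abstract; as an elementary unweighted sample of a Tribonacci summation identity, one can verify $\sum_{k=0}^{n} T_k = (T_{n+2} + T_n - 1)/2$:

```python
# Tribonacci numbers T_0=0, T_1=1, T_2=1, T_n = T_{n-1}+T_{n-2}+T_{n-3}.
# Check the elementary sum identity 2*sum_{k<=n} T_k == T_{n+2}+T_n-1.
T = [0, 1, 1]
for _ in range(30):
    T.append(T[-1] + T[-2] + T[-3])

for n in range(0, 28):
    assert 2 * sum(T[: n + 1]) == T[n + 2] + T[n] - 1
```

The identity follows by induction: adding $2T_{n+1}$ to both sides and using the recurrence turns $T_{n+2}+T_n-1$ into $T_{n+3}+T_{n+1}-1$.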
[2697] vixra:1806.0049 [pdf]
CSP Solver and Capacitated Vehicle Routing Problem
In this paper, we present several models for the Capacitated Vehicle Routing Problem (CVRP) using the Choco solver. A concise introduction to constraint programming methods is included. We then construct two models for CVRP. Experimental results for each model are given in detail.
[2698] vixra:1806.0044 [pdf]
Deep Learning on MIMIC-III: Predicting Mortality Within 24 Hours
This project describes data mining on the MIMIC-III database. The objective is to predict in-hospital death using MIMIC-III. The project follows the Knowledge Discovery in Databases (KDD) process: 1. Selection and extraction of a multivariate time-series dataset from a database of millions of rows by writing SQL queries. 2. Preprocessing and cleaning the time series into a tidy dataset by exploring the data, handling missing data (missing-data rate > 50%), and removing noise/outliers. 3. Development of a predictive model that assigns a severity indicator (mortality probability) to the biomedical time series by implementing several algorithms, such as gradient-boosted decision trees and k-NN (k-nearest neighbors) with the DTW (dynamic time warping) algorithm. 4. Result: a 30% improvement in F1 score (a measure of a test's accuracy) compared with the medical scoring index (SAPS II).
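The dynamic time warping (DTW) distance paired with k-NN in the pipeline above can be sketched minimally as follows (an illustrative implementation, not the project's code):

```python
# Minimal dynamic time warping (DTW) distance between two numeric
# series, of the kind used to compare clinical time series with k-NN.
def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0: warping absorbs the repeated 2
```

Unlike the Euclidean distance, DTW tolerates the local stretching and compression typical of irregularly sampled vital-sign series, which is why it pairs naturally with k-NN here.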
[2699] vixra:1806.0024 [pdf]
The Time Scale of Gravitational Collapse
In a previous article it was shown that the end state for the dust metric of Oppenheimer and Snyder has most of its mass concentrated just inside the gravitational radius; it is proposed that the resulting object be considered as an idealized \emph{shell collapsar}. Here the treatment is extended to include the family of interior metrics described by Weinberg, and involving the curvature parameter of a Friedmann metric. The end state is again a shell collapsar, with a shell which becomes more concentrated as the curvature parameter increases, which shows that the details of the shell structure are dependent on the initial density profile at the beginning of the collapse. What is lacking in most previous commentaries on the Oppenheimer-Snyder article is the recognition that their matching of the time coordinate at the surface implies a finite upper limit for the comoving time coordinate. A collapse process having all the matter going inside the gravitational radius would require comoving times which go outside that limit.
[2700] vixra:1806.0020 [pdf]
Advances of the New Century: It’s All About the Wavefunction
The 2018 Physics Today essay competition invites participants to identify a ‘significant advance’ in his or her field since the millennium that deserves wider recognition among non-experts, and to write an essay that describes the advance, how it was made, and why it’s important[1]. This essay takes quantum mechanics to be the field of interest, introducing ‘non-experts’ to a new synthesis of math and physics, of geometry and fields, a computationally precise yet intuitive representation of wavefunctions and their interactions at all scales, allowing for a common sense interpretation of quantum phenomena and resolution of most if not all quantum paradoxes. It’s all about the wavefunction, the foundation, fundamental, quantum philosophy, quantum logic. As yet we are all non-experts.
[2701] vixra:1806.0019 [pdf]
Preceding: Atomic Internal Gravitational Waves and Shock Waves: Electromagnetic Charge Cannot Hold a Positron Near a Proton Both with Positive Charges, But the Gravitational Waves Make it Possible
Mostly, the destructive force of the internal (atomic) wave-particles, which we call microscopic shock waves, emitted chiefly by the nuclei, together with the external (galactic gravitonic and photonic) wave-particles directed towards the nuclei, tends to make the nuclei unstable. A higher rate of incoming energy increases the internal energy of atoms, and so the energy of these sub-atomic particles; together with higher entropy (greater energy dispersal), this produces powerful microscopic shock waves from the sub-atomic wave-particles. These shock waves are not strong enough to be measured for atomic or celestial objects, yet their destructive power can potentially destroy nearby smaller and weakly confined objects.
[2702] vixra:1806.0005 [pdf]
On Finsler Geometry, MOND and Diffeomorphic Metrics to the Schwarzschild Solution
We revisit the construction of diffeomorphic but $not$ isometric solutions to the Schwarzschild metric. The solutions relevant to Black Holes are those which require the introduction of non-trivial areal-radial functions that are characterized by the key property that the radial horizon's location is $displaced$ continuously towards the singularity ($ r = 0 $). In the limiting case scenario the location of the singularity and horizon $merges$ and any infalling observer hits a null singularity at the very moment he/she crosses the horizon. This fact may have important consequences for the resolution of the firewall problem and the complementarity controversy in black holes. Next we show how modified Newtonian dynamics (MOND) can be obtained from solutions to Finsler gravity, and which in turn, can also be modeled by metrics which are diffeomorphic but not isometric to the Schwarzschild metric. The key point now is that one will have to dispense with the asymptotic flatness condition, by choosing an areal radial function which is $finite$ at $ r = \infty$. Consequently, changing the boundary condition at $ r = \infty$ leads to MONDian dynamics. We conclude with some discussions on the role of scale invariance and Born's Reciprocal Relativity Theory based on the existence of a maximal proper force.
[2703] vixra:1805.0520 [pdf]
An English-Hindi Code-Mixed Corpus: Stance Annotation and Baseline System
Social media has become one of the main channels for people to communicate and share their views with society. We can often detect from these views whether the person is in favor of, against, or neutral towards a given topic. These opinions from social media are very useful for various companies. We present a new dataset that consists of 3545 English-Hindi code-mixed tweets with opinions towards the Demonetisation implemented in India in 2016, which was followed by a large countrywide debate. We present a baseline supervised classification system for stance detection developed using the same dataset, which uses various machine learning techniques to achieve an accuracy of 58.7% on 10-fold cross validation.
[2704] vixra:1805.0519 [pdf]
A Corpus of English-Hindi Code-Mixed Tweets for Sarcasm Detection
Social media platforms like Twitter and Facebook have become two of the largest mediums used by people to express their views towards different topics. The generation of such large user data has made NLP tasks like sentiment analysis and opinion mining much more important. Using sarcasm in texts on social media has become a popular trend lately. Sarcasm reverses the meaning and polarity of what is implied by the text, which poses a challenge for many NLP tasks. The task of sarcasm detection in text is gaining more and more importance for both commercial and security services. We present the first English-Hindi code-mixed dataset of tweets marked for the presence of sarcasm and irony, where each token is also annotated with a language tag. We present a baseline supervised classification system developed using the same dataset, which achieves an average F-score of 78.4 after using a random forest classifier and performing 10-fold cross validation.
[2705] vixra:1805.0478 [pdf]
Skyrme Model, Wess-Zumino Anomaly, Quark Model, and Consistent Symmetry Breaking
The original Skyrme lagrangian needs to be supplemented with a Wess-Zumino anomaly term to ensure proper quantization. This is our Skyrme-Wess-Zumino model here. In this model, we show that the study of the electric charges is a very discriminating property: it provides powerful statements as to how the two-flavour group SU(2) may be embedded in the three-flavour group SU(3). The subsequent symmetry breaking is found to be very different from the one necessary in the SU(3) quark model. The Skyrme-Wess-Zumino model leads to a unique and unambiguous symmetry breaking process. It is known that all Irreducible Representations given by triangle diagrams for SU(3) are 3, 6, 10, 15, 21 etc. dimensional states. The triplet, being the lowest dimensional one, plays the most crucial and basic role here. This leads to the composite Sakaton emerging as the proper Irreducible Representation of the flavour group SU(3) in the Skyrme-Wess-Zumino model.
[2706] vixra:1805.0475 [pdf]
A New Physical Constant from the Ratio of the Reciprocal of the “Rydberg Constant” to the Planck Length
This study presents a unique set of solutions, using empirically determined physical quantities, in achieving a novel dimensionless constant $(1/R_\infty)/\ell_P$, the ratio of the inverse of the Rydberg constant to the Planck length. It is shown that the Lorentz scalar coming into play, which we dub the Parana constant, necessitates interpreting the gravitational constant G as being neither universal nor Lorentz invariant. Likewise, the elementary charge in the MKS system should not by itself be considered Lorentz invariant, but the term $e^2/\varepsilon_0$, including its powers, ought to be. That being the case, the “Rydberg constant” must not, according to the present undertaking, be deemed a ubiquitous magnitude either, but the ratio of its reciprocal to the Planck length would, in effect, be. The Parana constant is furthermore shown to be meaningful as the proportion of the Planck mass to the electron rest mass. Throughout our derivations, we take the opportunity to reveal interesting features and deliberate over them.
[2707] vixra:1805.0467 [pdf]
Perfect Symmetry: A Short Philosophical Note on Math Rebuilt from Ancient Atomism
In this paper we point out an interesting asymmetry in the rules of fundamental mathematics between positive and negative numbers. We suggest an alternative (additional) number system rooted in ancient atomism; see, for example, Guthrie (1965); Pyle (1995); Taylor (1999); Grellard and Robert (2009). At several points in history, leading philosophers and scientists have thought that atomism could be the fundamental ex- planation of everything. Ancient atomism basically claims that everything consists of indivisible particles traveling in the void. If this is true, then we will see that everything comes in quanta (whole units) and fractional parts do not exist. Accordingly, in math developed from atomism there are no imaginary numbers.
[2708] vixra:1805.0455 [pdf]
The `constant Lagrangian' Fit of Galaxy Rotation Curves as Caused by Hubble Space Expansion Under Baryonic Energy Conservation Conditions.
In my opinion, the problem of the galaxy rotation curves can be solved on the basis of the combined and competing principles of space expansion and energy conservation on the one hand, and gravitational contraction and the virial theorem on the other. Thus far it has been assumed that the existence of galaxies is proof of the dominance of gravitational contraction in galaxies. In this paper I present arguments in favor of my conviction that this assumption is wrong and that space expansion is one of the two dominating and competing principles active in galaxies. I propose to reset, rethink and rescale the presupposed border between Newtonian gravitational contraction and Hubble space expansion. This effectively identifies effects in galaxies attributed to Dark Matter as Dark Energy manifestations.
[2709] vixra:1805.0402 [pdf]
A Categorization and Analysis of the `constant Lagrangian' Fits of the Galaxy Rotation Curves of the Complete Sparc Database of 175 LTG Galaxies.
In this paper I categorize and analyze the `constant Lagrangian' model fits I made of the complete SPARC database of 175 LTG galaxies. Of the 175 galaxies, 45 allowed a single-fit rotation curve, about 26 percent. Another 2 galaxies could almost be plotted on a single fit. A further 36 galaxies could be fitted really nicely on crossing dual curves. The reason for the appearance of this dual curve, in its two versions, could be given and related to the galactic constitution and dynamics. Another 25 galaxies could be fitted on parallel transition dual curves; this appearance could also be related to galactic dynamics and galactic mass distribution. Then there were the 19 multiple-fit, complex extended galaxies, the complexities of which could be analyzed on the basis of the 4 types of dual fits. In total, 128 of the 175 galaxies could be fitted and analyzed very well to reasonably well within the error margins, a 73 percent positive rate. This result rules out stochastic coincidence as an explanation of those fits. In my opinion, the success of the `constant Lagrangian' approach indicates that the problem of the galaxy rotation curves, perceived as a virial theorem problem, can be solved solely on the basis of the Lagrangian formulation of the principle of conservation of energy, when applied to this domain existing in between Newton's and Einstein's theories of gravity.
[2710] vixra:1805.0355 [pdf]
Boundary Matrices and the Marcus-de Oliveira Determinantal Conjecture
We present notes on the Marcus-de Oliveira conjecture. The conjecture concerns the region in the complex plane covered by the determinants of the sums of two normal matrices with prescribed eigenvalues. Call this region ∆. This paper focuses on boundary matrices of ∆. We prove two theorems regarding these boundary matrices.
[2711] vixra:1805.0342 [pdf]
A `constant Lagrangian' Fit of the Galaxy Rotation Curves of the Complete Sparc Database of 175 Galaxies.
In this paper I apply the `constant Lagrangian' model for galactic dynamics to the complete SPARC database of 175 galaxies. For twenty-five percent of the galaxies in the series, 43 out of 175, a single-fit model already remains nicely within the error margins. Fifteen galaxies are more complicated and clearly need a threefold fit. One exceptional galaxy justified five fits. The remaining 116 galaxies, 66 percent, have a dual fit. The multiple fit appears to follow the mass composition of galaxies, composed of a bulge, possibly a disk, and mostly extended gas clouds. As in previous papers, I will first repeat a presentation of the `constant Lagrangian' approach. The original part of this paper is the fit of the complete set of the SPARC database and a first categorization of the results into single-, dual-, triple- or multiple-fit galaxies. Through the extensive database fit, the `constant Lagrangian' approach can be inverted from a deductive to an inductive result: huge stretches of all galaxies can be fitted on a constant Lagrangian curve, while remaining within the empirical margin of errors. This paper's galaxy fits prove this restricted claim beyond doubt. The issue then becomes to explain this empirical, inductive result.
[2712] vixra:1805.0325 [pdf]
Demonstration of Riemann's Hypothesis
It has been 158 years since a hypothesis was raised in complex analysis, used in principle to demonstrate a theorem about prime numbers, but without any proof. Over the years, this hypothesis has become very important, since it has multiple applications in physics, number theory, statistics, and other fields. In this article I present a demonstration that I consider to be the one that has eluded everyone all this time.
[2713] vixra:1805.0322 [pdf]
Illusion Of Light
Besides energy, light presents us the fastest and most detailed view of the universe. It has made us obsessively visual. It controls our understanding, as well as our misunderstanding, of the universe. We, too, are creatures of motion; speed is our obsession too. Light and its speed might seem magical to us. However, what we see is only a broadcast of physical reality, in the form of radiation delivered to us at high speed. We only see the visible part of radiation that stands out from the background and perceive it as information. That does not mean the background contains less significant information than the foreground revealed by light. Additionally, the speed of light is no more important than other speeds; say, slow gentle motion can be more beneficial for building structures. To me, light and its speed do not have any power to alter the fundamentals of the universe. On the contrary, it is the universe that is in charge of light and speed. No matter how fast the light, space does not allow it to reach the next location instantly. The question is: can we be fooled by information in the form of light delivered at light speed, or by our obsession?
[2714] vixra:1805.0310 [pdf]
The Logic of Elements of Reality
We define the logic of elements of reality. The logic of elements of reality is not a logic in the classical sense; it is an abstract language for constructing models of a certain kind. In part, it corresponds to the language of propositional logic. We define the logic of elements of reality on arbitrary sets of elements of reality. The basic relation between arbitrary elements of reality p, q is the relation p |> q (if there exists p, then there exists q). We consider physical space and the property: if p |> q, then E(p) >= E(q) holds (E() is energy). For strongly deterministic spaces the law of energy conservation is described as follows: from p |> q and q |> p it follows that E(p) = E(q).
[2715] vixra:1805.0304 [pdf]
The Cherry on Tau
I propose to rewrite the volume equation for the non-Euclidean spherical Universe in terms of tau instead of π. Written this new way, a truly elegant equation and deeper structure become visible. Further, I postulate that the Universe is the Fundamental Theorem of Calculus, i.e. that the 3-dimensional Universe we live in is the derivative-surface of its 4-dimensional hypersphere volume.
[2716] vixra:1805.0293 [pdf]
What is the Space?
The unified theory of dynamic space predicted the following observation: parallel-moving photons of different frequency locally reduce the cohesive pressure of space, and as a result move with different speeds. Thus photons with higher frequency slow down relative to parallel-moving photons with lower frequency, as observed in the delay of gamma rays from the galaxy Markarian 501. This observation shows that space contains unseen forces, which are evident as deformations of space, as described in the above theory. Accordingly, for the great problem of physics and philosophy, "What is space?", there is the answer-solution: "The dynamic space".
[2717] vixra:1805.0292 [pdf]
Revisiting the Derivation of Heisenberg’s Uncertainty Principle: The Collapse of Uncertainty at the Planck Scale
In this paper, we will revisit the derivation of Heisenberg’s uncertainty principle. We will see how the Heisenberg principle collapses at the Planck scale by introducing a minor modification. The beauty of our suggested modification is that it does not change the main equations in quantum mechanics; it only gives them a Planck scale limit where uncertainty collapses. We suspect that Einstein could have been right after all, when he stated, “God does not throw dice.” His now-famous saying was an expression of his skepticism towards the concept that quantum randomness could be the ruling force, even at the deepest levels of reality. Here we will explore the quantum realm with a fresh perspective, by re-deriving the Heisenberg principle in relation to the Planck scale. We will show how this idea also leads to an upper boundary on uncertainty, in addition to the lower boundary. These upper and lower boundaries are identical for the Planck mass particle; in fact, they are zero, and this highlights the truly unique nature of the Planck mass particle. Further, there may be a close connection between light and the Planck mass particle: In our model, the standard relativistic energy momentum relation also seems to apply to light, while in modern physics light generally stands outside the standard relativistic momentum energy relation. We will also suggest a new way to look at elementary particles, where mass and time are closely related, consistent with some of the recent work in experimental physics. Our model leads to a new time operator that does not appear to be in conflict with the Pauli objection. This indicates that both mass and momentum come in quanta, which are perfectly correlated to an internal Compton ‘clock’ frequency in elementary particles.
[2718] vixra:1805.0284 [pdf]
Minimum Amount of Text Overlapping in Document Separation
We consider a Blind Source Separation problem. In particular we focus on reconstruction of digital documents degraded by bleed-through and show-through effects. In this case, since the mixing matrix, the source and data images are nonnegative, the solution is given by a Nonnegative Factorization. As the problem is ill-posed, further assumptions are necessary to estimate the solution. In this paper we propose an iterative algorithm in order to estimate the correct overlapping level from the verso to the recto of the involved document. Thus, the proposed method is a Correlated Component Analysis technique. This method has low computational costs and is fully unsupervised. Moreover, we give an extension of the proposed algorithm in order to deal with a not translation invariant model. Our experimental results confirm the goodness of the method.
[2719] vixra:1805.0278 [pdf]
Coupled Diffusion of Impurity Atoms and Point Defects in Silicon Crystals
A theory describing the processes of atomic diffusion in a nonequilibrium state with nonuniform distributions of components in the defect-impurity system of silicon crystals is proposed. Based on this theory, partial diffusion models are constructed and simulation of a large number of experimental data is carried out. A comparison of the simulation results with the experiments confirms the correctness and importance of the theory developed. The book will be useful for researchers, engineers, and advanced students in semiconductor physics, microelectronics, and nanoelectronics. Practical application of the theoretical ideas formulated in the book allows finding cheaper solutions in the manufacturing of semiconductor devices and integrated microcircuits.
[2720] vixra:1805.0267 [pdf]
An Improved Method of Generating Z-Number Based on Owa Weights and Maximum Entropy
How to generate a Z-number is an important and open issue in the uncertain information processing of Z-numbers. In [1], a method of generating Z-numbers using OWA weights and maximum entropy is investigated. However, the meaning of the method in [1] is not clear enough according to the definition of a Z-number. Inspired by the methodology in [1], we improve the method of determining Z-numbers based on OWA weights and maximum entropy, making the meaning of the Z-number clearer. Some numerical examples illustrate the effectiveness of the proposed method.
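As background for the OWA ingredient (a minimal sketch with illustrative values and weights, not the method of [1] or its improvement): OWA aggregation reorders its inputs before weighting, and the maximum-entropy criterion prefers, among all weight vectors with a given orness, the one with the largest dispersion (Shannon entropy).

```python
import math

def owa(values, weights):
    """Ordered weighted averaging: sort inputs descending, then take the
    weighted sum with the (normalized) weight vector."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def dispersion(weights):
    """Shannon entropy of the weight vector; maximum-entropy OWA picks the
    weights that maximize this subject to a fixed orness level."""
    return -sum(w * math.log(w) for w in weights if w > 0)

vals = [0.9, 0.4, 0.7]            # hypothetical membership grades
uniform = [1/3, 1/3, 1/3]          # orness 0.5, entropy ln 3 (maximal)
skewed = [0.6, 0.3, 0.1]           # higher orness, lower entropy
print(round(owa(vals, uniform), 4))            # equals the plain mean
print(dispersion(uniform) > dispersion(skewed))   # → True
```

With uniform weights the OWA reduces to the arithmetic mean, which is exactly why uniform weights carry the maximal entropy ln(n).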
[2721] vixra:1805.0230 [pdf]
Solution of the Erdös-Moser Equation 1+2^p+3^p+...+k^p=(k+1)^p
The Erdös-Moser equation (EM equation), named after Paul Erdös and Leo Moser, has been studied by many number theorists throughout history, since it combines addition, powers and summation. An open and very interesting conjecture of Erdös-Moser states that there is no solution of the EM equation other than the trivial 1+2=3. Investigating the properties and identities of the EM equation, and ultimately proving the conjecture, is the main purpose of this article.
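The conjecture can at least be checked by brute force over a small range; the search bounds below are arbitrary and prove nothing about the general case, but exact integer arithmetic makes the small-range check reliable:

```python
def em_solutions(k_max, p_max):
    """Search 1^p + 2^p + ... + k^p == (k+1)^p over 2 <= k <= k_max,
    1 <= p <= p_max, using exact integer arithmetic."""
    found = []
    for p in range(1, p_max + 1):
        for k in range(2, k_max + 1):
            if sum(i**p for i in range(1, k + 1)) == (k + 1)**p:
                found.append((k, p))
    return found

print(em_solutions(200, 6))   # → [(2, 1)], the trivial 1 + 2 = 3
```

For p = 1 the equation becomes k(k+1)/2 = k+1, forcing k = 2, which is the only solution the search finds in this range.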
[2722] vixra:1805.0226 [pdf]
A Memristor based Unsupervised Neuromorphic System Towards Fast and Energy-Efficient GAN
Deep learning has gained immense success in pushing today's artificial intelligence forward. To address the challenge of limited labeled data in supervised learning, unsupervised learning was proposed years ago, but low accuracy hinders its realistic applications. The generative adversarial network (GAN) has emerged as an unsupervised learning approach with promising accuracy and is under extensive study. However, the execution of GANs is extremely memory- and computation-intensive, resulting in ultra-low speed and high power consumption. In this work, we propose a holistic solution for fast and energy-efficient GAN computation through a memristor-based neuromorphic system. First, we exploit a hardware/software co-design approach to map the computation blocks in a GAN efficiently. We also propose an efficient data flow for optimal parallelism in training and testing, depending on the computation correlations between different computing blocks. To compute the unique and complex loss of the GAN, we developed a diff-block with optimized accuracy and performance. The experimental results on big data show that our design achieves a 2.8x speedup and 6.1x energy saving compared with a traditional GPU accelerator, as well as a 5.5x speedup and 1.4x energy saving compared with a previous FPGA-based accelerator.
[2723] vixra:1805.0205 [pdf]
Labeled Trees with Fixed Node Label Sum
The non-cyclic graphs known as trees may be labeled by assigning positive integer numbers (weights) to their vertices or to their edges. We count the trees up to 10 vertices that have prescribed sums of weights, or, from the number-theoretic point of view, we count the compositions of positive integers that are constrained by the symmetries of trees.
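For intuition about counting under tree symmetries, consider the smallest nontrivial case: a path on three vertices, whose only nontrivial symmetry is reversal. Burnside's lemma then counts positive-integer vertex labelings with a fixed sum up to that symmetry. This toy sketch is not the paper's enumeration up to 10 vertices, only an illustration of the counting principle:

```python
from itertools import product

def path3_compositions(n):
    """Positive-integer vertex labelings of the 3-vertex path summing to n,
    counted up to the path's reversal symmetry (Burnside's lemma)."""
    # All compositions of n into 3 positive parts: C(n-1, 2) of them.
    all_comps = [c for c in product(range(1, n), repeat=3) if sum(c) == n]
    # Compositions fixed by reversal are the palindromes (a, b, a).
    fixed = [c for c in all_comps if c == c[::-1]]
    # Burnside: average the number of labelings fixed by each symmetry.
    return (len(all_comps) + len(fixed)) // 2

print([path3_compositions(n) for n in range(3, 8)])   # → [1, 2, 4, 6, 9]
```

For n = 5, the six compositions (1,1,3), (1,3,1), (3,1,1), (1,2,2), (2,1,2), (2,2,1) collapse into four reversal classes, matching the Burnside count.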
[2724] vixra:1805.0185 [pdf]
Revisit of Carmichael 1913 Work and an Elementary Approach for Fermat’s Last Theorem of Case I
We discuss an elementary approach to proving the first case of Fermat's last theorem (FLT). The essence of the proof is to notice that $a+b+c$ is of order $N^{\alpha}$ if $a^N+b^N+c^N=0$. To prove FLT, we first show that $\alpha$ cannot be $2$; we then show that $\alpha$ cannot be $3$, etc. While this is the standard method of induction, we refer to it here as the ``infinite ascent'' technique, in contrast to Fermat's original ``infinite descent'' technique. A conjecture, first noted by Ribenboim, is used.
[2725] vixra:1805.0168 [pdf]
A `constant Lagrangian' Fit of the Galaxy Rotation Curves of the `F-Series' from the Sparc Database.
In this paper I apply the `constant Lagrangian' model for galactic dynamics to the F-series of the SPARC database. I will fit the experimental rotation curves of the $16$ `F' galaxies from this database using the dual fit approach. This means that one fit is made for the stars-dominated region of a galaxy and another fit is added for the gas-dominated region of the same galaxy. The dual fit approach results in a rotation curve fit that mostly remains within the observational error margins. For some galaxies of the series a single model already remains nicely within the error margins and then doesn't really justify a dual fit. Some galaxies are more complicated and probably need a threefold fit. This dual fit approach follows the mass composition of galaxies as composed of a bulge, possibly a disk and mostly extended gas clouds.
[2726] vixra:1805.0152 [pdf]
On Squarefree Values of some Univariate Polynomials.
We consider univariate polynomials, P(s), of the form (a1 * s + b1)*...*(ak * s + bk), where a1,...,ak, b1,...,bk are natural numbers and the variable s is squarefree. We give an algorithm to calculate, for an arbitrary s, the probability that the value of P(s) is squarefree.
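As a quick numerical sanity check (with hypothetical coefficients, not the paper's algorithm): the density of squarefree integers approaches 6/π², and the squarefree frequency of a sample product polynomial can be tallied directly:

```python
import math

def is_squarefree(n):
    """True if no prime square divides n (trial division up to sqrt(n))."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# Density of squarefree integers up to N approaches 6/pi^2 ~ 0.6079.
N = 10000
density = sum(is_squarefree(n) for n in range(1, N + 1)) / N
print(abs(density - 6 / math.pi**2) < 0.01)   # → True

# Empirical squarefree frequency of a sample product polynomial
# P(s) = (2s + 1)(3s + 2); these coefficients are chosen for illustration.
hits = sum(is_squarefree((2*s + 1) * (3*s + 2)) for s in range(1, 1001))
print(0 < hits < 1000)   # some values are squarefree, some are not
```

For example, P(1) = 3 · 5 = 15 is squarefree while P(2) = 5 · 8 = 40 is not, so the polynomial's squarefree probability is strictly between 0 and 1.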
[2727] vixra:1805.0132 [pdf]
Proof of the Hypothesis of Dirac Large Numbers or How to Weight the Universe. (English Version)
The article gives a brief exposition of the solution of cosmological problems. The problems of stability and of the shortage of mass in galaxies, and of the huge velocities of galactic clusters, are solved. The law of formation of fundamental constants, the law of nonlinear expansion of the Universe, and the law of gravitational interaction are found, providing a proof of the hypothesis of Dirac large numbers. This is the English version.
[2728] vixra:1805.0129 [pdf]
Gravity Wave Basics
An overview of the fundamentals of gravity waves intended for undergraduate physics students, curious high schoolers, and brilliant 4th graders, utilizing the traditional linearized form of Einstein’s field equations.
[2729] vixra:1805.0128 [pdf]
Quantum Hall Mixmaster Gravitational Wave Echoes Bounded by Geometric Clifford Wavefunction Interaction Impedances
This note proposes a topic to the upcoming 7th Conference on Applied Geometric Algebras. It conjectures that exact impedance quantization of the fractional quantum Hall effect, claims of gravitational wave echoes recovered from LIGO/VIRGO data, and mixmaster tidal oscillations of Professor Thorne’s wife share causal origins in quantized impedance networks of Geometric Wavefunction Interactions of the particle physicist’s Clifford algebra.
[2730] vixra:1805.0122 [pdf]
The Two-Dimensional Vavilov-Cherenkov Effect with Radiative Corrections
We derive the photon power spectrum, including the radiative corrections, generated by a charged particle moving within a 2D graphene sheet with implanted ions forming a dielectric medium. This enables the experimental realization of Vavilov-Cherenkov radiation. The relation of Vavilov-Cherenkov radiation to the light-emitting diode (LED) is discussed. LED dielectric sheets can be crucial components of detectors in experimental particle physics. Thus, the article represents a unification of graphene physics with the physics of elementary particles.
[2731] vixra:1805.0109 [pdf]
Lunar Ascent of the Apollo 17 in Television Broadcast
We analyze the Apollo 17 ascent from the lunar surface that occurred on December 14, 1972. The lunar ascent was captured by a remotely operated pan-zoom-tilt (PZT) camera on the Lunar Roving Vehicle parked some distance away, and broadcast on television to audiences on Earth. We use known features of the PZT camera to extract the angle above the horizon (elevation) of the craft as a function of time in the TV broadcast. We then compare the craft's reconstructed trajectory to that of the Apollo 11 Lunar Module Ascent Stage. We list the differences in the Vertical Rise and early Orbit Insertion phases, and what makes them anomalous.
[2732] vixra:1805.0108 [pdf]
Separation, Transposition and Docking of Apollo 11 in Low-Earth Orbit
A selection of visual media from the Apollo 11 Mission, namely the 70mm photographs AS11-36-5301 through AS11-36-5313 and the 16mm film magazine A, is shown to strongly suggest that at the time of their filming the Apollo 11 craft were in low-Earth orbit. The visual media comprise the film footage and the photographs of the craft and the Earth filmed before and during the maneuvers of separation, transposition and docking (STD). The STD reportedly occurred during the translunar coast some 30 minutes after the translunar injection (TLI). In the STD, the Command and Service Module (CSM) would dismount the rocket Saturn-IVB (S-IVB), which carried the Lunar Module (LM) and the CSM up to that point, then dock with and extract the LM. The S-IVB would then split from the group to fly behind the Moon and into an orbit around the Sun. In determining the location of the Apollo 11 craft, the sizes of the Earth and the S-IVB rocket and their distances from the camera are extracted from the media assuming a selection of camera lenses. The extracted CSM flight data include the turning rate and the turning angle, the maximum separation distance, and the docking velocity. From their comparison to the Flight Plan, the Mission Report and the oral transcripts from the Apollo 11 Flight Journal, it is found that the 16mm Maurer Data Acquisition Camera (DAC) was filming with the 10mm lens, and not with the 18mm lens as NASA reported. Consequently, the photography must have been done with a Hasselblad manual camera with the 38mm lens and not with the Hasselblad electric camera with the 80mm lens as NASA reported. The visual media being recorded with these lenses puts the craft at the time of the STD in low-Earth orbit, rather than Moon-bound after a successful TLI.
[2733] vixra:1805.0095 [pdf]
Topological Skyrme Model and the Nucleus
We study the two-flavour topological Skyrme model with lagrangian L = L2 + L4, and point out that, in spite of all the successes attributed to it, as to the electric charges it predicts Q(proton) = 1/2 and Q(neutron) = −1/2. This is in direct conflict with the experimental values of the proton and neutron charges, and should be considered a failure of the Skyrme model. The Wess-Zumino anomaly term, however, comes to its rescue and provides an additional contribution which leads to the correct charges for baryons as per the standard Gell-Mann-Nishijima expression. But the conventional understanding that the Skyrme model gives a conserved atomic mass number A=Z+N is not fulfilled in the above picture. We suggest a new consistent scenario wherein, on quantization, a dual description beyond the above model arises, which provides a framework fully compatible with nuclear physics. This picture finds justification with respect to the surprising, successful 1949 calculation by Steinberger for the decay π0 → γγ.
[2734] vixra:1805.0089 [pdf]
Group Sparse Recovery in Impulsive Noise Via Alternating Direction Method of Multipliers
In this paper, we consider the recovery of group sparse signals corrupted by impulsive noise. In some recent literature, researchers have utilized stable data fitting models, like the $l_1$-norm, the Huber penalty function and the Lorentzian-norm, to substitute for the $l_2$-norm data fidelity model and obtain more robust performance. In this paper, a stable model is developed which exploits the generalized $l_p$-norm as the measure of the error for sparse reconstruction. To address this model, we propose an efficient alternating direction method of multipliers, which includes the proximity operator of $l_p$-norm functions in the framework of Lagrangian methods. Besides, to guarantee the convergence of the algorithm in the case $0\leq p<1$ (nonconvex case), we take advantage of a smoothing strategy. For both $0\leq p<1$ (nonconvex case) and $1\leq p\leq2$ (convex case), we derive the conditions for the convergence of the proposed algorithm. Moreover, under the block restricted isometry property with constant $\delta_{\tau k_0}<\tau/(4-\tau)$ for $0<\tau<4/3$ and $\delta_{\tau k_0}<\sqrt{(\tau-1)/\tau}$ for $\tau\geq4/3$, a sharp sufficient condition for group sparse recovery in the presence of impulsive noise and its associated error upper bound estimation are established. Numerical results based on synthetic block sparse signals and real-world FECG signals demonstrate the effectiveness and robustness of the new algorithm in highly impulsive noise.
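The subproblems at the heart of such ADMM schemes are proximity operators. They are simple in the two classical cases: p = 1 gives the scalar soft threshold, and the group l2-norm gives the block soft threshold that drives group sparsity. This sketch shows only those two building blocks, not the paper's generalized l_p algorithm:

```python
import math

def soft(x, t):
    """Proximity operator of t*|x| (scalar soft threshold)."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def group_soft(v, t):
    """Block soft threshold: proximity operator of t*||v||_2, which shrinks
    a whole group toward zero at once; the key step for group sparsity."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= t:
        return [0.0] * len(v)
    scale = 1.0 - t / norm
    return [scale * x for x in v]

print(soft(3.0, 1.0))                 # → 2.0
print(group_soft([3.0, 4.0], 5.0))    # norm 5 <= threshold, whole group zeroed
print(group_soft([3.0, 4.0], 2.5))    # → [1.5, 2.0], shrunk by factor 0.5
```

Within each ADMM iteration these operators are applied to an auxiliary variable, while the data-fidelity term is handled in a separate subproblem; that splitting is what the method of multipliers coordinates.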
[2735] vixra:1805.0070 [pdf]
Developments of the Extended Relativity Theory in Clifford Spaces
A brief tour of the developments of the Extended Relativity Theory in Clifford Spaces ($C$-space) is presented. These include : (i) Novel physical consequences like generalized dispersion relations, energy-dependent speed of light propagation, extended Lorentz transformations, relative locality, generalized Weyl-Heisenberg algebra and uncertainty relations, tensionless branes, superluminality, generalized velocities. (ii) Generalized areal, volume, $\cdots$ metrics and gravitational field equations in $C$-space. (iii) A unified description of particles, strings and branes. (iv) Clifford gravity based cosmology and dark energy. (v) Moyal deformations of Clifford gauge theories of gravity. (vi) N-ary algebras. We conclude with a brief discussion on symplectic Clifford algebras and generalized geometries.
[2736] vixra:1805.0048 [pdf]
Introducing Light Force
Why does the Earth's momentum keep the same angle to the orbit, while its moment of momentum changes its angle to the orbit (and to the momentum)? Both momenta behave like vectors (if there are no reflections) in an inertial coordinate system.
[2737] vixra:1805.0047 [pdf]
A Stars-Gas Dual Fit Result in the `constant Lagrangian' Model for Galactic Dynamics When Applied to the Sparc Database
In this paper I apply the `constant Lagrangian' model for galactic dynamics to a subset of the SPARC database. I will fit $25$ galaxies from this database using the dual fit approach. This means that one fit is made for the stars-dominated region of one galaxy. Another fit is added for the gas-dominated region of the same galaxy. Both are presented in one single graph. The switch from stars-dominated to gas-dominated is sometimes visible as a ``wiggle'' in the total rotation velocity, as for example in the rotation curve of NGC 1560. I will demonstrate that this more or less visible ``wiggle'' is part of the rotation velocity curve of almost every galaxy in the sample. The dual fit approach results in a rotation curve fit that mostly remains within the observational error margins.
[2738] vixra:1805.0030 [pdf]
On Surface Measures on Convex Bodies and Generalizations of Known Tangential Identities
One theme of this paper is to extend known results from polygons and balls to general convex bodies in n dimensions. Another theme stems from approximating a convex surface with a polytope surface. Our result gives a sufficient and necessary condition for a natural approximation method to succeed (in principle) in the case of surfaces of convex bodies; thus, Schwarz's paradox does not affect our method. This allows us to define certain surface measures on surfaces of convex bodies in a novel and simple way.
[2739] vixra:1805.0029 [pdf]
Quantum Machine Learning in High Energy Physics: the Future Prospects
This article reveals the future prospects of quantum machine learning in high energy physics (HEP). Particle identification, and knowing particles' properties and characteristics, is a challenging problem in experimental HEP. The key technique for solving these problems is pattern recognition, an important application of machine learning that is unconditionally used for HEP problems. To execute pattern recognition tasks for track and vertex reconstruction, the particle physics community makes heavy use of statistical machine learning methods. These methods vary with the detector geometry and the magnetic field used in the experiment. In this introductory article, we outline the future possibilities for the application of quantum machine learning in HEP, rather than focusing on the deep mathematical structures of the techniques arising in this domain.
[2740] vixra:1805.0001 [pdf]
The Golden Section in Physics (in English)
Physical constants play an important role in physics, and their accuracy grows year by year. Special attention is paid to the dimensionless constants; the most familiar among them are the fine structure constant, the electron/proton and electron/muon mass ratios, the ratio of the gravitational to the electromagnetic interaction, the Weinberg angle in electro-weak interaction theory, etc. One of the most important long-standing questions is whether there are any physical and/or mathematical relations between the fundamental physical constants. The paper gives a recently explored simple mathematical relation between them. A precise theoretical explanation of this amazing finding needs more detailed investigation of the physical background. Keywords: exponential relations between physical constants, Titius-Bode rule, new atomic mass formula.
[2741] vixra:1804.0494 [pdf]
Applying the Transtheoretical Model of Behavioral Change to Reddit Data: A Pilot Study of Cessation Strategies and Outcomes among Tobacco Users
Despite the existence of numerous studies demonstrating that tobacco use is strongly associated with cancer, heart disease, stroke, and other health problems, tobacco continues to be a major public health threat worldwide. Social media provides a rich resource for population-level, health-based monitoring and understanding of tobacco cessation efforts and trends. In this pilot study, we identified common and emerging tactics used by Reddit users for tobacco cessation and demonstrated that the Transtheoretical Model — an established and widely used behavioural change theory — could be leveraged to track and identify successful cessation of tobacco usage within this population.
[2742] vixra:1804.0478 [pdf]
Supersymmetric Preons and the Standard Model
The experimental fact that standard model superpartners have not been observed compels one to consider an alternative implementation for supersymmetry. The basic supermultiplet proposed here consists of a photon and a charged spin 1/2 preon field, and their superpartners. These fields are shown to yield the standard model fermions, Higgs fields and gauge symmetries. Supersymmetry is defined for unbound preons only. Quantum group SLq(2) representations are introduced to classify topologically scalars, preons, quarks and leptons.
[2743] vixra:1804.0416 [pdf]
Statistical Bias in the Distribution of Prime Pairs and Isolated Primes
Computer experiments reveal that twin primes tend to center on nonsquarefree multiples of 6 more often than on squarefree multiples of 6, compared to what should be expected from the ratio of the number of nonsquarefree multiples of 6 to the number of squarefree multiples of 6, which equals $\pi^2/3-1$, or ca. 2.290. For multiples of 6 surrounded by twin primes, this ratio is 2.427, a relative difference of ca. $6.0\%$ measured against the expected value. A deviation from the expected value of this ratio, of ca. $1.9\%$, also exists for isolated primes. This shows that the distribution of primes is biased towards nonsquarefree numbers, a phenomenon most likely previously unknown. For twins, this leads to nonsquarefree numbers gaining an excess of $1.2\%$ of the total number of twins; in the case of isolated primes, the excess for nonsquarefree numbers amounts to $0.4\%$ of the total number of such primes. The above numbers are for the first $10^{10}$ primes, with the bias showing a tendency to grow, at least for isolated primes.
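As a hedged illustration (my own sketch, not the paper's code), the claimed bias can be probed in Python on a toy range. Every twin pair beyond (3, 5) straddles a multiple of 6, so we classify those centers as squarefree or not; the range here is far smaller than the paper's $10^{10}$ primes, so the quoted percentages should not be expected to reproduce exactly:

```python
def is_prime(n):
    # Trial-division primality test; adequate for this toy range.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def is_squarefree(n):
    # n is squarefree if no prime square divides it.
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# Classify twin-prime centers as squarefree or nonsquarefree.
sf = nsf = 0
for m in range(6, 200000, 6):
    if is_prime(m - 1) and is_prime(m + 1):
        if is_squarefree(m):
            sf += 1
        else:
            nsf += 1

print(nsf / sf)  # compare against the baseline pi^2/3 - 1 ~ 2.290
```

On such a short range the ratio fluctuates around the baseline; the paper's $6\%$ excess only becomes statistically visible at much larger scales.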
[2744] vixra:1804.0408 [pdf]
The Timeless Universe
In the unified theory of dynamic space, the phenomenon of motion is described as a form of space deformation that is identical to time. The motive force derives from the dynamic space and is accumulated on the spherical zone of the particle, due to the difference in cohesive pressure in front of and behind it. The cosmic journey of the galaxies proceeds at a universal, constant timeless speed. The timeless speed of light is likewise a universal constant, while the light speed c is a local constant. The gravity tail of galactic systems is one of the additional causes of their chaotic motion. Hence, the search for an unknown form of dark matter and dark energy is no longer necessary.
[2745] vixra:1804.0405 [pdf]
Mixed Generalized Multifractal Densities for Vector Valued Quasi-Ahlfors Measures
In the present work we are concerned with density estimates for vector valued measures in the framework of the so-called mixed multifractal analysis. We precisely consider some Borel probability measures satisfying a weak quasi-Ahlfors regularity. Mixed multifractal generalizations of densities are then introduced and studied in a framework of relative mixed multifractal analysis.
[2746] vixra:1804.0397 [pdf]
Vortex Equation in Holomorphic Line Bundle Over Non-Compact Gauduchon Manifold
In this paper, by the method of heat flow and the method of exhaustion, we prove an existence theorem for Hermitian-Yang-Mills-Higgs metrics on a holomorphic line bundle over a class of non-compact Gauduchon manifolds.
[2747] vixra:1804.0386 [pdf]
Fitting Some Galaxy Rotation Curves Using the `constant Lagrangian' Model for Galactic Dynamics.
The velocity rotation curves of the SPARC database present an opportunity to test the `constant Lagrangian' model for galactic dynamics. The fits of the rotation curves from thirteen different galaxies are presented.
[2748] vixra:1804.0385 [pdf]
Q-Analogues for Ramanujan-Type Series
From a very-well-poised ${}_{6}\phi_{5}$ series formula we deduce a general series expansion formula involving the q-gamma function. With this formula we can give q-analogues of many Ramanujan-type series.
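For reference, the classical nonterminating very-well-poised ${}_{6}\phi_{5}$ summation (in the standard form found in Gasper and Rahman) is presumably the starting formula; this is my annotation, not a quotation from the paper:

```latex
{}_{6}\phi_{5}\!\left[\begin{matrix} a,\; qa^{1/2},\; -qa^{1/2},\; b,\; c,\; d \\ a^{1/2},\; -a^{1/2},\; aq/b,\; aq/c,\; aq/d \end{matrix};\, q,\, \frac{aq}{bcd}\right]
 = \frac{(aq,\, aq/bc,\, aq/bd,\, aq/cd;\, q)_{\infty}}{(aq/b,\, aq/c,\, aq/d,\, aq/bcd;\, q)_{\infty}},
 \qquad \left|\frac{aq}{bcd}\right| < 1 .
```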
[2749] vixra:1804.0376 [pdf]
The Strong Goldbach Conjecture, Klein Bottle And Möbius Strip
This modest article shows the connection between the strong Goldbach conjecture and the topological properties of the Klein bottle and the Möbius strip. This connection is established by functions derived from the number of divisors of the two odd integers whose sum is an even number.
[2750] vixra:1804.0363 [pdf]
Learning Geometric Algebra by Modeling Motions of the Earth and Shadows of Gnomons to Predict Solar Azimuths and Altitudes
Because the shortage of worked-out examples at introductory levels is an obstacle to widespread adoption of Geometric Algebra (GA), we use GA to calculate Solar azimuths and altitudes as a function of time via the heliocentric model. We begin by representing the Earth's motions in GA terms. Our representation incorporates an estimate of the time at which the Earth would have reached perihelion in 2017 if not affected by the Moon's gravity. Using the geometry of the December 2016 solstice as a starting point, we then employ GA's capacities for handling rotations to determine the orientation of a gnomon at any given latitude and longitude during the period between the December solstices of 2016 and 2017. Subsequently, we derive equations for two angles: that between the Sun's rays and the gnomon's shaft, and that between the gnomon's shadow and the direction ``north" as traced on the ground at the gnomon's location. To validate our equations, we convert those angles to Solar azimuths and altitudes for comparison with simulations made by the program Stellarium. As further validation, we analyze our equations algebraically to predict (for example) the precise timings and locations of sunrises, sunsets, and Solar zeniths on the solstices and equinoxes. We emphasize that the accuracy of the results is only to be expected, given the high accuracy of the heliocentric model itself, and that the relevance of this work is the efficiency with which that model can be implemented via GA for teaching at the introductory level. On that point, comments and debate are encouraged and welcome.
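Since the abstract's central tool is GA's handling of rotations, a minimal numerical sketch may help: applying a rotor $R = e^{-B\theta/2}$ to a vector is equivalent to Rodrigues' rotation about the axis dual to the bivector $B$. The Python below is my illustration of that equivalence, not the paper's code:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotate(v, axis, angle):
    # Rodrigues' formula: the vector-level equivalent of the GA rotor
    # sandwich R v R~ with R = exp(-B*angle/2), B the unit bivector
    # dual to `axis`.
    n = math.sqrt(dot(axis, axis))
    k = [x / n for x in axis]
    c, s = math.cos(angle), math.sin(angle)
    kxv = cross(k, v)
    kdv = dot(k, v)
    return [v[i] * c + kxv[i] * s + k[i] * kdv * (1 - c) for i in range(3)]

# Rotating "east" a quarter turn about the zenith gives "north".
east, zenith = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
print(rotate(east, zenith, math.pi / 2))  # ~ [0, 1, 0]
```

Composing such rotations (Earth's orbital position, axial tilt, daily spin) is exactly the chain of rotor applications the paper carries out symbolically in GA.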
[2751] vixra:1804.0350 [pdf]
... back to Enchantment ... ? Donald Rumsfeld : A View of the World ?
The issues considered are as follows : \begin{itemize} \item The long ongoing and by now significant disenchantment with religion among a significant part of Western humans. \item A proposal to reinstate, among educated Westerners, a general enough awareness of the essential role of transcendental realms in the day-to-day life of humanity. It is suggested to make use of the concept of the UNKNOWN UNKNOWN, or briefly (UU), introduced by Donald Rumsfeld in 2002. He introduced this concept into public discourse somewhat in passing, and for a far more particular issue. However, it appears that the concept of (UU) can play a basic role in building up a new and general enough awareness of the transcendental and its essential and permanent role in human affairs. The present essay starts with as brief as possible a consideration of the concept of (UU). \item Possible ``Commentaries" on the (UU) follow, presented in subsections a) to r). This part of the essay is obviously open to further contributions. \item Ways to a possible return to ENCHANTMENT in our human view of reality - ways we have lost in our modern days - are suggested. \item The truly fundamental issue of our usual conception of ``Time" is briefly presented, underlining the dramatic limitation which it imposes upon all the rest of our views of reality. The importance of that issue is hard to overstate, in spite of the fact that in the ever ongoing and often accelerating rush of everyday life hardly anyone notices it, let alone is ready to stop for a while and wonder about it. \item The essay ends with an ``Appendix" which, together with the indicated literature, shows that - much contrary to the general perception - there is even today significant concern and research regarding the possible structures, far beyond the simplicity of those in general human awareness, which may be involved in the concept of ``Time". \end{itemize}
[2752] vixra:1804.0328 [pdf]
Fitting the NGC 1560 Rotation Curve and Other Galaxies in the `constant Lagrangian' Model for Galactic Dynamics.
The velocity rotation curve of NGC 1560 has a peculiar wiggle around 4.5 kpc. This makes it a favorable galaxy for testing the diverse models that try to explain galactic dynamics, such as CDM and MOND. I will fit NGC 1560 using the GR-Schwarzschild based `constant Lagrangian' model for galactic dynamics and compare it to other results, but first I will give a brief exposition of the `constant Lagrangian' approach. At the end, I present some other fitting curves: those of the galaxies F583-1, F579V1 and U11648.
[2753] vixra:1804.0320 [pdf]
Speakable and Unspeakable in Special Relativity: the Ageing of the Twins in the Paradox
In previous papers we have presented a general formulation of special relativity, based on a weaker statement of the postulates. In this work, the paradigmatic example of the twin paradox is discussed in detail. Within the present formulation of special relativity, a "non-paradoxical" interpretation of the asymmetric ageing of the twins emerges. It is based exclusively on the rhythms of the clocks, which are not related by the standard textbook expressions and shall not be confused with clock time readings. Moreover, the current approach exposes the irrelevance of the acceleration of the returning twin in the discussion of the paradox.
[2754] vixra:1804.0298 [pdf]
A `constant Lagrangian' Model for Galactic Dynamics in a Geodetic Approach Towards the Galactic Rotation Dark Matter Issue.
I start with a historical note on the galactic rotation curve issue. The problem with the virial theorem in observed galactic dynamics led to the Dark Matter hypothesis, but also to Modified Newtonian Dynamics, or MOND. I then move away from MOND towards a relativistic, Lagrangian approach to orbital dynamics in a curved Schwarzschild metric, and propose a `constant Lagrangian' model for galactic scale geodetic dynamics. With four rotation-curve fits I show to what extent the proposed `constant Lagrangian' postulate works in this limited number of situations. The fitted galaxies are NGC 2403, NGC 3198, UGC 6614 and F571-8. In the paper I present a theoretical context in which the `constant Lagrangian' postulate might replace the classical virial theorem on a galactic scale. The proposed postulate is not a `general law of nature', however, because in the solar system and in the GNSS relativistic context the classical virial theorem is proven accurate. Due to the limitations of the proposed postulate, no statement regarding Dark Matter can be made. But the model might achieve within the GR-Schwarzschild paradigm what MOND achieves within the Newtonian paradigm: fitting the experimental galactic rotation curves.
[2755] vixra:1804.0288 [pdf]
On Q-Laplace Transforms and Mittag-Leffler Type Functions
In the present paper, the author derives results based on the q-Laplace transform of the K-function introduced by Sharma [7]. Some special cases of interest are also discussed.
[2756] vixra:1804.0287 [pdf]
b#D - Sets and Associated Separation Axioms
In this paper the notion of b#D-sets is introduced. Some weak separation axioms, namely b#-Dk, b#-R0, b#-R1 and b#-S0, are introduced and studied. Some lower separation axioms are characterized by using these separation axioms.
[2757] vixra:1804.0267 [pdf]
Intuitive Explanation of the Riemann Hypothesis
Let $\alpha$ be the unique $\Gamma(2)$ invariant form on $H$ with a pole of residue 1 at $i\infty$ and one of residue $-1$ at 1. The ratio $[\alpha : i\pi\, d\tau]$ tends to 1 at the upper limit of $[0, i\infty)$. Let $\mu_{\pm}: T \times H \to H$ be the action of multiplying by $\sqrt{g}$ and $1/\sqrt{g}$ for $g$ in the connected real multiplicative group $T$. For each real $c$ in $(0,1)$ and each unitary character $\omega$, the form $g^{2-2c}\omega(g)\,\mu_+^*(\alpha - i\pi\, d\tau) \wedge \mu_-^*(\alpha - i\pi\, d\tau)$ is exact if and only if $\zeta(c + i\omega_0) = 0$, where $\omega_0$ is chosen such that $\omega(g) = g^{i\omega_0}$. The rate of change of the magnitude is given by an integral involving a unitary character; conjecturally the rate is seminegative on the region $0 < c < 1$. The form descends to the real projective line; it is locally meromorphic there with one pole and integrates to $\pi e^{i\pi({3\over 2}s + 1)}$. The value $\zeta(s) = 0$ if and only if the integral along the arc from 0 to $\infty$ not passing 1 is zero. This implies the arc passing through 1 equals a residue. We begin to relate the equality with the condition $Re(s) = 1/2$.
[2758] vixra:1804.0222 [pdf]
On Schrödinger Equations Equivalent to Constant Coefficient Equations
This paper shows that the solution of some classes of Schrödinger equations may be expressed in terms of the solution of equations with constant coefficients. In this context, it has been possible to generate new exactly solvable potentials and to show that the Schrödinger equation for some well known potentials may also be solved in terms of elementary functions.
[2759] vixra:1804.0205 [pdf]
A Physical Review on Currency
A theoretical self-sustainable economic model is established based on the fundamental factors of production, consumption, reservation and reinvestment, where currency is set as an unconditional credit symbol serving as a transaction equivalent and a means of storage. The principal properties of currency are explored in this ideal economic system. Physical analysis reveals some facts that were not addressed by traditional monetary theory, and several basic principles of ideal currency are concluded: 1. saving-replacement is a more primary function of currency than serving as a transaction equivalent; 2. the ideal efficiency of currency corresponds to the least practical value; 3. the contradiction between the constant face value of currency and depreciable goods leads to intrinsic inflation.
[2760] vixra:1804.0191 [pdf]
Dark Matter is Negative Mass
There have been old and false claims in the scientific community related to negative mass. This paper describes the vacuum instability problem, the runaway motion problem, and the wheel problem with negative mass and positive mass. Negative mass is an object whose existence is required by the law of conservation of energy. The fundamental properties of negative mass can explain important characteristics of dark matter: 1) additional centripetal force effects; 2) explanations, derived from fundamental principles, of why dark matter has no electromagnetic interaction; 3) repulsive gravity ensuring the almost even distribution and low interaction of dark matter; 4) the gravitational lens effect; 5) the accelerating expansion of the universe. Therefore, we should seriously examine the negative mass model.
[2761] vixra:1804.0173 [pdf]
Fractals on Non-Euclidean Metric
As far as I know, there is no study of fractals on non-Euclidean metrics. This paper proposes a first method for generating fractals on a non-Euclidean metric; the idea is to extend the calculus of fractals to non-Euclidean metrics. Using the Riemann metric, a non-Euclidean modulus of a complex number is defined in order to check the divergence of the series generated by the Mandelbrot set. It is also shown that these fractals are not invariant under rotations. The study is extended to the quaternions, where it is shown that the study of fractals might not extend to quaternions with a general metric because of the high divergence of the series (a condition for generating a fractal is selecting bounded operators). Finally, a Java program is provided as an example of this kind of fractal; any metric can be defined in it, so it will be helpful for studying these properties.
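The paper's own example is in Java; as a hedged Python sketch of the same idea, one can replace the usual escape test $|z| > 2$ with a non-Euclidean modulus. The chordal distance on the Riemann sphere is one natural candidate, and the threshold below is my choice for illustration, not the paper's:

```python
import math

def chordal_modulus(z):
    # Distance from z to 0 in the chordal (Riemann-sphere) metric.
    # It is bounded by 2, so the escape threshold must be < 2.
    return 2 * abs(z) / math.sqrt(1 + abs(z) ** 2)

def escape_time(c, max_iter=100, threshold=1.9):
    # Standard Mandelbrot iteration z -> z^2 + c, but divergence is
    # tested with the non-Euclidean modulus instead of |z| > 2.
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if chordal_modulus(z) > threshold:
            return n
    return max_iter  # treated as "inside" the set

print(escape_time(0j))      # stays bounded for the full iteration budget
print(escape_time(2 + 0j))  # escapes after very few iterations
```

Swapping `chordal_modulus` for any other metric-induced modulus changes the escape region, which is exactly the dependence on the metric the paper sets out to study.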
[2762] vixra:1804.0116 [pdf]
Wave-Particle Duality Paradox is Solved Using Mutual Energy and Self-Energy Principles for Electromagnetic Field and Photon
The wave-particle duality is resolved through the self-energy and mutual energy principles. Welch introduced the time-domain reciprocity theorem in 1960; this author introduced the mutual energy theorem in 1987, and it has been proved that the two are the same theorem, stated in the time domain and in the Fourier domain. This author believes there is an energy flow from the transmitting antenna to the receiving antenna, and hence that this theorem is an energy theorem rather than merely a mathematical (reciprocity) theorem. The mutual energy is the additional energy present when two waves are superposed, compared with the situation in which each wave occupies the space alone. It is often asked: if the two waves are identical, is the energy after superposition 2 or 4 times that of one wave? This author's answer is 2 or 4, depending on whether the sources of the waves are included. A more important situation, however, is the superposition of two waves, one the retarded wave sent from the emitter and the other the advanced wave sent from the absorber; this situation actually describes the photon. This author finds that, instead of two photons (a retarded photon and an advanced photon), as some authors believe, there is only one photon. The reason is that the retarded and advanced waves each carry one photon's energy into space, but that energy is returned by the time-reversed waves; the additional energy arising from the superposition of the two waves amounts to one photon's energy instead of two, and this energy is sent from the emitter to the absorber. This constitutes the author's photon model, which the author supports by noting the conflict between energy conservation and, jointly, the superposition principle and the Maxwell equations for a single charge.
This conflict forces the introduction of the mutual energy principle and the self-energy principle. The self-energy principle says that the self-energy (the wave's energy before superposition) returns, time-reversed, to its source and hence transfers no energy from emitter to absorber. The mutual energy principle says that energy is transferred from the emitter to the absorber only by the mutual energy flow. The author also proves the mutual energy flow theorem, which states that the energy carried by the mutual energy flow is equal across any surface between the emitter and the absorber. The wave-function collapse process is explained by two processes together: first, the self-energies return, time-reversed, to their sources (instead of to their targets); second, the mutual energy flow carries one photon's energy package from emitter to absorber. The wave's probabilistic character is likewise explained: energy is transferred only when a retarded wave is synchronized with an advanced wave. The photon's energy is transferred only when the retarded wave (one solution of the Maxwell equations) and the advanced wave (another solution of the Maxwell equations) are synchronized; otherwise the two waves are returned by two time-reversed waves. Time-reversed waves do not satisfy the Maxwell equations but satisfy the time-reversed Maxwell equations; hence four time-reversed Maxwell equations, describing the two additional time-reversed waves, are added to the Maxwell equations. The photon's wave package therefore consists of four waves corresponding to four self-energy flows, plus two additional energy flows: the mutual energy flow, which is responsible for transferring the energy from the emitter to the absorber, and the time-reversed mutual energy flow, which is responsible for bringing the energy back from the absorber to the emitter if the absorber obtained only half a photon or part of a photon.
[2763] vixra:1804.0101 [pdf]
Proof of the Hypothesis of Large Dirac Numbers or How to Weigh the Universe.
The article gives a brief exposition of the solution of several cosmological problems: the problem of stability and the shortage of mass in galaxies, and the huge velocities of galactic clusters. The law of formation of the fundamental constants, the law of nonlinear expansion of the Universe, and the law of gravitational interaction are found, proving the hypothesis of large Dirac numbers. The original is in Russian; the English version will be published in the near future.
[2764] vixra:1804.0072 [pdf]
Particle in a Quantum $\delta$-Function Potential
A quantum potential $V(x,t)$ of $\delta$-function type is introduced, to describe the inertial motion of a particle. Quantum-mechanically, it is in a bound state, though classically one seems to be free. The motion of the object (micro- or macroscopic) takes place according to the Huygens-Fresnel principle. The new position of the object (wave front) plays the role of the secondary sources that maintain the propagation. The mean value of the potential energy is $-mc^{2}$. We found that the de Broglie - Bohm quantum potential is the difference between the bound energy $E = - mc^{2}/2$ from the stationary case and our potential $V(x,t)$.
[2765] vixra:1804.0065 [pdf]
A New 'More Natural' TOE Model with Covariant Emergent Gravity as a Solution to the Dark Sector
Arguments have been raised against several of the central ideas in theoretical physics, such as M-Theory's (MT's) inability to provide the falsification necessary to avoid relegating it to the scientific dustbin of anthropic-principle-based rationalizations, such as the Multiverse. Along similar lines, ideas of a uniquely falsifiable inflation era after the Big Bang (BB) have also lost some of their traction. Recent major experimental results too have sent shock waves, such as the confirmation of the low Higgs mass constraining notions of supersymmetry, and the Planck satellite's detailed mapping of the Cosmic Microwave Background (CMB) fixing the Hubble parameter at odds with other more traditional methods. Even the null results of major experimental apparatus are causing consternation, as in the search for particle-based Dark Matter (DM). Thankfully, all this seems to be opening the door for a more serious investigation into alternative theories that attempt to answer the big questions related to the causal relationships between the Standard Model (SM), General Relativity (GR), and 95\% of the known Universe, namely cosmology's dark sector (Dark Energy (DE) and DM). This paper attempts to connect the dots between some of these alternative ideas as they relate to MOdified Newtonian Dynamics (MOND), covariant emergent gravity (CEG), and the fundamental parameters used to fix Natural or Planck units of measure. The result is intended to point the way toward a fresh discussion of the directions available for unification of GR with the SM while resolving the now more open problems in theoretical physics today.
[2766] vixra:1804.0029 [pdf]
On the Relativity and Equivalence Principles in Quantized Gauge Theory Gravity
Ongoing proliferation of interpretations of the formalism of quantum mechanics over the course of nearly a century originates in historical choices of wavefunction, and in particular with assignment of geometric and topological attributes to unintuitive internal intrinsic properties of point particle quarks and leptons. Significant simplification and intuitive appeal arise if one extends Dirac’s two-component spinors to the full eight-component Pauli algebra of 3D space, providing a geometric representation of wavefunctions and their interactions that is comparatively easily visualized. We outline how the resulting quantized impedance network of geometric wavefunction interactions yields a naturally finite, confined, and gauge invariant model of both the unstable particle spectrum and quantum gravity that is consistent with and clarifies interpretation of both special relativity and the equivalence principle.
[2767] vixra:1804.0024 [pdf]
Optical Image Discovery of Dark Matter Based on Alternative Facts
Observations of the rotational curves of galaxies, gravitational lensing of galaxy clusters, and of temperature and polarization anisotropies of the cosmic microwave background have previously been interpreted as evidence for the gravitational signatures of a non-baryonic component of matter. Disturbingly, this invisible substance, known as "dark matter", makes up about 26.8% of the entire mass-energy budget of the Universe. It has long been thought that it is in the very nature of dark matter to be invisible. Here we report the first successful direct imaging of dark matter. This discovery, which has been achieved through the use of alternative facts, is the greatest discovery ever. Period.
[2768] vixra:1804.0001 [pdf]
Circuit Complexity and Problem Structure in Hamming Space
This paper describes the relation between circuit complexity and the structure of accepted inputs in Hamming space, using an almost-all-monotone circuit that emulates a deterministic Turing machine (DTM). The circuit family that emulates a DTM is almost entirely monotone, except for some NOT-gates connected to the input variables (like negation normal form (NNF)). We can therefore analyze the DTM's limitations by using this NNF circuit family. An NNF circuit has symmetry in the input lines of an OR-gate, so the circuit cannot identify from the OR-gate's output line which of the input lines is 1. Consequently the NNF circuit family cannot compute a sandwich structure effectively (a sandwich structure is a pair of accepted inputs that sandwich rejected inputs in Hamming space). An NNF circuit has to use a unique AND-gate to identify each different vector of the sandwich structure; that is, we can measure a problem's complexity by counting different vectors. Some decision problems have characteristic sandwich structures. The different vectors of the negated HornSAT problem are at most of constant length, because we can delete the constant part of each negative literal in Horn clauses by using definite clauses; therefore the number of these different vectors is at most polynomial in size. On the other hand, we can design high-complexity problems with almost perfect nonlinear (APN) functions.
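To make the "sandwich structure" concrete, here is a small hedged illustration (my own, not the paper's code): a rejected input sits between two accepted inputs when it lies on a shortest path between them in Hamming space.

```python
def hamming(a, b):
    # Hamming distance between two equal-length bit strings.
    return sum(x != y for x, y in zip(a, b))

def is_sandwiched(r, a1, a2):
    # r lies between a1 and a2 in Hamming space iff it sits on a
    # shortest path: d(a1, r) + d(r, a2) == d(a1, a2).
    return hamming(a1, r) + hamming(r, a2) == hamming(a1, a2)

# 010 differs from 000 and from 011 in one bit each, and d(000, 011) = 2,
# so a rejected 010 would be sandwiched by accepted 000 and 011.
print(is_sandwiched("010", "000", "011"))  # True
print(is_sandwiched("100", "000", "011"))  # False
```

A circuit that must accept 000 and 011 while rejecting 010 cannot do so with OR-gates alone, which is the intuition behind the paper's counting of distinct AND-gate "different vectors".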
[2769] vixra:1803.0740 [pdf]
Planck Mass Measured Totally Independent of Big G Utilizing McCulloch-Heisenberg Newtonian Equivalent Gravity
In 2014, McCulloch showed, in a new and interesting way, how to derive a gravity theory from Heisenberg's uncertainty principle that is equivalent to Newtonian gravity. McCulloch utilizes the Planck mass in his derivation and obtains a gravitational constant of hbar*c/m_p^2. This is a composite constant, equivalent in value to Newton's gravitational constant. However, McCulloch has pointed out that his approach requires an assumption on the value of G, and that this involves some circular reasoning. This is in line with the view that the Planck mass is a constant derived from Newton's gravitational constant, while big G is a universal fundamental constant. Here we will show that we can go straight from the McCulloch derivation to measuring the Planck mass without any knowledge of the gravitational constant. From this perspective, there are no circularity problems with his method. This means that we can measure the Planck mass without Newton's gravitational constant, and shows that the McCulloch derivation is a theory of quantum gravity that stands on its own. Further, we show that we can easily measure the Schwarzschild radius of a mass without knowing its mass, Newton's gravitational constant, or the Planck constant. The very essence of gravity is linked to the Planck length and the speed of light, but here we will claim that we do not need to know the Planck length itself. Our conclusion is that Newton's gravitational constant is a universal constant, but it is a composite constant of the form G=l_p^2*c^3/hbar, where the Planck length and the speed of light are the keys to gravity. This could be an important step towards the development of a full theory of quantum gravity.
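The claimed decomposition G = l_p^2*c^3/hbar can be sanity-checked numerically; the CODATA values below are my inputs for illustration, not figures taken from the paper:

```python
# Numeric check that l_p^2 * c^3 / hbar reproduces Newton's G.
hbar = 1.054571817e-34   # reduced Planck constant, J s (CODATA)
c = 2.99792458e8         # speed of light, m/s (exact)
l_p = 1.616255e-35       # Planck length, m (CODATA)

G = l_p ** 2 * c ** 3 / hbar
m_p = (hbar * c / G) ** 0.5  # Planck mass recovered from the same inputs

print(G)    # ~ 6.674e-11 m^3 kg^-1 s^-2
print(m_p)  # ~ 2.176e-8 kg
```

The arithmetic only confirms that the composite form is numerically consistent; the paper's claim that the Planck mass can be *measured* without knowing G is a separate, physical argument.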
[2770] vixra:1803.0708 [pdf]
Enough of the Trap of Non-Existent Dark Matter
Ever since abnormal rotation speeds within galaxies were discovered experimentally, the so-called dark matter has been sought incessantly, yet all the experiments still fail to find such a ghost. Its justification rests on the apparently simplest explanation: the existence of matter additional to the baryonic matter within galaxies. But there is another possible explanation, more natural and just as simple: a modification of gravity due to quantum-gravity effects produced by the repulsive acceleration of the vacuum. A repulsion of the vacuum would produce, as we demonstrated with experiments with magnets of equal polarity, a pressure towards the center of mass and, therefore, a fictitious increase in mass. In this short article we calculate the average rotation speed within the newly discovered ultradiffuse galaxy NGC1052-DF2. This calculation shows that the MOND-type equation agrees perfectly with the estimated velocities. Therefore, some voices that have too quickly announced that this galaxy (almost without dark matter) disagrees with theories of modified gravitation (that is, a quantum gravitation that would quantize the equations of Einstein's gravitation, or GR) are simply avoiding the real explanation: quantum-gravity effects produced by the vacuum (for the very large masses of galaxies).
[2771] vixra:1803.0707 [pdf]
Fine-Structure Constant from Golden Ratio Geometry
After a brief review of the golden ratio in history and our previous exposition of the fine-structure constant and equations with the exponential function, the fine-structure constant is studied in the context of other research calculating the fine-structure constant from the golden ratio geometry of the hydrogen atom. This research is extended and the fine-structure constant is then calculated in powers of the golden ratio to an accuracy consistent with the most recent publications. The mathematical constants associated with the golden ratio are also involved in both the calculation of the fine-structure constant and the proton-electron mass ratio. These constants are included in symbolic geometry of historical relevance in the science of the ancients.
[2772] vixra:1803.0678 [pdf]
Probability and Entanglement
Here the concept "TRUE" is defined according to Alfred Tarski, and the concept "OCCURRING EVENT" is derived from this definition. From here, we obtain operations on events and the properties of these operations, and derive the main properties of CLASSICAL PROBABILITY. PHYSICAL EVENTS are defined as the results of applying these operations to DOT EVENTS. Next, the 3+1 vector of the PROBABILITY CURRENT and the EVENT STATE VECTOR are determined. The presence in our universe of Planck's constant gives reason to presume that our world is in a CONFINED SPACE. In such spaces, functions are represented by Fourier series, and these representations allow formulating the ENTANGLEMENT phenomenon.
[2773] vixra:1803.0675 [pdf]
A Survey on Reasoning on Building Information Models Based on IFC
Building Information Models (BIM) are computer models that act as a main source of building information and integrate several aspects of engineering and architectural design, including building utilisation. They aim at enhancing the efficiency and effectiveness of projects during design, construction, and maintenance. Artificial Intelligence, which is used to automate tasks that would otherwise require intelligence, has found its way into BIM through the application of reasoners, among other techniques. A reasoner is a piece of software that makes implicit and hidden knowledge explicit by using logical inference techniques. Reasoners are applied to BIM to help take enhanced decisions and to assess construction projects. The importance of BIM in both the construction and information technology sectors has motivated many researchers to work on surveys that attempt to capture the current state of BIM, but unfortunately, none of these surveys has focused on reasoning on BIM. In this article we survey the research proposals and toolkits that rely on using reasoning systems on BIM, and we classify them into a two-level schema based on what they are intended for. According to our survey, reasoning is mainly used for solving design problems, and is especially applied to code consistency checking, with an emphasis on semantic web technologies. Furthermore, user-friendliness is still a gap in this field, and case-based reasoning, which was often applied in past efforts, is still hardly applied for reasoning on BIM. The survey shows that this research area is active and that the research results are progressively being integrated into commercial toolkits.
[2774] vixra:1803.0660 [pdf]
A Short Note From Myself to Myself to Better Understand Hawking
In this short paper we look at the Hawking temperature from a Newtonian perspective as well as a General Relativity perspective. If we are considering the Hawking temperature and simply replace the gravitational field input in his 1974 formula with that of Newton, we will get a Hawking temperature of half that of the well-known Hawking temperature formula. This is very similar to the case where Newton’s theory predicts half the light bending that GR does. Based on recent theoretical research on Newton’s gravitational constant, we also rewrite the Hawking temperature to give a somewhat different perspective without changing the output of the formula, which makes some of the Hawking formulas more intuitive.
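As a numerical aside, the standard 1974 Hawking temperature is T_H = ħc³/(8πGMk_B); a minimal sketch below evaluates it for one solar mass and also shows the half-value that the abstract attributes to a Newtonian field input (that halving is the paper's claim, not derived here).

```python
# Illustrative only: standard Hawking temperature, plus the abstract's claimed
# Newtonian half-value (the claim is the paper's, not a derivation).
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23      # Boltzmann constant, J/K

def hawking_temperature(M):
    """Standard Hawking temperature of a black hole of mass M (kg), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

M_sun = 1.98892e30  # solar mass, kg
T = hawking_temperature(M_sun)
T_newtonian = T / 2  # half of T_H, per the abstract's Newtonian-input claim
print(f"T_H(solar mass) ~ {T:.2e} K, Newtonian variant ~ {T_newtonian:.2e} K")
```

For a solar-mass black hole this gives a temperature of order 10⁻⁸ K, far below the cosmic microwave background.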
[2775] vixra:1803.0655 [pdf]
Heisenberg Quantum Probabilities Leads to a Quantum Gravity Theory that Requires Much Less Mass to Explain Gravitational Phenomena
In this paper we suggest that through working with the Planck mass and its link to other particles in a simple way, it is possible to “convert” the Heisenberg uncertainty principle into a very simple quantum probabilistic model. We further combine this with key elements from special relativity theory and get an interesting quantum relativistic probability theory. Some of the key points presented here could help to eliminate the negative and above-unity (pseudo) probabilities that are often used in standard quantum mechanics. These fake probabilities may be rooted in a failure to understand the Heisenberg principle fully in relation to the Planck mass. When properly understood, the Heisenberg principle seems to give a probabilistic range of quantum probabilities that is sound. There are no instantaneous probabilities, and the maximum probability is always unity. In our formulation, the Planck mass particle is always related to a probability of one. Thus, we have certainty at the Planck scale for the Planck mass particle, or for particles accelerated to reach Planck energy. We also present a relativistic extension of the McCulloch Heisenberg-derived Newton-equivalent gravity theory. Our relativistic version requires much less mass than the Newtonian theory to explain gravitational phenomena, and initial investigation indicates it is consistent with the perihelion precession of Mercury.
[2776] vixra:1803.0642 [pdf]
Heisenberg Probabilistic Quantum Gravity that Holds at the Subatomic and the Macroscopic Scale
Here we will present a probabilistic quantum gravity theory derived from Heisenberg’s uncertainty principle. Surprisingly, this theory is fully deterministic when operating with masses that are exactly divisible by the Planck mass. For masses or mass parts less than one Planck mass, we find that probabilistic effects play an important role. Most macroscopic masses will have both a deterministic gravity part and a probabilistic gravity part. In 2014, McCulloch derived Newtonian gravity from Heisenberg’s uncertainty principle. McCulloch himself pointed out that his theory only seems to hold as long as one operates with whole Planck masses. For those who have studied his interesting theory, there may seem to be a mystery around how a theory rooted in Heisenberg’s principle, which was developed to understand quantum uncertainty, can give rise to a Newtonian gravity theory that works at the cosmic scale (which is basically deterministic). However, the deeper investigation introduced here shows that the McCulloch method is very likely correct and can be extended to hold for masses that are not divisible by the Planck mass, a feature that we describe in more detail here. Our extended quantum gravity theory also points out, in general directions, how we can approach the setup of experiments to measure the gravitational constant more accurately.
[2777] vixra:1803.0633 [pdf]
Almost All Monotone Circuit Family
This paper describes the "almost all monotone circuit family" and introduces its advantages for measuring problem complexity. As explained in Michael Sipser's Introduction to the Theory of Computation, a circuit family that emulates a deterministic Turing machine (DTM) is almost entirely monotone, apart from some NOT gates connected to the input variables (as in negation normal form, NNF). This "NNF circuit family" has a useful characteristic: it presents each accepted input with exclusivity and symmetry. Each input makes some set of INPUT gates and NOT gates output 1, a set different from that of any other input; these outputs meet in OR gates and finally connect to the (specified) OUTPUT gate. That is, an NNF circuit starts from accepted inputs that are mutually exclusive, merges these symmetric inputs step by step, and finally reaches the same output. In particular, certain differing variables that sandwich rejected inputs correspond to a unique AND gate. We can therefore measure problem complexity by counting the differing variables of accepted inputs. For example, the number of such differing-variable types for negation HornSAT is at most polynomial in size. This is one reason why P problems can be computed easily.
[2778] vixra:1803.0620 [pdf]
Notions of Rough Neutrosophic Digraphs
Graph theory has numerous applications in various disciplines, including computer networks, neural networks, expert systems, cluster analysis, and image capturing. Rough neutrosophic set (NS) theory is a hybrid tool for handling uncertain information that exists in real life.
[2779] vixra:1803.0612 [pdf]
On Neutrosophic Soft Metric Space
In this paper, the notion of a neutrosophic soft metric space (NSMS) is introduced in terms of neutrosophic soft points, and several related properties and structural characteristics are investigated. The convergence of sequences in a neutrosophic soft metric space is then defined and illustrated by examples. Further, the concept of a Cauchy sequence in an NSMS is developed and some related theorems are established.
[2780] vixra:1803.0598 [pdf]
Review on BCI/BCK-Algebras and Development
The aim of the paper is to investigate the relationship between BCK/BCI-algebras and other algebras, namely d-algebras, Q-algebras, BCH-algebras, TM-algebras, and INK-algebras, and to introduce some algebraic systems.
[2781] vixra:1803.0597 [pdf]
Rough Neutrosophic Digraphs with Application
A rough neutrosophic set model is a hybrid model which deals with vagueness by using the lower and upper approximation spaces. In this research paper, we apply the concept of rough neutrosophic sets to graphs.
[2782] vixra:1803.0591 [pdf]
Technology-Embedded Hybrid Learning
With the rapid surge in technological advancements, an equal amount of investment in technology-embedded teaching has become vital to keep pace with ongoing educational needs. Distance education has evolved from the era of postal services to the use of ICT tools in current times. With the aid of globally updated content across the board, technology usage ensures all students receive equal attention without any discrimination. Importantly, web-based teaching allows all kinds of students to learn at their own pace, without the fear of being judged, including professionals who can learn remotely without disturbing their job schedules. Having web-based content allows low-cost and robust implementation of content upgrades. An improved, yet effective, version of education using such tools is Hybrid Learning (HL). This learning mode aims to provide strong reinforcement to its candidates while maintaining the quality standards of its various elements. Incorporating both traditional and distance learning methods, along with exploiting social media tools for increased comfort and peer-to-peer collaboration, HL ultimately facilitates the end user and the educational setup. The structure of such a hybrid model is realized by delivering the study material via a learning management system (LMS) designed in compliance with quality standards, which is a fundamental technique for controlling quality constraints. In this paper, we present a recently piloted project by the COMSATS Institute of Information Technology (CIIT) that is driven by a technology-embedded teaching model. This model is an amalgam of the traditional classroom model and state-of-the-art online learning technologies. The students are enrolled as full-time students, with all courses in traditional classroom mode except for one course offered as a hybrid course.
This globally adapted model helps students benefit from face-to-face learning as well as gain hands-on experience with a technology-enriched education model providing flexibility of timing, learning pace, and boundaries. Our HL model is equipped with two major blocks, synchronous and asynchronous. The synchronous block delivers real-time live interaction scenarios using discussion boards, thereby providing a face-to-face environment. Interaction via social networks has produced a comparable improvement in output performance. The asynchronous block refers to the lecture videos, slides and handouts, prepared by eminent professors, available 24/7 to students. To ensure quality output, our HL model follows course learning outcomes (CLOs) and program learning outcomes (PLOs) as per international standards. As a proof of concept, we have deployed a mechanism at the end of each semester to verify the effectiveness of our model. This mechanism fundamentally surveys the satisfaction levels of all the students enrolled in the HL courses. In the surveys already conducted, a significant level of satisfaction has been noted. Extensive results from these surveys are presented in the paper to further validate the efficiency and robustness of our proposed HL model.
[2783] vixra:1803.0584 [pdf]
Single Valued Neutrosophic Exponential Similarity Measure for Medical Diagnosis and Multi Attribute Decision Making
A neutrosophic set (NS) is very useful for expressing incomplete, uncertain, and inconsistent information in a more general way. In modern medical technologies, each element can be expressed as an NS having different truth-membership, indeterminacy-membership, and falsity-membership degrees.
[2784] vixra:1803.0583 [pdf]
Single-Valued Neutrosophic Hesitant Fuzzy Choquet Aggregation Operators for Multi-Attribute Decision Making
This paper aims at developing new methods for multi-attribute decision making (MADM) under a single-valued neutrosophic hesitant fuzzy environment, in which each element has sets of possible values designated by truth, indeterminacy, and falsity membership hesitant functions.
[2785] vixra:1803.0569 [pdf]
Some Hybrid Weighted Aggregation Operators Under Neutrosophic Set Environment and Their Applications to Multi Criteria Decision Making
Neutrosophic sets (NSs) contain three ranges: truth, indeterminacy, and falsity membership degrees, and are very useful for describing and handling the uncertainties in real-life problems.
[2786] vixra:1803.0563 [pdf]
Special Timelike Smarandache Curves in Minkowski 3-Space
In Smarandache geometry, a regular non-null curve in Minkowski 3-space whose position vector is composed of the Frenet frame vectors of another regular non-null curve is said to be a Smarandache curve.
[2787] vixra:1803.0561 [pdf]
Summary of the Special Issue “Neutrosophic Information Theory and Applications” at “Information” Journal
Over a period of seven months (August 2017–February 2018), the Special Issue dedicated to “Neutrosophic Information Theory and Applications” by the “Information” journal (ISSN 2078-2489), located in Basel, Switzerland, was a success.
[2788] vixra:1803.0551 [pdf]
Very True Pseudo-BCK Algebras
In this paper we introduce the very true operators on pseudo-BCK algebras and we study their properties. We prove that the composition of two very true operators is a very true operator if and only if they commute.
[2789] vixra:1803.0541 [pdf]
Models for Multiple Attribute Decision-Making with Dual Generalized Single-Valued Neutrosophic Bonferroni Mean Operators
In this article, we expand the dual generalized weighted BM (DGWBM) and dual generalized weighted geometric Bonferroni mean (DGWGBM) operators with single-valued neutrosophic numbers (SVNNs) to propose the dual generalized single-valued neutrosophic number WBM (DGSVNNWBM) operator and the dual generalized single-valued neutrosophic number WGBM (DGSVNNWGBM) operator. Then, multiple attribute decision making (MADM) methods are proposed with these operators. In the end, we use an applicable example of strategic supplier selection to demonstrate the proposed methods.
[2790] vixra:1803.0538 [pdf]
Multi-criteria Decision-making Approach based on Multi-valued Neutrosophic Geometric Weighted Choquet Integral Heronian Mean Operator
Multi-valued neutrosophic sets (MVNSs) have recently become a subject of great interest for researchers, and have been applied widely to multi-criteria decision-making (MCDM) problems. In this paper, the multi-valued neutrosophic geometric weighted Choquet integral Heronian mean (MVNGWCIHM) operator, which is based on the Heronian mean and Choquet integral, is proposed, and some special cases and the corresponding properties of the operator are discussed.
[2791] vixra:1803.0535 [pdf]
Multiple Attribute Decision-Making Method Using Correlation Coefficients of Normal Neutrosophic Sets
The normal distribution is one of the most common distributions in the real world. A normal neutrosophic set (NNS) is composed of both a normal fuzzy number and a neutrosophic number, which is a significant tool for describing the incompleteness, indeterminacy, and inconsistency of decision-making information.
[2792] vixra:1803.0524 [pdf]
Neutrosophic Hough Transform
The Hough transform (HT) is a useful tool for both the pattern recognition and image processing communities. From the viewpoint of pattern recognition, it can extract unique features for the description of various shapes, such as lines, circles, and ellipses.
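The classical line HT that the abstract builds on can be sketched in a few lines: each point votes in a discretized (θ, ρ) parameter space, and collinear points pile their votes into one accumulator cell. This is a minimal illustration of the plain transform only; the neutrosophic extension itself is not reproduced here.

```python
# Minimal classical line Hough transform: vote in (theta_deg, rho) space.
import math
from collections import Counter

def hough_lines(points, theta_steps=180):
    """Each point votes for every line x*cos(theta) + y*sin(theta) = rho through it."""
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1
    return acc

# Four points on the horizontal line y = 2; the peak cell recovers (theta, rho) = (90, 2).
pts = [(0, 2), (10, 2), (20, 2), (30, 2)]
(theta_deg, rho), votes = hough_lines(pts).most_common(1)[0]
print(theta_deg, rho, votes)  # 90 2 4
```

In practice the accumulator is run over edge pixels of an image and several peaks are kept, one per detected line.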
[2793] vixra:1803.0522 [pdf]
Neutrosophic Ideals of Semirings
Neutrosophic ideals of a semiring are introduced and studied in the sense of Smarandache [14], along with some operations on them such as intersection, composition, and cartesian product. Among other results and characterizations, it is shown that all the operations are structure preserving.
[2794] vixra:1803.0521 [pdf]
Neutrosophic Linear Equations and Application in Traffic Flow Problems
A neutrosophic number (NN), presented by Smarandache, can express determinate and/or indeterminate information in real life. An NN z = a + uI consists of the determinate part a and the indeterminate part uI, for a, u ∈ R (R being the set of real numbers) and indeterminacy I, and is very suitable for representing and handling problems with both determinate and indeterminate information.
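The z = a + uI form above can be sketched directly: when the indeterminacy I is restricted to an interval, an NN evaluates to an interval of possible values. This is an illustrative sketch of that arithmetic only; the class and names below are not from the paper.

```python
# Sketch of neutrosophic-number arithmetic: z = a + u*I, with I restricted
# to an interval [i_lo, i_hi]. Names are illustrative, not from the paper.
class NN:
    def __init__(self, a, u):
        self.a, self.u = a, u  # determinate part a, indeterminate coefficient u

    def __add__(self, other):
        # (a1 + u1*I) + (a2 + u2*I) = (a1 + a2) + (u1 + u2)*I
        return NN(self.a + other.a, self.u + other.u)

    def interval(self, i_lo, i_hi):
        """Possible value range of z = a + u*I when I ranges over [i_lo, i_hi]."""
        lo, hi = self.a + self.u * i_lo, self.a + self.u * i_hi
        return (min(lo, hi), max(lo, hi))

z1 = NN(3, 2)   # 3 + 2I
z2 = NN(1, -1)  # 1 - I
z = z1 + z2     # 4 + I
print(z.interval(0, 0.5))  # I in [0, 0.5] -> values in [4.0, 4.5]
```

Systems of neutrosophic linear equations, as in the traffic flow application, reduce in the same way to interval-valued solutions once a range for I is fixed.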
[2795] vixra:1803.0517 [pdf]
Neutrosophic N-Structures and Their Applications in Semigroups
The notion of a neutrosophic N-structure is introduced and applied to semigroups. The notions of a neutrosophic N-subsemigroup, the neutrosophic N-product, and the ε-neutrosophic N-subsemigroup are introduced, and several properties are investigated.
[2796] vixra:1803.0515 [pdf]
Neutrosophic Number Nonlinear Programming Problems and Their General Solution Methods under Neutrosophic Number Environments
The possible optimal ranges of the decision variables and NN objective function are indicated when the indeterminacy I is considered for possible interval ranges in real situations.
[2797] vixra:1803.0511 [pdf]
Neutrosophic Rough Set Algebra
A rough set is a formal approximation of a crisp set, which gives lower and upper approximations of the original set to deal with uncertainties. The concept of a neutrosophic set is a mathematical tool for handling imprecise, indeterministic and inconsistent data. In this paper, we define concepts of rough neutrosophic algebra and investigate some of their properties.
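The lower/upper approximation machinery the abstract starts from is the standard Pawlak construction: given a partition of the universe into indiscernibility classes, the lower approximation collects blocks entirely inside the target set and the upper approximation collects blocks that touch it. A minimal sketch (the neutrosophic layer is not reproduced):

```python
# Pawlak-style rough set approximations of a crisp target set,
# given a partition of the universe into indiscernibility classes.
def approximations(partition, target):
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= target:   # block entirely inside the target: certain members
            lower |= block
        if block & target:    # block overlapping the target: possible members
            upper |= block
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5, 6, 7}, {8}]
lower, upper = approximations(blocks, {1, 2, 3, 5})
print(lower, upper)  # lower = {1, 2}; upper = {1, 2, 3, 4, 5, 6, 7}
```

The boundary region upper − lower is exactly where the set is "rough", i.e. where membership is undecidable at the granularity of the partition.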
[2798] vixra:1803.0507 [pdf]
Neutrosophic Triplet Normed Space
In this paper; new properties for neutrosophic triplet groups are introduced. A notion of neutrosophic triplet metric space is given and properties of neutrosophic triplet metric spaces are studied.
[2799] vixra:1803.0506 [pdf]
Neutrosophic Vague Generalized Pre-Closed Sets in Neutrosophic Vague Topological Spaces
The aim of this paper is to introduce and develop a new class of sets, namely neutrosophic vague generalized pre-closed sets in neutrosophic vague topological spaces. Further, we analyse the properties of neutrosophic vague generalized pre-open sets.
[2800] vixra:1803.0490 [pdf]
Correlation Coefficients of Probabilistic Hesitant Fuzzy Elements and Their Applications to Evaluation of the Alternatives
The correlation coefficient is one of the most broadly used indexes in multi-criteria decision-making (MCDM) processes. However, some important issues related to correlation coefficient utilization within probabilistic hesitant fuzzy environments remain to be addressed.
[2801] vixra:1803.0487 [pdf]
Dynamical Dark Energy and the Relativistic Bohm-Poisson Equation
The $nonlinear$ Bohm-Poisson-Schroedinger equation is studied further. It has solutions leading to $repulsive$ gravitational behavior. An exact analytical expression for the observed vacuum energy density is obtained. Further results are provided which include two possible extensions of the Bohm-Poisson equation to the full relativistic regime. Two specific solutions to the novel Relativistic Bohm-Poisson equation (associated to a real scalar field) are provided encoding the repulsive nature of dark energy. One solution leads to an exact cancellation of the cosmological constant, but an expanding decelerating cosmos; while the other solution leads to an exponential accelerated cosmos consistent with a de Sitter phase, and whose extremely small cosmological constant is $ \Lambda = { 3 \over R_H^2}$, consistent with current observations. We conclude with some final remarks about Weyl's geometry.
[2802] vixra:1803.0485 [pdf]
Discovered “Angel Particle”, which is Both Matter and Antimatter, as a New Experimental Proof of Unmatter
“Angel particle” bearing properties of both particles and anti-particles, which was recently discovered by the Stanford team of experimental physicists, is usually associated with Majorana fermions (predicted in 1937 by Ettore Majorana). In this message we point out that particles bearing properties of both matter and anti-matter were as well predicted without any connexion with particle physics, but on the basis of pure mathematics, namely — neutrosophic logic which is a generalization of fuzzy and intuitionistic fuzzy logics in mathematics.
[2803] vixra:1803.0466 [pdf]
Generalizations of Neutrosophic Subalgebras in BCK/BCI-Algebras Based on Neutrosophic Points
As a more general platform extending the notions of the classical set, fuzzy set, interval-valued fuzzy set, intuitionistic fuzzy set, and interval-valued intuitionistic fuzzy set, Smarandache developed the concept of a neutrosophic set, which consists of three membership functions: the so-called truth membership function, indeterminacy membership function, and falsity membership function.
[2804] vixra:1803.0465 [pdf]
Generalized Interval Neutrosophic Rough Sets and its Application in Multi-Attribute Decision Making
Neutrosophic set (NS) was originally proposed by Smarandache to handle indeterminate and inconsistent information. It is a generalization of fuzzy sets and intuitionistic fuzzy sets. Wang and Smarandache proposed interval neutrosophic sets (INS) which is a special case of NSs and would be extensively applied to resolve practical issues.
[2805] vixra:1803.0464 [pdf]
Generalized Neutrosophic Contra-Continuity
In this paper, the concepts of generalized neutrosophic contra-continuous functions, generalized neutrosophic contra-irresolute functions and strongly generalized neutrosophic contra-continuous functions are introduced. Some interesting properties are also studied.
[2806] vixra:1803.0463 [pdf]
Toroidal Approach to the Doubling of the Cube
A doubling of the cube is attempted as a problem equivalent to the doubling of a horn torus. Both doublings are attained through the circle of Apollonius.
[2807] vixra:1803.0460 [pdf]
Graph Structures in Bipolar Neutrosophic Environment
A bipolar single-valued neutrosophic (BSVN) graph structure is a generalization of a bipolar fuzzy graph. In this research paper, we present certain concepts of BSVN graph structures. We describe some operations on BSVN graph structures and elaborate on these with examples. Moreover, we investigate some related properties of these operations.
[2808] vixra:1803.0456 [pdf]
How Objective a Neutral Word Is? A Neutrosophic Approach for the Objectivity Degrees of Neutral Words
In the latest studies concerning the sentiment polarity of words, the authors mostly consider the positive and negative constructions, without paying too much attention to the neutral words, which can have, in fact, significant sentiment degrees.
[2809] vixra:1803.0452 [pdf]
Inductive Learning in Shared Neural Multi-Spaces
The learning of rules from examples is of continuing interest to machine learning since it allows generalization from fewer training examples. Inductive Logic Programming (ILP) generates hypothetical rules (clauses) from a knowledge base augmented with (positive and negative) examples.
[2810] vixra:1803.0445 [pdf]
Intuitionistic Continuous, Closed and Open Mappings
First of all, we define an intuitionistic quotient mapping and obtain some of its properties. Second, we define some types of continuities and open and closed mappings, investigate the relationships among them, and give some examples. Finally, we introduce the notions of an intuitionistic subspace and heredity, and obtain some properties of each concept.
[2811] vixra:1803.0443 [pdf]
Intuitionistic Topological Spaces
First of all, we list some concepts and results introduced in [10, 15]. Second, we give some examples related to intuitionistic topologies and intuitionistic bases, and obtain two properties of an intuitionistic base and an intuitionistic subbase. We then define intuitionistic intervals in R. Finally, we define some types of intuitionistic closures and interiors, and obtain some of their properties.
[2812] vixra:1803.0440 [pdf]
Logarithmic Similarity Measure Between Interval-Valued Fuzzy Sets and Its Fault Diagnosis Method
Fault diagnosis is an important task for the normal operation and maintenance of equipment. In many real situations, the diagnosis data cannot provide deterministic values and are usually imprecise or uncertain.
[2813] vixra:1803.0406 [pdf]
Computing the Greatest X-eigenvector of Fuzzy Neutrosophic Soft Matrix
Forms of uncertainty play a very important part in our daily life. We routinely handle real-life problems involving uncertainty in fields such as medicine, engineering, industry, and economics.
[2814] vixra:1803.0402 [pdf]
Letter to a Friend on Rumsfeld's Unknown Unknown ...
The essay is about how to recover the vast majority of the present day Western world from their aggressive secularism, and do so with the help of the concept of UNKNOWN UNKNOWN introduced - rather accidentally - back in 2002 by Donald Rumsfeld ...
[2815] vixra:1803.0393 [pdf]
Microstrip Quad-Channel Diplexer Using Quad-Mode Square Ring Resonators
A new compact microstrip quad-channel diplexer (2.15/3.60 GHz and 2.72/5.05 GHz) using quad-mode square ring resonators is proposed. The quad-channel diplexer is composed of two quad-mode square ring resonators (QMSRR) with one common input and two output coupled-line structures. By adjusting the impedance ratio and length of the QMSRR, the resonant modes can be easily controlled to implement a dual-band bandpass filter. The diplexer has a small circuit size, since it is constructed from only two QMSRRs and a common input coupled-line structure, while keeping good isolation (> 28 dB). Good agreement is achieved between measurement and simulation.
[2816] vixra:1803.0392 [pdf]
A Retinal Vessel Detection Approach Based on Shearlet Transform and Indeterminacy Filtering on Fundus Images
A fundus image is an effective tool for ophthalmologists studying eye diseases. Retinal vessel detection is a significant task in the identification of retinal disease regions. This study presents a retinal vessel detection approach using shearlet transform and indeterminacy filtering.
[2817] vixra:1803.0390 [pdf]
A Side Scan Sonar Image Target Detection Algorithm Based on a Neutrosophic Set and Diffusion Maps
To accurately achieve side scan sonar (SSS) image target detection, a novel target detection algorithm based on a neutrosophic set (NS) and diffusion maps (DMs) is proposed in this paper. Firstly, the neutrosophic subset images were obtained by transforming the input SSS image into the NS domain. Secondly, the shadowed areas of the SSS image were detected using the single gray value threshold method before the diffusion map was calculated.
[2818] vixra:1803.0367 [pdf]
Algorithms for Interval Neutrosophic Multiple Attribute Decision-Making Based on MABAC, Similarity Measure, and EDAS
In this paper, we define a new axiomatic definition of interval neutrosophic similarity measure, which is presented by interval neutrosophic number (INN). Later, the objective weights of various attributes are determined via Shannon entropy theory; meanwhile, we develop the combined weights, which can show both subjective information and objective information.
[2819] vixra:1803.0320 [pdf]
A Novel Triangular Interval Type-2 Intuitionistic Fuzzy Sets and Their Aggregation Operators
The objective of this work is to present a triangular interval type-2 (TIT2) intuitionistic fuzzy sets and their corresponding aggregation operators, namely, TIT2 intuitionistic fuzzy weighted averaging, TIT2 intuitionistic fuzzy ordered weighted averaging and TIT2 intuitionistic fuzzy hybrid averaging based on Frank norm operation laws.
[2820] vixra:1803.0317 [pdf]
Analysis of Riemann's Hypothesis
Let $p(c,r,v)=e^{(c-1)(r+2v)} \log\left(\frac{\lambda(r+v)}{q(r+v)}\right) \log\left(\frac{\lambda(v)}{q(v)}\right)$ and $f(c,r)=\int_{-\infty}^{\infty} \left[ p(c,r,v)+p(c,-r,v) \right] dv$. Let $c$ be a real number such that $0<c<1/2$. Suppose that $f(c,r)<0$ and $\frac{\partial}{\partial r}f(c,r)>0$ for all $r\ge 0$, while $\frac{\partial}{\partial c}f(c,r)<0$ and $\frac{\partial^2}{\partial c\,\partial r}f(c,r)>0$ for all $r>0$. Then $\zeta(c+i\omega) \ne 0$ for all $\omega$.
[2821] vixra:1803.0307 [pdf]
Electron Spin 1/2 is "Hidden" Electromagnetic Field Angular Momentum
This is to present and discuss an alternative method for precise analytical determination of electron spin angular momentum 1/2. The method is based on the Lorentz-force acting on a point-like charge moved through the entire magnetic dipole-field of the electron. The result hbar/2 coincides with a previous result based on Lagrangian electrodynamics and confirms the "hidden" electromagnetic origin of spin angular momentum. Both methods reveal a key role of the "classical" electron radius.
[2822] vixra:1803.0303 [pdf]
The Experiments Of The Bottle And The Beam For The Lifetime Of The Neutron: A Theoretical Approximation Derived From The Casimir Effect
The most recent neutron lifetime experiments using the bottle method rule out possible experimental errors and sources of interference, mainly the interaction of the neutrons with the material of the walls of the bottle. Therefore, the discrepancy between the neutron lifetimes obtained in beam and bottle experiments requires a theoretical explanation. The main and crucial difference between the beam and bottle experiments is their different topology: while in the beam experiment the neutrons are not confined, in the bottle experiment confinement takes place. In our theoretical approach we postulate a type of Casimir effect that, due to the different geometry-topology of the experiments, produces an induction-polarization of the vacuum by the confinement and the presence of the trapped neutrons, in such a way that there is an increase in the density of the u and d quarks, gluons, and the virtual W and Z bosons. This density increase, mainly of the neutral Z bosons, would be responsible for the increased probability of the decay of a neutron into a proton, and therefore for the shortening of the neutron decay time in the bottle experiment to 877.7 seconds.
[2823] vixra:1803.0301 [pdf]
Mathematical Modeling Technique
In this paper, a general modeling principle is introduced that was found useful for modeling complex physical systems for engineering applications. The technique is a nonlinear asymptotic method (NLAM), constructed from simplified physical theories, i.e., physical theories that were developed from particular points of view, that can be used to construct a more global theory. Originally, the technique was envisioned primarily for engineering applications, but its success has led to a more general principle. Four examples are presented to discuss and illustrate this method.
[2824] vixra:1803.0293 [pdf]
The Hitchhikers' Guide to Reading and Writing Health Research
In this paper, we introduce the concepts of critically reading research papers and writing research proposals and reports. Research methods is a general term that includes the processes of observing the world around the researcher, linking background knowledge with foreground questions, drafting a plan for collecting data, framing theories and hypotheses, testing the hypotheses, and finally writing up the research to evoke new knowledge. These processes vary with the themes and disciplines the researcher engages in; nevertheless, common motifs can be found. We propose that three methods are interlinked: a deductive reasoning process, where the structure of a thought can be captured critically; an inductive reasoning method, where the researcher can appraise and generate generalisable ideas from observations of the world; and finally, an abductive reasoning method, where the world, or the phenomena observed, can be explained or accounted for. This step of reasoning is also about framing theories, testing and challenging established knowledge, and finding the best theories and how theories best fit the observations. We start with a discussion of the different types of statements that one can come across in scholarly literature, or even in lay or semi-serious literature, appraise them, and identify arguments from non-arguments and explanations from non-explanations. We then outline three strategies to appraise and identify reasoning in explanations and arguments. We end with a discussion of how to draft a research proposal and a reading/archiving strategy for research.
[2825] vixra:1803.0289 [pdf]
New Discovery on Goldbach Conjecture
In this paper we give a proof of the Goldbach conjecture by introducing a new lemma which implies the conjecture. Using the Chebotarev-Artin theorem, Mertens' formula, and the Poincaré sieve, we establish the lemma.
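For orientation, the conjecture's statement (every even n ≥ 4 is a sum of two primes) is easy to spot-check empirically; the sketch below does so for small even numbers. This is an illustration of the statement only and says nothing about the paper's claimed proof.

```python
# Empirical spot-check of the Goldbach statement for small even numbers
# (illustration only; unrelated to the paper's claimed proof).
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return a prime pair (p, n - p) for even n >= 4, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

assert all(goldbach_pair(n) for n in range(4, 1000, 2))
print(goldbach_pair(100))  # (3, 97)
```

Such checks have been carried much further by computer searches, but of course no finite search settles the conjecture.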
[2826] vixra:1803.0273 [pdf]
Loose Comments on the Topics of Speed of Light, Generalized Continuity and Abductive Reasoning in Science
The possible conclusions resulting from the huge speed of massless objects, and certain remarks related to continuity in physics accompanying revolutionary changes, are presented. We also show the relationship of the constant velocity of waves propagating in a vacuum to the principle of relativity. We also mention abductive reasoning (retroduction) in science.
[2827] vixra:1803.0271 [pdf]
The Mystery of Mass as Understood from Atomism
Over the past few years I have presented a theory of modern atomism supported by mathematics [1, 2]. In each area of analysis undertaken in this work, the theory leads to the same mathematical end results as Einstein’s special relativity theory when using Einstein-Poincaré synchronized clocks. In addition, atomism is grounded in a form of quantization that leads to upper boundary limits on a long series of results in physics, where the upper boundary limits traditionally have led to infinity challenges. In 2014, I introduced a new concept that I coined “time-speed” and showed that this was a way to distinguish mass from energy. Mass can be seen as time-speed and energy as speed. Mass can also be expressed in the normal way in form of kg (or pounds) and in this paper we will show how kg is linked to time-speed. Actually, there are a number of ways to describe mass, and when they are used consistently, they each give the same result. However, modern physics still does not seem to understand what mass truly is. This paper is mainly aimed at readers who have already spent some time studying my mathematical atomism theory. Atomism seems to offer a key to understanding mass and energy at a deeper level than modern physics has attained to date. Modern physics is mostly a top-down theory, while atomism is a bottom-up theory. Atomism starts with the depth of reality and surprisingly this leads to predictions that fit what we can observe.
[2828] vixra:1803.0258 [pdf]
Discussion About the Interaction Between Current Elements: Proposal of a New Force Law
In this article, a new force law between current elements is proposed which satisfies Newton's third law and coincides with known experimental measurements, avoiding the contradictions of the Ampère and Grassmann expressions. Likewise, an interaction expression between point charges is postulated which satisfies the action-reaction principle and is Galilean invariant. This opens the way to a revision of the concept of magnetic field and a further study of the interaction between moving charged bodies. Keywords: Grassmann, Ampère, Whittaker, Maxwell, Lorentz Force, Newton's Third Law, Action-Reaction Principle, Magnetic Field.
[2829] vixra:1803.0219 [pdf]
Redefining Imaginary and Complex Numbers, Defining Imaginary and Complex Objects
The existing definition of imaginary numbers is based solely on the fact that a certain mathematical operation, the square operation, would not yield a certain type of outcome, negative numbers; hence such an operational outcome could only be imagined to exist. Although complex numbers actually form the largest set of numbers, it appears that almost no thought has been given until now to the full extent of all possible types of imaginary numbers. A close look into what further non-existing numbers could be imagined helps reveal that we could actually expand the set of imaginary numbers, redefine complex numbers, and define imaginary and complex mathematical objects other than merely numbers.
[2830] vixra:1803.0208 [pdf]
Minkowski-Einstein Spacetime: Insight from the Pythagorean Theorem
The Pythagorean Theorem, combined with the analytic geometry of a right circular cone, has been used by H. Minkowski and subsequent investigators to establish the 4-dimensional spacetime continuum associated with A. Einstein's Special Theory of Relativity. Although the mathematics appears sound, the incorporation of a hyper-cone into the analytic geometry of a right triangle in Euclidean 3-space is in conflict with the rules of pure mathematics. A metric space of n dimensions is necessarily defined in terms of n independent coordinates, one for each dimension. Any coordinate that is a combination of the others for a given space is not independent so cannot be an independent dimension. Minkowski-Einstein spacetime contains a dimensional coordinate, via the speed of light, that is not independent. Consequently, Minkowski-Einstein spacetime does not exist.
[2831] vixra:1803.0204 [pdf]
Revival of the Sakaton
The Sakaton, S = (p, n, Λ), with integral charges (1, 0, 0) respectively, treated as forming the fundamental representation of the SU(3) group, was successful in explaining the octet mesons but failed to describe the structure of baryons. It was replaced by the fractionally charged quarks, Q = (u, d, s), providing the fundamental representation of the SU(3) group; this has been a thumping success. Thus a decent burial was given to the concept of the Sakaton. However, there is another model, the topological Skyrme model, which has been providing a parallel and successful description of the same hadrons. Nevertheless, this other model sometimes gives tantalizing hints of new structures in hadrons. In this paper we prove that the topological Skyrme model leads to a clear revival of the above concept of the Sakaton as a real and genuine physical entity. This provides a new perspective on the hypernuclei. 't Hooft anomaly matching gives unambiguous support to this revival of the Sakaton.
[2832] vixra:1803.0179 [pdf]
Continuity, Non-Constant Rate of Ascent, & The Beal Conjecture
The Beal Conjecture considers positive integers A, B, and C having respective positive integer exponents X, Y, and Z all greater than 2, where bases A, B, and C must have a common prime factor. Taking the general form A^X + B^Y = C^Z, we explore a small opening in the conjecture through reformulation and substitution to create two new variables. One we call 'C dot' representing and replacing C and the other we call 'Z dot' representing and replacing Z. With this, we show that 'C dot' and 'Z dot' are separate continuous functions, with argument (A^X + B^Y), that achieve all positive integers during their continuous non-constant rates of infinite ascent. Possibilities for each base and exponent in the reformulated general equation A^X +B^Y = ('C dot')^('Z dot') are examined using a binary table along with analyzing user input restrictions and 'C dot' values relative to A and B. Lastly, an indirect proof is made, where conclusively we find the continuity theorem to hold over the conjecture.
[2833] vixra:1803.0151 [pdf]
A Speculative Relationship Between the Proton Mass, the Proton Radius, and the Fine Structure Constant and Between the Fine Structure Constant and the Hagedorn Temperature
In this short note we present a possible connection between the proton radius and the proton mass using the fine structure constant. The Hagedorn temperature is related to the energy levels assumed to be required to free the quarks from the proton, where hadronic matter is unstable. We also speculate that there could be a connection between the Hagedorn temperature and the Planck temperature through the fine structure constant. Whether there is something to this, or it is purely a coincidence, we will leave to others and future research to explore. However, we think these possible relationships are worth further investigation.
[2834] vixra:1803.0119 [pdf]
The Flow of Dirac-Ricci
Following the definition of the Ricci flow, and with the help of the Dirac operator, we construct a flow of Hermitian metrics for the spinor fiber bundle.
[2835] vixra:1803.0090 [pdf]
Twin-Body Orbital Motion in Special Relativity
A stellar system of two identical stars in orbital motion is chosen to manifest a physics law, conservation of momentum, in Special Relativity. Both stars move around each other in a non-circular orbit. The single gravitational force between two stars demands that total momentum of this stellar system remains constant in any inertial reference frame in which the center of mass moves at a constant velocity. The calculation of total momentum in two different inertial reference frames shows that the momentum expression from Special Relativity violates conservation of momentum.
[2836] vixra:1803.0088 [pdf]
The Continuum Hypothesis
A proof of the Continuum Hypothesis as originally posed by Georg Cantor in 1878: that an uncountable set of real numbers has the same cardinality as the set of all real numbers. Any set of real numbers can be encoded by the infinite paths of a binary tree. If the binary tree has an uncountable node, it must have a descendant with two uncountable successors. Each of those will have descendants with two uncountable successors, recursively. As a result, the infinite paths of an uncountable binary tree will have the same cardinality as the set of all real numbers, as will the uncountable set of real numbers encoded by the tree.
[2837] vixra:1803.0085 [pdf]
Input Relation and Computational Complexity
This paper describes the complexity of PH problems by using an "almost all monotone circuit family" and "accepted input pairs that sandwich rejected inputs". As explained in Michael Sipser's Introduction to the Theory of Computation, the circuit families that emulate a deterministic Turing machine (DTM) are almost entirely monotone, except for some NOT-gates connected directly to input variables (as in negation normal form (NNF)). Therefore, we can find DTM limitations by using this "NNF circuit family". To clarify the limitation of NNF circuit families, we pay attention to the relation between AND-gates and OR-gates. If two accepted "neighbor inputs" sandwich a rejected "boundary input" in Hamming distance, the NNF circuit has to bring the differing variables of the neighbor inputs together at an AND-gate to differentiate the boundary input, and it has to use a unique AND-gate to identify each such neighbor input. On the other hand, we can construct a neighbor-input problem in PH, the "Neighbor Tautology DNF problem (NTD)". NTD is the subset of tautology DNFs that do not remain tautologies if a proper subset of one variable's occurrences is permuted between positive and negative. NTD includes many differing variables, whose number exceeds polynomial size in the input length. Therefore NNF circuit families that compute NTD exceed polynomial size, and NTD, which is in PH, is not in P.
[2838] vixra:1803.0079 [pdf]
A Modified Newtonian Quantum Gravity Theory Derived from Heisenberg's Uncertainty Principle that Predicts the Same Bending of Light as GR
Mike McCulloch has derived Newton's gravity from Heisenberg's uncertainty principle in a very interesting way that we think makes great sense. In our view, it also shows that gravity, even at the cosmic (macroscopic) scale, is related to the Planck scale. Inspired by McCulloch, in this paper we are using his approach to the derivation to take another step forward and show that the gravitational constant is not always the same, depending on whether we are dealing with light and matter, or matter against matter. Based on certain key concepts of the photon, combined with Heisenberg's uncertainty principle, we get a gravitational constant that is twice that of Newton's when we are working with gravity between matter and light, and we get the (normal) Newtonian gravitational constant when we are working with matter against matter. This leads to a very simple theory of quantum gravity that gives the correct prediction on bending of light, i.e. the same as the General Relativity theory does, which is a value twice that of Newton's prediction. One of the main reasons the theory of GR has surpassed Newton's theory of gravitation is that Newton's theory predicts a bending of light that is not consistent with experiments.
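For reference, the "factor of two" discussed here is the standard weak-field deflection result for a light ray passing a mass M at impact parameter b (textbook values, not taken from the paper itself):

```latex
\delta_{\text{Newton}} = \frac{2GM}{c^{2}b},
\qquad
\delta_{\text{GR}} = \frac{4GM}{c^{2}b} = 2\,\delta_{\text{Newton}} .
```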
[2839] vixra:1803.0076 [pdf]
Finite Statistics Loophole in CH, Eberhard, CHSH Inequalities
Clauser-Horne (CH) inequality, Eberhard inequality, and Clauser-Horne-Shimony-Holt (CHSH) inequality are used to determine whether quantum entanglement can contradict local realism. However, the "finite statistics" loophole is known to allow local realism to violate these inequalities if a sample size is small and not "large enough" [1]. Remarkably though, this paper shows that this loophole can still cause a violation in these inequalities even with a very large sample size, e.g. a 2.4 sigma violation of CH inequality and Eberhard inequality was achieved despite 12,000,000 total trials in a Monte Carlo simulation of a local realist photonic experiment based on Malus' law. In addition, this paper shows how Eberhard inequality is especially vulnerable to this loophole when combined with an improper statistical analysis and incorrect singles counts, e.g. a 13.0 sigma violation was achieved with the same large sample size, and furthermore, a 26.6 sigma violation was produced when a small, acceptable 0.2% production rate loophole was applied. Supplementally, this paper demonstrates how the finite statistics loophole allows a bigger violation in a smaller sample size despite the sample size being "large enough", e.g. a CHSH violation of 4.4 sigma (2.43 +/- 0.10) was achieved with 280 total trials, and 4.0 sigma (2.16 +/- 0.04) with 3,000 total trials. This paper introduces the aforementioned loopholes as plausible local realist explanations to two observed violations reported by Giustina, et al. [2], and Hensen, et al. [3].
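The kind of local-realist Monte Carlo described above can be sketched in a few lines. This is a minimal hypothetical model, not the authors' exact simulation: each photon pair shares a hidden polarization angle, and each detector outputs +1 with Malus-law probability cos²(setting − λ). Analytically this model gives E(a, b) = ½cos 2(a − b), so the CHSH statistic converges to √2, below the local bound of 2; the finite-statistics fluctuations around that value are the effect the paper studies.

```python
import math
import random

random.seed(0)

def outcome(setting, lam):
    """Local realist detector: +1 with Malus-law probability cos^2(setting - lam)."""
    return 1 if random.random() < math.cos(setting - lam) ** 2 else -1

def correlation(a, b, n):
    """Estimate E(a, b) over n photon pairs sharing a hidden polarization lam."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, math.pi)       # shared hidden variable
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

# Standard CHSH polarizer settings (radians)
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3 * math.pi / 8
n = 200_000
S = abs(correlation(a1, b1, n) - correlation(a1, b2, n)
        + correlation(a2, b1, n) + correlation(a2, b2, n))
```

With smaller n per setting, repeated runs of this sketch scatter around √2, illustrating how sample size alone drives apparent "violations" in a purely local model.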
[2840] vixra:1803.0059 [pdf]
Harmonic Oscillation in Special Relativity
A physical system of a mechanical spring is chosen to manifest a physics law, conservation of momentum, in Special Relativity. Two identical objects are attached to the ends of this mechanical spring. The single force between two identical objects demands that total momentum of this physical system remains constant in any inertial reference frame in which the center of mass moves at a constant velocity. The calculation of total momentum in two different inertial reference frames shows that the momentum expression from Special Relativity violates conservation of momentum.
[2841] vixra:1803.0052 [pdf]
A Close Look at the Foundation of Quantized Inertia
In his recent work, physicist Mike McCulloch has derived what he has coined “Quantized Inertia” from Heisenberg’s uncertainty principle. He has published a series of papers indicating that Quantized Inertia can predict everything from galaxy rotations (without relying on the concept of dark matter) to the EM drive; see [1, 2, 3, 4]. Clearly, it is an interesting theory that deserves some attention until proven or disproven. We think McCulloch has several excellent insights, but it is important to understand the fundamental principles from which he has derived his theory. We will comment on the derivation in his work and suggest that it could be interpreted from a different perspective. Recent developments in mathematical atomism appear to have revealed new concepts concerning the Planck mass, the Planck length, and their link to special relativity, gravity, and even the Heisenberg principle. We wonder if Quantized Inertia is compatible with the atomist view of the world and, if so, how McCulloch’s theory should be interpreted in that light.
[2842] vixra:1803.0045 [pdf]
Newton's Gravity from Heisenberg's Uncertainty Principle. An In-Depth Study of the McCulloch Derivation
Mike McCulloch has derived Newton's gravity from Heisenberg's uncertainty principle in an innovative and interesting way. Upon deeper examination, we will claim that his work has additional important implications, when viewed from a different perspective. Based on recent developments in mathematical atomism, particularly those exploring the nature of Planck masses and their link to Heisenberg's uncertainty principle, we uncover an insight on the quantum world that leads to an even more profound interpretation of the McCulloch derivation than was put forward previously.
[2843] vixra:1803.0038 [pdf]
Does Heisenberg’s Uncertainty Collapse at the Planck Scale? Heisenberg’s Uncertainty Principle Becomes the Certainty Principle
In this paper we show that Heisenberg’s uncertainty principle, combined with key principles from Max Planck and Einstein, indicates that uncertainty collapses at the Planck scale. In essence we suggest that the uncertainty principle becomes the certainty principle at the Planck scale. This can be used to find the rest-mass formula for elementary particles consistent with what is already known. If this interpretation is correct, it means that Einstein’s intuition that “God Does Not Throw Dice with the Universe” could also be correct. We interpret this to mean that Einstein did not believe the world was ruled by strange uncertainty phenomena at the deeper level, and we will claim that this level is the Planck scale, where all uncertainty seems to collapse. The bad news is that this new-found certainty can only last for one Planck second! We are also questioning, without coming to a conclusion, whether this could have implications for Bell’s theorem and hidden variable theories.
[2844] vixra:1803.0005 [pdf]
Gravitational Force and Conservation of Momentum
In the history of physics, momentum has been represented by two expressions. One from Isaac Newton, the other from Special Relativity. Both expressions are expected to describe a physical system that demands conservation of momentum. By examining the gravitational force between two identical particles in two different inertial reference frames, the momentum expression from Isaac Newton is found to obey conservation of momentum while the momentum expression from Special Relativity is found to violate conservation of momentum.
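For reference, the two momentum expressions being compared throughout these abstracts are the standard textbook forms (not taken from the paper itself):

```latex
p_{\text{Newton}} = m v,
\qquad
p_{\text{SR}} = \gamma m v = \frac{m v}{\sqrt{1 - v^{2}/c^{2}}} .
```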
[2845] vixra:1803.0001 [pdf]
Expansion Into Bernoulli Polynomials Based on Matching Definite Integrals of Derivatives
A method of function expansion is presented. It is based on matching the definite integrals of the derivatives of the function to be approximated by a series of (scaled) Bernoulli polynomials. The method is fully integral-based, easy to construct, and presumably slightly outperforms Taylor series in the convergence rate. The text presents already known results.
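The integral-matching idea can be sketched with the classical expansion f(x) ≈ Σ_{k≤n} B_k(x)/k! · ∫₀¹ f⁽ᵏ⁾(t) dt on [0, 1], which I assume is the construction meant; the sketch below (stdlib only, exact rational arithmetic, polynomials as coefficient lists) is an illustration, not the paper's implementation.

```python
from fractions import Fraction
from math import factorial

def poly_deriv(p):
    """Derivative of a polynomial given as a coefficient list [a0, a1, ...]."""
    return [Fraction(k) * p[k] for k in range(1, len(p))] or [Fraction(0)]

def poly_integral_01(p):
    """Definite integral of the polynomial p over [0, 1]."""
    return sum(c / (k + 1) for k, c in enumerate(p))

def bernoulli_poly(n):
    """Bernoulli polynomial B_n as a coefficient list, built from B_n' = n B_{n-1}
    and the normalization integral_0^1 B_n = 0 for n >= 1."""
    b = [Fraction(1)]                                    # B_0 = 1
    for m in range(1, n + 1):
        # antiderivative of m * B_{m-1}; constant term fixed by the normalization
        anti = [Fraction(0)] + [Fraction(m) * c / (k + 1) for k, c in enumerate(b)]
        anti[0] = -poly_integral_01(anti)
        b = anti
    return b

def bernoulli_expansion(f, n):
    """Expand polynomial f on [0,1] as sum_{k<=n} B_k(x)/k! * integral_0^1 f^(k)."""
    result = [Fraction(0)] * (n + 1)
    deriv = f
    for k in range(n + 1):
        coeff = poly_integral_01(deriv) / factorial(k)   # matched definite integral
        for j, c in enumerate(bernoulli_poly(k)):
            result[j] += coeff * c
        deriv = poly_deriv(deriv)
    return result

# The expansion is exact for polynomials: expanding x^2 returns x^2.
approx = bernoulli_expansion([Fraction(0), Fraction(0), Fraction(1)], 2)
```

The expansion reproduces polynomials exactly, consistent with the expansion being integral-based rather than point-based like a Taylor series.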
[2846] vixra:1802.0410 [pdf]
Non-Inertial Frames in Special Relativity
This article presents a new formulation of special relativity which is invariant under transformations between inertial and non-inertial (non-rotating) frames. Additionally, a simple solution to the twin paradox is presented and a new universal force is proposed.
[2847] vixra:1802.0401 [pdf]
Remarks on Bell's Inequality
Quantum entanglement is of great importance to quantum cryptography and computation. So far, all experimental demonstrations of entanglement are designed to check Bell's inequality which is based on Bell's formulation for EPR paradox. In this note, we specify the assumptions needed in Bell's mathematical argument. We then show the contradictions among these assumptions. As a result, it becomes very easy to see that Bell's inequality is trivial.
[2848] vixra:1802.0392 [pdf]
NS-Cross Entropy-Based MAGDM under Single-Valued Neutrosophic Set Environment
A single-valued neutrosophic set has a strong capacity to express uncertainty characterized by indeterminacy, inconsistency and incompleteness. Most of the existing single-valued neutrosophic cross entropy measures behave asymmetrically and produce undefined phenomena in some situations. In order to deal with these disadvantages, we propose a new cross entropy measure under a single-valued neutrosophic set (SVNS) environment, namely NS-cross entropy, and prove its basic properties.
[2849] vixra:1802.0391 [pdf]
Neutrosophic Commutative N-Ideals in BCK-Algebras
The notion of a neutrosophic commutative N-ideal in BCK-algebras is introduced, and several properties are investigated. Relations between a neutrosophic N-ideal and a neutrosophic commutative N-ideal are discussed. Characterizations of a neutrosophic commutative N-ideal are considered.
[2850] vixra:1802.0390 [pdf]
Neutrosophic N-Structures Applied to BCK/BCI-Algebras
Neutrosophic N-structures with applications in BCK/BCI-algebras are discussed. The notions of a neutrosophic N-subalgebra and a (closed) neutrosophic N-ideal in a BCK/BCI-algebra are introduced, and several related properties are investigated. Characterizations of a neutrosophic N-subalgebra and a neutrosophic N-ideal are considered, and relations between a neutrosophic N-subalgebra and a neutrosophic N-ideal are stated. Conditions for a neutrosophic N-ideal to be a closed neutrosophic N-ideal are provided.
[2851] vixra:1802.0388 [pdf]
Generalized Single-Valued Neutrosophic Hesitant Fuzzy Prioritized Aggregation Operators and Their Applications to Multiple Criteria Decision-Making
Single-valued neutrosophic hesitant fuzzy set (SVNHFS) is a combination of single-valued neutrosophic set and hesitant fuzzy set, and its aggregation tools play an important role in the multiple criteria decision-making (MCDM) process. This paper investigates the MCDM problems in which the criteria under SVNHF environment are in different priority levels.
[2852] vixra:1802.0387 [pdf]
Some New Biparametric Distance Measures on Single-Valued Neutrosophic Sets with Applications to Pattern Recognition and Medical Diagnosis
Single-valued neutrosophic sets (SVNSs) handling the uncertainties characterized by truth, indeterminacy, and falsity membership degrees, are a more flexible way to capture uncertainty. In this paper, some new types of distance measures, overcoming the shortcomings of the existing measures, for SVNSs with two parameters are proposed along with their proofs.
[2853] vixra:1802.0386 [pdf]
Certain Concepts in Intuitionistic Neutrosophic Graph Structures
A graph structure is a generalization of simple graphs. Graph structures are very useful tools for the study of different domains of computational intelligence and computer science. In this research paper, we introduce certain notions of intuitionistic neutrosophic graph structures. We illustrate these notions by several examples. We investigate some related properties of intuitionistic neutrosophic graph structures. We also present an application of intuitionistic neutrosophic graph structures.
[2854] vixra:1802.0385 [pdf]
NC-TODIM-Based MAGDM under a Neutrosophic Cubic Set Environment
A neutrosophic cubic set is the hybridization of the concept of a neutrosophic set and an interval neutrosophic set. A neutrosophic cubic set has the capacity to express the hybrid information of both the interval neutrosophic set and the single-valued neutrosophic set simultaneously. As the concept is newly defined, little research on the operations and applications of neutrosophic cubic sets has been reported in the current literature.
[2855] vixra:1802.0384 [pdf]
VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making
In this paper, we will extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented.
[2856] vixra:1802.0382 [pdf]
TODIM Method for Single-Valued Neutrosophic Multiple Attribute Decision Making
Recently, the TODIM has been used to solve multiple attribute decision making (MADM) problems. The single-valued neutrosophic sets (SVNSs) are useful tools to depict the uncertainty of the MADM. In this paper, we will extend the TODIM method to the MADM with the single-valued neutrosophic numbers (SVNNs).
[2857] vixra:1802.0375 [pdf]
Kinetic Energy and Conservation of Momentum
In the history of physics, kinetic energy has been represented by two expressions. One from Isaac Newton, the other from Special Relativity. Both expressions are expected to describe a physical system that demands conservation of momentum. By examining the expression of momentum in a projectile motion, the kinetic energy from Isaac Newton is found to obey conservation of momentum while the kinetic energy from Special Relativity is found to violate conservation of momentum.
[2858] vixra:1802.0372 [pdf]
Time as Motion Phenomenon: Physics Laws do not Apply to Inertial Systems
In the unified theory of dynamic space, quantum time is identical to the elementary motion traveled by electrically opposite elementary units (in short, units) in the interval (click-shift) of the quantum dipole length at the speed of light. Quantum time in the units region is Natural time, which replaces conventional time, i.e. the second. Nature understands time as a crowd of moving units, as a length traveled with click-shifts, and as a volume occupied by the units. Therefore, time is reflected in the structures of space by the number of their units. However, motion is a form of space deformation, created by a force that is derived from the dynamic space as a motion force, which is accumulated on the spherical zone of the particle due to the difference of cohesive pressure in front of and behind it. This accumulation is made by the force talantonion (oscillator) per quantum time in the formations region as a quantum force, causing a harmonic change in the difference of cohesive pressure in the proximal space of the particle as a motion wave (wave-like form), the so-called de Broglie wave-particle. The physical meaning of Planck's constant is interpreted as the product of three of Nature's entities, namely the force talantonion (which is the foundation of motion), the quantum dipole length, and the quantum time in the formations region. The "relative" mass has now been proved, and the proof is not based on the second postulate of relativity. So the particle mass does not in fact increase when it moves; only the final force (of gravity and motion), which causes the new dynamics of particle motion, increases. This new dynamics appears as a tension of space, which is maintained in a different way for each uniform motion, resulting in the change of the physics laws in different inertial systems.
[2859] vixra:1802.0326 [pdf]
Conservation of Momentum vs Lorentz Transformation
An isolated physical system of gravitational force between two identical particles is chosen to manifest the physics law, conservation of momentum, in a random inertial reference frame under Lorentz Transformation. In this random reference frame, the center of mass moves at a constant velocity. By applying Lorentz transformation to the velocities of both particles, total momentum in this random inertial reference frame can be calculated and is expected to remain constant as the gravitational force accelerates both particles toward each other. The calculation shows that conservation of momentum fails to hold under Lorentz Transformation.
[2860] vixra:1802.0294 [pdf]
Solution of a High-School Algebra Problem to Illustrate the Use of Elementary Geometric (Clifford) Algebra
This document is the first in what is intended to be a collection of solutions of high-school-level problems via Geometric Algebra (GA). GA is very much "overpowered" for such problems, but students at that level who plan to go into more-advanced math and science courses will benefit from seeing how to "translate" basic problems into GA terms, and to then solve them using GA identities and common techniques.
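One standard GA technique at the level this collection targets is solving a 2×2 linear system with the outer (wedge) product: wedging both sides of x·a + y·b = c with b annihilates the b term. The sketch below is an assumed illustrative example, not a problem taken from the document itself.

```python
def wedge(u, v):
    """Outer (wedge) product of two 2-D vectors; returns the bivector magnitude."""
    return u[0] * v[1] - u[1] * v[0]

# Solve x*a + y*b = c for scalars x and y.
# Wedging with b kills the b term: x (a ^ b) = c ^ b, so x = (c ^ b) / (a ^ b);
# wedging with a similarly gives y = (a ^ c) / (a ^ b).
a, b, c = (2.0, 1.0), (1.0, 3.0), (4.0, 7.0)   # i.e. 2x + y = 4, x + 3y = 7
xs = wedge(c, b) / wedge(a, b)
ys = wedge(a, c) / wedge(a, b)
```

This is Cramer's rule in disguise, but phrased entirely in GA operations, which is the kind of "translation" of basic problems the abstract describes.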
[2861] vixra:1802.0269 [pdf]
Riemann's Analytic Continuation of Zeta(s) Contradicts the Law of the Excluded Middle, and is Derived by Using Cauchy's Integral Theorem While Contradicting the Theorem's Prerequisites
The Law of the Excluded Middle holds that either a statement "X" or its opposite "not X" is true. In Boolean algebra form, Y = X XOR (not X). Riemann's analytic continuation of Zeta(s) contradicts the Law of the Excluded Middle, because the Dirichlet series Zeta(s) is proven divergent in the half-plane Re(s)<=1. Further inspection of the derivation of Riemann's analytic continuation of Zeta(s) shows that it is wrongly based on the Cauchy integral theorem, and thus false.
[2862] vixra:1802.0263 [pdf]
Elastic Collision Between Charged Particles
An isolated physical system of elastic collision between two identical charged particles is chosen to manifest the physics law, conservation of momentum, in a random inertial reference frame under Lorentz Transformation. In this random reference frame, the center of mass moves at a constant velocity. By applying Lorentz transformation to the velocities of both particles, total momentum during the collision in this random inertial reference frame can be calculated and is expected to remain constant. The calculation shows that conservation of momentum fails to hold under Lorentz Transformation.
[2863] vixra:1802.0261 [pdf]
A Derivation of the Kerr Metric by Ellipsoid Coordinate Transformation
Einstein's general relativistic field equation is a nonlinear partial differential equation that lacks an easy way to obtain exact solutions. The most famous examples are the Schwarzschild and Kerr black hole solutions. The Kerr metric has astrophysical meaning because most cosmic celestial bodies are rotating. The Kerr metric is even more difficult to derive than the Schwarzschild metric, specifically due to the off-diagonal term of the metric tensor. In this paper, a derivation of the Kerr metric is obtained by an ellipsoid coordinate transformation, which eliminates a large amount of tedious derivation. This derivation is not only physically enlightening, but also allows further deduction of some characteristics of the rotating black hole.
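For reference, the "ellipsoid coordinates" in question are presumably the oblate spheroidal ones; the transformation and the resulting Boyer-Lindquist form of the Kerr metric are standard results (G = c = 1), not taken from the paper itself:

```latex
% Oblate spheroidal ("ellipsoid") coordinates:
x = \sqrt{r^{2}+a^{2}}\,\sin\theta\cos\phi, \quad
y = \sqrt{r^{2}+a^{2}}\,\sin\theta\sin\phi, \quad
z = r\cos\theta .
% Kerr metric in Boyer-Lindquist form, with
% \rho^{2} = r^{2}+a^{2}\cos^{2}\theta and \Delta = r^{2}-2Mr+a^{2}:
ds^{2} = -\Bigl(1-\frac{2Mr}{\rho^{2}}\Bigr)dt^{2}
 - \frac{4Mar\sin^{2}\theta}{\rho^{2}}\,dt\,d\phi
 + \frac{\rho^{2}}{\Delta}\,dr^{2} + \rho^{2}\,d\theta^{2}
 + \Bigl(r^{2}+a^{2}+\frac{2Ma^{2}r\sin^{2}\theta}{\rho^{2}}\Bigr)\sin^{2}\theta\,d\phi^{2} .
```

The dt dφ cross term is the off-diagonal metric component the abstract refers to.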
[2864] vixra:1802.0212 [pdf]
Why Renormalize if You Don’t Have To?
While the notion that it is better to avoid renormalization if one possibly can is an easy sell, the possibility that a naturally finite, confined, and gauge invariant quantum model has come over the horizon turns out to be a surprisingly hard sell.
[2865] vixra:1802.0169 [pdf]
A Simple Newtonian Quantum Gravity Theory That Predicts the Same Light Bending as GR
In this paper we propose a new and simple theory of quantum gravity, inspired by Newton, that gives the same prediction of light bending as Einstein’s theory of general relativity. This new quantum gravity theory also predicts that non-light beams, that is to say beams of particles with rest-mass such as electron and proton beams, will only have half the bending of light as GR. In other words, this theory is testable. Based on this theory, we will suggest that it is a property of light that makes it bend twice as much as the amount that is predicted by Newton’s theory. This quantum gravity theory also seems to predict that for masses below the Planck mass, we are dealing with quantum probabilities and gravity force expectations. This may explain the difference between the strong and weak force – the difference is simply related to a probability factor at the Planck time scale. We are also suggesting a minor adjustment to the Newtonian gravitational acceleration field, which renders that field equal to the Planck acceleration at the Schwarzschild radius, and gives the same results as predicted by Newton when we are dealing with weak gravitational fields. This stands in contrast to standard Newtonian theory, which predicts a very weak gravitational acceleration field at the Schwarzschild radius for super-massive objects.
[2866] vixra:1802.0161 [pdf]
Tunguska Explosion Revisited
In this paper we discuss the previously unnoticed connection of the Tunguska explosion to natural events decades or even centuries long: 1) the third geomagnetic maximum appeared not too far from the epicenter of the Tunguska explosion in the 19th century and has been moving towards the epicenter of the Tunguska explosion along a straight line since 1908; 2) the magnetic North Pole is moving along the path leading to the epicenter of the Tunguska explosion; 3) all magnitude ≥ 7.6 earthquakes sufficiently far from the ocean form an arrow pointing towards the epicenter of the Tunguska explosion; 4) the Tunguska explosion occurred at the end of the twisted portion in the path of the magnetic North Pole and at the time when magnitude ≥ 8.2 earthquakes and VEI ≥ 5 volcanic eruptions recovered correlation with syzygies.
[2867] vixra:1802.0150 [pdf]
Elements of Geostatistics
These are short lecture notes on geostatistics, giving some elements of this field for third-year students of the Geomatics license of the Faculty of Sciences of Tunis.
[2868] vixra:1802.0126 [pdf]
A Note on the Possibility of Incomplete Theory
In the paper it is demonstrated that Bell's theorem is an unprovable theorem. This inconsistency is similar to concrete mathematical incompleteness. The inconsistency is purely mathematical. Nevertheless, the basic physics requirements of a local model are fulfilled.
[2869] vixra:1802.0124 [pdf]
Investigation of the Characteristics of the Zeros of the Riemann Zeta Function in the Critical Strip Using Implicit Function Properties of the Real and Imaginary Components of the Dirichlet Eta Function v5
This paper investigates the characteristics of the zeros of the Riemann zeta function (of s) in the critical strip by using the Dirichlet eta function, which has the same zeros. The characteristics of the implicit functions for the real and imaginary components when those components are equal are investigated and it is shown that the function describing the value of the real component when the real and imaginary components are equal has a derivative that does not change sign along any of its individual curves - meaning that each value of the imaginary part of s produces at most one zero. Combined with the fact that the zeros of the Riemann xi function are also the zeros of the zeta function and xi(s) = xi(1-s), this leads to the conclusion that the Riemann Hypothesis is true.
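For reference, the Dirichlet eta function used here is the alternating series (a standard identity, not taken from the paper itself):

```latex
\eta(s) \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^{s}}
        \;=\; \bigl(1 - 2^{1-s}\bigr)\,\zeta(s),
\qquad \operatorname{Re}(s) > 0 ,
```

so eta converges throughout the critical strip and shares its zeros there with zeta.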
[2870] vixra:1802.0120 [pdf]
Analyticity and Functions Satisfying $f' = e^{f^{-1}}$
In this note we present some new results about the analyticity of the functional-differential equation $f' = e^{f^{-1}}$ at $0$, where $f^{-1}$ is the compositional inverse of $f$, and about the growth rate of $f_-(x)$ and $f_+(x)$ as $x \to \infty$. We also check the analyticity of some functional equations which were studied before and have a relationship with the titled functional-differential equation. We conclude our work with a conjecture related to Borel summability and some interesting applications in number theory of divergent generating functions with radius of convergence equal to $0$.
[2871] vixra:1802.0099 [pdf]
Lorentz Transformation and Elastic Collision
An isolated physical system of elastic collision between two identical objects is chosen to manifest the physical law of conservation of momentum in two inertial reference frames. In the first reference frame, the center of mass (COM) is stationary. In the second reference frame, the center of mass moves at a constant velocity. By applying the Lorentz transformation to the velocities of both objects, the total momentum before and during the collision in the second reference frame can be compared. The comparison shows that conservation of momentum fails to hold when both objects move together at the same velocity.
[2872] vixra:1802.0066 [pdf]
A Computational Approach to Estimation of Crowding in Natural Images
Crowding is a phenomenon where the identification of objects in peripheral vision is deteriorated by the presence of nearby targets. Crowding therefore reduces the extent of the visual span, i.e. information intake during a single eye fixation. It is, thus, a limiting factor in many everyday tasks, such as reading. The phenomenon is due to wide-area feature integration in the higher levels of visual processing. Despite the critical role of the phenomenon, complex natural images have so far not been used in the research of crowding. The purpose of the present study was to determine how the crowding effect affects object recognition in complex natural images, and whether the magnitude of the crowding could be modelled using the methods introduced below. The actual magnitude of the crowding effect was determined experimentally by measuring contrast thresholds for letter targets of different sizes on various natural image backgrounds. The results of the experiments were analyzed to evaluate the developed methods. The methods are based on image statistics and clutter modelling. Clutter models assess the complexity in the image. The image statistics and the clutter models were combined with basic knowledge of the crowding effect. In addition, an early visual system model was incorporated to assess the role of visual acuity across the visual field. The developed models predicted the induced crowding effect in an arbitrary natural image. The model of the visual system contributed to the results as well. The differences between the methods for assessing the image properties were, however, negligible. Contrast energy, the simplest measure, can be regarded as the most efficient. Natural images can cause very strong crowding effects. The conclusion is that predicting quantitative dimensions of the crowding effect in an arbitrary image is viable. However, further research on the subject is necessary for developing the models.
Computational assessment of the crowding effect potentially can be applied to e.g. user interface design, assessing information visualization techniques, and the development of augmented reality applications.
[2873] vixra:1802.0063 [pdf]
Frequency Decrease of Light
The phenomenon is so slow that its effect is undetectable in light emitted at distances as in our galaxy, but is significant in light coming from cosmological distances, hence the alias "Cosmological Degeneration/Decay of Light". An unprecedented case in physics is that the law governing the phenomenon results uniquely, through mathematical reasoning. As main consequences, it: solves Digges-Olbers' paradox, thus making possible cosmology with an infinite universe; explains Hubble's redshift (or cosmological redshift), in agreement with Hubble's constant's inconstancy; explains the Penzias & Wilson CMB; explains the unexplained non-uniformity in the CMB; replaces the Big-Bang theory/model/scenario. Two new predictions are made.
[2874] vixra:1802.0054 [pdf]
Lorentz Transformation in Inelastic Collision
An isolated physical system of inelastic collision between two identical objects is chosen to manifest the physical law of conservation of momentum in two inertial reference frames. In the first reference frame, the center of mass (COM) is stationary. In the second reference frame, one object is at rest before the collision. By applying the Lorentz transformation to the velocities of both objects, the total momentum before and after the collision in the second reference frame can be compared. The comparison shows that conservation of momentum fails to hold when both objects move together at the same velocity.
[2875] vixra:1802.0049 [pdf]
Is Dark Matter and Black-Hole Cosmology an Effect of Born's Reciprocal Relativity Theory?
Born's Reciprocal Relativity Theory (BRRT), based on a maximal proper-force, a maximal speed (that of light), and inertial and non-inertial observers, is re-examined in full detail. Relativity of locality and chronology are natural consequences of this theory, even in flat phase space. The advantage of BRRT is that Lorentz invariance is preserved and there is no need to introduce Hopf algebraic deformations of the Poincare algebra, de Sitter algebra, nor noncommutative spacetimes. After a detailed study of the notion of generalized force, momentum and mass in phase space, we explain that what one may interpret as ``dark matter'' in galaxies, for example, is just an effect of observing ordinary galactic matter in different accelerating frames of reference than ours. Explicit calculations are provided that explain these novel relativistic effects due to the accelerated expansion of the Universe, and which may generate the present-day density parameter value $ \Omega_{DM} \sim 0.25 $ of dark matter. The physical origins behind the numerical coincidences in Black-Hole Cosmology are also explored. We finalize with a rigorous study of the curved geometry of (co)tangent bundles (phase space) within the formalism of Finsler geometry, and provide a short discussion on Hamilton spaces.
[2876] vixra:1802.0047 [pdf]
Calculating the Angle Between Projections of Vectors Via Geometric (Clifford) Algebra
We express a problem from visual astronomy in terms of Geometric (Clifford) Algebra, then solve the problem by deriving expressions for the sine and cosine of the angle between projections of two vectors upon a plane. Geometric Algebra enables us to do so without deriving expressions for the projections themselves.
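Though the abstract's point is that Geometric Algebra avoids computing the projections explicitly, the result can be cross-checked with ordinary vector algebra. A minimal sketch (the function name and test vectors are ours, not the paper's): project each vector onto the plane by subtracting its component along the unit normal, then recover the signed angle from the dot and cross products.

```python
import numpy as np

def angle_between_projections(u, v, n):
    """Signed angle between the projections of u and v onto the plane with unit normal n."""
    n = n / np.linalg.norm(n)
    pu = u - np.dot(u, n) * n            # reject the component along n
    pv = v - np.dot(v, n) * n
    norm = np.linalg.norm(pu) * np.linalg.norm(pv)
    cos_t = np.dot(pu, pv) / norm
    sin_t = np.dot(n, np.cross(pu, pv)) / norm
    return np.arctan2(sin_t, cos_t)

# two vectors tilted out of the xy-plane still subtend 90 degrees within it
u = np.array([1.0, 0.0, 3.0])
v = np.array([0.0, 1.0, -2.0])
print(angle_between_projections(u, v, np.array([0.0, 0.0, 1.0])))  # 1.5707963...
```

Using `arctan2` of both sine and cosine returns an angle with a definite sign relative to the plane's orientation, which a cosine-only formula would lose.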
[2877] vixra:1802.0035 [pdf]
Translation of Some Star Catalogs to the XEphem Format
The text lists Java programs which convert star catalogs to the format of the XEphem visualiser. The currently supported input catalog formats are (i) the data base of the orbital elements of the Minor Planet Center (MPC) of the International Astronomical Union (IAU), (ii) the data base of the Hipparcos main catalog or the variants of the Tycho-1 or Tycho-2 catalogs, (iii) the systems in the Washington Double Star catalog, (iv) the Proper Motions North and South catalogs, (v) the SKY2000 catalog, (vi) the 2MASS catalog, (vii) the Third Reference Catalog of Bright Galaxies (RC3), (viii) the General Catalogue of Variable Stars (GCVS v. 5), and (ix) the Naval Observatory CCD Astrograph Catalog (UCAC4).
[2878] vixra:1802.0034 [pdf]
Response to the FQXi RFP: Agency in the Physical World
To bring scientific rigor to FQXi challenges[1] is the greatest challenge. Given the diverse community and the vexing manner in which our organizers delight in framing issues, a prevalent absence of rigor follows. Yet a substantial subset of the community is comprised of physicists, with no obvious tightening there of reason's web. We have two immediate interests. First is to make our work accessible to larger communities, to focus on the theoretical minimum, on quantum interpretations as applied to the geometric wavefunction, to express geometric wavefunction physics in the languages of the most closely related disciplines. To function as interpreters[11]. Second is quantum impedance matching in nanoelectronics, beyond the scale-invariant quantum Hall impedance of vector Lorentz forces to those associated with all forces and their potentials. It appears the present status in that community is black art. Nobody knows what they're doing. Impedance governs amplitude and phase of energy flow. One has to quantum impedance match if one wants to quantum compute effectively. This is just common sense.
[2879] vixra:1802.0033 [pdf]
Time Runs Only in the Elementary Particles and in Black Holes
The author shows examples where his opinion about fundamentality differs from the opinion of the majority of physicists. He claims that special relativity alone gives that time runs only in rest matter. He claims that absolutely empty spacetime without rest matter cannot exist, one reason being that time runs only in rest matter. The existence of dimensionless masses of the elementary particles also tells us about the coupling between rest matter and spacetime. Dimensionless masses of the elementary particles are obtained when the masses of the elementary particles are combined with the gravitational constant, Planck's constant and the speed of light. The author insists that the principle of equivalence remains valid also in quantum physics, and that the gravitational uncertainty principle is simple. Consciousness is still more fundamental than the elementary particles, but consciousness does not exist outside of elementary particles. The author advocates quantum consciousness and free will, and suggests how to verify this experimentally.
[2880] vixra:1802.0010 [pdf]
Mass Transformation Between Inertial Reference Frames
An isolated physical system of elastic collision between two identical objects is chosen to manifest the conservation of momentum in two inertial reference frames. In the first reference frame, the center of mass (COM) is stationary. In the second reference frame, one object is at rest. The second frame is created by a temporary acceleration from the first frame. By applying both velocity transformation and conservation of momentum to this isolated system, mass transformation is derived precisely. The result shows that the mass of an object is independent of its motion.
[2881] vixra:1801.0416 [pdf]
Developing a Phenomenon for Analytic Number Theory
A phenomenon is described for analytic number theory. The purpose is to coordinate number theory and to give it a specific goal of modeling the phenomenon.
[2882] vixra:1801.0414 [pdf]
Zeno's Paradox and the Planck Length. The Fine Structure Constant and The Speed of Light
In this short paper we look at some interesting relationships between Zeno's paradox, the Planck length, the speed of light, and the fine structure constant. Did you know it takes 144 Zeno steps to reduce the speed of light to Planck length speed? Did you know it takes 137 Zeno steps to go from alpha*c to Planck length speed? And interestingly it is 7 steps between 144 and 137. We assume this is merely a coincidence, but it is often assumed that there are likely a maximum of 137 elements, the last one being the ``Feynmanium." And there are considered to be 7 orbital shells. We do not claim this explains anything particular in physics, but we also do not exclude the possibility that there could be something meaningful in it. Are these patterns purely numerical coincidences, or could they be a link to a deeper understanding of other patterns related to physics, like the periodic table? Whatever the answer may be, it is interesting to look at Zeno's paradox in relation to the Planck length.
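The step counts quoted above are easy to verify: a Zeno step here halves the speed, and one counts halvings until the speed drops below one Planck length per second. A quick sketch using CODATA values (the function name is ours):

```python
import math

c = 299792458.0          # speed of light, m/s
lp = 1.616255e-35        # Planck length, m; "Planck length speed" = one lp per second
alpha = 1 / 137.035999   # fine structure constant

def zeno_steps(v):
    """Number of halvings needed to bring speed v below one Planck length per second."""
    return math.ceil(math.log2(v / lp))

print(zeno_steps(c))          # 144
print(zeno_steps(alpha * c))  # 137
```

With these constants the two counts differ by exactly 7, as the abstract notes.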
[2883] vixra:1801.0371 [pdf]
A New Perspective on Newtonian Gravity
In this paper we uncover the true power of Newton's theory of gravity. Did you know that hidden inside Newton's gravity theory is the speed of gravity, namely c? Physicists who claim that Newton's gravitational force is instantaneous have not yet understood Newton's gravity theory to its full extent. Did you know that Newton's theory of gravity, at a deeper level, is actually a theory of quantum gravity? Did you know that what is central for gravity is the Planck length and not the gravitational constant? To truly understand Newtonian gravity, we have to understand that Newton's gravitational constant is actually a composite constant. Once we understand this, we will truly begin to understand what Newton's theory of universal gravitation is all about.
[2884] vixra:1801.0335 [pdf]
Space and Time, Geometry and Fields: An Historical Essay on the Fundamental and its Physical Manifestation
We address historical circumstances surrounding the absence of two essential tools - geometric interpretation of Clifford algebra and generalization of impedance quantization - from the particle physicist’s tool kit, and present details of the new perspective that follows from their inclusion. The resulting geometric wavefunction model permits one to examine the interface between fundamental and emergent.
[2885] vixra:1801.0330 [pdf]
The Fundamentality of Gravity
After the physicality of existence, gravity's role in the Universe is the most fundamental thing. This role has various manifestations which, it is argued, have been largely misinterpreted by modern physics. An alternative conception of gravity---one that agrees with firmly established empirical evidence---is most compactly characterized by its definition of Newton's constant in terms of other fundamental constants. This expression and supporting arguments largely fulfill the long-standing goal of unifying gravity with the other forces. Phenomena spanning atomic nuclei to the large-scale cosmos and the basic physical elements, mass, space, and time, are thereby seen as comprising an interdependent (unified) whole. Meanwhile, a virtual industry of fanciful, far-from-fundamental mathematical distractions clogs up the literature of what is still called fundamental physics. By contrast with this dubious activity---most importantly---the new conception is empirically testable. The test would involve probing gravity where it has not yet been probed: inside (through the center) of every body of matter.
[2886] vixra:1801.0328 [pdf]
Time Transformation Between Inertial Reference Frames
Time in an inertial reference frame can be obtained from the definition of velocity in that inertial reference frame. Velocity depends on coordinate and time. Therefore, coordinate transformation and velocity transformation between inertial reference frames can lead to time transformation. Based on this approach, the time transformation between two arbitrary inertial reference frames in one dimensional space is derived. The result shows that the elapsed time is identical in all inertial reference frames.
[2887] vixra:1801.0326 [pdf]
Algebra of Classical and Quantum Binary Measurements
The simplest measurements in physics are binary; that is, they have only two possible results. An example is a beam splitter. One can take the output of a beam splitter and use it as the input of another beam splitter. The compound measurement is described by the product of the Hermitian matrices that describe the beam splitters. In the classical case the Hermitian matrices commute (are diagonal) and the measurements can be taken in any order. The general quantum situation was described by Julian Schwinger with what is now known as ``Schwinger's Measurement Algebra''. We simplify his results by restriction to binary measurements and extend it to include classical as well as imperfect and thermal beam splitters. We use elementary methods to introduce advanced subjects such as geometric phase, Berry-Pancharatnam phase, superselection sectors, symmetries and applications to the identities of the Standard Model fermions.
[2888] vixra:1801.0309 [pdf]
Remarks on Liouville-Type Theorems on Complete Noncompact Finsler Manifolds
In this paper, we give a gradient estimate of the positive solution to the equation $$\Delta u=-\lambda^2u, \ \ \lambda\geq 0$$ on a complete non-compact Finsler manifold. Then we obtain the corresponding Liouville-type theorem and Harnack inequality for the solution. Moreover, on a complete non-compact Finsler manifold we also prove a Liouville-type theorem for a $C^2$-nonnegative function $f$ satisfying $$\Delta f\geq cf^d, \ \ c>0, \ d>1, $$ which improves a result obtained by Yin and He.
[2889] vixra:1801.0307 [pdf]
A Nonconvex Penalty Function with Integral Convolution Approximation for Compressed Sensing
In this paper, we propose a novel nonconvex penalty function for compressed sensing using an integral convolution approximation. It is well known that an unconstrained optimization criterion based on the $\ell_1$-norm easily underestimates the large components in signal recovery. Moreover, most methods perform well only when the measurement matrix satisfies the restricted isometry property (RIP) or only for highly coherent measurement matrices, and these two conditions cannot be established at the same time. We introduce a new solver that addresses both of these concerns by adopting a framework of the difference between two convex functions with an integral convolution approximation. What's more, to better boost the recovery performance, a weighted version is also provided. Experimental results suggest the effectiveness and robustness of our methods through several signal reconstruction examples, in terms of success rate and signal-to-noise ratio (SNR).
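For context, the $\ell_1$ underestimation that motivates the paper can be reproduced with the plain ISTA (soft-thresholding) baseline. This sketch is the baseline being criticized, not the paper's nonconvex method, and all sizes and parameter values are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # Gaussian sensing matrix
x0 = np.zeros(100)
x0[[5, 37, 80]] = [4.0, -3.0, 5.0]                 # sparse ground truth
x = ista(A, A @ x0, lam=0.05)
```

Comparing `x` against `x0`, the recovered spikes typically come back shrunk toward zero by the threshold, which is exactly the $\ell_1$ bias the proposed penalty aims to remove.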
[2890] vixra:1801.0274 [pdf]
A Note on Deutsch-Jozsa Quantum Algorithm
The Deutsch-Jozsa quantum algorithm is of great importance to modern quantum computation, but we find it is flawed. It confuses two unitary transformations: one is performed on a pure state, and the other on a superposition. In the past decades, no constructive specification of the unitary operator performed on the involved superposition has been found, and no experimental test of the algorithm has been practically carried out. We think the algorithm needs more constructive specifications so as to confirm its correctness.
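For reference, the standard textbook construction the abstract disputes can be simulated directly. Below is a minimal numpy sketch of the one-qubit Deutsch case, assuming the usual oracle U_f|x>|y> = |x>|y XOR f(x)>:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def oracle(f):
    """U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix (basis index 2x+y)."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    """Returns 0 if f is constant, 1 if f is balanced."""
    state = np.kron(np.array([1, 0]), np.array([0, 1]))  # |0>|1>
    state = np.kron(H, H) @ state                        # Hadamard both qubits
    state = oracle(f) @ state                            # query the oracle once
    state = np.kron(H, I2) @ state                       # Hadamard the first qubit
    p1 = state[2] ** 2 + state[3] ** 2                   # P(first qubit measures 1)
    return int(round(p1))

print(deutsch(lambda x: 0))  # 0 (constant)
print(deutsch(lambda x: x))  # 1 (balanced)
```

The single oracle query suffices because the second register, prepared in (|0> - |1>)/sqrt(2), kicks the value of f back as a phase on the first qubit.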
[2891] vixra:1801.0269 [pdf]
An Alternative Explanation of Non-Newtonian Galactic Rotation Curves
Inspired by the continued success of MOND (Modified Newtonian Dynamics) in the prediction of galactic rotation curves, an attempt to derive the deep-MOND equation from known mechanics has resulted in a third explanation apart from MOND and dark matter. It is proposed that particle velocities follow the relation v = √(GM)r <sup>-1/2 </sup>+ √(a<sub>x</sub>)r<sup>1/2</sup>, where a<sub>x</sub> is a scalar accelerating field that is independent of mass. This yields the following relation for centripetal acceleration: a = (GM)r<sup>-2 </sup>+ 2√(a<sub>x</sub>GM)r<sup>-1 </sup>+ a<sub>x</sub>, which, at large radii, is nearly identical to the deep-MOND equation a = √(a<sub>0</sub>GM)r<sup>-1</sup>. When applied to a handful of galaxies, the velocity equation prefers an a<sub>x</sub> on the order of 10<sup>-14</sup> (km s<sup>-2</sup>), which gives a good fit of velocity curves to observed values. It is posited that scalar field a<sub>x</sub> is a result of local galactic expansion, such that a<sub>x</sub> = cH<sub>g</sub>, where H<sub>g</sub> is the rate of expansion. For the Milky Way, it is estimated that H<sub>g</sub> ≈ 9.3 E-4 (km s<sup>-1</sup> kpc<sup>-1</sup>). This rate would predict an increase of the astronomical unit of 14 (cm yr<sup>-1</sup>), which compares well with the recently reported measurement of 15 ±4 (cm yr<sup>-1</sup>).
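The algebraic step from the velocity relation to the centripetal acceleration is just a = v²/r, and it can be confirmed numerically. In this sketch the mass, radius, and a<sub>x</sub> values are illustrative only (with a<sub>x</sub> ~ 10<sup>-14</sup> km s<sup>-2</sup> = 10<sup>-11</sup> m s<sup>-2</sup>, matching the abstract's order of magnitude):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
M = 1.0e41      # kg, illustrative galactic mass
a_x = 1.0e-11   # m s^-2, illustrative scalar field value

def v(r):
    """Proposed velocity relation: v = sqrt(GM)/sqrt(r) + sqrt(a_x)*sqrt(r)."""
    return math.sqrt(G * M) * r ** -0.5 + math.sqrt(a_x) * r ** 0.5

def a_centripetal(r):
    return v(r) ** 2 / r

def a_expanded(r):
    """Quoted expansion: GM/r^2 + 2*sqrt(a_x*GM)/r + a_x."""
    return G * M * r ** -2 + 2 * math.sqrt(a_x * G * M) / r + a_x

r = 3.0e20  # ~10 kpc
assert abs(a_centripetal(r) - a_expanded(r)) / a_expanded(r) < 1e-9
```

At large radii the first term dies off as r<sup>-2</sup>, leaving the cross term 2√(a<sub>x</sub>GM)/r that mimics the deep-MOND form.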
[2892] vixra:1801.0252 [pdf]
Relativity Emerging from Microscopic Particle Behaviour and Time Rationing
This article presents a new theory, or at least interpretation, of relativity whereby relativistic effects emerge as a result of rationing of Newtonian time into spatial and intrinsic motions. Unlike the special theory of relativity, this theory does not need to postulate that the speed of light (c) is constant for all reference frames. The constancy of the speed of light emerges from more basic principles. This theory postulates that:<br/> <strong> Postulate 1:</strong> The speed of spatial motion of a particle is always c.<br/> <strong> Postulate 2:</strong> Spatial motion and intrinsic motion continuously, linearly, and symmetrically rub into each other.<br/> Postulate 1 seems reasonable because the Dirac model of the electron (i.e. its <strong>zitterbewegung</strong> interpretation) indicates that the speed in the intrinsic degrees of freedom of an electron is always c. If the spatial speed were anything other than c, then transitioning between spatial and intrinsic motions would have entailed repeated cycles of high accelerations and decelerations. Postulate 2 is also reasonable because it is the simplest and most symmetric way for the spatial and intrinsic time-shares to co-evolve in time. An observer's physical measure of time is entirely encoded by its intrinsic motions. This is the relativistic time. The time spent in spatial motion does not cause any change of the particle's internal configuration, and therefore does not contribute to its measurable time.<br/> If an observer races against a photon, the photon will always lead ahead with a relative speed of c, because light advances with respect to the observer only for the duration of the observer's intrinsic motion, i.e. for the full duration of its measurable time. During spatial motion, the observer moves at the same speed as the photon. Consequently, the observed relative speed of light, i.e. the spatial advance of light divided by the measurable time, is always c.
Thus, in the limited sense of racing a photon, the constancy of its measured speed is a deduced result here. The broader question of the relative velocity of an observer with respect to a photon or a light wave-front is clarified in section 5.
[2893] vixra:1801.0249 [pdf]
Dark Energy and the Bohm-Poisson-Schroedinger Equation
We revisit the solutions to the nonlinear Bohm-Poisson (BP) equation with relevant cosmological applications. In particular, we obtain an exact analytical expression for the observed vacuum energy density and explain the origins of its repulsive gravitational nature. Further results are provided which include two possible extensions of the Bohm-Poisson equation to the full relativistic regime; how Bohm's quantum potential in four-dimensions could be re-interpreted as a gravitational potential in five-dimensions, and which explains why the presence of dark energy/dark matter in our 4D spacetime can only be inferred indirectly, but not be detected/observed directly. Solutions to the novel Bohm-Poisson-Schroedinger equation are provided encoding the repulsive nature of dark energy (repulsive gravity). We proceed with a discussion on Asymptotic safety, matter creation from the vacuum, and Finsler relativistic extensions of the Bohm-Poisson equation. Finally, we conclude with some comments about the Dirac-Eddington large numbers coincidences.
[2894] vixra:1801.0218 [pdf]
Does Heisenberg's Uncertainty Principle Predict a Maximum Velocity for Anything with Rest-Mass below the Speed of Light?
In this paper, we derive a maximum velocity for anything with rest-mass from Heisenberg’s uncertainty principle. The maximum velocity formula we get is in line with the maximum velocity formula suggested by Haug in a series of papers. This supports the assertion that Haug’s maximum velocity formula is useful in considering the path forward in theoretical physics. In particular, it predicts that the Lorentz symmetry will break down at the Planck scale, and shows how and why this happens. Further, it shows that the maximum velocity for a Planck mass particle is zero. At first this may sound illogical, but it is a remarkable result that gives a new and important insight into this research domain. We also show that the common assumed speed limit of v < c for anything with rest-mass is likely incompatible with the assumption of a minimum length equal to the Planck length. So one either has to eliminate the idea of the Planck length as something special, or one has to modify the speed limit of matter slightly to obtain the formula we get from Heisenberg’s uncertainty principle.
[2895] vixra:1801.0211 [pdf]
Smarandache Fresh and Clean Ideals of Smarandache BCI-Algebras
The notion of Smarandache fresh and clean ideals is introduced, examples are given, and related properties are investigated. Relations between Q-Smarandache fresh ideals and Q-Smarandache clean ideals are given. Extension properties for Q-Smarandache fresh ideals and Q-Smarandache clean ideals are established.
[2896] vixra:1801.0203 [pdf]
Residual Annual and Diurnal Periodicities of the P 10 Acceleration Term Resolved in Absolute CMB Space
Applying the general, classical Doppler formula (CMB-Doppler formula) of first order for two-way radio Doppler signals in the fundamental rest frame of the isotropic cosmic microwave background radiation (CMB) between earthbound Deep Space Network stations (DSN), and the Pioneer 10 space probe (P 10) resolves the phenomenon of the residual, so far unexplained annual and diurnal signal variations on top of the constant acceleration term <i>Anderson & Laing & Lau & et al. (2002), Anderson & Campbell & Ekelund & et al. (2008)</i>. The anomalous annual and diurnal variations of the acceleration term vanish, if instead of the relativistic Standard-Doppler formula (SRT-Doppler formula) of first and second order the CMB-Doppler formula is used. That formula contains the absolute velocities u<sub>e</sub> of Earth, and u<sub>pio</sub> of P 10, derived from the absolute velocity u<sub>sun</sub> of the solar system barycenter in the CMB, with u<sub>sun</sub> = 369.0 ± 0.9 km/s, and the relative revolution velocity v<sub>e</sub> of Earth, and the relative velocity v<sub>pio</sub> of P 10 in the heliocentric frame from January 1987 until December 1996. The flyby radio Doppler and ranging data anomalies can be resolved as well by using the CMB-Doppler formula with the absolute, asymptotic velocities of the inbound and outbound maneuver flights, which have usually slightly different magnitudes, inducing the so far unexplained frequency shift, and the unexplained difference in the ranging data.
[2897] vixra:1801.0198 [pdf]
Reviving Newtonian To Interpret Relativistic Space-Time
This article presents a new interpretation of relativity whereby relativistic effects emerge as a result of rationing of Newtonian time into spatial and intrinsic motions. Unlike the special theory of relativity, this theory does not need to postulate that the speed of light (c) is constant for all reference frames. The constancy of the speed of light emerges from more basic principles. This theory postulates that: <br/> <strong> The speed of spatial motion of a particle is always c </strong><br/> <strong> Spatial motion and intrinsic motion continuously, linearly, and symmetrically rub into each other. </strong><br/> Postulate 1 seems reasonable because the Dirac model of the electron already shows that the spatial speed of the intrinsic degrees of freedom of an electron is always c. If the spatial speed were anything other than c, then time-sharing between spatial and intrinsic motions would have entailed repeated cycles of high accelerations and decelerations. Postulate 2 is also reasonable because it is the simplest and most symmetric way for the spatial and intrinsic time-shares to co-evolve in time. An observer's physical measure of time is entirely encoded by its intrinsic motions. This is the relativistic time. The time spent in spatial motion does not cause any change of the particle's internal state, and therefore does not contribute to measurable time.<br/> The speed of light is constant regardless of the speed of the observer because light advances with respect to the observer only for the duration of its intrinsic motion (i.e. during the relativistic time). During spatial motion, the observer moves with the light. Consequently, the spatial advance of light divided by the relativistic time (i.e. the observed relative speed) is always c. Hence the constancy of the speed of light, which is a postulate of Einstein's relativity, is a deduced result here.
[2898] vixra:1801.0187 [pdf]
A Conjecture of Existence of Prime Numbers in Arithmetic Progressions
In this paper we propose and prove a conjecture of existence of a prime number in the arithmetic progression $S_{a,b}=\left\{ ab+1,ab+2,ab+3,\ldots,ab+(b-1)\right\}$. As corollaries of this proof, many classical prime-number conjectures and theorems are proved, mainly Bertrand's theorem and Oppermann's, Legendre's, Brocard's, and Andrica's conjectures. A new maximum interval between any natural number and the nearest prime number is also defined. Finally, a corollary is stated which implies some advance on the conjecture of the existence of infinitely many prime numbers of the form $n^{2}+1$.
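The conjecture is easy to probe numerically for small parameters. A brute-force sketch (we restrict to a ≤ b, an assumption on our part, since the abstract does not state the range of validity; for fixed b and arbitrarily large a, intervals of length b-1 must eventually miss the primes as gaps grow):

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def has_prime(a, b):
    """Does S_{a,b} = {ab+1, ..., ab+(b-1)} contain a prime?"""
    return any(is_prime(a * b + k) for k in range(1, b))

# exhaustive check for 2 <= b <= 50 and 1 <= a <= b
print(all(has_prime(a, b) for b in range(2, 51) for a in range(1, b + 1)))  # True
```

Note that b = 1 gives the empty set, so the statement is only meaningful for b ≥ 2; the a ≤ b range makes S_{a,b} resemble the Oppermann/Legendre-type intervals the abstract lists as corollaries.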
[2899] vixra:1801.0176 [pdf]
Explanation of Michelson-Morley Experiment
The unified theory of dynamic space was conceived and written by Professor Physicist Naoum Gosdas, inspired by the principle of antithesis (opposition), because of which all natural phenomena are created. These phenomena are derived from the unique absolute dynamic space. The explanation of the Michelson-Morley experiment is thus based on the above theory, namely on both the actual and the known apparent reduction of the speed of light, without the second postulate of relativity. The actual reduction of the speed of light happens only when light is transmitted in moving material systems, in which the cohesive pressure of the proximal space is reduced.
[2900] vixra:1801.0156 [pdf]
A Remark on the Localization Formulas About Two Killing Vector Fields
In this article, we discuss a localization formula of equivariant cohomology about two Killing vector fields on the set of zero points ${\rm{Zero}}(X_{M}-\sqrt{-1}Y_{M})=\{x\in M \mid |Y_{M}(x)|=|X_{M}(x)|=0 \}$. As an application, we use it to get formulas about characteristic numbers and to get a Duistermaat-Heckman type formula on symplectic manifolds.
[2901] vixra:1801.0155 [pdf]
A Poincaré-Hopf Type Formula for A Pair of Vector Fields
We extend the result about the Poincar\'e-Hopf type formula for the difference of the Chern character numbers (cf. [3]) to non-isolated singularities, and establish a Poincar\'e-Hopf type formula for a pair of vector fields whose function $h^{T_{\mathbb{C}}M}(\cdot,\cdot)$ has non-isolated zero points over a closed, oriented smooth manifold of dimension $2n$.
[2902] vixra:1801.0140 [pdf]
Simple Proofs that Zeta(n>=2) Is Irrational
We prove that the partial sums of $\zeta(n)-1=z_n$ are not given by any single decimal in a number base given by a denominator of their terms. This result, applied to all partials, shows that the partials are excluded from an ever greater number of possible rational convergence points. The limit of the partials is $z_n$, and the limit of the exclusions leaves only irrational numbers. Thus $z_n$ is proven to be irrational. Alternative proofs of this same type are given.
[2903] vixra:1801.0138 [pdf]
Natural Squarefree Numbers: Statistical Properties II
This paper is an appendix of Natural Squarefree Numbers: Statistical Properties [PR04]. In this appendix we calculate the probability that c is squarefree, where c=a*b, a is an element of the set X, and b is an element of the set Y.
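As a baseline for such calculations, the classical density of squarefree integers is 6/π² ≈ 0.6079, and a direct count reproduces it. This sketch shows only that baseline, not the paper's product case; the cutoff N is arbitrary:

```python
from math import pi

def is_squarefree(n):
    """True if no square of a prime divides n (checking all d >= 2 suffices)."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

N = 10000
density = sum(is_squarefree(n) for n in range(1, N + 1)) / N
print(density)          # ~0.6083, close to 6/pi^2 = 0.6079...
print(6 / pi ** 2)
```

For a product c = a*b the squarefree events for a and b are not independent (shared prime factors matter), which is what makes the appendix's calculation nontrivial.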
[2904] vixra:1801.0119 [pdf]
Parametric Validation Reinforcement Loops, and the Cosmological Constant Problem
The primary consideration of this unifying field theory is the partial mapping of topology, within observations, as feedback loops. Specifically, the effective degrees of freedom (d.o.f.) resulting from such recursive exchanges. This modeling of observation as partial mapping seems well justified, as it is ubiquitous throughout nature's exchanges and propagation of information. Consider how the meridians of gnomonically projected light waves onto vision receptors are similarly distorted. Thus, PVRL extrapolates this same principle of constraining parameters in recursive feedback loops into the entire scope from QFT (at flashpoint) to GR. PVRL proposes a multispace of transitioning Rn vector fields (similar to Hilbert space), coexisting like wavelengths in a prism, progressing from quantum states, which are higher dimensional, outward to lower dimensional Macrospace (note that backward causation is possible in quantum mechanics, but not possible in the constrained parameters of classical mechanics or GR). Familiar classic R4 spacetime is just one phase of this multispace. The mechanism which delineates between each state is PVRL: an iterated process of conscious binary gnomonic mapping of higher dimensional topology onto biased eigenstates (and subsequent propagation within the quantum field). At each iteration, symmetry becomes more broken, and geometric parameters become more constrained (polarity, bonding, separation, alignment and propagation). The inevitable outcome of such recursive feedback loops is a power law distribution (exponential tail), with increased entropy and complexity. The resolution of the Cosmological Constant Problem is an understanding that scales approaching QFT are viewed in higher dimensional divergence, and that scales approaching GR are viewed in lower dimensional convergence: a transitioning Rn space, from R5 at microscales, outward to R3 at the cosmic event horizon, with R4 spacetime as an intermediate phase.
[2905] vixra:1801.0108 [pdf]
Velocity Transformation in Reference Frame
A moving object in one inertial reference frame always moves at a different speed in another inertial reference frame. To determine this different speed, a temporary acceleration is applied to a duplicate of the first inertial reference frame in order to match the second inertial reference frame. The velocity transformation between two inertial reference frames is precisely derived based on the applied acceleration. The result shows that velocity transformation depends exclusively on the relative motion between inertial reference frames. Velocity transformation is independent of the speed of light.
[2906] vixra:1801.0102 [pdf]
Bayesian Transfer Learning for Deep Networks
We propose a method for transfer learning for deep networks through Bayesian inference, where an approximate posterior distribution q(w|θ) of model parameters w is learned through variational approximation. Utilizing Bayes by Backprop, we optimize the parameters θ associated with the approximate distribution. When performing transfer learning we consider two tasks: A and B. Firstly, an approximate posterior q_A(w|θ) is learned on task A and afterwards transferred as a prior p(w) → q_A(w|θ) when learning the approximate posterior distribution q_B(w|θ) for task B. Initially, we consider a multivariate normal distribution q(w|θ) = N(µ, Σ) with diagonal covariance matrix Σ. Secondly, we consider the prospects of introducing more expressive approximate distributions, specifically those known as normalizing flows. By investigating these concepts on the MNIST data set we conclude that utilizing normalizing flows does not improve Bayesian inference in the context presented here. Further, we show that transfer learning is not feasible using our proposed architecture and our definition of task A and task B, but no general conclusion regarding rejecting a Bayesian approach to transfer learning can be made.
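As a minimal sketch of the prior-transfer idea p(w) → q_A(w|θ) described in the abstract (not the paper's actual Bayes by Backprop code on MNIST): for Bayesian linear regression with Gaussian prior and noise, the posterior is available in closed form, so the task-A posterior can be passed directly as the task-B prior. All data and parameters here are illustrative.

```python
import numpy as np

def gaussian_posterior(X, y, prior_mean, prior_cov, noise_var=0.1):
    """Closed-form posterior N(mu, Sigma) for Bayesian linear regression."""
    prec = np.linalg.inv(prior_cov) + X.T @ X / noise_var
    Sigma = np.linalg.inv(prec)
    mu = Sigma @ (np.linalg.inv(prior_cov) @ prior_mean + X.T @ y / noise_var)
    return mu, Sigma

rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.7])

# Task A: learn q_A(w) starting from a broad prior N(0, I).
X_A = rng.normal(size=(50, 2))
y_A = X_A @ w_true + 0.3 * rng.normal(size=50)
mu_A, Sigma_A = gaussian_posterior(X_A, y_A, np.zeros(2), np.eye(2))

# Task B (related, with little data): transfer q_A as the prior.
X_B = rng.normal(size=(5, 2))
y_B = X_B @ w_true + 0.3 * rng.normal(size=5)
mu_B, Sigma_B = gaussian_posterior(X_B, y_B, mu_A, Sigma_A)
```

Because the task-B precision adds to the transferred one, the posterior covariance can only shrink, which is the intuition the paper tests with variational posteriors instead of closed forms.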
[2907] vixra:1801.0096 [pdf]
Quadratic Transformations of Hypergeometric Function and Series with Harmonic Numbers
In this brief note, we show how to apply Kummer's and other quadratic transformation formulas for Gauss' and generalized hypergeometric functions in order to obtain transformation and summation formulas for series with harmonic numbers that contain one or two continuous parameters.
[2908] vixra:1801.0066 [pdf]
The Brighter Sides of Gravity
This paper is an appendix to the article "From Bernoulli to Laplace and Beyond" (referenced below), and discusses different aspects of it: electromagnetism, field tensors, general relativity, and probability.
[2909] vixra:1801.0050 [pdf]
Fruit Recognition from Images Using Deep Learning
In this paper we introduce a new, high-quality dataset of images containing fruits. We also present the results of some numerical experiments for training a neural network to detect fruits. We discuss the reason why we chose to use fruits in this project by proposing a few applications that could use this kind of neural network.
[2910] vixra:1801.0045 [pdf]
Benchmarking and Improving Recovery of Number of Topics in Latent Dirichlet Allocation Models
Latent Dirichlet Allocation (LDA) is a generative model describing the observed data as being composed of a mixture of underlying unobserved topics, as introduced by Blei et al. (2003). A key hyperparameter of LDA is the number of underlying topics <i>k</i>, which must be estimated empirically in practice. Selecting the appropriate value of <i>k</i> is essentially selecting the correct model to represent the data; an important issue concerning the goodness of fit. We examine in the current work a series of metrics from the literature on a quantitative basis by performing benchmarks against a generated dataset with a known value of <i>k</i> and evaluate the ability of each metric to recover the true value, varying over multiple levels of topic resolution in the Dirichlet prior distributions. Finally, we introduce a new metric and heuristic for estimating <i>k</i> and demonstrate improved performance over existing metrics from the literature on several benchmarks.
[2911] vixra:1801.0041 [pdf]
Taking Advantage of BiLSTM Encoding to Handle Punctuation in Dependency Parsing: A Brief Idea
In the context of the bidirectional-LSTMs neural parser (Kiperwasser and Goldberg, 2016), an idea is proposed to initialize the parsing state without punctuation-tokens but using them for the BiLSTM sentence encoding. The relevant information brought by the punctuation-tokens should be implicitly learned using the errors of the recurrent contributions only.
[2912] vixra:1801.0025 [pdf]
From Bernoulli to Laplace and Beyond
Reviewing Laplace's equation of gravitation, known as the Poisson equation, from the perspective of D. Bernoulli, it will be shown that Laplace's equation tacitly assumes the temperature T of the mass system to be approximately 0 degrees Kelvin. For temperatures greater than zero, the gravitational field will have to be given an additive correctional field. Now, temperature is intimately related to heat, and heat is known to be radiated as an electromagnetic field. It is shown that two things are needed in order to arrive at the gravitational field in the low-temperature limit: the total square energy density of the source in space-time, and a (massless) field which expresses the equivalence of inertial and gravitational mass/energy in a quadratic, Lorentz-invariant form. This field not only must necessarily include electromagnetic interaction, it will also be seen to behave like it.
[2913] vixra:1801.0010 [pdf]
Concerning the Dirac γ-Matrices Under a Lorentz Transformation of the Dirac Equation
We embolden the idea that the Dirac 4 × 4 γ-matrices are four-vectors, where the space components (γ^i) represent spin and the fourth component (γ^0) should likewise represent the time component of spin in the usual four-vector formalism of the Special Theory of Relativity. With the γ-matrices as four-vectors, it is seen that the Dirac equation admits two kinds of wavefunctions: (1) the usual four-component Dirac bispinor ψ and (2) a scalar four-component bispinor φ. Realizing this, and knowing beforehand of the existing mystery as to why Leptons and Neutrinos come in pairs, we seize the moment and suggest that the pair (ψ, φ) can be used as a starting point to explain the mystery of why, in their three generations [(e±, ν_e), (µ±, ν_µ), (τ±, ν_τ)], Leptons and Neutrinos come in doublets. In this suggestion, the scalar bispinor φ can be thought of as the Neutrino, while the usual Dirac bispinor ψ can be thought of as the Lepton.
[2914] vixra:1712.0676 [pdf]
"Do Ion Channels Spin?" Update
The idea that ion channel proteins physically rotate to enhance ion flow through the membrane is reviewed, in light of recent experimental results. Although there is still no definite answer to the question presented in the title, some recent work can be interpreted as supporting this notion. As a bonus, we present a general theory of anesthesia. Anesthetics, and alcohol, are dissolved in the membrane lipids, and (perhaps) do not directly bind to ion channels or pores. Instead, they change the properties of the surrounding lipids, thus compromising the rotations of the pores, and producing the anesthetic effects.
[2915] vixra:1712.0664 [pdf]
Electron Toroidal Moment
This Toroidal Solenoid Electron model describes the electron as an infinitesimal electric charge moving at the speed of light along a helical path. From this semiclassical model, we can derive all the electron's characteristics, such as the electron magnetic moment, the g-factor, its natural frequency, the value of the Quantum Hall Resistance and the value of the Magnetic Flux Quantum. In this new work, we obtain other features such as the helicity, the chirality, the Schwinger limits and, especially, the Toroidal Moment of the electron. The experimental detection of the Toroidal Moment of the electron could be used to validate this model. The toroidal moment of the electron is a direct consequence of the Helical Solenoid Electron model, and it is calculated qualitatively and quantitatively. This feature of the electron (and of any other subatomic particle) is not contained in the Standard Model, but appears as a requirement to explain the violation of the parity symmetry of subatomic particles. The existence of a toroidal moment has been experimentally verified in nuclei of heavy atoms, and it also serves as a basis to explain dark matter.
[2916] vixra:1712.0662 [pdf]
The Chameleon Effect, the Binomial Theorem and Beal's Conjecture
In psychology, the Chameleon Effect describes how an animal's behaviour can adapt to, or mimic, its environment through non-conscious mimicry. In the first part of this paper, we show how $a^x - b^y$ can be expressed as a binomial expansion (with an upper index, $z$) that, like a chameleon, mimics a standard binomial formula (to the power $z$) without its own value changing even when $z$ itself changes. In the second part we will show how this leads to a proof for the Beal Conjecture. We finish by outlining how this method can be applied to a more generalised form of the equation.
[2917] vixra:1712.0659 [pdf]
TDBF: Two Dimensional Belief Function
How to efficiently handle uncertain information is still an open issue. In this paper, a new method to deal with uncertain information, named the two-dimensional belief function (TDBF), is presented. A TDBF has two components, T = (mA, mB). The first component, mA, is a classical belief function. The second component, mB, is also a classical belief function, but it is a measure of the reliability of the first component. The definition of the TDBF and its discounting algorithm are proposed. Compared with the classical discounting model, the proposed TDBF is more flexible and reasonable. Numerical examples are used to show the efficiency of the proposed method.
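For context on what the TDBF generalizes, here is a sketch of classical Shafer discounting of a belief function, where a reliability factor α scales every focal mass and the removed mass (1 − α) is transferred to the whole frame Θ. The dict-of-frozensets representation is an illustrative choice, not the paper's notation.

```python
def discount(m, alpha, frame):
    """Classical Shafer discounting: m'(A) = alpha * m(A) for A != Theta,
    and m'(Theta) = alpha * m(Theta) + (1 - alpha)."""
    theta = frozenset(frame)
    md = {A: alpha * v for A, v in m.items() if A != theta}
    md[theta] = alpha * m.get(theta, 0.0) + (1.0 - alpha)
    return md

frame = {'a', 'b', 'c'}
m = {frozenset({'a'}): 0.6,
     frozenset({'a', 'b'}): 0.3,
     frozenset(frame): 0.1}
md = discount(m, 0.8, frame)   # masses still sum to 1
```

In the TDBF, the scalar α would be replaced by the second belief function mB measuring the reliability of mA.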
[2918] vixra:1712.0647 [pdf]
A Total Uncertainty Measure for D Numbers Based on Belief Intervals
As a generalization of Dempster-Shafer theory, the theory of D numbers is a new theoretical framework for uncertainty reasoning. Measuring the uncertainty of knowledge or information represented by D numbers is an unsolved issue in that theory. In this paper, inspired by distance-based uncertainty measures for Dempster-Shafer theory, a total uncertainty measure for a D number is proposed based on its belief intervals. The proposed total uncertainty measure can simultaneously capture the discord, non-specificity, and non-exclusiveness involved in D numbers. Some basic properties of this total uncertainty measure, including its range, monotonicity, and generalized set consistency, are also presented.
[2919] vixra:1712.0642 [pdf]
Estimation of the Earth's "Unperturbed" Perihelion from Times of Solstices and Equinoxes
Published times of the Earth's perihelions do not refer to the perihelions of the orbit that the Earth would follow if unaffected by other bodies such as the Moon. To estimate the timing of that ``unperturbed" perihelion, we fit an unperturbed Kepler orbit to the timings of the year 2017's equinoxes and solstices. We find that the unperturbed 2017 perihelion, defined in that way, would occur 12.93 days after the December 2016 solstice. Using that result, calculated times of the year 2017's solstices and equinoxes differ from published values by less than five minutes. That degree of accuracy is sufficient for the intended use of the result.
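The computational core of fitting an unperturbed Kepler orbit to equinox and solstice timings is converting times to positions via Kepler's equation M = E − e·sin E. A minimal sketch with Newton's method follows; the eccentricity value and the quarter-period example are illustrative, not taken from the paper's fit.

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton's method."""
    E = M  # good initial guess for small eccentricity
    while True:
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

e = 0.0167                    # Earth's orbital eccentricity (approximate)
M = math.radians(90.0)        # mean anomaly a quarter period after perihelion
E = eccentric_anomaly(M, e)
# True anomaly (angle from perihelion as seen from the Sun)
nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                      math.sqrt(1 - e) * math.cos(E / 2))
```

Given such a time-to-angle map, the perihelion epoch and eccentricity can be adjusted until the predicted solstice and equinox times match the observed ones.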
[2920] vixra:1712.0641 [pdf]
Alternate Proof for Zeta(n>1) is Irrational
This is an alternative proof that zeta(n>1) is irrational. It uses nested intervals and Cantor's Nested Interval Theorem. It is a follow-up to the article "Visualizing Zeta(n>1) and Proving its Irrationality".
[2921] vixra:1712.0639 [pdf]
Mass and Ground State Energy of the Oscillator of a Black Body and the Lowest Radiation Temperature
According to Planck's theory of black body radiation, a black body is a collection of oscillators which are responsible for its radiation. There are no results available in the literature about the mass and vibrational ground-state energy of the oscillator. In the present research, the ground-state energy and mass of the oscillator are calculated from Planck's theory of black body radiation and de Broglie's wave-particle duality relation. It is observed that the mass of the oscillator is 1.6 × 10^{−39} kg and the vibrational ground-state energy is 3.587 × 10^{−23} J, subject to the constraint that the minimum temperature for radiation of a black body is 2.598 K.
[2922] vixra:1712.0637 [pdf]
The Canonical Commutation Relation is Unitary Due to Scaling Between Complementary Variables
Textbook theory says that the Canonical Commutation Relation derives from the homogeneity of space. This paper shows that the Canonical Commutation Relation derives not from the homogeneity of space or the homogeneity symmetry itself, but from a duality viewpoint of homogeneity, seen both from the viewpoint of position space and from the viewpoint of momentum space, combined. Additionally, a specific fixed scale factor relating position space with momentum space is necessary. It is this additional scaling information which enables complementarity between the system variables and makes the system unitary. Without this particular scaling, the Canonical Commutation Relation is left non-unitary and broken. Indeed, unitarity is separate information, unconnected and logically independent of the quantum system's underlying symmetry. This single counter-example contradicts the current consensus that foundational symmetries underlying quantum systems are ontologically, intrinsically and unavoidably unitary, and thus removes 'unitary ontology' as a reason for axiomatically imposing unitarity (or self-adjointness), by postulate, on quantum mechanical systems.<br><br>Keywords<br>foundations of quantum theory, quantum mechanics, wave mechanics, Canonical Commutation Relation, symmetry, homogeneity of space, unitary.
[2923] vixra:1712.0634 [pdf]
Anatomy of Radiation And Propagation
<p>We hear the word radiation often. We see waves revealed by water, and the air waving above a hot surface. We feel the air (wind) and hear its motion (sound); we even see the supersonic shock wave of air. How are particles, waves, radiant energy, or EMR different from the perceivable world of our day-to-day life? Shouldn't there be fundamental principles governing the action of matter and its interaction with its environment?</p> <p>Even a lone particle stands out from its background. Any object broadcasts its existence, from insignificant to strong, simple to complex. It will never stop, regardless of whether it is observable to us or not. The action of an object casts force (energy) into the environment. It can be a contact force that requires a medium to spread, or an action-at-a-distance force that does not. Energy is passed on to the next neighbors and spreads out. The collected interactions of neighbors take the form of waves. This is the fundamental activity of action-reaction-interaction between an object and its environment.</p> <p>Striking a piano string with its hammer makes sound and heat; dust or water droplets on the string will be shaken off. Oscillating it with electricity produces heat and light. Particles are radiated when oscillated at high energy. Certainly, the structure of the string will not survive long at such intensity. Comparing this to what the Sun is doing, do you see the similarity? Isn't the Sun oscillating at the intensity of destruction?</p>
[2924] vixra:1712.0606 [pdf]
Theoretical Lower Limit of Mass of Phonon and Critical Mass for Matter-Dark Matter Conversion
In this article, a theoretical lower limit on the mass of the phonon and a critical mass for matter-dark matter conversion are presented. Using Planck's equation for black body radiation and de Broglie's wave-particle duality, we can obtain a relation between the mass of the phonon and the frequency of the emitted radiation, which opens up several questions and possibilities. From this relation, with a few assumptions, we may have a critical mass below which we have dark matter and above which we get normal matter. With the help of this relation, the maximum matter density and the limit on string length may be reviewed. The calculated critical mass, considering the present values of Planck's constant and the Boltzmann constant, is 7.367 × 10^{−51} kg. It is also observed that if the phonon obeys de Broglie's equation, generation of electromagnetic radiation of frequency less than 56638721410 Hz is not possible by thermal heating.
[2925] vixra:1712.0605 [pdf]
Physical Interpretation of de Broglie’s Wave Particle Duality Relation in a Phonon
Using Planck's postulates for black body radiation and de Broglie's wave-particle duality relation, we can obtain a relation between the mass of a phonon and the emission frequency of a body at different temperatures. Using a boundary condition, the lower limit of the mass of a phonon is calculated to be 7.36 × 10^{−51} kg.
[2926] vixra:1712.0603 [pdf]
Tachyons: Properties and Way of Detection
The presently observed accelerating universe suggests the possibility of the real existence of 'Tachyons', a Boson-class particle theorized to exceed the maximum speed of electromagnetic radiation. Theory suggests that Tachyons do not violate the theory of Special Relativity despite having a speed greater than that of light in vacuum, but their existence has not been confirmed by experiment. In this article, possible properties of tachyons are discussed which would be helpful to test their existence and detection. Two thought experiments are proposed to detect them.
[2927] vixra:1712.0602 [pdf]
A New Model to Explain an Accelerating Universe Within Newtonian Mechanics
In this paper, applying only Newtonian mechanics, I have made an analysis of the nature of an accelerating universe. In this model, loss of gravitational force replaces any need for dark energy to explain an accelerating universe. This model is consistent with the Big Bang model.
[2928] vixra:1712.0598 [pdf]
The General Relevance of the Modified Cosmological Model
The modified cosmological model (MCM) has been an active research program since 2009 and here we summarize the highlights while providing background and insights as they demonstrate mathematical merit. The main ``modification'' that underpins this work is to consider the principle of greatest action, not least, but cosmological energy functions do not appear. To accommodate infinite action, we develop a transfinite analytical framework $^\star\mathbb{C}$. In the first chapter, we define the MCM system as the union of two Kaluza--Klein theories disconnected by a topological obstruction. We identify a detail overlooked by Feynman in his ``many double slits'' thought experiment with which we directly motivate the principle of maximum action. Chapter two is mostly a review of fundamental concepts including twistors and quaternions. Much attention in chapter three is given to what are perceived as criticisms of the MCM mechanism for the unification of the theories of gravitation and quanta, and we closely examine the derivation of Einstein's equation from unrelated MCM concepts. The main technical result in chapter three is to demonstrate that dark energy and expanding space are inherent to the MCM metric and we examine the role of the advanced electromagnetic potential in ``hyperspacetime.'' Chapter four is a tour de force starting with a review of the foundations of the MCM. We extend Dirac's bra-ket to an MCM inner product for $^\star\mathbb{C}$. We continue with a study of conformal infinity and the transfinite extension into the region ``beyond conformal infinity.'' The double slit experiment gives an example of the empirical/philosophical utilities of the MCM principles and we examine their extension into the realm of numerical analysis before deriving the main result: a new formulation for the mass-energy budget of the universe that agrees perfectly with $\Lambda$CDM.
[2929] vixra:1712.0593 [pdf]
Pitfall of Space Expansion
<p>Space is everywhere with us; it is in our hands. However, it is impossible to bend, stretch, compress, or do anything with it. Expansion of space would be intertwined with many paradoxes beyond comprehension. We have learned to manipulate matter and energy since our first existence on Earth, but never space.</p> <p>Additionally, space cannot have a boundary. Otherwise, it would separate space from something else outside of the space. Neither the boundary or surface of the space nor the outside of the space can be defined as anything other than space. Hence space, the boundary of space, and the outside can only be <i><b>space</b></i>. Thus, the shape, size, surface, and boundary of space cannot be detected or measured; neither can the age of space be measured. Space and the universe can only be considered infinite.</p> <p>Logically, we cannot detect vacuum, or emptiness. We can only detect the absence of the detectable, and the absence of the detectable is not absolutely equal to emptiness. By the same logic, we can only prove the absence of detectable matter and energy; it is impossible to prove the absence of space. The question is, how can you bend or expand an infinite and undetectable space that has no surface and boundary?</p>
[2930] vixra:1712.0586 [pdf]
The Making of Planet and Gravity
Gravity is not a force of attraction. A one-way concentrative potential is created by the head-on congregation of particles, from gentle pairing to high-speed collision. It can be started by electromagnetic attraction or by disturbance in the environment, internally or externally. A terrestrial planet, for example, is a coalition of elements. It grows in the process of construction. The coalition builds the body and the force keeping it together: gravity.
[2931] vixra:1712.0585 [pdf]
The Making of Star and Solar System
To me, the gravity of a super-sized planet can trigger global nuclear reactions and become a star. The expelled particles embrace objects in their path and create a space cyclone, the solar system. Our Solar System is a cyclone powered by the vortex force of a single dominating star, the Sun, in weightless space. The Sun regulates the orbits and reduces the collisions of all planets and alike within its reach. The question is, would there be a second star if Jupiter got a chance to gather more mass?
[2932] vixra:1712.0584 [pdf]
The Making of Black Hole And Galaxy
To me, a star is analogous to a constant, all-directional nuclear exploding jet engine. Expelling particles creates its opposite, an imploding force. Squeezing mass and energy into a single point, I don't see any other action that can create stronger concentrative mass and force than implosive compression. Under this constant compression, a star can reach the limit of its structure and go supernova. The surface activities would radiate at frequencies beyond our detection: a black hole, disappeared from our view. A massive black hole can create its own super space cyclone. Other subsystems can be caught by the storm and form a galaxy, a superset collection of solar and planetary systems, by nature's inheritance and self-similarity. Considering nature's capability of creating such varieties of microorganisms, planets, stars, galaxies, and alike will continue to surprise us with our slowly opening eyes. Yet the universe can only be seen by anyone in a very short blink.
[2933] vixra:1712.0582 [pdf]
From Redshift To Cosmic Background Radiation
Doppler redshift of radiation dominates in all observations, and path loss of frequency causes exponentially proportional redshift. Radiation, over space, can continue to stretch below visual and infrared detection, so the source merges into the background (CBR). In the meantime, space is filled with dominating low-frequency, below-visible radiation, which can come from outside of the visible universe.
[2934] vixra:1712.0580 [pdf]
Age of The Universe Paradox
The questions are: Will we ever know if there are objects outside of the detectable universe radiating at wavelengths longer than the CBR? Can we date fundamental particles, or space? We only measure the universe with visual radiation. Doesn't it seem that the size and age of the universe are determined by the size of our telescope?
[2935] vixra:1712.0579 [pdf]
Why Quantum Jump Essay
Atomic electron transitions appear to leap from one energy level to another. The issue is that atomic particles are too small and too fast for our detectors to recognize their action and identity. I believe this is because the sensors can only detect and register a repeated trajectory: a particle would have to revolve on the same orbit long enough; otherwise, it would not trigger the reaction of the detectors. A transitional trajectory is short, and it does not repeat. It cannot be detected, hence, the jump.
[2936] vixra:1712.0578 [pdf]
Big Bang Inflation Paradox Essay
The Big Bang and universe expansion are illusions of cosmic redshift. Doppler redshift is over 92% dominating, and path loss of frequency causes exponentially proportional redshift. The expansion of the universe is defined by its boundary, not by the separation of galaxies. The fundamentals of redshift and space are examined.
[2937] vixra:1712.0577 [pdf]
Spacetime Curvature Paradox Essay
<p>Space is in our hands and all around us. We have learned to manipulate matter and energy since our first existence on Earth, but never space. It is the absolute complement of the physical universe.</p> <p>It is impossible to bend space since it has no boundary and surface. Space cannot have a boundary; otherwise, it would separate space from something else outside of the space. Neither the boundary or surface of the space nor the outside of the space can be defined as anything other than space. On the contrary, we can bend time by varying clock ticks; this is true for all artificial measurements.</p> <p>Space has natural existence; time is an artificial measurement. Space is absolutely recyclable, but time can NEVER BE RECYCLED. And space is fundamental: it is independent of mass, energy, and everything else. Space and time cannot go together.</p>
[2938] vixra:1712.0576 [pdf]
Dark Matter and Dark Energy Paradox Essay
Many incomprehensible paradoxes are created by dark matter and dark energy. Physical matter, dark or not, has to occupy space and prevent other matter from taking the same location. On the other hand, isn't it meaningless if dark matter and dark energy can occupy the same space as ordinary matter and energy without interaction?
[2939] vixra:1712.0574 [pdf]
Wave and Tide Essay
The wave is a fundamental action of the universe. Water waves when it is disturbed by internal or external force, and there is a higher water level when waves meet the shore. This study examines the driving forces of waving gas, liquid, and solid anew, when gravity is not a force of attraction.
[2940] vixra:1712.0566 [pdf]
Navier-Stokes Equation, Integrals of Motion and Generalization of the Equation of Continuity of the Flow of Matter to the Theory of Relativity
The use of the N-S equation is of utmost importance for everyday life: airplanes, ships, underwater ships, etc. So the Clay Institute promises 1,000,000 dollars for a good solution. The present paper concerns the Estonian author's confidence that he has solved the problem.
[2941] vixra:1712.0556 [pdf]
Experimental Demonstration of Quantum Tunneling in IBM Quantum Computer
According to Feynman, we should make nature quantum mechanical to simulate it better. Simulating quantum systems on a computer has remained a challenging problem to tackle, especially for large quantum systems. However, Feynman's 1982 conjecture that 'physics can be simulated using a quantum computer rather than a Turing machine or a classical computer' has been proved correct. It is widely known that quantum computers have superior power compared to classical computers in simulating quantum systems efficiently. Here we report the experimental realization of quantum tunneling through potential barriers by simulating it on the IBM quantum computer, which here acts as a universal quantum simulator. We take a two-qubit system to visualize the tunneling process, which has a truly quantum nature. We clearly observe tunneling through a barrier in our experimental results. This experiment inspires us to simulate other quantum mechanical problems which possess such quantum nature.
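The paper runs its simulation on IBM quantum hardware; as a classical point of comparison, the tunneling phenomenon itself can be reproduced with a split-operator Schrödinger solver (ħ = m = 1): a Gaussian wavepacket with energy below a barrier still shows nonzero transmitted probability. All grid and barrier parameters below are illustrative.

```python
import numpy as np

# Grid and Fourier wavenumbers.
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Rectangular barrier higher than the packet's kinetic energy.
V0, width = 1.0, 2.0
V = np.where(np.abs(x) < width / 2, V0, 0.0)

# Gaussian wavepacket with mean momentum k0 (energy k0^2/2 = 0.5 < V0).
k0, sigma, x0 = 1.0, 5.0, -40.0
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Split-operator evolution: half potential, full kinetic, half potential.
dt, steps = 0.05, 1600
expV = np.exp(-0.5j * V * dt)
expK = np.exp(-0.5j * k ** 2 * dt)
for _ in range(steps):
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))

# Probability found beyond the barrier: nonzero only through tunneling.
T = np.sum(np.abs(psi[x > width / 2]) ** 2) * dx
```

Classically, a particle with energy 0.5 could never cross a barrier of height 1.0, so any transmitted probability is a purely quantum effect.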
[2942] vixra:1712.0539 [pdf]
Integrals Containing the Infinite Product $\prod_{n=0}^\infty\left[1+\left(\frac{x}{b+n}\right)^3\right]$
We study several integrals that contain the infinite product ${\displaystyle\prod_{n=0}^\infty}\left[1+\left(\frac{x}{b+n}\right)^3\right]$ in the denominator of their integrand. These considerations lead to the closed-form evaluation $\displaystyle\int_{-\infty}^\infty\frac{dx}{\left(e^x+e^{-x}+e^{ix\sqrt{3}}\right)^2}=\frac{1}{3}$ and to some other formulas.
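The quoted closed form can be spot-checked numerically. Note the integrand satisfies f(−x) = conj(f(x)), so the integral is real; the integrand decays like e^{−2|x|}, so truncating at |x| = 40 loses a negligible tail. The midpoint rule and grid size here are an arbitrary numerical choice.

```python
import numpy as np

# Midpoint-rule evaluation of the integral over [-40, 40].
n, half = 400_000, 40.0
dx = 2 * half / n
x = -half + dx * (np.arange(n) + 0.5)          # symmetric midpoints
f = 1.0 / (np.exp(x) + np.exp(-x) + np.exp(1j * np.sqrt(3) * x)) ** 2
integral = np.sum(f) * dx                      # imaginary part cancels by symmetry
```

The denominator never vanishes since |e^x + e^{−x}| ≥ 2 while the oscillating term has modulus 1, so the integrand is smooth everywhere.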
[2943] vixra:1712.0534 [pdf]
Horizontal Planar Motion Mechanism (HPMM) Incorporated to the Existing Towing Carriage for Ship Manoeuvring Studies
Planar Motion Mechanism (PMM) equipment is a facility, generally attached to a Towing Tank, used to perform experimental studies with ship models to determine the manoeuvring characteristics of a ship. The ship model is oscillated at prescribed amplitude and frequency in different modes of operation while it is towed along the towing tank at a predefined speed. The hydrodynamic forces and moments are recorded, analyzed and processed to obtain the hydrodynamic derivatives appearing in the manoeuvring equations of motion of a ship. This paper presents details of the Horizontal Planar Motion Mechanism (HPMM) equipment which was designed, developed and installed in the Towing Tank laboratory at IIT Madras.
[2944] vixra:1712.0533 [pdf]
Classification of an Ads Solution
McVittie's model, which interpolates between a Schwarzschild black hole and an expanding global (FLRW) spacetime, can be constructed by a simple coordinate replacement in Schwarzschild's isotropic interval. Analogously, one obtains a similarly generated exact solution of Einstein's equations based on a static transformation of de Sitter's metric. The present article is concerned with the application of this method to the AdS (Anti-de Sitter) spacetime. Multiplying the radial coordinate and its differential by a function a(t) gives the basic line element. Einstein's equations for the modified interval reduce to a system of two differential equations, which are solved in the article. The resulting solution is classified depending on the value of the cosmological constant. Several promising theories, like string theory and the AdS/CFT correspondence, involve spacetimes with higher dimensions. Thereby motivated, the previous results of this article are generalized to D dimensions in the last section.
[2945] vixra:1712.0524 [pdf]
Projection of a Vector upon a Plane from an Arbitrary Angle, via Geometric (Clifford) Algebra
We show how to calculate the projection of a vector, from an arbitrary direction, upon a given plane whose orientation is characterized by its normal vector, and by a bivector to which the plane is parallel. The resulting solutions are tested by means of an interactive GeoGebra construction.
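In plain vector-algebra terms (a simplification of the paper's Clifford-algebra treatment), projecting v onto a plane with normal n along an arbitrary direction d amounts to p = v − ((v·n)/(d·n)) d, which reduces to the usual orthogonal projection when d = n. A minimal numpy sketch, with illustrative vectors:

```python
import numpy as np

def project_onto_plane(v, n, d):
    """Project v onto the plane through the origin with normal n,
    along direction d (d must not be parallel to the plane)."""
    t = np.dot(v, n) / np.dot(d, n)
    return v - t * d

v = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 1.0])   # plane z = 0
d = np.array([1.0, 0.0, 1.0])   # oblique projection direction
p = project_onto_plane(v, n, d)  # lies in the plane: p . n == 0
```

Sliding v along d until it meets the plane is exactly what the scalar t measures; the degenerate case d·n = 0 (direction parallel to the plane) has no intersection and must be excluded.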
[2946] vixra:1712.0517 [pdf]
The Refutation of Gravitational Attraction
<p>Gravitational acceleration is independent of composition, shape, size, surface, and distance, by falling-body experiments.</p> <p>On the other hand, attraction acceleration is dependent on mass and composition (shape, size, surface, and distance are the distribution of mass, or a function of mass and distance).</p> <p>Therefore, <i><b>gravity is not attraction</b></i>.</p>
[2947] vixra:1712.0516 [pdf]
Delusion of Time Dilation
<p>This essay exploits speed, time, and space to show why time dilation is impossible. Space and time are completely independent. Fundamental matter, energy, and space have existence and are completely recyclable. However, time is only reference information; it does not exist. Above all, <i><b>time can not be recycled</b></i>!</p> <p>Clocks can drift, yet this is absolutely independent of time. Can we suppose the expansion or contraction of a yardstick means that space has changed?</p>
[2948] vixra:1712.0515 [pdf]
Hidden Truth of Double-Slit Test
<p>Basically, the double-slit device is an intrusive wave detector. Shooting particles to study their properties is intrusion. It would not be the same as performing an autopsy on a tabletop. The difficulty is not only studying a particle in motion; keeping it still is not any easier, if possible at all. We are incapable of identifying its individuality. A particle is a mass of its own entity, and the wave is the reaction of its complement. Above all, <i><b>the rest of the universe is the absolute complement of an object</b></i>. Even for a particle, it is impossible to be anywhere else at the same time, since anywhere else is its absolute complement.</p>
[2949] vixra:1712.0514 [pdf]
Event and Information
Our perception is the information of physical events collected by sensors. Observation is a special case of high-speed transportation of streams of information packages. The arrival order, the intervals, and the quality determine the perception of the observer. Additionally, not all arriving packages are perceived. We can catch more packages by improving the sensitivity and response speed of the apparatus. However, most are lost. We can be fooled by what we see.
[2950] vixra:1712.0513 [pdf]
Driving Force of Tectonic Plate
<p>Gravity is the inward force keeping Earth's spherical shape. The force of tectonic motion has to fight against gravity to push the crust from the ocean floor to the top of mountain ranges. The only force that persistently counteracts gravity and can be stronger than it is the centrifugal force. It is likely that centrifugal acceleration made Earth an oblate spheroid, and that it is the main driving force of the waving mantle along with the crust, water, and atmosphere.</p> <p>This preliminary work is only an open proposal for further study from the view of mantle/crust waves. Additionally, the added mass, energy, and momentum of solar particles could impact the crust. Like kneading Earth dough, the magnitude could be larger than we think by far. Further study is also recommended.</p>
[2951] vixra:1712.0488 [pdf]
Brief Solutions to Collatz Problem, Goldbach Conjecture and Twin Primes
Some time ago I published solutions to the Goldbach Conjecture, the Collatz Problem and Twin Primes, but I noticed that there were some serious logical voids in the explanations. After that I made some corrections in another article of mine, but some mistakes still remained. Even so, I can say that here, going back to the drawing board, I have brought out exact solutions by new methods.
[2952] vixra:1712.0487 [pdf]
Field Theory with Fourth-order Differential Equations
We introduce a new class of Higgs-type fields $\{U,U^{\mu},U^{\mu\nu}\}$ with Feynman propagator $\thicksim 1/p^4$, and consider the matching to the traditional gauge fields with propagator $\thicksim 1/p^2$ from the viewpoint of effective potentials at tree level. With some particular restrictions on the convergence, there is a wealth of potential forms generated by the fields $\{U,U^{\mu},U^{\mu\nu}\}$, such as: (1) in the case of $U$ coupled to the intrinsic charges of matter fields, the electromagnetic Coulomb potential with an extra linear potential, and Newton's gravitation, could be generated with the operators of different orders from the dynamics of $U$, respectively; (2) for the matter fields, with the multi-vacuum structure of a sine-Gordon type vector field $A^{\mu}$ induced from $U$, a seesaw mechanism for gauge symmetry and flavor symmetry of fermions could be generated, in which the heavy fermions could be produced; besides, by treating the fermion current as a field, a possible way toward renormalizable gravity could be proposed; (3) the Coulomb potential in electromagnetism and gravitation could be generated by an anti-symmetric field strength of $U^{\mu}$, when it is coupled to the intrinsic charge and momentum of matter fields, respectively; and, except for the Coulomb part in each case, there is a linear and a logarithmic part in the former case which might correspond to the confinement in strong QED, while there is a linear and a logarithmic part in the latter case which might correspond to dark energy effects in the impulsive case and dark matter effects in the attractive case, respectively; besides, a symmetric field strength of $U^{\mu}$ could also generate the same gravitation form as the anti-symmetric case; (4) a nonlinear version of the Klein-Gordon equation, QED and Einstein's general relativity could be generated as a low-energy approximation of the dynamics of $U$, $U^{\mu}$ and $U^{\mu\nu}$, respectively; moreover, in the weak field case, the gauge symmetry could superficially arise, and a linear QED, linear gravitation and a 3rd-order tensor version of QED could be generated by relating the field strength of $U$, $U^{\mu}$ and $U^{\mu\nu}$ to the corresponding gauge fields, respectively; (5) for the massive $\{U, U^{\mu}\}$, attractive potentials for particles with the same kind of charges could be generated, which might serve as candidates for the interactions maintaining the s-wave pairing and d-wave pairing Cooper pairs in superconductors, with electric charge in the $U$ case and magnetic moment in the $U^{\mu}$ case as interaction charge, respectively; etc.
[2953] vixra:1712.0486 [pdf]
Finite and Infinite Product Transformations
Several infinite products are studied that satisfy the transformation relation of the type $f(\alpha)=f(1/\alpha)$. For certain values of the parameters these infinite products reduce to modular forms. Finite counterparts of these infinite products are motivated by solution of Dirichlet boundary problem on a rectangular grid. These finite product formulas give an elementary proof of several modular transformations.
[2954] vixra:1712.0478 [pdf]
Two-Dimensional Fourier Transformations and Double Mordell Integrals
Several Fourier transformations of functions of one and two variables are evaluated and then used to derive some integral and series identities. It is shown that certain double Mordell integrals can be reduced to a sum of products of one-dimensional Mordell integrals. As a consequence of this reduction, a quadratic polynomial identity is found connecting products of certain one-dimensional Mordell integrals. An integral that depends on one real-valued parameter is calculated, reminiscent of an integral previously calculated by Ramanujan and Glasser. Some connections to elliptic functions and lattice sums are discussed.
[2955] vixra:1712.0474 [pdf]
Consciousness
The present scientific paradigm holds that the brain is the seat of consciousness, and that a complete knowledge of the neurons will yield a full understanding of this phenomenon. There are also non-scientific paradigms that suggest that consciousness is somehow connected to a great universal consciousness, the mind of God. In this paper I will separate these two ideas by defining a local and a global consciousness, and will briefly discuss both.
[2956] vixra:1712.0470 [pdf]
Conformal Kerr-de Sitter Gravity
We show that the Kerr metric for de Sitter spacetime is consistent with the demand that the associated Einstein field equations be derived from a simple conformally invariant Lagrangian.
[2957] vixra:1712.0469 [pdf]
Predicting Yelp Star Reviews Based on Network Structure with Deep Learning
In this paper, we tackle the real-world problem of predicting Yelp star-review ratings based on business features (such as images and descriptions), user features (average previous ratings), and, of particular interest, network properties (which businesses a user has rated before). We compare multiple models on different sets of features -- from simple linear regression on network features only to deep learning models on network and item features. In recent years, breakthroughs in deep learning have led to increased accuracy in common supervised learning tasks, such as image classification, captioning, and language understanding. However, the idea of combining deep learning with network features and structure appears to be novel. While the problem of predicting future interactions in a network has been studied at length, these approaches have often ignored either node-specific data or global structure. We demonstrate that a mixed approach combining both node-level features and network information can effectively be used to predict Yelp-review star ratings. We evaluate on the Yelp dataset by splitting our data along the time dimension (as would naturally occur in the real world) and comparing our model against others which do not take advantage of the network structure and/or deep learning.
[2958] vixra:1712.0468 [pdf]
The Effectiveness of Data Augmentation in Image Classification using Deep Learning
In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentation strategies is the traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.
[2959] vixra:1712.0467 [pdf]
Gaussian Processes for Crime Prediction
The ability to predict crime is incredibly useful for police departments, city planners, and many other parties, but thus far current approaches have not made use of recent developments of machine learning techniques. In this paper, we present a novel approach to this task: Gaussian processes regression. Gaussian processes (GP) are a rich family of distributions that are able to learn functions. We train GPs on historic crime data to learn the underlying probability distribution of crime incidence to make predictions on future crime distributions.
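The core idea of the abstract — fitting a Gaussian process to historical counts and reading off a posterior mean — can be sketched in a few lines of numpy. This is a generic GP regression with a squared-exponential kernel on invented toy data, not the authors' actual model or dataset:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean of GP regression at x_test (zero prior mean)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

# toy "incidence over time" data standing in for historic crime counts
x = np.linspace(0.0, 9.0, 10)
y = np.sin(x)
mu = gp_predict(x, y, x)   # posterior mean at the training points
```

Because the observation noise is small, the posterior mean passes close to the training targets; predictions at new inputs interpolate smoothly between them.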
[2960] vixra:1712.0466 [pdf]
Harvard Q-Guide Predictions Market
Given the announcement by Harvard College that it will stop publishing difficulty ratings for courses \cite{crimson}, a need has arisen for alternative methods of information gathering among undergraduates. In this paper, we propose different prediction market mechanisms, detailing user input/output, contract definitions, and payment rules for each of the proposed mechanisms. The goal of each mechanism is to obtain accurate predictions that could replace Q-guide data (overall course quality, difficulty rating, and workload rating). We further discuss properties of each prediction market, such as the truthfulness incentives for individual agents, individual agents' optimal policies, and expected results from each market. We conclude with a discussion and explanation of a simple toy implementation of the market, detailing design considerations that might affect user behaviour in our market, and laying the groundwork for future expansion and testing.
[2961] vixra:1712.0465 [pdf]
Reinforcement Learning with Swingy Monkey
This paper explores model-free, model-based, and mixture models for reinforcement learning under the setting of a SwingyMonkey game \footnote{The code is hosted on a public repository \href{https://github.com/kandluis/machine-learning}{here} under the prac4 directory.}. SwingyMonkey is a simple game with well-defined goals and mechanisms, with a relatively small state-space. Using Bayesian Optimization \footnote{The optimization took place using the open-source software made available by HIPS \href{https://github.com/HIPS/Spearmint}{here}.} on a simple Q-Learning algorithm, we were able to obtain high scores within just a few training epochs. However, the system failed to scale well after continued training, and optimization over hundreds of iterations proved too time-consuming to be effective. After manually exploring multiple approaches, the best results were achieved using a mixture of $\epsilon$-greedy Q-Learning with a stable learning rate $\alpha$ and a discount factor $\delta \approx 1$. Despite the theoretical limitations of this approach, these settings resulted in maximum scores of over 5000 points with an average score of $\bar{x} \approx 684$ (averaged over the final 100 testing epochs, median of $\bar{m} = 357.5$). The results show a continuing linear log-relation capping only after 20,000 training epochs.
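The $\epsilon$-greedy Q-Learning at the heart of this abstract can be sketched on a toy problem. The following is a generic tabular implementation on an invented two-state chain (not the SwingyMonkey environment): action 1 from state 0 earns a reward, and the TD update propagates that value back:

```python
import random

# Tabular epsilon-greedy Q-learning on a toy 2-state chain:
# taking action 1 in state 0 yields reward 1 and moves to state 1;
# everything else yields reward 0.
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
random.seed(0)

def choose(state):
    if random.random() < EPS:                        # explore
        return random.choice((0, 1))
    return max((0, 1), key=lambda a: Q[(state, a)])  # exploit

for episode in range(200):
    s = 0
    for _ in range(10):
        a = choose(s)
        s2 = 1 if a == 1 else 0
        r = 1.0 if (s == 0 and a == 1) else 0.0
        target = r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])    # TD update
        s = s2
```

After training, the rewarded action dominates: Q[(0, 1)] exceeds Q[(0, 0)], which is exactly the structure the learned policy exploits.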
[2962] vixra:1712.0464 [pdf]
Multi-Document Text Summarization
We tackle the problem of multi-document extractive summarization by implementing two well-known algorithms for single-text summarization -- {\sc TextRank} and {\sc Grasshopper}. We use ROUGE-1 and ROUGE-2 precision scores with the DUC 2004 Task 2 data set to measure the performance of these two algorithms, with optimized parameters as described in their respective papers ($\alpha =0.25$ and $\lambda=0.5$ for Grasshopper and $d=0.85$ for TextRank). We compare these modified algorithms to common baselines as well as non-naive, novel baselines and we present the resulting ROUGE-1 and ROUGE-2 recall scores. Subsequently, we implement two novel algorithms as extensions of {\sc GrassHopper} and {\sc TextRank}, each termed {\sc ModifiedGrassHopper} and {\sc ModifiedTextRank}. The modified algorithms intuitively attempt to ``maximize'' diversity across the summary. We present the resulting ROUGE scores. We expect that with further optimizations, this unsupervised approach to extractive text summarization will prove useful in practice.
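The TextRank step used above is essentially PageRank run over a sentence-similarity graph. Here is a minimal numpy sketch with the paper's damping factor $d = 0.85$; the similarity matrix is an invented toy example, not derived from the DUC data:

```python
import numpy as np

def textrank(sim, d=0.85, iters=50):
    """Power-iterate PageRank scores over a sentence-similarity matrix."""
    n = sim.shape[0]
    W = sim / sim.sum(axis=1, keepdims=True)   # row-normalise edge weights
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (W.T @ r)        # damped PageRank update
    return r

# toy symmetric similarity between 3 sentences (self-similarity zeroed);
# sentence 0 overlaps heavily with both others
sim = np.array([[0.0, 0.9, 0.8],
                [0.9, 0.0, 0.1],
                [0.8, 0.1, 0.0]])
scores = textrank(sim)
print(scores.argmax())   # sentence 0 ranks as most central
```

An extractive summary then simply keeps the top-scoring sentences; the "modified" variants described above would additionally penalise sentences too similar to ones already selected.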
[2963] vixra:1712.0456 [pdf]
Emergence of the Laws of Nature in the Developing Entangled Universe
Evolution of our universe with continuous production of matter by the vacuum is described. The analysis is based on the quantum modification of general relativity (Qmoger), supported by the cosmic data without fitting. Various types of matter are selected by the vacuum in accordance with the stability of the developing universe. All laws of nature seem to be emergent and approximate, including the conservation of energy. The (3+1)-dimensional space-time and gravity were selected first. Then came the quantum condensate of entangled gravitons (dark matter). Photons and other ordinary matter were selected much later, during the formation of galaxies, when the background condensate became gravitationally unstable. The effect of radiation on the global dynamics is described in terms of conservation of the enthalpy density. The mass of the neutrino, as the first massive fermionic particle created from the background condensate, is estimated, in accord with the experimental bound. The electric dipole moment of the neutrino is also estimated. The oscillations of neutrinos are explained in terms of interaction with the background condensate. The phenomenon of quantum entanglement of ordinary matter was, apparently, inherited from the background condensate. The phenomena of subjective experiences are also explained in terms of interaction of the action potentials of neurons with the background dipolar condensate, which opens a new window into the dark sector of matter. The Qmoger theory goes beyond the Standard Model and the Quantum Field Theory and can be combined with their achievements. Key words: quantum modification of general relativity, emergence of the laws of nature, isenthalpic universe, quantum condensate of gravitons, oscillating neutrinos, subjective experiences and dark sector of matter.
[2964] vixra:1712.0446 [pdf]
A New Divergence Measure for Basic Probability Assignment and Its Applications in Extremely Uncertain Environments
Information fusion under extremely uncertain environments is an important issue in pattern classification and decision-making problems. Dempster-Shafer evidence theory (D-S theory) is more and more extensively applied to information fusion for its advantage in dealing with uncertain information. However, results opposite to common sense are often obtained when combining different evidences using Dempster's combination rules. How to measure the difference between different evidences is still an open issue. In this paper, a new divergence is proposed based on the Kullback-Leibler divergence in order to measure the difference between different basic probability assignments (BPAs). Numerical examples are used to illustrate the computational process of the proposed divergence. Then the similarity for different BPAs is also defined based on the proposed divergence. The basic knowledge about pattern recognition is introduced, and a new classification algorithm is presented using the proposed divergence and similarity under extremely uncertain environments, illustrated by a small example of robot sensing. The method put forward is motivated by the desperate need to develop intelligent systems, such as sensor-based data-fusion manipulators, which need to work in complicated, extremely uncertain environments. Sensory data satisfy two conditions: 1) they are fragmentary, and 2) they are collected from multiple levels of resolution.
[2965] vixra:1712.0444 [pdf]
Environmental Impact Assessment Using D-Vikor Approach
Environmental impact assessment (EIA) is an open and important issue that depends on social, ecological, economic, and other factors. Due to human judgment, a variety of uncertainties are brought into the EIA process. With regard to uncertainty, many existing methods seem powerless to represent and deal with it effectively. A new theory called D numbers, because of its advantage in handling uncertain information, is widely used for uncertainty modeling and decision making. The VIKOR method has unique advantages in dealing with multiple criteria decision making (MCDM) problems, especially when the criteria are non-commensurable and even conflicting; it can also obtain the compromised optimal solution. In order to solve EIA problems more effectively, this paper proposes a D-VIKOR approach, which extends the VIKOR method by D numbers theory. In the proposed approach, assessment information for environmental factors is expressed and modeled by D numbers, and a new combination rule for multiple D numbers is defined. Subjective weights and objective weights are considered in the VIKOR process for more reasonable ranking results. A numerical example is conducted to analyze and demonstrate the practicality and effectiveness of the proposed D-VIKOR approach.
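The classical VIKOR ranking step that this approach extends can be sketched compactly. The following numpy implementation assumes benefit criteria and uses an invented score matrix and weights; it computes the group utility S, individual regret R, and the compromise index Q (lower Q is better):

```python
import numpy as np

def vikor(F, w, v=0.5):
    """Rank alternatives with classical VIKOR (benefit criteria assumed).

    F: alternatives x criteria score matrix; w: criterion weights;
    v: weight of the 'majority rule' strategy. Returns the Q index.
    """
    f_best = F.max(axis=0)
    f_worst = F.min(axis=0)
    norm = (f_best - F) / (f_best - f_worst)   # per-criterion regret in [0,1]
    S = (w * norm).sum(axis=1)                 # group utility
    R = (w * norm).max(axis=1)                 # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q

# three alternatives scored on three benefit criteria (toy numbers)
F = np.array([[7.0, 8.0, 9.0],
              [9.0, 7.0, 6.0],
              [8.0, 9.0, 7.0]])
w = np.array([0.4, 0.3, 0.3])
Q = vikor(F, w)
print(Q.argmin())   # index of the best-ranked alternative
```

The D-VIKOR approach described above replaces the crisp scores in F with assessments modeled by D numbers before this ranking step.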
[2966] vixra:1712.0441 [pdf]
Natural Squarefree Numbers: Statistical Properties.
In this paper we calculate, for various sets X (subsets of the natural numbers), the probability that an element a of X is squarefree. Furthermore, we calculate the probability that c is squarefree, where c = a + b, a is an element of the set X, and b is an element of the set Y.
[2967] vixra:1712.0432 [pdf]
DS-Vikor: a New Methodology for Supplier Selection
How to select the optimal supplier is an open and important issue in supply chain management (SCM), which requires assessing and sorting the potential suppliers and can be considered a multi-criteria decision-making (MCDM) problem. Experts' assessments play a very important role in the process of supplier selection, while the subjective judgment of human beings can introduce unpredictable uncertainty. However, existing methods seem powerless to represent and deal with this uncertainty effectively. Dempster-Shafer evidence theory (D-S theory) is widely used for uncertainty modeling, decision making and conflict management due to its advantage in handling uncertain information. The VIKOR method has a great advantage in handling MCDM problems with non-commensurable and even conflicting criteria, and in obtaining the compromised optimal solution. In this paper, a DS-VIKOR method is proposed for the supplier selection problem, which extends the VIKOR method by D-S theory. In this method, the basic probability assignment (BPA) is used to denote the decision makers' assessments of suppliers, a Deng entropy weight-based method is defined and applied to determine the weights of the multiple criteria, and the VIKOR method is used to obtain the final ranking results. A real-life illustrative example is conducted to analyze and demonstrate the practicality and effectiveness of the proposed DS-VIKOR method.
[2968] vixra:1712.0431 [pdf]
The Effect of Geometry on Quantum Mechanics
The author attempts to unify General Relativity with Quantum Mechanics. This involves the use of the Cartesian Axes as the building blocks of reality. A certain set is employed and this permeates the content of the article. The author can be contacted at jpeel01@hotmail.com.
[2969] vixra:1712.0430 [pdf]
Proposed Civilization Scale
Instead of measuring a civilization's level of technological advancement from the point of view of energy consumption, I believe it is more appropriate to base it on the civilization's capability of commanding the cycle and recycle of energy and matter, or on its knowledge and technology of mastering energy and matter: <br><ol><li>Type 0 Civilization: Parasitic; </li><li>Type I: Energy Mastery; </li><li>Type II: Energy and Matter Mastery. </li></ol>
[2970] vixra:1712.0429 [pdf]
Stochastic Functions of Blueshift vs. Redshift
Viewing the random motions of objects, an observer might think there is a 50-50 chance that an object will move toward or away. That might be intuitive; however, it is far from the truth. This study derives the probability functions of the Doppler blueshift and redshift effect in signal detection. In fact, Doppler redshift detection strongly dominates in spatial, surface, and linear observation. Under the conditions of no quality loss of radiation over distance and an observer with perfect vision, the probability of detecting redshift is more than 92% in three-dimensional observation, 87% in surface observation, and 75% in linear observation. In cosmic observation, only 7.81% of the observers in the universe will detect blueshift of radiation from any object, on average. The remaining 92.19% of the observers in the universe will detect redshift. This is universal for all observers, aliens or Earthlings, at all locations of the universe.
[2971] vixra:1712.0409 [pdf]
A Brief Experiment of Space
We have learned to manipulate matter and energy since our first existence; however, never space. Luckily, vacuum shares some properties of space, and we can create and shape vacuum to some extent. The experiment observes the interactions of vacuum, mass, and energy to derive a logical understanding of space.
[2972] vixra:1712.0407 [pdf]
Ellipsoid Method for Linear Programming Made Simple
In this paper, the ellipsoid method for linear programming is derived using only minimal knowledge of algebra and matrices. Unfortunately, most authors first describe the algorithm and only later prove its correctness, which requires a good knowledge of linear algebra.
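The algorithm itself fits in a few lines. Below is a minimal numpy sketch of the central-cut ellipsoid method applied to a feasibility problem $Ax \le b$ (the standard formulation the paper derives); the starting radius, iteration cap, and the example constraints are illustrative choices, not taken from the paper:

```python
import numpy as np

def ellipsoid_feasible(A, b, R=10.0, iters=200, tol=1e-7):
    """Find x with A @ x <= b via the central-cut ellipsoid method.

    Start from a ball of radius R around the origin. At each step, pick a
    violated constraint, cut the ellipsoid with it, and replace the
    ellipsoid by the smallest one containing the remaining half.
    """
    n = A.shape[1]
    x = np.zeros(n)
    P = (R ** 2) * np.eye(n)       # ellipsoid {y: (y-x)^T P^{-1} (y-x) <= 1}
    for _ in range(iters):
        viol = A @ x - b
        i = np.argmax(viol)
        if viol[i] <= tol:
            return x               # centre satisfies all constraints
        a = A[i]
        g = a / np.sqrt(a @ P @ a) # normalised cutting direction
        Pg = P @ g
        x = x - Pg / (n + 1)       # move the centre away from the cut
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return None

# toy instance: x + y <= 1, x >= 0.2, y >= 0.2 (origin is infeasible)
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, -0.2, -0.2])
x = ellipsoid_feasible(A, b)
```

Each iteration shrinks the ellipsoid's volume by a constant factor, which is exactly the convergence argument the paper develops from elementary matrix algebra.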
[2973] vixra:1712.0400 [pdf]
Adaptively Evidential Weighted Classifier Combination
Classifier combination plays an important role in classification. Due to its efficiency in handling and fusing uncertain information, Dempster-Shafer evidence theory is widely used in multi-classifier fusion. In this paper, a method of adaptively evidential weighted classifier combination is presented. In our proposed method, the output of each classifier is modelled by a basic probability assignment (BPA). Then, the weights are determined adaptively for each individual classifier according to the uncertainty degree of the corresponding BPA. The uncertainty degree is measured by a belief entropy, named Deng entropy. The discounting-and-combination scheme in D-S theory is used to calculate the weighted BPAs and combine them into the final BPA for classification. The effectiveness of the proposed weighted combination method is illustrated by numerical experimental results.
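The Deng entropy used above as the uncertainty measure has a short closed form: each focal element $A$ with mass $m(A)$ contributes $-m(A)\log_2\!\big(m(A)/(2^{|A|}-1)\big)$. A minimal sketch (the BPA values are invented):

```python
import math

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment.

    `bpa` maps focal elements (frozensets) to masses. Larger focal
    elements contribute more uncertainty via the 2**|A| - 1 factor.
    """
    e = 0.0
    for A, m in bpa.items():
        if m > 0:
            e -= m * math.log2(m / (2 ** len(A) - 1))
    return e

# a BPA on the frame {a, b}: mass split between a singleton and the full set
bpa = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
e = deng_entropy(bpa)
print(e)
```

When every focal element is a singleton (a Bayesian BPA), the $2^{|A|}-1$ factor is 1 and Deng entropy reduces to ordinary Shannon entropy, which is the sanity check usually quoted for this measure. In the combination method above, a classifier whose BPA has higher Deng entropy receives a lower discounting weight.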
[2974] vixra:1712.0393 [pdf]
Formulas and Spreadsheets for Simple, Composite, and Complex Rotations of Vectors and Bivectors in Geometric (Clifford) Algebra
We show how to express the representations of single, composite, and ``rotated'' rotations in GA terms that allow rotations to be calculated conveniently via spreadsheets. Worked examples include rotation of a single vector by a bivector angle; rotation of a vector about an axis; composite rotation of a vector; rotation of a bivector; and the ``rotation of a rotation''. Spreadsheets for doing the calculations are made available via live links.
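One of the worked examples above, rotation of a vector about an axis, can be sketched in ordinary vector algebra via Rodrigues' formula, which gives the same result as the GA rotor sandwich $R\,v\,\tilde{R}$; the function and example values here are illustrative, not taken from the paper's spreadsheets:

```python
import numpy as np

def rotate_about_axis(v, axis, theta):
    """Rotate vector v by angle theta about a unit axis (Rodrigues' formula).

    Decomposes v into components along and orthogonal to the axis and
    rotates only the orthogonal part.
    """
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    v = np.asarray(v, dtype=float)
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))

v = np.array([1.0, 0.0, 0.0])
r = rotate_about_axis(v, [0.0, 0.0, 1.0], np.pi / 2)
print(r)   # the x-axis rotated 90 degrees about z lands on the y-axis
```

Composite rotations correspond to applying the function twice (or, in GA terms, multiplying the two rotors before sandwiching).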
[2975] vixra:1712.0386 [pdf]
On Multi-Criteria Pythagorean Fuzzy Decision-Making
Pythagorean fuzzy set (PFS), initially extended by Yager from intuitionistic fuzzy set (IFS), can model uncertain information under more general conditions in the process of multi-criteria decision making (MCDM). The fuzzy decision analysis of this paper is mainly based on two expressions in the Pythagorean fuzzy environment, namely, the Pythagorean fuzzy number (PFN) and the interval-valued Pythagorean fuzzy number (IVPFN). We initiate a novel axiomatic definition of the Pythagorean fuzzy distance measure, covering PFNs and IVPFNs, and put forward the corresponding theorems and prove them. Based on the defined distance measures, closeness indexes are developed for PFNs and IVPFNs, inspired by the idea of the technique for order preference by similarity to ideal solution (TOPSIS) approach. After these basic definitions have been established, a hierarchical decision approach is presented to handle MCDM problems under the Pythagorean fuzzy environment. To address hierarchical decision issues, a closeness index-based score function is defined to calculate the score of each permutation for the optimal alternative. To determine criterion weights, a new method based on the proposed similarity measure and aggregation operator of PFNs and IVPFNs is presented, deriving the weights from the Pythagorean fuzzy information in the decision matrix rather than having them provided in advance by decision makers, which can effectively reduce human subjectivity. An experimental case is conducted to demonstrate the applicability and flexibility of the proposed decision approach. Finally, extension forms of the Pythagorean fuzzy decision approach for heterogeneous information are briefly introduced as further applications in other uncertain information processing fields.
[2976] vixra:1712.0384 [pdf]
Discovering and Proving that Pi is Irrational, 2nd Edition
Ivan Niven's proof of the irrationality of pi is often cited because it is brief and uses only calculus. However, it is not well motivated. Using the concept that a quadratic function with the same symmetric properties as sine should, when multiplied by sine and integrated, obey upper and lower bounds for the integral, a contradiction is generated for rational candidate values of pi. This simplifying concept yields a more motivated proof of the irrationality of pi and pi squared.
[2977] vixra:1712.0356 [pdf]
On the Nature of W Boson
We study leptonic and semileptonic weak decays working in the framework of Hagen-Hurley equations. It is argued that the Hagen-Hurley equations describe decay of the intermediate gauge boson W. It follows that we get a universal picture with the W boson being a virtual, off-shell, particle with (partially undefined) spin in the $0\oplus 1$ space.
[2978] vixra:1712.0353 [pdf]
Goldbach Conjecture Proof
In this paper, we give a proof of the Goldbach conjecture by introducing a lemma which implies the conjecture. First we prove that the lemma implies the Goldbach conjecture, and then we prove the validity of the lemma using the Chebotarev-Artin theorem, Mertens' formula, and the inclusion-exclusion principle of de Moivre.
[2979] vixra:1712.0352 [pdf]
Legendre Conjecture
In this paper, we give a proof of the Legendre conjecture using the Chebotarev-Artin theorem, Dirichlet's theorem on arithmetic progressions, and the inclusion-exclusion principle of de Moivre.
[2980] vixra:1712.0348 [pdf]
An Extended Version of the Natario Warp Drive Equation Based in the Original $3+1$ $adm$ Formalism Which Encompasses Accelerations and Variable Velocities
Warp Drives are solutions of the Einstein Field Equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, the major drawback concerning warp drives is the huge amount of negative energy density needed to sustain the warp bubble. In order to perform an interstellar space travel to a "nearby" star $20$ light-years away in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is $1.000.000.000.000.000.000.000.000$ times bigger in magnitude than the mass of the planet Earth. With the correct form of the shape function the Natario warp drive can overcome this obstacle, at least in theory. Other drawbacks that affect the warp drive geometry are the collisions with hazardous interstellar matter that will unavoidably occur when a ship travels at superluminal speeds, and the problem of the Horizons (causally disconnected portions of spacetime). The geometrical features of the Natario warp drive are the ones required to overcome these obstacles, also at least in theory. However, both the Alcubierre and the Natario warp drive spacetimes always have a constant speed in the internal structure of their equations, which means that these warp drives always travel with a constant speed. But a real warp drive must accelerate from zero to a superluminal speed of about $200$ times faster than light at the beginning of an interstellar journey and decelerate back to zero at the end of the journey. In this work we expand the Natario vector, introducing the coordinate time as a new Canonical Basis for the Hodge star, and we introduce an extended Natario warp drive equation which encompasses accelerations.
[2981] vixra:1712.0244 [pdf]
A Review of Multiple Try MCMC Algorithms for Signal Processing
Many applications in signal processing require the estimation of some parameters of interest given a set of observed data. More specifically, Bayesian inference needs the computation of a-posteriori estimators which are often expressed as complicated multi-dimensional integrals. Unfortunately, analytical expressions for these estimators cannot be found in most real-world applications, and Monte Carlo methods are the only feasible approach. A very powerful class of Monte Carlo techniques is formed by the Markov Chain Monte Carlo (MCMC) algorithms. They generate a Markov chain such that its stationary distribution coincides with the target posterior density. In this work, we perform a thorough review of MCMC methods that use multiple candidates to select the next state of the chain at each iteration. With respect to the classical Metropolis-Hastings method, the use of multiple-try techniques fosters the exploration of the sample space. We present different Multiple Try Metropolis schemes, Ensemble MCMC methods, Particle Metropolis-Hastings algorithms and the Delayed Rejection Metropolis technique. We highlight limitations, benefits, connections and differences among the different methods, and compare them by numerical simulations.
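The basic Multiple Try Metropolis step reviewed here is easy to sketch for a symmetric proposal: draw several candidates, select one with probability proportional to the target density, then draw reference points around the selected candidate to form the generalised acceptance ratio. A minimal numpy sketch targeting a standard normal (the target, proposal scale and chain length are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    """Unnormalised standard-normal target density."""
    return np.exp(-0.5 * x * x)

def mtm_step(x, k=5, scale=1.0):
    """One Multiple Try Metropolis step with a symmetric Gaussian proposal."""
    ys = x + scale * rng.normal(size=k)      # k candidate moves from x
    wy = target(ys)
    y = rng.choice(ys, p=wy / wy.sum())      # select one by its weight
    xs = y + scale * rng.normal(size=k - 1)  # reference points around y
    wx = target(np.append(xs, x))            # include the current state
    alpha = min(1.0, wy.sum() / wx.sum())    # generalised MH ratio
    return y if rng.random() < alpha else x

x, chain = 0.0, []
for _ in range(5000):
    x = mtm_step(x)
    chain.append(x)
chain = np.array(chain)
print(chain.mean(), chain.std())   # close to 0 and 1 for a standard normal
```

With k = 1 this reduces to ordinary Metropolis-Hastings; larger k trades extra target evaluations per step for better exploration, which is exactly the trade-off the review examines.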
[2982] vixra:1712.0149 [pdf]
On the General Solution to the Bratu and Generalized Bratu Equations
This work shows that the Bratu equation belongs to a general class of Liénard-type equations for which the general solution may be exactly and explicitly computed within the framework of the generalized Sundman transformation. In this perspective the exact solution of the Bratu nonlinear two-point boundary value problem as well as of some well-known Bratu-type problems have been determined.
[2983] vixra:1712.0134 [pdf]
Emergence of the Laws of Nature in the Developing Universe 1a
Evolution of our universe with continuous production of matter by the vacuum is described. The analysis is based on the quantum modification of general relativity (Qmoger), supported by the cosmic data without fitting. Various types of matter are selected by the vacuum in accordance with the stability of the developing universe. All laws of nature are emergent and approximate, including the conservation of energy. The (3+1)-dimensional space-time and gravity were selected first. Then came the quantum condensate of gravitons (dark matter). Photons and other ordinary matter were selected much later, during the formation of galaxies, when the background condensate becomes gravitationally unstable. The effect of radiation on the global dynamics is described in terms of conservation of the enthalpy density. The mass of the neutrino (as the first massive fermionic particle) is estimated, in accord with the experimental bound. The electric dipole moment of the neutrino is also estimated. The oscillations of neutrinos are explained in terms of interaction with the background condensate. The phenomena of subjective experiences are also explained in terms of the interaction of the action potentials of neurons with the background dipolar condensate, which opens a new window into the dark sector of matter. The Qmoger theory goes beyond the Standard Model and Quantum Field Theory, but can be combined with their achievements. Key words: quantum modification of general relativity, emergence of the laws of nature, isenthalpic universe, oscillating neutrinos, subjective experiences and dark sector of matter.
[2984] vixra:1712.0130 [pdf]
Standing Wave and Reference Frame
A standing wave consists of two identical waves moving in opposite directions. In a moving reference frame, this standing wave becomes a traveling wave. Based on the principle of superposition, the wavelengths of these two opposing waves are shown to be identical in any inertial reference frame. According to the Doppler effect, a moving wave detector will detect two different frequencies for these two waves. Consequently, the wave detector will detect different speeds for the two waves, due to the same wavelength but different frequencies. The calculation of the speed of the microwave in the standing wave is demonstrated with a typical household microwave oven, which emits microwaves with a frequency of about 2.45 GHz and a wavelength of about 12.2 cm.
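The wave-speed arithmetic behind the microwave-oven example is just v = fλ; with the figures quoted in the abstract:

```python
# Speed of a microwave-oven wave from its frequency and wavelength (v = f * lambda).
# The numbers are the typical household figures quoted in the abstract.
frequency = 2.45e9   # Hz
wavelength = 0.122   # m (12.2 cm)

speed = frequency * wavelength
```

This comes out at roughly 2.99e8 m/s, within a fraction of a percent of the speed of light.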
[2985] vixra:1712.0081 [pdf]
The Mathematics of Eating
Using simple arithmetic on the prime numbers, a model of nutritional need and food content is created. This model allows the dynamics of natural languages to be specified as a game-theoretic construct. The goal of this model is to evolve human culture.
[2986] vixra:1712.0079 [pdf]
Einstein was Likely Right: “God Does Not Play Dice” Does Randomness Break Down at the Planck Scale?
This note briefly outlines how numbers that appear to be totally and independently random switch to become deterministic at the Planck scale. In other words, God does not play dice.
[2987] vixra:1712.0076 [pdf]
The Algebra of Non Local Neutrino Gravity
The elementary algebra underlying the non local neutrino hypothesis is used to explain discrepancies in the value of Hubble's constant in terms of other physical constants, in the approximation of the R=ct semiclassical cosmology with Mach's principle. It is assumed that quantum gravity breaks the equivalence principle, in conjunction with a quantum Higgs mechanism, and this new view of the electroweak vacuum indicates an absence of dark matter and dark energy. We introduce mass quantisation in the Brannen-Koide scheme in this context.
[2988] vixra:1712.0075 [pdf]
Emergence of the Laws of Nature in the Developing Universe 1
Evolution of our universe with continuous production of matter by the vacuum is described. The analysis is based on the quantum modification of general relativity (Qmoger), supported by the cosmic data without fitting. Various types of matter are selected by the vacuum in accordance with the stability of the developing universe. All laws of nature are emergent and approximate, including the conservation of energy. The (3+1)-dimensional space-time and gravity were selected first. Then came the quantum condensate of gravitons (dark matter). Photons and other ordinary matter were selected much later, during the formation of galaxies, when the background condensate becomes gravitationally unstable. The effect of radiation on the global dynamics is described in terms of conservation of the enthalpy density. The mass of the neutrino (as the first massive fermionic particle) is estimated, in accord with the experimental bound. The electric dipole moment of the neutrino is also estimated. The oscillations of neutrinos are explained in terms of interaction with the background condensate. The phenomena of subjective experiences are also explained in terms of the interaction of the action potentials of neurons with the background dipolar condensate, which opens a new window into the dark sector of matter. The Qmoger theory goes beyond the Standard Model and Quantum Field Theory, but can be combined with their achievements. Key words: quantum modification of general relativity, emergence of the laws of nature, isenthalpic universe, oscillating neutrinos, subjective experiences and dark sector of matter.
[2989] vixra:1712.0021 [pdf]
The Colour-Independent Charges of Quarks of Magnitude 2/3 and -1/3 in the Standard Model are Basically Wrong!
The Standard Model, in spite of being the most successful model of particle physics, has a well-known weakness: the electric charges of quarks, of magnitude 2/3 and -1/3, are not properly quantized in it and are actually fixed arbitrarily. In this paper we show that, under a proper in-depth study, these charges are in reality found to be basically "wrong". This is attributed to their lack of proper colour dependence. The proper and correct quark charges are shown here to be intrinsically colour dependent, and these in turn give a consistent and correct description of baryons in QCD. Hence these colour-dependent charges are the correct ones to use in particle physics.
[2990] vixra:1712.0006 [pdf]
A Normal Hyperbolic, Global Extension of the Kerr Metric
A restriction of the Boyer-Lindquist model of the Kerr metric is considered that is globally hyperbolic on the space manifold ${\bf R}^3$ with the origin excluded, the quotient $m/r$ being unrestricted. The model becomes in this process a generalization of Brillouin's model, describing the gravitational field of a rotating massive point particle.
[2991] vixra:1712.0004 [pdf]
Emergence of the Laws of Nature in the Developing Universe
Evolution of our universe with continuous production of matter by the vacuum is described. The analysis is based on the quantum modification of general relativity (Qmoger), supported by the cosmic data without fitting. Various types of matter are selected by the vacuum in accordance with the stability of the developing universe. All laws of nature are emergent and approximate, including the conservation of energy. The (3+1)-dimensional space-time and gravity were selected first. Then came the quantum condensate of gravitons (dark matter). Photons and other ordinary matter were selected much later, during the formation of galaxies, when the background condensate becomes gravitationally unstable. The effect of radiation on the global dynamics is described in terms of conservation of the enthalpy density. The mass and electric dipole moment of the neutrino (as the first massive fermionic particle) are estimated. The oscillations of neutrinos are explained in terms of interaction with the background condensate. The phenomena of subjective experiences are also explained in terms of the interaction of the action potentials of neurons with the background dipolar condensate.
[2992] vixra:1711.0459 [pdf]
Soliton Solutions to the Dynamics of Space Filling Curves
I sketch roughly how an Alcubierre drive could work, by examining exotic geometries consisting of soliton solutions to the dynamics of space filling curves. I also briefly consider how remote sensing might work for obstacle avoidance concerning a craft travelling through space via a 'wormhole wave'. Finally I look into how one might adopt remote sensing ideas to build intrasolar wormhole networks, as well as extrasolar jump gates.
[2993] vixra:1711.0450 [pdf]
The Fifth Force
A fifth force, the Cohesion Force, becomes necessary when building a toy universe based on a fully deterministic, Euclidean, 4-torus cellular automaton using a constructive approach. Each cell contains one integer, and together the integers form bubble-like patterns propagating at speeds at least equal to that of light, interacting and being re-emitted constantly. The collective behavior of these integers looks like the patterns of classical and quantum physics. The four forces of nature plus the new one are unified. In particular, the graviton fits nicely into this framework. Although essentially nonlocal, the model preserves the no-signalling principle. This flexible model makes three predictions: i) if an electron is left completely alone (if that is even possible), it still continues to emit low-frequency fundamental photons; ii) neutrinos are Majorana fermions; and, last but not least, iii) gravity is not quantized. A first pseudocode version implementing these ideas is contained in the appendix.
[2994] vixra:1711.0436 [pdf]
Cosmological Quantum Gravity
The mirror neutrino hypothesis for quantum gravity has been used to resolve known cosmological problems using quantised inertia. In this note we clarify the theoretical principles, describe the true electroweak vacuum, and explain where conventional holographic scenarios go wrong.
[2995] vixra:1711.0428 [pdf]
Where Standard Physics Runs into Infinite Challenges, Atomism Predicts Exact Limits
Where standard physics runs into infinite challenges, atomism predicts exact limits. We summarize the mathematical results briefly in a table in this note and also revisit the energy-momentum relationship based on this view.
[2996] vixra:1711.0420 [pdf]
Move the Tip to the Right: A Language-Based Computer Animation System in Box2D
Not only do “robots need language”, but sometimes a human operator does too. To interact with complex domains, the operator needs a vocabulary to initialize the robot, make it walk, and have it grasp objects. Natural language interfaces can support semi-autonomous and fully autonomous systems on both sides. Instead of using neural networks, the language grounding problem can be solved with object-oriented programming. In the following paper a simulation of micro-manipulation under a microscope is given, which is controlled with a C++ script. The small vocabulary consists of init, pregrasp, grasp and place.
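The object-oriented grounding idea can be sketched compactly; the paper's implementation is a C++ script, so the following Python class (with a hypothetical `MicroManipulator` name and a toy action log) is only an illustration of mapping each vocabulary word to one method:

```python
class MicroManipulator:
    """Hypothetical grounding of the four-word vocabulary as methods.

    Each command word maps directly to one object-oriented action,
    showing how a small command language can be grounded without
    neural networks."""

    def __init__(self):
        self.log = []

    def init(self):
        self.log.append("init")

    def pregrasp(self):
        self.log.append("pregrasp")

    def grasp(self):
        self.log.append("grasp")

    def place(self):
        self.log.append("place")

    def run(self, script):
        # Dispatch each word in the script to the method of the same name.
        for word in script.split():
            getattr(self, word)()

robot = MicroManipulator()
robot.run("init pregrasp grasp place")
```

Here language grounding is literally method dispatch: the parser's job reduces to splitting the command string.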
[2997] vixra:1711.0408 [pdf]
A Motion Paradox from Einstein’s Relativity of Simultaneity
We describe a new and potentially important paradox related to Einstein's theories of special relativity and relativity of simultaneity. We fully agree with all of the mathematical derivations in Einstein's special relativity theory and his result of relativity of simultaneity when using Einstein-Poincaré synchronized clocks. The paradox introduced here shows that Einstein's special relativity leads to a motion paradox, where a train moving relative to the ground (and the ground moving relative to the train) must stand still and be moving at the same time. We will see that one reference frame will claim that the train is moving, while the other reference frame must claim that the train is standing still in the time window "between" two distant events. This goes against common sense and logic. However, looking back at the history of relativity theory, even time dilation went against common sense, and a series of academics attempted to refute it. Still, based on this new paradox we have to ask ourselves whether the world really can be that bizarre, or whether Einstein's special relativity could be incomplete in some way. We do not answer the second question in this paper; we simply present the new paradox.
[2998] vixra:1711.0404 [pdf]
Conformal, Parameter-Free Riemannian Gravity
A simple, parameter-free conformal approach to gravity is presented, based on the square of the Ricci scalar R alone. It is shown that when R is a non-zero constant, the associated action is fully conformal and leads to the usual equations of motion associated with the standard Einstein-Hilbert action. To demonstrate the approach, we derive the Schwarzschild metric, the field of a charged particle and the Tolman-Oppenheimer-Volkoff equation.
[2999] vixra:1711.0388 [pdf]
Empirically Derived Fermion Higgs Yukawa Couplings and Pole Masses
Empirically derived formulas are proposed for calculating the Higgs field Yukawa couplings and pole masses of the twelve known fundamental fermions with experimental inputs $m_e$, $m_\mu$ and the Fermi constant $G_F^0$.
[3000] vixra:1711.0360 [pdf]
Ontology Engineering for Robotics
Ontologies are a powerful alternative to reinforcement learning. They store knowledge in a domain-specific language. The best practice for implementing ontologies is a distributed version control system which is filled manually by programmers.
[3001] vixra:1711.0354 [pdf]
Special Relativity and Coordinate Transformation
Two inertial reference frames moving at identical velocity can be separated if one of them is put under acceleration for a duration. The coordinates of both inertial reference frames are related by this acceleration and its duration. An immediate property of such a coordinate transformation is the conservation of distance and length across reference frames. Therefore, the concept of length contraction from Special Relativity is impossible in reality and in physics.
[3002] vixra:1711.0352 [pdf]
The Truth about the Energy-Momentum Tensor and Pseudotensor
The operational and canonical definitions of an energy-momentum tensor (EMT) are considered, as well as the tensor and non-tensor conservation laws. It is shown that the canonical EMT contradicts the experiments and the operational definition, that the Belinfante-Rosenfeld procedure worsens the situation, and that the non-tensor “conservation laws” are meaningless. A definition of the 4-momentum of a system demands a translator, since integration of vectors is meaningless. The mass of a fluid sphere is calculated. It is shown that, according to the standard energy-momentum pseudotensor, the mass-energy of a gravitational field is positive. This contradicts the idea of a negative gravitational energy and discredits the pseudotensor. What is more, integral 4-pseudovectors are meaningless in general, since reference frames for their components are not determined even for coordinates which are Minkowskian at infinity.
[3003] vixra:1711.0336 [pdf]
Lepton Mass Phases and the CKM Matrix
The Brannen neutrino mass triplet extends Koide's rule for the charged leptons, which was used to correctly predict the tau mass. Assuming that Koide's rule is exact, we consider the fundamental 2/9 lepton phase, noting connections to the CKM matrix and arithmetic information. An estimate for the fine structure constant is included.
[3004] vixra:1711.0317 [pdf]
Moduli Spaces of Special Lagrangian Submanifolds with Singularities
We try to give a formulation of Strominger-Yau-Zaslow conjecture on mirror symmetry by studying the singularities of special Lagrangian submanifolds of 3-dimensional Calabi-Yau manifolds. In this paper we’ll give the description of the boundary of the moduli space of special Lagrangian manifolds. We do this by introducing special Lagrangian cones in the more general Kähler manifolds. Then we can focus on the almost Calabi-Yau manifolds. We consider the behaviour of the Lagrangian manifolds near the conical singular points to classify them according to the way they are approximated from the asymptotic cone. Then we analyze their deformations in Calabi-Yau manifolds.
[3005] vixra:1711.0313 [pdf]
James Clerk Maxwell: Forbidden Knowledge
Maxwell's equations are the postulates of the development presented here. From them, fundamental physical properties are deduced as theorems, such as the discrete distribution of energy, the theoretical value of the fine-structure constant, the g-factor of the electron, the unit of elementary mass based on the laws of elementary particles (not a conventional unit), and several other properties. The document is written in an accessible, didactic style to aid comprehension.
[3006] vixra:1711.0294 [pdf]
Gravitation Question on Perihelion Advance
We formulate and discuss, as much as we can, an inevitable mathematical and philosophical question: why do the General Theory of Relativity and the Relative-Velocity Dependence of Gravitational Interaction lead to the same well-known formula for the anomalous Perihelion Advance ?
[3007] vixra:1711.0292 [pdf]
Strengths and Potential of the SP Theory of Intelligence in General, Human-Like Artificial Intelligence
This paper first defines "general, human-like artificial intelligence" (GHLAI) in terms of five principles. In the light of the definition, the paper summarises the strengths and potential of the "SP theory of intelligence" and its realisation in the "computer model", outlined in an appendix, in three main areas: the versatility of the SP system in aspects of intelligence; its versatility in the representation of diverse kinds of knowledge; and its potential for the seamless integration of diverse aspects of intelligence and diverse kinds of knowledge, in any combination. There are reasons to believe that a mature version of the SP system may attain full GHLAI in diverse aspects of intelligence and in the representation of diverse kinds of knowledge.
[3008] vixra:1711.0291 [pdf]
The Irrationality of Trigonometric and Hyperbolic Functions
This article simplifies Niven's proofs that cos and cosh are irrational when evaluated at non-zero rational numbers. Only derivatives of polynomials are used. This is the third article in a series of articles that explores a unified approach to classic irrationality and transcendence proofs.
[3009] vixra:1711.0275 [pdf]
Elementary Forces and Particles Correlating with Ordinary Matter, Dark Matter, and Dark Energy
This paper discusses and applies a basis for modeling elementary forces and particles. We show that models based on isotropic quantum harmonic oscillators describe aspects of the four traditional fundamental physics forces and point to some known and possible elementary particles. We summarize results from models based on solutions to equations featuring isotropic pairs of isotropic quantum harmonic oscillators. Results include predictions for new elementary particles and possible descriptions of dark matter and dark energy.
[3010] vixra:1711.0266 [pdf]
Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network
We revisit fuzzy neural network with a cornerstone notion of generalized hamming distance, which provides a novel and theoretically justified framework to re-interpret many useful neural network techniques in terms of fuzzy logic. In particular, we conjecture and empirically illustrate that, the celebrated batch normalization (BN) technique actually adapts the “normalized” bias such that it approximates the rightful bias induced by the generalized hamming distance. Once the due bias is enforced analytically, neither the optimization of bias terms nor the sophisticated batch normalization is needed. Also in the light of generalized hamming distance, the popular rectified linear units (ReLU) can be treated as setting a minimal hamming distance threshold between network inputs and weights. This thresholding scheme, on the one hand, can be improved by introducing double-thresholding on both positive and negative extremes of neuron outputs. On the other hand, ReLUs turn out to be non-essential and can be removed from networks trained for simple tasks like MNIST classification. The proposed generalized hamming network (GHN) as such not only lends itself to rigorous analysis and interpretation within the fuzzy logic theory but also demonstrates fast learning speed, well-controlled behaviour and state-of-the-art performances on a variety of learning tasks.
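The thresholding view of ReLU mentioned above can be made concrete; the double-thresholding variant below is one possible reading of the abstract's "both positive and negative extremes" (the threshold value and the zero-out-the-middle interpretation are assumptions, not the paper's definition):

```python
def relu(x):
    """Standard rectifier: a one-sided threshold at zero."""
    return max(0.0, x)

def double_threshold(x, t=0.5):
    """Hypothetical double-thresholding: keep a neuron output only when
    it is extreme enough on either side, zeroing out the middle band.
    The threshold t is an illustrative value, not taken from the paper."""
    return x if abs(x) >= t else 0.0
```

Under this reading, ReLU keeps one extreme and the double-threshold variant keeps both, discarding the low-confidence middle.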
[3011] vixra:1711.0265 [pdf]
Revisit Fuzzy Neural Network: Bridging the Gap Between Fuzzy Logic and Deep Learning
This article aims to establish a concrete and fundamental connection between two important fields in artificial intelligence, i.e. deep learning and fuzzy logic. On the one hand, we hope this article will pave the way for fuzzy logic researchers to develop convincing applications and tackle challenging problems which are of interest to the machine learning community too. On the other hand, deep learning could benefit from the comparative research by re-examining many trial-and-error heuristics in the lens of fuzzy logic, and consequently, distilling the essential ingredients with rigorous foundations. Based on the new findings reported in [41] and this article, we believe the time is ripe to revisit the fuzzy neural network as a crucial bridge between two schools of AI research, i.e. symbolic versus connectionist [101], and eventually open the black box of artificial neural networks.
[3012] vixra:1711.0258 [pdf]
The Squared Case of Pi^n is Irrational Gives Pi is Transcendental
This is a companion article to The Irrationality and Transcendence of e Connected. In it the irrationality of pi^n is proven using the same lemmas used for e^n. The transcendence of pi is then given as a simple extension of this irrationality result.
[3013] vixra:1711.0244 [pdf]
Bell's Theorem Refuted for Stem Students
Here begins a precautionary tale from a creative life in STEM. Bringing an elementary knowledge of vectors to Bell (1964)—en route to refuting Bell’s inequality and his theorem—we aim to help STEM students study one of the strangest double-errors in the history of science. To that end we question du Sautoy’s (2016) claim that Bell’s theorem is as mathematically robust as they come.
[3014] vixra:1711.0241 [pdf]
Dysfunctional Methods in Robotics
When carrying out robotics projects, a great deal can go wrong. This does not just mean cold solder joints or crashing software; far more fundamental things play a role. To avoid mistakes, one must first take a closer look at the failure patterns: the development methods by which one should on no account build a robot, and the ways the software should preferably not work.
[3015] vixra:1711.0236 [pdf]
Some Elementary Identities in q-Series and the Generating Functions of the (m, k)-Capsids and (m, r1, r2)-Capsids
We demonstrate some elementary identities for q-series involving the q-Pochhammer symbol, as well as an identity involving the generating functions of the (m,k)-capsids and (m, r1, r2)-capsids.
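The finite q-Pochhammer symbol that underlies such identities is easy to compute directly; here is a minimal definition together with a numerical check of the elementary splitting identity (a; q)_{m+n} = (a; q)_m (aq^m; q)_n, with arbitrary sample values:

```python
def qpochhammer(a, q, n):
    """Finite q-Pochhammer symbol (a; q)_n = prod_{k=0}^{n-1} (1 - a*q**k)."""
    result = 1.0
    for k in range(n):
        result *= 1.0 - a * q**k
    return result

# Elementary splitting identity: (a; q)_{m+n} = (a; q)_m * (a*q**m; q)_n.
a, q, m, n = 0.3, 0.5, 4, 3
lhs = qpochhammer(a, q, m + n)
rhs = qpochhammer(a, q, m) * qpochhammer(a * q**m, q, n)
```

Numerical spot checks like this are a convenient sanity test when manipulating q-series identities symbolically.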
[3016] vixra:1711.0235 [pdf]
Not Merely Memorization in Deep Networks: Universal Fitting and Specific Generalization
We reinterpret the training of convolutional neural networks (CNNs) using a universal classification theorem (UCT). This theorem implies that any disjoint datasets can be classified by two or more layers of CNNs based on ReLUs and the rigid transformation switch units (RTSUs) we propose here, which explains why CNNs can memorize both noise and real data. Subsequently, we present a fresh hypothesis: a CNN is insensitive to certain variants of its training inputs, where each variant is related to an original training input by a generating function. This hypothesis means CNNs can generalize well even for randomly generated training data, and it illuminates the paradox of why CNNs fit both real and noise data yet fail drastically when making predictions on noise data. Our findings suggest that the study of the generalization theory of CNNs should turn to generating functions instead of traditional statistical machine learning theory, which is based on the assumption that training and testing data are independent and identically distributed (IID); this IID assumption plainly contradicts the experiments in this paper. We verify these ideas experimentally.
[3017] vixra:1711.0204 [pdf]
Singlet, Spin and Clock
A simple explanation is given for the persistence of the singlet state over large distances in an EPRBA experiment: clocks ticking at synchronized frequencies that can be carried by the particles.
[3018] vixra:1711.0202 [pdf]
Sums of Arctangents and Sums of Products of Arctangents
We present new infinite arctangent sums and infinite sums of products of arctangents. Many previously known evaluations appear as special cases of the general results derived in this paper.
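A classical telescoping arctangent sum illustrates the genre (a standard example in the same spirit, not necessarily one of the paper's new results): since arctan(1/(n²+n+1)) = arctan(1/n) − arctan(1/(n+1)), the series telescopes to arctan(1) = π/4, which a partial sum confirms numerically:

```python
import math

# Partial sum of sum_{n>=1} arctan(1/(n^2+n+1)); the series telescopes
# to arctan(1) = pi/4, with tail ~ arctan(1/(N+1)).
partial = sum(math.atan(1.0 / (n * n + n + 1)) for n in range(1, 100000))
```

With 10^5 terms the partial sum agrees with π/4 to about five decimal places.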
[3019] vixra:1711.0185 [pdf]
An Improved Dempster-Shafer Algorithm Using a Partial Conflict Measurement
Decision making based on multiple pieces of evidence is an important capability for computers and robots. To combine multiple pieces of evidence, the mathematical theory of evidence has been developed; its most vital part is Dempster's rule of combination, which is used for combining multiple pieces of evidence.
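The standard Dempster rule of combination that the abstract refers to can be sketched as follows (the mass values in the example are illustrative, not from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozensets (focal elements) to belief mass.
    Conflicting mass (empty intersections) is discarded and the rest is
    renormalized by 1 - K, where K is the total conflict."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc
    norm = 1.0 - conflict
    return {a: v / norm for a, v in combined.items()}

# Two sources reporting on hypotheses 'a' and 'b' (illustrative numbers).
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}
m = dempster_combine(m1, m2)
```

The renormalization by 1 − K is exactly the step whose behavior under high conflict motivates partial-conflict refinements such as the one this paper proposes.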
[3020] vixra:1711.0184 [pdf]
An Internet of Things Approach for Extracting Featured Data Using AIS Database: An Application Based on the Viewpoint of Connected Ships
The Automatic Identification System (AIS), as a major source of navigational data, is widely used in connected-ship applications for implementing maritime situation awareness and evaluating maritime transportation.
[3021] vixra:1711.0175 [pdf]
A View on Intuitionistic Smarandache Topological Semigroup Structure Spaces
The purpose of this paper is to introduce the concepts of intuitionistic Smarandache topological semigroups, intuitionistic Smarandache topological semigroup structure spaces, intuitionistic SG exteriors and intuitionistic SG semi exteriors.
[3022] vixra:1711.0174 [pdf]
Contributions to Differential Geometry of Spacelike Curves in Lorentzian Plane L2
In this work, the differential equation characterizing the position vector of a spacelike curve is first obtained in the Lorentzian plane L2. Then the special curves mentioned above are studied in the Lorentzian plane L2. Finally, some characterizations of these special curves are given in L2.
[3023] vixra:1711.0154 [pdf]
Another Note on Paraconsistent Neutrosophic Sets
In an earlier paper, we proved that Smarandache’s definition of neutrosophic paraconsistent topology is neither a generalization of Çoker’s intuitionistic fuzzy topology nor a generalization of Smarandache’s neutrosophic topology. Recently, Salama and Alblowi proposed a new definition of neutrosophic topology, that generalizes Çoker’s intuitionistic fuzzy topology. Here, we study this new definition and its relation to Smarandache’s paraconsistent neutrosophic sets.
[3024] vixra:1711.0143 [pdf]
Einstein's Constant under the Planck Microscope
As Haug has shown in a long series of papers, Newton's gravitational constant is almost surely a composite constant. What is this exotic animal whose units are meters cubed divided by kilograms and seconds squared? It is difficult to get any intuition from the gravitational constant alone, beyond understanding that it is a constant we can measure empirically and use to make Newton's formula match actual observations. Newton's gravitational constant is a composite constant built from more fundamental constants. This means we can also rewrite Einstein's constant in a more intuitive form, where it becomes independent of big G.
[3025] vixra:1711.0137 [pdf]
The Stellar Black Hole
A black hole model is proposed in which a neutron star is surrounded by a neutral gas of electrons and positrons. The gas is in a completely degenerate quantum state and does not radiate. The pressure and density in the gas are found to be much less than those in the neutron star. The radius of the black hole is far greater than the Schwarzschild radius.
[3026] vixra:1711.0136 [pdf]
On Dark Energy and the Relativistic Bohm-Poisson Equation
Recently, solutions to the \emph{nonlinear} Bohm-Poisson (BP) equation were found with relevant cosmological applications. We were able to obtain an exact analytical expression for the observed vacuum energy density, and explain the origins of its \emph{repulsive} gravitational nature. In this work we considerably improve our prior arguments in support of our findings, and provide further results which include two possible extensions of the Bohm-Poisson equation to the full relativistic regime; explain how Bohm's quantum potential in four dimensions could be re-interpreted as a gravitational potential in five dimensions, which explains why the presence of dark energy/dark matter in our $4D$ spacetime can only be inferred \emph{indirectly}, but not detected/observed directly. We conclude with some comments about the Dirac-Eddington large-number coincidences.
[3027] vixra:1711.0130 [pdf]
The Irrationality and Transcendence of e Connected
Using just the fact that the derivative of a sum is the sum of the derivatives, together with simple undergraduate mathematics, a proof is given showing e^n is irrational. The proof of e's transcendence is a simple generalization of this result.
[3028] vixra:1711.0127 [pdf]
A Proof of Polignac's Conjecture
In this paper we give a proof of Polignac's conjecture using the Chebotarev-Artin theorem, Mertens' formula and the Poincaré sieve. To do so, we prove that, for X an arbitrarily large real number and n an even integer, there are many primes p between sqrt(X) and X such that p + n is also prime.
[3029] vixra:1711.0119 [pdf]
Non Local Mirror Neutrinos with R=ct
Beginning with the observationally successful FLRW constraint of Riofrio, a classical alternative to $\Lambda$CDM, we introduce a mass gap correction to cosmology, incorporating a few aspects of the $\Lambda$CDM model, wherein both neutrinos and non local mirror neutrinos play a key role. Non local neutrinos are antineutrinos. The equivalence principle is mildly broken using McCulloch's approach to quantum inertia and a new holographic principle. There is no dark matter and no dark energy, and mirror neutrino states are informationally connected to the CMB. Consequences include (i) a present day temperature of $2.73$K arising as a mirror rest mass, (ii) an estimate of the observable mass of the universe and (iii) an effective sterile mass of $1.29$eV, permitted by current oscillation results.
[3030] vixra:1711.0117 [pdf]
The Free Coding Manifesto
Working-class coders and vulnerability researchers the world over are subject to prior restraints on their speech imposed by the institutions they work for. The restraints are in the form of non-disclosure agreements (NDAs) and employment contracts that are typically enforced using a process called pre-publication censorship. Industrial pre-publication censorship chills contributions of source code to society and chills the publication of vulnerabilities found in code that has been given to society. This has a harmful effect on the depth, breadth, and information assurance of society's foundation of code. Restrictions on the human spirit call for new liberties to be defined and upheld. This manifesto defines Freedom A and Freedom B as follows. Freedom A: you have the freedom to write code and give it to society under conditions of your choosing. Freedom B: you have the freedom to write and publish, under conditions of your choosing, a critique or documentation of code that has been given to society. Free coding is defined as Freedom A and Freedom B. Obstructions to free coding are identified and measures are presented to uphold free coding. The measures presented include a proposed corporate policy that balances institutional equities with personal liberty, a software license term tailored after Freedom B, and an experimental free coding software license. Utilitarian, philosophical, and theological foundations of free coding are given. Obstructions to free coding form a subset of the problem of knowledge hoarding. I present my interpretations of the Book of Genesis, namely, the Original Command and the Original Paradox. I believe that these interpretations reveal the root of the problem of knowledge hoarding.
[3031] vixra:1711.0113 [pdf]
Tautology Problem and Two Dimensional Formulas
Finding whether a boolean formula is a tautology in feasible time is an important problem of computer science. Many algorithms have been developed to solve this problem, but none of them runs in polynomial time. Our aim is to develop an algorithm that achieves this in polynomial time. In this article, we convert boolean formulas to certain graph forms in polynomial time. These are called two-dimensional formulas and are similar to AND-OR graphs, except that their arcs are bidirectional. These graphs are then investigated for properties that can be used to differentiate tautological formulas from non-tautological ones.
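The baseline such work competes with, the naive truth-table check, can be sketched in a few lines; it is exponential in the number of variables, which is exactly the cost the proposed graph approach aims to avoid (this sketch is illustrative only, not the paper's method):

```python
from itertools import product

def is_tautology(formula, variables):
    """Naive truth-table check: exponential in the number of variables.

    `formula` is a function mapping a dict of variable assignments to bool.
    """
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# (p -> q) or (q -> p) is a classic tautology.
f = lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"])
print(is_tautology(f, ["p", "q"]))  # True

# p or q is not a tautology (it fails when both are False).
g = lambda v: v["p"] or v["q"]
print(is_tautology(g, ["p", "q"]))  # False
```

Any polynomial-time replacement must agree with this check on every formula while avoiding the `2^n` enumeration.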
[3032] vixra:1711.0106 [pdf]
A Novel Single-Valued Neutrosophic Set Similarity Measure and Its Application in Multicriteria Decision-Making
The single-valued neutrosophic set is a subclass of neutrosophic set, and has been proposed in recent years. An important application for single-valued neutrosophic sets is to solve multicriteria decision-making problems.
[3033] vixra:1711.0099 [pdf]
Certain Competition Graphs Based on Intuitionistic Neutrosophic Environment
The concept of intuitionistic neutrosophic sets provides an additional possibility to represent imprecise, uncertain, inconsistent and incomplete information, which exists in real situations. This research article first presents the notion of intuitionistic neutrosophic competition graphs.
[3034] vixra:1711.0097 [pdf]
Content-based Image Retrieval with Color and Texture Features in Neutrosophic Domain
In this paper, a new content-based image retrieval (CBIR) scheme is proposed in neutrosophic (NS) domain. For this task, RGB images are first transformed to three subsets in NS domain and then segmented.
[3035] vixra:1711.0093 [pdf]
Domestic Violence Against Women Using Induced Linked Bidirectional Associative Memories (ILBAM)
Domestic violence is abusive behaviour perpetrated by an intimate partner or other members of the family. Traditionally, women were expected to marry early and settle down in life.
[3036] vixra:1711.0090 [pdf]
Evaluating Investment Risks of Metallic Mines Using an Extended TOPSIS Method with Linguistic Neutrosophic Numbers
The investment in and development of mineral resources play an important role in the national economy. A good mining project investment can improve economic efficiency and increase social wealth.
[3037] vixra:1711.0089 [pdf]
Exponential Operations and an Aggregation Method for Single-Valued Neutrosophic Numbers in Decision Making
As an extension of an intuitionistic fuzzy set, a single-valued neutrosophic set is described independently by the membership functions of its truth, indeterminacy, and falsity, which is a subclass of a neutrosophic set (NS).
[3038] vixra:1711.0088 [pdf]
Expression and Analysis of Joint Roughness Coefficient Using Neutrosophic Number Functions
In nature, the mechanical properties of geological bodies are very complex, and its various mechanical parameters are vague, incomplete, imprecise, and indeterminate. In these cases, we cannot always compute or provide exact/crisp values for the joint roughness coefficient (JRC), which is a quite crucial parameter for determining the shear strength in rock mechanics, but we need to approximate them.
[3039] vixra:1711.0087 [pdf]
Expressions of Rock Joint Roughness Coefficient Using Neutrosophic Interval Statistical Numbers
In nature, the mechanical properties of geological bodies are very complex, and their various mechanical parameters are vague, incomplete, imprecise, and indeterminate. However, we cannot express them by the crisp values in classical probability and statistics.
[3040] vixra:1711.0083 [pdf]
Green Supplier Evaluation and Selection Using Cloud Model Theory and the QUALIFLEX Method
Nowadays, companies have to improve their practices in the management of green supply chain with increased awareness of environmental issues worldwide. Selecting the optimum green supplier is crucial for green supply chain management, which is a challenging multi-criteria decision making (MCDM) problem.
[3041] vixra:1711.0082 [pdf]
Group Decision Making Method Based on Single Valued Neutrosophic Choquet Integral Operator
Single valued neutrosophic set (SVNS) depicts not only the incomplete information, but also the indeterminate information and inconsistent information which exist commonly in belief systems.
[3042] vixra:1711.0077 [pdf]
Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems
As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world.
[3043] vixra:1711.0072 [pdf]
Linguistic Neutrosophic Cubic Numbers and Their Multiple Attribute Decision-Making Method
To describe both certain linguistic neutrosophic information and uncertain linguistic neutrosophic information simultaneously in the real world, this paper originally proposes the concept of a linguistic neutrosophic cubic number (LNCN), including an internal LNCN and external LNCN.
[3044] vixra:1711.0070 [pdf]
Merger and Acquisition Target Selection Based on Interval Neutrosophic Multigranulation Rough Sets over Two Universes
As a significant business activity, merger and acquisition (M&A) generally means transactions in which the ownership of companies, other business organizations or their operating units are transferred or combined.
[3045] vixra:1711.0069 [pdf]
Minimal Solution of Fuzzy Neutrosophic Soft Matrix
The aim of this article is to study the concept of unique solvability of max-min fuzzy neutrosophic soft matrix equation and strong regularity of fuzzy neutrosophic soft matrices over Fuzzy Neutrosophic Soft Algebra (FNSA).
[3046] vixra:1711.0060 [pdf]
Multiple Attribute Group Decision-Making Method Based on Linguistic Neutrosophic Numbers
Existing intuitionistic linguistic variables can describe the linguistic information of both the truth/membership and falsity/non-membership degrees, but they cannot represent indeterminate and inconsistent linguistic information.
[3047] vixra:1711.0049 [pdf]
Neutrosophic Subalgebras of BCK/BCI-Algebras Based on Neutrosophic Points
The concept of neutrosophic set (NS) developed by Smarandache is a more general platform which extends the concepts of the classic set, fuzzy set, intuitionistic fuzzy set and interval-valued intuitionistic fuzzy set.
[3048] vixra:1711.0048 [pdf]
Neutrosophic Subalgebras of Several Types in BCK/BCI-Algebras
The concept of neutrosophic set (NS) developed by Smarandache is a more general platform which extends the concepts of the classic set, fuzzy set, intuitionistic fuzzy set and interval-valued intuitionistic fuzzy set.
[3049] vixra:1711.0036 [pdf]
Representation of Graph Structure Based on I-V Neutrosophic Sets
In this research article, we apply the concept of interval-valued neutrosophic sets to graph structures. We present the concept of interval-valued neutrosophic graph structures. We describe certain operations on interval-valued neutrosophic graph structures and elaborate them with appropriate examples. Further, we investigate some relevant properties of these operators. Moreover, we propose some open problems on interval-valued neutrosophic line graph structures.
[3050] vixra:1711.0031 [pdf]
Selecting Project Delivery Systems Based on Simplified Neutrosophic Linguistic Preference Relations
Project delivery system selection is an essential part of project management. In the process of choosing an appropriate transaction model, many factors should be considered, such as the capability and experience of proprietors, project implementation risk, and so on. How can one make comprehensive evaluations of these factors and select the optimal delivery system?
[3051] vixra:1711.0029 [pdf]
On the Variation of Vacuum Permittivity in General Relativity
Vacuum permittivity is the scalar in Maxwell’s equations that determines the speed of light and the strength of electrical fields. In 1907 Albert Einstein found that vacuum permittivity changes with gravity. Møller, Landau & Lifshitz, and Sumner found that it changes with spacetime curvature. The wavelengths of both photons and atomic emissions decrease with vacuum permittivity but the photons emitted from atoms decrease more than photons themselves do. This changes the interpretation of gravitational redshift to one where blueshifted photons are compared to greater blueshifted atomic emissions. For Schwarzschild geometry, redshifts can be calculated most accurately using the metric. The equation derived using time dilation and the one derived here are identical. Their weak field approximation is the same as the redshift derivation using special relativity with Doppler shift. For Friedmann geometry, redshifts calculated by comparing blueshifted atomic emissions to those of photons flips the meaning of Hubble redshift. The universe is accelerating in collapse, a result confirmed by supernova redshift observations. The equations and logic used for Friedmann and Schwarzschild redshifts are identical. Only their vacuum permittivities are different.
[3052] vixra:1711.0027 [pdf]
Shortest Path Problem by Minimal Spanning Tree Algorithm Using Bipolar Neutrosophic Numbers
Normally, the Minimal Spanning Tree algorithm is used to find the shortest route in a network. Neutrosophic set theory is used when incompleteness, inconsistency, and indeterminacy occur. In this paper, Bipolar Neutrosophic Numbers are used in the Minimal Spanning Tree algorithm to find the shortest path on a network whose distances are inconsistent and indeterminate, and the method is illustrated with a numerical example.
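The kind of procedure described can be sketched with Kruskal's MST algorithm over edges weighted by bipolar neutrosophic numbers `(T+, I+, F+, T-, I-, F-)`, ranked through a score function. The score function below is one common choice from the neutrosophic decision-making literature and is an assumption, not necessarily the paper's:

```python
# Kruskal's MST where each edge weight is a bipolar neutrosophic number
# (T+, I+, F+, T-, I-, F-), ordered by a scalar score (lower = shorter edge).

def score(bnn):
    tp, ip, fp, tn, inn, fn = bnn
    # A commonly used score function for bipolar neutrosophic numbers
    # (an assumption here, not necessarily the one used in the paper).
    return (tp + 1 - ip + 1 - fp + 1 + tn - inn - fn) / 6.0

def kruskal_mst(n, edges):
    """edges: list of (u, v, bnn).  Returns MST edges, chosen by ascending score."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for u, v, w in sorted(edges, key=lambda e: score(e[2])):
        ru, rv = find(u), find(v)
        if ru != rv:                 # keep the edge only if it joins two components
            parent[ru] = rv
            mst.append((u, v))
    return mst

edges = [
    (0, 1, (0.9, 0.1, 0.1, -0.1, -0.8, -0.9)),
    (1, 2, (0.4, 0.5, 0.6, -0.5, -0.4, -0.3)),
    (0, 2, (0.2, 0.7, 0.8, -0.7, -0.2, -0.1)),
]
print(kruskal_mst(3, edges))  # [(0, 2), (1, 2)]
```

Whether lower or higher score should be preferred depends on whether the number encodes a cost (distance) or a preference; here it is read as a distance.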
[3053] vixra:1711.0019 [pdf]
Some Single-Valued Neutrosophic Dombi Weighted Aggregation Operators for Multiple Attribute Decision-Making
The Dombi operations of T-norm and T-conorm introduced by Dombi can have the advantage of good flexibility with the operational parameter. In existing studies, however, the Dombi operations have so far not been used for neutrosophic sets.
[3054] vixra:1711.0018 [pdf]
Subtraction and Division Operations of Simplified Neutrosophic Sets
A simplified neutrosophic set is characterized by a truth-membership function, an indeterminacy-membership function, and a falsity-membership function, which is a subclass of the neutrosophic set and contains the concepts of an interval neutrosophic set and a single valued neutrosophic set.
[3055] vixra:1711.0011 [pdf]
Vector Similarity Measures Between Refined Simplified Neutrosophic Sets and Their Multiple Attribute Decision-Making Method
A refined single-valued/interval neutrosophic set is very suitable for the expression and application of decision-making problems with both attributes and sub-attributes since it is described by its refined truth, indeterminacy, and falsity degrees.
[3056] vixra:1711.0010 [pdf]
Vector Similarity Measures for Simplified Neutrosophic Hesitant Fuzzy Set and Their Applications
In this article we present three similarity measures between simplified neutrosophic hesitant fuzzy sets, which contain the concepts of single-valued neutrosophic hesitant fuzzy sets and interval-valued neutrosophic hesitant fuzzy sets, based on extensions of the Jaccard, Dice, and Cosine similarity measures in vector space.
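For orientation, the classical vector forms of the three measures the paper extends can be applied to single-valued neutrosophic elements written as (truth, indeterminacy, falsity) vectors; this is a simplified sketch of the building blocks, while the paper's hesitant versions generalize them:

```python
import math

def jaccard(a, b):
    # Jaccard similarity of two vectors: a.b / (|a|^2 + |b|^2 - a.b)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

def dice(a, b):
    # Dice similarity: 2 a.b / (|a|^2 + |b|^2)
    dot = sum(x * y for x, y in zip(a, b))
    return 2 * dot / (sum(x * x for x in a) + sum(y * y for y in b))

def cosine(a, b):
    # Cosine similarity: a.b / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Two single-valued neutrosophic elements as (truth, indeterminacy, falsity).
a = (0.7, 0.1, 0.2)
b = (0.6, 0.2, 0.2)
print(jaccard(a, b), dice(a, b), cosine(a, b))
```

All three equal 1 exactly when the two elements coincide, which is the sanity check any extended measure is expected to preserve.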
[3057] vixra:1711.0002 [pdf]
The Relativistic Motion of RLC Circuit
The special-relativistic formula for the photon Doppler effect is applied to the frequency of an RLC circuit moving with velocity v relative to the rest system. The relativistic transformation of the RLC circuit components is derived for the case where the components are all in series with the voltage source (R−L−C−v). The generalization to more complex situations is not considered. It is not excluded that this article is a preamble to future investigations in electronic physics at institutions such as Bell Laboratories, NASA, CERN and so on.
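A minimal numeric sketch of the setting, assuming only the rest-frame series-RLC resonance formula and the transverse Doppler (time-dilation) factor; the paper's component transformations are not reproduced here:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def rlc_resonance(L, Cap):
    """Rest-frame resonance frequency of a series RLC circuit, Hz: 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * Cap))

def time_dilated(f0, v):
    """Frequency of the moving oscillator as seen from the rest system
    (pure time dilation, i.e. the transverse Doppler factor)."""
    return f0 * math.sqrt(1.0 - (v / C) ** 2)

f0 = rlc_resonance(1e-3, 1e-9)        # 1 mH, 1 nF -> about 159 kHz
print(f0, time_dilated(f0, 0.6 * C))  # at v = 0.6c the observed frequency is 0.8 f0
```

At v = 0.6c the Lorentz factor gives exactly a 0.8 reduction, which is the zeroth-order check any transformation of R, L, and C must reproduce.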
[3058] vixra:1710.0350 [pdf]
Background Evolution from Modified Actions of Gravity from General Disformal Transformation
We study a theory of modified gravity, namely disformal gravity, which is constructed from a disformal metric. We derive the action for disformal gravity from a general purely disformal transformation. We then find the equations of motion for the background universe and find that disformal gravity does not provide the kinetic driving of cosmic acceleration usually expected from Galileon-like theories.
[3059] vixra:1710.0345 [pdf]
Higher Derivative Relativistic Quantum Gravity
Relativistic quantum gravity with an action including terms quadratic in the curvature tensor is analyzed. We derive new expressions for the corresponding Lagrangian and the graviton propagator. We argue that the considered model is a good candidate for the fundamental quantum theory of gravitation.
[3060] vixra:1710.0340 [pdf]
The Incompatibility of the Planck Acceleration and Modern Physics? And a New Acceleration Limit for Anything with Mass after Acceleration
The Planck second is likely the shortest relevant time interval. If the Planck acceleration lasts for one Planck second, one will reach the speed of light. Yet, according to Einstein, no particle with rest mass can travel at the speed of light, as this would require an infinite amount of energy. Modern physics is incompatible with the Planck acceleration in many ways. However, in atomism we see that the Planck acceleration happens for the building blocks of the Planck mass and that the Planck mass is dissolved into energy within one Planck second. Further, the Planck mass stands absolutely still as observed from any reference frame. Atomism is fully consistent with the Planck acceleration. The relativistic Planck acceleration is unique among accelerations because it can only happen from absolute rest; it is therefore the same as the Planck acceleration. In other words, atomism predicts breaks in Lorentz invariance at the Planck scale, something several quantum gravity theories address as well. Atomism seems to solve a series of challenges in modern physics, and this paper is one of a series pointing this out.
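The arithmetic behind the opening claim can be checked directly: the Planck acceleration is c divided by the Planck time, so sustaining it for one Planck second reaches c (the CODATA value of the Planck time is assumed below):

```python
import math

# Planck acceleration arithmetic from the abstract: accelerating at
# a_P = c / t_P for one Planck second reaches the speed of light.
C   = 299_792_458.0   # speed of light, m/s
T_P = 5.391247e-44    # Planck time, s (CODATA value)

a_P = C / T_P         # Planck acceleration, m/s^2 (about 5.6e51)
print(f"a_P = {a_P:.3e} m/s^2")
print(math.isclose(a_P * T_P, C))  # one Planck second at a_P gives v = c
```

The enormity of a_P (some 10^51 m/s^2) is what makes the clash with the relativistic speed limit so sharp.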
[3061] vixra:1710.0325 [pdf]
Electromagnetic Synthesis of Four Fundamental Forces from Quantized Impedance Networks of Geometric Wavefunction Interactions
Quantum Mechanics is all about wavefunctions and their interactions. If one seeks to understand Quantum Mechanics, then a deep intuitive understanding of wavefunctions and wavefunction collapse would seem essential, indispensable. That’s where it all starts, the causal origin of the quantum as manifested in the physical world. We introduce a wavefunction comprised of the geometric elements of the Pauli algebra of space - point, line, plane, and volume elements - endowed with quantized electromagnetic fields. Wavefunction interactions are described by the geometric product of geometric Clifford algebra, generating the Dirac algebra of flat Minkowski spacetime, the particle physicist’s S-matrix. Electromagnetic synthesis of four fundamental forces becomes apparent via this Geometric Wavefunction Interpretation (GWI).
[3062] vixra:1710.0324 [pdf]
New Sufficient Conditions of Signal Recovery with Tight Frames Via $l_1$-Analysis
The paper discusses the recovery of signals in the case that signals are nearly sparse with respect to a tight frame $D$ by means of the $l_1$-analysis approach. We establish several new sufficient conditions regarding the $D$-restricted isometry property to ensure stable reconstruction of signals that are approximately sparse with respect to $D$. It is shown that if the measurement matrix $\Phi$ fulfils the condition $\delta_{ts}<t/(4-t)$ for $0<t<4/3$, then signals which are approximately sparse with respect to $D$ can be stably recovered by the $l_1$-analysis method. In the case of $D=I$, the bound is sharp, see Cai and Zhang's work \cite{Cai and Zhang 2014}. When $t=1$, the present bound improves the condition $\delta_s<0.307$ from Lin et al.'s result to $\delta_s<1/3$. In addition, numerical simulations are conducted to indicate that the $l_1$-analysis method can stably reconstruct the sparse signal in terms of tight frames.
[3063] vixra:1710.0323 [pdf]
Potential Energy Deficit as an Alternative for Dark Matter?
The problem of gravitational potential energy is analyzed within a simple model, in which the 3-dimensional (3D) space is a curved hypersurface of a 4-dimensional (4D) Euclidean space. The analysis shows that the effect of gravitational potential energy deficit is present in the model. For a particular profile of hypersurface representing 3D space, the effects of aforementioned deficit are similar to effects attributed to dark matter, while not being contrary to Newton's law of gravity.
[3064] vixra:1710.0319 [pdf]
Kirchhoff’s Law of Thermal Emission and Its Consequences for Astronomy, Astrophysics and Cosmology
The key to the stars and the basis for big bang cosmology is thermal emission, not the kinetic theory of gases or Einstein's General Theory of Relativity, although the latter employs thermal emission in the form given by Kirchhoff and Planck. Kirchhoff's Law of Thermal Emission is fundamental to astronomy and much of physics, including quantum mechanics. Via Kirchhoff's Law of Thermal Emission, Max Planck introduced the quantum of action in developing his equation for blackbody spectra. From Kirchhoff's Law Planck's equation acquired universality and Planck's mystical absolute units. Without Kirchhoff's Law and universality of Planck's equation, astronomy and cosmology completely collapse. Kirchhoff's Law of Thermal Emission is certainly false, as the clinical existence of Magnetic Resonance Imaging (MRI) proves. Consequently Planck's equation is not universal, Planck's absolute units have no special character, and astronomy and cosmology lose their foundations entirely.
[3065] vixra:1710.0290 [pdf]
On Achieving Superluminal Communication
What is proposed here is a simple modification of the quantum protocol [1] for achieving instantaneous teleportation of an arbitrary quantum state from Alice to Bob even when Bob is several light years away. This modified quantum protocol is constructed by adding a step to the celebrated quantum teleportation protocol [1]: the action of a unitary operator performed by Alice on the qubits in her possession before she makes the Bell basis measurement. It is important to note that the existing quantum teleportation protocol [1] requires certain classical communication between the participants, Alice and Bob, and we are going to eliminate this classical communication through our modification. In the existing protocol [1], Alice must send the classical bits generated by her Bell basis measurement over a classical channel to Bob, who uses them to determine the exact recovery operation to perform on the qubit(s) in his possession to recreate the unknown quantum state that was with Alice and was destroyed during her Bell basis measurement. Alice cannot send these classical bits to Bob faster than light, since that is the well-known, experimentally verified universal upper limit on speed. We show that by incorporating a suitable unitary operation performed by Alice on her qubits before the Bell basis measurement, the requirement of transmitting classical bits from Alice to Bob for the creation of the unknown quantum state at his place can be completely eliminated. Our modification of the teleportation protocols [1], [2] thus clearly demonstrates the enormous advantage of remaining in the quantum regime and avoiding any classical communication for the teleportation of quantum states.
[3066] vixra:1710.0287 [pdf]
The World is Binary! When the Speed of Light is Zero from Any Reference Frame
This is a very short non-technical note pointing out a key finding from modern mathematical atomism, namely that the world is Binary, and that the Planck mass, the Planck length, and the Planck second are invariant entities. With Einstein-Poincare synchronized clocks, the speed of light (in a vacuum) is the same in every direction, it is isotropic and it is often represented with the character c. The speed of light is, per definition, exactly 299 792 458 m/s, a tremendous speed. We do not contest that this is the speed of light as measured with Einstein-Poincare synchronized clocks, but still we ask: ``Is this truly always the case?".
[3067] vixra:1710.0270 [pdf]
Novel Remarks on the Cosmological Constant, the Bohm-Poisson Equation and Asymptotic Safety in Quantum Gravity
Recently, solutions to the $nonlinear$ Bohm-Poisson equation were found [2] with cosmological applications. We were able to obtain a value for the vacuum energy density of the same order of magnitude as the extremely small observed vacuum energy density, and explained the origins of its $repulsive$ gravitational nature. In this work we show how to obtain a value for the vacuum energy density which coincides $exactly$ with the extremely small observed vacuum energy density. This construction allows also to borrow the results over the past two decades pertaining the study of the Renormalization Group (RG) within the context of Weinberg's Asymptotic Safety scenario. The RG flow behavior of $ G $ shows that $ G $ $increases$ with distance, so that the magnitude of the repulsive force exemplified by $ - G < 0 $ becomes larger, and larger, as the universe expands. This is what is observed.
[3068] vixra:1710.0265 [pdf]
Neutronium or Neutron?
In the reading Nyambuya (2015), we proposed a hypothetical state of the Hydrogen atom whose name we coined 'Neutronium'. That is to say, in the typical Hydrogen atom, the Electron is assumed to orbit the Proton, while in the Neutronium, the converse is assumed, i.e., the Proton orbits the Electron. In the present reading, we present some seductive arguments which lead us to think that this Neutronium may actually be the usual Neutron that we know. That is to say, we show that under certain assumed conditions, a free Neutronium may be unstable while a non-free Neutronium is stable in its confinement. Given that a Neutron is stable in its confinement in the nucleus and unstable when free, with a lifetime of ∼ 15 min, one wonders whether or not this Neutronium might be the Neutron if we match the lifetime of a free Neutronium to that of a free Neutron.
[3069] vixra:1710.0240 [pdf]
Pulsar Frequency and Pulsar Tilted Axis Explained as Geodetic Precession Effects
The hypothesis is presented that pulsar-time is geodetic precession rotation time, in both the causal sense and the quantitative sense (T-pulsar exactly equals T-geodetic). The causal sense implies the hypothesis that, in the outer crust of the neutron star, the curvature of the metric favors alignment of elementary particle magnetic moments along the geodetic precession. A consequence of this hypothesis is the partial decoupling of pulsar time and orbital rotation time. For a ``canonical'' neutron star, with 1.4 solar mass and a radius of 10 km, this implies that T-orbit equals approximately one fifth of T-pulsar. Pulsar time being geodetic precession time explains the extreme stability of pulsar frequencies, despite strong magnetic turbulence. It also quite naturally explains the tilted axis of the neutron star's magnetic moment relative to its orbital axis. The hypothesis is formulated within the environment of the Ehlers-Pirani-Schild Weyl Space Free Fall Grid approach as developed in two previous papers, but it should be theory independent and thus be derivable in GR-Schwarzschild as well.
[3070] vixra:1710.0238 [pdf]
Quantum Mechanics Expressed in Terms of the Approach "Emission & Regeneration" UFT.
Quantum mechanics differential equations are based on the de Broglie postulate. This paper presents the repercussions on quantum mechanics differential equations when the de Broglie wavelength is replaced by a relation between the radius and the relativistic energy of a particle. This relation results from a theoretical work about the interaction of charged particles, where the particles are modelled as focal points of rays of fundamental particles with longitudinal and transversal angular momentum. Interaction of subatomic particles is described as the interaction of the angular momenta of their fundamental particles. The relationship between the solution of the differential equation for a radial Coulomb field and the Correspondence Principle is presented. All four known forces are the result of electromagnetic interactions, so that only QED is required to describe them. The potential well of an atomic nucleus is shown with the regions that are responsible for the four types of interactions defined in quantum mechanics. Also shown is the compatibility of the gravitation model derived in the theoretical work with quantum mechanics, a model in which gravitation is the result of the reintegration of migrated electrons and positrons to their nuclei.
[3071] vixra:1710.0227 [pdf]
Self-Consistent Generation of Quantum Fermions in Theories of Gravity
I search for concepts that would allow self-consistent generation of dressed fermions in theories of gravitation. Self-consistency means here having the Compton wavelengths of the same order of magnitude for all particles and the four interactions. To build the quarks and leptons of the standard model, preons of spin 1/2 and charge 1/3 or 0 have been introduced by the author. Classification of preons, quarks and leptons is provided by the two lowest representations of the quantum group SLq(2). Three extensions of general relativity are considered for self-consistency: (a) propagating and (b) non-propagating torsion theories in Einstein-Cartan spacetime and (c) a Kerr-Newman metric based theory in general relativity (GR). For self-consistency, the case (a) is not excluded, (b) is possible and (c) has been shown to provide it, reinforcing the preon model, too. Therefore I propose that semiclassical GR with its quantum extension (c) and the preon model be considered a basis for unification of physics. The possibility remains that there are 'true' quantum gravitational phenomena at or near the Planck scale.
[3072] vixra:1710.0215 [pdf]
An Elegant Solution to the Cosmological Constant Problem based on The Bohm-Poisson Equation
After applying the recently proposed Bohm-Poisson equation [1] to the observable Universe as a $whole$, and by introducing an ultraviolet (very close to the Planck scale) and an infrared (Hubble radius) scale, one can naturally obtain a value for the vacuum energy density which coincides $exactly$ with the extremely small observed vacuum energy density, and explain the origins of its $repulsive$ gravitational nature. Because Bohm's formulation of QM is by construction non-local, it is this non-locality which casts light into the crucial ultraviolet/infrared entanglement of the Planck/Hubble scales which was required in order to obtain the observed value of the vacuum energy density.
[3073] vixra:1710.0207 [pdf]
A Conjecture On The Nature Of Time
In our previous publications we argue that finite mathematics is fundamental, classical mathematics (involving such notions as infinitely small/large, continuity etc.) is a degenerate special case of finite one, and ultimate quantum theory will be based on finite mathematics. We consider a finite quantum theory (FQT) based on a finite field or ring with a large characteristic $p$ and show that standard continuous quantum theory is a special case of FQT in the formal limit $p\to\infty$. Space and time are purely classical notions and are not present in FQT at all. In the present paper we discuss how classical equations of motions arise as a consequence of the fact that $p$ changes, i.e. $p$ is the evolution parameter.
[3074] vixra:1710.0205 [pdf]
An Approximation to the Prime Counting Function Through the Sum of Consecutive Prime Numbers
In this paper it is proved that the sum of consecutive prime numbers under the square root of a given natural number is asymptotically equivalent to the prime counting function. Another asymptotic relationship, between the sum of the first prime numbers up to the integer part of the square root of a given natural number and the prime counting function, is also proved.
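The claimed equivalence is easy to probe numerically: the sum of primes up to x grows like x^2/(2 ln x), so at x = sqrt(n) it behaves like n / ln n, the same leading order as pi(n). A quick check at n = 10^6 (illustrative only, not the paper's proof):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: list of primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [i for i in range(limit + 1) if sieve[i]]

n = 10 ** 6
pi_n = len(primes_up_to(n))              # prime counting function pi(n)
s_n = sum(primes_up_to(int(n ** 0.5)))   # sum of primes <= sqrt(n)
print(pi_n, s_n, s_n / pi_n)             # the ratio is already close to 1
```

At n = 10^6 the ratio is about 0.97, consistent with the two quantities sharing the leading asymptotic term n / ln n.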
[3075] vixra:1710.0201 [pdf]
Automatic Intelligent Translation of Videos
There are many educational videos online which are in English and therefore inaccessible to 80% of the world's population. This paper presents a process for translating a video into another language by creating its transcript and using TTS to produce synthesized fragments of speech. It introduces an algorithm which synthesizes intelligent, synchronized, and easily understandable audio by combining those fragments of speech. This algorithm is also compared to an algorithm from another research paper on the basis of performance.
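One plausible reading of the synchronization step — stretch a synthesized fragment when it overruns its source segment, pad when it underruns — can be sketched as follows. This is a hypothetical structure for illustration only, not the paper's algorithm; `schedule` and its inputs and outputs are invented names:

```python
def schedule(segments, synth_durations):
    """segments: list of (start, end) transcript times, in seconds.
    synth_durations: duration of each synthesized fragment, in seconds.
    Returns one (start, playback_rate, trailing_pad) tuple per fragment."""
    plan = []
    for (start, end), dur in zip(segments, synth_durations):
        slot = end - start
        if dur > slot:                      # fragment too long: speed it up to fit
            plan.append((start, dur / slot, 0.0))
        else:                               # fragment fits: normal rate, pad the rest
            plan.append((start, 1.0, slot - dur))
    return plan

print(schedule([(0.0, 2.0), (2.0, 5.0)], [2.5, 2.0]))
# [(0.0, 1.25, 0.0), (2.0, 1.0, 1.0)]
```

A real system would also cap the playback rate to keep the sped-up speech intelligible, which is the kind of trade-off the abstract's "easily understandable" criterion points at.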
[3076] vixra:1710.0198 [pdf]
Intuitive Geometric Significance of Pauli Matrices and Others in a Plane
The geometric significance of complex numbers is well known; for example, the imaginary unit i rotates a vector by pi/2. In this article, we try to find intuitive geometric significances of the Pauli matrices, split-complex numbers, SU(2), SO(3), and their relations, and of some other operators often used in quantum physics, including a new method leading to spinor space and the Dirac equation.
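Two of the facts the article starts from can be verified in a few lines: multiplication by i rotates a plane vector by pi/2, and the Pauli matrices satisfy sigma_x sigma_y = i sigma_z. This is a small self-contained check, not the article's construction:

```python
# Multiplying by i rotates a plane vector (viewed as a complex number) by pi/2:
z = 3 + 4j
print(1j * z)  # (-4+3j): the vector (3, 4) turned a quarter turn to (-4, 3)

# Pauli matrices as 2x2 tuples; verify sigma_x sigma_y = i sigma_z.
def mul(a, b):
    """2x2 matrix product over complex numbers."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

sx = ((0, 1), (1, 0))
sy = ((0, -1j), (1j, 0))
sz = ((1, 0), (0, -1))
i_sz = ((1j, 0), (0, -1j))

print(mul(sx, sy) == i_sz)              # True: sigma_x sigma_y = i sigma_z
print(mul(sx, sx) == ((1, 0), (0, 1)))  # True: each Pauli matrix squares to I
```

The product rule sigma_x sigma_y = i sigma_z is the algebraic seed of the SU(2)-to-SO(3) correspondence the article explores.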
[3077] vixra:1710.0165 [pdf]
Using a Syntonized Free Fall Grid of Atomic Clocks in Ehlers-Pirani-Schild Weyl Space to Derive Second Order Relativistic GNSS Redshift Terms
In GNSS, the improvement of atomic clocks will make terms of second order in Phi/c^2 in the relativistic gravitational redshift relevant three to four decades from today. Research towards a relativistic positioning system capable of handling this expected accuracy is all based upon the Schwarzschild metric as a replacement of today's GNSS Euclidean-Newtonian metric. The method employed in this paper to determine frequency shifts between atomic clocks is an intermediate Minkowski-EEP approach. This approach is based on relating two atomic clocks to one another through a background ensemble of frequency-gauged clocks, a grid. The crucial grid of this paper, the Free Fall Grid or FFG, can be related to the Ehlers-Pirani-Schild P-CP Weyl space formalism when applied to a central mass. The FFG second-order in Phi/c^2 redshifts are derived for static-to-static, static-to-satellite and satellite-to-satellite atomic oscillators and then compared to GR-Schwarzschild and PPN (Parametrized Post Newtonian) results.
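For scale, the first-order budget that today's GNSS already corrects for — the Phi/c^2 gravitational term minus the v^2/2c^2 velocity term — can be computed directly; the paper's subject is the order beyond this. The constants below are standard approximate values:

```python
# First-order relativistic clock-rate budget for a GPS-like circular orbit:
# gravitational blueshift (Delta Phi / c^2) minus velocity time dilation (v^2 / 2c^2).
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0     # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_ORBIT = 2.6561e7    # GPS semi-major axis, m

grav = GM * (1 / R_EARTH - 1 / R_ORBIT) / C ** 2   # gravitational term
vel = (GM / R_ORBIT) / (2 * C ** 2)                # velocity term, v^2 = GM/r
net_us_per_day = (grav - vel) * 86400 * 1e6
print(round(net_us_per_day, 1))  # about 38.5 microseconds per day
```

These first-order terms are at the 10^-10 fractional level; the second-order terms the paper derives are smaller by roughly another factor of Phi/c^2.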
[3078] vixra:1710.0158 [pdf]
Salvaging Newton's 313 Year Old Corpuscular Theory of Light
As is well known, Newton's corpuscular model of light can explain the Law of Reflection and Snell's Law of Refraction. Sadly and regrettably, its predictions about the speed of light in different mediums run contrary to experience. Because of this, Newton's theory of light was abandoned in favour of the wave theory. Newton's corpuscular model predicts that the speed of light is larger in higher density mediums. This prediction was shown to be wrong by Foucault's landmark experiment of 1850, which brought down Newton's theory. The major assumption of Newton's corpuscular model of light is that the corpuscles of light have an attraction with the particles of the medium. When the converse is assumed, i.e., the corpuscles of light are assumed to have not an attraction effect but a repulsion effect with the particles of the medium, one obtains the correct predictions of the speed of light in denser mediums. This assumption of Newton's corpuscles being repelled by the particles of the medium might explain why light has the maximum speed in any given medium.
[3079] vixra:1710.0147 [pdf]
How to Effect a Composite Rotation of a Vector via Geometric (Clifford) Algebra
We show how to express the representation of a composite rotation in terms that allow the rotation of a vector to be calculated conveniently via a spreadsheet that uses formulas developed previously for a single rotation. The work presented here (which includes a sample calculation) also shows how to determine the bivector angle that produces, in a single operation, the same rotation that is effected by the composite of two rotations.
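As a generic illustration of the composite-rotation idea (not the paper's spreadsheet formulas): unit quaternions realize the rotors of the even subalgebra of 3D geometric algebra, and the rotor of two successive rotations is simply the product of the individual rotors. All names below are illustrative.

```python
import math

def rotor(axis, theta):
    # Rotor for rotation by `theta` about unit axis `n`; in GA this is
    # R = cos(theta/2) - sin(theta/2) * B with B the unit bivector of the
    # rotation plane, which maps onto the quaternion (w, x, y, z) below.
    ax, ay, az = axis
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(theta / 2) / norm
    return (math.cos(theta / 2), ax * s, ay * s, az * s)

def qmul(p, q):
    # Hamilton product of two quaternions.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def rotate(v, R):
    # Sandwich product v -> R v R~, with v embedded as a pure quaternion.
    Rt = (R[0], -R[1], -R[2], -R[3])
    _, x, y, z = qmul(qmul(R, (0.0,) + tuple(v)), Rt)
    return (x, y, z)

# Composite of two rotations: applying R1 then R2 equals the single rotor R2*R1.
R1 = rotor((0, 0, 1), math.pi / 2)   # 90 degrees about z
R2 = rotor((1, 0, 0), math.pi / 2)   # 90 degrees about x
Rc = qmul(R2, R1)

v = (1.0, 0.0, 0.0)
step_by_step = rotate(rotate(v, R1), R2)   # (0, 0, 1)
one_shot = rotate(v, Rc)                   # same result in one operation
```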
[3080] vixra:1710.0145 [pdf]
Visualizing Zeta(n>1) and Proving Its Irrationality
A number system is developed to visualize the terms and partials of zeta(n>1). This number system consists of radii that generate sectors. The sectors have areas corresponding to all rational numbers and can be added via tail-to-head vector addition. Dots on the circles give an unambiguous cross-reference to decimal systems in all bases. In the proof section of this paper we show, first, that all partials require decimal bases greater than the last denominator used in the partial, and then that this can be used to make a sequence of nested intervals with rational endpoints. By Cantor's Nested Interval theorem this gives the convergence point of the zeta series and disallows rational values, thus proving the irrationality of zeta(n>1).
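Independently of the paper's sector construction, the basic facts about partials can be checked numerically with exact rational arithmetic: every partial of zeta(2) is an exact rational whose denominator involves the large primes used, and the partials approach pi^2/6. A minimal sketch:

```python
from fractions import Fraction
import math

def zeta_partial(n, terms):
    # Exact rational partial sum: sum_{k=1}^{terms} 1/k^n.
    return sum((Fraction(1, k ** n) for k in range(1, terms + 1)), Fraction(0))

p = zeta_partial(2, 200)
# The partial is an exact rational; the prime 199 <= 200 survives in its
# denominator, and the float value approaches zeta(2) = pi^2/6.
gap = abs(float(p) - math.pi ** 2 / 6)
```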
[3081] vixra:1710.0135 [pdf]
Quest for the Ultimate Automaton
A fully deterministic, Euclidean, 4-torus cellular automaton is presented axiomatically using a constructive approach. Each cell contains one integer, and the integers form bubble-like patterns propagating at speeds at least equal to that of light, interacting and being re-emitted constantly. The collective behavior of these integers looks like patterns of classical and quantum physics. In this toy universe, the four forces of nature are unified; in particular, the graviton fits nicely in this framework. Although essentially nonlocal, the model preserves the no-signalling principle. This flexible model predicts three results: i) if an electron is left completely alone (if that is even possible), it still continues to emit low-frequency fundamental photons; ii) neutrinos are Majorana fermions; and, last but not least, iii) gravity is not quantized. Pseudocode implementing these ideas is contained in the appendix. This is the first, raw version of this document; I expect to make corrections in future releases.
[3082] vixra:1710.0122 [pdf]
On the Validity of Quantum Physics Below the Planck Length
The widely held expectation that quantum physics breaks down below the Planck length ($10^{-33}$ cm) is brought into question. A possible experiment is suggested that might test its validity at a sub-Planckian length scale.
[3083] vixra:1710.0121 [pdf]
Quantum Equations in Empty Space Using Mutual Energy and Self-Energy Principle
For the photon we have previously obtained the result that its waves obey the mutual energy principle and the self-energy principle. In this article we extend those results from the photon to other quanta. The mutual energy principle and the self-energy principle corresponding to the Schrödinger equation are introduced. The result is that when an electron, for example, travels in empty space from point A to point B, there are four different waves: the retarded wave, which starts from point A and spreads toward an infinitely large sphere; the advanced wave, which starts from point B and spreads toward an infinitely large sphere; and the two return waves corresponding to these. There are five different flows corresponding to these waves: the self-energy flow of the retarded wave, the self-energy flow of the advanced wave, the two return flows corresponding to the two return waves, and the mutual energy flow of the retarded and advanced waves. It is found that the mutual energy flow is the energy flow, or the charge-intensity flow, or the electric current, of the electron. Hence the travel of the electron in empty space is a complicated process and does not obey only one Schrödinger equation. This result can also be extended to the Dirac equations.
[3084] vixra:1710.0091 [pdf]
Relative−Velocity Dependence: a Property of Gravitational Interaction
Herein we propound the property, present its empirical law (a completion of Newton's law of gravitation by two Relative-Velocity Dependent (RVD) terms) and thereby solve some historic problems, such as: the Solar cycle, its apparent connection with Jupiter's revolution, and why the two periods do not quite coincide; the roughly 2.9-4.5×10^19 joules/yr from Earth's rotation slowdown (secular retardation) "missing" in tidal effects and attributed to some "unknown mechanism"; and the "unknown" nature (and magnitude) of the driving/propelling force in tectonic-plate drift. Moreover, the RVD completion of Newton's gravity law predicts that tectonic plates drift, globally, to the west, not randomly, causing earthquakes and volcanic eruptions to occur most probably at the equinoxes (around March and September). The well-known formula of perihelion advance is also derived. Several experiments are proposed, some feasible now.
[3085] vixra:1710.0083 [pdf]
Non-Standard General Numerical Methods for the Direct Solution of Differential Equations not Cleared in Canonical Forms
In this work I develop numerical algorithms that can be applied directly to differential equations of the general form f(t, x, x') = 0, without the need to solve for x' symbolically. My methods are hybrid algorithms between standard methods for solving differential equations and methods for solving algebraic equations, with which the variable x' is cleared numerically. The application of these methods ranges from ordinary differential equations of order one to the more general case of systems of m equations of order n. These algorithms are applied to the solution of different physical-mathematical equations. Finally, the corresponding numerical analysis of existence, uniqueness, stability, consistency and convergence is made, mainly for the simplest case of a single ordinary differential equation of the first order.
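A minimal sketch of the hybrid idea described above, not the author's actual algorithms: at each step the algebraic equation f(t, x, x') = 0 is solved numerically for x' (here by bisection), and the result feeds a standard forward-Euler march. The test equation f(t, x, x') = x' + x = 0 is a hypothetical example with exact solution e^{-t}.

```python
import math

def solve_xdot(f, t, x, lo=-10.0, hi=10.0, iters=60):
    # Clear x' numerically: find a root of f(t, x, xdot) = 0 in xdot
    # by bisection (assumes a sign change on [lo, hi]).
    flo = f(t, x, lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * f(t, x, mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(t, x, mid)
    return 0.5 * (lo + hi)

def integrate(f, t0, x0, h, steps):
    # Standard forward-Euler march, clearing x' at every step.
    t, x = t0, x0
    for _ in range(steps):
        x += h * solve_xdot(f, t, x)
        t += h
    return x

# Hypothetical example: f(t, x, x') = x' + x = 0, exact solution e^{-t}.
f = lambda t, x, xd: xd + x
x_end = integrate(f, 0.0, 1.0, 1e-3, 1000)
err = abs(x_end - math.exp(-1.0))  # dominated by the first-order Euler error
```

Any standard one-step method could replace Euler here; the point is only that the root-finder supplies x' where no closed form is available.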
[3086] vixra:1710.0082 [pdf]
Hubble’s Law and Antigravity Higgs Boson and Gravity
The unified theory of dynamic space was conceived and written by the physicist Professor Naoum Gosdas, inspired by the principle of antithesis (opposition), from which all natural phenomena are created. These phenomena are derived from the unique absolute dynamic space, which is structured from its fundamental elements, namely dimension (or distance, or length), the elementary electric charges (units) and the forces, according to the principle of antithesis. However, due to the finite dimensions of the Universe and the opposition between existence (the Universe) and nonexistence (non-space), a spherical deformation of space occurred, which created the equality of the peripheral and radial cohesive forces (Universal symmetry) and the Universal antigravity force, whereby Hubble's Law is proved. The breaking of the above Universal symmetry, close to the Universe's center, causes the genesis of the primary form of matter, the first grand cosmic event of the Universe. The gravitational mass of a particle is defined as a stretching of the dynamic space by the core vacuum (Higgs boson) of the particle.
[3087] vixra:1710.0074 [pdf]
Exact Classical and Quantum Mechanics of a Generalized Singular Equation of Quadratic Liénard Type
The authors introduce a generalized singular differential equation of quadratic Liénard type for the study of exact classical and quantum mechanical solutions. The equation is shown to exhibit periodic solutions and to include the linear harmonic oscillator equation and the Painlevé-Gambier XVII equation as special cases. It is also shown that the equation may exhibit discrete eigenstates as quantum behavior under the Nikiforov-Uvarov approach after several point transformations.
[3088] vixra:1710.0067 [pdf]
Bell's Inequality is Violated in Classical Systems as Well as Quantum Systems
Bell's inequality is usually considered to belong to mathematics and not to quantum mechanics. We think this makes it difficult to understand Bell's theory. Thus in this paper, contrary to Bell's spirit (which inherits Einstein's spirit), we discuss Bell's inequality in the framework of quantum theory with the linguistic Copenhagen interpretation. We clarify that whether or not Bell's inequality holds does not depend on whether the system is classical or quantum, but on whether a certain kind of simultaneous measurement exists. Further, we assert that our argument (based on the linguistic Copenhagen interpretation) should be regarded as a scientific representation of Bell's philosophical argument (based on Einstein's spirit).
[3089] vixra:1710.0052 [pdf]
Exact Solutions of the Newton-Schroedinger Equation, Infinite Derivative Gravity and Schwarzschild Atoms
Exact solutions to the stationary spherically symmetric Newton-Schroedinger equation are proposed in terms of integrals involving $generalized$ Gaussians. The energy eigenvalues are also obtained in terms of these integrals and agree with the numerical results in the literature. A discussion of infinite-derivative gravity follows, which allows us to generalize the Newton-Schroedinger equation by $replacing$ the ordinary Poisson equation with a $modified$ non-local Poisson equation associated with infinite-derivative gravity. We proceed to replace the nonlinear Newton-Schroedinger equation with a non-linear quantum-like Bohm-Poisson equation involving Bohm's quantum potential, in which the fundamental quantity is $no$ longer the wave-function $\Psi$ but the real-valued probability density $\rho$. Finally, we discuss how the latter equations reflect a $nonlinear$ $feeding$ loop mechanism between matter and geometry, which allows us to envisage a ``Schwarzschild atom'' as a spherically symmetric probability cloud of matter that curves the geometry; in turn, the geometry back-reacts on this matter cloud, perturbing its initial distribution over space, which in turn affects the geometry, and so forth, until static equilibrium is reached.
[3090] vixra:1710.0036 [pdf]
Laws of General Solutions of Partial Differential Equations
In this paper, four kinds of Z Transformations are proposed to obtain many laws of general solutions of mth-order linear and nonlinear partial differential equations with n variables. Some general solutions of first-order linear partial differential equations, which cannot be obtained by the characteristic equation method, can be solved by the Z Transformations. By comparison, we find that the general solutions of some first-order partial differential equations obtained by the characteristic equation method are not complete.
[3091] vixra:1710.0035 [pdf]
The Statistical Proof of the Riemann Hypothesis
We derive statistics of unsolved problems (conjectures). The probability that a conjecture will be solved is 50%. The probability that a conjecture is true is p = 37%. The probability that we come to know the latter is psi = 29%.
[3092] vixra:1710.0029 [pdf]
Perturbations of Compressed Data Separation with Redundant Tight Frames
In the era of big data, multi-modal data can be seen everywhere, and research on such data has attracted extensive attention in the past few years. In this paper, we investigate perturbations of compressed data separation with redundant tight frames via $\tilde{\Phi}$-$\ell_q$-minimization. By exploiting the properties of the redundant tight frame and the perturbation matrix, i.e., mutual coherence, the null space property and the restricted isometry property, a condition for the reconstruction of sparse signals with redundant tight frames is established, and an error estimate between the local optimal solution and the original signal is also provided. Numerical experiments are carried out to show that $\tilde{\Phi}$-$\ell_q$-minimization is robust and stable for the reconstruction of sparse signals with redundant tight frames. To our knowledge, this may be the first study concerning perturbations of the measurement matrix and the redundant tight frame for compressed data separation.
[3093] vixra:1710.0026 [pdf]
On Consistency in the Skyrme Topological Model
We point to a significant mismatch between the nature of the baryon number and of the electric charge of baryons in the Skyrme topological model. Requirement of consistency between these two then demands a significant improvement in how the electric charge is defined in this model. The Skyrme model thereafter has a consistent electric charge which has a unique colour dependence built into it. Its relationship with other theoretical model structures is also studied.
[3094] vixra:1710.0024 [pdf]
The Mathematical Machinery of Logical Independence Underlying Quantum Randomness and Indeterminacy
In 2008 Tomasz Paterek et al. published experiments demonstrating that quantum randomness results from logical independence. The job of this paper is to derive the implications for Matrix Mechanics. Surprisingly (and apparently unwittingly), the Paterek experiments imply that faithful representation of (non-random) pure states contradicts the Quantum Postulate, which imposes unitary, Hermitian and Hilbert-space mathematics on all states. Paterek's Boolean formalism asserts and demands a non-unitary environment for pure states, which is freely restricted to logically independent unitary structure wherever the creation of mixed states demands unitarity. Consequently, the Paterek experiments contradict the Quantum Postulate, which imposes unitary, Hermitian and Hilbert-space structures axiomatically, as blanket ontology, across the whole theory. Examination of the 'non-unitary to unitary transition' reveals the machinery of quantum indeterminacy. That machinery involves self-referential circularity, inaccessible history, and the geometrical ambiguity of perfect symmetry. The findings here provide answers for researchers studying the Foundations of Quantum Mechanics; they make good intuitive sense of indeterminacy; they provide reason and significance for observable operators and eigenvectors; and they should be helpful for those interested in the Measurement Problem, the EPR paradox and possibly those looking for a method to quantize Gravity. Keywords: foundations of quantum theory, quantum randomness, quantum indeterminacy, logical independence, self-reference, logical circularity, mathematical undecidability.
[3095] vixra:1710.0021 [pdf]
A Monte Carlo Implementation of the Ising Model in Python
This article explores an implementation of the 2D Ising model using the Metropolis algorithm in the Python programming language. The goal of this work was to explore the scope of behaviours this model can demonstrate through a simplistic implementation on a relatively low-end machine. The Ising model itself is particularly interesting as, in spite of its simplicity, it demonstrates relatively complex behaviours of ferromagnets, e.g. the second-order phase transition. To study the specifics of this model, several parameters were measured (namely the net magnetization and energy, the specific heat and the correlation function) and a visualization of the grid was implemented. The simulations demonstrated an easily observable phase transition near the critical temperature on a 100 × 100 Ising grid, with the measured parameters behaving nearly as predicted by the exact solution developed for this model.
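A minimal Metropolis sketch in the spirit of the article (grid size, temperature and seed are illustrative choices, not the author's code). Deep in the ordered phase the energy per spin should drop toward -2 and the net magnetization should be large:

```python
import math, random

def metropolis_sweep(grid, L, beta, rng):
    # One sweep: L*L attempted single-spin flips with periodic boundaries.
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (grid[(i + 1) % L][j] + grid[(i - 1) % L][j]
              + grid[i][(j + 1) % L] + grid[i][(j - 1) % L])
        dE = 2 * grid[i][j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            grid[i][j] *= -1

def energy_per_spin(grid, L):
    # Count each bond exactly once (right and down neighbours).
    E = 0
    for i in range(L):
        for j in range(L):
            E -= grid[i][j] * (grid[(i + 1) % L][j] + grid[i][(j + 1) % L])
    return E / (L * L)

rng = random.Random(0)
L, beta = 16, 1.0  # beta well above the critical value ~0.4407
grid = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(200):
    metropolis_sweep(grid, L, beta, rng)

m = abs(sum(map(sum, grid))) / (L * L)  # net magnetization per spin
e = energy_per_spin(grid, L)            # approaches -2 deep in the ordered phase
```

Specific heat and the correlation function can be estimated from fluctuations of `e` and from spin-spin products over many sweeps, exactly as the abstract describes.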
[3096] vixra:1709.0444 [pdf]
Self-Energy Principle with a Time-Reversal Field is Applied to Photon and Electromagnetic Theory
Photon energy transfer is from point to point, but the wave, according to the Maxwell equations, spreads from the source point into the entire empty space. The concept of wave-function collapse was created to explain this discrepancy. This concept is very rough: suppose there are many partition boards with small holes between the emitter charge and the absorber charge. The light clearly can go through all these small holes from emitter to absorber, but according to the concept of wave-function collapse the wave must collapse N times if there are N holes in the partition boards. One collapse is strange enough; if the wave collapses N times, that is unbelievable! In another article we have argued that the photon energy is actually transferred by the "mutual energy flow", which goes from point to point instead of spreading into the entire space. Since energy can be transferred by the mutual energy flow, the concept of wave-function collapse is not necessary. In order to build the mutual energy flow it is also required to build the self-energy flow, which spreads into the entire empty space. What becomes of the self-energy flow? It is possible that the self-energy flow also collapses to the absorber; however, if the self-energy flow collapses we meet the same problem as with whole-wave collapse: with partition sheets containing N holes, the self-energy flow would have to collapse N times. In the article about the mutual energy principle we proposed another possibility: instead of collapsing, the self-energy flow is returned. It is returned through a time-reversal process, and hence the self-energy does not contribute to the energy transfer of the photon. The return process can also be seen as a collapse process, but one that collapses to the source of the wave instead of its target. In this article we discuss the self-energy flow and the time-reversal process in detail.
[3097] vixra:1709.0431 [pdf]
Photons Evolution on Very Long Distances
We lay down the fundamental hypothesis that any electromagnetic radiation transforms progressively, evolving towards, and after an appropriate distance finally reaching, the wavelength of the cosmic microwave background radiation, 1.873 mm, or the frequency of 160.2 GHz. In this way we explain the cosmic redshift Z of faraway galaxies using only Maxwell's equations and the energy quantum principle for photons. The hypothesis also holds for wavelengths longer (frequencies lower) than that of the cosmic microwave background. Hubble's law emerges naturally as a consequence of this transformation. Following this hypothesis we compute the Hubble constant using Pioneer satellite data, thereby deciphering the enigma of the satellite's anomalous behaviour. We speculate about a numerical composition of the Hubble constant and introduce the Hubble surface. The hypothesis helps to solve some cases that are still enigmatic for standard cosmology. We discuss the maximal observation distance of cosmological phenomena. We give an answer to the anomalous acceleration of the Pioneer satellite and show that it is a universal constant common to every satellite.
[3098] vixra:1709.0430 [pdf]
L'évolution Des Photons Sur de Très Longues Distances
We lay down the fundamental hypothesis that any electromagnetic radiation transforms progressively, evolving towards, and after an appropriate distance reaching, the value of the cosmic background radiation: a wavelength of 1.873 mm or a frequency of 160.2 GHz. In this way we explain the redshift Z of the radiation from distant galaxies using the classical Maxwell equations and the quantum energy of photons. This hypothesis also holds when the emitted radiation has a longer wavelength, or a lower frequency, than the cosmic background radiation. Hubble's law emerges quite naturally as a consequence of this transformation. Following this hypothesis, we evaluate the Hubble constant using data provided by the Pioneer satellite, while explaining the anomalous behaviour attributed to that satellite. We speculate on a possible composition of the Hubble constant and introduce the Hubble surface. This model allows the resolution of several situations unexplained by current cosmology. We discuss the limiting observation distance of cosmological phenomena. We explain the anomalous acceleration of the Pioneer satellite and show that it is a universal constant, the same for every satellite.
[3099] vixra:1709.0424 [pdf]
Survival of small PBHs in the Very Early Universe
The formation of Primordial Black Holes (PBHs) is a robust prediction of several gravitational theories. Whereas the creation of PBHs was very active in the remote past, such processes seem to be negligible at the present epoch. In this work, we estimate the effects of the radiation surrounding PBHs through the absorption term in the equations that describe how their masses depend on time. Hawking radiation contributes a mass loss and the absorption term contributes a gain, and an interesting competition between these terms is analysed. These effects are included in the equations describing PBHs and their mass density as the universe evolves in time; the model is able to describe the evolution of the numerical density of PBHs and the mass evolution, and comparisons with cosmological constraints set upper limits on their abundances. We evaluate the effect of this accretion onto PBHs and obtain corrections to the initial masses that indicate deviations from the default values of the evaporation time scale. The time scale of PBHs in the early universe is modified by the energy accretion, and we can estimate how these contributions may alter the standard model of PBHs.
[3100] vixra:1709.0423 [pdf]
The Law of Inertia from Spacetime Symmetries
The law of inertia has been treated as a fundamental assumption in classical physics. In this article, however, I show that the law of inertia can in fact be derived from the homogeneity of spacetime, by using the time slice as a tool for dealing with spacetime symmetries.
[3101] vixra:1709.0408 [pdf]
Fermat's Proof Of Fermat's Last Theorem
Employing only basic arithmetic and algebraic techniques that would have been known to Fermat, and utilizing alternate computation methods for arriving at $\sqrt[n]{c^n}$, we identify a governing relationship between $\sqrt{(a^2 + b^2)}$ and $\sqrt[n]{(a^n + b^n)}$ (for all $n > 2$), and are able to establish that $c = \sqrt[n]{(a^n + b^n)}$ can never be an integer for any value of $n > 2$.
[3102] vixra:1709.0405 [pdf]
Fundamental Physics and the Fine-Structure Constant
From the exponential function of Euler's equation to the geometry of a fundamental form, a calculation of the fine-structure constant and its relationship to the proton-electron mass ratio is given. Equations are found for the fundamental constants of the four forces of nature: electromagnetism, the weak force, the strong force and the force of gravitation. Symmetry principles are then associated with traditional physical measures.
[3103] vixra:1709.0402 [pdf]
Detection and Prevention of Non-PC Botnets
Botnet attacks are a serious and well-established threat to the internet community. These attacks are not only restricted to PCs or laptops but are spreading their roots to devices such as smartphones, refrigerators, and medical instruments. According to users, these are the devices least prone to attacks; on the other hand, a device that is expected to be least vulnerable has weak security, which attracts attackers. In this paper, we list the details of the latest botnet attacks and the common vulnerabilities behind such attacks. We also explain and suggest proven detection methods based on attack type. After an analysis of attacks and detection techniques, we offer recommendations which can be utilized to mitigate such attacks.
[3104] vixra:1709.0401 [pdf]
Holy Cosmic Condensate of Ultralight Gravitons with Electric Dipole Moment
Quantum modification of general relativity (Qmoger) is supported by cosmic data (without fitting). The Qmoger equations consist of the Einstein equations with two additional terms responsible for production/absorption of matter. In Qmoger cosmology there was no Big Bang, and matter is continuously produced by the Vacuum. In particular, production of ultralight gravitons with a possible tiny electric dipole moment started about 284 billion years ago. Quantum effects dominate the interaction of these particles and they form a quantum condensate. Under the influence of gravitation, the condensate forms galaxies and produces ordinary matter, including photons. As one important result of this activity, it recently created us, the people, and continues to support us. In particular, our subjective experiences (qualia) are a result of an interaction between the background condensate and the neural system of the brain. The action potentials of the neural system create traps and coherent dynamic patterns in the dipolar condensate. So, qualia are graviton-based, which can open new directions of research in biology and medicine. At the same time, a specialized study of qualia can open a new window into the dark sector of matter. The Qmoger theory explains why most ordinary particles are fermions, predicts the mass of the neutrino (in accord with the experimental bound) and explains neutrino oscillations (between three flavors) in terms of interaction with the background condensate. The achievements of the Standard Model and Quantum Field Theory can be combined with the Qmoger theory. Key words: cosmology with continuous production of energy, ultralight gravitons with tiny electric dipole moment, biophysics, qualia.
[3105] vixra:1709.0393 [pdf]
Neumann Series Seen as Expansion Into Bessel Functions J_{n} Based on Derivative Matching.
The multiplicative coefficients of a series of Bessel functions of the first kind can be adjusted so as to match desired values corresponding to the derivatives of the function to be expanded. In this way a Neumann series of Bessel functions is constructed. The text presents known results.
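The derivative-matching construction itself is not reproduced here, but a Neumann series can be sanity-checked numerically from the standard integral representation of J_n. The sketch below (assuming only the classical Jacobi-Anger identity, which gives one well-known such expansion) verifies sin(x) = 2 [J_1(x) - J_3(x) + J_5(x) - ...]:

```python
import math

def bessel_j(n, x, steps=2000):
    # Integral representation for integer order:
    #   J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt,
    # evaluated with the composite trapezoidal rule.
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi))  # endpoint contributions
    for k in range(1, steps):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

# Neumann-type expansion from the Jacobi-Anger identity:
#   sin(x) = 2 * sum_{k>=0} (-1)^k * J_{2k+1}(x)
x = 1.2
series = 2 * sum((-1) ** k * bessel_j(2 * k + 1, x) for k in range(8))
# `series` should agree with sin(1.2) to high accuracy
```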
[3106] vixra:1709.0387 [pdf]
Silicon n-p-n Cold Emission Cathode
We study a silicon n-p-n structure used as a cold emission cathode. Such a device is able to emit electrons in a low external field and also to internally control the intensity of emission.
[3107] vixra:1709.0359 [pdf]
A Sharp Sufficient Condition of Block Signal Recovery Via $l_2/l_1$-Minimization
This work obtains a sharp sufficient condition on the block restricted isometry property for the recovery of sparse signals. Under this assumption, a signal with block structure can be stably recovered in the presence of noise, and a block sparse signal can be exactly reconstructed in the noise-free case. Besides, an example is given to show that the condition is sharp. As a byproduct, when $t=1$, the result improves the bound on the block restricted isometry constant $\delta_{s|\mathcal{I}}$ in Lin and Li (Acta Math. Sin. Engl. Ser. 29(7): 1401-1412, 2013).
[3108] vixra:1709.0355 [pdf]
Common Description of Quantum Electromagnetism and Relativistic Gravitation
We use three equations as postulates: Pem, referring to electromagnetism; Pgrav, referring to gravity; and Pqm, referring to quantum mechanics and defining the wave function. Combining Pem with "Sommerfeld's quantum rules" corresponds to the original quantum theory of Hydrogen, which produces the correct relativistic energy levels of atoms (Sommerfeld's and Dirac's theories of matter produce the same energy levels, and Schrodinger's theory produces an approximation of those levels). Pqm implies that the wave function is a solution of both Schrodinger's and Klein-Gordon's equations in the non-interacting case while, in the interacting case, it implies "Sommerfeld's quantum rules": Pem and Pqm then produce the correct relativistic energy levels of atoms (the same as Dirac's energy levels). We check that the required degeneracy is justified by pure deduction, without any other assumption (Schrodinger's theory only justifies one half of the degeneracy). We observe the connection between Pqm, Quantum Field Theories and the tunnel effect. From Pgrav we deduce an equation of motion very similar to general relativity (with accuracy 10^{-6} at the surface of the Sun), our postulate being explicitly an approximation. First of all, we discuss classical Kepler problems (Newtonian motion of the Earth around the Sun), explain the link between Kepler's law of periods (1619) and Planck's law (1900), and observe the links between all historical models of the atom (Bohr, Sommerfeld, Pauli, Schrodinger, Dirac, Fock).
[3109] vixra:1709.0354 [pdf]
Laparoscopic Removal of a Migrated Intra Uterine Device Embedded in the Anterior Abdominal Wall in Yaounde (Cameroon), a Third World Country
Uterine perforation is a serious complication which can happen after intrauterine device (IUD) insertion. Following uterine perforation, an IUD may migrate into the gynecologic, urinary or gastro-intestinal organs. There are many reports of migrated IUDs but fewer reports of IUDs embedded in the abdominal wall. Laparoscopic removal of a migrated IUD had not yet been described in our country.
[3110] vixra:1709.0353 [pdf]
Controversies in Pregnancy Management after Prenatal Diagnosis of a Twin Pregnancy Discordant for Trisomy 21 Diagnosed by Cell-Free Fetal DNA Testing
Background: Current preliminary data advocate cfDNA testing in twin pregnancies, since both the increasing use of ART and higher maternal ages have raised the incidence of (discordant) aneuploidies. Procedures and findings: This report raises the ethical implications deriving from a twin pregnancy discordant for trisomy 21, conceived from egg donation and diagnosed by cfDNA testing after low-risk conventional first-trimester screening.
[3111] vixra:1709.0352 [pdf]
Giant Oocytes with Two Meiotic Spindles and Two Polar Bodies: Report of Two Cases
With the advent of IVF technology, the terms normal and abnormal oocyte have been defined, and one type of abnormal oocyte is the "giant oocyte". Giant oocytes are defined as having a 30% larger diameter and twice the volume of normal oocytes [1,2]. The giant oocyte is a rarely observed phenomenon among humans, and embryos may develop from these oocytes [2,3]. The first hypothesis for the mechanism of their formation is cytoplasmic fusion of two oogonia; the second is the lack of cytokinesis during mitotic divisions in an oogonium [4]. Fertilization and progression of a giant oocyte is suspected to be the cause of digynic triploidy, which is defined as triploidy with two maternal and one paternal complements [5]. In this case report, we present two giant oocytes, each shown to have two meiotic spindles, visualized by polarization microscopy. Because giant oocytes can develop into embryos that are morphologically normal but genetically abnormal, an embryologist has to be aware of this phenomenon. For this reason, the scientific aim of this report is to present the polarization-microscopic properties of giant oocytes and to increase the awareness of such oocytes among embryologists.
[3112] vixra:1709.0351 [pdf]
Risk Factors for Preterm Birth among Women Who Delivered Preterm Babies at Bugando Medical Centre, Tanzania
Background: Preterm birth is the leading cause of infant morbidity and mortality globally. Infants who are born preterm suffer long-term health consequences. There are only a few studies on risk factors for prematurity in Tanzania. This study aimed to determine the risk factors for preterm birth among women who delivered preterm babies at Bugando Medical Centre in Mwanza, Tanzania. Methods: A matched case-control study was conducted at the Bugando Medical Centre from May to June 2015. A total of 50 women with preterm birth (cases) were matched with 50 women who had term births (controls). Cases were matched with controls by date of delivery. We excluded mothers with multiple gestations and those who were sick and unsuitable for interview. A structured questionnaire was used to collect relevant information from all participants. Data analysis was performed using SPSS version 20.0. Odds ratios with 95% confidence intervals were estimated in a multivariate regression model to determine factors associated with preterm delivery.
[3113] vixra:1709.0350 [pdf]
Pregnancy, Exercise and Late Effects in the Offspring until Adult Age
Maternal exercise during pregnancy, one of the critical periods, can have significant delayed effects on the offspring's fetal imprinting of future development up to adult age, provided the exercise is adequate and voluntary rather than forced (a stressor). The offspring's spontaneous physical activity up to adult age can be increased, and body composition, cardiac microstructure and reactivity (greater resistance to noxae), vasomotor function, glucose metabolism, and insulin sensitivity, along with related diseases (diabetes), can be positively influenced. Bone development, as well as brain function and learning ability, can also be improved, as revealed in a number of experimental animal-model studies. Exercise during pregnancy was also shown to compensate in the offspring for the detrimental effects of inadequate diets, e.g. high-fat diets. The possibility of significantly modifying the programming of the offspring's development and health status by adequate and physiological maternal exercise during pregnancy is also supported by some observations in humans.
[3114] vixra:1709.0335 [pdf]
An Insight into the Capability of Composite Technology to Enable Magnesium to Spread its Wings in Engineering and Biomedical Applications
Nature has various ways to inspire humans through its creations, which researchers in different disciplines are trying to understand and mimic in order to advance the current state of technology and quality of life. From a materials perspective, nature uses composites in its creations involving plants and animals, with bone as a classical example.
[3115] vixra:1709.0334 [pdf]
Polyvinyl Alcohol-Nanobioceramic Based Drug Delivery System
Drug delivery is today an advanced and important area of pharmaceutical research, and the application of nanotechnology includes enhancing the solubility and permeability of drugs so as to improve their bioavailability, including delivery to the targeted site. Hydroxyapatite (HAP)-based bioceramic nanoparticles combined with biodegradable polymers have been used in the present work to develop an amoxicillin-based delivery system. The synthesized n-HAP powders were evaluated for their Ca/P ratio, which indicates the presence of HAP as a single phase. The nanostructure, morphology, and vibrational groups were confirmed by instrumental analysis; SEM images confirm spherically shaped nano-hydroxyapatite particles. The loading and unloading characteristics of the drug were recorded spectrophotometrically.
[3116] vixra:1709.0330 [pdf]
The Inglorious History of Thermodynamics
Usually, physics students don't like thermodynamics: it is incomprehensible, and they are commonly told to get used to it. Later on, as experts, they find that thermodynamic calculations come with surprises, sometimes evil, sometimes good. That can mean only one thing: the theory is inconsistent. Here it is shown where the inconsistency lies.
[3117] vixra:1709.0306 [pdf]
A $4\times 4$ Diagonal Matrix Schr{\"o}dinger Equation from Relativistic Total Energy with a $2\times 2$ Lorentz Invariant Solution.
In this paper an algebraic method is presented to derive a non-Hermitian Schr{\"o}dinger equation from $E=V+c\sqrt{m^2c^2+\left(\mathbf{p}-\frac{e}{c}\mathbf{A}\right)^2}$ with $E\rightarrow i\hbar \frac{\partial}{\partial t}$ and $\mathbf{p} \rightarrow -i\hbar \nabla$. The derivation makes no use of Dirac's method of four-vectors, nor is the root operator squared; instead, the algebra of operators is used to derive a matrix Schr{\"o}dinger equation. The obtained equation is demonstrated to be Lorentz invariant.
[3118] vixra:1709.0304 [pdf]
Boys' Function Computed by Gauss-Jacobi Quadrature
Boys' Function F_m(z), which appears in the quantum mechanics of Gaussian-type orbitals, is a special case of Kummer's confluent hypergeometric function. We evaluate its integral representation, a product of a power and an exponential function over the unit interval, with numerical Gauss-Jacobi quadrature. We provide an implementation in C for real values of the argument z, which employs a table of the weights and abscissae of the quadrature rule for integer quantum numbers m <= 129.
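A minimal numerical sketch of the same integral, F_m(z) = ∫_0^1 t^{2m} exp(-z t^2) dt, using a fixed 5-point Gauss-Legendre rule on the unit interval as a stand-in for the paper's tabulated Gauss-Jacobi rule, and Python rather than the paper's C implementation:

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
_NODES = [-0.906179845938664, -0.538469310105683, 0.0,
          0.538469310105683, 0.906179845938664]
_WEIGHTS = [0.236926885056189, 0.478628670499366, 0.568888888888889,
            0.478628670499366, 0.236926885056189]

def boys(m, z):
    """F_m(z) = integral over [0,1] of t^(2m) * exp(-z t^2) dt,
    by fixed-order quadrature (low-order sketch, not production-accurate)."""
    total = 0.0
    for x, w in zip(_NODES, _WEIGHTS):
        t = 0.5 * (x + 1.0)              # map [-1, 1] -> [0, 1]
        total += w * t**(2 * m) * math.exp(-z * t * t)
    return 0.5 * total                   # Jacobian of the mapping
```

For z = 0 the integrand is the polynomial t^{2m}, so F_m(0) = 1/(2m+1) is recovered exactly (up to rounding) whenever 2m stays within the rule's degree of exactness; for nonzero z a 5-point rule is only a few digits accurate, which is why the paper tabulates higher-order rules.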
[3119] vixra:1709.0289 [pdf]
Selected Papers of the Engineer Abdelmajid Ben Hadj Salem
This book is Tome III of the selected papers of the senior engineer Abdelmajid Ben Hadj Salem, containing papers about: robust estimators; the Fuseaux coordinate system in Tunisia; and the n-body problem.
[3120] vixra:1709.0253 [pdf]
"Experimentally Proven": the Big Fallacy of Theoretical Physics.
The intention of theoretical physics is to construct mathematical models so that experimental data can be predicted through calculation. If the prevailing model cannot explain new experimental data because of its imperfections, theorists add to the model fictitious entities and helpmates defined on the basis of the new experimental data, so that the new data can now be calculated with the modified model. Remaining contradictions are camouflaged as well as possible to make the model consistent. If calculated data obtained indirectly from later experiments are consistent with the fictitious entities and helpmates, theorists conclude that this proves the fictitious entities and helpmates really exist. The conclusion is a fallacy, because it ignores that the model was previously made consistent, which means that all data obtained through experiments and calculations must be explainable with the model; otherwise the model would not be consistent. Adding more and more fictitious entities and helpmates to an imperfect model makes it, over time, more and more complex, untrustworthy and non-physical, until finally a radically new model is required to overcome these problems.
[3121] vixra:1709.0242 [pdf]
Exact Map Inference in General Higher-Order Graphical Models Using Linear Programming
This paper is concerned with the problem of exact MAP inference in general higher-order graphical models by means of a traditional linear programming relaxation approach. The proof we develop is a rather simple algebraic one, made straightforward above all by the introduction of two novel algebraic tools. On the one hand, we introduce the notion of a delta-distribution, which stands for the difference of two arbitrary probability distributions and mainly serves to remove the sign constraint inherent to a traditional probability distribution. On the other hand, we develop an approximation framework for general discrete functions by means of an orthogonal projection onto linear combinations of function margins with respect to a given collection of point subsets; we exploit this approach for the purpose of modeling locally consistent sets of discrete functions from a global perspective. As a first step, we develop from scratch the expectation-optimization framework, which is a reformulation, on stochastic grounds, of the convex-hull approach. As a second step, we develop the traditional LP relaxation of this expectation-optimization approach and show that it solves the MAP inference problem in graphical models under rather general assumptions. Last but not least, we describe an algorithm that computes an exact MAP solution from a possibly fractional optimal (probability) solution of the proposed LP relaxation.
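For checking an LP-relaxation solver on toy instances, exact MAP can be computed by brute force. The sketch below (a hypothetical interface, not the paper's algorithm) enumerates all assignments of a discrete pairwise model; these assignments are exactly the vertices of the marginal polytope, over which the expectation-optimization view attains its maximum:

```python
from itertools import product

def map_brute_force(n_vars, n_labels, unary, pairwise, edges):
    """Exact MAP by enumeration: argmax over joint assignments of the
    total score. unary[i][k] scores label k at variable i; pairwise is a
    dict mapping an edge (i, j) to an n_labels x n_labels score table."""
    best_score, best_x = float("-inf"), None
    for x in product(range(n_labels), repeat=n_vars):
        score = sum(unary[i][x[i]] for i in range(n_vars))
        score += sum(pairwise[(i, j)][x[i]][x[j]] for (i, j) in edges)
        if score > best_score:
            best_score, best_x = score, x
    return best_x, best_score
```

Since maximizing the expected score over all probability distributions is a linear program whose optimum sits at a vertex, the value returned here must match the optimal value of any exact LP formulation on the same instance, which makes this a convenient unit test for a relaxation.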
[3122] vixra:1709.0207 [pdf]
Dialogues on Various Relativistic Paradoxes
The author attempts to give a self-contained view of various paradoxes in the theory of relativity, and provides an extensive discussion of them in hopefully elementary terms.
[3123] vixra:1709.0203 [pdf]
A Lattice Theoretic Look: A Negated Approach to Adjectival (Intersective, Neutrosophic and Private) Phrases
The aim of this paper is to contribute to Natural Logic and Neutrosophic Theory. This paper considers lattice structures built on noun phrases. Firstly, we present some new negations of intersective adjectival phrases and their set-theoretic semantics, such as non-red non-cars and red non-cars. Secondly, a lattice structure is built on positive and negative nouns and their positive and negative intersective adjectival phrases. Thirdly, a richer lattice is obtained from the previous one by adding the neutrosophic prefixes neut and anti to intersective adjectival phrases. Finally, the richest lattice is constructed by extending the previous lattice structures with private adjectives (fake, counterfeit). We call these lattice classes Neutrosophic Linguistic Lattices (NLL).
[3124] vixra:1709.0202 [pdf]
An Efficient Image Segmentation Algorithm Using Neutrosophic Graph Cut
Segmentation is considered as an important step in image processing and computer vision applications, which divides an input image into various non-overlapping homogenous regions and helps to interpret the image more conveniently. This paper presents an efficient image segmentation algorithm using neutrosophic graph cut (NGC).
[3125] vixra:1709.0201 [pdf]
A Novel Neutrosophic Weighted Extreme Learning Machine for Imbalanced Data Set
Extreme learning machine (ELM) is known as a kind of single-hidden layer feedforward network (SLFN), and has obtained considerable attention within the machine learning community and achieved various real-world applications.
[3126] vixra:1709.0185 [pdf]
Neutrosophic Quadruple Algebraic Hyperstructures
The objective of this paper is to develop neutrosophic quadruple algebraic hyperstructures. Specifically, we develop neutrosophic quadruple semihypergroups, neutrosophic quadruple canonical hypergroups and neutrosophic quadruple hyperrings, and we present elementary properties which characterize them.
[3127] vixra:1709.0184 [pdf]
NS-K-NN: Neutrosophic Set-Based K-Nearest Neighbors Classifier
k-nearest neighbors (k-NN), known as a simple and efficient approach, is a non-parametric supervised classifier. It aims to determine the class label of an unknown sample from its k nearest neighbors stored in a training set.
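A minimal plain k-NN classifier, without the neutrosophic set weighting that the paper builds on top of it, can be sketched as:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples (Euclidean distance)."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

The NS-k-NN idea in the abstract would then replace the uniform majority vote with weights derived from each neighbor's neutrosophic memberships; the sketch above is only the classical baseline.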
[3128] vixra:1709.0173 [pdf]
Special Types of Bipolar Single Valued Neutrosophic Graphs
Neutrosophic theory has many applications in graph theory; bipolar single valued neutrosophic graphs (BSVNGs) are a generalization of fuzzy graphs, intuitionistic fuzzy graphs, and SVNGs. In this paper we introduce some types of BSVNGs, such as subdivision BSVNGs, middle BSVNGs, total BSVNGs and bipolar single valued neutrosophic line graphs (BSVNLGs), and investigate the isomorphism, co-weak isomorphism and weak isomorphism properties of subdivision BSVNGs, middle BSVNGs, total BSVNGs and BSVNLGs.
[3129] vixra:1709.0164 [pdf]
RSA Cryptography over Polynomials (II)
We present a cryptosystem analogous to the RSA cryptosystem but for polynomials over a finite field; more precisely, two irreducible polynomials play the role of the two prime numbers.
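The abstract gives no concrete parameters, so the following Python sketch is a generic toy instance of the idea, assuming RSA transplanted to GF(2)[x]: two irreducible polynomials (encoded as int bitmasks) replace the two primes, and the unit-group order (2^deg P - 1)(2^deg Q - 1) replaces Euler's totient. It is illustrative only, not secure:

```python
def pmod(a, m):
    """Remainder of a modulo m in GF(2)[x]; polynomials are int bitmasks."""
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def pmul(a, b, m):
    """Carry-less (GF(2)[x]) product of a and b, reduced modulo m."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return pmod(r, m)

def ppow(base, e, m):
    """base**e modulo m by square-and-multiply."""
    r, base = 1, pmod(base, m)
    while e:
        if e & 1:
            r = pmul(r, base, m)
        base = pmul(base, base, m)
        e >>= 1
    return r

# Toy parameters: two irreducible polynomials over GF(2)
P = 0b1011       # x^3 + x + 1
Q = 0b10011      # x^4 + x + 1
N = 0b10101101   # P * Q = x^7 + x^5 + x^3 + x^2 + 1
phi = (2**3 - 1) * (2**4 - 1)   # unit-group order: 7 * 15 = 105
e = 11                          # public exponent, coprime to 105
d = pow(e, -1, phi)             # private exponent: 86
```

Encryption is `c = ppow(msg, e, N)` for a message polynomial coprime to N, and decryption is `ppow(c, d, N)`, mirroring integer RSA term for term.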
[3130] vixra:1709.0153 [pdf]
Wherefrom Comes the Missing Baryon Number in the Eightfold Way Model?
An extremely puzzling problem of particle physics is why no baryon number arises mathematically to describe the spin-1/2 octet baryons in the Eightfold Way model. Recently the author has shown that all the canonical proposals to provide a baryon number to solve the above problem are fundamentally wrong. So what is the resolution of this conundrum? Here we show that the topological Skyrme-Witten model, which takes account of the Wess-Zumino anomaly, comes to our rescue. In contrast to the two-flavour model, the presence of this anomaly term for three flavours shows that the quantal states are monopolar harmonics, which are not functions but sections of a fiber bundle. This generates profoundly significant "right hypercharges", which make the adjoint representation of SU(3) the ground state. This provides a topologically generated baryon number for the spin-1/2 baryons in the adjoint representation, connecting to the Eightfold Way baryon octet states. This solves the mystery of the missing baryon number in the Eightfold Way model.
[3131] vixra:1709.0147 [pdf]
The Temperature Dependence on Intermolecular Potential Energy in the Design of a Supercritical Stirling Cycle Heat Engine
The Stirling thermodynamic heat engine cycle is modified, where instead of an ideal gas, a real, supercritical, monatomic working fluid subjected to intermolecular attractive forces is used. The potential energy of real gases is redefined to show it decreasing with temperature as a result of the attractive Keesom forces, which are temperature dependent. This new definition of potential energy is used to thermodynamically design a Stirling cycle heat engine with supercritical xenon gas, and an engine efficiency that exceeds the Carnot efficiency is demonstrated. The change in internal energy predicted is compared to experimental measurements of condensing steam, xenon, argon, krypton, nitrogen, methane, ethane, propane, normal butane, and iso-butane, and the close match validates this new definition of temperature-dependent real gas potential energy, as well as the thermodynamic feasibility of the modified supercritical Stirling cycle heat engine.
[3132] vixra:1709.0120 [pdf]
Coordinate Transformation Between Inertial Reference Frames
Two inertial reference frames moving at identical velocity can be separated if one of them is put under acceleration for a duration. The coordinates of both inertial reference frames are related by the acceleration and its duration. An immediate property of this coordinate transformation is the conservation of distance and length across reference frames. Therefore, the concept of length contraction from the Lorentz transformation is impossible in reality and physics.
[3133] vixra:1709.0108 [pdf]
A New Semantic Theory of Natural Language
Formal Semantics and Distributional Semantics are two important semantic frameworks in Natural Language Processing (NLP). Cognitive Semantics belongs to the movement of Cognitive Linguistics, which is based on contemporary cognitive science. Each framework can deal with some meaning phenomena, but none of them fulfills all requirements proposed by applications. A unified semantic theory characterizing all important language phenomena has both theoretical and practical significance; however, although many attempts have been made in recent years, no existing theory has achieved this goal yet. This article introduces a new semantic theory that has the potential to characterize most of the important meaning phenomena of natural language and to fulfill most of the necessary requirements for philosophical analysis and for NLP applications. The theory is based on a unified representation of information, and constructs a kind of mathematical model, called a cognitive model, to interpret natural language expressions in a compositional manner. It accepts the empirical assumption of Cognitive Semantics and overcomes most shortcomings of Formal Semantics and of Distributional Semantics. The theory, however, is not a simple combination of existing theories, but an extensive generalization of classic logic and Formal Semantics. It inherits nearly all advantages of Formal Semantics, and also provides descriptive contents for objects and events that are as fine-grained as possible, contents which represent the results of human cognition.
[3134] vixra:1709.0099 [pdf]
Behaviour of a Matter Torus Under the Influence of the Expanding Cosmos
In this paper the behaviour of a matter torus under the influence of the expanding cosmos model (according to our theory PUFT) is investigated. This task can be treated numerically for the full lifetime of the cosmos model (from the Urstart to the Finish). As we know from the literature cited, the graphic course of the cosmological scalaric field after the Urstart exhibits an increase, followed by a return down to the Finish. This curious behaviour of the cosmological scalaric field leads to the suspicion that the shape of the torus during the lifetime of the cosmos may be: a sphere at the Urstart; transformation of the sphere to a ring with decreasing thickness; and a return to a sphere at the Finish. It seems that such a theoretically predicted physical effect could perhaps exist in Nature. The torus was chosen as a test object since the numerical calculations can be done without approximation procedures. We took this rather simple example to show this cosmological effect for further applications in astrophysics.
[3135] vixra:1709.0083 [pdf]
Quintessential Nature of the Fine-Structure Constant
An introduction is given to the geometry and harmonics of the Golden Apex in the Great Pyramid, with the metaphysical and mathematical determination of the fine-structure constant of electromagnetic interactions. Newton's gravitational constant is also presented in harmonic form and other fundamental physical constants are then found related to the quintessential geometry of the Golden Apex in the Great Pyramid.
[3136] vixra:1709.0082 [pdf]
A Possibility of Brain Stimulation with Oscillating Neutrinos
Special properties of the first ordinary (observable) matter produced in the universe, the oscillating neutrinos, are discussed in the context of their use for healing and stimulation of the brain.
[3137] vixra:1709.0053 [pdf]
The Theory of Quantum Gravity Without Divergencies and Calculation of Cosmological Constant
To construct quantum gravity we introduce the quantum gravity state as a function of particle coordinates and a functional of fields. We add the metric as a new argument of the state: $$ \Psi=\Psi(t,x_{1},...x_{n},\lbrace A^{\gamma}(x)\rbrace, \lbrace g_{\mu\nu}(x) \rbrace) $$ We calculate the cosmological constant assuming that the quantum state is a function of time and the radius of the universe (mini-superspace): $$ \Psi=\Psi(t,a) $$ To avoid infinities in the solutions, we replace the usual equation for the propagator with an initial-value Cauchy problem, which has a finite and unique solution. For example, we replace the equation for the Dirac electron propagator $$ (\gamma^\mu p_\mu - mc)K(t,x,t_0,x_0)= \delta(\vec{x} - \vec{x_0})\delta(t-t_0) $$ which already has an infinity at the start $t = t_0$, with the initial-value Cauchy problem $$ \begin{cases} (H - i \hbar\partial / \partial t)K(t,x,t_0,x_0)=0,\\ K(t,x,t_0,x_0) = \delta(\vec{x} - \vec{x_0}),\quad t=t_0, \end{cases} $$ which has a finite and unique solution.
[3138] vixra:1709.0036 [pdf]
Evolution and the Mind of God
This essay asks the question who, or what, is God. This is not new. Philosophers and religions have made many attempts to understand the nature of God. This essay is different from earlier attempts in that it develops a theory of God based on all known science and our limited understanding of the processes in the universe. It is not necessary to start with the belief that God exists; rather, by following the logic of the essay, it can be concluded that God exists.
[3139] vixra:1709.0035 [pdf]
Neutrino Mixing with Hopf Algebras
Neutrino mixing in a spectral model for QFT employs the Bogoliubov transformation, which is a quantum Fourier transform. We look at the Hopf algebras in this setting, from a more motivic perspective. Experimental results for mixing are considered.
[3140] vixra:1709.0007 [pdf]
Computing, Cognition and Information Compression
This article develops the idea that the storage and processing of information in computers and in brains may often be understood as information compression. The article first reviews what is meant by information and, in particular, what is meant by redundancy, a concept which is fundamental in all methods for information compression. Principles of information compression are described. The major part of the article describes how these principles may be seen in a range of observations and ideas in computing and cognition: the phenomena of adaptation and inhibition in nervous systems; 'neural' computing; the creation and recognition of 'objects' and 'classes' in perception and cognition; stereoscopic vision and random-dot stereograms; the organisation of natural languages; the organisation of grammars; the organisation of functional, structured, logic and object-oriented computer programs; the application and de-referencing of identifiers in computing; retrieval of information from databases; access and retrieval of information from computer memory; logical deduction and resolution theorem proving; inductive reasoning and probabilistic inference; parsing; normalisation of databases.
[3141] vixra:1708.0462 [pdf]
How to Effect a Desired Rotation of a Vector about a Given Axis via Geometric (Clifford) Algebra
We show how to transform a "rotate a vector around a given axis" problem into one that may be solved via GA, which rotates objects with respect to bivectors. A sample problem is worked to show how to calculate the result of such a rotation conveniently via an Excel spreadsheet, to which a link is provided.
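The worked example in the paper uses an Excel spreadsheet; as an alternative sketch (assuming the standard correspondence between 3D GA rotors and unit quaternions, where the rotor for angle theta about a unit axis n has scalar part cos(theta/2) and bivector part sin(theta/2) dual to n), the sandwich product R v R~ can be computed in a few lines of Python:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, theta):
    """Rotate vector v about a unit axis by angle theta via the
    rotor/quaternion sandwich R v R~ (counterclockwise, right-hand rule)."""
    half = theta / 2.0
    nx, ny, nz = axis
    s = math.sin(half)
    R = (math.cos(half), s * nx, s * ny, s * nz)
    Rrev = (R[0], -R[1], -R[2], -R[3])       # reverse (conjugate) of the rotor
    _, x, y, z = qmul(qmul(R, (0.0, *v)), Rrev)
    return (x, y, z)
```

For example, rotating (1, 0, 0) by 90 degrees about the z-axis yields (0, 1, 0) up to floating-point error, which matches the bivector-sandwich result described in the paper.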
[3142] vixra:1708.0406 [pdf]
A New Formulation of the Equations of the n-Body Problem
Starting from the equations of the $n$-body problem, we consider $t$ as a function of the variables $(x_k,y_k,z_k)_{k=1,n}$ and write a new formulation of the equations of the $n$-body problem.
[3143] vixra:1708.0404 [pdf]
Statistics on Small Graphs
We create the unlabeled or vertex-labeled graphs with up to 10 edges and up to 10 vertices and classify them by a set of standard properties: directed or not, vertex-labeled or not, connectivity, presence of isolated vertices, presence of multiedges and presence of loops. We present tables of how many graphs exist in these categories.
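A brute-force version of such a census can be sketched in Python. This sketch covers only one cell of the paper's classification, connected simple labeled undirected graphs, by enumerating all edge subsets of the complete graph and testing connectivity with a depth-first search:

```python
from itertools import combinations, product

def count_connected_labeled(n):
    """Count connected simple labeled graphs on n vertices by enumerating
    all subsets of the n*(n-1)/2 possible edges."""
    verts = range(n)
    all_edges = list(combinations(verts, 2))
    count = 0
    for mask in product((0, 1), repeat=len(all_edges)):
        adj = {v: set() for v in verts}
        for bit, (u, v) in zip(mask, all_edges):
            if bit:
                adj[u].add(v)
                adj[v].add(u)
        # Depth-first search from vertex 0
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        count += (len(seen) == n)
    return count
```

The counts 1, 1, 4, 38 for n = 1..4 match the known sequence of connected labeled graphs; exhaustive enumeration like this is only feasible for the small sizes the paper tabulates, since the number of edge subsets grows as 2^(n(n-1)/2).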
[3144] vixra:1708.0380 [pdf]
Division by Zero and the Arrival of Ada
Division by 0 is not defined in mathematics. Mathematics suggests solutions by workaround methods; however, they give only approximate, not actual or exact, results. In this paper we propose methods to solve those problems. One characteristic of our solution methods is that they produce actual or exact results, and they are in conformity with, and supported by, physical or empirical facts. Another characteristic is their simplicity: computations can be done easily based on basic arithmetic or algebra or other computation methods we are already familiar with.
[3145] vixra:1708.0373 [pdf]
Further Tractability Results for Fractional Hypertree Width
The authors have recently shown that recognizing low fractional hypertree-width (fhw) is NP-complete in the general case and that the problem becomes tractable if the hypergraphs under consideration have degree and intersection width bounded by a constant, i.e., every vertex is contained in only constantly many different edges and the intersection of two edges contains only constantly many vertices. In this article, we show that bounded degree alone suffices to ensure tractability.
[3146] vixra:1708.0372 [pdf]
Dynamics of the Gravity Field
We derive the canonical momentum of the gravity field and use it to derive the path integral of the gravity field. The canonical momentum is represented in the Lorentz group; we derive it from the holonomy U(A) of the connection A of the Lorentz group. We derive the path integral of the gravity field as known in quantum field theory and discuss the situation of a free gravity field (like the electromagnetic field). We find that this situation occurs only in background spacetime: weak gravity, the situation of low matter density. We search for a theory in which the gravity field is dynamical at any energy in arbitrary curved spacetime. For that, we suggest a gravity-area duality, which makes it possible to study both gravity and area as dynamical fields in arbitrary curved spacetime. We find that the area field exists in the space-like region and the gravity field in the time-like region, and that the tensor product of the gravity and area fields, in self-dual representation, satisfies the reality condition. We derive the static potential of exchanging gravitons in scalar and spinor fields: the Newtonian gravitational potential.
[3147] vixra:1708.0354 [pdf]
Sirenomelia, the Mermaid Syndrome in Kuwait: A Case Report
Sirenomelia, also called mermaid syndrome, is a rare congenital malformation of uncertain etiology. It is characterized by fusion of the lower limbs and is commonly associated with severe urogenital and gastrointestinal malformations. We report a case of sirenomelia occurring in a 25-year-old Kuwaiti woman following premature rupture of membranes. This is the first documented case in this country.
[3148] vixra:1708.0353 [pdf]
Fetal Echography Remotely Controlled Using a Tele-Operated Motorized Probe and Echograph Unit
Objective: to evaluate the performance of a new device for fetal tele-echography in isolated medical centers. Methods: fetal tele-echography and Doppler were performed using a) a portable echograph whose settings and functions (pulsed and color Doppler, 3D capture, etc.) can be operated remotely via the internet, and b) motorized probes (400 g, 430 cm3) whose transducer can also be oriented remotely by an expert via the internet. The pregnant women were in medical centers far away from the expert center.
[3149] vixra:1708.0352 [pdf]
A Staged Feasibility Study of a Novel Vaginal Bowel Control System for the Treatment of Accidental Bowel Leakage in Adult Women
Background: Accidental bowel leakage, or fecal incontinence, impacts the quality of life of women of all ages. A minimally invasive vaginal bowel control system was designed to reduce accidents and provides a new health care option for women. Methods: A feasibility study was conducted to evaluate the fit, patient comfort, and ease of use of this novel vaginal bowel control therapy at home, to better inform device design, treatment delivery, and the design of a subsequent pivotal clinical trial protocol. Staged evaluations were performed in women with and without self-reported accidental bowel leakage of any severity. Wear duration progressed from an initial one-time, in-office fitting to extended-wear periods at home. Device-related adverse events were collected in all subjects exposed to the device. Treatment responses were collected at baseline and after one month of wear in women with accidental bowel leakage. Additionally, device comfort and satisfaction were assessed.
[3150] vixra:1708.0351 [pdf]
A Rare Case of Newborn with Accessory Scrotum Associated with Bifid Scrotum and Perineal Lipoma
We report a case of bifid scrotum with accessory scrotum and peduncular lipoma in the perineal region occurring in a full-term male neonate. Physical examination showed two soft perineal masses located between a bifid scrotum and the anus. No abnormalities of the anus were detected. The patient underwent ultrasound and magnetic resonance examinations, confirming homogeneous fat tissue in the posterior mass and showing fluid content inside the anterior one. The patient also underwent a Gastrografin enema, and no anal or colon anomalies were detected. The masses were completely excised, and the histological examination revealed a lipoma with tissue suggestive of scrotum, so a definite diagnosis of accessory scrotum associated with lipoma was made.
[3151] vixra:1708.0341 [pdf]
Routing Games Over Time with FIFO Policy
We study atomic routing games where every agent travels both along its decided edges and through time. The agents arriving on an edge are first lined up in a \emph{first-in-first-out} queue and may wait: an edge is associated with a capacity, which defines how many agents-per-time-step can pop from the queue's head and enter the edge, to transit for a fixed delay. We show that the best-response optimization problem is not approximable, and that deciding the existence of a Nash equilibrium is complete for the second level of the polynomial hierarchy. Then, we drop the rationality assumption, introduce a behavioral concept based on GPS navigation, and study its worst-case efficiency ratio to coordination.
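The edge model described (a FIFO queue, a per-time-step capacity, a fixed transit delay) can be simulated directly. Below is a minimal single-edge sketch in Python, with a hypothetical arrivals-list interface that is not taken from the paper:

```python
from collections import deque

def simulate_edge(arrivals, capacity, delay):
    """Simulate one FIFO edge in discrete time: each step, at most
    `capacity` agents pop from the queue head and enter the edge,
    exiting `delay` steps later. `arrivals[t]` lists the agents reaching
    the queue at step t. Returns {agent: exit_time}."""
    queue = deque()
    exit_time = {}
    total = sum(len(batch) for batch in arrivals)
    # Run long enough for every queued agent to be served
    for t in range(len(arrivals) + total + delay + 1):
        if t < len(arrivals):
            queue.extend(arrivals[t])
        for _ in range(min(capacity, len(queue))):
            exit_time[queue.popleft()] = t + delay
    return exit_time
```

With capacity 1 and delay 2, three agents arriving simultaneously leave the edge at times 2, 3 and 4: the queue serializes them, which is exactly the waiting effect the game-theoretic analysis in the paper reasons about.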
[3152] vixra:1708.0337 [pdf]
QCD Self-Consistent Only with a Self-Consistent QED
The Standard Model of particle physics, based on the group structure $SU(N)_c \otimes SU(2)_L \otimes U(1)_Y$ (for $N_c = 3$), has been very successful. However, in it the electric charge is not quantized and is fixed by hand to be 2/3 and -1/3. This is its major shortcoming. This model runs into conflict with another similarly structured but actually quite different model, wherein the electric charge is fully quantized and depends upon the colour degree of freedom as well. We study this basic conflict between these models and how they connect to a consistent study of Quantum Chromodynamics (QCD) for an arbitrary number of colours. We run into a basic issue of the consistency of Quantum Electrodynamics (QED) with these fundamentally different charges. Studying the consistency of QCD and QED together makes discriminating and conclusive statements about the relevance of these two model structures.
[3153] vixra:1708.0280 [pdf]
New Approaches to Melanoma Treatment: Checkpoint Inhibition with Novel Targeted Therapy
Melanoma is the most dangerous type of skin cancer, largely due to its propensity for recurrence and metastasis, even after removal of malignant tissue. When melanoma reaches advanced stages, the disease becomes refractory to many types of therapy, which has created a need for novel therapeutic strategies to combat the disease. Our group focuses on the oncogenic function of a neuronal receptor, metabotropic glutamate receptor 1 (mGluR1). When mGluR1 is aberrantly expressed in melanocytes, elevated levels of extracellular glutamate mediate the constitutive activation of the receptor to promote cell proliferation. We are exploring the potential synergistic efficacy of combining a glutamatergic signaling inhibitor with a checkpoint inhibitor antibody.
[3154] vixra:1708.0267 [pdf]
Density Matrices and the Standard Model
We use density matrices to explore the possibility that the various flavors of quarks and leptons are linear superpositions over a single particle whose symmetry follows the finite subgroup $S_4$ of the simple Lie group SO(3). We use density matrices which allow modeling of symmetry breaking over temperature, and can incorporate superselection sectors. We obtain three generations each consisting of the quarks and leptons and an SU(2) dark matter doublet. We apply the model to the Koide mass equations and propose extensions of the theory to other parts of the Standard Model and gravitation.
[3155] vixra:1708.0257 [pdf]
Some Insight into Relativity Principle, Covariant Equations, and the Use of Abduction in Physics
We show relations of the Relativity Principle (RP) to the linear features of space(time) (Sec. 1). In Sec. 2, the RP is written in Minkowski space. Sec. 3 is devoted to Einstein's relativity principle. The covariant form of equations and the RP in the free Fock space (FFS) are discussed in Sec. 4. Sec. 5 discusses a possible trace of quantum features in classical mechanics and possible sources of nonlinearity in the basic equations of nature. Final comments are contained in Sec. 6.
[3156] vixra:1708.0255 [pdf]
A New Idea on the Goldbach Conjecture
We study a new idea on the Goldbach conjecture: the larger the even number, the more numerous, on average, its representations as a sum of two primes. We then prove that every sufficiently large even number is the sum of two primes.
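The growth claim can be checked empirically by counting, for an even n, the unordered prime pairs summing to n. A short sieve-based sketch (not part of the paper's argument, just an illustration of the quantity it discusses):

```python
def goldbach_count(n):
    """Number of unordered prime pairs (p, q), p <= q, with p + q = n."""
    # Sieve of Eratosthenes up to n
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    # Count pairs with the smaller prime at most n // 2
    return sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])
```

For instance, 10 has the two representations 3 + 7 and 5 + 5, while 100 has six; tabulating this count over a range of even numbers makes the average upward trend visible.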
[3157] vixra:1708.0254 [pdf]
Double Conformal Space-Time Algebra for General Quadric Surfaces in Space-Time
The G(4,8) Double Conformal Space-Time Algebra (DCSTA) is a high-dimensional 12D Geometric Algebra that extends the concepts introduced with the G(8,2) Double Conformal / Darboux Cyclide Geometric Algebra (DCGA) with entities for Darboux cyclides (incl. parabolic and Dupin cyclides, general quadrics, and ring torus) in spacetime with a new boost operator. The base algebra in which spacetime geometry is modeled is the G(1,3) Space-Time Algebra (STA). Two G(2,4) Conformal Space-Time subalgebras (CSTA) provide spacetime entities for points, hypercones, hyperplanes, hyperpseudospheres (and their intersections) and a complete set of versors for their spacetime transformations that includes rotation, translation, isotropic dilation, hyperbolic rotation (boost), planar reflection, and (pseudo)spherical inversion. G(4,8) DCSTA is a doubling product of two orthogonal G(2,4) CSTA subalgebras that inherits doubled CSTA entities and versors from CSTA and adds new 2-vector entities for general (pseudo)quadrics and Darboux (pseudo)cyclides in spacetime that are also transformed by the doubled versors. The "pseudo" surface entities are spacetime surface entities that use the time axis as a pseudospatial dimension. The (pseudo)cyclides are the inversions of (pseudo)quadrics in hyperpseudospheres. An operation for the directed non-uniform scaling (anisotropic dilation) of the 2-vector general quadric entities is defined using the boost operator and a spatial projection. Quadric surface entities can be boosted into moving surfaces with constant velocities that display the Thomas-Wigner rotation and length contraction of special relativity. DCSTA is an algebra for computing with general quadrics and their inversive geometry in spacetime. For applications or testing, G(4,8) DCSTA can be computed using various software packages, such as the symbolic computer algebra system SymPy with the GAlgebra module.
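As a minimal illustration of the kind of computation such algebras involve (a toy sketch of the base G(1,3) STA only, far smaller than the full G(4,8) DCSTA and independent of the GAlgebra package mentioned above; the names and conventions are ours):

```python
# Geometric product of basis blades in the G(1,3) Space-Time Algebra.
# Basis vector 0 is timelike (squares to +1); 1, 2, 3 are spacelike (square to -1).
METRIC = {0: 1, 1: -1, 2: -1, 3: -1}

def blade_product(a, b):
    """Multiply two basis blades, each given as a sorted tuple of vector indices.

    Returns (sign, blade): distinct basis vectors anticommute, so sorting the
    concatenated index list flips the sign once per transposition; repeated
    indices then contract to their metric value.
    """
    indices = list(a) + list(b)
    sign = 1
    # Bubble sort, flipping the sign for every swap of distinct indices.
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(indices) - 1):
            if indices[i] > indices[i + 1]:
                indices[i], indices[i + 1] = indices[i + 1], indices[i]
                sign = -sign
                swapped = True
    # Contract adjacent equal indices using the metric.
    blade = []
    i = 0
    while i < len(indices):
        if i + 1 < len(indices) and indices[i] == indices[i + 1]:
            sign *= METRIC[indices[i]]
            i += 2
        else:
            blade.append(indices[i])
            i += 1
    return sign, tuple(blade)
```

Here `blade_product((0,), (0,))` gives `(1, ())` since the time basis vector squares to +1, while `blade_product((1,), (1,))` gives `(-1, ())` for a spatial vector.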
[3158] vixra:1708.0240 [pdf]
Counting Complexity by Using Partial Circuit Independency
This paper describes the complexity of NP problems using "effective circuit" independency and applies it to the SAT problem. The inputs of a circuit family that computes a P problem have an explicit symmetry indicated by the circuit structure. To clarify this explicit symmetry, we define an "effective circuit" as the partial circuit necessary to compute the target inputs. The effective circuit set divides the problem into symmetric partial problems. On the other hand, the inputs of an NTM that computes an NP problem have an extra implicit symmetry indicated by the nondeterministic transition functions. To clarify this implicit symmetry, we define a special DTM, the "concrete DTM", whose index i corresponds to a selection of nondeterministic transition functions. That is, the NTM splits into many different asymmetric DTMs and computes all of them at the same time. Considering concrete DTMs and effective circuit sets, a circuit family [SAT] that solves the SAT problem has to include every effective circuit set [CVPi] corresponding to a concrete DTM as a Circuit Value Problem. Each [CVPi] has a unique gate, and [SAT] must include all [CVPi]. The number of [CVPi] exceeds any polynomial in the input size. Therefore, [SAT] is of superpolynomial size.
[3159] vixra:1708.0213 [pdf]
Feeding the Universe, Quantum Scaling and Stable Neutrinos
Based on the quantum modification of general relativity (Qmoger), it is shown that the Vacuum is continuously feeding the universe with ultralight particles (vacumos). Vacumos are transforming into heavier (but still ultralight) gravitons, which form a quantum condensate even at high temperature. The condensate, under gravitational pressure in galaxies, produces and expels from the hot places the first generation of "ordinary" massive fermions, which are identified with neutrinos. This explains the stability of all three neutrinos, which was a puzzle in the Standard Model. The mass of the neutrino, estimated in terms of a new scaling in Qmoger, satisfies the experimental bound. The oscillations of the neutrino are explained in terms of interaction with the background condensate of gravitons. The electric dipole moment of the neutrino is also estimated. The situation with neutrinos is an example of the interface between dark and ordinary matter (Idom), introduced before in explanation of the phenomena of subjectivity.
[3160] vixra:1708.0211 [pdf]
On the Evidence of the Number of Colours in Particle Physics
It is commonly believed (and reflected in current textbooks on particle physics) that the R ratio in $e^+ e^-$ scattering and $\pi^0 \rightarrow \gamma \gamma$ decay provide strong evidence of the three colours of the Quantum Chromodynamics group ${SU(3)}_c$. This is well documented in the current literature. However, here we show that with a better understanding of the structure of the electric charge in the Standard Model of particle physics at hand, one rejects the second piece of evidence but continues to accept the first. Thus $\pi^0 \rightarrow \gamma \gamma$ decay is no longer a proof of three colours. This fact is well known; unfortunately, however, some kind of inertia has prevented it from being taught to students. As such, textbooks and monographs should be corrected so that more accurate information may be transmitted to students.
[3161] vixra:1708.0190 [pdf]
Universal Theory of General Invariance
Through the introduction of a new principle of physics, we extend quantum mechanics by proposing three additional postulates. With them we construct a quantum theory where gravitation emerges from the thermodynamics of an entangled vacuum. This new quantum theory reproduces the observations of the $\Lambda$CDM model of cosmology, predicting the existence of massive vacua $ M_{\text{on}} = \sqrt{\dfrac{\hbar c}{G}}$ and $M_{\text{off}} = \sqrt{\dfrac{ \Lambda^{2} \hbar^{3}G}{c^{5}}}$. Finally, we propose an experiment for the former's direct detection.
[3162] vixra:1708.0143 [pdf]
Hilbert's Forgotten Equation of Velocity Dependent Acceleration in a Gravitational Field
The principle of equivalence is used to argue that the known law of decreasing acceleration for high speed motion, in a low acceleration regime, produces the same result as found for a weak gravitational field, with subsequent implications for stronger fields. This result coincides with Hilbert's little explored equation of 1917, regarding the velocity dependence of acceleration under gravity. We derive this result, from first principles exploiting the principle of equivalence, without need for the full general theory of relativity.
[3163] vixra:1708.0138 [pdf]
Feeding the Universe, Quantum Scaling and Neutrino
Based on the quantum modification of general relativity (Qmoger), it is shown that the Vacuum is continuously feeding the universe with ultralight particles (vacumos). Vacumos are transforming into heavier (but still ultralight) gravitons, which form a quantum condensate even at high temperature. The condensate, under gravitational pressure in galaxies, produces the first generation of "ordinary" massive particles, which are identified with neutrinos. The neutrino mass estimated in this theory satisfies the experimental bound. The oscillations of the neutrino are explained in terms of interaction with the background condensate of gravitons. The electric dipole moment of the neutrino is estimated. A connection of this theory with the Standard Model is discussed.
[3164] vixra:1708.0137 [pdf]
Approaches Used in Theoretical Physics
The intention of theoretical physics is to construct mathematical models so that experimental data can be predicted through calculation. Models are based on approaches which define the nature of the models. The most common approaches used are of mythological, mathematical or physical nature. Examples of mythological approaches are gluons, gravitons, dark matter, dark energy. Examples of mathematical approaches are the MOND theory for gravitation, special relativity and general relativity, the theory of quantum mechanics with the gauge principle. Examples of approaches of physical nature are the String, Vortex and Focal Point theories, the Emission theories, the theory of gravitation as the result of the reintegration of migrated electrons and positrons to their nuclei, the theory of Galilean relativity with the gamma factor.
[3165] vixra:1708.0122 [pdf]
Gauge Groups and Wavefunctions - Balancing at the Tipping Point
“What the Hell is Going On?” is Peter Woit’s ‘Not Even Wrong’ blog post of July 22nd 2017, a commentary on Nima Arkani-Hamed’s view of the present barren state of LHC physics, the long-dreaded Desert. This paper addresses the roots of the quandary which are fundamental, branching deep into the measurement problem and the enigmatic unobservable character of the wavefunction, and the confusion generating an ongoing proliferation of quantum interpretations.
[3166] vixra:1708.0116 [pdf]
Feeding the Universe, Qualia and Neutrino
Based on the quantum modification of general relativity (Qmoger), it is shown that the Vacuum is continuously feeding the universe and partially merging with it, not unlike an ovary with a fruit. Subjective experiences (qualia) are considered within the frame of the Qmoger theory. A relation is found between qualia and the neutrino oscillations.
[3167] vixra:1708.0115 [pdf]
The Higgs Troika
Ternary Clifford algebra is connected with three Higgs bosons and three fermion generations, whereas cube roots of time vector are associated with three quark colors and three weak gauge fields. Four-fermion condensations break chiral symmetries, induce axion-like bosons, and dictate fermion mass hierarchies.
[3168] vixra:1708.0090 [pdf]
A Generalization of the Thomas Precession, Part I
The time shown by a moving clock depends on its history. The Lorentz transform does not distinguish between the history of an accelerated clock and a constant velocity clock. The history of the clock can be assimilated by integrating the first derivative of the Thomas precession from the time t=0. A definite integral is required because the unknown trajectory of the clock in the distant past affects its displayed time. The coordinates are spinning in the second frame of reference during the integration, but in the definite integral from time t=0 to time t the spin accumulates to a specific angle. The integral is equivalent to a Lorentz transform followed by a space rotation. A space rotation does not affect the invariant quantity $r^2 - c^2 t^2$. The history of a jerked clock is different than that of an accelerated clock. The solution in that order is equivalent to a Lorentz transform followed by two consecutive space rotations in different directions. Similarly, there are three rotations in the $\ddot{a}$ solution.
[3169] vixra:1708.0067 [pdf]
Is the Chemical Bond Consistent with the Theory of Relativity?
An experimental, model-free determination of the number of electrons participating in a chemical bond has been achieved. This determination corroborates the valence theory of Lewis and coincides with the current state of the art. The relationship between a normalized bond area and its bond energy is used to precisely characterize selected organic molecules. The mass fusion of bonding electrons, with its mass loss or gain, is the probable origin of chemical energy. As a consequence, a probable geometric meaning of thermodynamic functions is provided.
[3170] vixra:1708.0065 [pdf]
Meta Mass Function
In this paper, a meta mass function (MMF) is presented, and a new evidence theory with complex numbers is developed. Unlike existing evidence theory, the mass function in complex evidence theory is modelled with complex numbers and named the meta mass function. Classical evidence theory is the special case obtained when the complex-valued mass function degenerates to real numbers.
[3171] vixra:1708.0054 [pdf]
Entropy as a Bound for Expectation Values and Variances of a General Quantum Mechanical Observable
The quantum information-theoretic approach has been identified as a way to understand the foundations of quantum mechanics as early as 1950, due to Shannon. However, there has not been enough advancement or rigorous development of the subject. In the following paper we try to find a relationship between a general quantum mechanical observable and the von Neumann entropy. We find that the expectation values and the uncertainties of observables have bounds which depend on the entropy. The results also show that the von Neumann entropy is not just the uncertainty of the state but also encompasses information about the expectation values and uncertainties of any observable, which depends on the observer's choice of a particular measurement. Also, a reverse uncertainty relation is derived for n quantum mechanical observables.
[3172] vixra:1708.0053 [pdf]
Kirchhoff’s Law of Thermal Emission: What Happens When a Law of Physics Fails an Experimental Test?
Kirchhoff’s Law of Thermal Emission asserts that, given sufficient dimensions to neglect diffraction, the radiation contained within arbitrary cavities must always be black, or normal, dependent only upon the frequency of observation and the temperature, while independent of the nature of the walls. With this in mind, simple tests were devised to demonstrate that Kirchhoff’s Law is invalid. It is readily apparent that all cavities appear black at room temperature within the laboratory. However, two completely different causes are responsible: 1) cavities made from good emitters self-generate the appropriate radiation and 2) cavities made from poor emitters are filled with radiation already contained in the room, completely independent of the temperature of the cavity. The distinction between these two scenarios can be made by placing a heated object near either type of cavity. In the first case, the cavity emission will remain essentially undisturbed. That is because a real blackbody can do work, instantly converting incoming radiation to an emission which corresponds to the temperature of its walls. In the second case, the cavity becomes filled with radiation which is not characteristic of its own temperature. Contrary to current belief, cavity radiation is entirely dependent on the nature of the walls. When considering a perfect reflector, the radiation will not be black but, rather, will reflect any radiation which was previously incident upon the cavity from the surroundings. This explains why microwave cavities are resonant, not black, and why it is possible to acquire Ultra High Field Magnetic Resonance Imaging (UHFMRI) images using cavity resonators. Conversely, real blackbodies cannot contain any radiation other than that which is characteristic of the temperature of their walls, as shown in Planck’s equation. 
Blackbody radiation is not universal, Kirchhoff’s Law is false, and cavity radiation is absolutely dependent on the nature of the walls at every frequency of observation. Since they were derived from this law, the concepts of Planck time, Planck temperature, Planck length, and Planck mass are not universal and are devoid of any fundamental meaning in physics.
[3173] vixra:1708.0015 [pdf]
The Cyclic Universe Through the Origins
I report the results of the cyclic universe theory and connect them to the origins of science and God, presenting some discussion of the cyclic model and string theory, and of the cyclic model and the cosmological constant, and discussing Einstein's equation relating mass and energy.
[3174] vixra:1708.0011 [pdf]
General Solutions of Mathematical Physics Equations
In this paper, using three newly proposed transformation methods, we obtain general and exact solutions of definite-solution problems of the Laplace equation, the Poisson equation, the Schrödinger equation, the homogeneous and non-homogeneous wave equations, the Helmholtz equation and the heat equation. In the process of solving, we find that in the more universal case, general solutions of partial differential equations take various forms: basic general solutions, series general solutions, transformational general solutions, generalized series general solutions and so on.
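As a standard sanity check of one of the equations listed (the 1D heat equation u_t = u_xx in normalized units; this illustrates only the classical separable solution, not the paper's transformation methods):

```python
import math

def u(x, t, k=2.0):
    """Separable solution u(x,t) = exp(-k^2 t) * sin(k x) of u_t = u_xx."""
    return math.exp(-k * k * t) * math.sin(k * x)

def residual(x, t, k=2.0, h=1e-4):
    """Central-difference estimate of u_t - u_xx; ~0 for a true solution."""
    u_t = (u(x, t + h, k) - u(x, t - h, k)) / (2 * h)
    u_xx = (u(x + h, t, k) - 2 * u(x, t, k) + u(x - h, t, k)) / (h * h)
    return u_t - u_xx

r = residual(0.7, 0.1)
```

The residual is at the level of finite-difference truncation error, confirming the solution.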
[3175] vixra:1708.0008 [pdf]
Special Relativity in Complex Space-Time. Part 2. Basic Problems of Electrodynamics.
This article discusses an electric field in complex space-time. Using an orthogonal paravector transformation that preserves the invariance of the wave equation and does not belong to the Lorentz group, the Gaussian equation has been transformed to obtain relationships corresponding to the Maxwell equations. These equations are analysed for compliance with classical electrodynamics. Although the Lorenz gauge condition has been abandoned and two of the modified Maxwell's equations differ from the classical ones, the obtained results are not inconsistent with experiment because they preserve the classical laws of the theory of electricity and magnetism contained therein. In conjunction with the previous papers, our purpose is to show that the space-time of high velocity has a complex structure that orders the laws of classical physics differently but does not change them.
[3176] vixra:1708.0002 [pdf]
Sedeonic Duality-Invariant Field Equations for Dyons
We discuss the theoretical description of dyons having simultaneously both electric and magnetic charges on the basis of space-time algebra of sixteen-component sedeons. We show that the generalized sedeonic equations for electromagnetic field of dyons can be reformulated in equivalent canonical form as the equations for redefined field potentials, field strengths and sources. The relations for energy and momentum as well as the relations for Lorentz invariants of dyonic electromagnetic field are derived. Additionally, we discuss the sedeonic second-order Klein-Gordon and first-order Dirac wave equations describing the quantum behavior of dyons in an external dyonic electromagnetic field.
[3177] vixra:1707.0417 [pdf]
Quantum Scaling, Neutrino and Life
From the quantum modification of general relativity (Qmoger), supported by cosmic data (without fitting), a new quantum scaling is derived. This scaling indicates a mechanism for the formation of new particles from the background matter. Based on this scaling, the mass of the neutrino is estimated in agreement with experimental bounds. The neutrino oscillations are explained in terms of interaction with the background quantum condensate of gravitons. Subjective experiences (qualia) and the functioning of living cells are also connected with the background condensate.
[3178] vixra:1707.0385 [pdf]
The Seven Higgs Bosons and the Heisenberg Uncertainty Principle Extended to D Dimensions
The Heisenberg uncertainty principle, extended to d dimensions, provides proof of the existence of seven dimensions compactified in circles. It allows us to obtain the masses of the seven Higgs bosons, including the empirically known one (mh(1) = 125.0901 GeV), and to theorize the calculation of the mass of the stop quark boson (745 GeV).
[3179] vixra:1707.0381 [pdf]
Quantum Cosmology, New Scaling, Mass of Oscillating Neutrino and Life
From the quantum modification of general relativity (Qmoger), supported by cosmic data (without fitting), a new quantum scaling is derived. This scaling indicates a mechanism for the formation of new particles from the background matter. Based on this scaling, the mass of the neutrino is estimated in agreement with experimental bounds. The neutrino oscillations are explained in terms of interaction with the background quantum condensate of gravitons. Subjective experiences (qualia) and the functioning of living cells are also connected with the background condensate.
[3180] vixra:1707.0344 [pdf]
An Approximate Non-Quantum Calculation of the Aharonov-Bohm Effect
In the Aharonov-Bohm effect for a magnetic solenoid a moving charged particle seems to be influenced by the 4-potential in a region where there are no fields in the laboratory frame of reference. The 4-potential should be transformed to the frame of reference of the particle before computing the fields. There is an E field in its frame of reference. The field accelerates a moving charged particle. One of the components of the acceleration vector is in the same direction as the particle's velocity in the first frame of reference. The resulting longitudinal displacement in the path integral, when scaled in units of the de Broglie wavelength for the particle, is approximately the same as the phase of the Aharonov-Bohm solution for long paths. The scalar solution does not require transformation. It follows from the static Coulomb solution and the Newton equations.
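For comparison, the standard quantum mechanical result against which such approximations are checked is the Aharonov-Bohm phase Δφ = qΦ/ħ, where Φ is the flux enclosed by the path. A minimal sketch (the solenoid parameters below are illustrative values of ours, not taken from the paper):

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def ab_phase(charge, b_field, radius):
    """Aharonov-Bohm phase around a solenoid: delta_phi = q * Phi / hbar,
    with enclosed flux Phi = B * pi * r^2."""
    flux = b_field * math.pi * radius ** 2
    return charge * flux / HBAR

# Illustrative numbers: a 1 mT solenoid of radius 1 micron.
phi = ab_phase(E_CHARGE, 1e-3, 1e-6)  # about 4.8 radians for these values
```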
[3181] vixra:1707.0338 [pdf]
Conductivity Equations of Protons Transporting Through 2D Crystals Obtained with the Rate Process Theory and Free Volume Concept
Eyring’s rate process theory and the free volume concept are employed to treat protons (or other particles) transporting through a 2D (two-dimensional) crystal like graphene or hexagonal boron nitride. The protons are assumed to be activated first in order to participate in conduction, and the conduction rate depends on how much free volume is available in the system. The obtained proton conductivity equations show that only the number of conduction protons, the proton size and packing structure, and the energy barrier associated with the 2D crystal are critical; quantized conductance is unexpectedly predicted with a simple Arrhenius type temperature dependence. The predictions agree well with experimental observations and clear up many puzzles, such as the much smaller energy barrier determined from experiments than from density functional calculations, and an isotope separation rate independent of the energy barrier of the 2D crystal. Our work may deepen our understanding of how protons transport through a membrane, and it has direct implications for hydrogen-related technology and proton-involved bioprocesses.
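The "simple Arrhenius type temperature dependence" mentioned can be sketched as follows (the prefactor and the 0.3 eV barrier are placeholder values of ours, not fitted parameters from the paper):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def conductivity(temp_k, sigma0=1.0, barrier_ev=0.3):
    """Arrhenius-type conductivity: sigma = sigma0 * exp(-Ea / (kB * T)).
    sigma0 (prefactor) and barrier_ev (activation energy) are illustrative."""
    return sigma0 * math.exp(-barrier_ev / (K_B * temp_k))
```

Conduction is thermally activated: lowering the temperature reduces the conductivity exponentially.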
[3182] vixra:1707.0333 [pdf]
Mass Interaction Principle as a Common Origin of Special Relativity and Quantum Behaviours of Massive Particles
The author believes there are spacetime particles (STP) which can sense all matter particles ubiquitously. Matter particles change their states when struck by STP. The underlying property of mass is a statistical property emerging from random impacts in spacetime. We propose a mass interaction principle (MIP), which states that any particle with mass m will undergo a random motion without friction, due to random impacts from spacetime; each impact changes the action of the particle by an amount nh (n any integer). Starting from the concept of statistical mass, we propose the fundamental MIP. We conclude that inertial mass has to be a statistical property, which measures the diffusion ability of all matter particles in spacetime. We prove that all the essential results of special relativity follow from the MIP. The speed of light in vacuum no longer needs any special treatment; instead, the speed of STP has a more fundamental physical meaning, representing the upper limit of the speed of information propagation in physics. Moreover, we derive an uncertainty relation asserting a fundamental limit to the precision regarding mass and the diffusion coefficient. Within this context, wave-particle duality is a novel property emerging from random impacts by STP. Furthermore, an interpretation of Heisenberg's uncertainty principle is suggested, with a stochastic origin of Feynman's path integral formalism. It is shown that we can construct a physical picture distinct from the Copenhagen interpretation, reinvestigate the nature of spacetime, and reveal the origin of quantum behaviours from a realistic point of view.
[3183] vixra:1707.0326 [pdf]
Tracer Diffusion in Hard-Sphere Colloidal Suspensions
A theory of tracer diffusion in hard-sphere suspensions is developed by using irreversible thermodynamics to obtain a colloidal version of the Kedem-Katchalsky equations. Onsager reciprocity yields relationships between the cross diffusion coefficients of the particles and the reflection coefficient of the colloidal suspension. The theory is illustrated by modelling a self-forming colloidal membrane that filters tracer impurities from the pore fluid.
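The classical Kedem-Katchalsky equations that the colloidal version generalizes can be sketched as follows (all parameter values below are illustrative, not from the paper):

```python
def kedem_katchalsky(lp, sigma, omega, dp, dpi, c_mean):
    """Classical Kedem-Katchalsky fluxes across a membrane:
      volume flux  Jv = Lp * (dP - sigma * dPi)
      solute flux  Js = omega * dPi + (1 - sigma) * c_mean * Jv
    where sigma is the reflection coefficient and omega the solute permeability."""
    jv = lp * (dp - sigma * dpi)
    js = omega * dpi + (1.0 - sigma) * c_mean * jv
    return jv, js

# A perfectly semipermeable membrane (sigma = 1, omega = 0) with dP = dPi
# carries no volume or solute flux.
jv, js = kedem_katchalsky(lp=1.0, sigma=1.0, omega=0.0, dp=2.0, dpi=2.0, c_mean=0.5)
```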
[3184] vixra:1707.0322 [pdf]
Bell's Dilemma Resolved, Nonlocality Negated, QM Demystified, Etc.
Eschewing naive realism, we define true (classical/quantum) realism:= some existents (ie, some Bell-beables) may change interactively. We then show that Bell's mathematical ideas re local causality—from his 1964:(1)-(2) to his 1990a:(6.9.3)—are valid under true realism. But we refute Bell's analyses (and his ‘local realism’), as we resolve his consequent ‘action-at-a-distance’ dilemma in favor of true locality:= no influence propagates superluminally. In short: defining beables by properties and values—and allowing that locally-causal interactions may yield new beables—we predict the probabilities of such interaction outcomes via equivalence-classes that are weaker (hence more general) than the corresponding classes in EPR/Bell. In this way delivering the same results as quantum theory and experiment—using EPRB, CHSH, GHZ and 3-space—we also advance QM's reconstruction in spacetime with a new vector-product for geometric algebra. True local realism thus supports local causality, resolves Bell's dilemma, negates nonlocality, demystifies QM, rejects naive realism, eliminates the quantum/classical divide (since observables are clearly beables; being or not being, prior to an interaction, but certainly existing thereafter), etc: all at the level of undergraduate math and logic, and all contra the analyses and impossibility-claims of Bell and many others. We also show that Bayes' Law and Malus' Law hold, undiminished, under true local realism and the quantum.
[3185] vixra:1707.0303 [pdf]
Restraining General Covariance
The Postulate of General Covariance, a last choice that Einstein introduced in his general relativistic theory of gravitation, endows the theory with an excessive generality that needs to be restrained. Otherwise it is easy to check that, beyond the first order of approximation, several space-time models, all of them derived from Schwarzschild's original solution of Einstein's field equations for a static spherical source, would predict different results for two fundamental experiments: the measurement of the force acting on a given passive test mass at a distance $R$ from the center of the source, or equivalently, its initial acceleration when falling from rest; and the comparison of two-way transit times of light traveling in an optic fiber along a vertical direction and along a meridian. The last section describes how to find numeric, static and spherically symmetric interior models with pre-selected mass $m$ and radius $R$, with $m/R<1$ or $m/R>1$, solving the horizon problem.
[3186] vixra:1707.0301 [pdf]
Theoretical Physics
This book proposes a review and, on important points, a new formulation of the main concepts of Theoretical Physics. Rather than offering an interpretation based on exotic physical assumptions (additional dimensions, new particles, cosmological phenomena,...) or a brand new abstract mathematical formalism, it proceeds to a systematic review of the main concepts of Physics, as Physicists have always understood them: space, time, material body, force fields, momentum, energy... and proposes the right mathematical objects to deal with them, chosen among well-grounded mathematical theories. Proceeding this way, the reader will have a comprehensive, consistent and rigorous understanding of the main topics of the Physics of the 21st century, together with many tools to do practical computations. After a short introduction about the meaning of Theories in Physics, a new interpretation of the main axioms of Quantum Mechanics is proposed. It is proven that these axioms come actually from the way mathematical models are expressed, and this leads to theorems which validate most of the usual computations and provide safe and clear conditions for their use, as is shown in the rest of the book. Relativity is introduced through the construct of the Geometry of General Relativity, from 5 propositions and the use of tetrads and fiber bundles, which provide tools to deal with practical problems, such as deformable solids. A review of the concept of motion leads to associating a frame with all material bodies, whatever their scale, and to the representation of motion in Clifford Algebras. Momenta, translational and rotational, are then represented by spinors, which provide a clear explanation for the spin and the existence of anti-particles. The force fields are introduced through connections, in the framework of gauge theories, which is here extended to the gravitational field.
It shows that this field actually has a rotational and a transversal component, which are masked in the usual treatment by the metric and the Levi-Civita connection. Thorough attention is given to the topic of the propagation of fields, with new and important results. The general theory of lagrangians in the application of the Principle of Least Action is reviewed, and two general models, incorporating all particles and fields, are explored and used to introduce the concepts of currents and the energy-momentum tensor. Precise guidelines are given to find solutions for the equations representing a system in the most general case. The topic of the last chapter is discontinuous processes. The phenomenon of collision is studied, and we show that bosons can be understood as discontinuities in the fields.
[3187] vixra:1707.0291 [pdf]
Quantum Cosmology and Life
In the frame of the quantum modification of general relativity (Qmoger), supported by cosmic data (without fitting), a new physically distinguished scale is obtained. This scale indicates a mechanism for the formation of new particles from the background matter. At the same time, that scale corresponds to the size of a living cell.
[3188] vixra:1707.0270 [pdf]
Modified Coulomb Forces and the Point Particles States Theory
A system of equations of motion of point particles is considered within the framework of classical dynamics (the three Newton's laws). The equations of the system are similar to the equation of Wilhelm Eduard Weber from his theory of electrodynamics. However, while deriving the equations of the system, the Coulomb law, as the law for point particles which are motionless relative to one another (used by Weber in formulating his equation), is regarded as a hypothesis unverified experimentally. An alternative hypothesis is proposed, presuming that the Coulomb law describes the interaction of two electrically charged point particles within a determined range of their relative velocity magnitudes excluding a zero value; if the relative velocity magnitude is equal to zero, particles with like charges attract one another and those with unlike charges repel. The results of a mathematical analysis of the system of equations of motion of point particles, with the Coulomb forces acting between them modified in accordance with the alternative hypothesis, are used for modelling various physical phenomena and processes.
[3189] vixra:1707.0269 [pdf]
Statistical Methods in Astronomy
We present a review of data types and statistical methods often encountered in astronomy. The aim is to provide an introduction to statistical applications in astronomy for statisticians and computer scientists. We highlight the complex, often hierarchical, nature of many astronomy inference problems and advocate for cross-disciplinary collaborations to address these challenges.
[3190] vixra:1707.0262 [pdf]
The Universe or Nothing: The Heat Death of the Universe and Its Ultimate Fate
This paper overviews my hypothesis of the ultimate fate of the universe, showing how it will reach the heat death and how dark energy plays the main role in the universe's evolution and even its end. This paper opens a new path in the search for the nature of dark energy, knowing that it is the reason the universe is expanding, cooling and losing energy. Note that this paper assumes that the universe is closed.
[3191] vixra:1707.0258 [pdf]
A New Result About Prime Numbers: lim n→+∞ n/(p(n) − n(ln n + ln ln n − 1)) = +∞
In this short paper we propose a new result about prime numbers: lim n→+∞ n/(p(n) − n(ln n + ln ln n − 1)) = +∞, where p(n) denotes the n-th prime.
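The proposed limit can be spot-checked numerically. The sketch below (an illustration added here, not part of the paper; standard library only) computes the ratio n/(p(n) − n(ln n + ln ln n − 1)) for a few values of n; by Dusart's lower bound the denominator is positive for n ≥ 2, and the computed ratios grow slowly, consistent with divergence:

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes returning a list of primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def ratio(n, primes):
    """n / (p(n) - n(ln n + ln ln n - 1)), where primes[k-1] is the k-th prime."""
    p_n = primes[n - 1]
    return n / (p_n - n * (math.log(n) + math.log(math.log(n)) - 1))

primes = primes_up_to(1_500_000)  # enough primes to reach p(100000) = 1299709
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(ratio(n, primes), 2))
```

Note the growth is very slow (roughly like ln n / ln ln n), so the numerics only illustrate, and cannot establish, the claimed divergence.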
[3192] vixra:1707.0220 [pdf]
Proof of ZFC Axioms as Normal Statements No. 2.2
We interpret 5 of the 10 axioms of ZFC (Zermelo-Fraenkel set theory with the axiom of choice) as normal statements and prove them. These 5 sentences therefore don't need to be introduced as axioms, but can be used as proven statements.
[3193] vixra:1707.0181 [pdf]
14
Let $V$ be an asymptotically cylindrical K\"{a}hler manifold with asymptotic cross-section $\mathfrak{D}$. Let $E_\mathfrak{D}$ be a stable Higgs bundle over $\mathfrak{D}$, and $E$ a Higgs bundle over $V$ which is asymptotic to $E_\mathfrak{D}$. In this paper, using the continuity method of Uhlenbeck and Yau, we prove that there exists an asymptotically translation-invariant projectively Hermitian Yang-Mills metric on $E$.
[3194] vixra:1707.0176 [pdf]
Uncertainty and the Lonely Runner Conjecture
By convolving the distribution of one of the non-chosen runners with a step function (to introduce some uncertainty in its start time) we arrange that the mutual expectation reverts to the continuous extension of its value in the transcendental case.
[3195] vixra:1707.0161 [pdf]
Information Compression as a Unifying Principle in Human Learning, Perception, and Cognition
This paper reviews evidence for the idea that much of human learning, perception, and cognition, may be understood as information compression, and often more specifically as 'information compression via the matching and unification of patterns' (ICMUP). Evidence includes: information compression can mean selective advantage for any creature; the storage and utilisation of the relatively enormous quantities of sensory information would be made easier if the redundancy of incoming information were to be reduced; content words in natural languages, with their meanings, may be seen as ICMUP; other techniques for compression of information -- such as class-inclusion hierarchies, schema-plus-correction, run-length coding, and part-whole hierarchies -- may be seen in psychological phenomena; ICMUP may be seen in how we merge multiple views to make one, in recognition, in binocular vision, in how we can abstract object concepts via motion, in adaptation of sensory units in the eye of Limulus, the horseshoe crab, and in other examples of adaptation; the discovery of the segmental structure of language (words and phrases), grammatical inference, and the correction of over- and under-generalisations in learning, may be understood in terms of ICMUP; information compression may be seen in the perceptual constancies; there is indirect evidence for ICMUP in human cognition via kinds of redundancy such as the decimal expansion of Pi which are difficult for people to detect; much of the structure and workings of mathematics -- an aid to human thinking -- may be understood in terms of ICMUP; and there is additional evidence via the SP Theory of Intelligence and its realisation in the SP Computer Model. Three objections to the main thesis of this paper are described, with suggested answers. These ideas may be seen to be part of a 'Big Picture' with six components, outlined in the paper.
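Run-length coding, one of the compression techniques the paper relates to psychological phenomena, is simple to make concrete. The following minimal Python sketch (an illustration added here, not taken from the paper) encodes runs of repeated symbols as (symbol, count) pairs and inverts the encoding:

```python
from itertools import groupby

def run_length_encode(s):
    """Collapse each run of repeated symbols into a (symbol, count) pair."""
    return [(ch, len(list(grp))) for ch, grp in groupby(s)]

def run_length_decode(pairs):
    """Invert the encoding by expanding each (symbol, count) pair."""
    return "".join(ch * n for ch, n in pairs)

encoded = run_length_encode("aaabccccd")
print(encoded)  # [('a', 3), ('b', 1), ('c', 4), ('d', 1)]
assert run_length_decode(encoded) == "aaabccccd"
```

The encoded form is shorter than the input exactly when runs are long, i.e., when the input is redundant, which is the paper's broader point about compression exploiting redundancy.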
[3196] vixra:1707.0153 [pdf]
Stationary Frame is the “denied” Absolute Rest
Einstein’s relativism is not only “physical” but also philosophical, because it forbids any concept leading to the idea of the absolute. But this phobia translates into a flight forward without giving decisive answers to the crucial questions raised by the paradoxes arising from Special Relativity. In what follows, we show by a thought experiment that absolute rest has its discreet but real place in Einsteinian reasoning, under the pseudonym of the stationary system.
[3197] vixra:1707.0152 [pdf]
A Conjecture About Prime Numbers Assuming the Riemann Hypothesis
In this paper we propose a conjecture about prime numbers. Based on the result of Pierre Dusart stating that the n-th prime number is smaller than n(ln n + ln ln n − 0.9484) for n ≥ 39017, we propose that the n-th prime number is smaller than n(ln n + ln ln n − 1 + ε) for any ε > 0 when n → +∞.
[3198] vixra:1707.0126 [pdf]
A Cursory Examination of "Electric Universe" Claims Regarding Planetary Orbits
In this paper I examine the claim that the orbits of planets can be explained by nothing more than electricity and magnetism. For the "overdensity claim," I find that the surface charge densities required to account for observations of the orbits of planets in our own Solar System are not physical. For the "dipole claim," I find that the electric field from the Sun is negligibly small, producing a central force that is 75 orders of magnitude too small to account for the motion of the Earth. These models cannot explain planetary orbits.
[3199] vixra:1707.0121 [pdf]
Dynamics of Statistical Fermionic and Boson-Fermionic Quantum System in Terms of Occupation Numbers
The ergodic second-order approach of entropy gradient maximization, applied to the problem of a quantum bosonic system, does not provide dynamic equations for a pure fermionic system. A first-order dynamic equation results for a system of bosonic and fermionic degrees of freedom interacting through conservation of a common sum of quantum occupation numbers.
[3200] vixra:1707.0116 [pdf]
Unique Relativistic Extension of the Pauli Hamiltonian
Relativistic extension of the Pauli Hamiltonian is ostensibly achieved by minimal coupling of electromagnetism to the free-particle Dirac Hamiltonian. But the free-particle Pauli Hamiltonian is pathology-free in its nonrelativistic domain, while the free-particle Dirac Hamiltonian yields completely fixed particle speed which is greater than c, spin orbit torque whose ratio to kinetic energy tends to infinity in the zero-momentum limit, and mega-violation of Newton's First Law in that limit. Furthermore, relativistic extension of the Pauli Hamiltonian is unique in principle because inertial frame hopping can keep the particle nonrelativistic. That extension is indeed readily achieved by upgrading the terms of the Pauli Hamiltonian's corresponding action to appropriate Lorentz invariants. The resulting relativistic Lagrangian yields a canonical momentum that can't be analytically inverted in general, but a physically-sensible successive-approximation scheme applies. For hydrogen and simpler systems approximation isn't needed, and the result, which includes spin-orbit coupling, is as transparently physically sensible as the relativistic Lorentz Hamiltonian is, a far cry from the Dirac Hamiltonian pathologies.
[3201] vixra:1707.0109 [pdf]
General Exact Tetrahedron Argument for the Fundamental Laws of Continuum Mechanics
In this article, we give a general exact mathematical framework from which all the fundamental relations and conservation equations of continuum mechanics can be derived. We consider a general integral equation that contains parameters acting on the volume and the surface of the integral's domain. The idea is to determine how many local relations can be derived from this general integral equation and what these local relations are. After obtaining the general Cauchy lemma, we derive two other local relations by a new general exact tetrahedron argument. So, there are three local relations that can be derived from the general integral equation. Then we show that all the fundamental laws of continuum mechanics, including the conservation of mass, linear momentum, angular momentum, energy, and the entropy law, can be considered in this general framework. Applying the three general local relations to the integral form of the fundamental laws of continuum mechanics in this new framework leads to exact derivation of the mass flow, continuity equation, Cauchy lemma for traction vectors, existence of the stress tensor, general equation of motion, symmetry of the stress tensor, existence of the heat flux vector, differential energy equation, and differential form of the Clausius-Duhem inequality for the entropy law. The general exact tetrahedron argument is an exact proof that removes all the challenges to the derivation of the fundamental relations of continuum mechanics. In this proof, there is no approximate or limiting process and all the parameters are exact point-based functions. It also gives a new understanding and a deep insight into the origins, physics, and mathematics of the fundamental relations and conservation equations of continuum mechanics. This general mathematical framework can be used in many branches of continuum physics and the other sciences.
[3202] vixra:1707.0106 [pdf]
Cauchy Tetrahedron Argument and the Proofs of the Existence of Stress Tensor, a Comprehensive Review, Challenges, and Improvements
In 1822, Cauchy presented the idea of the traction vector, which contains both the normal and tangential components of the internal surface forces per unit area, and gave the tetrahedron argument to prove the existence of the stress tensor. These great achievements form the main part of the foundation of continuum mechanics. For about two centuries, some versions of the tetrahedron argument and a few other proofs of the existence of the stress tensor have been presented in every text on continuum mechanics, fluid mechanics, and the relevant subjects. In this article, we show the birth, importance, and location of these achievements of Cauchy, then by presenting the formal tetrahedron argument in detail, for the first time, we extract some fundamental challenges. These conceptual challenges are related to the result of applying the conservation of linear momentum to any mass element, the order of magnitude of the surface and volume terms, the definition of traction vectors on the surfaces that pass through the same point, the approximate processes in the derivation of the stress tensor, and some others. In a comprehensive review, we present the different tetrahedron arguments and proofs of the existence of the stress tensor, discuss the challenges in each one, and classify them into two general approaches. In the first approach, followed in most texts, the traction vectors are not defined exactly on the surfaces that pass through the same point, so most of the challenges hold. But in the second approach, the traction vectors are defined on the surfaces that pass exactly through the same point, and therefore some of the relevant challenges are removed. We also study the improved works of Hamel and Backus, and indicate that the original work of Backus removes most of the challenges. This article shows that the foundation of continuum mechanics is not a finished subject and there are still some fundamental challenges.
[3203] vixra:1707.0103 [pdf]
Second Quantization of the Square-Root Klein-Gordon Operator
The square-root Klein-Gordon operator, √(m^2 − ∇^2), is a non-local operator with a natural scale inversely proportional to the mass (the Compton wavelength). There is no fundamental reason to exclude negative energy states from a “square-root” propagation law. We find several possible Hamiltonians associated with √(m^2 − ∇^2) which include both positive and negative energy plane wave states. It is possible to satisfy the equations of motion with commutators or anticommutators. For the scalar case, only the canonical commutation rules yield a stable vacuum. We investigate microscopic causality for the commutator of the Hamiltonian density. We find that despite the non-local dependence of the energy density on the field operators, the commutators of the physical observables vanish for space-like separations. Hence, Pauli’s result can be extended to the non-local case. Pauli explicitly excluded √(m^2 − ∇^2) because this operator acts non-locally in coordinate space. The Mandelstam representation offers the possibility of avoiding the difficulties inherent in minimal coupling (Lorentz invariance and gauge invariance). We also compute the propagators for the scattering problem and investigate the solutions of the square-root equation in the Aharonov-Bohm problem.
[3204] vixra:1707.0093 [pdf]
Schrodinger’s Register: Foundational Issues and Physical Realization
This work-in-progress paper consists of four points which relate to the foundations and physical realization of quantum computing. The first point is that the qubit cannot be taken as the basic unit for quantum computing, because not every superposition of bit-strings of length n can be factored into a string of n-qubits. The second point is that the “No-cloning” theorem does not apply to the copying of one quantum register into another register, because the mathematical representation of this copying is the identity operator, which is manifestly linear. The third point is that quantum parallelism is not destroyed only by environmental decoherence. There are two other forms of decoherence, which we call measurement decoherence and internal decoherence, that can also destroy quantum parallelism. The fourth point is that processing the contents of a quantum register “one qubit at a time” destroys entanglement.
[3205] vixra:1707.0088 [pdf]
The Role of Vaginal Acidity: the Production of Glycogen and Its Role in Determining the Gender of the Fetus
This paper describes the important role of vaginal acidity not only in the health of the vagina, but also in determining the gender of the fetus. Since vaginal acidity is present, we should connect it to the production of oestrogen hormones, bearing in mind the cases that lead to a high oestrogen level.
[3206] vixra:1707.0086 [pdf]
Zeta Function and Infinite Sums
After finishing a first reflection on infinite sums, I have come to the conclusion that all functions which are written in the form of an infinite sum can be written in terms of the famous Zeta function; this statement is presented explicitly in this article.
[3207] vixra:1707.0057 [pdf]
The Shape of the Universe and Its Density Parameter; the Ratio of the Actual Density of the Universe to the Critical Density that Would be Required to Cause the Expansion to Stop
This paper overviews the answer to a reasonable question: what is the shape of the Universe? Is it a sphere? A torus? Is it open or closed, or flat? And what does all that mean anyway? Is it doubly curved like a western saddle? The answer can determine the entire fate of the Universe. Does the Universe go on forever? If not, is there some kind of giant brick wall at the edge of the Universe? As it turns out, the answer is both simpler and weirder than all those options. What the Universe looks like is a question we love to guess at as a species, making up all kinds of nonsense along the way.
[3208] vixra:1707.0056 [pdf]
Exact Tetrahedron Argument for the Existence of Stress Tensor and General Equation of Motion
The birth of modern continuum mechanics lies in Cauchy's idea of traction vectors and his achievements: the existence of the stress tensor and the derivation of the general equation of motion. He gave a proof of the existence of the stress tensor that is called the Cauchy tetrahedron argument. But there are some challenges to the different versions of the tetrahedron argument and the proofs of the existence of the stress tensor. We give a new proof of the existence of the stress tensor and a derivation of the general equation of motion. The exact tetrahedron argument gives us, for the first time, a clear and deep insight into the origins and nature of these fundamental concepts and equations of continuum mechanics. This new approach leads to the exact definition and derivation of these fundamental parameters and relations of continuum mechanics. By the exact tetrahedron argument we derive the relation for the existence of the stress tensor and the general equation of motion simultaneously. In this new proof, there is no limiting, averaging, or approximating process, and all of the effective parameters are exact values. We also show that all the challenges to the previous tetrahedron arguments and proofs of the existence of the stress tensor are removed.
[3209] vixra:1707.0045 [pdf]
The Wheeler-Feynman Interpretation of the Delayed-Choice Experiment and its Consequences for Quantum Computation
In this paper, we shall describe the delayed-choice experiment first proposed by Wheeler and then analyze the experiment based on both our interpretation of what is happening and the Wheeler/Feynman interpretation. Our interpretation includes wave-function collapse due to a measurement, while the Wheeler/Feynman interpretation attempts to avoid wave-function collapse in a measurement, as part of their explanation, to preserve consistent unitarity in quantum processes. We will also show that there are severe consequences for quantum computing if there is no wave-function collapse due to a measurement.
[3210] vixra:1707.0023 [pdf]
An Interval Unifying Theorem About Primes
In this paper the existence of a prime number is proved in the interval between the square of any natural number greater than one and the number resulting from adding or subtracting this natural number to its square (Oppermann’s Conjecture). As corollaries of this proof, three classical prime number conjectures are proved: Legendre’s, Brocard’s, and Andrica’s. A new maximum interval between any natural number and the nearest prime number is also defined. Finally, it is stated as a corollary that there exist infinitely many prime numbers equal to the square of a natural number, plus a natural number smaller than it, and minus a natural number smaller than it.
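Oppermann's statement is easy to spot-check numerically. The sketch below (an illustration, not the paper's proof; standard library only) verifies that both intervals (n² − n, n²) and (n², n² + n) contain a prime for every n from 2 up to 500:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes returning the set of primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return {i for i, is_prime in enumerate(sieve) if is_prime}

def oppermann_holds(n, prime_set):
    """True if both open intervals (n^2 - n, n^2) and (n^2, n^2 + n) contain a prime."""
    lower = any(k in prime_set for k in range(n * n - n + 1, n * n))
    upper = any(k in prime_set for k in range(n * n + 1, n * n + n))
    return lower and upper

N = 500
prime_set = primes_up_to(N * N + N)
assert all(oppermann_holds(n, prime_set) for n in range(2, N + 1))
print("Oppermann intervals contain a prime for all 2 <= n <=", N)
```

Such a finite check of course says nothing about a proof for all n; it only illustrates what the conjecture asserts.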
[3211] vixra:1707.0020 [pdf]
One, Zero, Ada, and Cyclic Number System
Division by 0 is not defined in mathematics. Mathematics suggests solutions by workaround methods. However, they give only approximate, not actual or exact, results. In this paper we propose methods to solve those problems. One characteristic of our solution methods is that they produce actual or exact results. They are also in conformity with, and supported by, physical or empirical facts. Another characteristic is their simplicity. We can do computations easily based on basic arithmetic or algebra or other computation methods we are already familiar with.
[3212] vixra:1707.0011 [pdf]
Solution of Poincare's Vector Field Problem
When a meromorphic vector field is given on the projective plane, a complete holomorphic limit cycle, being a closed singular submanifold of projective space, is defined by algebraic equations. The meromorphic vector field is also an algebraic object. Poincaré asked: is there just an algebraic calculation leading from the vector field to the defining equations of the solution, without the mysterious intermediary of the dynamical system? The answer is yes: nothing more mysterious or wonderful happens when a complete holomorphic limit cycle is formed than could have been defined using algebra.
[3213] vixra:1707.0008 [pdf]
Standing Waves
Contents. Part I: O(j) is not fully equivariant if j is odd; Representations of SO3; Eigenfunctions of the Laplacian; Attempts to explain the periodic table; Attempts to explain atomic spectra. Part II: Three corrections to the fine structure; A discussion of perturbation theory; Half-integer values of l – first explanation; Generalities about the Hamiltonian formulation; Schroedinger’s function W; Half-integer values of l – other explanations; The atom; Schroedinger’s choice of solutions; Residues; The canonical divisor and j quantum number; Intuitive description of polarization; Relation between smooth and holomorphic Laplacian operators; A coincidence about the periodic table; Linear systems of wave forms; Hodge theory; The Levi-Civita action is not Lagrangian for its Riemann metric; Foundations; Unification of the various coupling schemes; Discussion of the problem of non-convergent integrals; The ethics of science; First concepts of the Lamb shift; A question about Pauli exclusion; The notion of probability; Concluding remark; False fine structure; Electron spin and Pauli exclusion; It was not the right conclusion.
[3214] vixra:1706.0511 [pdf]
Clear Local Realism Advances Bell's Ideas, Demystifies QM, Etc.
Negating the classical/quantum divide in line with Bell's hidden-variable ideas, we resolve Bell's ‘action-at-a-distance' dilemma in accord with his hopes. We identify the resultant theory as clear local realism (CLR), the union of Bohr's ‘measurement' insight, Einstein locality and Bell beables. Our method follows: (i) consistent with Bohr's insight, we replace EPR's elements of physical reality with Bell's beables; (ii) we let Bell's beable λ denote a pristine particle's total angular momentum; (iii) validating Malus' Law in our quantum-compatible equivalence relations, we deliver the hopes of Bell and Einstein for a simple constructive model of EPRB; (iv) we then derive the correct results for CHSH and Mermin's version of GHZ; (v) we thus justify EPR's belief that additional variables would bring locality and causality to QM. In short, advancing Bell's ideas in line with his expectations: we amend EPR, resolve Bell's dilemma, negate nonlocality, endorse Einstein's locally-causal Lorentz-invariant worldview, demystify the classical/quantum divide, etc. CLR: clear via Bohr's insight, local via Einstein locality, realistic via Bell beables.
[3215] vixra:1706.0510 [pdf]
Preons, Gravity and Black Holes
A previous preon model for the substructure of the standard model quarks and leptons is completed to provide a model of Planck scale gravity and black holes. Gravity theory with torsion is introduced in the model. Torsion has been shown to produce an axial-vector field coupled to spinors, in the present case preons, causing an attractive preon-preon interaction. This is assumed to be the leading term of UV gravity. The boson has an estimated mass near the Planck scale. At high enough density it can materialize and become the center of a black hole. Chiral phase preons are proposed to form the horizon, with a thickness of the order of the Planck length. Using quantum information theoretic concepts, this is seen to lead to an area law of black hole entropy.
[3216] vixra:1706.0498 [pdf]
Simultaneity and Translational Symmetry
Two identical stopwatches moving at the same speed will elapse the same time after moving the same distance. If both stopwatches were started at the same time, there will be no time difference between these two stopwatches after both stopwatches have elapsed the same time. Both stopwatches will continue to show no time difference under identical acceleration. Therefore, both stopwatches show identical time in an accelerating reference frame if both stopwatches were restarted at the same time in a stationary reference frame. Consequently, a physical system that exhibits Translational Symmetry in its motion demonstrates that two simultaneous events in one reference frame should be simultaneous in another reference frame.
[3217] vixra:1706.0467 [pdf]
A Proposed Experiment to Test Theories of the Flyby Anomaly
We use Lorentz covariant retarded gravitation theory (RGT), without simplifications, to validate the earlier calculations for the flyby anomaly as a gravitational effect of Earth's rotation at the special relativistic (v/c) level. Small differences persist between the theoretical predictions of RGT and the data reported by Anderson et al. That reported data, however, is not direct observational data but consists of un-modeled residues. To settle doubts, we propose a 3-way experimental test to discriminate between RGT, Newtonian gravitation (no flyby anomaly), and Anderson et al.'s formula. This involves two satellites orbiting Earth in opposite directions in the equatorial plane in eccentric orbits. For these orbits, Earth's rotation should not affect velocity on (1) Newtonian gravitation and (2) the formula of Anderson et al. However, (3) on RGT, one satellite gains and the other loses velocity, by typically a few cm/s/day, which is easily measurable by satellite laser ranging.
[3218] vixra:1706.0448 [pdf]
The Hypothesis of the Virtual Reality World; According to Astrophysical and Mathematical Presumption I. Overviewing the Creator's Mind
Context. This paper suggests the hypothesis that the world is a simulation of a digital world run by a creator who is the ruler of the highest civilization. As this creator can control cosmic distances, he used his science and arts in a simulation called life. This life is based upon the creator's strategies, and those strategies are as simple as your computer: you turn on your computer and life begins, but in the highest civilization's computer there is no shutdown, only a timer set according to the highest civilization's order. It is because we are controlled by a higher civilization that we cannot go beyond them: they are at a higher dimension than ours, mostly at the highest dimension, which spans all the dimensions of the hyperspace. Science is a dynamic process of questioning, hypothesizing, discovering, and changing previous ideas based on what is learned. Scientific ideas are developed through reasoning and tested against observations. Scientists assess and question each other's work in a critical process called peer review. Our understanding of the universe and our place in it has changed over time. New information can cause us to rethink what we know and reevaluate how we classify objects in order to better understand them. New ideas and perspectives can come from questioning a theory or seeing where a classification breaks down. Aims. The aim of this paper is to overview the data that the creator used to shape the hyperspace and rule the life we live in; this incredible data is connected to achieve the main goal of life. Methods. Collecting data, analyzing it, and then connecting it. Results. A creator from a higher civilization made this hyperspace of the multi-verse using a special kind of programming.
[3219] vixra:1706.0422 [pdf]
The Physical Basis of Spirituality.
Spirituality is often seen as a part of religion; it is about rules for dealing with the spirits from the point of view of God the almighty, the creator of our universe. Of course, these rules have been written down by humans who are accepted to be so-called inspired and speaking the words of that same God. Whereas the point of view these rules take has to do with eternal good and bad, the morality and dangers of dealing with spirits and engaging with demons, the point of view expressed in this book is a scientific one. It tries to decipher the rules spirits have to obey, and it lays down the foundations for behavioral psychology, devoid of good and evil, from the point of view of physical charges. I wish to advocate the view that nobody is good or evil; we can all do things which many people accept to be good or evil, but there is no such thing as intrinsically good or bad people. There are, on the other hand, strong and weak ones, those with grand visions and small ones, quick and slow thinkers, and so on.
[3220] vixra:1706.0415 [pdf]
The Hypergeometrical Universe - Supernova and SDSS Modeling
This paper presents a simple and purely geometrical Grand Unification Theory. Quantum Gravity, Electrostatic and Magnetic interactions are shown in a unified framework. Newton's Gravitational Law, Gauss' Electrostatics Law and Biot-Savart's Electromagnetism Law are derived from first principles. Gravitational Lensing and Mercury Perihelion Precession are replicated within the theory. Unification symmetry is defined for all the existing forces. This alternative model does not require Strong and Electroweak forces. A 4D Shock-Wave Hyperspherical topology is proposed for the Universe which, together with a Quantum Lagrangian Principle and a Dilator based model for matter, results in a quantized stepwise expansion for the whole Universe along a radial direction within a 4D spatial manifold. The Hypergeometrical Standard Model for matter, Universe Topology and a new Law of Gravitation are presented. Newton's and Einstein's Laws of Gravitation and Dynamics, and Gauss' Law of Electrostatics, among others, are challenged when HU presents Type 1A Supernova Survey results. HU's SN1a results challenge the current Cosmological Standard Model (ΛCDM) by disputing its Cosmological Ruler d(z). The SDSS BOSS dataset is shown to support a new Cosmogenesis theory and HU's proposal that we are embedded in a 5D Spacetime. The Big Bang Theory is shown to be challenged by the SDSS BOSS dataset. Hyperspherical Acoustic Oscillations are demonstrated in the SDSS BOSS galaxy density. A new de Broglie Force is proposed.
[3221] vixra:1706.0408 [pdf]
A New Sufficient Condition by Euler Function for Riemann Hypothesis
The aim of this paper is to show a new sufficient condition (NSC), in terms of the Euler function, for the Riemann hypothesis, and to assess its plausibility. We build the NSC for any natural number ≥ 2 from the well-known Robin theorem, and prove that the NSC holds for all odd numbers and some even numbers, while the NSC holds for any even number under a certain condition, which we call condition (d).
[3222] vixra:1706.0407 [pdf]
An Upper Bound for Error Term of Mertens' Formula
In this paper, we obtain a new estimate for the error term E(t) of Mertens' formula sum_{p≤t} p^{-1} = log log t + b + E(t), where t > 1 is a real number, p runs over the prime numbers, and b is the well-known Mertens constant. We first provide an upper bound (not a lower bound) of E(p) for any prime number p ≥ 3, and next give one in the form E(t) < log t/√t for any real number t ≥ 3. This is an essential improvement of already known results. Such an estimate is very effective in the study of the distribution of the prime numbers.
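The claimed bound can be compared against a direct computation for moderate t. The sketch below (an illustration added here, not from the paper; b is the standard numerical value of Mertens' constant) evaluates E(t) = sum_{p≤t} 1/p − (log log t + b) and the bound log t/√t:

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes returning all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

MERTENS_B = 0.2614972128476428  # Mertens' constant

def mertens_error(t):
    """E(t) = sum over primes p <= t of 1/p, minus (log log t + b)."""
    partial = sum(1.0 / p for p in primes_up_to(t))
    return partial - (math.log(math.log(t)) + MERTENS_B)

for t in (10**3, 10**4, 10**5):
    print(t, mertens_error(t), math.log(t) / math.sqrt(t))
```

For these t the computed |E(t)| sits well inside log t/√t, consistent with (though of course not proving) the stated estimate.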
[3223] vixra:1706.0401 [pdf]
Does the One-Way Speed of Light Depend on the Distance Between the Emitter and Absorber?
We present a simple model of light propagation that allows for the one-way speed of light, or equivalently, the simultaneity convention, to depend on the distance between the emitter and the absorber. This is distinct from variable speed of light (VSL) theories that assume the two-way speed of light is variable. We show that this model predicts wavelength shifts that are consistent with wavelength shifts measured from light propagating on astrophysical scales, thus eliminating the need to propose ad hoc mechanisms, such as dark matter, dark energy, and cosmological expansion.
[3224] vixra:1706.0382 [pdf]
Breaking a Multi-Layer Crypter Through Reverse-Engineering, a Case Study Into the Man1 Crypter
Crypters and packers are common in the malware world; many techniques have been invented over the years to help people bypass commonly used security measures. One such technique, where a crypter uses multiple, sometimes dynamically generated, layers to decode and unpack the protected executable, allows a crypter to bypass common security measures such as antivirus. While by the end of this paper we will have constructed a working proof of concept for an unpacker, it is by no means meant as a production-level mechanism; the goal is simply to show the reversing of routines found in a crypter while using a reverse-engineering framework that is geared towards shellcode analysis to our benefit for malware analysis.
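The Man1 routines themselves are not reproduced in this abstract. Purely as an illustration of the multi-layer idea, the hypothetical Python sketch below stacks two invented decode layers (a single-byte XOR and an ADD layer, with made-up keys; none of this is Man1's actual algorithm) and shows that an unpacker must undo the layers in reverse of the order they were applied:

```python
def xor_layer(data: bytes, key: int) -> bytes:
    """Decode one layer: XOR every byte with a single-byte key."""
    return bytes(b ^ key for b in data)

def add_layer(data: bytes, key: int) -> bytes:
    """Decode another layer: subtract a key byte modulo 256 (inverse of an ADD encoder)."""
    return bytes((b - key) % 256 for b in data)

def unpack(payload: bytes, layers):
    """Apply decode layers in reverse of the order they were packed."""
    for func, key in reversed(layers):
        payload = func(payload, key)
    return payload

# Pack a fake "executable" through the two layers, then unpack it.
layers = [(xor_layer, 0x42), (add_layer, 0x0D)]  # hypothetical keys
packed = bytes(b ^ 0x42 for b in b"MZ\x90\x00fake-pe")
packed = bytes((b + 0x0D) % 256 for b in packed)
assert unpack(packed, layers) == b"MZ\x90\x00fake-pe"
```

A real unpacker additionally has to recover the layer order and keys from the binary itself, which is where the reverse-engineering framework described in the paper comes in.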
[3225] vixra:1706.0377 [pdf]
Dissecting the Dyre Loader
Dyre, or Dyreza, is a pretty prominent figure in the world of financial malware. The Dyre of today comes loaded with a multitude of modules and features while also appearing to be well maintained. The first recorded instance of Dyre I have found is an article from June 2014, and the sample in question is version 1001, while at the time of this report Dyre is already up to version 1166. While the crypters and packers have varied over time, for at least the past 6 months Dyre has used the same loader to perform its initial checks and injection sequence. It is the purpose of this report to go through the various techniques and algorithms present in the loader, and at times reverse them into Python proofs of concept.
[3226] vixra:1706.0371 [pdf]
Newton’s E = mc^2 Two Hundred Years Before Einstein? Newton = Einstein at the Quantum Scale
Here we will show the existence of a simple relationship between Einstein’s and Newton’s formulas. They are closely connected in terms of fundamental particles. Without knowing so, Newton indirectly conceptualized E = mc^2 two hundred years before Einstein. As we will see, the speed of light (which is equal to the speed of gravity) was hidden within Newton’s formula.
[3227] vixra:1706.0361 [pdf]
Entropy Measures on Neutrosophic Soft Sets and Its Application in Multi Attribute Decision Making
The focus of this paper is to furnish entropy measures for neutrosophic sets and neutrosophic soft sets, as measures of the uncertainty that permeates discourse and systems. Various characterizations of entropy measures are derived. Further, we exemplify this concept by applying entropy to various real-time decision-making problems.
[3228] vixra:1706.0356 [pdf]
Further Research of Single Valued Neutrosophic Rough Set Model
Neutrosophic sets (NSs), as a new mathematical tool for dealing with problems involving incomplete, indeterminate and inconsistent knowledge, were proposed by Smarandache. By simplifying NSs, Wang et al. proposed the concept of single valued neutrosophic sets (SVNSs) and studied some properties of SVNSs. In this paper, we mainly investigate the topological structures of single valued neutrosophic rough sets, which are constructed by combining SVNSs and rough sets. Firstly, we introduce the concept of single valued neutrosophic topological spaces.
[3229] vixra:1706.0346 [pdf]
Generalized Inverse of Fuzzy Neutrosophic Soft Matrix
The aim of this article is to find the maximum and minimum solutions of the fuzzy neutrosophic soft relational equations = and = , where and are fuzzy neutrosophic soft vectors and is a fuzzy neutrosophic soft matrix.
[3230] vixra:1706.0342 [pdf]
Information Fusion of Conflicting Input Data
Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data.
[3231] vixra:1706.0336 [pdf]
Interval-Valued Neutrosophic Competition Graphs
We first introduce the concept of interval-valued neutrosophic competition graphs. We then discuss certain types, including k-competition interval-valued neutrosophic graphs, p-competition interval-valued neutrosophic graphs and m-step interval-valued neutrosophic competition graphs. Moreover, we present the concept of m-step interval-valued neutrosophic neighbourhood graphs.
[3232] vixra:1706.0334 [pdf]
Introduction to Neutrosophic Nearrings
The objective of this paper is to introduce the concept of neutrosophic nearrings. The concept of neutrosophic N-group of a neutrosophic nearring is introduced. We study neutrosophic subnearrings of neutrosophic nearrings and also neutrosophic N-subgroups of neutrosophic N-groups. The notions of neutrosophic ideals in neutrosophic nearrings and neutrosophic N-groups are introduced and their elementary properties are presented. In addition, we introduce the concepts of neutrosophic homomorphisms of neutrosophic nearrings and neutrosophic N-homomorphisms of neutrosophic N-groups and also, we present neutrosophic quotient nearrings and quotient N-groups.
[3233] vixra:1706.0333 [pdf]
Jaccard Vector Similarity Measure of Bipolar Neutrosophic Set Based on Multi-Criteria Decision Making
The main aim of this study is to present a novel method based on multi-criteria decision making for bipolar neutrosophic sets. To that end, the Jaccard vector similarity and weighted Jaccard vector similarity measures are defined to develop the bipolar neutrosophic decision-making method. In addition, the method is applied to a numerical example in order to confirm the practicality and accuracy of the proposed method.
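The Jaccard vector similarity underlying such methods has the standard form J(A, B) = A·B / (|A|² + |B|² − A·B). A minimal sketch follows, assuming a hypothetical 6-tuple encoding of bipolar neutrosophic values and illustrative weights, not the paper's exact definitions:

```python
def jaccard(a, b):
    """Jaccard vector similarity J(A, B) = A.B / (|A|^2 + |B|^2 - A.B)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a)
    nb = sum(y * y for y in b)
    return dot / (na + nb - dot)

def weighted_jaccard(alternatives, ideal, weights):
    """Weighted similarity of each alternative to an ideal solution."""
    return [sum(w * jaccard(a_j, i_j)
                for w, a_j, i_j in zip(weights, alt, ideal))
            for alt in alternatives]

# A bipolar neutrosophic value encoded as a 6-tuple
# (T+, I+, F+, |T-|, |I-|, |F-|) -- hypothetical encoding.
# Two alternatives rated on two criteria, compared to an ideal.
ideal = [(1.0, 0.0, 0.0, 0.0, 0.0, 0.0)] * 2
alternatives = [
    [(0.9, 0.1, 0.1, 0.1, 0.0, 0.1), (0.8, 0.2, 0.1, 0.1, 0.1, 0.1)],
    [(0.5, 0.4, 0.4, 0.3, 0.3, 0.4), (0.4, 0.5, 0.5, 0.4, 0.3, 0.4)],
]
scores = weighted_jaccard(alternatives, ideal, [0.5, 0.5])
```

Ranking the alternatives by `scores` then gives the decision; here the first alternative, being closer to the ideal, scores higher.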
[3234] vixra:1706.0329 [pdf]
Measure Distance Between Neutrosophic Sets: an Evidential Approach
Due to its efficiency in handling uncertain information, the single valued neutrosophic set is widely used in multi-criteria decision-making (MCDM). In MCDM, it is inevitable to measure the distance between two single valued neutrosophic sets. In this paper, an evidence distance for neutrosophic sets is proposed. There are two main contributions of this work. One is a new method to transform a single valued neutrosophic set into a basic probability assignment. The other is an evidence distance function between two single valued neutrosophic sets. An application in MCDM illustrates the efficiency of the proposed distance.
[3235] vixra:1706.0288 [pdf]
On Fermat's Last Theorem - An Elementary Approach
An attempt at using an elementary approach to prove Fermat's Last Theorem (FLT) is given. For infinitely many prime numbers, Case I of the FLT can be proved using this approach. Furthermore, if a conjecture proposed in this paper is true (the k-3 conjecture), then Case I of the FLT is proved for all prime numbers. For Case II of the FLT, a constraint on possible solutions is obtained.
[3236] vixra:1706.0282 [pdf]
Neutrosophic Subalgebras of BCK/BCI-Algebras Based on Neutrosophic Points
The concept of neutrosophic set (NS) developed by Smarandache is a more general platform which extends the concepts of the classic set and fuzzy set, intuitionistic fuzzy set and interval valued intuitionistic fuzzy set.
[3237] vixra:1706.0275 [pdf]
Novel Single-Valued Neutrosophic Aggregated Operators Under Frank Norm Operation and Its Application to Decision-Making Process
Uncertainties play a dominant role during the aggregation process, and hence the corresponding decisions become fuzzier. Single-valued neutrosophic numbers (SVNNs) contain three ranges: truth, indeterminacy, and falsity membership degrees, and are very useful for describing and handling uncertainties in day-to-day situations. In this study, some operations on SVNNs, such as sum, product, and scalar multiplication, are defined under Frank norm operations and, based on them, some averaging and geometric aggregation operators are developed. We further establish some of their properties. Moreover, a decision-making method based on the proposed operators is established and illustrated with a numerical example.
[3238] vixra:1706.0267 [pdf]
Operations on Complex Multi-Fuzzy Sets
In this paper, we introduce the concept of complex multi-fuzzy sets (CMkFSs) as a generalization of the concept of multi-fuzzy sets by adding a phase term to the definition of multi-fuzzy sets. In other words, we extend the range of the multi-membership function from the interval [0,1] to the unit circle in the complex plane. The novelty of CMkFSs lies in the ability of complex multi-membership functions to achieve a larger range of values while handling uncertainty of data that is periodic in nature. The basic operations on CMkFSs, namely complement, union, intersection, product and Cartesian product, are studied along with accompanying examples. Properties of these operations are derived. Finally, we introduce the intuitive definition of the distance measure between two complex multi-fuzzy sets, which is used to define δ-equalities of complex multi-fuzzy sets.
[3239] vixra:1706.0260 [pdf]
Power Aggregation Operators of Simplified Neutrosophic Sets and Their Use in Multi-attribute Group Decision Making
The simplified neutrosophic set (SNS) is a useful generalization of the fuzzy set that is designed for some practical situations in which each element has different truth membership function, indeterminacy membership function and falsity membership function. In this paper, we develop a series of power aggregation operators called simplified neutrosophic number power weighted averaging (SNNPWA) operator, simplified neutrosophic number power weighted geometric (SNNPWG) operator, simplified neutrosophic number power ordered weighted averaging (SNNPOWA) operator and simplified neutrosophic number power ordered weighted geometric (SNNPOWG) operator.
[3240] vixra:1706.0259 [pdf]
Proposal for the Formalization of Dialectical Logic
Classical logic is typically concerned with abstract analysis. The problem for a synthetic logic is to transcend and unify available data to reconstruct the object as a totality. Three rules are proposed to pass from classic logic to synthetic logic. We present the category logic of qualitative opposition using examples from various sciences. This logic has been defined to include the neuter as part of qualitative opposition. The application of these rules to qualitative opposition, and, in particular, its neuter, demonstrated that a synthetic logic allows the truth of some contradictions. This synthetic logic is dialectical with a multi-valued logic, which gives every proposition a truth value in the interval [0,1] that is the square of the modulus of a complex number. In this dialectical logic, contradictions of the neuter of an opposition may be true.
[3241] vixra:1706.0254 [pdf]
Representation of Graphs Using Intuitionistic Neutrosophic Soft Sets
The concept of intuitionistic neutrosophic soft sets can be utilized as a mathematical tool to deal with imprecise and unspecified information. In this paper, we apply the concept of intuitionistic neutrosophic soft sets to graphs. We introduce the concept of intuitionistic neutrosophic soft graphs, and present applications of intuitionistic neutrosophic soft graphs in multiple-attribute decision-making problems. We also present an algorithm of our proposed method.
[3242] vixra:1706.0241 [pdf]
Single-Valued Neutrosophic Planar Graphs
We apply the concept of single-valued neutrosophic sets to multigraphs, planar graphs and dual graphs. We introduce the notions of single-valued neutrosophic multigraphs, single-valued neutrosophic planar graphs, and single-valued neutrosophic dual graphs. We illustrate these concepts with examples. We also investigate some of their properties.
[3243] vixra:1706.0231 [pdf]
Supervised Pattern Recognition Using Similarity Measure Between Two Interval Valued Neutrosophic Soft Sets
F. Smarandache introduced the concept of neutrosophic set in 1995 and P. K. Maji introduced the notion of neutrosophic soft set in 2013, which is a hybridization of neutrosophic set and soft set. Irfan Deli introduced the concept of interval valued neutrosophic soft sets. Interval valued neutrosophic soft sets are efficient tools to deal with problems that contain uncertainty, such as problems in social and economic systems, medical diagnosis, pattern recognition, game theory, coding theory and so on. In this article we introduce a similarity measure between two interval valued neutrosophic soft sets and study some basic properties of this similarity measure. An algorithm is developed in the interval valued neutrosophic soft set setting using the similarity measure. Using this algorithm, a model is constructed for a supervised pattern recognition problem.
[3244] vixra:1706.0224 [pdf]
The Category of Neutrosophic Crisp Sets
We introduce the category NCSet consisting of neutrosophic crisp sets and morphisms between them. We study NCSet in the sense of a topological universe and prove that it is Cartesian closed over Set, where Set denotes the category consisting of ordinary sets and ordinary mappings between them.
[3245] vixra:1706.0212 [pdf]
Unification of Evidence Theoretic Fusion Algorithms: A Case Study in Level-2 and Level-3 Fingerprint Features
This paper formulates an evidence-theoretic multimodal unification approach using belief functions that takes into account the variability in biometric image characteristics. While processing non-ideal images the variation in the quality of features at different levels of abstraction may cause individual classifiers to generate conflicting genuine-impostor decisions.
[3246] vixra:1706.0210 [pdf]
Statistic-Based Approach for Highest Precision Numerical Differentiation
If several independent algorithms for a computer-calculated quantity exist, then one can expect their results (which differ because of numerical errors) to follow an approximately Gaussian distribution. The mean of this distribution, interpreted as the value of the quantity of interest, can be determined with much better precision than that provided by a single algorithm. Many practical algorithms introduce a bias through a parameter, e.g. a small but finite number to compute a limit, or a large but finite number (cutoff) to approximate infinity. One may vary such a parameter of a single algorithm, interpret the resulting numbers as generated by several algorithms, and compute the average. Numerical evidence for the validity of this approach is shown, in the context of a fixed machine epsilon, for differentiation: the method greatly improves the precision and leads, presumably, to the most precise numerical differentiation known today.
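The parameter-varying-and-averaging idea can be sketched as follows; the test function, step-size range, and tolerance are illustrative assumptions, not the paper's setup:

```python
import math

def derivative_mean(f, x, h_values):
    """Average central-difference estimates over many step sizes.

    Each step size h yields one estimate of f'(x) whose rounding error
    behaves roughly like random noise; averaging the estimates, as if
    they came from independent algorithms, suppresses that noise.
    """
    estimates = [(f(x + h) - f(x - h)) / (2.0 * h) for h in h_values]
    return sum(estimates) / len(estimates)

# Illustrative check against a known derivative: d/dx sin(x) = cos(x).
# The h range is chosen small enough that truncation error is negligible.
h_values = [5e-7 + k * 1e-9 for k in range(1000)]
approx = derivative_mean(math.sin, 1.0, h_values)
```

The averaged estimate agrees with cos(1) to well below the error of any single central difference at these step sizes.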
[3247] vixra:1706.0180 [pdf]
Alliance Based Evidential Reasoning Approach with Unknown Evidence Weights
In the evidential reasoning approach of decision theory, different evidence weights can generate different combined results. Consequently, evidence weights can significantly influence solutions. In terms of the “psychology of economic man,” decision-makers may tend to seek similar pieces of evidence to support their own evidence and thereby form alliances.
[3248] vixra:1706.0174 [pdf]
A Multi-Valued Neutrosophic Qualitative flexible Approach Based on Likelihood for Multi-Criteria Decision-Making Problems
In this paper, multi-criteria decision-making (MCDM) problems based on the qualitative flexible multiple criteria method (QUALIFLEX), in which the criteria values are expressed by multi-valued neutrosophic information, are investigated. First, multi-valued neutrosophic sets (MVNSs), which allow the truth-membership function, indeterminacy-membership function and falsity-membership function to have a set of crisp values between zero and one, are introduced.
[3249] vixra:1706.0149 [pdf]
An Improved Score Function for Ranking Neutrosophic Sets and Its Application to Decision-Making Process
The neutrosophic set (NS) is a more general platform which generalizes the concept of crisp, fuzzy, and intuitionistic fuzzy sets to describe the membership functions in terms of truth, indeterminacy, and false degrees. Under this environment, the present paper proposes an improved score function for ranking single as well as interval-valued NSs by incorporating the idea of a hesitation degree between the truth and false degrees. Shortcomings of the existing functions are highlighted. Further, a decision-making method based on the proposed function is presented and illustrated with a numerical example to demonstrate its practicality and effectiveness.
[3250] vixra:1706.0124 [pdf]
On Proton-Neutron Indistinguishabilty and Distinguishability in the Nucleus
There is a fundamental duality in how protons and neutrons are treated as forming the nucleus. A nucleus can be described well in an SU(2)_I model (where p and n are indistinguishable) and in another, independent picture where the pair (p, n) is treated as made up of distinguishable proton and neutron fermions. Both of these apparently provide successful, equivalent descriptions of the nucleus. How this is possible is the focus of this paper. Starting with the Standard Model and the SU(3)-flavour quark model, we look at the microscopic basis for this duality. Chirality and anomaly cancellation, and its matching, play a basic role in our work.
[3251] vixra:1706.0112 [pdf]
On the Quantum Differentiation of Smooth Real-Valued Functions
We calculate the value of the derivative of a real-valued function of smoothness class $C^{k\in\{1,\infty\}}$ at a point of $\mathbb{R}^+$ within the radius of convergence of its Taylor polynomial (or series), applying an analog of Newton's binomial theorem and the $q$-difference operator. The $(p,q)$-power difference is introduced in Section 5. Additionally, by means of Newton's interpolation formula, the discrete analog of the Taylor series, interpolation using the $q$-difference and the $(p,q)$-power difference, is shown. Keywords: derivative, differential calculus, differentiation, Taylor's theorem, Taylor's formula, Taylor series, Taylor polynomial, power function, binomial theorem, smooth function, Newton's interpolation formula, finite difference, $q$-derivative, Jackson derivative, $q$-calculus, quantum calculus, $(p,q)$-derivative, $(p,q)$-Taylor formula.
[3252] vixra:1706.0111 [pdf]
On the Link Between Finite Differences and Derivatives of Polynomials
The main aim of this paper is to establish the relations between forward, backward and central finite (divided) differences (the discrete analog of the derivative) and partial and ordinary higher-order derivatives of polynomials.
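A worked instance of this link: for a degree-n polynomial with leading coefficient a_n and step h, the n-th forward difference is the constant n!·a_n·hⁿ, mirroring the n-th derivative n!·a_n. A minimal sketch (our own illustrative example, not the paper's notation):

```python
from math import factorial

def forward_difference(f, h):
    """Return the forward-difference operator (Delta_h f)(x) = f(x + h) - f(x)."""
    return lambda x: f(x + h) - f(x)

p = lambda x: 2 * x**3 - x        # degree 3, leading coefficient a_3 = 2
h = 0.5

# Apply Delta_h three times; for a cubic the result is constant in x.
d3 = p
for _ in range(3):
    d3 = forward_difference(d3, h)

# n! * a_n * h**n = 6 * 2 * 0.125 = 1.5, independent of x.
expected = factorial(3) * 2 * h**3
```

Evaluating `d3` at any point returns the same constant `expected`, since the lower-order terms of the polynomial are annihilated by the repeated differencing.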
[3253] vixra:1706.0107 [pdf]
Momentum Conservation in Electromagnetic Systems
Newton's third law doesn't apply to electromagnetic systems. Nevertheless a relativistic dissertation, directly founded on Maxwell's equations and on relativistic dynamics, allows us to establish rigorously the law of total momentum for such systems. Some undervalued details about the role of internal forces in isolated systems are emphasized. The laws governing momentum in systems subject to electromagnetic forces are consistent in every situation. There are no reasons to postulate the existence of a hidden momentum to avoid non-existent paradoxes in the case of static fields.
[3254] vixra:1706.0094 [pdf]
The Mystery Behind the Fine Structure Constant
This paper examines various alternatives for what the fine structure constant might represent. In particular, we look at an alternative where the fine structure constant represents the radius ratio divided by the mass ratio of the electron, versus the proton as newly suggested by Koshy [5], but here derived and interpreted based on Haug atomism (see [7]). This ratio is remarkably very close to the fine structure constant, and it is a dimensionless number. We also examine other alternatives such as the proton mass divided by the Higgs mass, which also appears as a possible candidate for what the fine structure constant might represent.
[3255] vixra:1706.0090 [pdf]
A Holomorphic Study Of Smarandache Automorphic and Cross Inverse Property Loops
By studying the holomorphic structure of automorphic inverse property quasigroups and loops (AIPQ and AIPL) and cross inverse property quasigroups and loops (CIPQ and CIPL), it is established that the holomorph of a loop is a Smarandache AIPL, CIPL, K-loop, Bruck-loop or Kikkawa-loop if and only if its Smarandache automorphism group is trivial and the loop itself is a Smarandache AIPL, CIPL, K-loop, Bruck-loop or Kikkawa-loop.
[3256] vixra:1706.0089 [pdf]
A Pair of Smarandachely Isotopic Quasigroups and Loops Of The Same Variety
The isotopic invariance or universality of types and varieties of quasigroups and loops described by one or more equivalent identities has been of interest to researchers in loop theory in the recent past. Varieties of quasigroups (loops) that are not universal have been found to be isotopy invariant relative to one special type of isotopism or another. Presently, there are two outstanding open problems on the universality of loops: semi-automorphic inverse property loops (1999) and Osborn loops (2005). Smarandache isotopism (S-isotopism) was originally introduced by Vasantha Kandasamy in 2002.
[3257] vixra:1706.0080 [pdf]
Generalized Fibonacci Sequences in Groupoids
In this paper, we introduce the notion of generalized Fibonacci sequences over a groupoid and discuss it in particular for the case where the groupoid contains idempotents and pre-idempotents. Using the notion of Smarandache-type P-algebra, we obtain several relations on groupoids which are derived from generalized Fibonacci sequences.
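The notion can be sketched as a Fibonacci-style recurrence over a groupoid's Cayley table; the recurrence convention and the table below are our illustrative assumptions, not the paper's definitions:

```python
def generalized_fibonacci(op, u0, u1, n):
    """First n terms of u_{k+2} = op(u_k, u_{k+1}) over a groupoid (X, op).

    The direction of the recurrence is assumed here for illustration;
    the paper's exact convention may differ.
    """
    seq = [u0, u1]
    while len(seq) < n:
        seq.append(op(seq[-2], seq[-1]))
    return seq[:n]

# A groupoid on {0, 1, 2} given by a hypothetical Cayley table;
# element 0 is idempotent since table[0][0] == 0.
table = [[0, 2, 1],
         [2, 1, 0],
         [1, 0, 2]]
op = lambda a, b: table[a][b]

seq = generalized_fibonacci(op, 0, 1, 8)
```

For this table the sequence is eventually periodic with period 3, a typical phenomenon over finite groupoids since only finitely many consecutive pairs exist.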
[3258] vixra:1706.0066 [pdf]
On the Smarandache-Pascal Derived Sequences of Generalized Tribonacci Numbers
The main purpose of this paper is, using the elementary method and the properties of the third-order linear recurrence sequence, to unify the above results by proving the following theorem.
[3259] vixra:1706.0065 [pdf]
On the Universality of Some Smarandache Loops of Bol-Moufang Type
A Smarandache quasigroup (loop) is shown to be universal if all its f, g-principal isotopes are Smarandache f, g-principal isotopes. Also, weak Smarandache loops of Bol-Moufang type, such as Smarandache left (right) Bol, Moufang and extra loops, are shown to be universal if all their f, g-principal isotopes are Smarandache f, g-principal isotopes.
[3260] vixra:1706.0061 [pdf]
Shared Multi-Space Representation for Neural-Symbolic Reasoning
This paper presents a new neural-symbolic reasoning approach based on a sharing of neural multi-space representation for coded fractions of first-order logic. A multi-space is the union of spaces with different dimensions, each one for a different set of distinct features.
[3261] vixra:1706.0056 [pdf]
Smarandache Isotopy Of Second Smarandache Bol Loops
The study of the Smarandache concept in groupoids was initiated by W. B. Vasantha Kandasamy. In her book and first paper on the Smarandache concept in loops, she defined a Smarandache loop (S-loop) as a loop with at least a subloop which forms a subgroup under the binary operation of the loop.
[3262] vixra:1706.0036 [pdf]
Planck Dimensional Analysis of The Speed of Light
This is a short note to show how the speed of light c can be derived from dimensional analysis from the Gravitational constant, the Planck constant and the Planck length.
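One plausible route for such a derivation inverts the Planck-length definition l_P = sqrt(hbar·G/c³) to obtain c = (hbar·G/l_P²)^(1/3); this specific route, checked numerically below with CODATA values, is our assumption about the note's argument:

```python
# Invert l_P = sqrt(hbar * G / c**3) to get c = (hbar * G / l_P**2)**(1/3).
hbar = 1.054571817e-34   # reduced Planck constant, J s (CODATA)
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2 (CODATA)
l_P = 1.616255e-35       # Planck length, m (CODATA)

c = (hbar * G / l_P**2) ** (1.0 / 3.0)
```

The result reproduces the defined value c = 299792458 m/s to within the precision of the input constants.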
[3263] vixra:1706.0004 [pdf]
On the "Mysterious" Effectiveness of Mathematics in Science
This paper notes first that the effectiveness of mathematics in science appears to some writers to be "mysterious" or "unreasonable". Then reasons are given for thinking that science is, at root, the search for compression in the world. At more length, several reasons are given for believing that mathematics is, fundamentally, a set of techniques for compressing information and their application. From there, it is argued that the effectiveness of mathematics in science is because it provides a means of achieving the compression of information which lies at the heart of science. The anthropic principle provides an explanation of why we find the world - aspects of it at least - to be compressible. Information compression may be seen to be important in both science and mathematics, not only as a means of representing knowledge succinctly, but as a basis for scientific and mathematical inferences - because of the intimate relation that is known to exist between information compression and concepts of prediction and probability. The idea that mathematics may be seen to be largely about the compression of information is in keeping with the view, supported by evidence that is outlined in the paper, that much of human learning, perception, and cognition may be understood as information compression. That connection is itself in keeping with the observation that mathematics is the product of human ingenuity and an aid to human thinking.
[3264] vixra:1706.0003 [pdf]
On the "Mysterious" Effectiveness of Mathematics in Science
This paper notes first that the effectiveness of mathematics in science appears to some writers to be "mysterious" or "unreasonable". Then reasons are given for thinking that science is, at root, the search for compression in the world. At more length, several reasons are given for believing that mathematics is, fundamentally, a set of techniques for information compression via the matching and unification of patterns (ICMUP), and their application. From there, it is argued that the effectiveness of mathematics in science is because it provides a means of achieving the compression of information which lies at the heart of science. The anthropic principle provides an explanation for why we find the world -- aspects of it at least -- to be compressible. ICMUP may be seen to be important in both science and mathematics, not only as a means of representing knowledge succinctly, but as a basis for scientific and mathematical inferences -- because of the intimate relation that is known to exist between information compression and concepts of prediction and probability. Since ICMUP is a key part of the "SP theory of intelligence", evidence presented in this paper strengthens the already-strong evidence for the SP theory as a unifying principle across artificial intelligence, mainstream computing, mathematics, human learning, perception, and cognition, and neuroscience. The evidence and ideas in this paper may provide the basis for a "new mathematics for science" with potential benefits and applications in science and science-related areas.
[3265] vixra:1705.0446 [pdf]
Uniform and Partially Uniform Redistribution Rules
This paper introduces two new fusion rules for combining quantitative basic belief assignments. These rules, although very simple, have not been proposed in the literature so far and could serve as useful alternatives because of their low computational cost with respect to the recent advanced Proportional Conflict Redistribution rules developed in the DSmT framework.
[3266] vixra:1705.0437 [pdf]
Neutrosophic Filters in be-Algebras
In this paper, we introduce the notion of (implicative) neutrosophic filters in BE-algebras. The relation between implicative neutrosophic filters and neutrosophic filters is investigated, and we show that in self-distributive BE-algebras these notions are equivalent.
[3267] vixra:1705.0430 [pdf]
Triple Refined Indeterminate Neutrosophic Sets for Personality Classification
Personality tests are most commonly of the objective type, where users rate their own behaviour. Instead of being given a single forced choice, they can be provided with more options. A person may not, in general, be capable of judging his or her behaviour very precisely and categorizing it into a single category. Since it is self-rating, many uncertain and indeterminate feelings are involved.
[3268] vixra:1705.0410 [pdf]
New Principles of Differential Equations Ⅰ
This is the first part of the complete paper. Although the theory of partial differential equations (PDEs) was established nearly 300 years ago, many important problems have not been resolved, such as: What are the general solutions of the Laplace equation, the acoustic wave equation, the Helmholtz equation, the heat conduction equation, the Schrodinger equation and other important equations? How do we solve problems of definite solutions which have universal significance for these equations? What are the laws of the general solution of the mth-order linear PDE with n variables (n, m ≥ 2)? Is there any general rule for the solution of a PDE in arbitrary orthogonal coordinate systems? Can we obtain the general solution of vector PDEs? Are there very simple methods to quickly and efficiently obtain exact solutions of nonlinear PDEs, or even their general solutions? These problems are all effectively solved in this paper. Substituting the results into the original equations, we have verified that they are all correct.
[3269] vixra:1705.0408 [pdf]
Bright Matter
Quantum modification of general relativity (Qmoger) is supported by cosmic data (without fitting). The Qmoger equations consist of the Einstein equations with two additional terms responsible for the production/absorption of matter. In Qmoger cosmology there was no Big Bang, and matter is continuously produced by the Vacuum. In particular, production of ultralight gravitons with a tiny electric dipole moment (EDM) started about 284 billion years ago. Quantum effects dominate the interaction of these particles, and they form a quantum condensate. Under the influence of gravitation, the condensate forms galaxies and produces ordinary matter, including photons. As one important result of this activity, it recently created us, the people, and continues to support us. In particular, our subjective experiences are a result of an interaction between the background condensate and the neural system of the brain. The action potentials of the neural system create traps and coherent dynamic patterns in the dipolar condensate. So, our subjective experiences are graviton-based. Some problems with the origin of life can also be clarified by taking into account the background dipolar condensate. It seems natural to call this graviton condensate bright matter. It not only produced ordinary matter, including light, but also produced and is nurturing conscious life as we know it and, perhaps, some other forms of life in the universe. The EDM of gravitons is small and existing telescopes cannot see them, but we actually see bright matter in our subjective experiences. So, cosmology and brain science must work together to investigate bright matter, which will be the most important enterprise of humankind.
[3270] vixra:1705.0389 [pdf]
Rotation Curves and Dark Matter
In the present paper we argue that, to explain the shape of the rotation curves (RC) of galaxies, there is no need to invoke the concept of dark matter. Rotation curves are completely determined by the distribution of baryonic matter and gas kinetics. Such parameters of a galaxy as its baryonic mass and its distribution can be easily calculated from the observed RC. We show the extended parts of RCs to be just wind tails, formed by the gas of the outer disks, under the assumption that it obeys the laws of gas kinetics. As examples, the Galaxy, NGC7331 and NGC3198 are considered. We calculate the total mass of the Galaxy and find it to be 23.7×10^10 M_sun. For NGC7331 and NGC3198 the calculated total masses are 37.6×10^10 M_sun and 7.7×10^10 M_sun, respectively. Consequences for cosmology are discussed.
[3271] vixra:1705.0386 [pdf]
Einstein's Road Not Taken
When confronted with the challenge of defining distant simultaneity Einstein looked down two roads that seemingly diverged. One road led to a theory based on backward null cone simultaneity and the other road led to a theory based on standard simultaneity. He felt that alone he could not travel both. After careful consideration he looked down the former and then took the latter. Sadly, years hence, he did not return to the first. In the following we investigate Einstein's road not taken, i.e., the road that leads to a theory based on backward null cone simultaneity. We show that both roads must be traveled to develop a consistent quantum theory of gravity and also to understand the relationship between the gravitational and electromagnetic fields.
[3272] vixra:1705.0377 [pdf]
Simulated Bell-like Correlations from Geometric Probability
Simulating Bell correlations by Monte Carlo methods can be time-consuming due to the large number of trials required to produce reliable statistics. For a noisy vector model, formulating the vector threshold crossing in terms of geometric probability can eliminate the need for trials, with inferred probabilities replacing statistical frequencies.
[3273] vixra:1705.0375 [pdf]
Holy Cosmic Condensate of Dipolar Gravitons
Quantum modification of general relativity (Qmoger) is supported by cosmic data (without fitting). The Qmoger equations consist of the Einstein equations with two additional terms responsible for the production/absorption of matter. In Qmoger cosmology there was no Big Bang, and matter is continuously produced by the Vacuum. In particular, production of ultralight gravitons with a tiny electric dipole moment started about 284 billion years ago. Quantum effects dominate the interaction of these particles, and they form a quantum condensate. Under the influence of gravitation, the condensate forms galaxies and produces ordinary matter, including photons. As one important result of this activity, it recently created us, the people, and continues to support us. In particular, our subjective experiences are a result of an interaction between the background condensate and the neural system of the brain. The action potentials of the neural system create traps and coherent dynamic patterns in the dipolar condensate. So, our subjective experiences are graviton-based, which can open new directions of research in biology and medicine.
[3274] vixra:1705.0374 [pdf]
Is Mechanics a Proper Approach to Fundamental Physics?
Physicists propose different mechanics to describe nature. A physical body is measured by intrinsic properties, like electric charge, and by extrinsic properties related to space, like generalized coordinates or velocities; with these properties we can predict what event will happen. We can naturally define the fact of an event and the cause of an event as information. The information grasped by a physicist must originate from something objective: information must have an object container. Intrinsic-property information is contained by the object itself, but the container of extrinsic-property information, like position, is ambiguous; position is a relation based on multiple objects, and it is hard to define which one is the information container. With such ambiguity, no mechanics is a complete theory, and errors hidden in assumptions are hard to find. Here we show a new theoretical framework with a strict information-container restriction, on which we can build complete determinism theories to approach grand unification.
[3275] vixra:1705.0358 [pdf]
Construction of the Lovas-Andai Two-Qubit Function $\tilde{\chi}_2 (\varepsilon )=\frac{1}{3} \varepsilon ^2 \left(4-\varepsilon ^2\right)$ Verifies the $\frac{8}{33}$-Hilbert Schmidt Separability Probability Conjecture
We investigate relationships between two forms of Hilbert-Schmidt two-re[al]bit and two-qubit "separability functions''--those recently advanced by Lovas and Andai (arXiv:1610.01410), and those earlier presented by Slater ({\it J. Phys. A} {\bf{40}} [2007] 14279). In the Lovas-Andai framework, the independent variable $\varepsilon \in [0,1]$ is the ratio $\sigma(V)$ of the singular values of the $2 \times 2$ matrix $V=D_2^{1/2} D_1^{-1/2}$ formed from the two $2 \times 2$ diagonal blocks ($D_1, D_2$) of a randomly generated $4 \times 4$ density matrix $D$. In the Slater setting, the independent variable $\mu$ is the diagonal-entry ratio $\sqrt{\frac{d_ {11} d_ {44}}{d_ {22} d_ {33}}}$--with, importantly, $\mu=\varepsilon$ or $\mu=\frac{1}{\varepsilon}$ when both $D_1$ and $D_2$ are themselves diagonal. Lovas and Andai established that their two-rebit function $\tilde{\chi}_1 (\varepsilon )$ ($\approx \varepsilon$) yields the previously conjectured Hilbert-Schmidt separability probability of $\frac{29}{64}$. We are able, in the Slater framework (using cylindrical algebraic decompositions [CAD] to enforce positivity constraints), to reproduce this result. Further, we similarly obtain its new (much simpler) two-qubit counterpart, $\tilde{\chi}_2(\varepsilon) =\frac{1}{3} \varepsilon ^2 \left(4-\varepsilon ^2\right)$. Verification of the companion conjecture of a Hilbert-Schmidt separability probability of $\frac{8}{33}$ immediately follows in the Lovas-Andai framework. We obtain the formulas for $\tilde{\chi}_1(\varepsilon)$ and $\tilde{\chi}_2(\varepsilon)$ by taking $D_1$ and $D_2$ to be diagonal, allowing us to proceed in lower (7 and 11), rather than the full (9 and 15) dimensions occupied by the convex sets of two-rebit and two-qubit states. The CAD's themselves involve 4 and 8 variables, in addition to $\mu=\varepsilon$. We also investigate extensions of these analyses to rebit-retrit and qubit-qutrit ($6 \times 6$) settings.
[3276] vixra:1705.0355 [pdf]
Theoretical-Heuristic Derivation of Sommerfeld's Fine Structure Constant by Feigenbaum's Constant (Delta): Periodic Logistic Maps of Double Bifurcation
In an article recently published in viXra (http://vixra.org/abs/1704.0365), its author (Mario Hieb) conjectured a possible relationship between Feigenbaum's constant delta and the fine-structure constant of electromagnetism (Sommerfeld's fine-structure constant). In this article it is demonstrated that there is indeed an unequivocal physical-mathematical relationship. The logistic map of double bifurcation is a physical image of the random process of creation-annihilation of virtual lepton-antilepton pairs with electric charge, using virtual photons. The probability of emission or absorption of a photon by an electron is precisely the fine-structure constant for zero momentum, that is to say, Sommerfeld's fine-structure constant. This probability is coded as the surface of a sphere, or equivalently, four times the surface of a circle. The original calculation conjectured by Mario Hieb is corrected and improved by the contribution of the entropies of the virtual pairs of electrically charged leptons (muon, tau and electron), including a correction factor due to the contributions of the virtual bosons W and Z and their decay into electrically charged leptons and quarks.
[3277] vixra:1705.0349 [pdf]
Einstein's Key to Hubble Redshift
In 1907 Einstein discovered the key to understanding accelerating Hubble redshifts. By assuming that acceleration and gravity are equivalent (“The Happiest Thought of my Life”), he proved that Maxwell’s equations are the same in every accelerated reference frame but that vacuum permittivity depends on the acceleration. Vacuum permittivity is the scalar in Maxwell’s equations that determines the speed of light and the strength of electrical fields. Maxwell’s equations are valid in every coordinate system in general relativity. Vacuum permittivity depends on the spacetime curvature. For Friedmann spacetime, vacuum permittivity is proportional to the radius of the universe. When the radius changes, changing electrical fields in atoms change the wavelengths of emitted photons by about twice as much as photon wavelengths change. This is the key Einstein left us: the evolution of both photons and atoms must be used together to understand Hubble redshift. When this is done, the physics of Maxwell, Einstein, Bohr, and Friedmann fits modern Hubble redshift observations beautifully.
[3278] vixra:1705.0346 [pdf]
A Study on the Time Dependence of the Equation-of-State Parameter Using Brans-Dicke Theory of Gravitation
The time dependence of the equation of state (EoS) parameter of the cosmic fluid, for a space of zero curvature, has been determined in the framework of the Brans-Dicke (BD) theory of gravity, using FRW metric. For this purpose, empirical expressions of the scale factor, scalar field and the dimensionless BD parameter have been used. The constant parameters involved in these expressions have been determined from the field equations. The dependence of the scalar field upon the scale factor and the dependence of the BD parameter upon the scalar field have been explored to determine the time dependence of the EoS parameter. Its rate of change with time has been found to depend upon a parameter that governs the time dependent behavior of the scalar field. Time dependence of the EoS parameter has been graphically depicted.
[3279] vixra:1705.0338 [pdf]
The Prognostics Equation for Biogeochemical Tracers Has no Unique Solution.
In this paper a tracer prognostic differential equation related to the marine chemistry HAMOC model is studied. Recently, the present author found that the Navier-Stokes equation has no exact solution. The following question is therefore justified: do numerical solutions of prognostic equations provide unique information about the distribution of nutrients in the ocean?
[3280] vixra:1705.0324 [pdf]
Speed of Microwave in Standing Wave
A standing wave consists of two identical waves moving in opposite direction. A frequency detector moving toward the standing wave will detect two different frequencies. One is blueshifted, the other is redshifted. The distance between two adjacent nodes in the standing wave is equal to half of the wavelength of both waves. Consequently, the wave detector will detect different speeds from both waves due to the same wavelength and the different frequencies. The calculation of speed is demonstrated with a typical household microwave oven which emits microwave of frequency range around 2.45 GHz and wavelength range around 12.2 cm.
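The abstract's arithmetic can be sketched as below; the detector speed v is an illustrative assumption, and the calculation simply reproduces the abstract's argument that one fixed node spacing combined with two Doppler-shifted frequencies yields two different inferred speeds:

```python
c = 299_792_458.0          # speed of light, m/s
f = 2.45e9                 # typical microwave oven frequency, Hz
lam = c / f                # wavelength ~ 0.122 m; node spacing is lam / 2
v = 30.0                   # detector speed toward the source, m/s (illustrative)

# First-order Doppler-shifted frequencies of the two counter-propagating waves
f_blue = f * (1 + v / c)
f_red = f * (1 - v / c)

# The abstract's argument: the same wavelength times two different
# frequencies gives two different inferred speeds, c + v and c - v
speed_blue = lam * f_blue
speed_red = lam * f_red
assert abs(speed_blue - (c + v)) < 1e-4
assert abs(speed_red - (c - v)) < 1e-4
```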
[3281] vixra:1705.0289 [pdf]
Distribution of the Residues and Cycle Counting
In this paper we take a closer look at the distribution of the residues of squarefree natural numbers and explain an algorithm to compute those distributions. We also give some conjectures about the minimal number of cycles in the squarefree arithmetic progression and explain an algorithm to compute these minimal numbers.
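A brute-force sketch of computing such residue distributions (the trial-division squarefree test, the modulus, and the search limit are illustrative choices, not the paper's algorithm):

```python
from collections import Counter

def is_squarefree(n):
    # n is squarefree if no prime square divides it; trial division
    # over all d with d*d <= n suffices for small n
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def residue_distribution(m, limit=20_000):
    # Count residues mod m over the squarefree n up to the limit
    return Counter(n % m for n in range(1, limit + 1) if is_squarefree(n))

dist = residue_distribution(4)
assert dist[0] == 0      # multiples of 4 are never squarefree
assert dist[2] > 0       # e.g. 2, 6, 10, ... are squarefree
```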
[3282] vixra:1705.0264 [pdf]
As Spinor $\chi = a|\uparrow\rangle + b|\downarrow\rangle$ is Physical in SU(2) Spin Space, Then Why is Isospinor $\psi = a|p\rangle + b|n\rangle$ Unphysical in SU(2) Isospin Space?
A spin angular momentum state with a polarization orientation in any arbitrary direction can be constructed as a spinor in the SU(2) spin space as $\chi = a|\uparrow\rangle + b|\downarrow\rangle$. However, the corresponding isospinor in the SU(2) isospin space, $\psi = a|p\rangle + b|n\rangle$, is discarded on empirical grounds. Still, we do not have any sound theoretical understanding of this phenomenon. Here we provide a consistent explanation of this effect.
[3283] vixra:1705.0262 [pdf]
Solutions of the Duffing and Painleve-Gambier Equations by Generalized Sundman Transformation
This paper shows that explicit and exact general periodic solutions for various types of Lienard equations can be computed by applying the generalized Sundman transformation. As an illustration of the efficiency of the proposed theory, the cubic Duffing equation and Painleve-Gambier equations were considered. As a major result, it has been found, for the first time, that equation XII of the Painleve-Gambier classification can exhibit, according to an appropriate parametric choice, trigonometric solutions, but with a shift factor.
[3284] vixra:1705.0216 [pdf]
Improved First Estimates to the Solution of Kepler's Equation
The manuscript provides a novel starting guess for the solution of Kepler's equation for the unknown eccentric anomaly E, given the eccentricity e and the mean anomaly M of an elliptical orbit.
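For context, Kepler's equation E - e sin(E) = M is typically solved by Newton iteration seeded with a starting guess; the sketch below uses the common starter E0 = M + e sin(M), not the paper's novel one:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    # Newton iteration on f(E) = E - e*sin(E) - M, seeded with the
    # common starting guess E0 = M + e*sin(M) (not the paper's starter)
    E = M + e * math.sin(M)
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(M=1.0, e=0.3)
assert abs(E - 0.3 * math.sin(E) - 1.0) < 1e-10
```

A better starting guess, as the abstract proposes, reduces the number of Newton steps needed, which matters when the equation is solved millions of times in orbit propagation.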
[3285] vixra:1705.0214 [pdf]
Quantum Nonlinear Four-Wave Mixing with a Single Atom in an Optical Cavity
Single atom cavity quantum electrodynamics grants access to nonclassical photon statistics, while electromagnetically induced transparency exhibits a dark state of long coherence time. The combination of the two produces a new light field via four-wave mixing that shows long-lived quantum statistics. We observe the new field in the emission from the cavity as a beat with the probe light that together with the control beam and the cavity vacuum is driving the four-wave mixing process. Moreover, the control field allows us to tune the new light field from antibunching to bunching, demonstrating our all-optical control over the photon-pair emission.
[3286] vixra:1705.0210 [pdf]
Machineless Solution to the Problem of Four Colors
It is proved that the irreducible map according to Franklin consists of 5 regions and, as a consequence, that 4 colors are sufficient for colouring any map on the sphere.
[3287] vixra:1705.0205 [pdf]
Isenthalpic Quantum Gravity
New simple and exact analytical solutions of the Einstein equations of general relativity (GR) and of the Qmoger (quantum modification of GR) equations are obtained. These solutions correspond to processes with invariant density of enthalpy (energy plus pressure). An interpretation of these solutions in terms of cosmic radiation and the production of massive particles, as well as a comparison with cosmic data (without fitting), is presented. It is suggested that isenthalpic processes may also be relevant to the excessive radiation from Jupiter and Saturn. Similar processes could potentially be used as a new source of energy on Earth.
[3288] vixra:1705.0203 [pdf]
The Relations Between Ancient China’s Taoism And Modern Mathematics & Physics
I have mainly analyzed the mathematical meaning, in non-classical mathematical theory, of three fundamental physics equations (Maxwell's equations, Dirac's equations and Einstein's equations) from the quantized core theory of ancient China's Taoism, and found that they share some structures described in the core of the theory of ancient China's Taoism; in particular, they all clearly exhibit the yin-yang induction structure. This reveals, in a way, the relations between ancient China's Taoism and modern mathematics and physics, which may help us to understand some problems of the fundamental theory of physics.
[3289] vixra:1705.0190 [pdf]
Isenthalpic Universe
New simple and exact analytical solutions of the Einstein equations of general relativity (GR) and of the Qmoger (quantum modification of GR) equations are obtained. These solutions correspond to processes with invariant density of enthalpy (energy plus pressure). An interpretation of these solutions in terms of cosmic radiation and the production of massive particles, as well as a comparison with cosmic data (without fitting), is presented. It is suggested that isenthalpic processes may also be relevant to the excessive radiation from Jupiter and Saturn. Similar processes could potentially be used as a new source of energy on Earth.
[3290] vixra:1705.0160 [pdf]
Isenthalpic Processes in Cosmology, Astrophysics and at Home
New simple and exact analytical solutions of the Einstein equations of general relativity (GR) and of the Qmoger (quantum modification of GR) equations are obtained. These solutions correspond to processes with invariant density of enthalpy (energy plus pressure). An interpretation of these solutions in terms of cosmic radiation and the production of massive particles, as well as a comparison with cosmic data (without fitting), is presented. It is suggested that isenthalpic processes may also be relevant to the excessive radiation from Jupiter and Saturn. Similar processes could potentially be used as a new source of energy on Earth.
[3291] vixra:1705.0157 [pdf]
OPRA Technique for M-QAM over Nakagami-m Fading Channel with Imperfect CSI
An analysis of an Optimum Power and Rate Adaptation (OPRA) technique has been carried out for Multilevel Quadrature Amplitude Modulation (M-QAM) over Nakagami-m flat fading channels, considering imperfect channel estimation at the receiver side. The optimal solution has been derived for continuous adaptation; it is a specific bound function and cannot be expressed in closed mathematical form. Therefore, a sub-optimal solution is derived for the continuous adaptation, and it has been observed that it tends to the optimum solution as the correlation coefficient between the true channel gain and its estimate tends to one. It has also been observed that receiver performance degrades as the estimation error increases.
[3292] vixra:1705.0142 [pdf]
On the Riemann Hypothesis, Complex Scalings and Logarithmic Time Reversal
An approach to solving the Riemann Hypothesis is revisited within the framework of the special properties of $\Theta$ (theta) functions, and the notion of $ {\cal C } { \cal T} $ invariance. The conjugation operation $ {\cal C }$ amounts to complex scaling transformations, and the $ {\cal T } $ operation $ t \rightarrow ( 1/ t ) $ amounts to the reversal $ log (t) \rightarrow - log ( t ) $. A judicious scaling-like operator is constructed whose spectrum $E_s = s ( 1 - s ) $ is real-valued, leading to $ s = {1\over 2} + i \rho$, and/or $ s $ = real. These values are the location of the non-trivial and trivial zeta zeros, respectively. A thorough analysis of the one-to-one correspondence among the zeta zeros, and the orthogonality conditions among pairs of eigenfunctions, reveals that $no$ zeros exist off the critical line. The role of the $ {\cal C }, {\cal T } $ transformations, and the properties of the Mellin transform of $ \Theta$ functions were essential in our construction.
[3293] vixra:1705.0119 [pdf]
A Theory of Baryonic Dark Matter
A model of the Universe is constructed, and a number of problems in contemporary physics, such as baryon asymmetry, dark matter, proton decay, galaxy rotation curves, quasars, SMBHs, relativistic astrophysical jets, coronal heating, the solar cycle, the supernova mechanism, magnetar magnetic fields, the cosmological lithium problem, the solar neutrino problem, the existence of black holes and electron spin, are discussed in the light of the hypothetical Universe. It is proposed that there exist stable neutrons and antineutrons that could explain the dark matter and the missing antimatter of the Universe.
[3294] vixra:1705.0118 [pdf]
Fluxon and Quantum of Canonical Angular Momentum Determined by the Same Conditional Equation
A transformation of the conditional equation for the magnetic flux quantum $\vec{\Phi}_{0} = \frac{2\pi}{e}\,\vec{\hbar}/2$ yields the conditional equation for the quantum of electromagnetic canonical angular momentum: $\frac{e}{2\pi}\,\vec{\Phi}_{0} = \vec{\hbar}/2$.
[3295] vixra:1705.0115 [pdf]
A Criterion Arising from Explorations Pertaining to the Oesterle-Masser Conjecture
Using an extension of the idea of the radical of a number, as well as a few other ideas, it is indicated as to why one might expect the Oesterle-Masser conjecture to be true. Based on structural elements arising from this proof, a criterion is then developed and shown to be potentially sufficient to resolve two relatively deep conjectures about the structure of the prime numbers. A sketch is consequently provided as to how it might be possible to demonstrate this criterion, borrowing ideas from information theory and cybernetics.
[3296] vixra:1705.0095 [pdf]
A Detailed Analysis of Geometry Using Two Variables
Calculating certain aspects of geometry has been difficult; they have defied analytic treatment. Here I propose a method of analysing shape and space in terms of two variables (n, m).
[3297] vixra:1705.0093 [pdf]
Parsimonious Adaptive Rejection Sampling
Monte Carlo (MC) methods have become very popular in signal processing during the past decades. Adaptive rejection sampling (ARS) algorithms are well-known MC techniques which efficiently draw independent samples from univariate target densities. ARS schemes yield a sequence of proposal functions that converge toward the target, so that the probability of accepting a sample approaches one. However, sampling from the proposal pdf becomes more computationally demanding each time it is updated. We propose the Parsimonious Adaptive Rejection Sampling (PARS) method, which obtains an efficient trade-off between acceptance rate and proposal complexity. The resulting algorithm is thus faster than the standard ARS approach.
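For background, plain non-adaptive rejection sampling with a fixed envelope can be sketched as below; ARS and PARS instead build a piecewise proposal that is adaptively refined, so the acceptance rate grows toward one. The Laplace envelope and standard-normal target here are illustrative assumptions, not the paper's setup:

```python
import math
import random

def rejection_sample_normal(n, seed=1):
    # Fixed Laplace(0, 1) envelope for the standard normal target:
    # N(x) <= M * Laplace(x) with M = sqrt(2e/pi) ~ 1.315
    rng = random.Random(seed)
    M = math.sqrt(2 * math.e / math.pi)
    out = []
    while len(out) < n:
        # Draw from the Laplace proposal by inversion of its CDF
        u = rng.random()
        x = math.log(2 * u) if u < 0.5 else -math.log(2 * (1 - u))
        target = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        proposal = 0.5 * math.exp(-abs(x))
        # Accept with probability target / (M * proposal) <= 1
        if rng.random() < target / (M * proposal):
            out.append(x)
    return out

xs = rejection_sample_normal(50_000)
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean ** 2
assert abs(mean) < 0.05 and abs(var - 1.0) < 0.05
```

With a fixed envelope the acceptance rate is stuck at 1/M; the point of ARS-type schemes is to spend some proposal-update cost to push that rate toward one, and PARS, per the abstract, limits how much proposal complexity that refinement is allowed to accumulate.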
[3298] vixra:1705.0028 [pdf]
An Efficient Computational Method for Handling Singular Second-Order, Three Points Volterra Integrodifferenital Equations
In this paper, a powerful computational algorithm is developed for the solution of classes of singular second-order, three-point Volterra integrodifferential equations in favorable reproducing kernel Hilbert spaces. The solution is represented in the form of a series in the Hilbert space W₂³[0,1] with easily computable components. To find the computational solutions, we generate an orthogonal basis from the obtained kernel functions and construct the orthonormal basis used to formulate the solutions. Numerical experiments are carried out in which two smooth reproducing kernel functions are used throughout the evolution of the algorithm to obtain the required nodal values of the unknown variables. Error estimates are proven to converge to zero in the sense of the space norm. Several computational simulation experiments are given to show the good performance of the proposed procedure. Finally, the results show that the present algorithm and simulated annealing provide a good scheduling methodology for multipoint singular boundary value problems restricted by the Volterra operator.
[3299] vixra:1705.0019 [pdf]
Double Conformal Geometric Algebra (long CGI2016/GACSE2016 paper in SI of AACA)
This paper introduces the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA), based in the $\mathcal{G}_{8, 2}$ Clifford geometric algebra. DCGA is an extension of CGA and has entities representing points and general (quartic) Darboux cyclide surfaces in Euclidean 3D space, including circular tori and all quadrics, and all surfaces formed by their inversions in spheres. Dupin cyclides are quartic surfaces formed by inversions in spheres of torus, cylinder, and cone surfaces. Parabolic cyclides are cubic surfaces formed by inversions in spheres that are centered on points of other surfaces. All DCGA entities can be transformed by versors, and reflected in spheres and planes. Keywords: Conformal geometric algebra, Darboux Dupin cyclide, Quadric surface Math. Subj. Class.: 15A66, 53A30, 14J26, 53A05, 51N20, 51K05
[3300] vixra:1705.0018 [pdf]
On Synchronization and the Relativity Principle
The Lorentz transformation allows two ways to compare time measurements from two moving clocks. We show that the more realistic way leads to the discovery that absolute rest plays a hidden role and prescribes a restriction on the relativity principle.
[3301] vixra:1704.0343 [pdf]
Moduli Space of Compact Lagrangian Submanifolds
We describe the deformations of the moduli space M of special Lagrangian submanifolds in the compact case and give a characterization of the topology of M using McLean's theorem. We consider Banach spaces of bundle sections and elliptic operators, and we use Hodge theory to study the topology of the manifold. Starting from McLean's results, by which the moduli space of compact special Lagrangian submanifolds is smooth and its tangent space can be identified with the harmonic 1-forms on these submanifolds, we can analyze their deformations. We then introduce a Riemannian metric on M, from which we obtain other important properties.
[3302] vixra:1704.0341 [pdf]
The Dirichlet and the Neumann Boundary Conditions May not Produce Equivalent Solutions to the Same Electrostatic Problem
Electrostatic problems are widely solved using two types of boundary conditions (BC), namely, the Dirichlet condition (DC) and Neumann condition (NC). The DC specifies values of electrostatic potential ($\psi$), while the NC specifies values of $\nabla \psi$ at the boundaries. Here we show that DC and NC may not produce equivalent solutions to a given problem; we demonstrate it with a particular problem: 1-D linearized Poisson-Boltzmann equation (PBE), which has been regularly used to find the distribution of ionic charges within electrolyte solutions. Our findings are immediately applicable to many other problems in electrostatics.
[3303] vixra:1704.0317 [pdf]
Study of Tornadoes that Have Reached the State of Paraná
Several tornadoes have been recorded in the Midwest, Southeast and South of Brazil. The southern region of Brazil has been hit by several of them in the last decade, with the state of Paraná recording three tornadoes in 2015. This work is a survey of the tornadoes that caused major damage to the population of Paraná, with emphasis on those that reached the municipalities of Balsa Nova, Francisco Beltrão, Cafelândia, Nova Aurora and Marechal Cândido Rondon. The main cause is related to El Niño, which has produced a significant rise in temperature and in the water vapor present in the atmosphere over the regions that influence the state's climate. Another likely factor is the increase in the global temperature of the planet, with a ripple effect on the warming of Pacific Ocean waters. The meeting of these conditions makes possible the formation of large storms that funnel and reach the states of Paraná and Santa Catarina. In general the storm fronts split in two, forming a separation channel like a wave, with crests (storms) and valleys (lulls), advancing over the state of Paraná.
[3304] vixra:1704.0305 [pdf]
Study of the Molecular Electrostatic Potential of D-Pinitol an Active Hypoglycemic Principle Found in Spring Flower Three Marys (Bougainvillea Species) in the Mm+ Method
Diabetes is one of the major causes of premature illness and death worldwide, and its prevalence has reached epidemic proportions. This work is a study of the molecular electrostatic potential, via molecular mechanics, of D-pinitol found in the Bougainvillea species, a Nyctaginaceae. A computational study of the molecular geometry of D-pinitol, a hypoglycemic compound present in Bougainvillea species, through the MM+ method is described in a computer simulation. It is an active antidiabetic compound. In the study, the cyclitol resembles the hooks of the bur of an Asteraceae weed: the molecule has the appearance of a bur. It probably binds to sugar molecules in the blood through hydrogen bonds.
[3305] vixra:1704.0288 [pdf]
Mathematics for Input Space Probes in the Atmosphere of Gliese 581d.
This work is a mathematical approach to the entry of an aerospace vehicle, such as a probe or capsule, into the atmosphere of the planet Gliese 581d, using data collected from the results of atmospheric models of the planet. GJ581d was the first planet candidate of a few Earth masses reported in the circumstellar habitable zone of another star. It is located in the Gliese 581 star system, around a red dwarf star about 20 light years away from Earth in the constellation Libra whose estimated mass is about a third of that of the Sun. It has been suggested that the exoplanet GJ581d might be able to support liquid water due to its relatively low mass and orbital distance. However, GJ581d receives 35% less stellar energy than Mars and is probably locked in tidal resonance, with extremely low insolation at the poles and possibly a permanent night side. Climate simulations demonstrate that GJ581d can have a stable atmosphere and surface liquid water for a wide range of plausible cases, making it the first confirmed super-Earth (2-10 Earth masses) in the habitable zone. According to the general principle of relativity, “All systems of reference are equivalent with respect to the formulation of the fundamental laws of physics.” In this case all the equations studied apply to the exoplanet Gliese 581d. If humanity is able to send a probe to Gliese 581d, all the mathematical conditions exist to set it down successfully on its surface.
[3306] vixra:1704.0284 [pdf]
Allocryptopine, Berberine, Chelerythrine, Copsitine, Dihydrosanguinarine, Protopine and Sanguinarine. Molecular Geometry of the Main Alkaloids Found in the Seeds of Argemone Mexicana Linn
This work is a study of the geometry, via molecular mechanics, of the main alkaloids found in the seeds of the prickly poppy. A computational study of the molecular geometry of the main alkaloid compounds present in the plant's seeds is described in a computer simulation. The plant contains the active compounds allocryptopine, berberine, chelerythrine, copsitine, dihydrosanguinarine, protopine and sanguinarine. Argemone mexicana Linn is considered one of the most important plant species in the traditional Mexican and Indian medicine systems. The seeds have toxic as well as bactericidal, hallucinogenic, fungicidal and insecticidal properties, due to isoquinoline alkaloids such as berberine and sanguinarine. The studied alkaloids form two groups with similar charge distributions within each group; one group has dipole moments twice as high as the other.
[3307] vixra:1704.0283 [pdf]
Molecular Electrostatic Potential of the Main Monoterpenoids Compounds Found in Oil Lemon Tahiti (Citrus Latifolia Var Tahiti)
This work is a study of the geometry, via molecular mechanics, of the main monoterpenoids found in the oil of the Tahiti lemon. The Tahiti lemon is the result of grafting the Persian lime on the Rangpur lime and has no seeds. A computational study of the molecular geometry of the main monoterpenoid compounds present in the fruit oil is described in a computer simulation. The fruit contains the active terpenoids alpha-pinene, beta-pinene, limonene and gamma-terpinene. The studied monoterpenoids form two groups with similar charge distributions and electric potentials within each group; alpha-pinene and limonene present the largest and smallest electric dipole moments, respectively.
[3308] vixra:1704.0282 [pdf]
The Asymptotic Behavior of Defocusing Nonlinear Schrödinger Equations
This article is concerned with the scattering problem for the defocusing nonlinear Schrödinger equations (NLS) with a power nonlinearity |u|^p u, where 2/n < p < 4/n. We show that for any initial data in H^{0,1}_x the solution will eventually scatter, i.e. U(-t)u(t) tends to some function u_+ as t tends to infinity.
[3309] vixra:1704.0268 [pdf]
Can Quantum Mechanical Systems Influence the Geometry of the Fiber Bundle/space-Time?
We suggest that gravitation is an emergent phenomenon whose origin is the information signal associated with quantum fields acting like test particles. We show how the metric (Lamé) coefficients emerge as position and time operator mean-value densities. The scalar curvature of space-time in the case of a Bose-Einstein condensate or superfluid/superconductor is calculated, and an experimentally verifiable prediction of the theory is made.
[3310] vixra:1704.0236 [pdf]
Unrealistic Assumptions Inherent in Maximal Extension
I argue that maximal extension makes improbable assumptions about future conditions. I start by looking at the Schwarzschild metric and showing that it does not quite represent the exterior of a collapsed star, although it is easy to argue that the mismatch is immaterial. I then look at the collapse of a cloud of dust using the Robertson-Walker metric, which might seem to justify using the Schwarzschild metric to describe the exterior of a black hole. I then show how the Schwarzschild metric is modified when the interior is a collapsed dust cloud, and finally show how the maximal extension of a Schwarzschild black hole makes unrealistic assumptions about the future.
[3311] vixra:1704.0215 [pdf]
Unit-Jacobian Coordinate Transformations: The Superior Consequence of the Little-Known Einstein-Schwarzschild Coordinate Condition
Because the Einstein equation can't uniquely determine the metric, it must be supplemented by additional metric constraints. Since the Einstein equation can be derived in a purely special-relativistic context, those constraints (which can't be generally covariant) should be Lorentz-covariant; moreover, for the effect of the constraints to be natural from the perspective of observational and empirical physical scientists, they should also constrain the general coordinate transformations (which are compatible with the unconstrained Einstein equation) so that the constrained transformations manifest a salient feature of the Lorentz transformations. The little-known Einstein-Schwarzschild coordinate condition, which requires the metric's determinant to have its -1 Minkowski value, thereby constrains coordinate transformations to have unit Jacobian, and for that reason causes tensor densities to transform as true tensors, which is a salient feature of the Lorentz transformations. The Einstein-Schwarzschild coordinate condition also allows the static Schwarzschild solution's singular radius to be exactly zero; though another coordinate condition that allows zero Schwarzschild radius exists, it isn't Lorentz-covariant.
[3312] vixra:1704.0205 [pdf]
Formula Analyzer: Find the Formula by Parameters
Consider a formula, e.g. x + y^2 - z = r. Usually one needs to find one parameter's value knowing the others. Let us instead pose the inverse problem: find the formula itself, knowing only its parameters. We call the solution of such a problem reverse computing. To that end we create an algorithm and implement it as program code.
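A minimal sketch of such reverse computing, assuming a small hand-picked candidate set (the candidate formulas, names, and matching tolerance below are illustrative, not the paper's algorithm):

```python
# Toy "reverse computing": given observed parameter tuples (x, y, z, r),
# find which candidate formula reproduces r on every observation.
CANDIDATES = {
    "x + y^2 - z": lambda x, y, z: x + y ** 2 - z,
    "x * y + z": lambda x, y, z: x * y + z,
    "x - y + z": lambda x, y, z: x - y + z,
}

def find_formula(observations):
    # Return the first candidate consistent with all observations,
    # or None if no candidate fits
    for name, f in CANDIDATES.items():
        if all(abs(f(x, y, z) - r) < 1e-9 for x, y, z, r in observations):
            return name
    return None

obs = [(1, 2, 3, 2), (0, 1, 1, 0), (5, 0, 2, 3)]   # generated from x + y^2 - z
assert find_formula(obs) == "x + y^2 - z"
```

A realistic version would enumerate expression trees rather than a fixed list, but the search-and-check structure stays the same: more observations prune more candidates.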
[3313] vixra:1704.0196 [pdf]
New Approximation Algorithms of Pi, Accelerated Convergence Formulas from N = 100 to N = 2m
We give algorithms for the calculation of pi. These algorithms can be easily developed in a linear manner and allow the calculation of pi with an infinite degree of convergence. Of course, the calculation of the second term passes through the first one, and, as with this type of algorithm, more memory is required for the calculations, contrary to the BBP formula [1], whose execution corresponds to the order of the desired digit. The advantage of our formulas, in spite of the difficulty associated with extracting sin(x), lies in their degree of convergence, which is infinite; they prove the Borwein brothers' hypothesis on the construction of algorithms of any speed, as symbolized in our generic formula (8) of this paper. These formulas are for the most part totally new; we had previously found several other formulas for pi.
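Not the paper's own formulas, but one classical sin(x)-based scheme with accelerated (cubic) convergence to pi can be sketched to show the flavor of such algorithms:

```python
import math

# The fixed point of x -> x + sin(x) is pi, and the iteration converges
# cubically: near pi, sin(x) = (pi - x) - (pi - x)^3 / 6 + ..., so each
# step roughly cubes the error. A handful of steps reaches machine precision.
x = 3.0
for _ in range(4):
    x = x + math.sin(x)
assert abs(x - math.pi) < 1e-12
```

Each iteration needs a full-precision evaluation of sin(x), which is the cost the abstract alludes to; the payoff is that the number of correct digits roughly triples per step rather than growing linearly.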
[3314] vixra:1704.0187 [pdf]
Reflection Symmetry and Time
Two identical stopwatches moving at the same speed elapse the same time after covering the same distance. Start one stopwatch later than the other. The time difference between the two stopwatches remains constant after both have elapsed the same time, and it remains constant while both stopwatches undergo identical acceleration. Therefore, the elapsed time in an accelerating reference frame is identical to the elapsed time in a stationary reference frame. Consequently, a physical system that exhibits Reflection Symmetry in its motion demonstrates that the time of a moving clock is independent of the relative motion between the clock and its observer.
[3315] vixra:1704.0166 [pdf]
The Relativistic Mass Ratio in Ultrarelativistic Photon Rockets
In this paper we take a closer look at the initial mass relative to the relativistic mass of the payload for an ideal photon rocket travelling at its maximum velocity. Haug has recently suggested that for all known subatomic particles, a minimum of two Planck masses of fuel are needed to accelerate the fundamental particle to its suggested maximum velocity (see [1]). Here we will show how this view is consistent with insight given by Tipler at a NASA Breakthrough Propulsion Physics Workshop Proceedings in 1999 (see [2]). Tipler suggested that the mass ratio of the initial rest mass of an ultra-relativistic rocket relative to the relativistic mass of the payload is likely “just” two. An ultrarelativistic rocket is one travelling at a velocity very close to the speed of light. We will here show that the Tipler factor is consistent with results derived from Haug’s suggested maximum velocity for any known observed subatomic particle. However, we will show that the Tipler factor of two is unlikely to hold for ultra-heavy subatomic particle payloads. With ultra-heavy particles, we think of subatomic particles with mass close to that of the Planck mass. Our analysis indicates that the initial mass relative to the relativistic mass of the payload for any type of subatomic particle rocket must be between one and two. Remarkably, the mass ratio is only one for a Planck mass particle. This at first sounds absurd until we understand that the Planck mass particle is probably the very collision point between two photons. Even if a photon’s speed “always is” considered to be the speed of light, we can think of it as standing still at the instant it collides with another photon (backscattering). The mass ratio to accelerate a particle that only exists at velocity zero is naturally one. This is true since no fuel is needed to go from zero to zero velocity. Remarkably this indicates that the Planck mass particle and the Planck length likely are invariant. 
This can only happen if the Planck mass particle only lasts for an instant before it bursts into energy, which is what we could expect for the collision between two photons.
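The claimed range of the mass ratio can be checked against the standard ideal photon rocket relation (a textbook result, not this paper's own derivation): the initial-to-final rest mass ratio is the relativistic Doppler factor, and dividing by the payload's Lorentz factor gives a ratio of initial rest mass to relativistic payload mass of exactly 1 + β, which runs from one at rest to Tipler's factor of two as β → 1.

```python
import math

def mass_ratio(beta):
    """Initial rest mass over relativistic payload mass for an ideal
    photon rocket, using the standard textbook relation
    M_i / M_f = sqrt((1 + b) / (1 - b)) and payload relativistic
    mass gamma * M_f; algebraically the ratio reduces to 1 + beta."""
    doppler = math.sqrt((1 + beta) / (1 - beta))   # M_i / M_f
    gamma = 1.0 / math.sqrt(1 - beta**2)
    return doppler / gamma                          # = 1 + beta

print(mass_ratio(0.0))        # 1.0: no fuel needed to go from zero to zero
print(mass_ratio(0.999999))   # approaches Tipler's factor of 2
```

This reproduces both limiting cases discussed in the abstract: ratio one for a particle that exists only at zero velocity, and ratio two in the ultrarelativistic limit.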
[3316] vixra:1704.0159 [pdf]
Why Do Planets Rotate Around Themselves ?
It is commonly believed that the self-rotation angular momentum of planets is due to an original angular momentum of dense interstellar clouds at the formation stage of the stars. However, the study shows something completely different: a test planet in free-fall, in fact, follows two geodesics; the first is the usual Schwarzschild path, and the second is a Schwarzschild-like path, defined (spatially) locally: an elliptical orbit in the plane (U(1)-variable, azimuthal angle). The analysis leads to the fact that the motion along these geodesics (physically) is exactly the self-rotation of a charged test planet in Reissner-Nordstrom spacetime. The results reveal a more general understanding of the Einstein equivalence principle: locally, the gravitational field can be (in the Reissner-Nordstrom space) replaced with an accelerated and rotated local frame.
[3317] vixra:1704.0128 [pdf]
Photon Models Are Derived by Solving a Bug in Poynting and Maxwell Theory
It is found that the Poynting theorem conflicts with the energy conservation principle; this is a bug in the Poynting theorem. The Poynting theorem is derived from the Maxwell equations by using the superposition principle of the fields, hence this bug also exists either in the superposition principle or in the Maxwell equations. The Poynting theorem is corrected in this article. After the correction the energy is not quadratic and hence the field is also not linear. The concept of the superposition of fields also needs to be corrected, hence new definitions for the inner product and cross product are proposed. The corrected Poynting theorem becomes the mutual energy formula, which is strongly related to the mutual energy theorems. It is shown that, starting from the mutual energy formula, the whole electromagnetic theory can be reconstructed. The Poynting theorem can be proved from the mutual energy formula by adding pseudo items. The Maxwell equations can be derived from the Poynting theorem as sufficient conditions. Hence if the mutual energy formula is correct, the Maxwell equations can still be applied, keeping their problem in mind. Most of the problems originally caused by the Maxwell equations are solved. Examples of these problems are: (1) the electric field infinity which needs to be renormalized in quantum physics; (2) the collapse of the electromagnetic field: the waves have to collapse to their absorber, otherwise energy is not conserved; (3) the emitter can send energy without an absorber, which conflicts with the direct interaction principle and the absorber theory; (4) if our universe is not completely opaque, the charges will continually send energy to the outside of our universe, so our universe would continually lose energy; however, there is no evidence supporting that our universe is opaque. The new theory supports the existence of the advanced wave, and hence also strongly supports the absorber theory and the transactional interpretation of quantum physics.
It can offer an equation for the photon and a good explanation of the photon's duality. If the photon and the electromagnetic field obey the mutual energy formula, it is very possible that all other quanta also obey similar mutual energy formulas. Hence the mutual energy formula can be applied as a principle or axiom for electromagnetic theory and quantum physics. According to this theory, the asynchronous retarded wave and the asynchronous advanced wave of the electromagnetic fields are both ability or probability waves, which partly agrees with the Copenhagen interpretation.
[3318] vixra:1704.0108 [pdf]
Closed-Form Solution for the Nontrivial Zeros of the Riemann Zeta Function
In the year 2017 it was formally conjectured that if the Bender-Brody-M\"uller (BBM) Hamiltonian can be shown to be self-adjoint, then the Riemann hypothesis holds true. Herein we discuss the domain and eigenvalues of the Bender-Brody-M\"uller conjecture. Moreover, a second quantization of the BBM Schr\"odinger equation is performed, and a closed-form solution for the nontrivial zeros of the Riemann zeta function is obtained. Finally, it is shown that all of the nontrivial zeros are located at $\Re(z)=1/2$.
[3319] vixra:1704.0094 [pdf]
On the Possible Role of Mach's Principle and Quantum Gravity in Cosmic Rotation a Short Communication
In this paper, we show one theoretical possibility for cosmic rotation. We would like to argue that: 1) a globally rotating universe is consistent with general relativity and quantum gravity; 2) as the currently believed dark energy has no observational evidence, it is better to search for cosmic rotational effects. In this context, one can see the mainstream journal articles on the cosmic axis of rotation and the observational effects of cosmic rotation. Based on Mach's principle and quantum gravity, we imagine our universe as the best quantum gravity sphere and assume that, at any stage of cosmic evolution: 1) the Planck scale Hubble parameter plays a crucial role; 2) space-time curvature follows ${GM_t}\cong{R_tc^2}$, where $M_t$ and $R_t$ represent the ordinary cosmic mass and radius respectively; 3) the cosmic thermal wavelength is inversely proportional to the ordinary matter density; 4) the magnitude of the angular velocity is equal to the magnitude of the Hubble parameter. Based on these assumptions, at $H_0\cong 70 \textrm{\,km/sec/Mpc\,}$, the estimated current matter density is 0.04341$\left(\frac{3H_0^2}{8 \pi G}\right)$ and the corresponding radius is 29 Gpc. The current cosmic rotational kinetic energy density is 0.667$\left(\frac{3H_0^2c^2}{8 \pi G}\right)$. We would like to emphasize that: 1) the currently believed mysterious dark energy can be identified with the current cosmic rotational kinetic energy; 2) the currently believed `inflation' concept can be relinquished. With advanced science, engineering and technology, and by considering the most recent observations on the `cosmic axis of evil' and the `axial alignment' of distant astronomical bodies, a unified model of quantum cosmology can be developed.
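The density scale $3H_0^2/(8\pi G)$ against which the abstract quotes its numbers is the standard critical density, easy to reproduce numerically (the constants below are standard approximate values, not taken from the paper):

```python
import math

# Critical-density scale 3*H0^2 / (8*pi*G) at H0 = 70 km/s/Mpc.
G   = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22          # one megaparsec in meters
H0  = 70e3 / Mpc         # Hubble parameter in s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"{rho_crit:.2e} kg/m^3")   # about 9.2e-27 kg/m^3
```

The abstract's densities (0.04341 and 0.667 in these units) are fractions of this value.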
[3320] vixra:1704.0093 [pdf]
A Finite Field Analogue for Appell Series F_{3}
In this paper we introduce a finite field analogue for the Appell series F_{3} and give some reduction formulae and certain generating functions for this function over finite fields.
[3321] vixra:1704.0075 [pdf]
Mind-Body Problem; A Final Solution by Quantum Language
We recently proposed "quantum language", which is characterized not only as the metaphysical and linguistic turn of quantum mechanics but also as the linguistic turn of the Descartes=Kant epistemology. If this turn is regarded as progress in the history of Western philosophy, we should study the linguistic mind-body problem rather than the epistemological mind-body problem. In this paper we show that solving the mind-body problem and proposing the "measurement axiom" in quantum language are equivalent. Since our approach lies within dualistic idealism, we believe that our linguistic solution is the only true solution (i.e., even if other solutions exist, they belong not to philosophy but to science).
[3322] vixra:1704.0063 [pdf]
Group Importance Sampling for Particle Filtering and MCMC
Bayesian methods and their implementations by means of sophisticated Monte Carlo techniques have become very popular in signal processing over the last years. Importance Sampling (IS) is a well-known Monte Carlo technique that approximates integrals involving a posterior distribution by means of weighted samples. In this work, we study the assignation of a single weighted sample which compresses the information contained in a population of weighted samples. Part of the theory that we present as Group Importance Sampling (GIS) has been employed implicitly in different works in the literature. The provided analysis yields several theoretical and practical consequences. For instance, we discuss the application of GIS into the Sequential Importance Resampling framework and show that Independent Multiple Try Metropolis schemes can be interpreted as a standard Metropolis-Hastings algorithm, following the GIS approach. We also introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS. The first one, named Group Metropolis Sampling method, produces a Markov chain of sets of weighted samples. All these sets are then employed for obtaining a unique global estimator. The second one is the Distributed Particle Metropolis-Hastings technique, where different parallel particle filters are jointly used to drive an MCMC algorithm. Different resampled trajectories are compared and then tested with a proper acceptance probability. The novel schemes are tested in different numerical experiments such as learning the hyperparameters of Gaussian Processes, two localization problems in a wireless sensor network (with synthetic and real data) and the tracking of vegetation parameters given satellite observations, where they are compared with several benchmark Monte Carlo techniques. Three illustrative Matlab demos are also provided.
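The core compression idea can be sketched in a few lines: each group of weighted samples is summarized by one pair (group sample, group weight), and a global estimator is then built from the compressed pairs alone. This is a simplified illustration of the GIS principle, not the paper's full schemes; the target, proposal, and group sizes below are arbitrary choices.

```python
import math
import random

def group_compress(samples, weights):
    """Summarize a population of weighted samples by one pair: the group
    weight is the sum of the individual weights, and the group sample is
    the self-normalized IS estimate within the group."""
    W = sum(weights)
    x_group = sum(w * x for x, w in zip(samples, weights)) / W
    return x_group, W

# Unnormalized densities: self-normalized IS is invariant to constants.
def target(x):   return math.exp(-0.5 * x * x)          # N(0,1), unnormalized
def prop_pdf(x): return math.exp(-0.5 * (x / 2.0)**2)   # N(0,2), unnormalized

random.seed(1)
groups = []
for _ in range(50):                    # 50 groups of 200 samples each
    xs = [random.gauss(0.0, 2.0) for _ in range(200)]
    ws = [target(x) / prop_pdf(x) for x in xs]
    groups.append(group_compress(xs, ws))

# Global estimator of the target mean, using only the compressed pairs;
# it equals the self-normalized estimate over all 10000 raw samples.
W_tot = sum(W for _, W in groups)
mean_est = sum(W * xg for xg, W in groups) / W_tot
print(mean_est)   # close to 0, the mean of the target
```

Weighting each group estimate by its group weight recovers exactly the estimator one would get from the uncompressed population, which is the property that makes the compression lossless for this purpose.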
[3323] vixra:1704.0061 [pdf]
A More Complete Model for High-Temperature Superconductors
To date, the Hubbard model and its strong coupling limit, the t-J model, serve as the canonical model for strongly correlated electron systems in solids. Approximating the Coulomb interaction by only the on-site term (Hubbard U-term), however, may not be sufficient to describe the essential physics of interacting electron systems. We develop a more complete model in which all the next leading order terms besides the on-site term are retained. Moreover, we discuss how the inclusion of these neglected interaction terms in the Hubbard model changes the t-J model.
[3324] vixra:1704.0037 [pdf]
Preons, Standard Model, Gravity with Torsion and Black Holes
A previous spin 1/2 preon model for the substructure of the standard model quarks and leptons is complemented to provide particle classification group, preon interactions and a tentative model of black holes. The goal of this study is to analyze a phenomenological theory of all interactions. A minimal amount of physical assumptions are made and only experimentally verified global and gauge groups are employed: SLq(2), the three of the standard model and the full Poincar\'e group. Gravity theory with torsion is introduced producing an axial-vector field coupled to preons. The mass of the axial-vector particle is estimated to be near the GUT scale. The boson can materialize above this scale and gain further mass to become a black hole at Planck mass while massless preons may form the horizon. A particle-black hole duality is proposed.
[3325] vixra:1704.0029 [pdf]
An Introduction to the N-Irreducible Sequents and the N-Irreducible Number
In this work, we introduce the $n$-irreducible sequents and the $n$-irreducible numbers defined with the help of second order logic. We give many concrete examples of $n$-irreducible numbers and $n$-irreducible sequents with the Peano axioms and the axioms of the real numbers. Shortly, a sequent is $n$-irreducible iff the sequent is composed of some closed hypotheses and an $n$-irreducible formula (a closed formula with one internal variable such that the formula is only true when we set that variable to the unique natural number $n$), and there does not exist a strict sub-sequent composed of some closed sub-hypotheses and some sub-$m$-irreducible formula with $m>1$. The definition is motivated by the intuition that \Nathypo do not carry natural numbers or "hidden natural numbers" except for the numbers $0$ and $1$, i.e., they can be used in an $n$-irreducible sequent. Moreover, we postulate at second order of logic that \Nathypo are not chosen randomly: \Nathypo have the property of giving the largest $n$-irreducible number $N_Z \NZ$ among a finite number of $n$-irreducible sequents. The Collatz conjecture, the Goldbach conjecture, the Polignac conjecture, the Firoozbakht conjecture, the Oppermann conjecture, the Agoh-Giuga conjecture, the generalized Fermat conjecture and the Schinzel hypothesis H are reviewed with this new (second order logic) $n$-irreducible axiom. Finally, two open questions remain: Can we prove that a natural number is not $n$-irreducible? If an $n$-irreducible number $n$ is found with a function symbol $f$ whose output values are only $0$ and $1$, can we always replace the function symbol $f$ by another function symbol $\tilde{f}$ such that $\tilde{f}=1-f$ and the new sequent is still $n$-irreducible?
[3326] vixra:1704.0002 [pdf]
Special Relativity and Einstein Equivalence Principle
The Einstein Equivalence Principle is the cornerstone of the general theory of relativity. Special relativity is assumed to be verified at any point on the curved Riemannian manifold. This leads to a mathematical consistency between the Einstein equations and the principles of special relativity.
[3327] vixra:1703.0307 [pdf]
Strings and Loops in the Language of Geometric Clifford Algebra
Understanding quantum gravity motivates string and loop theorists. Both employ geometric wavefunction models. Gravity enters strings by taking the one fundamental length permitted by quantum field theory to be not the high energy cutoff, rather the Planck length. This comes at a price - string theory cannot be renormalized, and the solutions landscape is effectively infinite. Loop theory seeks only to quantize gravity, with hope that insights gained might inform particle physics. It does so via interactions of two dimensional loops in three-dimensional space. While both approaches offer possibilities not available to Standard Model theorists, it is not unreasonable to suggest that geometric wavefunctions comprised of fundamental geometric objects of three-dimensional space are required for successful models, written in the language of geometric Clifford algebra.
[3328] vixra:1703.0304 [pdf]
Towards a Solution of the Riemann Hypothesis
In 1859, Georg Friedrich Bernhard Riemann had announced the following conjecture, called Riemann Hypothesis : The nontrivial roots (zeros) $s=\sigma+it$ of the zeta function, defined by: $$\zeta(s) = \sum_{n=1}^{+\infty}\frac{1}{n^s},\,\mbox{for}\quad \Re(s)>1$$ have real part $\sigma= 1/2$. We give a proof that $\sigma= 1/2$ using an equivalent statement of the Riemann Hypothesis concerning the Dirichlet $\eta$ function.
[3329] vixra:1703.0299 [pdf]
Part II - Gravity, Anomaly Cancellation, Anomaly Matching, and the Nucleus
Here we provide a consistent solution of the baryon asymmetry problem. The same model is also able to tell us the mathematical basis of the "equivalence principle" (i.e., the inertial mass being equal to the gravitational mass). We are also able to see where the semi-simple group structure of hadrons as $SU(2)_I \otimes U(1)_B$ (of the pre-eightfold-way-model period) arises from. Thus we are able to understand the origin of the Gell-Mann-Nishijima expression for the electric charges, $Q = I_3 + \frac{B}{2}$. This paper is a continuation of my recent paper, "Gravity, Anomaly Cancellation, Anomaly Matching, and the Nucleus" (syedafsarabbas.blogspot.in).
[3330] vixra:1703.0296 [pdf]
Detection of Vibranium in the Seyfert Galaxy NGC 1365 Through X-Ray Spectroscopy
We present results from a joint NuSTAR/VLT monitoring of the Seyfert 2 galaxy NGC 1365. We find conclusive evidence for a K-alpha emission line from atomic vibranium in the X-ray spectrum, in combination with molecular vibranium absorption features in the mid-infrared spectrum. This is the first direct observation of vibranium in an astronomical environment. We also derive a measurement of its abundance of 2e-6 with respect to hydrogen.
[3331] vixra:1703.0295 [pdf]
Higher Order Derivatives of the Inverse Function
A general recursive and limit formula for higher order derivatives of the inverse function is presented. The formula is then used in a couple of mathematical applications: expansion of the inverse function into a Taylor series, solving equations, constructing random numbers with a given distribution from uniformly distributed random numbers, and expanding a function in the neighborhood of a given point in an alternative way to the Taylor expansion.
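The first few cases of such a recursion are classical and easy to check numerically. The closed forms below are the standard textbook expressions for the first three derivatives of an inverse function (illustrating the kind of recursion the paper generalizes, not the paper's own general formula):

```python
import math

def inverse_derivatives(fp, fpp, fppp):
    """First three derivatives of g = f^{-1} at y = f(x), given the
    derivatives of f at x (standard closed forms):
        g'   =  1 / f'
        g''  = -f'' / f'**3
        g''' = (3 f''**2 - f' f''') / f'**5
    """
    g1 = 1.0 / fp
    g2 = -fpp / fp**3
    g3 = (3.0 * fpp**2 - fp * fppp) / fp**5
    return g1, g2, g3

# Check against f(x) = exp(x), whose inverse is ln(y):
y = 2.5
x = math.log(y)              # f'(x) = f''(x) = f'''(x) = exp(x) = y
g1, g2, g3 = inverse_derivatives(y, y, y)
print(g1, g2, g3)            # 1/y, -1/y**2, 2/y**3, i.e. derivatives of ln
```

Each higher order is obtained by differentiating the previous one through the chain rule, which is exactly the pattern a general recursive formula systematizes.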
[3332] vixra:1703.0282 [pdf]
Selfinteraction of Adiabatic Systems
Given an adiabatic system of particles as defined in [4], the problem is whether and to what degree one can break it into its constituents and describe their mutual interaction.
[3333] vixra:1703.0276 [pdf]
Reconciling General Relativity with Quantum Mechanics
There has been a crisis in theory regarding General Relativity and Quantum Mechanics. Here I propose a solution by saying there is a flexible structure to the universe and it is essentially the structure of the vacuum energy.
[3334] vixra:1703.0272 [pdf]
The Pioneer Satellites Anomaly is a Natural Constant
It is believed that the Pioneer satellites anomaly could be resolved by the orbit determination programs (ODP) if certain elements of the satellites that were omitted or rejected as inapplicable were taken into account. This is not the case: up to now, not a single proposition has been able to resolve the anomalous acceleration that plagued those satellites. We show that the Pioneer anomaly is in fact a natural and universal constant, previously remarked upon as an apparent numerical coincidence.
[3335] vixra:1703.0271 [pdf]
The Pioneer Satellites Anomaly is a Natural Constant
It is believed that the Pioneer satellites anomaly would be explained by taking into account certain elements omitted or rejected as irrelevant in the satellite Orbit Determination Programs. This is not the case, and until now, no proposition along these lines has conclusively explained this acceleration. We show that what appeared to be a numerical coincidence is not one, but is in fact a natural universal constant.
[3336] vixra:1703.0267 [pdf]
Iterative Computation of Moment Forms for Subdivision Surfaces
The derivation of multilinear forms used to compute the moments of sets bounded by subdivision surfaces requires solving a number of systems of linear equations. As the support of the subdivision mask or the degree of the moment grows, the corresponding linear system becomes intractable to construct, let alone to solve by Gaussian elimination. In the paper, we argue that the power iteration and the geometric series are feasible methods to approximate the multilinear forms. The tensor iterations investigated in this work are shown to converge at favorable rates, achieve arbitrary numerical accuracy, and have a small memory footprint. In particular, our approach makes it possible to compute the volume, centroid, and inertia of spatial domains bounded by Catmull-Clark and Loop subdivision surfaces.
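The geometric-series idea can be shown on a toy system: when the spectral radius of $A$ is below one, $(I - A)^{-1} b$ equals the Neumann series $b + Ab + A^2b + \dots$, so the solve reduces to repeated matrix-vector products, with no Gaussian elimination. The 2x2 matrix below is a stand-in for the paper's intractable subdivision systems:

```python
def neumann_solve(A, b, iters=60):
    """Solve (I - A) x = b via the geometric (Neumann) series
    x = b + A b + A**2 b + ..., valid when the spectral radius of A
    is below 1. Only matrix-vector products are needed, which is why
    the approach scales where direct elimination does not."""
    n = len(b)
    x = b[:]
    term = b[:]
    for _ in range(iters):
        term = [sum(A[i][j] * term[j] for j in range(n)) for i in range(n)]
        x = [x[i] + term[i] for i in range(n)]
    return x

A = [[0.4, 0.1],
     [0.2, 0.3]]          # spectral radius 0.5, so the series converges
b = [1.0, 1.0]
x = neumann_solve(A, b)
print(x)                  # [2.0, 2.0] solves (I - A) x = b
```

Each extra term shrinks the error by the spectral radius, matching the favorable convergence rates reported for the tensor iterations.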
[3337] vixra:1703.0260 [pdf]
Bell's Questions Resolved Via Local Realistic Quantum Mechanics
<p>‘... all this action at a distance business will pass [like the ether]. If we're lucky it will be to some big new development like the theory of relativity. Maybe someone will just point out that we were being rather silly, with no big new development. But anyway, I believe the questions will be resolved,' after Bell (1990:9). ‘Nobody knows where the boundary between the classical and quantum domain is situated. More plausible to me is that we'll find that there is no boundary: the hidden-variable possibility,' after Bell (2004:28-29).</p> <p><b>Abstract:</b> Studying Bell's work, using classical analysis and author-date referencing suited to undergraduate STEM students, we arrive at a new classical theory: local realistic quantum mechanics. Adjusting EPR to accord with Bohr's insight, and accepting Bell's principles (but not his false inferences), our method follows: (i) we allow Bell's pristine λ (and its pairwise twin μ) to be classical fair-coin vectors in 3-space; (ii) we complete the QM account of EPR correlations in a classical way; (iii) we deliver Bell's hope for a simple constructive model of EPRB; (iv) we justify EPR's belief that additional variables would bring locality and causality to QM's completion; (v) we refute key claims that such variables are impossible; (vi) we show that interactions between particles and polarizers are driven by the total angular momentum; (vii) we bypass Pauli's vector-of-matrices, but retain all the tools of the quantum trade. In short, under local realism: classically deriving the related results of quantum theory, we classically endorse Einstein's locally-causal Lorentz-invariant worldview.</p>
[3338] vixra:1703.0256 [pdf]
Quantum of Canonical Electromagnetic Angular Momentum = $\hbar/2$
It is analytically determined that the smallest theoretically possible nonzero canonical electromagnetic angular momentum $\hbar/2$ arises when an electron is inserted into one magnetic flux quantum. The analysis further reveals how magnetic flux quantization is inherently linked up with angular momentum quantization. Bohr's correspondence principle is satisfied.
[3339] vixra:1703.0254 [pdf]
The Mass Gap, Kg, the Planck Constant and the Gravity Gap
In this paper we discuss and calculate the mass gap. Based on the mass gap we are redefining what a kilogram may truly represent. This enables us to redefine the Planck constant in what we consider to be more fundamental units. Part of the analysis is based on recent developments in mathematical atomism. Haug [1, 2, 3] has shown that all of Einstein’s special relativity mathematical end results [4] can be derived from two postulates in atomism. However, atomism gives some additional boundary conditions and removes a series of infinite challenges in physics in a very simple and logical way. While the mass gap in quantum field theory is an unsolved mystery, under atomism we have an easily defined, discrete, and “exact” mass gap. The minimum rest-mass that exists above zero is 1.1734 × 10^(−51) kg, assuming an observational time window of one second. Under our theory it seems meaningless to talk about a mass gap without also talking about the observational time window. The mass gap in one Planck second is the Planck mass. Further, the mass gap of just 1.1734 × 10^(−51) kg has a relativistic mass equal to the Planck mass. The very fundamental particle that makes up all mass and energy has a rest-mass of 1.1734 × 10^(−51) kg. This is also equivalent to a Planck mass that lasts for one Planck second.
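The quoted figure can be checked directly: a rest mass of 1.1734 × 10^(−51) kg is the Planck mass scaled by one Planck time per one-second observational window (the constants below are standard approximate values, not taken from the paper):

```python
# Planck mass times (Planck time / observational window of one second)
# reproduces the abstract's mass gap figure.
planck_mass = 2.17643e-8      # kg
planck_time = 5.39125e-44     # s
window = 1.0                  # s, the one-second observational window

mass_gap = planck_mass * planck_time / window
print(f"{mass_gap:.4e} kg")   # 1.1734e-51 kg, matching the abstract
```

This also makes the stated equivalence explicit: over one Planck second the same scaling gives back the Planck mass itself.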
[3340] vixra:1703.0247 [pdf]
Preons, Standard Model and Gravity with Torsion
A preon model for the substructure of the standard model quarks and leptons is discussed. Global group representations for preons, quarks and leptons are addressed using two preons and their antiparticles. The preon construction endorses the standard model gauge group structure. Preons are subject to electromagnetic and gravitational interactions only. Gravity with torsion, expressed as an axial-vector field, is applied to preons in the energy range between GUT and Planck scale. The mass of the axial-vector particle is estimated to be near the GUT scale. A tentative model for quantum gravity, excluding black holes, is considered.
[3341] vixra:1703.0244 [pdf]
Pseudo-Forces Within Non-Local Geometrodynamic Model?
In this letter we describe a new concept of "pseudo-forces" that is obtained from a non-local geometrodynamic model. We argue that while the gravity force is described by the curvature of spacetime, the other three forces are, in fact, pseudo-forces that evolve from such geometrodynamic model.
[3342] vixra:1703.0224 [pdf]
Gravity, Anomaly Cancellation, Anomaly Matching, and the Nucleus
In the Standard Model there has been the well known issue of charge quantization arising from the anomalies, with or without spontaneous symmetry breaking being brought into it. It is well known that in the purely anomalies case, a so-called "bizarre" solution unexpectedly appears. We discuss this issue and bring in the 't Hooft anomaly matching condition to find a resolution of the above bizarreness conundrum. We find a completely consistent solution with a unique single nucleon-lepton chiral family. We find that at low energies, the nucleus should be understood as made up of fundamental protons and neutrons, where quarks play no role whatsoever. In addition, it provides a new understanding and consistent solutions of some long standing basic problems in nuclear physics, like the quenching of the Gamow-Teller strength in nuclei and the issue of the same "effective charge" of magnitude 1/2 for both the neutron and the proton in the nucleus. A Fermi kind of four-fermion point interaction appears as an exact (non-gauge) result.
[3343] vixra:1703.0169 [pdf]
Translational Symmetry and FitzGerald-Lorentz Contraction
Translational symmetry in one-dimensional space requires the distance between two objects moving at equal speed under equal acceleration to be constant in time. However, motion between the object and the observer is relative. Therefore, this distance is constant in time for an accelerating observer. Consequently, the length of an accelerating object is constant in time. The length of a moving object in the direction of motion is independent of its speed.
[3344] vixra:1703.0160 [pdf]
Logarithmic Extension of Real Numbers and Hyperbolic Representation of Generalized Lorentz Transforms
We construct the logarithmic extension of the real numbers, in which numbers less than $-\infty$ exist. Using this logarithmic extension, we give a single formula for the hyperbolic representation of generalized tachyon Lorentz transforms.
[3345] vixra:1703.0150 [pdf]
Galilean and Einsteinian Observers
Physicists since Einstein have assumed that the Galilean system of clock-synchronised stationary observers is consistent with the Special Theory of Relativity. More specifically, they have always assumed that the Galilean system of clock-synchronised stationary observers, which obeys the Galilean transformation equations, is consistent with the non-Galilean Lorentz transformation equations. Einstein's assumption is, however, demonstrably false.
[3346] vixra:1703.0143 [pdf]
On the Logical Inconsistency of Einstein's Length Contraction
Length contraction is a principal feature of the Special Theory of Relativity. It is purported to be independent of position, being a function only of uniform relative velocity, via systems of clock-synchronised stationary observers and the Lorentz Transformation. However, a system of stationary observers reports not length contraction but length expansion. Two observers in a system of clock-synchronised observers assign a common length contraction, but at the expense of time dilation and of being stationary. Systems of clock-synchronised stationary observers are logically inconsistent with the Lorentz Transformation. Consequently, the Theory of Relativity is false due to an insurmountable intrinsic logical contradiction.
[3347] vixra:1703.0138 [pdf]
Electron Stability Approach to Finite Quantum Electrodynamics
This paper analyses electron stability and applies the resulting stability principle to resolve divergence issues in quantum electrodynamics (QED) without renormalization. Stability is enforced by requiring that the positive electromagnetic field energy be balanced by a negative interaction energy between the observed electron charge and a local vacuum potential. Then in addition to the observed core mechanical mass m, an electron system consists of two electromagnetic mass components of equal magnitude M but opposite sign; consequently, the net electromagnetic mass is zero. Two virtual, electromagnetically dressed mass levels m±M, constructed to form a complete set of mass levels and isolate the electron-vacuum interaction, provide essential S-matrix corrections for radiative processes involving infinite field actions. Total scattering amplitudes for radiative corrections are shown to be convergent in the limit M → ∞ and equal to renormalized amplitudes when Feynman diagrams for all mass levels are included. In each case, the infinity in the core mass amplitude is canceled by the average amplitude for electromagnetically dressed mass levels, which become separated in intermediate states and account for the stabilizing interaction energy between an electron and its surrounding polarized vacuum. In this manner, S-matrix corrections are shown to be finite for any order diagram in perturbation theory, all the while maintaining the mass and charge at their physically observed values.
[3348] vixra:1703.0124 [pdf]
Constant Quality of the Riemann Zeta's Non-Trivial Zeros
In this article we closely examine the Riemann zeta function's non-trivial zeros, especially their real part. The real part of the non-trivial zeros is considered in the light of its constancy, and we propose and prove a theorem on this property. We also uncover definitional phenomena of the zeta and Riemann xi functions. In conclusion, we view the Riemann hypothesis in the perspective of our research.
[3349] vixra:1703.0106 [pdf]
Hydrothermal Synthesis of Sodium Tantalate Nanocubes
Experiments were conducted to optimize the growth parameters of the perovskite structure of sodium tantalate in an energy-efficient hydrothermal process. We have successfully grown sodium tantalate nanocubes at a low temperature of 140 °C for 15 hours in a strongly alkaline atmosphere. The product has the orthorhombic crystal system of the perovskite structure, with an average size of 80 nm. The morphological, compositional, structural, and thermal properties of the as-synthesized nanocubes were characterized by scanning electron microscopy (SEM), x-ray powder diffraction (XRD), and thermogravimetric analysis (TGA).
[3350] vixra:1703.0103 [pdf]
Some Remarks about Mathematical Structures in Science, Operator Version of General Laplace Principle of Equal Ignorance (GLPEI), Symmetry and too much Symmetry
It is suggested that the same form of equations in classical and quantum physics allows one to elaborate the same algorithms to find their solutions if the free Fock space (FFS) is used. “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics” is addressed on the example of the causality principle (Sec. 2). Notes on the role of fields and their sources, and on disposal of the excess of information, are set out in Secs. 3 and 5. Possible obstacles in constructing quantum gravity are discussed and remedies are proposed in Secs. 4, 5 and 6. A connection of symmetries with the Laplace principle of equal ignorance (LPEI) and its operator generalization are considered in Sec. 7. The classical and quantum vacuums related to the isolation of a system are suggested (Sec. 8).
[3351] vixra:1703.0101 [pdf]
Quantum Mechanics of Singular Inverse Square Potentials Under Usual Boundary Conditions
The quantum mechanics of inverse square potentials in one dimension is usually studied through renormalization, self-adjoint extension and WKB approximation. This paper shows that such potentials may be investigated within the framework of the position-dependent mass quantum mechanics formalism under the usual boundary conditions. As a result, exact discrete bound state solutions are expressed in terms of associated Laguerre polynomials with negative energy spectrum using the Nikiforov-Uvarov method for the repulsive inverse square potential.
[3352] vixra:1703.0097 [pdf]
High Degree Diophantine Equation C^q=a^p+b^p
The main idea of this article is simply to calculate integer functions in a module. The algebra of integer modules is studied in a completely new style. By a careful construction it is proven that two finite numbers have unequal logarithms in a corresponding module, a result which is applied to solving a kind of diophantine equation: $c^q=a^p+b^p$.
[3353] vixra:1703.0093 [pdf]
On the Logical Inconsistency of Einstein's Time Dilation
Time dilation is a principal feature of the Special Theory of Relativity. It is purported to be independent of position, being a function only of uniform relative velocity, via the Lorentz Transformation. However, it is not possible for a 'clock-synchronised stationary system' of observers K to assign a definite time to any 'event' relative to a 'moving system' k using the Lorentz Transformation. Consequently, the Theory of Relativity is false due to an insurmountable intrinsic logical contradiction.
[3354] vixra:1703.0092 [pdf]
The Influence of Electronic Solid-State Plasma on Attenuation of Transverse Sound Wave in a Conductor
The effect of electron sound absorption in a conducting medium (metal) was previously considered on the assumption of Fermi-surface deformation under the action of the sound wave. In the present work another approach to the problem is considered, based on the dynamic (kinetic) interaction of the electron gas with the lattice vibrations. The analysis is carried out for the case of an arbitrary degeneration degree of the solid-state plasma.
[3355] vixra:1703.0091 [pdf]
The Theory of Quantum Gravity and Calculation of Cosmological Constant
To construct quantum gravity, we formulate quantum electrodynamics in an equivalent form that admits generalization, and we calculate the cosmological constant assuming that the quantum state is a function of the time and radius of the universe.
[3356] vixra:1703.0082 [pdf]
Bell's Theorem Refuted in Our Locally-Causal Lorentz-Invariant World
Adjusting EPR (to accord with Bohr's insight), and accepting Bell's principles (but not his false inferences), we:— (i) complete the QM account of EPR correlations in a classical way; (ii) deliver Bell's hope for a simple constructive model of EPRB; (iii) justify EPR's belief that additional variables would bring locality and causality to QM's completion; (iv) refute key claims that such variables are impossible — including CHSH, Mermin's three-particle always-vs-never variant of GHZ, and this: in the context of Bell's theorem ‘it's a proven scientific fact that a violation of local realism has been demonstrated theoretically and experimentally,' (Annals of Physics Editors, 2016). In short: we refute Bell's theorem and endorse Einstein's locally-causal Lorentz-invariant worldview.
[3357] vixra:1703.0078 [pdf]
Exponential Diophantine Equation
The main idea of this article is simply to calculate integer functions in a module. The algebra of integer modules is studied in a completely new style. By a careful construction a result is obtained on two finite numbers with unequal logarithms, which is applied to solving a kind of diophantine equation.
[3358] vixra:1703.0073 [pdf]
On The Riemann Zeta Function
We discuss the Riemann zeta function, the topology of its domain, and make an argument against the Riemann hypothesis. While making the argument in the classical formalism, we discuss the material as it relates to the theory of infinite complexity (TOIC). We extend Riemann's own (planar) analytic continuation $\mathbb{R}\to\mathbb{C}$ into (bulk) hypercomplexity with $\mathbb{C}\to\,^\star\mathbb{C}$. We propose a solution to the Banach--Tarski paradox.
[3359] vixra:1703.0040 [pdf]
The Asymptotic Riemann Hypothesis (ARH)
We propose in the present paper to consider the Riemann Hypothesis asymptotically (ARH); that is, when the imaginary part of a zero in the critical band is large. We show that the problem, expressed in these terms, is equivalent to the fact that an equation called the * equation has only a finite number of solutions, though we have not proved this.
[3360] vixra:1703.0039 [pdf]
On the Origin of the Fine-Structure Constant
It is shown, utilizing dimensional analysis, that the quantization of electric charge can be explained, in a fundamentally consistent manner, as a manifestation of the quantization of the intrinsic vibrational energy of the fabric of spacetime by a non-Planckian "action" in sub-Planckian spacetime. It is found that this conceptualization of the elementary charge provides a natural explanation of some of the more vexing questions that have plagued quantum electrodynamics since its inception. A possible experiment is suggested that might test for the presence of such a non-Planckian "action" in gravitational radiation.
[3361] vixra:1703.0038 [pdf]
Point de Laplace & Exemple de Calcul Géodésique et Analyse Des Résultats en Géodésie Tridimensionnelle
This is my thesis for the diploma of Engineer from the French National School of Geographic Sciences (ENSG, IGN France), presented in October 1981. The first part of the thesis concerns the determination of the observation equation of the Laplace point in the 3D geodesy option. The second part is a study of a 3D model of deformation of geodetic networks, extending a model that was presented in two dimensions by the senior geodesist H.M. Dufour.
[3362] vixra:1703.0034 [pdf]
Rethinking the Universe
Human ideas of how life and consciousness relate to mathematics and physics are conditioned by the fact that we have lived our lives on a 5.97 x 10^24 kg ball of matter. These ideas would arguably be different if we had instead evolved inside a large rotating world far from astronomical bodies. Contemplating the latter perspective provides some insight into how prevailing views may be in error and how to correct them.
[3363] vixra:1703.0031 [pdf]
Aims and Intention from Mindful Mathematics: The Encompassing Physicality of Geometric Clifford Algebra
The emergence of sentience in the physical world - the ability to sense, feel, and respond - is central to questions surrounding the mind-body problem. Cloaked in the modern mystery of the wavefunction and its many interpretations, the search for a solid fundamental foundation to which one might anchor a model trails back into antiquity. Given the rather astounding presumption that abstractions of the mathematician might somehow inform this quest, we examine the role of geometric algebra of 3D space and 4D spacetime in establishing the foundation needed to resolve contentions of quantum interpretations. The resulting geometric wavefunction permits gut-level intuitive visualization, clarifies confusion regarding observables and observers, and provides the solid quantum foundation essential for attempts to address emergence of the phenomenon of sentience.
[3364] vixra:1703.0024 [pdf]
The Influence of Electron Solid-State Plasma on Attenuation of Longitudinal Sound Waves in a Conductor
In the present work the problem of the attenuation of longitudinal sound oscillations in a conducting medium is considered. The proposed approach is based on the dynamic interaction of the electron gas with the lattice vibrations. This interaction is manifested in a modification of the kinetic equation for electrons. The process is accompanied by the generation of an electric field.
[3365] vixra:1703.0020 [pdf]
Green Function Theory of Strongly Correlated Electron Systems
A novel effective Hamiltonian in the subspace of singly occupied states is obtained by applying the Gutzwiller projection approach to a generalized Hubbard model with the interactions between two nearest-neighbor sites. This model provides a more complete description of the physics of strongly correlated electron systems. The system is not necessarily in a ferromagnetic state as temperature T->0 at any doping level. The system, however, must be in an antiferromagnetic state at the origin of the doping-temperature plane. Moreover, the model exhibits superconductivity in a doped region at sufficiently low temperatures. We summarize the studies and provide a phase diagram of the antiferromagnetism and the superconductivity of the model in the doping-temperature plane here. Details will be presented in subsequent papers.
[3366] vixra:1703.0007 [pdf]
On Einstein's Time Dilation and Length Contraction
Einstein's method of synchronising clocks in his Special Theory of Relativity is inconsistent with the Lorentz Transformation, despite the latter being a fundamental component of his theory. This inconsistency subverts the very foundations of Special Relativity because it follows that Einstein's time dilation and length contraction are also quite generally inconsistent with the Lorentz Transformation. Moreover, clock synchronisation is inconsistent with the Lorentz Transformation. Clock synchronisation and the Lorentz Transformation are mutually exclusive.
[3367] vixra:1703.0006 [pdf]
The Drunken Walk Towards a Goal
This essay begins by endeavouring to ask the question "How can mindless mathematical laws give rise to aims and intentions?" but quickly runs into difficulties with the question itself (not least the implication that no current mathematical law may be considered "mindful"), which requires some in-depth exploration. I then explore what constitutes "Creative Intelligence", coming to a surprising conclusion that concurs with Maharishi Mahesh Yogi's definition.
[3368] vixra:1702.0332 [pdf]
A New Empirical Approach to Lepton and Quark Masses
A lepton ratio with a small dimensionless residual k_e = m_e m_t^2 / m_u^3 and a mass scaling factor alpha_f = 27 m_e / m_t are used to construct empirical formulas for charged leptons, left-handed neutrinos and quarks. The predicted masses are in excellent agreement with known experimental values and constraints.
[3369] vixra:1702.0327 [pdf]
Exploring the Combination Rules of D Numbers From a Perspective of Conflict Redistribution
Dempster-Shafer theory of evidence is widely applied to uncertainty modelling and knowledge reasoning because of its advantages in dealing with uncertain information. But some conditions or requirements, such as the exclusiveness hypothesis and the completeness constraint, limit the development and application of that theory to a large extent. To overcome these shortcomings and enhance its capability of representing uncertainty, a novel model, called D numbers, has been proposed recently. However, many key issues, for example how to implement the combination of D numbers, remain unsolved. In this paper, we explore the combination of D numbers from the perspective of conflict redistribution, and propose two combination rules, suitable for different situations, for the fusion of two D numbers. The proposed combination rules reduce to the classical Dempster's rule in Dempster-Shafer theory under certain conditions. Numerical examples and a discussion of the proposed rules are also given in the paper.
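Since the proposed rules are said to reduce to the classical Dempster's rule under certain conditions, a minimal sketch of that classical rule may help fix ideas. This is an illustrative implementation with hypothetical mass assignments, not the paper's D-number rules:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: intersect focal elements, discard the
    conflicting (empty-intersection) mass K, and renormalize by 1-K."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical example over the frame of discernment {a, b}
m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
m2 = {frozenset('b'): 0.5, frozenset('ab'): 0.5}
m = dempster_combine(m1, m2)  # conflict K = 0.6 * 0.5 = 0.3 is redistributed
```

The conflicting mass here is K = 0.3 (from {a} vs {b}), and the surviving masses are renormalized by 1 - K = 0.7; the D-number rules in the paper differ precisely in how such conflict is redistributed.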
[3370] vixra:1702.0324 [pdf]
Why Finite Mathematics Is The Most Fundamental and Ultimate Quantum Theory Will Be Based on Finite Mathematics
Classical mathematics (involving such notions as the infinitely small/large and continuity) is usually treated as fundamental, while finite mathematics is treated as inferior and applicable only to special cases. We first argue that the situation is the opposite: classical mathematics is only a degenerate special case of finite mathematics, and finite mathematics is more pertinent for describing nature than classical mathematics. Then we describe results of a quantum theory based on finite mathematics. Implications for the foundation of mathematics are discussed.
[3371] vixra:1702.0297 [pdf]
Some General Results On Overfitting In Machine Learning
Overfitting has always been a problem in machine learning. Recently a related phenomenon called “oversearching” has been analyzed. This paper takes a theoretical approach using a very general methodology covering most learning paradigms in current use. Overfitting is defined in terms of the “expressive accuracy” of a model for the data, rather than “predictive accuracy”. The results show that even if the learner can identify a set of best models, overfitting will cause it to bounce from one model to another. Overfitting is ameliorated by having the learner bound the search space, and bounding is equivalent to using an accuracy (or bias) more restrictive than the problem accuracy. Also, Ramsey’s Theorem shows that every data sequence has a situation where either consistent overfitting or underfitting is unavoidable. We show that oversearching is simply overfitting where the resource used to express a model is the search space itself rather than a more common resource such as a program that executes the model. We show that the smallest data sequence guessing a model defines a canonical resource. There is an equivalence in the limit between any two resources that express the same model space, but it may not be effectively computable.
[3372] vixra:1702.0265 [pdf]
A New Simple Recursive Algorithm for Finding Prime Numbers Using Rosser's Theorem
In our previous work (The distribution of prime numbers: overview of n.ln(n), (1) and (2)) we defined a new method derived from Rosser's theorem (2) and used it to approximate the nth prime number. In this paper we improve our method to try to determine the next prime number when the previous one is known. We use our method with five intervals and two values of n (see Methods and Results). Our preliminary results show a reduced difference between the true next prime number and the number given by our algorithm. However, long-term studies are required to better estimate the next prime number and to reduce the difference as n tends to infinity. Indeed, an efficient algorithm is one that could be used in practical research to find new prime numbers, for instance.
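For context, the brute-force baseline that any estimation method competes with is a direct search by trial division. This is an illustrative sketch of that baseline, not the authors' interval-based algorithm:

```python
def is_prime(n):
    """Deterministic primality test by trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def next_prime(p):
    """Return the smallest prime strictly greater than p,
    scanning candidates one by one."""
    candidate = p + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate
```

An estimation method such as the one proposed in the abstract aims to shrink the search window around the next prime, so that far fewer candidates need to be tested than in this linear scan.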
[3373] vixra:1702.0263 [pdf]
Can the Planck Length Be Found Independent of Big G ?
In this paper we show how it is possible to measure the Planck length from a series of different measurements. One of these measurements is totally independent of big G, but requires particle accelerators far more powerful than the ones we have today. However, a Cavendish-style experiment can be performed to find the Planck length with no knowledge of the value of big G. Not only that, the Cavendish-style set-up gives half the relative measurement error in the Planck length compared to the measurement error in big G.
[3374] vixra:1702.0253 [pdf]
The Distribution of Prime Numbers: Overview of N.ln(n)
The empirical formula giving the nth prime number p(n) is p(n) ≈ n.ln(n) (from ROSSER (2)). Other studies have been performed (by DUSART, for example (1)) in order to better estimate the nth prime number. Unfortunately these formulas do not suffice, since there is a significant difference between the true nth prime number and the number given by the formulas. Here we propose a new model in which the difference is effectively reduced compared to the empirical formula. We discuss the results and hypothesize that p(n) can be approximated with a constant defined in this work. As prime numbers are important to cryptography and other fields, a better knowledge of the distribution of prime numbers would be very useful. Further investigations are needed to understand the behavior of this constant and thus to determine the nth prime number with a basic formula that could be used in both theoretical and practical research.
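The gap the abstract refers to is easy to exhibit numerically: Rosser's theorem guarantees p(n) > n·ln(n) for all n, and the shortfall of the simple estimate is substantial. A quick illustrative check with a plain sieve (my example, not the paper's model):

```python
import math

def nth_prime(n):
    """Return the nth prime (1-indexed) via a sieve of Eratosthenes.
    For n >= 6, p(n) < n*(ln n + ln ln n), so the sieve is large enough."""
    limit = 15 if n < 6 else int(n * (math.log(n) + math.log(math.log(n)))) + 1
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    count = 0
    for i, flag in enumerate(sieve):
        if flag:
            count += 1
            if count == n:
                return i

n = 1000
p_n = nth_prime(n)          # the true 1000th prime: 7919
estimate = n * math.log(n)  # ~6907.8; Rosser: always strictly below p(n)
```

For n = 1000 the estimate undershoots by roughly 1000, which is the kind of difference the proposed constant is meant to absorb.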
[3375] vixra:1702.0238 [pdf]
Certain Types of Graphs in Interval-Valued Intuitionistic Fuzzy Setting
Interval-valued intuitionistic fuzzy sets (IVIFSs), as a generalization of intuitionistic fuzzy sets (IFSs), drastically increase their flexibility. In this paper, some important types of interval-valued intuitionistic fuzzy graphs (IVIFGs), such as regular, irregular, neighbourly irregular, highly irregular and strongly irregular IVIFGs, are discussed. The relation among neighbourly irregular, highly irregular and strongly irregular IVIFGs is proved. The notion of an interval-valued intuitionistic fuzzy clique (IVIFC) is introduced. A complete characterization of the structure of the IVIFC is presented.
[3376] vixra:1702.0237 [pdf]
Measurement of Planarity in Product Bipolar Fuzzy Graphs
Bipolar fuzzy set theory provides a basis for bipolar cognitive modeling and multiagent decision analysis, where in some situations, the product operator may be preferred to the min operator, from theoretical and experimental aspects. In this paper, the definition of product bipolar fuzzy graphs (PBFGs) is modified. The concepts of product bipolar fuzzy multigraphs (PBFMGs), product bipolar fuzzy planar graphs (PBFPGs) and product bipolar fuzzy dual graphs (PBFDGs) are introduced and investigated. Product bipolar fuzzy planarity value of PBFPG is introduced. The relation between PBFPG and PBFDG is also established. Isomorphism between PBFPGs is discussed. Finally, an application of the proposed concepts is provided.
[3377] vixra:1702.0234 [pdf]
On the K-Macga Mother Algebras of Conformal Geometric Algebras and the K-Cga Algebras
This note very briefly describes or sketches the general ideas of some applications of the G(p,q) Geometric Algebra (GA) of a complex vector space C^(p,q) of signature (p,q), which is also known as the Clifford algebra Cl(p,q). Complex number scalars are only used for the anisotropic dilation (directed scaling) operation and to represent infinite distances, but otherwise only real number scalars are used. The anisotropic dilation operation is implemented in Minkowski spacetime as hyperbolic rotation (boost) by an imaginary rapidity (+/-)f = atanh(sqrt(1-d^2)) for dilation factor d>1, using +f in the Minkowski spacetime of signature (1,n) and -f in the signature (n,1). The G(k(p+q+2),k(q+p+2)) Mother Algebra of CGA (k-MACGA) is a generalization of G(p+1,q+1) Conformal Geometric Algebra (CGA) having k orthogonal G(p+1,q+1):p>q Euclidean CGA (ECGA) subalgebras and k orthogonal G(q+1,p+1) anti-Euclidean CGA (ACGA) subalgebras with opposite signature. Any k-MACGA has an even 2k total count of orthogonal subalgebras and cannot have an odd 2k+1 total count of orthogonal subalgebras. The more generalized G(l(p+1)+m(q+1),l(q+1)+m(p+1)):p>q k-CGA algebra, for even or odd k=l+m, has any l orthogonal G(p+1,q+1) ECGA subalgebras and any m orthogonal G(q+1,p+1) ACGA subalgebras with opposite signature. Any 2k-CGA with even 2k orthogonal subalgebras can be represented as a k-MACGA with different signature, requiring some sign changes. All of the orthogonal CGA subalgebras correspond by representing the same vectors, geometric entities, and transformation versors in each CGA subalgebra, which may differ only by some sign changes. A k-MACGA or a 2k-CGA has even-grade 2k-vector geometric inner product null space (GIPNS) entities representing general even-degree 2k polynomial implicit hypersurface functions F for even-degree 2k hypersurfaces, usually in a p-dimensional space or (p+1)-spacetime.
Only a k-CGA with odd k has odd-grade k-vector GIPNS entities representing general odd-degree k polynomial implicit hypersurface functions F for odd-degree k hypersurfaces, usually in a p-dimensional space or (p+1)-spacetime. In any k-CGA, there are k-blade GIPNS entities representing the usual G(p+1,q+1) CGA GIPNS 1-blade entities, but which are representing an implicit hypersurface function F^k with multiplicity k and the k-CGA null point entity is a k-point entity. In the conformal Minkowski spacetime algebras G(p+1,2) and G(2,p+1), the null 1-blade point embedding is a GOPNS null 1-blade point entity but is a GIPNS null 1-blade hypercone entity.
[3378] vixra:1702.0230 [pdf]
A Maximum Limit on Proper Velocity
Here we examine maximum proper velocity (sometimes referred to as celerity), based on the recently suggested maximum velocity for anything with rest mass given by Haug. Proper velocity is a quantity that has been suggested for use in a series of calculations in relativity theory. Current standard theory imposes no limit on how close to infinity the proper velocity of an object with mass can be. Under our extended theory, by contrast, there is a strict upper limit on the proper velocity of anything with rest mass, which is directly related to our newly suggested maximum velocity for anything with rest mass.
[3379] vixra:1702.0212 [pdf]
Temperature Effects in Second Stokes' Problem
The second Stokes problem, concerning the behavior of a rarefied gas filling a half-space whose bounding plane performs harmonic oscillations in its own plane, is considered. Continuum-mechanics equations with slip are used. It is shown that, in the approximation quadratic in the wall velocity, temperature effects arise in the gas due to the influence of viscous dissipation. In this case there is a temperature difference between the surface of the body and the gas away from the surface.
[3380] vixra:1702.0185 [pdf]
G-Factor and the Helical Solenoid Electron Model
A new model of the electron with Helical Solenoid geometry is presented. This new model is an extension of Parson’s Ring Electron Model and Hestenes’ Zitter Electron Model. In this new electron model, the g-factor appears as a simple consequence of the geometry of the electron. The calculation of the g-factor is performed in a simple manner and we obtain the value 1.0011607. This value of the g-factor is more accurate than the value provided by Schwinger’s factor.
[3381] vixra:1702.0182 [pdf]
The Real-Zeros of Jones Polynomial of Torus
This article proves two theorems and presents one conjecture about the real zeros of the Jones polynomials of torus knots. Topological quantum computing is related to knot/braid theory, where Jones polynomials are characters of the quantum computation. The real zeros of the Jones polynomials of torus knots are observable physical quantities, and apart from the real zero at 1.0, there exists another distinguished real zero in 1 < r < 2 for every Jones polynomial of a torus knot; these unique real zeros can serve as IDs of torus knots in topological quantum computing.
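As a concrete illustration (my example, not the paper's proof): for one chirality of the trefoil T(2,3), the Jones polynomial can be written V(t) = -t^4 + t^3 + t. Excluding the trivial zero at t = 0, its real zeros satisfy t^3 - t^2 - 1 = 0, and the unique real root indeed lies in 1 < r < 2, as the abstract describes:

```python
def jones_trefoil(t):
    """Jones polynomial of one chirality of the trefoil knot T(2,3)."""
    return -t**4 + t**3 + t

def bisect(f, lo, hi, tol=1e-12):
    """Locate a sign-change root of f on [lo, hi] by bisection."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) == 0.0 or hi - lo < tol:
            return mid
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Factor out t: the remaining real zeros solve t^3 - t^2 - 1 = 0.
# f(1) = -1 < 0 and f(2) = 3 > 0, so a root lies in (1, 2).
r = bisect(lambda t: t**3 - t**2 - 1.0, 1.0, 2.0)  # ~1.4656
```

The root near 1.4656 is the distinguished real zero for this smallest torus knot; sign conventions for the Jones polynomial vary between references, but the location of the real zero in (1, 2) does not depend on which chirality is chosen once the polynomial is cleared of its monomial factor.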
[3382] vixra:1702.0175 [pdf]
Complementary Inferences on Theoretical Physics and Mathematics
I have been working for a long time on the basic laws which govern existence, and on some mathematical problems which await solution. I count myself lucky to have made some important inferences during this time, which I published in a few papers, partially as propositions. This work aims to explain and discuss these inferences together, relating them to one another with some additions, corrections and explanations, physical phenomena being given priority. There are many motivating instruments for exact physical inferences.
[3383] vixra:1702.0161 [pdf]
Radius of Single Fluxon Electron Model Identical with Classical Electron Radius
Analytical determination of the magnetic flux included in the electron's dipole field - with consideration of magnetic flux quantization - reveals that it comprises precisely one magnetic flux quantum $\Phi_{0}$. The analysis further delivers a redefinition of the classical electron radius $r_{e}$ by a factorized relation among the electron radius $r_{e}$, vacuum permeability $\mu_{0}$, magneton $\mu_{B}$ and fluxon $\Phi_{0}$, exclusively determined by the electron's quantized magnetic dipole field: $$r_{e} = \mu_{0}\,\mu_{B}\,(\Phi_{0})^{-1} = e^{2}/4\pi\epsilon_{0} m_{e} c^{2}$$ The single-fluxon electron model further enables analytical determination of its vector potential at $r_{e}$: $\vec{A}_{r_e} = \vec{\Phi}_{0}/2\pi r_{e}$, and of the canonical angular momentum: $e A_{r_e}\, r_{e} = e\,\Phi_{0}/2\pi = \hbar/2$. Consideration of flux quantization supports a toroidal electron model.
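Two of the quoted relations can be checked numerically from CODATA constants (a sanity check added here, not part of the paper): the classical electron radius $e^2/4\pi\epsilon_0 m_e c^2$, and the canonical angular momentum identity $e\,\Phi_0/2\pi = \hbar/2$, which is exact when $\Phi_0 = h/2e$:

```python
import math

# CODATA 2018 values (SI units)
e = 1.602176634e-19        # elementary charge, C
h = 6.62607015e-34         # Planck constant, J s
c = 2.99792458e8           # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e = 9.1093837015e-31     # electron mass, kg

# Classical electron radius from the right-hand side of the radius relation
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)   # ~2.818e-15 m

# Canonical angular momentum e * A_re * r_e with A_re = Phi0 / (2 pi r_e)
# reduces to e * Phi0 / (2 pi); with Phi0 = h / (2e) this is exactly hbar/2.
Phi0 = h / (2 * e)
L_canon = e * Phi0 / (2 * math.pi)
hbar = h / (2 * math.pi)
```

The angular-momentum identity is exact by cancellation of the charge, independent of any measured constant, which is the algebraic content of the abstract's last formula.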
[3384] vixra:1702.0142 [pdf]
A “HUMAN” Teaching Method for Physics
In order to enliven the physics classroom and deepen the understanding of physics concepts, we propose a ``human'' teaching method that uses the students' bodies themselves as the sole medium to approximately recreate major physical processes. In concrete terms, participants play definite physical roles and assume definite physical functions; all participants form a group and perform the demonstration together. We use ``surface tension'' as an example to illustrate this teaching method. The technique increases the interest factor in physics teaching, stimulating students' exploratory enthusiasm while cultivating cooperation and team spirit. It is therefore conducive to improving students' collective creative abilities.
[3385] vixra:1702.0141 [pdf]
Experimental Verification or Refutation of the Electric Charge Conservation Using a Cylinder-Capacitor with Rotating Core
The electric force from a uniformly moving point charge onto a resting point charge does not correspond exactly to the Coulomb force. This is a consequence of the Liénard–Wiechert potentials, which are derived from Maxwell's equations. If the point charge is moving toward or away from a resting point charge, the electric force seems to be weakened compared to the Coulomb force. On the contrary, the electric force appears to be strengthened when the point charge passes the resting charge sideways. Together, both effects compensate each other so that the total charge is independent of the relative speed. This article proposes and discusses an experiment with which this claim can be verified. The experiment is of major importance because, besides the field formula of a point charge derived from Maxwell's equations, a recently discovered and clearly simpler alternative exists, in which no magnetic part occurs. Although both formulas differ significantly, it is impossible to design experiments with current loops of any form to decide between the two alternatives, because theoretical considerations always lead to the same experimental predictions. The electrical parts of the two field formulas differ only by a Lorentz factor. This has the consequence that in the alternative formula the total charge is no longer independent of the relative speed between source and destination charge. Thus the electric charge here depends on the reference frame, and we obtain rest and relativistic charge. The experiment proposed in this article makes it possible to measure this effect, so that a decision between the two alternatives becomes possible.
[3386] vixra:1702.0137 [pdf]
A New Framework for Viewing Reality
The basic building blocks of the universe have been debated for millennia. Recently, advances have been made in string theory and variant schools of thought. Here I propose the notion that, at extremely small scales, information has a structure, and that the information that determines a system is equivalent to its momentum. This is achieved through a mechanism analogous to the Cartesian axes. These axes are aware of their position. This has many implications, and I believe the mathematics regarding this will flourish.
[3387] vixra:1702.0134 [pdf]
Energy from the Vacuum and Superluminal Communication
Some aspects of the future of humankind are considered based on applications of the quantum modification of general relativity. Particularly, the energy supply from the vacuum and a new form of communication are discussed.
[3388] vixra:1702.0133 [pdf]
Goal, Free-Will and Qualia in Biological Evolution
The author develops the idea that genes were not enough to create goals in evolution, because the first goals must have arisen quickly. This is another clue that consciousness exists even in unicellular organisms. Besides, qualia are a more primitive form of a goal. Consciousness is composed of qualia and free will, and both need new physics. Free will is based also on quantum consciousness. Although it seems that the latter was disproved by Tegmark, it is so evident that it cannot be disproved until it is clarified what the physical basis of consciousness is. Quantum consciousness is based also on panpsychism, which already has some support in mainstream science. Explanations of goal, free will, qualia, and consciousness are also matters of explaining time. It is shown how time is connected with matter and with consciousness. Finally, it is criticized how official science too readily ignores goals and the ideas of authors who do not belong to it.
[3389] vixra:1702.0131 [pdf]
Nontrivial Unit Vector Phase-Shifted Stable Superpositions of Elliptical Polarised Plane Waves Using Jones Vectors as Spinors
Castillo demonstrates an important case of successful superposition of elliptically polarised light by moving to spinor representations of electromagnetic plane waves: when the angle between the two unit spinors, as represented on a Poincaré sphere, is (as a complex number) either 1, -1, i or -i. This paper demonstrates that there are additional conditions under which superposition is successful: phase-shifting one of the waves by 90 degrees prior to superposition. Two- and three-wave superpositions are shown, and the candidate configurations for each are listed. The result is significant for particle physics at least, in that prior work by Castillo and Rubalcava-García shows a correspondence between Jones calculus and SU(2), and gives a direct mapping between Jones and Pauli matrices.
[3390] vixra:1702.0126 [pdf]
Future of Humankind in Light of New Science
Some aspects of the future of humankind are considered based on application of the quantum modification of general relativity. Particularly, the energy supply from the vacuum and a new form of communication are discussed.
[3391] vixra:1702.0120 [pdf]
Poisson Boltzmann Equation Cannot be Solved Using Dirichlet Boundary Condition
The Poisson-Boltzmann equation (PBE) gives us a very simple formula for the charge density distribution $(\rho_e)$ within ionic solutions. The PBE is widely solved by specifying values of the electrostatic potential ($\psi$) at different boundaries; this type of boundary condition (BC) is known as the Dirichlet condition (DC). Here we show that the DC cannot be used to solve the PBE, because it leads to unphysical consequences. For example, when we change the reference for $\psi$, the functional forms of $\psi$ and $\rho_e$ change in non-trivial ways, i.e., it changes the physics, which is not acceptable. Our result should have far-reaching effects on many branches of the physical, chemical and biological sciences.
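The reference-shift argument can be illustrated numerically in the linearized (Debye-Hückel) limit, where $\rho_e \propto -\psi$. The sketch below is our illustration, not the paper's calculation: solve $\psi'' = \kappa^2\psi$ on $[0, L]$ with Dirichlet values, then shift both boundary values by a constant $C$ (a change of reference) and compare. Since a constant is not a solution of the linearized equation, the two solutions differ by a non-constant amount, so the inferred charge density changes non-trivially.

```python
import math

def solve_dh(kappa, L, psi0, psiL, n=101):
    """Closed-form solution of psi'' = kappa^2 psi on [0, L]
    with Dirichlet values psi(0) = psi0, psi(L) = psiL."""
    xs = [i * L / (n - 1) for i in range(n)]
    s = math.sinh(kappa * L)
    # psi(x) = [psi0*sinh(kappa(L-x)) + psiL*sinh(kappa*x)] / sinh(kappa*L)
    return xs, [(psi0 * math.sinh(kappa * (L - x))
                 + psiL * math.sinh(kappa * x)) / s for x in xs]

kappa, L, C = 1.0, 5.0, 0.5
xs, psi1 = solve_dh(kappa, L, 1.0, 0.0)          # original reference
_, psi2 = solve_dh(kappa, L, 1.0 + C, 0.0 + C)   # reference shifted by C

# In the linearized theory rho_e is proportional to -psi, so a harmless
# change of reference would require psi2 - psi1 to be constant -- it is not:
diff = [p2 - p1 for p1, p2 in zip(psi1, psi2)]
print(max(diff) - min(diff))  # non-zero spread => density changed non-trivially
```

The spread is of the same order as $C$ itself, which is the paper's point in miniature: fixing $\psi$ values at the boundaries entangles the arbitrary reference with the physics.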
[3392] vixra:1702.0116 [pdf]
Geometric (Clifford) Algebra Calculation of the Trajectory of a Gas Molecule Desorbed from the Earth's Surface
As a step toward understanding why the Earth's atmosphere "rotates" with the Earth, we use Geometric (Clifford) Algebra to investigate the trajectory of a single molecule that desorbs vertically upward from the Equator, then falls back to Earth without colliding with any other molecules. Sample calculations are presented for a molecule whose vertical velocity is equal to the surface velocity of the Earth at the Equator (463 m/s) and for one with a vertical velocity three times as high. The latter velocity is sufficient for the molecule to reach the Kármán Line (100,000 m). We find that both molecules fall to Earth behind the point from which they desorbed: by 0.25 degrees of latitude for the higher vertical velocity, but by only 0.001 degrees for the lower.
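A much cruder check than the paper's Clifford-algebra treatment is a direct numerical integration in the equatorial plane. This is a sketch under simplifying assumptions (point-mass gravity, no atmosphere, launch exactly vertical from the Equator, surface angular velocity taken as 463 m/s divided by the Earth's radius); the numbers are illustrative and need not reproduce the authors' figures.

```python
import math

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R = 6.371e6          # Earth's radius, m
V_SURF = 463.0       # equatorial surface speed, m/s (as in the abstract)
OMEGA = V_SURF / R   # corresponding angular velocity of the surface

def lag_degrees(v_up, dt=0.01):
    """Angle (degrees) by which a vertically desorbed molecule lands
    behind its launch point, integrated in a non-rotating frame."""
    ell = R * V_SURF            # conserved angular momentum per unit mass
    r, vr, theta, t = R, v_up, 0.0, 0.0
    while True:
        # radial acceleration: central gravity plus the term from conserved ell
        ar = -GM / r**2 + ell**2 / r**3
        vr += ar * dt
        r += vr * dt
        theta += (ell / r**2) * dt
        t += dt
        if r <= R and vr < 0.0:
            break
    return math.degrees(OMEGA * t - theta)   # how far the surface got ahead

print(lag_degrees(3 * V_SURF))   # reaches roughly the Karman line
print(lag_degrees(V_SURF))
```

Both molecules land behind their launch point, in qualitative agreement with the abstract (the molecule's angular velocity $\ell/r^2$ drops below the surface's $\omega$ while aloft); the exact figures depend on the modelling details.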
[3393] vixra:1702.0115 [pdf]
On the Possible Role of Mach's Principle and Quantum Gravity in Modern Quantum Cosmology
Based on Mach's principle and quantum gravity, we imagine our universe as a quantum gravitational sphere and assume that, at any stage of cosmic evolution: 1) the Planck scale Hubble parameter plays a crucial role; 2) space-time curvature follows $GM_t \cong R_t c^2$, where $M_t$ and $R_t$ represent the ordinary cosmic mass and radius respectively; 3) both the cosmic radius and the expansion velocity are proportional to the ratio of dark matter density and ordinary matter density; 4) cosmic temperature is proportional to the ratio of ordinary matter density and critical density. With further research, a unified model of `quantum cosmology' with evolving dark energy or evolving vacuum energy can be developed.
[3394] vixra:1702.0110 [pdf]
The Analysis of Chris Van Den Broeck Applied to the Natario Warp Drive Spacetime Using the Original Alcubierre Shape Function to Generate the Broeck Spacetime Distortion:the Natario-Broeck Warp Drive
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive, discovered in $1994$, and the Natario warp drive, discovered in $2001$. However, the major drawback concerning warp drives is the huge amount of negative energy density needed to sustain the warp bubble. In order to perform an interstellar space travel to a "nearby" star $20$ light-years away in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is $10^{24}$ times bigger in magnitude than the mass of the planet Earth! With the correct form of the shape function, the Natario warp drive can overcome this obstacle, at least in theory. Other drawbacks that affect the warp drive geometry are the collisions with hazardous interstellar matter (asteroids, comets, interstellar dust, etc.) that will unavoidably occur when a ship travels at superluminal speeds, and the problem of the horizons (causally disconnected portions of spacetime). The geometrical features of the Natario warp drive are the ones required to overcome these obstacles, also at least in theory. Some years ago, in $1999$, Chris Van Den Broeck appeared with a very interesting idea. Broeck proposed a warp bubble with a large internal radius able to accommodate a ship inside, while having a submicroscopic outer radius and a submicroscopic external contact surface, in order to better avoid collisions against interstellar matter. The Broeck spacetime distortion has the shape of a bottle with $200$ meters of inner diameter, able to accommodate a spaceship inside the bottle, but the bottleneck possesses a very small outer radius of only $10^{-15}$ meters, $100$ billion times smaller than a millimeter, therefore reducing the probability of collisions against large objects in interstellar space. In this work we apply the Broeck idea to the Natario warp drive spacetime, but our bottle has $200$ kilometers of inner size, $1000$ times the size of the original Broeck bottle, and we use the original Alcubierre shape function to generate our version of the Broeck bottle with very low energy density requirements. The Broeck idea is more than welcome and definitively solves the problem of collisions against large objects. Any future development of the Natario warp drive must encompass the Broeck bottle, and this approach must be named the Natario-Broeck warp drive.
[3395] vixra:1702.0091 [pdf]
Coupled Diffusion of Impurity Atoms and Point Defects in Silicon Crystals. Context and Preliminary
A theory describing the processes of atomic diffusion in a nonequilibrium state with nonuniform distributions of components in a defect-impurity system of silicon crystals is proposed. Based on this theory, partial diffusion models are constructed, and simulation of a large number of experimental data is carried out. A comparison of the simulation results with the experiment confirms the correctness and importance of the theory developed.
[3396] vixra:1702.0086 [pdf]
Can Two Differently Prepared Mixed Quantum Ensembles be Discriminated via Measurement Variance?
Alice prepares two large qubit-ensembles E1 and E2 in the following states: She individually prepares each qubit of E1 in |0> or |1>, the eigenstates of Pauli-z operator Z, depending on the outcome of an unbiased coin toss. Similarly, she individually prepares each qubit of E2 in |+> or |->, the eigenstates of Pauli-x operator X. Bob, who is aware of the above state preparation procedures, but knows neither which of the two is E1 nor Alice's outcomes of coin tosses, needs to discriminate between the two maximally mixed ensembles. Here we argue that Bob can partially purify the mixed states (E1, E2), using the information supplied by the central limit theorem. We will show that Bob can subsequently discriminate between ensembles E1 and E2 by individually rotating each qubit state about the x-axis on the Bloch sphere by a random angle, and then projectively measuring Z. By these operations, the variance of the sample mean of Z measurement outcomes corresponding to the ensemble E1 gets reduced. On the other hand, qubit states in E2 are invariant under the x-rotations and therefore the variance remains unaltered. Thus Bob can discriminate between the two maximally mixed ensembles. We analyse the above problem both analytically and numerically, and show that the latter supports the former.
[3397] vixra:1702.0077 [pdf]
A Lexicon and Exploration Status Document for the Extended Rishon Model
The Extended Rishon Model is currently in continuous development, expansion and clarification, yet with nothing found that is contradictory to its initial foundations as of over three decades ago. However there are a series of recurring themes that have a large body of evidence to support, some less-well-confirmed themes and a body of hypotheses that need significant further exploration. This document - which will be continuously revised - therefore keeps track of the different categories in order to avoid repetition, and to make it much easier for others to understand the Extended Rishon Model.
[3398] vixra:1702.0076 [pdf]
Preon Model, Knot Algebra and Gravity
I study the properties of a preon model for the substructure of the standard model quarks and leptons. The goal is to establish both local and global group representations for the particles of the model. Knot theory algebra SLq(2) is shown to be applicable to the model. Teleparallel gravity is discussed with an interesting result for hadronic physics. A tentative glimpse of quantum gravity is indicated.
[3399] vixra:1702.0072 [pdf]
Why Theory of Quantum Computing Should be Based on Finite Mathematics
We discuss finite quantum theory (FQT) developed in our previous publications and give a simple explanation that standard quantum theory is a special case of FQT in the formal limit $p\to\infty$ where $p$ is the characteristic of the ring or field used in FQT. Then we argue that FQT is a more natural basis for quantum computing than standard quantum theory.
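The formal limit $p \to \infty$ can be illustrated with a toy example (ours, not the paper's): arithmetic in the ring $\mathbb{Z}/p$ reproduces ordinary integer arithmetic whenever every quantity involved is small compared with $p$, so any fixed computation is recovered exactly once $p$ is large enough.

```python
def ring_expr(a, b, p):
    """Evaluate a*b + a - b in the finite ring Z/p."""
    return (a * b + a - b) % p

def int_expr(a, b):
    """The same expression over the ordinary integers."""
    return a * b + a - b

a, b = 12, 7
for p in (5, 101, 10**9 + 7):
    print(p, ring_expr(a, b, p))
# For small p the ring result differs from the integer value 89;
# once p exceeds every intermediate quantity, the two agree exactly.
```

This is only the arithmetic core of the claim; the paper's argument concerns operators and state spaces over such rings, not bare integers.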
[3400] vixra:1702.0051 [pdf]
Quantum Interpretation of the Proton Anomalous Magnetic Moment
The role of the anomalous moment in the geometric Clifford algebra of proton topological mass generation suggests that the anomaly is not an intrinsic property of the free space proton, but rather a topological effect of applying the electromagnetic bias field required to define the eigenstates probed by the magnetic moment measurement. Quantum interpretations strive to explain emergence of the world we observe from formal quantum theory. This variant on the canonical measurement problem is examined in the larger context of quantum interpretations.
[3401] vixra:1702.0015 [pdf]
Topics in Space-Time, Gravity and Cosmology
We derive the Poincaré model of the Lobachevsky geometry from the Fermat principle. The Lobachevsky geometry is interpreted as the Lobachevsky-Beltrami-Fok velocity space geometry of moving particles. The relation of this geometry to the decay of the neutral π-meson is considered. The generalization of the Lobachevsky geometry is performed and the new angle of parallelism is derived. Then, we determine nonlinear transformations between coordinate systems which are mutually in a constant symmetrical accelerated motion. The maximal acceleration limit follows from the kinematic origin. Maximal acceleration is an analogue of the maximal velocity in special relativity. We derive the dependence of mass, length, time, and the Doppler effect on acceleration, as analogue phenomena to those of the special theory of relativity. We apply the derived nonlinear Lorentz group to the so-called Thomas precession. The total quantum energy loss of a binary is caused by the production of gravitons emitted by the rotational motion of the binary. We have calculated it in the framework of the Schwinger theory of gravity for the situation where the gravitational propagator involves radiative corrections. We also derive the finite-temperature gravitational Cherenkov radiation involving radiative corrections. The graviton action in vacuum is generalized for a medium with a constant gravitational index of refraction. From this generalized action the power spectral formula of the Cherenkov radiation of gravitons is derived in the framework of the Schwinger theory at zero and nonzero temperature. The next text deals with the non-relativistic quantum energy shift of H-atom electrons due to the Gibbons-Hawking thermal bath. The seventh chapter deals with gravity as the deformation of spacetime and it involves the light deflection by the screw dislocation. In conclusion, we consider the scientific and technological meaning and the perspectives of the results derived.
Some parts of this work have been published in reputable journals.
[3402] vixra:1702.0013 [pdf]
Contradictory Stimulation
Motivated by the seemingly chaotic state of affairs of contemporary world political actions, I describe a possible strategy of psychological domination of the masses called “contradictory stimulation”. Assuming that there is the possibility of a genuine intention on the part of some families and powerful organizations to guide humanity in the way of their interests, I point out through mathematical and computational models that contradictory stimulation can be effective in inducing a subservient mentality in political citizens. Recognizing the existence of artful stratagems of manipulation and “brainwashing” sharpens our critical sense and changes our world view. It is the first step in a reaction that tries to ensure individual freedoms, if they are desirable.
[3403] vixra:1701.0677 [pdf]
The Relativistic Origin of the Electric Charge
Considering the electron in the hydrogen atom as classical, and bound to the proton like a planet is bound to the sun, we are led to consider that it is in free fall and therefore that we can apply Einstein's equivalence principle; thus special relativity can be used to study its motion. Doing so, we are able to demonstrate that the electron's charge-to-mass ratio is the consequent relativistic frequency that appears to the observer in the laboratory. We also show that a magnetic moment, very similar to the one of quantum mechanics, must appear, although we stay within the fields of classical and relativistic physics.
[3404] vixra:1701.0660 [pdf]
Asymptotic Safety in Quantum Gravity and Diffeomorphic Non-isometric Metric Solutions to the Schwarzschild Metric
We revisit the construction of diffeomorphic but $not$ isometric metric solutions to the Schwarzschild metric. These solutions require the introduction of non-trivial areal-radial functions and are characterized by the key property that the radial horizon's location is $displaced$ continuously towards the singularity ($ r = 0 $). In the limiting case scenario the location of the singularity and horizon $merges$, and any infalling observer hits a null singularity at the very moment he/she crosses the horizon. This fact may have important consequences for the resolution of the firewall problem and the complementarity controversy in black holes. This construction allows us to borrow the results of the past two decades pertaining to the study of the Renormalization Group (RG) improvement of Einstein's equations, which was based on the possibility that Quantum Einstein Gravity might be non-perturbatively renormalizable and asymptotically safe due to the presence of interacting (non-Gaussian) ultraviolet fixed points. The particular areal-radial function that eliminates the interior of a black hole, and furnishes a truly static metric solution everywhere, is used to establish the desired energy-scale relation $ k = k (r) $ which is obtained from the $k$ (energy) dependent modifications to the running Newtonian coupling $G (k) $, cosmological constant $\Lambda (k) $ and spacetime metric $g_{ij, (k) } (x)$. (Anti) de Sitter-Schwarzschild metrics are also explored as examples. We conclude with a discussion of the role that Asymptotic Safety might have in the geometry of phase spaces (cotangent bundles of spacetime); i.e. namely, in establishing a quantum spacetime geometry/classical phase geometry correspondence $g_{ij, (k) } (x) \leftrightarrow g_{ij} (x, E) $.
[3405] vixra:1701.0651 [pdf]
Double Conformal Space-Time Algebra (ICNPAA 2016)
The Double Conformal Space-Time Algebra (DCSTA) is a high-dimensional 12D Geometric Algebra G(4,8) that extends the concepts introduced with the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA) G(8,2) with entities for Darboux cyclides (incl. parabolic and Dupin cyclides, general quadrics, and ring torus) in spacetime with a new boost operator. The base algebra in which spacetime geometry is modeled is the Space-Time Algebra (STA) G(1,3). Two Conformal Space-Time subalgebras (CSTA) G(2,4) provide spacetime entities for points, flats (incl. worldlines), and hyperbolics, and a complete set of versors for their spacetime transformations that includes rotation, translation, isotropic dilation, hyperbolic rotation (boost), planar reflection, and (pseudo)spherical inversion in rounds or hyperbolics. The DCSTA G(4,8) is a doubling product of two G(2,4) CSTA subalgebras that inherits doubled CSTA entities and versors from CSTA and adds new bivector entities for (pseudo)quadrics and Darboux (pseudo)cyclides in spacetime that are also transformed by the doubled versors. The "pseudo" surface entities are spacetime hyperbolics or other surface entities using the time axis as a pseudospatial dimension. The (pseudo)cyclides are the inversions of (pseudo)quadrics in rounds or hyperbolics. An operation for the directed non-uniform scaling (anisotropic dilation) of the bivector general quadric entities is defined using the boost operator and a spatial projection. DCSTA allows general quadric surfaces to be transformed in spacetime by the same complete set of doubled CSTA versor (i.e., DCSTA versor) operations that are also valid on the doubled CSTA point entity (i.e., DCSTA point) and the other doubled CSTA entities. The new DCSTA bivector entities are formed by extracting values from the DCSTA point entity using specifically defined inner product extraction operators. 
Quadric surface entities can be boosted into moving surfaces with constant velocities that display the length contraction effect of special relativity. DCSTA is an algebra for computing with quadrics and their cyclide inversions in spacetime. For applications or testing, DCSTA G(4,8) can be computed using various software packages, such as Gaalop, the Clifford Multivector Toolbox (for MATLAB), or the symbolic computer algebra system SymPy with the GAlgebra module.
[3406] vixra:1701.0630 [pdf]
Hyperspheres in Fermat's Last Theorem
This paper provides a potential pathway to a simple formal proof of Fermat's Last Theorem. The geometrical formulations of n-dimensional hypergeometrical models in relation to Fermat's Last Theorem are presented. By imposing geometrical constraints pertaining to the spatial allowance of these hypersphere configurations, it can be shown that a violation of the constraints confirms the theorem to be true for n equal to infinity.
[3407] vixra:1701.0629 [pdf]
Regarding the Scalar Perturbations of Small Bodies; a Link Between Gravitational Nonlocality and Quantum Indeterminacy.
Using the Friedman-Lemaitre-Robertson-Walker (FLRW) universe as a background metric, purely General relativistic (classical) scalar metric perturbations are investigated for small bodies. For the approximation of a point-like perturbing mass in the closed FLRW universe, the scalar perturbation may be written in a form obeying precisely the Dirac equation up to a factor playing the role of Planck’s constant. A physical interpretation suggests the scalar perturbation in this form is the wavefunction of quantum mechanics. Such an interpretation indicates the nonlocality of gravitational energy/momentum in General relativity leads naturally to the indeterminacy of quantum mechanics. Some physical consequences and predictions are discussed and briefly explored.
[3408] vixra:1701.0621 [pdf]
How Well Do Classically Produced Correlations Match Quantum Theory?
A two-dimensional vector can be made from a constant signal component plus a randomly oriented noise component. This simple model can exploit detection and post-selection loopholes to produce Bell correlations within 0.01 of the theoretical cosine expected from quantum mechanics. The model is shown to be in accord with McEachern's hypothesis that quantum correlations are associated with processes which can provide only one bit of information per sample.
[3409] vixra:1701.0618 [pdf]
An Algorithmic Proof of the Twin Primes Conjecture and the Goldbach Conjecture
This paper presents proofs of several open problems in number theory, particularly the Goldbach Conjecture and the Twin Prime Conjecture. These two conjectures are proven by using a greedy elimination algorithm and incorporating Mertens' third theorem and the twin prime constant. The argument is extended to Germain primes, cousin primes, and other prime-related conjectures. A generalization is provided for all algorithms that result in an Euler product like $\prod\left(1-\frac{a}{p}\right)$.
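Euler products of the kind the abstract invokes are easy to evaluate numerically. As an illustration (ours, not the paper's algorithm), the partial products of $\prod_{2<p\le N}\bigl(1-\frac{1}{(p-1)^2}\bigr)$ converge to the twin prime constant $C_2 \approx 0.6601618$:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def twin_prime_constant_partial(n):
    """Partial Euler product for the twin prime constant C2
    = prod over odd primes p <= n of (1 - 1/(p-1)^2)."""
    prod = 1.0
    for p in primes_up_to(n):
        if p > 2:
            prod *= 1.0 - 1.0 / (p - 1) ** 2
    return prod

print(twin_prime_constant_partial(10**5))  # approx 0.66016
```

Each factor is below 1, so the partial products decrease monotonically toward the limit; the tail beyond $N$ contributes only $O(1/(N\log N))$ to the logarithm, so convergence is fast.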
[3410] vixra:1701.0616 [pdf]
Bipolar Neutrosophic Planar Graphs
Fuzzy graph theory is used for solving real-world problems in different fields, including theoretical computer science, engineering, physics, combinatorics and medical sciences. In this paper, we present concepts of bipolar neutrosophic multigraphs, bipolar neutrosophic planar graphs and bipolar neutrosophic dual graphs, and study some of their related properties. We also describe applications of bipolar neutrosophic graphs in road networks and electrical connections.
[3411] vixra:1701.0603 [pdf]
Study of the Molecular Geometry of Caramboxin Toxin Found in Star Flower (Averrhoa Carambola L.)
The present work describes the equilibrium configuration of the caramboxin molecule studied using Hartree-Fock (HF) and Density Functional Theory (DFT) calculations. With the DFT calculations, the total energy for the singlet state of the caramboxin molecule has been estimated to be -933.3870701 a.u. Furthermore, the binding energy of the caramboxin molecule has been estimated to be 171.636 kJ/mol. The carambola or star fruit is a fruit used for human consumption in juices, desserts, pastries, custards, jellies, or even in natural consumption. Recent research indicates that it has great toxicity for people with kidney failure, and may even lead to death. Experiments demonstrated that it has glutamatergic effects, which means that it affects the function of the neurotransmitter glutamate, thus explaining the neurological effects. Our calculations indicate that the main active sites in caramboxin are the -OH (alcohol) groups, and the two carboxyl (-COOH) groups.
[3412] vixra:1701.0599 [pdf]
Emergence of Space in Quantum Shape Kinematics
A model universe with N protons and electrons, with electromagnetic and spin interactions in the Hamiltonian, is investigated in the context of quantum shape kinematics. We have found that quantum shape space exists for N≥4 particles and has 2N−7 functional degrees of freedom in the case of spin-1/2 particles. The emergence of space is associated with a non-vanishing expectation value ⟨L^2⟩. We have shown that for odd N space always emerges, and for large even N space almost always emerges because ⟨L^2⟩≠0 for almost all states. In the limit N→∞ the density of states that yield ⟨L^2⟩=0 vanishes. Therefore we conclude that space is almost always emergent in quantum shape kinematics.
[3413] vixra:1701.0597 [pdf]
Certain Single-Valued Neutrosophic Graphs
Neutrosophic sets are the generalization of the concepts of fuzzy sets and intuitionistic fuzzy sets. Neutrosophic models give more flexibility, precision and compatibility to the system as compared to the classical, fuzzy and intuitionistic fuzzy models. In this research paper, we present certain types of single-valued neutrosophic graphs, including regular single-valued neutrosophic graphs, totally regular single-valued neutrosophic graphs, edge regular single-valued neutrosophic graphs and totally edge regular single-valued neutrosophic graphs. We also investigate some of their related properties.
[3414] vixra:1701.0567 [pdf]
Geometry and Fields: Illuminating the Standard Model from Within
We present a wavefunction comprised of the eight fundamental geometric objects of a minimally complete Pauli algebra of 3D space - point, line, plane, and volume elements - endowed with electromagnetic fields. Interactions are modeled as geometric products of wavefunctions, generating a 4D Dirac algebra of flat Minkowski spacetime. The resulting model is naturally gauge invariant, finite, and confined. With regard to the U1 x SU2 x SU3 gauge group at the core of the Standard Model, natural finiteness and gauge invariance are benign. However, reflections from wavefunction geometric impedance mismatches yields natural confinement to the Compton wavelength, providing a new perspective on both weak and strong nuclear forces.
[3415] vixra:1701.0523 [pdf]
Draft Introduction to Abstract Kinematics
This work lays the foundations of the theory of kinematic changeable sets ("abstract kinematics"). The theory of kinematic changeable sets is based on the theory of changeable sets. From an intuitive point of view, changeable sets are sets of objects which, unlike elements of ordinary (static) sets, may be in the process of continuous transformations, and which may change properties depending on the point of view on them (that is, depending on the reference frame). From the philosophical and imaginative point of view, changeable sets may look like "worlds" in which evolution obeys arbitrary laws. Kinematic changeable sets are mathematical objects consisting of changeable sets, equipped with different geometrical or topological structures (namely metric, topological, linear, Banach, Hilbert and other spaces). In the author's opinion, theories of changeable and kinematic changeable sets (in the process of their development and improvement) may become tools for solving the sixth Hilbert problem, at least for the physics of the macrocosm. Investigations in this direction may be interesting for astrophysics, because there exists the hypothesis that on the large scale of the Universe, physical laws (in particular, the laws of kinematics) may be different from the laws acting in the neighborhood of our Solar System. Also, these investigations may be applied to the construction of mathematical foundations of tachyon kinematics. We believe that theories of changeable and kinematic changeable sets may be interesting not only for theoretical physics but also for other fields of science, as a new mathematical apparatus for the description of the evolution of complex systems.
[3416] vixra:1701.0513 [pdf]
On Maximal Proper Force, Black Hole Horizons and Matter as Curvature in Momentum Space
Starting with the study of the geometry on the cotangent bundle (phase space), it is shown that the maximal proper force condition, in the case of a uniformly accelerated observer of mass $m$ along the $x$ axis, leads to a minimum value of $x$ lying $inside$ the Rindler wedge and given by the black hole horizon radius $ 2Gm$. Whereas in the uniform circular motion case, we find that the maximal proper force condition implies that the radius of the circle cannot exceed the value of the horizon radius $2Gm$. A correspondence is found between the black hole horizon radius and a singularity in the curvature of momentum space. The fact that the geometry (metric) in phase spaces is observer dependent (on the momentum of the massive particle/observer) indicates further that the matter stress energy tensor and vacuum energy in the underlying spacetime may admit an interpretation in terms of the curvature in momentum spaces. Consequently, phase space geometry seems to be the proper arena for a space-time-matter unification.
[3417] vixra:1701.0497 [pdf]
A Suggested Boundary for Heisenberg's Uncertainty Principle
In this paper we combine Heisenberg's uncertainty principle with Haug's suggested maximum velocity for anything with rest mass. This leads to a suggested exact boundary condition on Heisenberg's uncertainty principle. The uncertainty in position at the potential maximum momentum for subatomic particles, as derived from the maximum velocity, is half of the Planck length.
[3418] vixra:1701.0491 [pdf]
Brute-Force Computer Modelling and Derivation of Group Operations for the 12 1st Level Extended Rishon Model Particles, Assuming Elliptically-Polarised Mobius-Light Topology
This paper continues prior work based on the insight that Rishon ultracoloured triplets (electron, up, neutrino in left and right forms) might simply be elliptically-polarised "mobius light". The important first step is therefore to identify the twelve (24 including both left and right handed forms) phases, the correct topology, and then to perform transformations (mirroring, rotation, time-reversal) to double-check which "particles" are identical to each other and which are anti-particle opposites. Ultimately, a brute-force systematic analysis will allow a formal mathematical group to be dropped seamlessly on top of the twelve (24) particles.
[3419] vixra:1701.0463 [pdf]
New Rotational Doppler-Effect
By oblique reflection of circularly polarized photons on a rotating cylindrical mirror, the frequency of the reflected photons is shifted against the frequency of the incident photons by nearly twice the rotational frequency $n$ of the mirror: $\Delta \nu = 2 n \sin \alpha$, where $\alpha$ is the axial angle of incidence. $\Delta \nu$ can be substantially enhanced by multiple reflections between counter-rotating coaxial mirrors.
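The quoted shift is easy to evaluate. A small sketch under our reading of the abstract: $\Delta\nu = 2 n \sin\alpha$ per reflection, with the shift accumulating linearly over multiple reflections (the multi-reflection scaling is our assumption, suggested but not spelled out in the abstract).

```python
import math

def rotational_doppler_shift(n_rot_hz, alpha_deg, reflections=1):
    """Frequency shift (Hz) of circularly polarized light after oblique
    reflection(s) on a rotating mirror: delta_nu = 2 * n * sin(alpha),
    assumed to accumulate once per reflection."""
    return reflections * 2.0 * n_rot_hz * math.sin(math.radians(alpha_deg))

print(rotational_doppler_shift(1000.0, 30.0))        # single reflection: ~1 kHz
print(rotational_doppler_shift(1000.0, 30.0, 50))    # 50 reflections: ~50 kHz
```

Even for a mirror spinning at 1 kHz the single-reflection shift is tiny compared with optical frequencies, which is why the abstract emphasizes enhancement by multiple reflections.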
[3420] vixra:1701.0452 [pdf]
Relations on Neutrosophic Multi Sets with Properties
In this paper, we first give the Cartesian product of two neutrosophic multi sets (NMS). Then, we define relations on neutrosophic multi sets to extend intuitionistic fuzzy multi relations to neutrosophic multi relations. These relations allow one to compose two neutrosophic multi sets. Also, various properties like reflexivity, symmetry and transitivity are studied.
[3421] vixra:1701.0433 [pdf]
The 3n±p Conjecture: A Generalization of Collatz Conjecture
The Collatz conjecture is an open conjecture in mathematics named so after Lothar Collatz who proposed it in 1937. It is also known as the 3n + 1 conjecture, the Ulam conjecture (after Stanislaw Ulam), Kakutani's problem (after Shizuo Kakutani) and so on. Several generalizations of the Collatz conjecture have been carried out.
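For concreteness, here is a minimal sketch (ours) of the generalized map behind a $3n \pm p$ conjecture: halve even numbers, map odd $n$ to $3n + p$ (or $3n - p$), and record the trajectory until a value repeats.

```python
def trajectory(n, p=1, sign=+1, max_steps=10_000):
    """Iterate n -> n/2 (n even), n -> 3n + sign*p (n odd)
    until a value repeats; return the full trajectory."""
    seen = set()
    path = [n]
    for _ in range(max_steps):
        if n in seen:
            return path
        seen.add(n)
        n = n // 2 if n % 2 == 0 else 3 * n + sign * p
        path.append(n)
    raise RuntimeError("no cycle found within max_steps")

# Classical 3n + 1 case: the trajectory falls into the 4 -> 2 -> 1 cycle.
print(trajectory(7))
# A 3n + 5 example with a different cycle: 5 -> 20 -> 10 -> 5.
print(trajectory(5, p=5))
```

The second example shows why such generalizations are interesting: changing $p$ changes which cycles exist, which is exactly what a $3n \pm p$ conjecture must classify.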
[3422] vixra:1701.0423 [pdf]
Triple Refined Indeterminate Neutrosophic Sets for Personality Classification
Personality tests are most commonly of an objective type where the users rate their own behaviour. Instead of being given a single forced choice, users can be provided with more options. A person may not, in general, be capable of judging his/her behaviour very precisely and categorizing it into a single category. Since it is self-rating, there are a lot of uncertain and indeterminate feelings involved. The results of the test depend a lot on the circumstances under which the test is taken, the amount of time spent, the past experience of the person, the emotion the person is feeling, the person's self-image at that time, and so on.
[3423] vixra:1701.0419 [pdf]
α-D MCDM-Topsis Multi-Criteria Decision Making Method for N-Wise Criteria Comparisons and Inconsistent Problems
The purpose of this paper is to present an extension and alternative of the hybrid approach using Saaty's Analytical Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method (AHP-TOPSIS), which is based on the AHP and its use of pairwise comparisons, to a new method called α-D MCDM-TOPSIS (α-Discounting Method for Multi-Criteria Decision Making-TOPSIS). The proposed method works not only for preferences that are pairwise comparisons of criteria, as AHP does, but for preferences of any n-wise (with n ≥ 2) comparisons of criteria. Finally, the α-D MCDM-TOPSIS methodology is verified by some examples to demonstrate how it might be applied to different types of matrices and how it allows for consistent, inconsistent, weakly inconsistent, and strongly inconsistent problems.
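As background for readers, the classical TOPSIS ranking step that the hybrid approach builds on can be sketched as follows. This is a generic textbook TOPSIS, not the proposed α-D MCDM-TOPSIS; the decision matrix and weights are made-up illustrative data.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    # 1. vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # 2. ideal (best) and anti-ideal (worst) virtual alternatives
    ideal = [max(row[j] for row in v) if benefit[j] else min(row[j] for row in v)
             for j in range(n)]
    worst = [min(row[j] for row in v) if benefit[j] else max(row[j] for row in v)
             for j in range(n)]
    # 3. closeness coefficient: nearer to ideal, farther from worst is better
    scores = []
    for row in v:
        d_plus = math.dist(row, ideal)
        d_minus = math.dist(row, worst)
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Three alternatives, two benefit criteria weighted 0.6 / 0.4:
print(topsis([[7, 9], [8, 7], [9, 6]], [0.6, 0.4], [True, True]))
```

The α-D MCDM part of the paper concerns how the weights are derived from (possibly inconsistent) n-wise comparisons; the closeness-coefficient ranking above is the TOPSIS half of the hybrid.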
[3424] vixra:1701.0413 [pdf]
Neutrosophic Filters in BE-Algebras
In this paper, we introduce the notion of (implicative) neutrosophic filters in BE-algebras. The relation between implicative neutrosophic filters and neutrosophic filters is investigated, and we show that in self-distributive BE-algebras these notions are equivalent.
[3425] vixra:1701.0406 [pdf]
Neutrosophic Set Approach for Characterizations of Left Almost Semigroups
In this paper we have defined neutrosophic ideals, neutrosophic interior ideals, neutrosophic quasi-ideals and neutrosophic bi-ideals (neutrosophic generalized bi-ideals) and proved some results related to them.
[3426] vixra:1701.0392 [pdf]
A New 3n−1 Conjecture Akin to Collatz Conjecture
The Collatz conjecture is an open conjecture in mathematics named after Lothar Collatz, who proposed it in 1937. It is also known as the 3n + 1 conjecture, the Ulam conjecture (after Stanislaw Ulam), Kakutani's problem (after Shizuo Kakutani), and so on.
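The 3n + 1 map, and the 3n − 1 variant named in the title, can be sketched as follows. The `sign` parameter is only an illustrative device; the precise statement of the paper's conjecture is not reproduced here.

```python
def step(n, sign=1):
    """One step of the 3n +/- 1 map: halve if even, else 3n + sign."""
    return n // 2 if n % 2 == 0 else 3 * n + sign

def orbit(n, sign=1, limit=100):
    """Iterate from n until reaching 1 or hitting the iteration limit."""
    seen = [n]
    while n != 1 and len(seen) < limit:
        n = step(n, sign)
        seen.append(n)
    return seen
```

For the 3n + 1 map, `orbit(6)` reaches 1 (6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1); for the 3n − 1 map, the orbit of 5 enters the cycle 5 → 14 → 7 → 20 → 10 → 5 and never reaches 1.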
[3427] vixra:1701.0373 [pdf]
Clustering Algorithm of Triple Refined Indeterminate Neutrosophic Set for Personality Grouping
Triple Refined Indeterminate Neutrosophic Set (TRINS), a case of the refined neutrosophic set, was introduced. It provides the additional possibility to represent with sensitivity and accuracy the uncertain, imprecise, incomplete, and inconsistent information that is available in the real world.
[3428] vixra:1701.0371 [pdf]
Clustering of Personality using Indeterminacy Based Personality Test
Triple Refined Indeterminate Neutrosophic Set (TRINS), a case of the refined neutrosophic set, was introduced in [8]. The uncertain and inconsistent information available in the real world is represented with sensitivity and accuracy by TRINS.
[3429] vixra:1701.0354 [pdf]
Interval-Valued Neutrosophic Soft Rough Sets
We first define interval-valued neutrosophic soft rough sets (IVN-soft rough sets for short), which combine interval-valued neutrosophic soft sets and rough sets, and study some of their basic properties. This concept is an extension of interval-valued intuitionistic fuzzy soft rough sets (IVIF-soft rough sets).
[3430] vixra:1701.0335 [pdf]
Relativistic Quantum Theory of Matter and Gravitation from 3 Postulates
We use three postulates, P1, P2a/b and P3. Combining P1 and P2a with "Sommerfeld's quantum rules" corresponds to the original quantum theory of Hydrogen, which produces the correct relativistic energy levels for atoms (Sommerfeld's and Dirac's theories of matter produce the same energy levels, and Schrodinger's theory produces an approximation of those energy levels). P3 can be found in Schrodinger's famous paper introducing his equation, P3 being his first assumption (a second assumption, suppressed here, is required to deduce his equation). P3 implies that the wavefunction is a solution of both Schrodinger's and Klein-Gordon's equations in the non-interacting case while, in the interacting case, it immediately implies "Sommerfeld's quantum rules": P1, P2a, and P3 then produce the correct relativistic energy levels of atoms, and we check that the required degeneracy is justified by pure deduction, without any other assumption (Schrodinger's theory only justifies one half of the degeneracy). We observe that the introduction of an interaction in P1 is equivalent to a modification of the metric inside the wavefunction in P3, such that the equation of motion of a system can be deduced with two different methods, with or without the metric. Replacing the electromagnetic potential P2a by the suggested gravitational potential P2b, the equation of motion (deduced in two ways) is equivalent to the equation of motion of General Relativity in the low-field approximation (with accuracy 10^-6 at the surface of the Sun). We have no coordinate singularity inside the metric. Other motions can be obtained by modifying P2b; the theory is adaptable. First of all, we discuss classical Kepler problems (Newtonian motion of the Earth around the Sun), explain the link between Kepler's law of periods (1619) and Planck's law (1900), and observe the links between all historical models of atoms (Bohr, Sommerfeld, Pauli, Schrodinger, Dirac, Fock).
This being done, we introduce P1, P2a/b, and P3 to then describe electromagnetism and gravitation in the same formalism.
[3431] vixra:1701.0334 [pdf]
Matter and Energy in a Non-Relativistic Approach Amongst the Mustard Seed and the "Faith". A Metaphysical Conclusion
The work is the result of a philosophical study of several passages of the Holy Bible with regard to faith. We analyzed verses that include mustard seed parables. The study discusses the various concepts of faith as belief and faith as a form of energy. In this concept of faith as energy, we made a connection between this energy and matter. We approach the gravitational field using the Law of Universal Gravitation and the equation of equivalence between energy and matter for non-relativistic effects. From the Scriptures, we focus on Matthew 17:20, and according to the concept of faith as a form of energy, we calculate the energy needed to raise a mountain from the conversion of matter to energy in a mustard seed, and we compare a massive iron mountain, Mount Everest and Mount Sinai. We conclude with these concepts and considerations that the energy of "faith" can move a mountain.
[3432] vixra:1701.0319 [pdf]
The Ultimate Limits of the Relativistic Rocket Equation. The Planck Photon Rocket
In this paper we look at the ultimate limits of a photon propulsion rocket. The maximum velocity for a photon propulsion rocket is just below the speed of light and is a function of the reduced Compton wavelength of the heaviest subatomic particles in the rocket. We are basically combining the relativistic rocket equation with Haug's new insight into the maximum velocity for anything with rest mass; see [1, 2, 3]. An interesting new finding is that in order to accelerate any subatomic "fundamental" particle to its maximum velocity, the particle rocket basically needs two Planck masses of initial load. This might sound illogical until one understands that subatomic particles with different masses have different maximum velocities. This can be generalized to large rockets and gives us the maximum theoretical velocity of a fully-efficient and ideal rocket. Further, no additional fuel is needed to accelerate a Planck mass particle to its maximum velocity; this also might sound absurd, but it has a very simple and logical solution that is explained in this paper. This paper is Classified!
[3433] vixra:1701.0316 [pdf]
A Further Exploration of the Preliminary Implications of Hypercolour as Being Phase-Order of "Mobius" Elliptically-Polarised Light in the Extended Rishon Model
In a prior paper, ultracolour was added back into the Extended Rishon Model, and the I-Frame structure explored using the proton as an example. Bearing in mind that Maxwell's equations have to be obeyed, the Rishons have to have actual phase, position, momentum and velocity. The only pattern of motion that fitted the stringent requirements was the Rishons circulating on Mobius strips. Fascinatingly and very excitingly, exactly such a previously theoretical elliptically-transverse Mobius topology of light was experimentally confirmed last year. The next logical task, writing out Rishon triplets in a circle as actual starting phases of the elliptically polarised Mobius-walking light, has proven to be a huge breakthrough, providing startling insight with massive implications: implying the existence of two previously undiscovered quarks very similar to up and down (provisionally nicknamed over and under), logically and naturally confirming that "decay" is just a "phase transform", and generally being rather disruptive to both the Standard Model and the Extended Rishon Model. A huge task is therefore ahead, to revisit the available data on particle decays and masses (bearing in mind that the Standard Model's statistical-inference confirmation techniques assume the up and over, and down and under, to be the same particles), so this paper endeavours to lay some groundwork and ask pertinent questions.
[3434] vixra:1701.0314 [pdf]
The Geometry of Non-Linear Adjustment of the Trisection Problem in the Option of the 3-Dimensional Geodesy
In an article, E. Grafarend and B. Schaffrin studied the geometry of non-linear adjustment of the planar trisection problem using the Gauss-Markov model and the method of least squares. This paper develops the same method, working on an example of the determination of a point by trilateration in the three-dimensional geodetic option: determining the coordinates (x, y, z) of an unknown point from measured distances to n known points.
[3435] vixra:1701.0309 [pdf]
Inversions And Invariants Of Space And Time
This paper is on the mathematical structure of space, time, and gravity. It is shown that electrodynamics is neither charge inversion invariant, nor is it time inversion invariant.
[3436] vixra:1701.0303 [pdf]
Geometric Theory of Inversion and Seismic Imaging II: Inversion + Datuming + Static + Enhancement
The goal of seismic processing is to convert input data collected in the field into a meaningful image based on signal processing and wave equation processing and other algorithms. It is normally a global approach like tomography or FWI (full waveform inversion). Seismic imaging or inversion methods fail partly due to the thin-lens effect or rough surfaces. These interfaces are non-invertible. To mitigate the problem, we propose a more stable method for seismic imaging in 4 steps as layer stripping approach of INVERSION + DATUMING + STATIC + ENHANCEMENT.
[3437] vixra:1701.0300 [pdf]
Golden and Harmonic Mean in the Genetic Code
In two previous works [1], [2] we have shown the determination of the genetic code by the golden and harmonic mean within the standard Genetic Code Table (GCT), i.e. the nucleotide triplet table, whereas in this paper we show the same determination through a specific connection between two tables, the nucleotide doublet Table (DT) and triplet Table (TT), over the polarity of amino acids, measured by Cloister energy. (Miloje M. Rakočević) (Belgrade, 6.01.2017) (www.rakocevcode.rs) (mirkovmiloje@gmail.com)
[3438] vixra:1701.0293 [pdf]
Einstein Versus Fitzgerald, Lorentz, and Larmor Length Contraction
This paper discusses the similarities between Einstein's length contraction and the FitzGerald, Lorentz, and Larmor length contraction. The FitzGerald, Lorentz, and Larmor length contraction was originally derived only for the case of a frame moving relative to the ether frame, and not for two moving frames. When extending the FitzGerald, Lorentz, and Larmor length transformation to any two frames, we will clearly see that it is different from the Einstein length contraction. Under the FitzGerald, Lorentz, and Larmor length transformation we get both length contraction and length expansion, and non-reciprocality, while under Einstein's special relativity theory we have only length contraction and reciprocality. However, we show that there is a mathematical and logical link between the two methods of measuring length. This paper shows that the Einstein length contraction can be derived from assuming an anisotropic one-way speed of light. Further, we show that the reciprocality of length contraction under special relativity is an apparent reciprocality due to Einstein-Poincaré synchronization. The Einstein length contraction is real in the sense that the predictions are correct when measured with Einstein-Poincaré synchronized clocks. Still, we will claim that there likely is a deeper and more fundamental reality that is better described by the extended FitzGerald, Lorentz, and Larmor framework, which, in the special case of using Einstein-Poincaré synchronized clocks, gives Einstein's length contraction. The extended FitzGerald, Lorentz, and Larmor length contraction also involves length expansion, and it is not reciprocal between frames. Still, when using Einstein synchronized clocks, the length contraction is apparently reciprocal.
An enduring, open question concerns whether or not it is possible to measure the one-way speed of light without relying on Einstein-Poincaré synchronization or slow clock transport synchronization, and whether the one-way speed of light is then anisotropic or isotropic. Several published experiments claim to have found an anisotropic one-way speed of light. These experiments have been ignored or ridiculed, but in our view they should be repeated and investigated further.
[3439] vixra:1701.0290 [pdf]
Exploring the Addition of Hypercolour to the Extended Rishon Model
Colour (R,G,B) seems to be fashionable in particle physics theories, where it may be interpreted to be phase. In the context of the Extended Rishon Model, where we interpret particles to comprise photons in phase-harmonic braid-ordered inter-dependence, Colour takes on a very specific relevance and meaning, not least because Maxwell's equations have to be obeyed literally and undeniably, and phase is an absolutely critical part of Maxwell's equations. A number of potential candidate layouts are explored, including taking Sundance O Bilson-Thompson's topological braid-order literally. Ultimately though, the only thing that worked out that still respected the rules of the Extended Rishon Model was to place the Rishons on a mobius strip, mirroring Williamson's toroidal pattern, which, with its back-to-back two-cycle rotation, reminds us of Qiu-Hong Hu's Hubius Helix. The layout of the 2nd level I-Frame is therefore explored, using the proton as a candidate.
[3440] vixra:1701.0278 [pdf]
Structural Properties of Neutrosophic Abel-Grassmanns Groupoids
In this paper, we have introduced the notions of neutrosophic (2,2)-regular and neutrosophic strongly regular AG-groupoids and investigated these structures. We have shown that neutrosophic regular, neutrosophic intra-regular and neutrosophic strongly regular AG-groupoids are the only generalized classes of an AG-groupoid.
[3441] vixra:1701.0275 [pdf]
Supervised Pattern Recognition Using Similarity Measure Between Two Interval Valued Neutrosophic Soft Sets
F. Smarandache introduced the concept of neutrosophic set in 1995 and P. K. Maji introduced the notion of neutrosophic soft set in 2013, which is a hybridization of neutrosophic set and soft set.
[3442] vixra:1701.0272 [pdf]
Sustainable Assessment of Alternative Sites for the Construction of a Waste Incineration Plant by Applying WASPAS Method with Single-Valued Neutrosophic Set
The principles of sustainability have become particularly important in the construction and real estate maintenance sectors, and in all areas of life, in recent years. One of the major problems of urban territories is that the domestic and construction waste generated cannot be removed automatically.
[3443] vixra:1701.0263 [pdf]
The Smarandache Bryant Schneider Group Of A Smarandache Loop
The concept of the Smarandache Bryant Schneider Group of a Smarandache loop is introduced. Relationships between the Bryant Schneider Group and the Smarandache Bryant Schneider Group of an S-loop are discovered, and the latter is found to be useful in finding Smarandache isotopy-isomorphy conditions in S-loops, just as the former is useful in finding isotopy-isomorphy conditions in loops.
[3444] vixra:1701.0258 [pdf]
Black Hole Clusters: The Dark Matter
Supermassive black holes were created during the Big Bang. As such, they were available for clustering in the early Universe. This paper describes the role these clusters could play in explaining dark matter, and it answers the following question: What is the energy source for the extremely hot gas found in galactic clusters?
[3445] vixra:1701.0255 [pdf]
The Weighted Distance Measure Based Method to Neutrosophic Multi-Attribute Group Decision Making
Neutrosophic set (NS) is a generalization of fuzzy set (FS) designed for practical situations in which each element has a different truth membership function, indeterminacy membership function and falsity membership function.
[3446] vixra:1701.0244 [pdf]
Vector Similarity Measures for Simplified Neutrosophic Hesitant Fuzzy Sets and Their Applications
In this article we present three similarity measures between simplified neutrosophic hesitant fuzzy sets, which contain the concepts of single-valued neutrosophic hesitant fuzzy sets and interval-valued neutrosophic hesitant fuzzy sets, based on extensions of the Jaccard similarity measure, the Dice similarity measure and the cosine similarity in vector space.
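A minimal sketch of the vector-space cosine measure for single-valued neutrosophic elements, the kind of measure these hesitant-fuzzy extensions build on. The triples in the usage are invented, and the paper's hesitant extension itself is not reproduced; this assumes no element is the zero triple (0, 0, 0).

```python
import math

def cosine_similarity_svns(A, B):
    """Cosine similarity between two single-valued neutrosophic sets.

    A, B: lists of (truth, indeterminacy, falsity) triples over the same
    universe. Averages the per-element cosine of the two 3-vectors.
    Assumes no element is (0, 0, 0), so the denominators are nonzero.
    """
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        num = ta * tb + ia * ib + fa * fb
        den = math.sqrt(ta**2 + ia**2 + fa**2) * math.sqrt(tb**2 + ib**2 + fb**2)
        total += num / den
    return total / len(A)

# invented example sets over a 2-element universe
A = [(0.7, 0.2, 0.1), (0.4, 0.4, 0.2)]
B = [(0.2, 0.3, 0.5), (0.9, 0.05, 0.05)]
```

Identical sets score 1; the measure decreases as the membership triples diverge.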
[3447] vixra:1701.0238 [pdf]
Liar Liar, Pants on Fire; or How to Use Subjective Logic and Argumentation to Evaluate Information from Untrustworthy Sources
This paper presents a non-prioritized belief change operator, designed specifically for incorporating new information from many heterogeneous sources in an uncertain environment. We take into account that sources may be untrustworthy and provide a principled method for dealing with the reception of contradictory information.
[3448] vixra:1701.0230 [pdf]
Multi-Criteria Decision Making Method Based on Similarity Measures Under Single Valued Neutrosophic Refined and Interval Neutrosophic Refined Environments
In this paper, we propose three similarity measure methods for single valued neutrosophic refined sets and interval neutrosophic refined sets based on Jaccard, Dice and Cosine similarity measures of single valued neutrosophic sets and interval neutrosophic sets.
[3449] vixra:1701.0217 [pdf]
Neutrosophic Complex N Continuity
In this paper, the concept of N-open set in neutrosophic complex topological space is introduced. Some of the interesting properties of neutrosophic complex N-open sets are studied. The idea of neutrosophic complex N-continuous function and its characterization are discussed. Also the interrelation among the sets and continuity are established.
[3450] vixra:1701.0216 [pdf]
Neutrosophic Cubic Ideals
Operational properties of neutrosophic cubic sets are investigated.The notion of neutrosophic cubic subsemigroups and neutrosophic cubic left (resp.right) ideals are introduced, and several properties are investigated.
[3451] vixra:1701.0214 [pdf]
Neutrosophic Hypergraphs
In this paper, we introduce certain concepts, including neutrosophic hypergraph, line graph of a neutrosophic hypergraph, dual neutrosophic hypergraph, tempered neutrosophic hypergraph and transversal neutrosophic hypergraph. We illustrate these concepts by several examples and investigate some of their interesting properties.
[3452] vixra:1701.0213 [pdf]
Neutrosophic Hyperideals of Γ-Semihyperrings
Hyperstructures, in particular hypergroups, were introduced in 1934 by Marty [12] at the eighth congress of Scandinavian Mathematicians. The notion of algebraic hyperstructure has been developed in the following decades and nowadays by many authors, especially Corsini [2, 3], Davvaz [5, 6, 7, 8, 9], Mittas [13], Spartalis [16], Stratigopoulos [17] and Vougiouklis [20]. Basic definitions and notions concerning hyperstructure theory can be found in [2].
[3453] vixra:1701.0199 [pdf]
Neutrosophic Complex N-Continuity
In this paper, the concept of N-open set in neutrosophic complex topological space is introduced. Some of the interesting properties of neutrosophic complex N-open sets are studied. The idea of neutrosophic complex N-continuous function and its characterization are discussed. Also the interrelation among the sets and continuity are established.
[3454] vixra:1701.0197 [pdf]
New Distance Measure of Single-Valued Neutrosophic Sets and Its Application
A single-valued neutrosophic set (SVNS) is an instance of a neutrosophic set, which can be used to handle uncertain, imprecise, indeterminate, and inconsistent information in real life. In this paper, a new distance measure between two SVNSs is defined by full consideration of the truth-membership function, indeterminacy-membership function, and falsity-membership function for the forward and backward differences.
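For contrast, the standard normalized Hamming distance between SVNSs, the kind of baseline measure the new definition refines, can be sketched as follows; the sets in the usage are invented examples, and the paper's forward/backward-difference construction is not reproduced.

```python
def hamming_distance_svns(A, B):
    """Normalized Hamming distance between two single-valued neutrosophic
    sets, each given as a list of (truth, indeterminacy, falsity) triples
    over the same universe of n elements:
        d(A, B) = (1 / 3n) * sum of |tA - tB| + |iA - iB| + |fA - fB|.
    """
    n = len(A)
    return sum(abs(ta - tb) + abs(ia - ib) + abs(fa - fb)
               for (ta, ia, fa), (tb, ib, fb) in zip(A, B)) / (3 * n)

# invented single-element example sets
A = [(0.7, 0.2, 0.1)]
B = [(0.4, 0.3, 0.3)]
```

The distance is 0 for identical sets and at most 1 when all three membership values differ maximally on every element.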
[3455] vixra:1701.0195 [pdf]
N-Fold Filters in Smarandache Residuated Lattices, Part (I)
In this paper we introduce the notions of n-fold BL-Smarandache positive implicative filter and n-fold BL-Smarandache implicative filter in Smarandache residuated lattices and study the relations among them.
[3456] vixra:1701.0194 [pdf]
N-Fold Filters in Smarandache Residuated Lattices, Part (II)
In this paper we introduce the notions of n-fold BL-Smarandache fantastic filter and n-fold BL-Smarandache easy filter in Smarandache residuated lattices and study the relations among them.
[3457] vixra:1701.0193 [pdf]
Non-Overlapping Matrices
Two matrices are said to be non-overlapping if one of them cannot be put on the other in such a way that the corresponding entries coincide. We provide a set of non-overlapping binary matrices and a formula to enumerate it which involves the k-generalized Fibonacci numbers. Moreover, the generating function for the enumerating sequence is easily seen to be rational.
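The k-generalized Fibonacci numbers appearing in the enumeration formula can be generated as follows. Seeding conventions vary in the literature, so the 0, …, 0, 1 seed used here is one common choice, not necessarily the paper's.

```python
def k_generalized_fibonacci(k, n):
    """First n terms of the k-generalized Fibonacci sequence: each term is
    the sum of the previous k terms, seeded with k-1 zeros and a one."""
    seq = [0] * (k - 1) + [1]
    while len(seq) < n:
        seq.append(sum(seq[-k:]))
    return seq[:n]
```

For k = 2 this is the ordinary Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, …; k = 3 gives the tribonacci numbers.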
[3458] vixra:1701.0190 [pdf]
Novel Multiple Criteria Decision Making Methods Based on Bipolar Neutrosophic Sets and Bipolar Neutrosophic Graphs
In this research article, we present certain notions of bipolar neutrosophic graphs. We study the dominating and independent sets of bipolar neutrosophic graphs. We describe novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs. We develop an algorithm for computing domination in bipolar neutrosophic graphs. We also show that there are some flaws in the definition of Broumi et al. [11].
[3459] vixra:1701.0189 [pdf]
N-Valued Refined Neutrosophic Soft Set Theory
In this paper, as a generalization of the neutrosophic soft set, we introduce the concept of the n-valued refined neutrosophic soft set and study some of its properties. We also define its basic operations, complement, union, intersection, "AND" and "OR", and study their properties.
[3460] vixra:1701.0188 [pdf]
N-Valued Refined Neutrosophic Soft Sets and Its Applications in Decision Making Problems and Medical Diagnosis
In this work we use the concept of n-valued refined neutrosophic soft sets and their properties to solve decision making problems; a similarity measure between two n-valued refined neutrosophic soft sets is also proposed.
[3461] vixra:1701.0187 [pdf]
On a Q-Smarandache Fuzzy Commutative Ideal of a Q-Smarandache BH-algebra
In this paper, the notions of Q-Smarandache fuzzy commutative ideal and Q-Smarandache fuzzy sub-commutative ideal of a Q-Smarandache BH-Algebra are introduced, examples and related properties are investigated. Also, the relationships among these notions and other types of Q-Smarandache fuzzy ideal of a Q-Smarandache BH-Algebra are studied.
[3462] vixra:1701.0183 [pdf]
On Neutrosophic Soft Function
In this paper, the cartesian product and the relations on neutrosophic soft sets have been defined in a new approach. Some properties of this concept have been discussed and verified with suitable real life examples.
[3463] vixra:1701.0182 [pdf]
On Neutrosophic Submodules of a Module
The target of this study is to observe some of the algebraic structures of a single valued neutrosophic set. So, we introduce the concept of a neutrosophic submodule of a given classical module and investigate some of the crucial properties and characterizations of the proposed concept.
[3464] vixra:1701.0181 [pdf]
On Parallel Curves Via Parallel Transport Frame in Euclidean 3-Space
In this paper, we study the parallel curve of a space curve according to the parallel transport frame. Then, we obtain new results for some cases of this curve by using the parallel transport frame in Euclidean 3-space. Additionally, we give new examples for these characterizations and illustrate these examples in figures.
[3465] vixra:1701.0180 [pdf]
On Pseudospherical Smarandache Curves in Minkowski 3-Space
In this paper we define nonnull and null pseudospherical Smarandache curves according to the Sabban frame of a spacelike curve lying on pseudosphere in Minkowski 3-space.
[3466] vixra:1701.0178 [pdf]
On The Darboux Vector Belonging To Involute Curve A Different View
In this paper, we investigated special Smarandache curves in terms of the Sabban frame drawn on the surface of the sphere by the unit Darboux vector of the involute curve. We created the Sabban frame belonging to this curve. It was explained how the position vectors of the Smarandache curves are composed of the Sabban vectors belonging to this curve. Then, we calculated the geodesic curvatures of these Smarandache curves. The results found were expressed in terms of the base curve. We also give an example of the results found.
[3467] vixra:1701.0169 [pdf]
Quantitative Analysis of Particles Segregation
Segregation is a common phenomenon. It has considerable effects on material performance. To the author's knowledge, there is still no automated, objective quantitative indicator for segregation. To fulfill this task, the segregation of particles is analyzed. Edges of the particles are extracted from the digital picture. Then, the whole picture of particles is split into small rectangles of the same shape.
[3468] vixra:1701.0155 [pdf]
Single-Valued Neutrosophic Graph Structures
A graph structure is a generalization of undirected graph which is quite useful in studying some structures, including graphs and signed graphs. In this research paper, we apply the idea of single-valued neutrosophic sets to graph structure, and explore some interesting properties of single-valued neutrosophic graph structure. We also discuss the concept of φ-complement of single-valued neutrosophic graph structure.
[3469] vixra:1701.0151 [pdf]
Smarandache Curves According to Sabban Frame of Fixed Pole Curve Belonging to the Bertrand Curves Pair
In this paper, we investigate the Smarandache curves according to the Sabban frame of the fixed pole curve drawn by the unit Darboux vector of the Bertrand partner curve. Some results have been obtained. These results are expressed in terms of the Bertrand curve.
[3470] vixra:1701.0141 [pdf]
Special Smarandache Curves in R
In differential geometry, there are many important consequences and properties of curves studied by various authors [1, 2, 3]. Researchers often introduce new curves by building on existing studies.
[3471] vixra:1701.0129 [pdf]
Basic Properties Of Second Smarandache Bol Loops
The basic properties of S2ndBLs are studied. These properties are all Smarandache in nature. The results in this work generalize the basic properties of Bol loops found in the Ph.D. thesis of D. A. Robinson. Some questions for further study are raised.
[3472] vixra:1701.0127 [pdf]
Bipolar Neutrosophic Refined Sets and Their Applications in Medical Diagnosis
This paper proposes the concept of the bipolar neutrosophic refined set and some of its operations. Firstly, score, certainty and accuracy functions to compare bipolar neutrosophic refined information are defined. Secondly, to aggregate bipolar neutrosophic refined information, a bipolar neutrosophic refined weighted average operator and a bipolar neutrosophic refined weighted geometric operator are developed.
[3473] vixra:1701.0122 [pdf]
Certain Networks Models Using Single-valued Neutrosophic Directed Hypergraphs
A directed hypergraph is powerful tool to solve the problems that arises in different fields, including computer networks, social networks and collaboration networks. In this research paper, we apply the concept of single-valued neutrosophic sets to directed hypergraphs.
[3474] vixra:1701.0121 [pdf]
Change Detection by New DSmT Decision Rule and ICM with Constraints: Application to Argan Land Cover
The objective of this work is, in the first place, the integration, in a fusion process using a hybrid DSmT model, of both the contextual information obtained from a supervised ICM classification with constraints and the temporal information from two images taken at two different dates.
[3475] vixra:1701.0111 [pdf]
Context-dependent Combination of Sensor Information in Dempster-Shafer Theory for BDI
There has been much interest in the Belief-Desire-Intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge.
[3476] vixra:1701.0103 [pdf]
Cosine Similarity Measures of Neutrosophic Soft Set
In this paper we have introduced the concept of cosine similarity measures for neutrosophic soft sets and interval-valued neutrosophic soft sets. An application is given to show their practicality and effectiveness.
[3477] vixra:1701.0101 [pdf]
Decision-Making with Belief Interval Distance
In this paper we propose a new general method for decision-making under uncertainty based on the belief interval distance. We show through several simple illustrative examples how this method works and its ability to provide reasonable results.
[3478] vixra:1701.0093 [pdf]
Dual Curves of Constant Breadth According to Bishop Frame in Dual Euclidean Space
In this work, curves of constant breadth are defined and some characterizations of closed dual curves of constant breadth according to the Bishop frame are presented in dual Euclidean space. Also, a third-order vectorial differential equation is obtained in dual Euclidean 3-space.
[3479] vixra:1701.0082 [pdf]
Fingerprint Quality Assessment: Matching Performance and Image Quality
This article chiefly focuses on Fingerprint Quality Assessment (FQA) applied to Automatic Fingerprint Identification Systems (AFIS). In our research work, the different FQA solutions proposed so far are compared using several quality metrics selected from existing studies.
[3480] vixra:1701.0071 [pdf]
Generalized Fibonacci Sequences in Groupoids
In this paper, we introduce the notion of generalized Fibonacci sequences over a groupoid and discuss it in particular for the case where the groupoid contains idempotents and pre-idempotents. Using the notion of Smarandache-type P-algebra, we obtain several relations on groupoids which are derived from generalized Fibonacci sequences.
[3481] vixra:1701.0066 [pdf]
A Clustering-Based Evidence Reasoning Method
Aiming at the counterintuitive phenomena of the Dempster-Shafer method in combining highly conflicting evidence, a combination method for evidence based on clustering analysis is proposed in this paper. First, the cause of conflicts is disclosed from the point of view of internal and external contradiction. Then, a new similarity measure is proposed by comprehensively considering the Pignistic distance and the ordering by size of the basic belief assignments over focal elements.
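The Pignistic distance mentioned above is computed between pignistic probabilities. A minimal sketch of the standard pignistic transform BetP, which spreads each focal element's mass uniformly over its members (the BPA in the usage is an invented example):

```python
def pignistic_transform(bpa):
    """Pignistic probability BetP from a basic belief assignment.

    bpa: dict mapping frozenset of hypotheses -> mass (masses sum to 1).
    Each focal element's mass is divided equally among its members.
    """
    betp = {}
    for focal, mass in bpa.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + mass / len(focal)
    return betp

# invented BPA over the frame {a, b}
m = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
```

Here BetP(a) = 0.6 + 0.4/2 = 0.8 and BetP(b) = 0.2; the Pignistic distance between two BPAs is then a distance between such probability vectors.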
[3482] vixra:1701.0065 [pdf]
A Double Cryptography Using The Smarandache Keedwell Cross Inverse Quasigroup
The present study further strengthens the use of the Keedwell CIPQ against attack on a system by the use of the Smarandache Keedwell CIPQ for cryptography, in a similar spirit to that in which the cross inverse property has been used by Keedwell. This is done as follows. By constructing two S-isotopic S-quasigroups (loops) U and V such that their Smarandache automorphism groups are not trivial, it is shown that U is a SCIPQ (SCIPL) if and only if V is a SCIPQ (SCIPL). Explanations and procedures are given on how these SCIPQs can be used to double encrypt information.
[3483] vixra:1701.0062 [pdf]
Algorithms for Neutrosophic Soft Decision Making Based on Edas and New Similarity Measure
This paper presents two novel single-valued neutrosophic soft set (SVNSS) methods. First, we initiate a new axiomatic definition of the single-valued neutrosophic similarity measure, expressed by a single-valued neutrosophic number (SVNN), which reduces information loss and retains more of the original information. Then, the objective weights of the various parameters are determined via grey system theory. Combining objective weights with subjective weights, we present combined weights, which can reflect both the subjective considerations of the decision maker and the objective information. Later, we present two algorithms to solve decision making problems based on Evaluation based on Distance from Average Solution (EDAS) and the similarity measure. Finally, the effectiveness and feasibility of the approaches are demonstrated by a numerical example.
[3484] vixra:1701.0059 [pdf]
A Model for Medical Diagnosis Via Fuzzy Neutrosophic Soft Sets
The concept of the neutrosophic soft set is a new mathematical tool for dealing with uncertainties that is free from the difficulties affecting existing methods. The theory has rich potential for applications in several directions. In this paper, a new approach is proposed to construct a decision method for medical diagnosis using fuzzy neutrosophic soft sets. Also, we develop a technique to diagnose which patient is suffering from which disease. Our data for the case study were provided by a medical center in Ordu, Turkey.
[3485] vixra:1701.0056 [pdf]
An Algorithm for Medical Magnetic Resonance Image Non-Local Means Denoising
Digital images and digital image processing have been widely researched in the past decades, and medical images hold a special place in this field. Magnetic resonance images are a very important class of medical images, and their enhancement is very significant for the diagnostic process.
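The non-local means idea named in the title can be sketched in a few lines: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar, so repeated structure is averaged while edges are preserved. This is a minimal textbook sketch, not the paper's algorithm; the filter parameter h and window sizes are illustrative:

```python
import numpy as np

def nlm_denoise(img, h=0.4, patch=1, search=3):
    """Basic non-local means. patch/search are half-widths of the
    similarity patch and the search window."""
    p = np.pad(img, patch + search, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + patch + search, j + patch + search
            ref = p[ci-patch:ci+patch+1, cj-patch:cj+patch+1]
            wsum = vsum = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = p[ni-patch:ni+patch+1, nj-patch:nj+patch+1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    w = np.exp(-d2 / h**2)            # similarity weight
                    wsum += w
                    vsum += w * p[ni, nj]
            out[i, j] = vsum / wsum
    return out

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0       # toy step edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = nlm_denoise(noisy)
mse_noisy = float(((noisy - clean) ** 2).mean())
mse_den = float(((den - clean) ** 2).mean())
```

Production implementations vectorize the patch comparisons and often subtract the noise variance from d2; the nested-loop form above is only meant to show the structure of the method.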
[3486] vixra:1701.0044 [pdf]
A New Definition of Entropy of Belief Functions in the Dempster-Shafer Theory
We propose a new definition of entropy for basic probability assignments (BPA) in the Dempster-Shafer (D-S) theory of belief functions, which is interpreted as a measure of total uncertainty in the BPA. Our definition is different from the definitions proposed by Höhle, Smets, Yager, Nguyen, Dubois-Prade, Lamata-Moral, Klir-Ramer, Klir-Parviz, Pal et al., Maeda-Ichihashi, Harmanec-Klir, Jousselme et al., and Pouly et al. We state a list of five desired properties of entropy for D-S belief functions theory that are motivated by Shannon's definition of entropy for probability functions together with the requirement that any definition should be consistent with the semantics of D-S belief functions theory.
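To fix the objects involved: a BPA assigns mass to subsets (focal elements) of a frame of discernment, and induces the standard belief and plausibility functions. The sketch below shows these standard D-S constructions (not the paper's entropy definition); the frame and masses are illustrative:

```python
from itertools import chain, combinations

def belief_plausibility(m, frame):
    """Belief and plausibility of every subset of the frame, induced by a
    BPA m, given as a dict from frozenset focal elements to masses."""
    def subsets(s):
        return (frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1)))
    bel, pl = {}, {}
    for A in subsets(frame):
        bel[A] = sum(v for B, v in m.items() if B <= A)   # mass of B subset of A
        pl[A] = sum(v for B, v in m.items() if B & A)     # mass of B meeting A
    return bel, pl

frame = frozenset({'a', 'b', 'c'})
m = {frozenset({'a'}): 0.5,           # illustrative BPA, masses sum to 1
     frozenset({'a', 'b'}): 0.3,
     frame: 0.2}
bel, pl = belief_plausibility(m, frame)
```

Any candidate entropy of the BPA is then a functional of m (or of bel/pl); the paper's contribution is choosing that functional so that five Shannon-motivated properties hold.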
[3487] vixra:1701.0042 [pdf]
Axiomatization of Unification Theories: the Fundamental Role of the Partition Function of Non-Trivial Zeros (Imaginary Parts) of Riemann's Zeta Function. Two Fundamental Equations that Unify Gravitation with Quantum Mechanics
1) Using the partition function for a system in thermodynamic equilibrium, and replacing the energy-beta factor (beta = 1/[Boltzmann constant x temperature]) by the imaginary parts of the non-trivial zeros of Riemann's zeta function, a function is obtained that equates the value of the elementary electric charge to the square root of the product of the Planck mass, the electron mass, and the constant of universal gravitation. 2) Using this same partition function (thermodynamic equilibrium), the Planck constant (Planck mass squared, multiplied by the constant of universal gravitation) is calculated with complete accuracy, as a direct function of the square of the quantized elementary electric charge. These two fundamental equations imply the existence of a repulsive acceleration of the quantum vacuum. As a direct consequence of this repulsive acceleration, the repulsive energy of the quantum vacuum is derived directly from the general relativity equation for the critical density. As a further consequence, we establish provisional equations that allow us to calculate, to a good approximation, the speed of rotation within galaxies, as well as the diameters of galaxies and clusters of galaxies. To obtain these results, several initial hypotheses are established. These hypotheses, one could say, become physical-mathematical theorems when they are demonstrated by empirical data. Among other quantities, the baryon density as well as the mass density are calculated accurately. The hypothesis-axioms are demonstrated by their practical application to the empirical calculation of the baryon density, the matter-antimatter asymmetry factor, the Higgs vacuum value, the Higgs boson mass (mh1), and a mass prediction for the stop quark of about 745-750 GeV. This boson would not have been discovered because its decay would be hidden by the almost equal masses of the particles involved in the decay.
We think that this type of hidden decay is a general feature of supersymmetry. The physico-mathematical concept of quantum entropy (information entropy) acquires fundamental relevance in the axiomatization of unification theories. Another fundamental consequence is that time would be an emergent dimension in the part of the universe called real (finite limiting velocities). In the virtual, non-observable part of the universe, time would be cancelled, acquiring the value t = 0. This property would explain the instantaneity of the change of correlated observables of entangled particles, and the instantaneous collapse of the wave function once it is disturbed (measured, or observed with energy transmission to the observed or measured system). To obtain a zero time, special relativity must be extended to hyperbolic geometries (virtual quantum wormholes). This natural generalization implies the existence of infinite speeds, under the strict condition of zero energy and zero (cancelled) time. This has a direct relationship with the soft photons and soft gravitons of zero energy from the radiation of a black hole, which would solve the problem of the information loss of black holes. The main equation of the unification of electromagnetism and gravitation seems to necessarily imply the existence of wormholes, as geometrical manifestations of the hyperboloids of one and two sheets. In the concluding chapter we discuss this point and other highly relevant ones. The relativistic invariance of the quantized elementary electric charge is automatically derived.
[3488] vixra:1701.0038 [pdf]
A New Similarity Measure on Npn-Soft Set Theory and Its Application
In this paper, we give a new similarity measure on npn-soft set theory which is an extension of the correlation measure of neutrosophic refined sets. Using this similarity measure, we propose a new method for decision making problems. Finally, we give an example of diagnosis of diseases, which could be improved by incorporating clinical results and other competing diagnoses in the npn-soft environment.
[3489] vixra:1701.0021 [pdf]
Application of DSmT-Icm with Adaptive Decision Rule to Supervised Classification in Multisource Remote Sensing
In this paper, we introduce a new procedure called DSmT-ICM with adaptive decision rule, which is an alternative to and extension of Multisource Classification Using ICM (Iterated Conditional Mode) and Dempster-Shafer theory (DST).
[3490] vixra:1701.0016 [pdf]
A Discourse on the Electron and Other Particle's Internals, from the Perspective of the Extended Rishon Model and the Field of Optics
This document is in effect a journal of the past thirty years of exploring particle physics, with a special focus on the electron. With the exception of this abstract, a first person dialog has been unusually chosen after discovering that it can be more effective in communicating certain logical reasoning chains of thought. The story begins in 1986 with the rediscovery of the Rishon Model, later expanded in 2012, followed by an exploration of possible meaning as to why the four Rishons would exist at all, and why they would exist as triplets: what possible physical underlying mechanism would give us "Rishons"? The following hypothesis is therefore put forward: All evidence explored so far supports the hypothesis that all particles are made of phased-array photons in a tight and infinitely-cyclic recurring loop, in a self-contained non-radiating E.M. field that obeys nothing more than Maxwell's Equations (applied from first principles), with the addition that particles that are not nonradiating are going to be unstable to some degree (i.e. will undergo "decay"). Rishons themselves are not actual particles per se but simply represent the phase and braiding order of the constituent photons. A number of researchers have explored parts of this field, but have not pulled all of the pieces together.
[3491] vixra:1701.0013 [pdf]
Topology of P vs NP
This paper describes P vs NP using a topological approach. We model the computation history as a “problem forest”, and define the special problem families “Wildcard problem” and “Maximal complement Wildcard problem” to simplify the relations between inputs. A problem forest is a directed graph whose edges are transition functions and whose nodes are computational configurations with the effective range of the tape. The problem forest of a DTM consists of two tree graphs whose roots are the accepting and rejecting configurations, whose leaves are inputs, and whose trunks are computational configurations with the effective range of the tape. This tree shows the TM's interpretation of the symmetry and asymmetry of each input. From the viewpoint of the problem forest, some NTM inputs are partly merged, while all DTM inputs are totally separated. Therefore an NTM can implicitly compute some types of partial (symmetric) overlap, while a DTM has to compute them explicitly. “WILDCARD (Wildcard problem family)” and “MAXCARD (Maximal complement Wildcard problem family)” are special problem families that push the variations of NTM branches into the inputs. If a “CONCRETE (Concrete Problem)” that generates MAXCARD is in P-Complete, then MAXCARD is in PH, and its inputs have many overlaps. A DTM cannot compute these overlap conditions implicitly, and these conditions are necessary to compute a MAXCARD input, so a DTM has to compute them explicitly. These conditions are of super-polynomial size, and a DTM takes super-polynomial steps to compute them explicitly. That is, PH is not P, and NP is not P.
[3492] vixra:1701.0006 [pdf]
An Explanation of the de Vries Formula for the Fine Structure Constant
The de Vries formula, discovered in 2004, is undeniably accurate to current experimental and theoretical measurements (3.1e-10 to within CODATA 2014's value, currently 2.3e-10 relative uncertainty). Its Kolmogorov complexity is extremely low, and it is as elegant as Euler's identity. Having been discovered by a silicon design engineer, no explanation was offered except for the hint that it is based on the well-recognised first approximation for g/2: 1 + alpha / 2pi. Considering purely the occurrence of the fine structure constant in the electron, and in light of G. Poelz's and Dr Mills' work as well as the Ring Model of the early 1900s, this paper offers a tentative explanation for alpha as a careful, dynamically balanced inter-relationship between each radiated loop as emitted from whatever constitutes the "source" of the energy at the heart of the electron. Mills and the original Ring Model use the word "nonradiating", which is believed to be absolutely critical.
[3493] vixra:1612.0413 [pdf]
Area of Torricelli's Trumpet or Gabriel's Horn, Sum of the Reciprocals of the Primes, Factorials of Negative Integers
In our previous work [1], we defined the method for computing general limits of functions at their singular points and showed that it is useful for calculating divergent integrals, the sum of divergent series and values of functions in their singular points. In this paper, we have described that method and we will use it to calculate the area of Torricelli's trumpet or Gabriel's horn, the sum of the reciprocals of the primes and factorials of negative integers.
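For reference, the classical results the paper revisits: Gabriel's horn (the curve y = 1/x, x ≥ 1, rotated about the x-axis) has finite volume but, classically, divergent surface area,

```latex
V=\pi\int_1^\infty \frac{dx}{x^2}=\pi,
\qquad
A=2\pi\int_1^\infty\frac{1}{x}\sqrt{1+\frac{1}{x^4}}\,dx
\;\ge\; 2\pi\int_1^\infty\frac{dx}{x}=\infty .
```

It is this divergent area integral (together with the divergent series of reciprocal primes and the singularities of the factorial at negative integers) that the paper's limit method assigns values to.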
[3494] vixra:1612.0401 [pdf]
Constructing a Mathematical Framework for the Ensemble Interpretation Based on Double-Slit Experiments
The ensemble interpretation attributes the wave appearance of particles to their statistical characteristics, and has increasingly interested scientists. However, the ensemble interpretation is still not a scientific theory based on mathematics. Here, based on the double-slit experiment, a mathematical framework for the ensemble interpretation is constructed. The Schrödinger equation and the de Broglie equation are also deduced. Analysis shows that the wave appearance of particles is caused by the statistical properties of these particles; the nature of the wave function is the average least action for the particles at a position.
[3495] vixra:1612.0397 [pdf]
Underlying Symmetry Among the Quark and Lepton Mixing Angles (Nine Year Update)
In 2007 a single mathematical model encompassing both quark and lepton mixing was described. This model exploited the fact that when a $3 \times 3$ rotation matrix whose elements are squared is subtracted from its transpose, a matrix is produced whose non-diagonal elements have a common absolute value, where this value is an intrinsic property of the rotation matrix. For the traditional CKM quark mixing matrix with its second and third rows interchanged (i.e., c - t interchange) this value equals one-third the corresponding value for the leptonic matrix (roughly, 0.05 versus 0.15). This model is distinguished by three such constraints on mixing. As nine years have elapsed since its introduction, it is timely to assess the accuracy of the model's six mixing angles. In 2012 a large experimental conflict with leptonic angle $\theta_{13}$ required toggling the sign of one of the model's integer exponents; this change did not significantly impair the model's economy, where it is just this economy that makes the model notable. There followed a nearly fourfold improvement in the accuracy of the measurement of leptonic $\theta_{13}$. Despite this much-improved measurement, and despite much-improved measurements for three other mixing angles since the model's introduction in 2007, no other conflicts have emerged. The model's mixing angles in degrees are 45, 33.210911, 8.034394 (originally 0.013665) for leptons; and 12.920966, 2.367442, 0.190986 for quarks.
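The transpose-subtraction property underlying the model is easy to verify numerically: square a rotation matrix elementwise, subtract the transpose, and the three off-diagonal magnitudes coincide (this follows because the elementwise-squared matrix is doubly stochastic, so each row of the difference sums to zero). A minimal check, with arbitrary angles rather than the model's:

```python
import numpy as np

def asymmetry_value(R):
    """For a rotation matrix R: square elementwise, subtract the transpose,
    and return the common absolute value of the off-diagonal elements."""
    S = R ** 2                       # doubly stochastic for orthogonal R
    D = S - S.T                      # antisymmetric, zero diagonal
    off = [abs(D[0, 1]), abs(D[0, 2]), abs(D[1, 2])]
    assert np.allclose(off, off[0])  # common absolute value
    return off[0]

def rot(ax, t):
    c, s = np.cos(t), np.sin(t)
    m = {'x': [[1, 0, 0], [0, c, -s], [0, s, c]],
         'y': [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         'z': [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[ax]
    return np.array(m)

R = rot('z', 0.3) @ rot('y', 0.2) @ rot('x', 0.1)  # arbitrary test angles
val = asymmetry_value(R)             # the matrix's intrinsic value
```

In the model, this intrinsic value computed for the (row-interchanged) CKM matrix is constrained to be one-third of the value for the leptonic matrix.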
[3496] vixra:1612.0386 [pdf]
On the Conformal Unity Between Quantum Particles and General Relativity
I consider the standard model, together with a preon version of it, to search for unifying principles between quantum particles and general relativity. Argument is given for unified field theory being based on gravitational and electromagnetic interactions alone. Conformal symmetry is introduced in the action of gravity with the Weyl tensor. Electromagnetism is geometrized to conform with gravity. Conformal symmetry is seen to improve quantization in loop quantum gravity. The Einstein-Cartan theory with torsion is analyzed suggesting structure in spacetime below the Cartan scale. A toy model for black hole constituents is proposed. Higgs metastability hints at cyclic conformal cosmology.
[3497] vixra:1612.0358 [pdf]
Deriving the Maximum Velocity of Matter from the Planck Length Limit on Length Contraction
Here we will assume that there is a Planck length limit on the maximum length contraction that is related to the reduced Compton wavelength. Our focus will be on the maximum velocity of subatomic particles, which “have” what is known as a reduced Compton wavelength. We assume that the reduced Compton wavelength of a moving particle as measured from the laboratory frame (“rest” frame) cannot be shorter than the Planck length as measured with Einstein-Poincare synchronized clocks.
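One compact way to state this assumption and its immediate consequence (our formalization, not a quotation from the paper), with reduced Compton wavelength \(\bar\lambda = \hbar/(mc)\) and Planck length \(l_p\):

```latex
\bar\lambda\,\sqrt{1-\frac{v^2}{c^2}} \;\ge\; l_p
\quad\Longrightarrow\quad
v_{\max} \;=\; c\,\sqrt{1-\frac{l_p^2}{\bar\lambda^2}} .
```

Since \(\bar\lambda \gg l_p\) for all known particles, this maximum velocity lies extremely close to, but strictly below, c.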
[3498] vixra:1612.0349 [pdf]
Physical Properties of Stars and Stellar Dynamics
The present study is an investigation of stellar physics based on observables such as mass, luminosity, radius, and photosphere temperature. We collected a dataset of these characteristics for 360 stars, and diagramed the relationships between their characteristics and their type (white dwarf, red dwarf, main sequence star, giant, supergiant, hypergiant, Wolf-Rayet, carbon star, etc.). For stars dominated by radiation pressure in the photosphere which follow the Eddington luminosity, we computed the opacity and cross section to photon flux per hydrogen nuclei in the photosphere. We considered the Sun as an example of star dominated by gas pressure in the photosphere, and estimated the density of the solar photosphere using limb darkening and assuming the adiabatic gradient of a monoatomic gas. We then estimated the cross section per hydrogen nuclei in the plasma of the solar photosphere, which we found to be about 2.66×10^-28 m^2, whereas the cross section of neutral hydrogen as given by the Bohr model is 8.82×10^-21 m^2. This result suggests that the electrons and protons in the plasma are virtually detached. Hence, a hydrogen plasma may be represented as a gas mixture of electrons and protons. If the stellar photosphere was made of large hydrogen atoms or ions such as the ones we find in gases, its surface would evaporate due to the high temperatures.
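The Eddington-luminosity balance the abstract invokes can be written in a standard form (our formalization, assuming the radiative force on one hydrogen nucleus of effective cross section \(\sigma\) balances gravity):

```latex
\frac{\sigma L}{4\pi r^2 c} \;=\; \frac{G M m_H}{r^2}
\quad\Longrightarrow\quad
\sigma \;=\; \frac{4\pi G M m_H c}{L},
```

so a measured mass and luminosity directly yield the cross section per hydrogen nucleus quoted in the abstract.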
[3499] vixra:1612.0347 [pdf]
A Viewpoint on the Momentum of Photons Propagating in a Medium
A suggestion is proposed to solve the dispute about light momentum in transparent materials: when photons show wave features, the momentum of light conforms to Minkowski's viewpoint; when photons show particle features, the momentum of light accords with Abraham's thought.
[3500] vixra:1612.0341 [pdf]
Gravitational Clock: Near Space Proof-of-Concept Prior to Deep Space Measurement of G-Part I
Motivated by the benefits of improving our knowledge of Newton's constant G, Feldman et al have recently proposed a new measurement involving a gravitational clock launched into deep space. The clock's mechanism is supposed to be the linear oscillation of a test mass falling back and forth along the length of a hole through the center of a spherical source mass. Similar devices — ones that would have remained in orbit around Earth — were proposed about 50 years ago for the same purpose. None of these proposals were ever carried out. Further back, in 1632 Galileo proposed the thought experiment of a cannonball falling into a hole through the center of Earth. Curiously, no one has yet observed the gravity-induced radial motion of a test object through the center of a massive body. Also known as a gravity train, not one has yet reached its antipodal destination. From this kind of gravitational clock, humans have not yet recorded a single tick. The well known reliability of Newton's and Einstein's theories of gravity may give confidence that the device will work as planned. Nevertheless, it is argued here that a less expensive apparatus — a Small Low-Energy Non-Collider — ought to be built first, simply to prove that the operating principle is sound. Certain peculiar facts about Schwarzschild's interior solution are discussed here; and a novel way of interpreting gravitational effects will be presented in Part II, together adding support for the cautious advice to more thoroughly look before we leap to the outskirts of the Solar System.
[3501] vixra:1612.0305 [pdf]
Kronecker Commutation Matrices and Particle Physics
In this paper, formulas giving Kronecker commutation matrices (KCMs) in terms of some matrices of particle physics, and formulas giving electric charge operators (ECOs) for fundamental fermions in terms of KCMs, are reviewed. Physical meanings are given to the eigenvalues and eigenvectors of a KCM.
[3502] vixra:1612.0278 [pdf]
A Complete Proof of Beal Conjecture Followed by Numerical Examples
In 1997, Andrew Beal announced the following conjecture: \textit{Let $A, B, C, m, n$, and $l$ be positive integers with $m,n,l > 2$. If $A^m + B^n = C^l$ then $A, B,$ and $C$ have a common factor.} We begin by constructing the polynomial $P(x)=(x-A^m)(x-B^n)(x+C^l)=x^3-px+q$ with $p,q$ integers depending on $A^m,B^n$ and $C^l$. We solve $x^3-px+q=0$ and obtain the three roots $x_1,x_2,x_3$ as functions of $p,q$ and a parameter $\theta$. Since $A^m,B^n,-C^l$ are the only roots of $x^3-px+q=0$, we discuss the conditions under which $x_1,x_2,x_3$ are integers and have or do not have a common factor. Three numerical examples are given.
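The polynomial construction is easy to check on a concrete Beal identity; the reason $P(x)$ has no $x^2$ term is exactly the hypothesis $A^m + B^n = C^l$, since that coefficient is $-(A^m + B^n - C^l)$. A small check (the identity $3^3 + 6^3 = 3^5$ is our illustrative example, not necessarily one of the paper's three):

```python
from math import gcd

# Example identity with a common factor: 3^3 + 6^3 = 3^5 (27 + 216 = 243).
A, m, B, n, C, l = 3, 3, 6, 3, 3, 5
a, b, c = A**m, B**n, C**l
assert a + b == c

# Expand P(x) = (x - a)(x - b)(x + c) = x^3 + e2*x^2 + e1*x + e0.
e2 = -(a + b - c)        # x^2 coefficient: vanishes because A^m + B^n = C^l
e1 = a*b - a*c - b*c     # x coefficient, so P(x) = x^3 - p*x + q with p = -e1
e0 = a*b*c               # constant term q

def P(x):
    return (x - a) * (x - b) * (x + c)

assert e2 == 0                     # no x^2 term, as claimed
assert P(a) == P(b) == P(-c) == 0  # a, b, -c are the three roots
assert gcd(gcd(A, B), C) > 1       # the common factor the conjecture asserts
```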
[3503] vixra:1612.0266 [pdf]
Quaternionic Formulation in Symmetry Breaking Mechanism
In this formalism the covariant derivative contains the four potentials associated with four charges and thus leads to different gauge strengths for particles carrying electric, magnetic, gravitational and Heavisidian charges. A quaternionic representation of spontaneous symmetry breaking and the Higgs mechanism is given, and the equations of motion are derived for free particles (i.e. electric, magnetic, gravitational and Heavisidian charges). Local gauge invariance is used to explain the Yang-Mills field equation and the spontaneous symmetry breaking mechanism. A quaternionic gauge theory of quantum electrodynamics has also been developed in the presence of electric, magnetic, gravitational and Heavisidian charges.
[3504] vixra:1612.0249 [pdf]
Reductio ad Absurdum. Modern Physics' Incomplete Absurd Relativistic Mass Interpretation
This note discusses an absurdity that is rooted in the modern physics interpretation of Einstein's relativistic mass formula when v is very close to c. Modern physics (and Einstein himself) claimed that the speed of a mass can never reach the speed of light. Yet at the same time they claim that it can approach the speed of light without any upper limit on how close it could get to that special speed. As we will see, this leads to some absurd predictions. If we assert that a material system cannot reach the speed of light, an important question is then, ``How close can it get to the speed of light?" Is there a clear-cut boundary on the exact speed limit for an electron, as an example? Or must we settle for a mere approximation?
[3505] vixra:1612.0246 [pdf]
Silver Nanoparticles as Antibiotics: Bactericidal Effect, Medical Applications and Environmental Risk
Silver nanoparticles (Ag-NPs) are among the nanomaterials with the most medical applications, mainly due to their antimicrobial effect, plasmon resonance and capacity to impregnate polymeric materials. Recently Ag-NPs have been used in water treatment systems, central venous catheters and burn dressings, as well as in biosensors for detecting levels of the p53 protein associated with cancer development. Moreover, Ag-NPs have been studied for being potentially dangerous to humans and the environment. Ag-NPs are transformed under ecosystem conditions and may even increase their aggressiveness. The aim of this paper is to review the current state of knowledge about the bactericidal effect of Ag-NPs, the main synthesis methods, their applications based on antibiotic capacity, their environmental transformations and their impact on humans.
[3506] vixra:1612.0241 [pdf]
Ds-Bidens: a Novel Computer Program for Studying Bacterial Colony Features
Optical forward-scattering systems supported by image analysis methods are increasingly being used for rapid identification of bacterial colonies (Vibrio parahaemolyticus, Vibrio vulnificus, Vibrio cholerae, etc.). The conventional detection and identification of bacterial colonies comprises a variety of methodologies based on biochemical, serological or DNA/RNA characterization. Such methods involve laborious and time-consuming procedures in order to achieve confirmatory results. In this article we present ds-Bidens, a novel computer program for studying bacterial colony features. ds-Bidens was programmed using the C++, Perl and wxBasic programming languages. A graphical user interface (GUI), an image processing tool and functions to compute bacterial colony features were programmed. The result is versatile software that provides key tools for studying bacterial colony images, such as texture analysis, invariant moment and color (CIELab) calculation, simplifying operations previously carried out by MATLAB applications. The new software can be of particular interest in microbiology, both for identifying bacterial colonies and for studying their growth and changes in color and textural features. Additionally, ds-Bidens offers users a versatile environment to study bacterial colony images. ds-Bidens is freely available from: http://ds-bidens.sourceforge.net/
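The invariant moments mentioned among the program's features are, in the standard formulation, Hu's moment invariants built from scale-normalized central moments; the sketch below shows the first of them and its translation invariance (a generic textbook construction, not ds-Bidens source code):

```python
import numpy as np

def hu1(img):
    """Hu's first moment invariant of a grayscale image: eta20 + eta02,
    built from translation-invariant, scale-normalized central moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
    mu = lambda p, q: (((x - cx) ** p) * ((y - cy) ** q) * img).sum()
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2)  # scale-normalized
    return eta(2, 0) + eta(0, 2)

# The same shape at two positions gives the same invariant.
img = np.zeros((32, 32)); img[4:12, 4:12] = 1.0
shifted = np.zeros((32, 32)); shifted[15:23, 18:26] = 1.0
h_a, h_b = hu1(img), hu1(shifted)
```

The remaining six Hu invariants are further polynomial combinations of the eta(p, q) and add rotation invariance, which is what makes them useful as colony shape descriptors.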
[3507] vixra:1612.0229 [pdf]
Conical Capacitor as Gravity Propulsion Device
A gravity propulsion method using an asymmetric conical capacitor charged to high voltage is proposed. A linear approximation of the general relativity equations is used to derive the gravitational potential of a charged conical capacitor, and it is shown that the negative gravity capability of the conical capacitor depends only on the ratio of electric energy density to capacitor mass density, where the electric energy density depends on the applied voltage and the geometric parameters of the conical capacitor.
[3508] vixra:1612.0221 [pdf]
Conic and Cyclidic Sections in Double Conformal Geometric Algebra G_{8,2}
The G_{8,2} Geometric Algebra, also called the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA), has entities that represent conic sections. DCGA also has entities that represent planar sections of Darboux cyclides, which are called cyclidic sections in this paper. This paper presents these entities and many operations on them. Operations include projection, rejection, and intersection with respect to spheres and planes. Other operations include rotation, translation, and dilation. Possible applications are introduced that include orthographic and perspective projections of conic sections onto view planes, which may be of interest in computer graphics or other computational geometry subjects.
[3509] vixra:1612.0201 [pdf]
Proof of Riemann's Hypothesis
Riemann's hypothesis (1859) is the conjecture stating that the real part of every non-trivial zero of Riemann's zeta function is 1/2. The main contribution of this paper is to achieve the proof of Riemann's hypothesis. The key idea is to provide a Hamiltonian operator whose real eigenvalues correspond to the imaginary parts of the non-trivial zeros of Riemann's zeta function and whose existence, according to Hilbert and Pólya, proves Riemann's hypothesis.
[3510] vixra:1612.0187 [pdf]
Computational Techniques for Modeling Non-Newtonian Flow in Porous Media
Modeling the flow of non-Newtonian fluids in porous media is a challenging subject. Several approaches have been proposed to tackle this problem. These include continuum models, numerical methods, and pore-scale network modeling. The latter proved to be more successful and realistic than the rest. The reason is that it captures the essential features of the flow and porous media using modest computational resources and viable modeling strategies. In this article we present pore-scale network modeling techniques for simulating non-Newtonian flow in porous media. These techniques are partially validated by theoretical analysis and comparison to experimental data.
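The core of a pore-scale network model is a mass-conservation pressure solve on a graph of pores connected by throats of known conductance. The sketch below shows the simplified Newtonian case on a tiny illustrative network (node indices, conductances and pressures are made up); for non-Newtonian fluids the conductances depend on the local flow and the same solve is iterated:

```python
import numpy as np

# Minimal pore-network pressure solve: nodes are pores, edges are throats
# with hydraulic conductance g (for a Newtonian fluid, g = pi R^4 / (8 mu L)).
nodes = 4                      # node 0: inlet, node 3: outlet, 1-2 interior
edges = [(0, 1, 2.0), (0, 2, 1.0), (1, 2, 0.5), (1, 3, 1.0), (2, 3, 2.0)]
p_in, p_out = 1.0, 0.0

# Assemble the graph Laplacian weighted by conductances.
G = np.zeros((nodes, nodes))
for i, j, g in edges:
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

# Impose boundary pressures and solve mass conservation at interior nodes.
interior = [1, 2]
A = G[np.ix_(interior, interior)]
rhs = -G[np.ix_(interior, [0, 3])] @ np.array([p_in, p_out])
p = np.linalg.solve(A, rhs)

# Check: total flow entering at the inlet equals flow leaving at the outlet.
full_p = np.array([p_in, p[0], p[1], p_out])
q_in = sum(g * (full_p[i] - full_p[j]) for i, j, g in edges if i == 0)
q_out = sum(g * (full_p[i] - full_p[j]) for i, j, g in edges if j == 3)
```

Realistic networks have thousands of pores and use sparse solvers, but the structure — conductance-weighted Laplacian, Dirichlet boundary pressures, interior mass balance — is the same.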
[3511] vixra:1612.0186 [pdf]
Flow of Non-Newtonian Fluids in Porous Media
The study of flow of non-Newtonian fluids in porous media is very important and serves a wide variety of practical applications in processes such as enhanced oil recovery from underground reservoirs, filtration of polymer solutions and soil remediation through the removal of liquid pollutants. These fluids occur in diverse natural and synthetic forms and can be regarded as the rule rather than the exception. They show very complex strain and time dependent behavior and may have initial yield-stress. Their common feature is that they do not obey the simple Newtonian relation of proportionality between stress and rate of deformation. Non-Newtonian fluids are generally classified into three main categories: time-independent whose strain rate solely depends on the instantaneous stress, time-dependent whose strain rate is a function of both magnitude and duration of the applied stress and viscoelastic which shows partial elastic recovery on removal of the deforming stress and usually demonstrates both time and strain dependency. In this article, the key aspects of these fluids are reviewed with particular emphasis on single-phase flow through porous media. The four main approaches for describing the flow in porous media are examined and assessed. These are: continuum models, bundle of tubes models, numerical methods and pore-scale network modeling.
[3512] vixra:1612.0185 [pdf]
Computational Techniques for Efficient Conversion of Image Files from Area Detectors
Area detectors are used in many scientific and technological applications such as particle and radiation physics. Thanks to the recent technological developments, the radiation sources are becoming increasingly brighter and the detectors become faster and more efficient. The result is a sharp increase in the size of data collected in a typical experiment. This situation imposes a bottleneck on data processing capabilities, and could pose a real challenge to scientific research in certain areas. This article proposes a number of simple techniques to facilitate rapid and efficient extraction of data obtained from these detectors. These techniques are successfully implemented and tested in a computer program to deal with the extraction of X-ray diffraction patterns from EDF image files obtained from CCD detectors.
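The EDF files mentioned here follow a simple convention: an ASCII header of "Key = Value ;" lines enclosed in braces, followed by raw binary pixel data. The sketch below parses a minimal synthetic frame; it assumes the common header keys (Dim_1, Dim_2, DataType) and ignores real-world details such as header padding and byte order, so it is an illustration of the layout rather than a complete reader:

```python
import numpy as np

def parse_edf(buf):
    """Parse a minimal EDF-style buffer: '{ Key = Value ; ... }' header
    followed immediately by raw binary image data."""
    end = buf.index(b'}') + 1
    if buf[end:end + 1] == b'\n':
        end += 1                              # data starts after '}\n'
    header = {}
    for line in buf[:end].decode('ascii', 'ignore').splitlines():
        if '=' in line:
            key, _, val = line.partition('=')
            header[key.strip()] = val.strip().rstrip(';').strip()
    shape = (int(header['Dim_2']), int(header['Dim_1']))  # rows, cols
    dtype = {'UnsignedShort': np.uint16,
             'FloatValue': np.float32}[header['DataType']]
    count = shape[0] * shape[1]
    return np.frombuffer(buf[end:], dtype=dtype, count=count).reshape(shape)

# Synthetic 4x2 frame for demonstration.
hdr = b'{\nDim_1 = 4 ;\nDim_2 = 2 ;\nDataType = UnsignedShort ;\n}\n'
pixels = np.arange(8, dtype=np.uint16)
data = parse_edf(hdr + pixels.tobytes())
```

Batch conversion then amounts to applying such a reader to every frame and writing out the extracted diffraction patterns, which is the bottleneck the article's techniques address.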
[3513] vixra:1612.0184 [pdf]
The Flow of Power-Law Fluids in Axisymmetric Corrugated Tubes
In this article we present an analytical method for deriving the relationship between the pressure drop and flow rate in laminar flow regimes, and apply it to the flow of power-law fluids through axially-symmetric corrugated tubes. The method, which is general with regards to fluid and tube shape within certain restrictions, can also be used as a foundation for numerical integration where analytical expressions are hard to obtain due to mathematical or practical complexities. Five converging-diverging geometries are used as examples to illustrate the application of this method.
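For context, the standard laminar relation for a power-law fluid (consistency k, index n) in a straight tube of radius R and length L is

```latex
Q=\frac{\pi n}{3n+1}\left(\frac{\Delta p}{2kL}\right)^{1/n}R^{(3n+1)/n},
```

and a lubrication-type treatment of a slowly varying radius R(x) generalizes this to

```latex
\Delta p \;=\; 2k\left(\frac{(3n+1)\,Q}{\pi n}\right)^{n}\int_0^L\frac{dx}{R(x)^{3n+1}},
```

which reduces to the Poiseuille law for n = 1, k = μ. These are sketches of the standard results the method builds on, not the paper's derivation for the five specific geometries.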
[3514] vixra:1612.0183 [pdf]
Slip at Fluid-Solid Interface
The `no-slip' is a fundamental assumption and generally-accepted boundary condition in rheology, tribology and fluid mechanics with strong experimental support. The violations of this condition, however, are widely recognized in many situations, especially in the flow of non-Newtonian fluids. Wall slip could lead to large errors and flow instabilities, such as sharkskin formation and spurt flow, and hence complicates the analysis of fluid systems and introduces serious practical difficulties. In this article, we discuss slip at fluid-solid interface in an attempt to highlight the main issues related to this diverse complex phenomenon and its implications.
[3515] vixra:1612.0182 [pdf]
Newtonian Flow in Converging-Diverging Capillaries
The one-dimensional Navier-Stokes equations are used to derive analytical expressions for the relation between pressure and volumetric flow rate in capillaries of five different converging-diverging axisymmetric geometries for Newtonian fluids. The results are compared to previously-derived expressions for the same geometries using the lubrication approximation. The results of the one-dimensional Navier-Stokes are identical to those obtained from the lubrication approximation within a non-dimensional numerical factor. The derived flow expressions have also been validated by comparison to numerical solutions obtained from discretization with numerical integration. Moreover, they have been certified by testing the convergence of solutions as the converging-diverging geometries approach the limiting straight geometry.
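The lubrication-approximation side of this comparison is straightforward to reproduce numerically: integrate dp/dx = -8μQ/(πR(x)^4) along the tube axis. The sketch below does this for an illustrative parabolic converging-diverging profile (our choice of profile and parameter values, not necessarily one of the paper's five geometries) and checks the straight-tube limit against Poiseuille's law:

```python
import numpy as np

# Lubrication approximation for Newtonian flow in an axisymmetric tube of
# varying radius R(x): pressure drop = (8 mu Q / pi) * integral of R^-4 dx.
mu, Q, L = 1.0e-3, 1.0e-9, 0.01           # illustrative SI values

def pressure_drop(R, n=10001):
    x = np.linspace(0.0, L, n)
    f = R(x) ** -4.0
    dx = x[1] - x[0]
    integral = dx * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)  # trapezoid rule
    return 8 * mu * Q / np.pi * integral

R0, Rm = 1.0e-3, 0.5e-3                   # end and mid radii
conv_div = lambda x: Rm + (R0 - Rm) * (2 * x / L - 1) ** 2  # parabolic throat
straight = lambda x: np.full_like(x, R0)

dp_cd = pressure_drop(conv_div)           # converging-diverging tube
dp_st = pressure_drop(straight)           # straight tube: Poiseuille limit
```

As expected, the constriction raises the pressure drop well above the straight-tube value, and the straight case recovers Δp = 8μLQ/(πR0⁴) exactly.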
[3516] vixra:1612.0181 [pdf]
New Program with New Approach for Spectral Data Analysis
This article presents a high-throughput computer program, called EasyDD, for batch processing, analyzing and visualizing of spectral data; particularly those related to the new generation of synchrotron detectors and X-ray powder diffraction applications. This computing tool is designed for the treatment of large volumes of data in reasonable time with affordable computational resources. A case study in which this program was used to process and analyze powder diffraction data obtained from the ESRF synchrotron on an alumina-based nickel nanoparticle catalysis system is also presented for demonstration. The development of this computing tool, with the associated protocols, is inspired by a novel approach in spectral data analysis.
[3517] vixra:1612.0180 [pdf]
Using Euler-Lagrange Variational Principle to Obtain Flow Relations for Generalized Newtonian Fluids
Euler-Lagrange variational principle is used to obtain analytical and numerical flow relations in cylindrical tubes. The method is based on minimizing the total stress in the flow duct using the fluid constitutive relation between stress and rate of strain. Newtonian and non-Newtonian fluid models; which include power law, Bingham, Herschel-Bulkley, Carreau and Cross; are used for demonstration.
[3518] vixra:1612.0179 [pdf]
Testing the Connectivity of Networks
In this article we discuss general strategies and computer algorithms to test the connectivity of unstructured networks which consist of a number of segments connected through randomly distributed nodes.
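The basic connectivity test on such segment-node networks is a union-find (disjoint-set) pass over the segments; a minimal sketch (one standard strategy, with illustrative data, not necessarily the article's algorithm):

```python
# Union-find test of whether a network of segments joined at randomly
# labelled nodes forms a single connected component.
def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving keeps trees shallow
        a = parent[a]
    return a

def is_connected(segments):
    """segments: iterable of (node, node) pairs."""
    nodes = {n for seg in segments for n in seg}
    parent = {n: n for n in nodes}
    for a, b in segments:
        parent[find(parent, a)] = find(parent, b)   # union the two ends
    roots = {find(parent, n) for n in nodes}
    return len(roots) <= 1

net = [(0, 1), (1, 2), (2, 3), (3, 0), (2, 4)]
assert is_connected(net)
assert not is_connected(net + [(7, 8)])   # isolated segment breaks connectivity
```

Breadth-first search from any node gives the same answer; union-find is convenient when segments arrive incrementally or when one wants all components, not just a yes/no answer.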
[3519] vixra:1612.0178 [pdf]
Navier–Stokes Flow in Converging–diverging Distensible Tubes
We use a method based on the lubrication approximation in conjunction with a residual-based mass-continuity iterative solution scheme to compute the flow rate and pressure field in distensible converging–diverging tubes for Navier–Stokes fluids. We employ an analytical formula derived from a one-dimensional version of the Navier–Stokes equations to describe the underlying flow model that provides the residual function. This formula correlates the flow rate to the boundary pressures in straight cylindrical elastic tubes of constant radius. We validate our findings by the convergence toward a final solution with fine discretization as well as by comparison to the Poiseuille-type flow in its convergence toward analytic solutions found earlier in rigid converging–diverging tubes. We also tested the method on limiting special cases of cylindrical elastic tubes of constant radius where the numerical solutions converged to the expected analytical solutions. The distensible model has also been endorsed by its convergence toward the rigid Poiseuille-type model as the tube wall stiffness increases. A lubrication-based one-dimensional finite element method was also used for verification. In this investigation five converging–diverging geometries are used for demonstration, validation and as prototypes for modeling converging–diverging geometries in general.
[3520] vixra:1612.0163 [pdf]
Accounting for the Use of Different Length Scale Factors in x, y and z Directions
This short article presents a mathematical formula required for metric corrections in image extraction and processing when using different length scale factors in three-dimensional space which is normally encountered in cryomicrotome image construction techniques.
[3521] vixra:1612.0161 [pdf]
Navier-Stokes Flow in Cylindrical Elastic Tubes
Analytical expressions correlating the volumetric flow rate to the inlet and outlet pressures are derived for the time-independent flow of Newtonian fluids in cylindrically-shaped elastic tubes using a one-dimensional Navier-Stokes flow model with two pressure-area constitutive relations. These expressions for elastic tubes are the equivalent of Poiseuille and Poiseuille-type expressions for rigid tubes which were previously derived for the flow of Newtonian and non-Newtonian fluids under various flow conditions. Formulae and procedures for identifying the pressure field and tube geometric profile are also presented. The results are validated by a finite element method implementation. Sensible trends in the analytical and numerical results are observed and documented.
[3522] vixra:1612.0160 [pdf]
Variational Approach for Resolving the Flow of Generalized Newtonian Fluids in Circular Pipes and Plane Slits
In this paper, we use a generic and general variational method to obtain solutions to the flow of generalized Newtonian fluids through circular pipes and plane slits. The new method is not based on the use of the Euler-Lagrange variational principle and hence it is totally independent of our previous approach which is based on this principle. Instead, the method applies a very generic and general optimization approach which can be justified by the Dirichlet principle although this is not the only possible theoretical justification. The results that were obtained from the new method using nine types of fluid are in total agreement, within certain restrictions, with the results obtained from the traditional methods of fluid mechanics as well as the results obtained from the previous variational approach. In addition to being a useful method in its own right for resolving the flow field in circular pipes and plane slits, the new variational method lends more support to the old variational method as well as to the use of variational principles in general to resolve the flow of generalized Newtonian fluids and obtain all the quantities of the flow field which include shear stress, local viscosity, rate of strain, speed profile and volumetric flow rate. The theoretical basis of the new variational method, which rests on the use of the Dirichlet principle, also provides theoretical support to the former variational method.
[3523] vixra:1612.0158 [pdf]
Methods for Calculating the Pressure Field in the Tube Flow
In this paper we outline methods for calculating the pressure field inside flow conduits in the one-dimensional flow models where the pressure is dependent on the axial coordinate only. The investigation is general with regard to the tube mechanical properties (rigid or distensible), and with regard to the cross sectional variation along the tube length (constant or variable). The investigation is also general with respect to the fluid rheology as being Newtonian or non-Newtonian.
[3524] vixra:1612.0157 [pdf]
Further Validation to the Variational Method to Obtain Flow Relations for Generalized Newtonian Fluids
We continue our investigation into the use of the variational method to derive flow relations for generalized Newtonian fluids in confined geometries. While in the previous investigations we used the straight circular tube geometry with eight fluid rheological models to demonstrate and establish the variational method, the focus here is on the plane long thin slit geometry using those eight rheological models, namely: Newtonian, power law, Ree-Eyring, Carreau, Cross, Casson, Bingham and Herschel-Bulkley. We demonstrate how the variational principle based on minimizing the total stress in the flow conduit can be used to derive analytical expressions, which were previously derived by other methods, or used in conjunction with numerical procedures to obtain numerical solutions which are virtually identical to the solutions obtained previously from well established methods of fluid dynamics. In this regard, we use the method of Weissenberg-Rabinowitsch-Mooney-Schofield (WRMS), with our adaptation from the circular pipe geometry to the long thin slit geometry, to derive analytical formulae for the eight types of fluid where these derived formulae are used for comparison and validation of the variational formulae and numerical solutions. Although some examples may be of little value, the optimization principle which the variational method is based upon has a significant theoretical value as it reveals the tendency of the flow system to assume a configuration that minimizes the total stress. Our proposal also offers a new methodology to tackle common problems in fluid dynamics and rheology.
[3525] vixra:1612.0155 [pdf]
The Flow of Newtonian Fluids in Axisymmetric Corrugated Tubes
This article deals with the flow of Newtonian fluids through axially-symmetric corrugated tubes. An analytical method to derive the relation between volumetric flow rate and pressure drop in laminar flow regimes is presented and applied to a number of simple tube geometries of converging-diverging nature. The method is general in terms of fluid and tube shape within the previous restrictions. Moreover, it can be used as a basis for numerical integration where analytical relations cannot be obtained due to mathematical difficulties.
[3526] vixra:1612.0154 [pdf]
The Flow of Newtonian and Power Law Fluids in Elastic Tubes
We derive analytical expressions for the flow of Newtonian and power law fluids in elastic circularly-symmetric tubes based on a lubrication approximation where the flow velocity profile at each cross section is assumed to have its axially-dependent characteristic shape for the given rheology and cross sectional size. Two pressure-area constitutive elastic relations for the tube elastic response are used in these derivations. We demonstrate the validity of the derived equations by observing qualitatively correct trends in general and quantitatively valid asymptotic convergence to limiting cases. The Newtonian formulae are compared to similar formulae derived previously from a one-dimensional version of the Navier-Stokes equations.
[3527] vixra:1612.0153 [pdf]
Analytical Solutions for the Flow of Carreau and Cross Fluids in Circular Pipes and Thin Slits
In this paper, analytical expressions correlating the volumetric flow rate to the pressure drop are derived for the flow of Carreau and Cross fluids through straight rigid circular uniform pipes and long thin slits. The derivation is based on the application of Weissenberg-Rabinowitsch-Mooney-Schofield method to obtain flow solutions for generalized Newtonian fluids through pipes and our adaptation of this method to the flow through slits. The derived expressions are validated by comparing their solutions to the solutions obtained from direct numerical integration. They are also validated by comparison to the solutions obtained from the variational method which we proposed previously. In all the investigated cases, the three methods agree very well. The agreement with the variational method also lends more support to this method and to the variational principle which the method is based upon.
[3528] vixra:1612.0152 [pdf]
Comparing Poiseuille with 1D Navier-Stokes Flow in Rigid and Distensible Tubes and Networks
A comparison is made between the Hagen-Poiseuille flow in rigid tubes and networks on one side and the time-independent one-dimensional Navier-Stokes flow in elastic tubes and networks on the other. Analytical relations, a Poiseuille network flow model and two finite element Navier-Stokes one-dimensional flow models have been developed and used in this investigation. The comparison highlights the differences between Poiseuille and one-dimensional Navier-Stokes flow models which may have been unjustifiably treated as equivalent in some studies.
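For reference, the rigid-tube side of this comparison is the classical Hagen-Poiseuille relation Q = pi * dP * R^4 / (8 * mu * L). A quick numerical sketch with illustrative values (not data from the paper):

```python
import math

# Hagen-Poiseuille volumetric flow rate in a straight rigid tube:
#   Q = pi * dP * R**4 / (8 * mu * L)
# The numerical values below are illustrative only.

def poiseuille_q(dp, radius, mu, length):
    """Flow rate (m^3/s) for pressure drop dp (Pa), tube radius (m),
    dynamic viscosity mu (Pa s) and tube length (m)."""
    return math.pi * dp * radius**4 / (8.0 * mu * length)

# Water-like viscosity, 1 mm radius, 10 cm length, 100 Pa drop:
q = poiseuille_q(dp=100.0, radius=1e-3, mu=1e-3, length=0.1)
print(f"{q:.3e} m^3/s")   # 3.927e-07 m^3/s
```

The strong fourth-power dependence on the radius is precisely why treating a distensible tube as if it were rigid can be a poor approximation.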
[3529] vixra:1612.0151 [pdf]
The Flow of Power Law Fluids in Elastic Networks and Porous Media
The flow of power law fluids, which include shear thinning and shear thickening as well as Newtonian as a special case, in networks of interconnected elastic tubes is investigated using a residual based pore scale network modeling method with the employment of newly derived formulae. Two relations describing the mechanical interaction between the local pressure and local cross sectional area in distensible tubes of elastic nature are considered in the derivation of these formulae. The model can be used to describe shear dependent flows of mainly viscous nature. The behavior of the proposed model is vindicated by several tests in a number of special and limiting cases where the results can be verified quantitatively or qualitatively. The model, which is the first of its kind, incorporates more than one major non-linearity corresponding to the fluid rheology and conduit mechanical properties, that is non-Newtonian effects and tube distensibility. The formulation, implementation and performance indicate that the model enjoys certain advantages over the existing models such as being exact within the restricting assumptions on which the model is based, easy implementation, low computational costs, reliability and smooth convergence. The proposed model can therefore be used as an alternative to the existing Newtonian distensible models; moreover it stretches the capabilities of the existing modeling approaches to reach non-Newtonian rheologies.
[3530] vixra:1612.0150 [pdf]
The Photon Model and Equations Are Derived Through Time-Domain Mutual Energy Current
In this article the authors build a model of the photon in the time domain. Since a photon is a very short-duration wave, it must be modeled in the time domain. In this photon model there are an emitter and an absorber. The emitter sends a retarded wave; the absorber sends an advanced wave. Between the emitter and the absorber, a mutual energy current is built through the combination of the retarded wave and the advanced wave. The mutual energy current can transfer the photon's energy from the emitter to the absorber, and hence the photon is nothing other than the mutual energy current. This energy transfer takes place in 3D space, which allows the wave to pass through any 3D structure, for example a double slit. The authors have proved that in empty space the wave can be seen approximately as a 1D wave and can transfer energy from one point to another without any wave-function collapse. That is why light can be seen as a light ray, and why a photon can pass through a double slit and produce interference. The duality of the photon can be explained using this photon model. The total energy transfer can be divided into a self-energy transfer and a mutual energy transfer. It is possible that the self-energy current transfers half the total energy, and it is also possible that the self-energy part contributes nothing to the energy transfer of the photon. In the latter case, the self-energy terms are canceled by the advanced wave of the emitter current and the retarded wave of the absorber current, or canceled by the returned waves. These returned waves still satisfy Maxwell's equations, or at least some time-reversed Maxwell equations. Furthermore, the authors find that the photon should satisfy Maxwell's equations in the microcosm. Energy can be transferred only by the mutual energy current. In this solution, the two terms in the mutual energy current directly account for the linear or circular polarization, or spin, of the photon.
The traditional concept of wave-function collapse in quantum mechanics is not needed in the authors' photon model. The authors believe that the traditional concept of wave collapse is caused by a misunderstanding of the energy current. Traditionally, there is only the energy current based on the Poynting vector, which always diverges from the source. For a diverging wave there is then a requirement for the energy to collapse to its absorber. Once one knows that the electromagnetic energy is actually transferred by the mutual energy current, which is a wave that diverges in the beginning and converges in the end, wave-function collapse is no longer needed. The concept that energy is transferred by the mutual energy current can be extended from the photon to any other particle, for example the electron. Electrons should have a similar mutual energy current carrying their energy from one place to another, with no need for the wave function to collapse.
[3531] vixra:1612.0148 [pdf]
Pore-Scale Modeling of Navier-Stokes Flow in Distensible Networks and Porous Media
In this paper, a pore-scale network modeling method, based on the flow continuity residual in conjunction with a Newton-Raphson non-linear iterative solving technique, is proposed and used to obtain the pressure and flow fields in a network of interconnected distensible ducts representing, for instance, blood vasculature or deformable porous media. A previously derived analytical expression correlating boundary pressures to volumetric flow rate in compliant tubes for a pressure-area constitutive elastic relation has been used to represent the underlying flow model. Comparison to a preceding equivalent method, the one-dimensional Navier-Stokes finite element, was made and the results were analyzed. The advantages of the new method have been highlighted and practical computational issues, related mainly to the rate and speed of convergence, have been discussed.
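The Newton-Raphson residual idea can be sketched in one dimension as follows. The toy scalar residual stands in for the paper's network-wide continuity system, and the flow law used here is invented purely for illustration:

```python
# Minimal sketch of a residual-based Newton-Raphson iteration: drive a
# flow-continuity residual f(p) to zero. The residual below is a toy
# scalar stand-in, not the paper's network system.

def newton(f, df, p0, tol=1e-12, max_iter=50):
    p = p0
    for _ in range(max_iter):
        r = f(p)
        if abs(r) < tol:
            break
        p -= r / df(p)   # Newton update: p_new = p - f(p)/f'(p)
    return p

# Toy residual: inflow q_in(p) = 2*sqrt(p) must balance an outflow of 3.0.
f  = lambda p: 2.0 * p**0.5 - 3.0
df = lambda p: p**-0.5
p_star = newton(f, df, p0=1.0)
print(round(p_star, 6))   # 2.25, since 2*sqrt(2.25) = 3
```

In the actual network problem the scalar update becomes a Jacobian solve over all nodal pressures, but the convergence behavior discussed in the abstract is governed by the same mechanism.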
[3532] vixra:1612.0147 [pdf]
The Yield Condition in the Mobilization of Yield-Stress Materials in Distensible Tubes
In this paper we investigate the yield condition in the mobilization of yield-stress materials in distensible tubes. We discuss the two possibilities for modeling the yield-stress materials prior to yield: solid-like materials and highly-viscous fluids and identify the logical consequences of these two approaches on the yield condition. As part of this investigation we derive an analytical expression for the pressure field inside a distensible tube with a Newtonian flow using a one-dimensional Navier-Stokes flow model in conjunction with a pressure-area constitutive relation based on elastic tube wall characteristics.
[3533] vixra:1612.0146 [pdf]
Yield and Solidification of Yield-Stress Materials in Rigid Networks and Porous Structures
In this paper, we address the issue of threshold yield pressure of yield-stress materials in rigid networks of interconnected conduits and porous structures subject to a pressure gradient. We compare the results as obtained dynamically from solving the pressure field to those obtained statically from tracing the path of the minimum sum of threshold yield pressures of the individual conduits by using the threshold path algorithms. We refute criticisms directed recently to our previous findings that the pressure field solution generally produces a higher threshold yield pressure than the one obtained by the threshold path algorithms. Issues related to the solidification of yield stress materials in their transition from fluid phase to solid state have also been investigated and assessed as part of the investigation of the yield point.
[3534] vixra:1612.0145 [pdf]
Non-Newtonian Rheology in Blood Circulation
Blood is a complex suspension that demonstrates several non-Newtonian rheological characteristics such as deformation-rate dependency, viscoelasticity and yield stress. In this paper we outline some issues related to the non-Newtonian effects in blood circulation system and present modeling approaches based mostly on the past work in this field.
[3535] vixra:1612.0144 [pdf]
Fluid Flow at Branching Junctions
The flow of fluids at branching junctions plays important kinematic and dynamic roles in most biological and industrial flow systems. The present paper highlights some key issues related to the flow of fluids at these junctions with special emphasis on the biological flow networks particularly blood transportation vasculature.
[3536] vixra:1612.0143 [pdf]
Flow of Non-Newtonian Fluids in Converging-Diverging Rigid Tubes
A residual-based lubrication method is used in this paper to find the flow rate and pressure field in converging-diverging rigid tubes for the flow of time-independent category of non-Newtonian fluids. Five converging-diverging prototype geometries were used in this investigation in conjunction with two fluid models: Ellis and Herschel-Bulkley. The method was validated by convergence behavior sensibility tests, convergence to analytical solutions for the straight tubes as special cases for the converging-diverging tubes, convergence to analytical solutions found earlier for the flow in converging-diverging tubes of Newtonian fluids as special cases for non-Newtonian, and convergence to analytical solutions found earlier for the flow of power-law fluids in converging-diverging tubes. A brief investigation was also conducted on a sample of diverging-converging geometries. The method can in principle be extended to the flow of viscoelastic and thixotropic/rheopectic fluid categories. The method can also be extended to geometries varying in size and shape in the flow direction, other than the perfect cylindrically-symmetric converging-diverging ones, as long as characteristic flow relations correlating the flow rate to the pressure drop on the discretized elements of the lubrication approximation can be found. These relations can be analytical, empirical and even numerical and hence the method has a wide applicability range.
[3537] vixra:1612.0124 [pdf]
Proton-Electron Geometric Model
A family of models in Euclidean space is developed from the following approximation: $m_p/m_e = 4\pi(4\pi - 1/\pi)(4\pi - 2/\pi) \approx 1836.15$ (1), where $m_p$ and $m_e$ are the numeric values for the mass of the proton and the mass of the electron, respectively. In particular, we will develop models (1) that agree with the recommended value of the mass ratio of the proton to the electron to six significant figures, (2) that explain the "shape-shifting" behavior of the proton, and (3) that are formed concisely from the sole transcendental number pi. This model is solely geometric, relying on volume as the measure of mass. We claim that inclusion of quantum/relativistic properties enhances the accuracy of the model. The goal is to express the ratio of the proton mass to the electron mass in terms of (1) pure mathematical constants and (2) a quantum corrective factor. harry.watson@att.net
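The quoted numerical value of the product is easy to check; a one-line evaluation using only the standard library:

```python
import math

# Evaluate the abstract's geometric approximation for the
# proton-to-electron mass ratio:
#   m_p / m_e ~= 4*pi * (4*pi - 1/pi) * (4*pi - 2/pi)
pi = math.pi
ratio = 4 * pi * (4 * pi - 1 / pi) * (4 * pi - 2 / pi)
print(round(ratio, 2))   # 1836.15
```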
[3538] vixra:1612.0121 [pdf]
On the Identical Simulation of the Entire Universe
Some time ago, I published an article about the deceleration of the universe. It was based primarily on uncertainty, and it explains how matter works. In the present work, building on that article, several specific subjects are analyzed (deceleration, uncertainty, possible particle formation, black holes, gravitation, energy, mass and the speed of light) as elements for identical simulation computations of the entire universe, carried out as precisely as possible. Information is given about escape from black holes, event-horizon lengths, the viscosity of free space, a re-derivation of the Planck constants, and the mathematical infrastructure of some basic laws of existence, since matter depends directly on geometric rules. Some elements are also given so that readers can determine the required constants as precisely as possible. Since the constants alone are not sufficient for engineering purposes, I finally present a working algorithm that reduces the operation count of a power series to that of quadratic equations, for example when calculating a root of an integer as an irrational number by solving an equation; it can therefore also be used to compute trigonometric values efficiently for simulations of the entire universe, alongside physical constants given as irrational values.
[3539] vixra:1612.0001 [pdf]
Generalizations of Schwarzschild and (Anti) de Sitter Metrics in Clifford Spaces
After a very brief introduction to generalized gravity in Clifford spaces ($C$-spaces), generalized metric solutions to the $C$-space gravitational field equations are found, and inspired from the (Anti) de Sitter metric solutions to Einstein's field equations with a cosmological constant in ordinary spacetimes. $C$-space analogs of static spherically symmetric metrics solutions are constructed. Concluding remarks are devoted to a thorough discussion about Areal metrics, Kawaguchi-Finsler Geometry, Strings, and plausible novel physical implications of $C$-space Relativity theory.
[3540] vixra:1611.0399 [pdf]
Pioneer Anomaly Re-visited
This mysterious effect has been given considerable thought as to its nature. Some have attributed the effect to unknown spacecraft effects, such as gas leaks or anisotropic thermal radiation. Others hold out for some fundamental physics that might alter the theory of gravitation. Recently, a complete analysis of rediscovered spacecraft data provided a credible case for spacecraft engineering being the cause. However, more fundamental physics has not been absolutely ruled out. This paper re-examines the anomaly from a fundamental perspective by applying a recently published physical theory to the Pioneer anomaly, and shows that this new theory can explain the effect.
[3541] vixra:1611.0390 [pdf]
Proof of Bunyakovsky's Conjecture
Bunyakovsky's conjecture states that under special conditions, polynomial integer functions of degree greater than one generate infinitely many primes. The main contribution of this paper is to introduce a new approach that makes it possible to prove Bunyakovsky's conjecture. The key idea of this new approach is that there exists a general method to solve this problem by using only arithmetic progressions and congruences. As consequences of the proven conjecture, three of Landau's problems are resolved: the n^2+1 problem, the twin primes conjecture and the binary Goldbach conjecture. The method is also used to prove that there are infinitely many primorial and factorial primes.
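The n^2+1 problem mentioned above asks whether infinitely many primes have that form; a brief enumeration of the first few such primes (illustration only, not part of the paper's argument):

```python
# Enumerate primes of the form n^2 + 1 for small n, the subject of
# Landau's n^2 + 1 problem.

def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

primes = [n * n + 1 for n in range(1, 20) if is_prime(n * n + 1)]
print(primes)   # [2, 5, 17, 37, 101, 197, 257]
```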
[3542] vixra:1611.0379 [pdf]
A Conformal Preon Model
I consider a preon model for quarks and leptons based on massless constituents having spin 1/2 and charge 1/3 or 0. The color and weak interaction gauge structures can be deduced from the three preon states. Argument is given for unified field theory being based on gravitational and electromagnetic interactions only. Conformal symmetry is introduced in the action of gravity with the Weyl tensor. Electromagnetism is geometrized to conform with gravity. Baryon number non-conservation mechanism is obtained.
[3543] vixra:1611.0368 [pdf]
Infinite Product Representations for Gamma Function and Binomial Coefficient
In this paper, I demonstrate one new infinite product for the binomial coefficient and new Euler and Weierstrass infinite products for the Gamma function, among other things.
[3544] vixra:1611.0362 [pdf]
Formulation of Energy Momentum Tensor for Generalized Fields of Dyons
The energy momentum tensor of generalized fields of dyons and the energy momentum conservation laws of dyons have been developed in a simple, compact and consistent manner. We have obtained the Maxwell field theory of the energy momentum tensor of dyons (electric and magnetic) of the electromagnetic field, the Poynting vector and the Poynting theorem for generalized fields of dyons in a simple, unique and consistent way.
[3545] vixra:1611.0358 [pdf]
Continuous Production of Matter Instead of Big Bang
New exact analytical solutions of Einstein and Qmoger (quantum modification of general relativity) equations are obtained in the context of an alternative to the Big Bang theory.
[3546] vixra:1611.0357 [pdf]
Integer, Fractional, and Anomalous Quantum Hall Effect Explained with Eyring's Rate Process Theory and Free Volume Concept
The Hall effect, especially the integer, fractional and anomalous quantum Hall effect, has been addressed with the Eyring's rate process theory and free volume concept. The basic assumptions are that the conduction process is a common rate controlled "reaction" process that can be described with Eyring's absolute rate process theory, and that the mobility of electrons should be dependent on the free volume available for conduction electrons. The obtained Hall conductivity is clearly quantized as e^2/h with prefactors related to both the magnetic flux quantum number and the magnetic quantum number via the azimuthal quantum number, with and without an externally applied magnetic field. This article focuses on two-dimensional (2D) systems, but the approaches developed here can be extended to 3D systems.
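The quantization unit e^2/h referred to above can be evaluated directly from the (exact, SI-2019) values of the elementary charge and the Planck constant:

```python
# Conductance quantization unit e^2/h from the SI-2019 exact constants.
e = 1.602176634e-19   # elementary charge, C (exact by definition)
h = 6.62607015e-34    # Planck constant, J s (exact by definition)
print(f"{e * e / h:.6e} S")   # 3.874046e-05 S
```

Its reciprocal, h/e^2 ≈ 25812.8 ohms, is the von Klitzing constant measured in quantum Hall experiments.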
[3547] vixra:1611.0352 [pdf]
Proof of Collatz' Conjecture
Collatz' conjecture (stated in 1937 by Collatz and also named the Thwaites conjecture, or the Syracuse, 3n+1 or oneness problem) can be described as follows: Take any positive whole number N. If N is even, divide it by 2. If it is odd, multiply it by 3 and add 1. Repeat this process on the result over and over again. Collatz' conjecture is the supposition that for any positive integer N, the sequence will invariably reach the value 1. The main contribution of this paper is to present a new approach to Collatz' conjecture. The key idea of this new approach is to clearly differentiate the role of the division by two and the role of what we will name here the jump: a = 3n + 1. With this approach, the proof of the conjecture is given as well as generalizations for jumps of the form qn + r and for jumps being polynomials of degree m > 1.
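The iteration described above can be sketched in a few lines, counting steps until the sequence reaches 1:

```python
# The Collatz iteration: halve even numbers, map odd n to 3n + 1, and
# count the steps until the sequence reaches 1.

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))    # 8 steps: 6, 3, 10, 5, 16, 8, 4, 2, 1
print(collatz_steps(27))   # 111 steps: a famously long trajectory
```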
[3548] vixra:1611.0341 [pdf]
New Exact Solutions of Einstein and Qmoger Equations as Alternative to Big Bang
New exact analytical solutions of Einstein and Qmoger (quantum modification of general relativity) equations are obtained in the context of an alternative to the Big Bang theory.
[3549] vixra:1611.0310 [pdf]
The Universe is Static
It is shown that the light curve widths of type Ia supernovae do not have time dilation and that their magnitudes are consistent with a static universe. The standard analysis for type Ia supernovae uses a set of templates to overcome the intrinsic variation of the supernova light curves with wavelength. The reference light curves derived from this set of templates contain an anomaly in that at short wavelengths the width of the light curve is proportional to the emitted wavelength. Furthermore this anomaly is exactly what would be produced if supernovae at different redshifts did not have time dilation and yet time dilation corrections were applied. It is the specific nature of this anomaly that is evidence for a static universe. The lack of time dilation is confirmed by direct analysis of the original observations. It is also found that the peak flux density of the light curves in the reference templates had a strong dependence on wavelength that could be due to the use of an incorrect distance modulus. This dependence is investigated by computing the peak absolute magnitudes of type Ia supernovae observations from the original observations using a static cosmological model. The results support the hypothesis of a static universe. It is also argued that the photometric redshift relation and spectroscopic ages are consistent with a static universe.
[3550] vixra:1611.0299 [pdf]
Universal Economic Plan Based Law Constitutions of Kingdom and Nations
In this work, some social issues are touched upon, whatever the result, and the aim is to raise awareness through some new technological upgrades for the vital infrastructures of states, the social order and economic plans. The main aim is a single world order which has no king and accepts nations as local governance, as a requirement of hierarchical order. It is based entirely on the economic benefits of all nations, since there is no alternative way to establish a healthy economic order, economic management being directly related to law. Since what matters is whether a law exists and whether it is just, the work also encourages the development of organic laws in state institutions, as it recognizes every state institution as autonomous. No state has this constitution. This work is only an offer.
[3551] vixra:1611.0296 [pdf]
Relativistic Cosmology and Einstein’s ‘Gravitational Waves’
The mathematical theory of Relativity is riddled with violations of the rules of pure mathematics, logical contradictions, and conflict with a vast array of experiments. These flaws are reviewed herein in some detail. Claims for the discovery of black holes, Einstein's gravitational waves and the afterglow of the Big Bang, are demonstrably false. There are two conditions that any physical theory must satisfy: (a) logical consistency, (b) concordance with reality as determined by experiment and observation. Einstein's General Theory of Relativity fails on both counts.
[3552] vixra:1611.0292 [pdf]
Discussing a New Way to Conciliate Large Scale and Small Scale Physics
Interactions are produced, at small scale, by Lorentz transformations around extra dimensions. As a simple example, we include simultaneously a "Kaluza-Klein fifth dimension" and minimal coupling in the Klein-Gordon equation applied to hydrogen (all equations can be written in dimensionless form). Instead of solving the last separable equation for f(R), we require one more eigenvalue equation, and require that the eccentricity of the system vanishes, to deduce the energy levels. With 4 spatial dimensions, there are naturally 6 rotations and 2 angular momenta (a classical one with parity + and a spin with parity -). The SO(4) degeneracy and Schrodinger's energy levels are deduced, but the fine structure requires a modification: we give an example with a linear equation. We observe that the extra degree of freedom naturally disappears at classical scale (objects made of a large number of elementary particles). We then observe that the quantum principle of minimal coupling (here produced by Lorentz transformations) is analogous to a modification of the metric inside the wave function. We use the corresponding metric (no coordinate singularity, the central one being naturally resolved by the Lorentz transformation with extra dimensions) to describe gravitation: the deduced equation of motion reduces, in the low-field approximation, to the equation given by general relativity. More generally, extra dimensions may be useful in particle physics: conservation of lepton numbers could be understood as conservation of momentum along other dimensions, and inconvenient divergences could be resolved.
[3553] vixra:1611.0260 [pdf]
Deng Entropy in Hyper Power Set and Super Power Set
Deng entropy has been proposed very recently to handle the uncertainty degree of a belief function in the Dempster-Shafer framework. In this paper, two new belief entropies based on the frame of Deng entropy are proposed for hyper-power sets and super-power sets respectively, in order to measure the uncertainty degree of more uncertain and more flexible information. The new entropies based on the frame of Deng entropy in hyper-power sets and super-power sets can be used directly in applications of DSmT.
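Deng entropy over an ordinary power set is commonly written as E_d(m) = -Σ_A m(A) log2( m(A) / (2^|A| - 1) ), summed over the focal elements A. Below is a minimal Python sketch of that base definition; the hyper-power-set and super-power-set variants proposed in the paper change the underlying frame, not this formula, so treat this as an illustration rather than the paper's code:

```python
import math

def deng_entropy(masses):
    """Deng entropy of a basic belief assignment (BBA).

    `masses` maps each focal element A (a frozenset) to its mass m(A):
        E_d = -sum_A m(A) * log2( m(A) / (2**|A| - 1) )
    """
    total = 0.0
    for focal, m in masses.items():
        if m > 0:
            total -= m * math.log2(m / (2 ** len(focal) - 1))
    return total

# Bayesian case (singleton focal elements): reduces to Shannon entropy.
bba = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
print(deng_entropy(bba))  # 1.0 bit, as for a fair coin
```

For a non-singleton focal element the denominator 2^|A| - 1 counts its non-empty subsets, which is what makes Deng entropy exceed Shannon entropy on non-Bayesian evidence.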
[3554] vixra:1611.0231 [pdf]
Non-Conservativeness of Natural Orbital Systems
Newtonian mechanics and contemporary physics model non-circular orbital systems on all scales as essentially conservative, closed-path zero-work systems, and circumvent the obvious contradictions (a rotor-free 'field' of 'force', in spite of its inverse proportionality to a squared time-varying distance) by exploiting both energy and momentum conservation, along with specific initial conditions, to arrive at technically more or less satisfactory solutions, but leaving many unexplained puzzles. In sharp contrast, in the recently developed thermo-gravitational oscillator approach, the movement of a body in planetary orbital systems is modeled as the consequence of two counteracting mechanisms represented by respective central forces, that is, gravitational and anti-gravitational accelerations, in which the actual orbital trajectory comes out through direct application of the Least Action Principle, taken as minimization of the work (to be) done or, equivalently, of a closed-path integral of increments (or time-rate of change) of kinetic energy. Based on the insights gained, a critique of the conventional methodology and practices reveals shortcomings that may be the cause of the numerous difficulties modern physics has been facing: anomalies (such as the gravitational and Pioneer 10/11 anomalies), the three-or-more-body problem, the postulation in modern cosmology of dark matter and dark energy, the quite problematic foundations of quantum mechanics, etc. Furthermore, for their overcoming, the indispensability of the Aether as an energy-substrate for all physical phenomena gains very strong support, and based on recent developments in Aetherodynamics, Descartes' Vortex Physics may become largely reaffirmed in the near future.
[3555] vixra:1611.0230 [pdf]
The Planck Mass Must Always Have Zero Momentum – Relativistic Energy-Momentum Relationship for the Planck Mass.
This is a short paper on the maximum possible momentum for subatomic particles, as well as on the relativistic energy-momentum relationship for a Planck mass. This paper builds significantly on the maximum velocity for subatomic particles introduced by [1, 2, 3], and I strongly recommend reading an earlier paper [1] before reading this one. It is important that we distinguish between the Planck momentum and the momentum of a Planck mass. The Planck momentum can (almost) be reached by any subatomic particle with rest-mass lower than a Planck mass when accelerated to its maximum velocity, as given by Haug. Just before the Planck momentum is reached, the mass will turn into a Planck mass. The Planck mass is surprisingly at rest for an instant, and then the mass will burst into pure energy. This may sound illogical at first, but the Planck mass is the very turning point of the light particle (the indivisible particle) and it is the only mass that is at rest as observed from any reference frame. That the Planck mass is at rest as observed from any reference frame could be as important as understanding that the speed of light is the same in every reference frame. The Planck mass seems to be as unique and special among masses (particles with mass) as the speed of light is among velocities. It is likely one of the big missing pieces towards a unified theory.
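For orientation, the standard dispersion relation E² = (pc)² + (mc²)² reduces at p = 0 to the rest energy, which for the Planck mass m_P = sqrt(ħc/G) can be checked numerically. This is a sketch using CODATA-style constants, not the paper's own derivation:

```python
import math

c = 2.99792458e8          # speed of light, m/s
hbar = 1.054571817e-34    # reduced Planck constant, J s
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2

# Planck mass m_P = sqrt(hbar * c / G)
m_p = math.sqrt(hbar * c / G)

# Standard dispersion relation E^2 = (p c)^2 + (m c^2)^2;
# at rest (p = 0) it reduces to the rest energy m_P c^2.
E_rest = m_p * c**2

print(m_p)     # ~2.18e-8 kg
print(E_rest)  # ~1.96e9 J
```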
[3556] vixra:1611.0224 [pdf]
Sieve of Collatz
The sieve of Collatz is a new algorithm that traces the non-linear Collatz problem back to a linear cross-out algorithm. It remains unproved.
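The cross-out scheme itself is not reproduced in the abstract, but the underlying Collatz map it linearizes is the standard one: halve even numbers, send odd n to 3n + 1. A minimal sketch of that map:

```python
def collatz_trajectory(n):
    """Iterate the classical Collatz map until reaching 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz_trajectory(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

The conjecture asserts that this loop terminates for every positive starting n; no proof is known.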
[3557] vixra:1611.0222 [pdf]
Geometric Model of Time
The purpose of this article is to provide an alternative, strictly geometric, interpretation for the observed phenomenon of time. This Geometric Model of Time (GMT) is consistent with both Theories of Relativity but goes beyond current explanations for the nature of and the apparent one-directedness of time - the so-called Arrow of Time. Key elements of the model are: 1. Our physical space (not space-time) is a 4-dimensional phenomenon. The notion of a dimension of time that is distinct from space is not necessary for a complete description of our universe. All dimensions are identical and symmetrical. No one dimension can be singled out to be universally or uniquely labeled as "time" or be otherwise unique. 2. All physical objects in our universe are endowed with an axiomatic vectorial property we call velocity. The scalar value of this property (speed) is invariable and identical for all objects and is labeled as c (speed of light). 3. The experience of time as we know it, or, more precisely, of sequential causality, results from each observer's motion through space at c. "Time" is the term given by each observer to their own individual direction of travel in our physical four-space. This model is a better fit with observed phenomena than current ones, as well as being simpler and more elegant, elegance being defined as having symmetry (in the sense that it treats no dimension as being singular).
[3558] vixra:1611.0212 [pdf]
The Divergence Myth in Gauss-Bonnet Gravity
In Riemannian geometry there is a unique combination of the Riemann-Christoffel curvature tensor, Ricci tensor and Ricci scalar that defines a fourth-order Lagrangian for conformal gravity theory. This Lagrangian can be greatly simplified by eliminating the curvature tensor term, leaving a unique combination of just the Ricci tensor and scalar. The resulting formalism and the associated equations of motion provide a tantalizing alternative to Einstein-Hilbert gravity that may have application to the problems of dark matter and dark energy without the imposition of the cosmological constant or extraneous scalar, vector and spinor terms typically employed in attempts to generalize the Einstein-Hilbert formalism. Gauss-Bonnet gravity specifies that the full Lagrangian hides an ordinary divergence (or surface term) that can be used to eliminate the curvature tensor term. In this paper we show that the overall formalism, outside of surface terms necessary for integration by parts, does not involve any such divergence. Instead, it is the Bianchi identities that are hidden in the formalism, and it is this fact that allows for the simplification of the conformal Lagrangian.
[3559] vixra:1611.0211 [pdf]
A Variable Order Hidden Markov Model with Dependence Jumps
Hidden Markov models (HMMs) are a popular approach for modeling sequential data, typically based on the assumption of a first- or moderate-order Markov chain. However, in many real-world scenarios the modeled data entail temporal dynamics the patterns of which change over time. In this paper, we address this problem by proposing a novel HMM formulation, treating temporal dependencies as latent variables over which inference is performed. Specifically, we introduce a hierarchical graphical model comprising two hidden layers: on the first layer, we postulate a chain of latent observation-emitting states, the temporal dependencies between which may change over time; on the second layer, we postulate a latent first-order Markov chain modeling the evolution of temporal dynamics (dependence jumps) pertaining to the first-layer latent process. As a result of this construction, our method allows for effectively modeling non-homogeneous observed data, where the patterns of the entailed temporal dynamics may change over time. We devise efficient training and inference algorithms for our model, following the expectation-maximization paradigm. We demonstrate the efficacy and usefulness of our approach considering several real-world datasets. As we show, our model allows for increased modeling and predictive performance compared to the alternative methods, while offering a good trade-off between the resulting increases in predictive performance and computational complexity.
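As background, the first-order HMM that this model generalizes scores an observation sequence with the standard forward recursion. Below is a minimal sketch of that baseline; the paper's variable-order, dependence-jump inference is not reproduced here, and the parameter values are illustrative:

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm for a first-order HMM.

    pi:  (K,)   initial state distribution
    A:   (K,K)  transition matrix, A[i, j] = P(z_t = j | z_{t-1} = i)
    B:   (K,M)  emission matrix,   B[j, o] = P(x_t = o | z_t = j)
    Returns the likelihood P(obs) of the observation sequence.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(hmm_forward(pi, A, B, [0, 1, 0]))  # likelihood of the sequence
```

The proposed model makes the dependence structure behind A itself a latent first-order chain, so this fixed recursion becomes the innermost step of a hierarchical EM procedure.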
[3560] vixra:1611.0202 [pdf]
Dirac Equation in 24 Irreducible Representations
We demonstrate that if one adheres to a method akin to Dirac's method of arriving at the Dirac equation -- then the Dirac equation is not the only equation that one can generate: there is a whole set of twenty-four new equations that Dirac left out. Of these new equations, it is interesting that some of them violate C, P, T, CT, CP, PT and CPT-symmetry. If these equations are acceptable on the basis of their flowing from the widely -- if not universally -- accepted Dirac prescription, then the great riddle of why matter preponderates over antimatter might find a solution.
[3561] vixra:1611.0200 [pdf]
Dirac Equation for General Spin Particles Including Bosons
We demonstrate (show) that the Dirac equation -- which is universally assumed to represent only spin 1/2 particles -- can be manipulated using legal mathematical operations, starting from the Dirac equation, so that it describes any general spin particle. If our approach is acceptable and is what Nature employs, then, contrary to the current situation, one will not need a unique and separate equation to describe particles of different spins; only one equation is needed -- the General Spin Dirac Equation. This approach is more economical and very much in the spirit of unification -- i.e., the tying together into a single unified garment of a number of phenomena (or facets of physical and natural reality) using a single principle, which, in the present case, is the bunching together into one theory (equation) of all spin particles in the General Spin Dirac Equation.
[3562] vixra:1611.0199 [pdf]
On Anderson et al. (2015)'s Supposed Sinusoidal Time Variation of the Newtonian Gravitational Constant
In a recent publication [J. D. Anderson et al. (2015), Europhys. Lett. 110, 10002], a strong correlation was presented between the measured values of the Newtonian gravitational constant G and the 5.9-year oscillation of the length of day. Following Anderson et al. (2015)'s publication, S. Schlamminger et al. [Phys. Rev. D 91, 121101(R)] compiled a more complete set of published measurements of G made in the last 35 years, performed a least-squares regression to a sinusoid with period 5.9 years, and found that this sinusoid still yields a reasonable fit to the data, thus lending some credence to the claim of Anderson et al. (2015). However, it is yet to be established whether or not this signal is gravitational in origin. In this brief communication, we point out that -- in principle -- this sinusoidal signal has a place in the gravitomagnetic model that we are currently working on.
[3563] vixra:1611.0196 [pdf]
On the Secular Recession of Earth-Moon System as an Azimuthal Gravitational Phenomenon
We here apply the ASTG-model to the observed anomalous secular trend in the mean Sun-(Earth-Moon) and Earth-Moon distances. For the recession of the Earth-Moon system, in agreement with observation, we obtain a recession of about 11.20 ± 0.20 cm/yr. The ASTG-model predicts orbital drift as a result of the orbital inclination and the Solar mass loss rate. The Newtonian gravitational constant G is assumed to be an absolute time constant. Standish (2005) and Krasinsky and Brumberg (2004) reported for the Earth-Moon system an orbital recession from the Sun of about (15.00 ± 4.00) cm/yr, while Williams et al. (2004), Williams and Boggs (2009) and Williams et al. (2014) report for the Moon an orbital recession of about 38.00 mm/yr from the Earth. The predictions of the ASTG-model for the Earth-Moon system agree very well with the findings of Standish (2005) and Krasinsky and Brumberg (2004). The lost orbital angular momentum of the Earth-Moon system -- which we here hypothesize to be gained as spin by the two-body Earth-Moon system -- accounts very well for the observed lunar drift; therefore, one can safely say that the ASTG-model predicts, to a reasonable degree of accuracy, the observed lunar drift of about 38.00 mm/yr from the Earth.
[3564] vixra:1611.0195 [pdf]
On the Secular Recession of the Earth-Moon System as an Azimuthal Gravitational Phenomenon (II)
We here apply -- albeit with improved assumptions compared to our earlier work (Nyambuya et al., Astron. & Astrophys. S. Sci. 358(1): pp. 1-12, 2015) -- the ASTG-model to the observed secular trend in the mean Sun-(Earth-Moon) and Earth-Moon distances, thereby providing an alternative explanation of what the cause of this secular trend may be. For the semi-major axis rate of the Earth-Moon system, we now obtain a new value of about +3.00 cm/yr, while in the earlier work we obtained a value of about 5.00 cm/yr. This new value of +3.00 cm/yr is closer to Standish (2005)'s measurement of (7.00 ± 2.00) cm/yr. Our present value accounts for only 43% of Standish (2005)'s measurement. The other 57% can be accounted for by invoking the hypothesis that the θ-component of the angular momentum may be non-zero. In the end, it can be said that the ASTG-model predicts orbital drift as a result of the orbital inclination and the Solar mass loss rate. The Newtonian gravitational constant G is assumed to be an absolute time constant.
[3565] vixra:1611.0194 [pdf]
Dirac Equation for the Proton (I) -- Why Three Quarks for Muster Mark?
The present reading is the first in a series where we suggest a Dirac equation for the Proton. Despite its great success in explaining the physical world as we know it, in its bare form, not only is the Dirac equation at a loss but it fails to account, e.g., for the following: (1) Why inside hadrons (the Proton in this case) there are three, not four or five quarks; (2) Why quarks have fractional electronic charges; (3) Why the gyromagnetic ratio of the Proton is not equal to two as the Dirac equation requires. In the present reading, we make an attempt to answer the first question of why inside the Proton there are three, not four or five quarks.
[3566] vixra:1611.0193 [pdf]
Dirac Equation for the Proton (II) -- Why Fractional Charges for Muster Mark's Quarks?
The present reading is the second in a series where we suggest a Dirac equation for the Proton. Despite its great success in explaining the physical world as we know it, in its bare form, not only is the Dirac equation at a loss but it fails to account, e.g., for the following: (1) Why inside hadrons there are three, not four or five quarks; (2) Why quarks have fractional charges; (3) Why the gyromagnetic ratio of the Proton is not equal to two as the Dirac equation requires. In the present reading, we make an attempt to answer the second question of why quarks have fractional charges. We actually calculate the exact values of the charges of these quarks.
[3567] vixra:1611.0192 [pdf]
Dirac Equation for the Proton (III) -- Gyromagnetic Ratio
The present reading is the third in a series where we suggest a Dirac equation for the Proton. Despite its great success in explaining the physical world as we know it, in its bare form, not only is the Dirac equation at a loss but it fails to account, e.g., for the following: (1) Why inside hadrons there are three, not four or five quarks; (2) Why quarks have fractional charges; (3) Why the gyromagnetic ratio of the Proton is not equal to two as the Dirac equation requires. In the present reading, we make an attempt to answer the third question of why the gyromagnetic ratio of the Proton is not equal to two as the Dirac equation requires. We show that from the internal logic of the proposed theory -- when taken to first-order approximation -- we are able to account for 55.7% [2.000000000] of the Proton's excess gyromagnetic ratio [3.585 694 710(50)]. The remaining 44.3% [1.585 694 710(50)] can be accounted for as a second-order effect that has to do with the Proton having a finite size.
[3568] vixra:1611.0191 [pdf]
High Energy Photons as Product Superposed Massive Particle-Antiparticle Pairs
Our present understanding, as revealed to us from Einstein's Special Theory of Relativity (STR) and experimental philosophy, informs us that a massive particle can never attain the light-speed barrier c. Massive particles are (according to the STR) eternally incarcerated to travel at sub-luminal speeds. On the other hand, Einstein's STR does not forbid the existence of particles that travel at superluminal speeds. Only massless particles can travel at the speed of light and nothing else. In the present reading, we demonstrate that it should in principle be possible for massive particles to travel at the speed of light. Our investigations suggest that all light (Electromagnetic radiation, or any particle for that matter that travels at the speed of light) may very well be composed of a coupled massive particle-antiparticle pair.
[3569] vixra:1611.0189 [pdf]
A Prediction of Quantized Gravitational Deflection of Starlight
In an earlier reading, it was argued that the pivotal, all-important, and supposedly watershed factor "2" emerging from Einstein's General Theory of Relativity (GTR) -- used in the Solar eclipse measurements by Sir Arthur S. Eddington as the clearest indicator yet that Einstein's GTR is indeed superior to Newton's theory of gravitation -- may not be adequate as an arbiter to decide the fate of Newtonian gravitational theory. In the present reading, using ideas from research that we have carried out over the years -- research whose endeavour is to obtain a General Spin Dirac Equation in Curved Spacetime (GS-Dirac Equation) -- we present yet another "surprising" result: if the ideas leading to the GS-Dirac Equation hold, together with those rendering the factor "2" inadequate as an arbiter, then the gravitational deflection of a photon may very well depend on its spin, in such a manner that if photons of different spins were observed undergoing gravitational deflection by a massive object such as the Sun, the resulting deflection may very well exhibit distinct deflection quantization as a result of the quantized spins.
[3570] vixra:1611.0186 [pdf]
Dirac Wavefunction as a 4 × 4 Component Function
Since it was discovered some 84 years ago, the Dirac equation has been understood to admit 4×1 component wavefunctions. We demonstrate here that this same equation admits 4×4 component wavefunctions as well.
[3571] vixra:1611.0185 [pdf]
Pauli Exclusion Principle, the Dirac Void and the Preponderance of Matter over Antimatter
In the year 1928, the pre-eminent British physicist -- Paul Adrien Maurice Dirac -- derived his very successful equation now popularly known as the Dirac equation. This unprecedented equation is one of the most beautiful, subtle, noble and esoteric equations in physics. One of its greatest embellishments is that this equation exhibits a perfect symmetry which, amongst others, requires that the Universe contain as much matter as antimatter, or that, for every known fundamental particle, there exists a corresponding antiparticle. We show here that the Dirac theory in its bare form -- without the need of the Pauli Exclusion Principle -- can, via its internal logic, beautifully explain the stability of the Dirac Void, i.e., the empty Dirac Sea. There is no need for one to `uglify' Dirac's otherwise beautiful, self-contained and consistent theory by indiscriminately stuffing the Dirac vacuum with an infinite amount of invisible negative energy in order to prevent the positive-energy Electron from falling into the negative-energy state.
[3572] vixra:1611.0180 [pdf]
On the Time Evolution of Dual Orthogonal Group-Systems
As it has been conjectured for a long time, dual orthogonal group-systems (DOGs) exhibit a non-static behaviour in the low temperature-limit. This article aims to explore the unitary transformations corresponding to the time-evolution of such systems in the limit of $\beta\rightarrow \infty$.
[3573] vixra:1611.0142 [pdf]
A Theory of the Muon; Explaining the Electron's Embarrassing Fat Cousin!
A theory of the muon is presented that explains the mass of the muon from a formula derived from the relativistic wave equations independently discovered by Lanczos, Weyl, and van der Waerden, using the Liénard-Wiechert potential, discussed in the appendix. The mean life of the muon is also calculated in a way that differs from the beta-decay-like standard model mechanism, using instead a spontaneous-emission-like model based on Heisenberg's spontaneous emission formula and the model of Weinberg and Salam, with the Z0 boson playing a role analogous to the photon.
[3574] vixra:1611.0137 [pdf]
Restatement and Extension of Various Spin Particle Equations
This paper is based on my own previous articles. I improve the research methods and add some new content. A more rigorous, more analytical, more complete and better organized mathematical-physical method is adopted, and I have tried, as far as possible, to give the whole article a sense of beauty. Firstly, the mathematical foundation of constant-tensor analysis methods is established rigorously in Chapter One. Some remarkable mathematical properties are found, and many important constant tensors are proposed. Then, in Chapter Two, I use constant tensors as a mathematical tool applied to physics. Some important physical quantities are defined using constant tensors, and all kinds of relationships between them are studied in detail. The canonical, analytical and strict mathematical-physical sign system is established in this chapter. In Chapter Three, I use the mathematical tools of the previous two chapters to study the spinorial formalism of the classical equations of various spin particles, and the equivalence between the spinorial and classical formalisms is proved strictly. I focus on the electromagnetic field, the Yang-Mills field and the gravitational field, etc. In particular, a new spinorial formalism of the gravitational field identity is proposed. To explore further, I study several important equations by contrast, and some new and interesting results are obtained. Chapter Four is the most important part of this thesis and my original motivation for writing this paper. In this chapter, I put forward a new form of particle equation: the Spin Equation. The equation is directly constructed from spin and the spin tensor, and I note that the spin tensor is also the transformation matrix of the corresponding field representation, so the physical meaning of this equation is very clear. The corresponding particle equation can be simply and directly written down according to the transformation law of the particle field.
It correctly describes the neutrino, the electromagnetic field, the Yang-Mills field and the electron, etc., and it is found to be completely equivalent to the fully symmetric Penrose equation. A scalar field can be introduced naturally in this formalism. Thus, a more interesting equation is obtained: the Switch Spin Equation. When the scalar field is zero, free particles can exist; when the scalar field is not zero, free particles cannot exist. The scalar field acts as a switch: it can control particle generation and annihilation. This provides a new physical mechanism for particle generation and annihilation. At the same time, it can also answer the question of why the universe's inflationary period can be completely described by scalar fields. The equation itself places an inherent limitation on the scalar field, so that the scalar is quantized automatically; each quantized value of the scalar corresponds to a different physical equation. That provides a new idea and inspiration for the unity of the five superstring theories. Finally, in Chapter Five, the Bargmann-Wigner equation is analyzed thoroughly. It is proved to be equivalent to the Rarita-Schwinger equation in the half-integer spin case, and to the Klein-Gordon equation in the integer spin case. The profound physical meaning of the Bargmann-Wigner equation is revealed. By contrast, it is found that the Bargmann-Wigner equation is suitable for describing massive particles, but not so suitable for describing massless particles; the Penrose spinorial equation or the Spin Equation is more suitable for massless particles. The mathematics and physics of this paper have a strong originality, and some of the mathematical and physical concepts, methods and contents also have a certain novelty. All of them are strictly calculated and established step by step through my own independent efforts. This has taken me a lot of time and energy, and I have used my spare time to finish the paper. Due to the limited time and my limited level, it is inevitable that there are a few mistakes.
Comments and suggestions are welcome!
[3575] vixra:1611.0136 [pdf]
Size and Expansion of the Universe in Zero Energy Universe (Logical Defenses for the Model "We Are Living in a Black Hole")
We can propose two models as examples of the Zero Energy Universe Model. In this paper, we explore the model in which the total energy of the universe is zero, matter has positive energy, and only the gravitational potential energy is considered as a negative energy offsetting this positive energy. In this model, to maintain the law of energy conservation while the universe is expanding, energy needs to increase, which increases the R_gs or R_B of the universe. If a newly appearing energy has antigravity or negative-pressure characteristics, it can serve as the model that accounts for dark energy. There exists a zone of uniform energy density within R_gs, due to the presence of gravitational potential energy with negative values. Based on this, we estimate the current size of the universe. The model we propose can also solve some of the problems faced by the model "the universe is a black hole".
[3576] vixra:1611.0114 [pdf]
An Early Contribution to Vector Maximisation of De Finetti
An achievement of De Finetti for which he has received little recognition thus far is his contribution to the field of vector maximisation in two articles published in 1937, Problemi di "optimum" and Problemi di "optimum" vincolato. The speech will put his contribution in historical perspective and will discuss its importance for economic theory.
[3577] vixra:1611.0110 [pdf]
The Feynman-Dyson Propagators for Neutral Particles (Local or Non-local?)
An analog of the S=1/2 Feynman-Dyson propagator is presented in the framework of the S=1 Weinberg theory. The basis for this construction is the concept of the Weinberg field as a system of four field functions differing by parity and by dual transformations. Next, we analyze the recent controversy over the definitions of the Feynman-Dyson propagator for the field operator containing the S=1/2 self/anti-self charge-conjugate states in the papers by D. Ahluwalia et al. and by W. Rodrigues Jr. et al. The solution of this mathematical controversy is obvious: it is related to the necessary doubling of the Fock space (as in the Barut and Ziino works), thus extending the corresponding Clifford algebra. However, the logical interrelations of the different mathematical foundations with the physical interpretations are not so obvious. Physics should choose only one correct formalism; it is not clear why two correct mathematical formalisms, based on the same postulates, lead to different physical results.
[3578] vixra:1611.0089 [pdf]
The 3n ± p Conjecture: A Generalization of Collatz Conjecture
The Collatz conjecture is an open conjecture in mathematics, named after Lothar Collatz who proposed it in 1937. It is also known as the 3n + 1 conjecture, the Ulam conjecture (after Stanislaw Ulam), Kakutani's problem (after Shizuo Kakutani), and so on. Several generalizations of the Collatz conjecture have been proposed. In this paper, a new generalization of the Collatz conjecture, called the 3n ± p conjecture, where p is a prime, is proposed. It operates on 3n + p and 3n - p, and for any starting number n, its sequence eventually enters a finite cycle, and there are finitely many such cycles. The 3n ± 1 conjecture is a special case of the 3n ± p conjecture when p is 1.
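The conjectured behaviour -- every trajectory eventually entering one of finitely many cycles -- can be probed numerically. Since the abstract does not spell out the exact rule for choosing between 3n + p and 3n - p, the sketch below demonstrates generic cycle detection and applies it only to the classical p = 1 map:

```python
def trajectory_cycle(n, step, limit=10**6):
    """Iterate `step` from n and return the cycle eventually entered."""
    seen = {}
    seq = []
    for i in range(limit):
        if n in seen:
            return seq[seen[n]:]        # the repeating cycle
        seen[n] = i
        seq.append(n)
        n = step(n)
    raise RuntimeError("no cycle found within limit")

# Classical p = 1 special case: halve if even, else 3n + 1.
collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
print(trajectory_cycle(7, collatz))  # [4, 2, 1] -- the trivial 4->2->1 cycle
```

For a candidate 3n ± p rule, one would swap in the corresponding `step` function and check that every tested start lands in one of a finite set of cycles.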
[3579] vixra:1611.0079 [pdf]
Sets, Formulas and Electors
This article is a mathematical experiment with sets and formulas. We consider new elements, called electors. An elector has the properties of both sets and formulas.
[3580] vixra:1611.0073 [pdf]
Coefficient-of-Determination Fourier Transform (CFT)
This algorithm is designed to perform Discrete Fourier Transforms (DFT) to convert temporal data into spectral data. What is unique about this DFT algorithm is that it can produce spectral data at any user-defined resolution; existing DFT methods such as the FFT are limited to a resolution proportional to the temporal resolution. This algorithm obtains the Fourier transform by studying the coefficient of determination between a series of artificial sinusoidal functions and the temporal data, and normalizing the variance data into a high-resolution spectral representation of the time-domain data with a finite sampling rate.
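The described procedure -- fit artificial sinusoids at arbitrary trial frequencies and record the coefficient of determination of each fit -- can be sketched as follows (function and parameter names are illustrative, not the author's):

```python
import numpy as np

def r2_spectrum(t, x, freqs):
    """Coefficient-of-determination spectrum at arbitrary trial frequencies.

    For each trial frequency f, least-squares-fit
    a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c and record the R^2 of the fit.
    """
    x = np.asarray(x, dtype=float)
    ss_tot = np.sum((x - x.mean()) ** 2)
    spectrum = []
    for f in freqs:
        M = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(M, x, rcond=None)
        ss_res = np.sum((x - M @ coef) ** 2)
        spectrum.append(1.0 - ss_res / ss_tot)
    return np.array(spectrum)

t = np.linspace(0, 1, 200, endpoint=False)
x = np.sin(2*np.pi*7*t)
freqs = np.linspace(6, 8, 41)     # trial grid finer than the 1/T bin spacing
spec = r2_spectrum(t, x, freqs)
print(freqs[spec.argmax()])       # peaks at ~7 Hz
```

Because the trial grid `freqs` is arbitrary, the spectral resolution is user-defined rather than tied to 1/T, which is the property the abstract emphasizes.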
[3581] vixra:1611.0053 [pdf]
Escape Velocity at the Subatomic Level Leads to Escape Probability
In this paper we look at the escape velocity for subatomic particles. We suggest a new and simple interpretation of what exactly the escape velocity represents at the quantum level. At the quantum level, the escape velocity leads to an escape probability that is likely to be more useful at the subatomic scale than the escape velocity itself. The escape velocity seems to make simple logical sense when studied in light of atomism. Haug [1] has already shown that atomism gives us the same mathematical end results as Einstein's special relativity. Viewed in terms of general relativity and Newtonian mechanics, the escape velocity seems simple to understand. It also seems to explain phenomena at the quantum scale if one maintains an atomist's point of view. This strengthens our hypothesis that everything consists of indivisible particles and void (empty space). From an atomistic interpretation, our main conclusion is that the standard escape velocity formula is likely the most accurate formula we can generate, and it appears to hold all the way down to the Planck scale. An escape velocity of v_e > c simply indicates that an indivisible particle cannot escape from a fundamental particle (for example an electron) without colliding with the indivisible particles making up the fundamental particle. To understand this paper in detail, I highly recommend first reading the article The Planck Mass Finally Discovered [2].
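The standard formula in question, v_e = sqrt(2GM/r), and the v_e > c threshold can be illustrated numerically. This is a sketch of the textbook formula only; the paper's probabilistic reinterpretation is not reproduced:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def escape_velocity(mass, radius):
    """Standard escape velocity v_e = sqrt(2 G M / r)."""
    return math.sqrt(2 * G * mass / radius)

# Earth: v_e is about 11.2 km/s, far below c.
print(escape_velocity(5.972e24, 6.371e6) / 1e3)   # ~11.19 km/s

# The condition v_e > c is equivalent to r < 2GM/c^2 (the Schwarzschild radius).
def schwarzschild_radius(mass):
    return 2 * G * mass / c**2
```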
[3582] vixra:1611.0049 [pdf]
Infinite Product Representations for the Binomial Coefficient, Pochhammer's Symbol, Newton's Binomial and the Exponential Function
In this paper, I demonstrate an infinite product for the binomial coefficient, Euler's and Weierstrass's infinite products for Pochhammer's symbol, a limit formula for Pochhammer's symbol, a limit formula for the exponential function, and Euler's and Weierstrass's infinite products for Newton's binomial and the exponential function, among other things.
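Two of the classical limit formulas in this family -- e^x = lim (1 + x/n)^n and Euler's limit Γ(z) = lim n! n^z / (z(z+1)...(z+n)), from which products for the Pochhammer symbol (a)_k = Γ(a+k)/Γ(a) follow -- can be checked numerically. This is a sketch of the standard identities, not the paper's own derivations:

```python
import math

def exp_limit(x, n):
    """Limit formula for the exponential: e^x = lim_{n->oo} (1 + x/n)^n."""
    return (1 + x / n) ** n

def gamma_euler(z, n=100000):
    """Euler's limit: Gamma(z) = lim_{n->oo} n! * n^z / (z(z+1)...(z+n)).

    Computed in log space to avoid overflow.
    """
    log_num = math.lgamma(n + 1) + z * math.log(n)
    log_den = sum(math.log(z + k) for k in range(n + 1))
    return math.exp(log_num - log_den)

print(exp_limit(1.0, 10**7))   # close to e = 2.71828...
print(gamma_euler(0.5))        # close to Gamma(1/2) = sqrt(pi) = 1.77245...
```

Both approximations converge only algebraically in n, which is why the truncated products are useful as representations but slow as numerical methods.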
[3583] vixra:1611.0037 [pdf]
Subnormal Distribution Derived from Evolving Networks with Variable Elements
During the last decades, power-law distributions have played significant roles in analyzing the topology of scale-free (SF) networks. However, observing the degree distributions of practical networks and other unequal distributions such as the wealth distribution, we find that, instead of decreasing monotonically, most real distributions exhibit a peak near the beginning, which cannot be accurately described by a power law. In this paper, in order to break the limitation of the power-law distribution, we provide detailed derivations of a novel distribution, called the Subnormal distribution, from evolving networks with variable elements, together with its concrete statistical properties. Additionally, simulations fitting the subnormal distribution to the degree distribution of evolving networks, a real social network, and a personal wealth distribution are displayed to show the fitness of the proposed distribution.
[3584] vixra:1611.0033 [pdf]
Local Realism Generalized, EPR Refined, Bell's Theorem Refuted
This open letter challenges Annals of Physics' Editors and Bell's supporters on this front: in the context of Bell's theorem -- after AoP (2016:67) -- ‘it's a proven scientific fact that a violation of local realism has been demonstrated theoretically and experimentally.' We show that such claims under the Bellian canon are curtailed by its foundation on a naive realism that is known to be false; ie, under Bohr's old insight (in our terms), a test may disturb the tested system. Further: (i) We define a general all-embracing local realism -- CLR, commonsense local realism -- the union of local-causality (no causal influence propagates superluminally) and physical-realism (some physical properties change interactively). (ii) Under CLR, with EPR-based variables (and without QM), a thought-experiment delivers a local-realistic account of EPRB and GHZ in 3-space. (iii) Under EPR, mixing common-sense with undergrad math/physics in the classical way so favored by Einstein, we interpret QM locally and realistically. (iv) We find the flaw in Bell's theorem: Bell's 1964:(14a) ≠ Bell's 1964:(14b) under EPRB. (v) EPR (1935) famously argue that additional variables will bring locality and causality to QM's completion; we show that they are right. (vi) Even more famously, Bell (1964) cried ‘impossible' against such variables; we give the shortest possible refutation of his claim. (vii) Using Bell's (1988:88) moot gloss on a fragment of von Neumann's work, we conclude: ‘There's nothing to Bell's theorem -- nor Bellian variants like CHSH (1969), Mermin (1990), Peres (1995); nor Bellian endorsements like those by Bricmont, du Sautoy, Goldstein et al., Maudlin, Norsen, Shimony -- it's not just flawed, it's silly; its assumptions nonsense; it's not merely false but foolish' and misleading. (viii) Our results accord with common-sense, QM, Einstein's principles, EPR's belief and Bell's hopes and expectations.
[3585] vixra:1611.0010 [pdf]
The Condensing Stirling Cycle Heat Engine
The Stirling thermodynamic heat engine cycle is modified so that, instead of an ideal gas, a real, monatomic working fluid is used, with the engine designed so that the isothermal compression starts off as a saturated gas and ends as a mixed-phase fluid. This cycle takes advantage of the attractive intermolecular Van der Waals forces of the working fluid to assist in compressing the working fluid partially into a liquid, reducing the input compression work and increasing the overall heat engine efficiency to exceed the Carnot efficiency.
[3586] vixra:1610.0336 [pdf]
Fuzzy Evidential Influence Diagram Evaluation Algorithm
Fuzzy influence diagrams (FIDs) are graphical models that combine qualitative and quantitative analysis to solve decision-making problems. However, FIDs use an incomprehensive evaluation criterion to score nodes in complex systems, so many different nodes receive the same score, which cannot reflect their differences. Based on fuzzy set theory and Dempster-Shafer (D-S) evidence theory, this paper changes the traditional evaluation system and modifies the corresponding algorithm, so that the influence diagram can more effectively reflect the true situation of the system and yield more practical results. Numerical examples and a real application in a supply chain financial system are used to show the efficiency of the proposed influence diagram model.
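The evaluation scheme above builds on Dempster-Shafer evidence theory. As a minimal illustration of the underlying machinery (not the paper's own algorithm), the following sketch implements Dempster's rule of combination for two basic probability assignments over a toy two-element frame; all masses are invented for the example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments (BPAs). Focal elements are frozensets; mass assigned
    to conflicting (disjoint) pairs is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict          # normalization factor
    return {s: w / k for s, w in combined.items()}

# Toy BPAs over the frame {a, b} (illustrative numbers only)
A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {A: 0.6, AB: 0.4}
m2 = {B: 0.7, AB: 0.3}
m = dempster_combine(m1, m2)
print({"".join(sorted(s)): round(w, 4) for s, w in m.items()})
```

The combined masses sum to one; the conflict term here is m1({a})·m2({b}) = 0.42, removed by normalization.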
[3587] vixra:1610.0334 [pdf]
A Non-Particle View of DNA and Its Implication to Cancer Therapy
The various effects of electromagnetic fields on DNA have been reported by Luc Montagnier and his group. It has been shown that genetic information can be transmitted to water through applications of electromagnetic fields, which means that DNA has a wave character. Here, a non-particle view of DNA challenges the standard paradigm of DNA and biology. Being based on frequency, it can have implications for the physics of cancer.
[3588] vixra:1610.0330 [pdf]
Method for Organizing Wireless Computer Network in Chemical System
Method for organizing wireless computer network in chemical system. This invention relates to physical chemistry and computer technology. The nodes of this network are computers with connected chemical feed systems set up to feed substances into the chemical system and online chemical analyzers set up to conduct the chemical analysis of the substance located in the chemical system and register the results of chemical analysis of the substance located in the chemical system. The invention is method for organizing wireless computer network in chemical system, comprising the fact that the transmission of electronic messages from one node to another node of this network is produced through communication channel of this wireless network, created in the chemical system which is organized by connecting a source computer to the chemical feed system, feeding substances into the chemical system by means of the operation of the chemical feed system in accordance with the finite sequence of settings modes of chemical feed system representing electronic message transmitted from the source computer and which is received from the source computer, and by connecting to the receiving computer an online chemical analyzer by which the chemical analysis of the substance located in the chemical system is conducted and the results of chemical analysis of the substance located in the chemical system are registered, and through which, on the receiving computer, the results of registration of the results of chemical analysis of the substance located in the chemical system are received, and the electronic message is restored from the results of registration of the results of chemical analysis of the substance located in the chemical system. 
In addition, each node of this wireless computer network is given the capability to receive electronic messages through the connected online chemical analyzer from another node of this wireless network, and to transmit electronic messages through the connected chemical feed system to another node of this wireless computer network through the communication channels of this wireless computer network in the chemical system. The technical result of this invention is that radio systems are not used in any wireless communication channel of this wireless computer network in the chemical system. This article is identical to the patent application "Method for organizing wireless computer network in chemical system", number 2015113357, which was published in Russian and filed at the Russian Patent Office: Federal Institute for Intellectual Property, Federal Service for Intellectual Property (Rospatent), Russian Federation.
[3589] vixra:1610.0328 [pdf]
Newton and Einstein's Gravity in a New Perspective for Planck Masses and Smaller Sized Objects
In a recent paper, Haug [1] has rewritten many of Newton's and Einstein's gravitational results, without changing their output, into a quantized Planck form. However, his results only hold down to the scale of Planck-mass-size objects. Here we derive similar results for any mass less than or equal to a Planck mass. All of the new formulas presented in this paper give the same numerical output as the traditional formulas. However, they have been rewritten in a way that gives a new perspective on the formulas when working with gravity at the level of the subatomic world. Rewriting the well-known formulas in this way could make it easier to understand the strengths and weaknesses of Newton's and Einstein's gravitation formulas at the subatomic scale, potentially opening them up for new interpretations.
[3590] vixra:1610.0299 [pdf]
A Theory of the Gravitational Co-Field and its Application to the Spacecraft Flyby Anomaly
A co-field to Newton's gravitational field is derived and its properties defined. It is applied to explain "Spacecraft-Earth Flyby Anomalies" discovered during deep space missions launched between 1990 and 2006. The Flyby anomaly has been considered a major unresolved problem in astrophysics. Keywords: Gravitational Co-Field, Spacecraft-Earth Flyby Anomalies, Space Physics, Classical Physics
[3591] vixra:1610.0295 [pdf]
Bell's Theorem Refuted; Commonsense Local Realism Defined
An open letter to Bellians and the Annals of Physics' Editors re -- it's a proven scientific fact: a violation of local realism has been demonstrated theoretically and experimentally -- after AoP (2016:67). EPR (1935) famously argue that additional variables will bring locality and causality to QM's completion; we show that they are right. Even more famously, Bell (1964) cried ‘impossible' against such variables; we give the shortest possible refutation of his claims. With EPR-based variables (and without QM), an old thought-experiment delivers a commonsensical locally-causal account of EPRB and GHZ in 3-space. We then name the flaw in Bell's theorem – Bell's error -- Bell's 1964:(14a) ≠ Bell's 1964:(14b) under EPRB. Thus, given Bell's (1988:88) gloss on a snippet of von Neumann's work, ‘There's nothing to Bell's theorem -- nor variants like CHSH (1969), Mermin (1990), Peres (1995) -- it's not just flawed, it's silly; not merely false but foolish.' In short, we show that the whole Bellian canon -- with its no-name brand of local realism -- is all of those, and misleading too. Under EPR, mixing common-sense with undergrad math/physics in the classical way so favored by Einstein, we interpret QM locally and realistically. We thus define a new brand of local realism -- CLR, commonsense local realism -- the union of local-causality (no causal influence propagates superluminally) and physical-realism (some physical properties change interactively). Long may EPR rule OK we say.
[3592] vixra:1610.0281 [pdf]
An Information Volume Measure
How to measure the volume of uncertainty information is an open issue. Shannon entropy is used to represent the uncertainty degree of a probability distribution. Consider a generalized probability distribution, meaning that probability is assigned not only to the basic event space but also to the power set of the event space; in this case, a so-called meta-probability space is constructed. A new measure, named Deng entropy, is presented. The results show that, compared with existing methods, Deng entropy is not only better in terms of mathematical form but also has a significant physical meaning.
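The abstract does not reproduce the measure's definition; the form commonly cited in the literature is E_d(m) = -Σ_A m(A) log2( m(A) / (2^|A| - 1) ), where the sum runs over focal elements A. A minimal sketch under that assumption:

```python
import math

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment (BPA), using the
    form commonly given in the literature:
        E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ).
    Focal elements are frozensets mapped to their masses."""
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return total

# Bayesian BPA (singletons only): Deng entropy reduces to Shannon entropy
bayesian = {frozenset("a"): 0.5, frozenset("b"): 0.5}
print(deng_entropy(bayesian))    # 1.0 bit, same as Shannon entropy

# Total ignorance on a two-element frame: the non-singleton focal
# element contributes extra uncertainty beyond the Shannon value
vacuous = {frozenset("ab"): 1.0}
print(deng_entropy(vacuous))     # log2(3) ~ 1.585 bits
```

The reduction to Shannon entropy for singleton-only assignments is the sanity check usually quoted for this measure.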
[3593] vixra:1610.0277 [pdf]
A New Formalism of Arbitrary Spin Particle Equations
In this paper, a new formalism of arbitrary-spin particle equations is constructed. The physical meaning of the new equation is very clear: it is completely expressed by quantities related to spin. It is proved to describe correctly the neutrino, photon and electron, etc. Then a scalar field is introduced into the new equation. The new equation with the scalar field has a unique characteristic: the scalar field acts like a switch that can control the generation and annihilation of particles. This provides a new dynamical mechanism for the generation and annihilation of particles. This can also explain why the inflationary period of the universe can be completely described by scalar fields.
[3594] vixra:1610.0263 [pdf]
Quanta, Physicists and Probabilities ... ?
There seems to be nothing short of a {\it double whammy} hitting the users of probability, and among them physicists, especially those involved in the foundations of quanta. First is the instant instinctual reaction that phenomena which interest one do sharply and clearly divide into the {\it dichotomy} of {\it two and only two} alternatives of being {\it either} ``deterministic", {\it or} on the contrary, being ``probabilistic". However, there is also a second, prior and yet deeper trouble, namely, the ``probabilistic" case is strongly believed to be equally clear and well-founded as is the ``deterministic" one. And the only difference seen between the two is that the latter can talk also about ``individual" phenomena, while the former can only do so about large enough ``ensembles" for which, however, it is believed to be equally clear, precise and rigorous with the ``deterministic" approach. Or briefly, ``probabilistic'' is seen as nothing else but the ``deterministic'' on the level of ``ensembles" ... \\ The fact, however, is that there is a {\it deep gap} between the empirical world of ``random" phenomena, and on the other hand, theories of ``probability". Furthermore, any attempt to bridge that gap does inevitably involve {\it infinity}, thus aggravating the situation to the extent that even today, and even if not quite realized by many, theories of ``probabilities" have a {\it shaky} foundation. \\ This paper tries to bring to the awareness of various users of ``probabilities", and among them, to physicists involved in quanta, the fact that - seemingly unknown to them - they are self-inflicted victims of the mentioned double whammy.
[3595] vixra:1610.0252 [pdf]
Super Conformal Group in D=10 Space-time
In the present discussion we study the super Poincaré group in D=10 dimensions in terms of the highest division algebra, the octonions. We construct the Poincaré group in D=8 dimensions, then discuss its extension to the conformal algebra of D=10 in terms of the octonion algebra. Finally, the extension of the conformal algebra of D=10 dimensional space to the superconformal algebra of the Poincaré group is carried out in a consistent manner.
[3596] vixra:1610.0251 [pdf]
Graded Lie Algebra of Quaternions and Superalgebra of SO(3,1)
In the present discussion we study the grading of the quaternion algebra (\mathbb{H}) and the Lorentz algebra of the O(3,1) group. We then make an attempt to construct the whole Poincaré algebra of SO(3,1) in terms of quaternions. After this, the supersymmetrization of this group is carried out in a consistent manner. Finally, the dimensional reduction from D=4 to D=2 is studied.
[3597] vixra:1610.0250 [pdf]
Planets and Suns and Their Corresponding Sphere Packed Average Particles
When one talks about the density of a planet or star, one normally talks about the average density, despite the fact that the core is much more dense and the surface much less dense than the average density. Here we will link the notion of an average density to a new concept of a hypothetical planetary average subatomic particle. We will define this hypothetical particle as a particle that, when sphere-packed according to the Kepler conjecture, matches both the volume and the mass of the planet or sun in question. Even if this type of average particle may not actually exist, we still feel it gives us some new insight into how the average density could be linked to a hypothetical average particle. Take the question of how such a particle would compare to an electron, for example. The answer is in the analytical solution presented.
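The stated definition can be turned into a short calculation. A sketch under explicit assumptions (not the paper's own derivation): N identical particles of mass m sum to the body's mass M, and their spheres fill the Kepler packing fraction eta = pi/sqrt(18) of the body's volume, which fixes the particle radius as r = R (eta m / M)^(1/3).

```python
import math

ETA = math.pi / math.sqrt(18)   # Kepler-conjecture packing fraction ~ 0.7405

def average_particle_radius(body_mass, body_radius, particle_mass):
    """Radius of identical particles of the given mass that, sphere-packed
    at the Kepler density, fill the body's volume while summing to its mass.
    From N*m = M and N*(4/3)*pi*r^3 = ETA*(4/3)*pi*R^3 it follows that
        r = R * (ETA * m / M)**(1/3).
    (A sketch of the stated definition, not the paper's analytical solution.)"""
    return body_radius * (ETA * particle_mass / body_mass) ** (1.0 / 3.0)

# Illustrative comparison with an electron, using Earth's bulk figures
M_earth, R_earth = 5.972e24, 6.371e6    # kg, m
m_electron = 9.109e-31                  # kg
r = average_particle_radius(M_earth, R_earth, m_electron)
print(f"required particle radius: {r:.2e} m")
```

For electron-mass particles and Earth's bulk density, the required radius lands in the picometer range, far larger than any classical electron radius, which is the kind of comparison the abstract alludes to.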
[3598] vixra:1610.0244 [pdf]
Operator Exponentials for the Clifford Fourier Transform on Multivector Fields in Detail
In this paper we study Clifford Fourier transforms (CFT) of multivector functions taking values in Clifford’s geometric algebra, hereby using techniques coming from Clifford analysis (the multivariate function theory for the Dirac operator). In these CFTs on multivector signals, the complex unit i∈C is replaced by a multivector square root of −1, which may be a pseudoscalar in the simplest case. For these integral transforms we derive an operator representation expressed as the Hamilton operator of a harmonic oscillator.
[3599] vixra:1610.0203 [pdf]
One-Dimensional Navier-Stokes Finite Element Flow Model
This technical report documents the theoretical, computational, and practical aspects of the one-dimensional Navier-Stokes finite element flow model. The document is particularly useful to those who are interested in implementing, validating and utilizing this relatively-simple and widely-used model.
[3600] vixra:1610.0202 [pdf]
Solving the Flow Fields in Conduits and Networks Using Energy Minimization Principle with Simulated Annealing
In this paper, we propose and test an intuitive assumption that the pressure field in single conduits and networks of interconnected conduits adjusts itself to minimize the total energy consumption required for transporting a specific quantity of fluid. We test this assumption by using linear flow models of Newtonian fluids transported through rigid tubes and networks in conjunction with a simulated annealing (SA) protocol to minimize the total energy cost. All the results confirm our hypothesis as the SA algorithm produces very close results to those obtained from the traditional deterministic methods of identifying the flow fields by solving a set of simultaneous equations based on the conservation principles. The same results apply to electric ohmic conductors and networks of interconnected ohmic conductors. Computational experiments conducted in this regard confirm this extension. Further studies are required to test the energy minimization hypothesis for the non-linear flow systems.
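As a toy illustration of the hypothesis (assumed parameters and a generic SA schedule, not the paper's protocol), the following sketch splits a fixed flow between two parallel tubes by annealing the dissipated power; the minimum reproduces the familiar parallel-resistor division q1 = Q·R2/(R1+R2):

```python
import math
import random

def sa_flow_split(R1, R2, Q, steps=20000, seed=0):
    """Minimal simulated-annealing sketch: split total flow Q between two
    parallel tubes with linear resistances R1, R2 so as to minimize the
    dissipated power E(q) = R1*q**2 + R2*(Q - q)**2."""
    rng = random.Random(seed)
    energy = lambda q: R1 * q * q + R2 * (Q - q) ** 2
    q = rng.uniform(0.0, Q)
    best_q, best_e = q, energy(q)
    T = 1.0
    for _ in range(steps):
        T *= 0.9995                                   # geometric cooling
        cand = min(Q, max(0.0, q + rng.gauss(0.0, 0.1 * Q)))
        dE = energy(cand) - energy(q)
        if dE < 0 or rng.random() < math.exp(-dE / max(T, 1e-12)):
            q = cand
            if energy(q) < best_e:                    # track best-so-far
                best_q, best_e = q, energy(q)
    return best_q

# Parallel tubes behave like parallel resistors: q1* = Q*R2/(R1+R2) = 0.75
q1 = sa_flow_split(R1=1.0, R2=3.0, Q=1.0)
print(round(q1, 3))   # close to 0.75
```

The agreement between the annealed optimum and the conservation-based solution is the kind of check the paper performs on much larger networks.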
[3601] vixra:1610.0201 [pdf]
Energy Minimization for the Flow in Ducts and Networks
The present paper is an attempt to demonstrate how the energy minimization principle may be considered as a governing rule for the physical equilibrium that determines the flow fields in tubes and networks. We previously investigated this issue using a numerical stochastic method, specifically simulated annealing, where we demonstrated the problem by some illuminating examples and concluded that energy minimization principle can be a valid hypothesis. The investigation in this paper is more general as it is based to a certain extent on an analytical approach.
[3602] vixra:1610.0200 [pdf]
Deterministic and Stochastic Algorithms for Resolving the Flow Fields in Ducts and Networks Using Energy Minimization
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton, and Global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of Computational Fluid Dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
[3603] vixra:1610.0199 [pdf]
Variational Approach for the Flow of Ree-Eyring and Casson Fluids in Pipes
The flow of Ree-Eyring and Casson non-Newtonian fluids is investigated using a variational principle to optimize the total stress. The variationally-obtained solutions are compared to the analytical solutions derived from the Weissenberg-Rabinowitsch-Mooney equation and the results are found to be identical within acceptable numerical errors and modeling approximations.
[3604] vixra:1610.0198 [pdf]
Using the Stress Function in the Flow of Generalized Newtonian Fluids Through Pipes and Slits
We use a generic and general numerical method to obtain solutions for the flow of generalized Newtonian fluids through circular pipes and plane slits. The method, which is simple and robust can produce highly accurate solutions which virtually match any analytical solutions. The method is based on employing the stress, as a function of the pipe radius or slit thickness dimension, combined with the rate of strain function as represented by the fluid rheological constitutive relation that correlates the rate of strain to stress. Nine types of generalized Newtonian fluids are tested in this investigation and the solutions obtained from the generic method are compared to the analytical solutions which are obtained from the Weissenberg-Rabinowitsch-Mooney-Schofield method. Very good agreement was obtained in all the investigated cases. All the required quantities of the flow which include local viscosity, rate of strain, flow velocity profile and volumetric flow rate, as well as shear stress, can be obtained from the generic method. This is an advantage as compared to some traditional methods which only produce some of these quantities. The method is also superior to the numerical meshing techniques which may be used for resolving the flow in these systems. The method is particularly useful when analytical solutions are not available or when the available analytical solutions do not yield all the flow parameters.
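The stress-function route described above can be sketched numerically. Assuming a power-law fluid as the test rheology (an illustrative choice, not taken from the paper), the stress profile in a pipe, tau(r) = (dP/dL)·r/2, depends only on the pressure field and geometry; the constitutive relation converts it to a rate of strain, which is integrated for the volumetric flow rate:

```python
import math

def flow_rate_stress_function(n, k, R, dP_dL, m=100000):
    """Stress-function sketch for a power-law fluid in a circular pipe.
    tau(r) = dP_dL * r / 2 is rheology-independent; the power-law relation
    gives gamma_dot = (tau/k)**(1/n). With no slip at the wall, integrating
    the velocity profile by parts yields
        Q = pi * integral_0^R r^2 * gamma_dot(r) dr,
    evaluated here with the midpoint rule."""
    h = R / m
    total = 0.0
    for i in range(m):
        r = (i + 0.5) * h
        tau = dP_dL * r / 2.0
        total += r * r * (tau / k) ** (1.0 / n)
    return math.pi * total * h

def flow_rate_analytic(n, k, R, dP_dL):
    """Analytical Weissenberg-Rabinowitsch result for the same fluid."""
    tau_w = dP_dL * R / 2.0                 # wall shear stress
    return math.pi * n / (3.0 * n + 1.0) * R ** 3 * (tau_w / k) ** (1.0 / n)

n, k, R, dP_dL = 0.6, 0.5, 0.01, 2000.0     # illustrative fluid and pipe
print(flow_rate_stress_function(n, k, R, dP_dL))
print(flow_rate_analytic(n, k, R, dP_dL))   # the two should agree closely
```

The close match between the generic numerical route and the closed-form result mirrors the validation strategy described in the abstract.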
[3605] vixra:1610.0196 [pdf]
Using the Stress Function in the Flow of Generalized Newtonian Fluids Through Conduits with Non-Circular or Multiply Connected Cross Sections
We investigate the possibility that the spatial dependency of stress in generalized Newtonian flow systems is a function of the applied pressure field and the conduit geometry but not of the fluid rheology. This possibility is well established for the case of a one-dimensional flow through simply connected regions, specifically tubes of circular uniform cross sections and plane thin slits. If it can also be established for the more general case of generalized Newtonian flow through non-circular or multiply connected geometries, such as the two-dimensional flow through conduits of rectangular or elliptical cross sections or the flow through annular circular pipes, then analytical, semi-analytical or highly accurate numerical solutions for these geometries (regarding stress, rate of strain, velocity profile and volumetric flow rate) can be obtained from the stress function, which can be easily obtained from the Newtonian case, in combination with the constitutive rheological relation for the particular non-Newtonian fluid, as done previously for the case of the one-dimensional flow through simply connected regions.
[3606] vixra:1610.0195 [pdf]
Reply to "Comment on Sochi's Variational Method for Generalised Newtonian Flow" by Pritchard and Corson
In this article we challenge the claim that the previously proposed variational method to obtain flow solutions for generalized Newtonian fluids in circular tubes and plane slits is exact only for power law fluids. We also defend the theoretical foundation and formalism of the method which is based on minimizing the total stress through the application of the Euler-Lagrange principle.
[3607] vixra:1610.0194 [pdf]
Modeling the Flow of a Bautista-Manero Fluid in Porous Media
In this article, the extensional flow and viscosity and the converging-diverging geometry were examined as the basis of the peculiar viscoelastic behavior in porous media. The modified Bautista-Manero model, which successfully describes shear-thinning, elasticity and thixotropic time-dependency, was used for modeling the flow of viscoelastic materials which also show thixotropic attributes. An algorithm, originally proposed by Philippe Tardy, that employs this model to simulate steady-state time-dependent flow was implemented in a non-Newtonian flow simulation code using pore-scale modeling, and the initial results were analyzed. The findings are encouraging for further future development.
[3608] vixra:1610.0193 [pdf]
Modeling the Flow of Yield-Stress Fluids in Porous Media
Yield-stress is a problematic and controversial non-Newtonian flow phenomenon. In this article, we investigate the flow of yield-stress substances through porous media within the framework of pore-scale network modeling. We also investigate the validity of the Minimum Threshold Path (MTP) algorithms to predict the pressure yield point of a network depicting random or regular porous media. Percolation theory as a basis for predicting the yield point of a network is briefly presented and assessed. In the course of this study, a yield-stress flow simulation model alongside several numerical algorithms related to yield-stress in porous media were developed, implemented and assessed. The general conclusion is that modeling the flow of yield-stress fluids in porous media is too difficult and problematic. More fundamental modeling strategies are required to tackle this problem in the future.
[3609] vixra:1610.0192 [pdf]
Emissivity: A Program for Atomic Emissivity Calculations
In this article we report the release of a new program for calculating the emissivity of atomic transitions. The program, which can be obtained with its documentation from our website www.scienceware.net, passed various rigorous tests and was used by the author to generate theoretical data and analyze observational data. It is particularly useful for investigating atomic transition lines in astronomical context as the program is capable of generating a huge amount of theoretical data and comparing it to observational list of lines. A number of atomic transition algorithms and analytical techniques are implemented within the program and can be very useful in various situations. The program can be described as fast and efficient. Moreover, it requires modest computational resources.
[3610] vixra:1610.0180 [pdf]
About The Geometry of Cosmos(revised)
The current paper presents a new idea that might lead us to the Grand Unified Theory. A concrete mathematical framework has been provided that could be appropriate for one to work with. Possible answers were given concerning the problems of dark matter and dark energy, as well as the “penetration” to the vacuum-dominant epoch, combining quantum physics with cosmology through the existence of the Higgs boson. A value for the Higgs mass around 125.179345 GeV/c^2 and a value for the vacuum density around 4.41348x10^-5 GeV/cm^3 were derived. Via Cartan's theorem, a proof regarding the number of bosons existing in nature (28) has been presented. Additionally, the full Lagrangian of our Cosmos (including quantum gravity) was accomplished.
[3611] vixra:1610.0170 [pdf]
Mass Shift Due to the Nonlinear Lorentz Group
We determine nonlinear Lorentz transformations between coordinate systems which are mutually in a constant symmetrical accelerated motion. The maximal acceleration, as an analogue of the maximal velocity in special relativity, follows from the nonlinear Lorentz group of transformations. The mass formula was derived by the author using the same method as for the Thomas precession formula. It can play a crucial role in particle physics and cosmology.
[3612] vixra:1610.0155 [pdf]
Pore-Scale Modeling of Non-Newtonian Flow in Porous Media
The thesis investigates the flow of non-Newtonian fluids in porous media using pore-scale network modeling. Non-Newtonian fluids show very complex time and strain dependent behavior and may have initial yield stress. Their common feature is that they do not obey the simple Newtonian relation of proportionality between stress and rate of deformation. They are generally classified into three main categories: time-independent, time-dependent and viscoelastic. Two three-dimensional networks representing a sand pack and Berea sandstone were used. An iterative numerical technique is used to solve the pressure field and obtain the flow rate and apparent viscosity. The time-independent category is investigated using two fluid models: Ellis and Herschel-Bulkley. The analysis confirmed the reliability of the non-Newtonian network model used in this study. Good results are obtained, especially for the Ellis model, when comparing the network model results to experimental data sets found in the literature. The yield-stress phenomenon is also investigated and several numerical algorithms were developed and implemented to predict threshold yield pressure of the network. An extensive literature survey and investigation were carried out to understand the phenomenon of viscoelasticity with special attention to the flow in porous media. The extensional flow and viscosity and converging-diverging geometry were thoroughly examined as the basis of the peculiar viscoelastic behavior in porous media. The modified Bautista-Manero model was identified as a promising candidate for modeling the flow of viscoelastic materials which also show thixotropic attributes. An algorithm that employs this model was implemented in the non-Newtonian code and the initial results were analyzed. The time-dependent category was examined and several problems in modeling and simulating the flow of these fluids were identified.
[3613] vixra:1610.0154 [pdf]
High Throughput Software for Powder Diffraction and its Application to Heterogeneous Catalysis
In this thesis we investigate high throughput computational methods for processing large quantities of data collected from synchrotrons and their application to spectral analysis of powder diffraction data. We also present the main product of this PhD programme, specifically a software package called 'EasyDD' developed by the author. This software was created to meet the increasing demand on data processing and analysis capabilities imposed by modern detectors which produce huge quantities of data. Modern detectors coupled with the high intensity X-ray sources available at synchrotrons have led to the situation where datasets can be collected in ever shorter time scales and in ever larger numbers. Such large volumes of datasets pose a data processing bottleneck which grows with current and future instrument development. EasyDD has achieved its objectives and made significant contributions to scientific research. It can also be used as a model for more mature attempts in the future. EasyDD is currently in use by a number of researchers in a number of academic and research institutions to process high-energy diffraction data. These include data collected by different techniques such as Energy Dispersive Diffraction, Angle Dispersive Diffraction and Computer Aided Tomography. EasyDD has already been used in a number of published studies, and is currently in use by the High Energy X-Ray Imaging Technology project. The software was also used by the author to process and analyse datasets collected from synchrotron radiation facilities. In this regard, the thesis presents novel scientific research involving the use of EasyDD to handle large diffraction datasets in the study of alumina-supported metal oxide catalyst bodies. These data were collected using Tomographic Energy Dispersive Diffraction Imaging and Computer Aided Tomography techniques.
[3614] vixra:1610.0153 [pdf]
Atomic and Molecular Aspects of Astronomical Spectra
In the first section we present the atomic part where a C2+ atomic target was prepared and used to generate theoretical data to investigate recombination lines arising from electron-ion collisions in thin plasma. R-matrix method was used to describe the C2+ plus electron system. Theoretical data concerning bound and autoionizing states were generated in the intermediate-coupling approximation. The data were used to generate dielectronic recombination data for C+ which include transition lines, oscillator strengths, radiative transition probabilities, emissivities and dielectronic recombination coefficients. The data were cast in a line list containing 6187 optically-allowed transitions which include many C II lines observed in astronomical spectra. This line list was used to analyze the spectra from a number of astronomical objects, mainly planetary nebulae, and identify their electron temperature. The electron temperature investigation was also extended to include free electron energy analysis to investigate the long-standing problem of discrepancy between the results of recombination and forbidden lines analysis and its possible connection to the electron distribution. In the second section we present the results of our molecular investigation; the generation of a comprehensive, calculated line list of frequencies and transition probabilities for H2D+. The line list contains over 22 million rotational-vibrational transitions occurring between more than 33 thousand energy levels and covers frequencies up to 18500 cm-1. About 15% of these levels are fully assigned with approximate rotational and vibrational quantum numbers. A temperature-dependent partition function and cooling function are presented. Temperature-dependent synthetic spectra for the temperatures T=100, 500, 1000 and 2000 K in the frequency range 0-10000 cm-1 were also generated and presented graphically.
[3615] vixra:1610.0151 [pdf]
Bell's Theorem Refuted: EPR Rule ok
EPR (1935) famously argue that additional variables will bring locality and causality to QM's completion; we show that they are right. More famously, Bell (1964) cried 'impossible' against such variables; we give the shortest possible refutation of his theorem. With EPR-based variables -- and no QM -- a thought-experiment delivers common-sense locally-causal accounts of EPRB and GHZ in 3-space. We then find the flaw in Bell's theorem: Bell's 1964:(14a) does not equal Bell's 1964:(14b). Thus, at odds with EPR (and us), Bell's unrealistic theorem and its many variants (eg, Mermin, Peres) miss their mark. In short, mixing common-sense with undergrad math and physics in the classical way so favored by Einstein, we interpret QM locally and realistically. Long may EPR rule OK we say.
[3616] vixra:1610.0149 [pdf]
Introduction to Tensor Calculus
These are general notes on tensor calculus which can be used as a reference for an introductory course on tensor algebra and calculus. A basic knowledge of calculus and linear algebra with some commonly used mathematical terminology is presumed.
[3617] vixra:1610.0148 [pdf]
Tensor Calculus
These notes are the second part of the tensor calculus documents which started with the previous set of introductory notes. In the present text, we continue the discussion of selected topics of the subject at a higher level, expanding some topics and developing further concepts and techniques when necessary. Unlike the previous notes, which are largely based on a Cartesian approach, the present notes are essentially based on assuming an underlying general curvilinear coordinate system.
[3618] vixra:1610.0147 [pdf]
Principles of Differential Geometry
The present text is a collection of notes about differential geometry prepared to some extent as part of tutorials about topics and applications related to tensor calculus. They can be regarded as a continuation of the previous notes on tensor calculus, as they are based on the materials and conventions given in those documents. They can be used as a reference for a first course on the subject or as part of a course on tensor calculus.
[3619] vixra:1610.0146 [pdf]
Special Relativity: Scientific or Philosophical Theory?
In this article, we argue that the theory of special relativity, as formulated by Einstein, is a philosophical rather than a scientific theory. What is scientific and experimentally supported is the formalism of relativistic mechanics embedded in the Lorentz transformations and their direct mathematical, experimental and observational consequences. This is in parallel with quantum mechanics, where the scientific content and experimental support of this branch of physics are embedded in the formalism of quantum mechanics and not in its philosophical interpretations, such as the Copenhagen school or the parallel-worlds explanations. Einstein's theory of special relativity receives undue credit from the success of the relativistic mechanics of the Lorentz transformations. Hence, all the postulates and consequences of Einstein's interpretation which have no direct experimental or observational support should be reexamined, and the relativistic mechanics of the Lorentz transformations should be treated in education, academia and research in a fashion similar to that of quantum mechanics.
[3620] vixra:1610.0121 [pdf]
On Complex Interval Linear System
Linear systems of equations with crisp values are a crucial field of research. Several different techniques are available to solve this type of equation. The parameter values are in fact uncertain in nature because the data are collected from experiments; another source of uncertainty is error in calculation. To accommodate errors and the uncertain nature of the parameter values, we use interval analysis. In this work, we address solution methods for complex interval linear systems. We propose a new method for finding solutions of complex interval linear systems of equations (CLSE). Moreover, we present numerical experiments using the proposed methods.
[3621] vixra:1610.0106 [pdf]
A New 3n-1 Conjecture Akin to Collatz Conjecture
The Collatz conjecture is an open conjecture in mathematics named after Lothar Collatz, who proposed it in 1937. It is also known as the 3n + 1 conjecture, the Ulam conjecture (after Stanislaw Ulam), Kakutani's problem (after Shizuo Kakutani) and so on. In this paper a new conjecture, called the 3n-1 conjecture, which is akin to the Collatz conjecture, is proposed. It operates on 3n - 1: for any starting number n, the sequence eventually reaches either 1, 5 or 17. The 3n-1 conjecture is compared with the Collatz conjecture.
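The stated rule (3n - 1 for odd n, n/2 for even n, terminating at 1, 5 or 17) is easy to explore numerically. The sketch below is ours, not code from the paper; the iteration guard is an assumption added so a hypothetical counterexample cannot hang the loop.

```python
def three_n_minus_one(n):
    """Iterate the 3n-1 map (3n-1 if n is odd, n/2 if n is even) until
    the sequence reaches one of the conjectured terminal values 1, 5 or 17.
    Returns the terminal value reached. The iteration bound is a safety
    guard against a (so far unobserved) non-terminating orbit."""
    for _ in range(10**6):
        if n in (1, 5, 17):
            return n
        n = 3 * n - 1 if n % 2 else n // 2
    raise RuntimeError("no terminal value reached within the guard bound")
```

For example, 7 → 20 → 10 → 5, so `three_n_minus_one(7)` returns 5, while 3 → 8 → 4 → 2 → 1 returns 1.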
[3622] vixra:1610.0076 [pdf]
A Solution to the Problem of Apollonius Using Vector Dot Products
To the collections of problems solved via Geometric Algebra (GA) in References 1-13, this document adds a solution, using only dot products, to the Problem of Apollonius. The solution is provided for completeness and for contrast with the GA solutions presented in Reference 3.
[3623] vixra:1610.0074 [pdf]
Belief Reliability Analysis and Its Application
In reliability analysis, fault tree analysis based on evidential networks (EN) is an important research topic. However, the existing EN approaches still have two issues: one is that the final results are expressed as interval numbers, which carry relatively high uncertainty when making a final decision; the other is that the combination rule is not used to fuse uncertain information. These issues greatly decrease the efficiency of EN in handling uncertain information. To address these open issues, a new methodology, called Belief Reliability Analysis, is presented in this paper. Combination methods for series, parallel, series-parallel and parallel-series systems are proposed for reliability evaluation. Numerical examples and a real application in a servo-actuation system are used to show the efficiency of the proposed Belief Reliability Analysis methodology.
[3624] vixra:1610.0067 [pdf]
Four Dimensional Quantum Hall Effect for Dyons
Starting with the division algebra based on quaternions, we construct the generalization of the quantum Hall effect from two dimensions to four dimensions. We construct the required Hamiltonian operator and obtain its eigenvalues and eigenfunctions for the four-dimensional quantum Hall effect for dyons. The degeneracy of the four-dimensional quantum Hall system is discussed in terms of two related integers (P and Q), where the integer Q plays the role of the Landau level index; accordingly, the lowest Landau level is obtained for the four-dimensional quantum Hall effect associated with magnetic monopoles (or dyons). It is shown that there exist both the integer and the fractional quantum Hall effect, so the four-dimensional quantum Hall system provides a macroscopic number of degenerate states, and at appropriate integer or fractional filling fractions this system forms an incompressible quantum liquid. Key words: quaternion, dyons, Hamiltonian operator, Landau level, etc.
[3625] vixra:1610.0065 [pdf]
Lauricella Hypergeometric Series Over Finite Fields
In this paper we give a finite field analogue of the Lauricella hypergeometric series and obtain some transformation and reduction formulae and several generating functions for the Lauricella hypergeometric series over finite fields. Some of these generalize known results of Li et al. as well as several other well-known results.
[3626] vixra:1610.0064 [pdf]
Derivation of Photon Mass and Avogadro Constant from Planck units
Originally proposed in 1899 by the German physicist Max Planck, Planck units are also known as natural units because their definition comes only from properties of the fundamental physical theories and not from interchangeable experimental parameters. It is widely accepted that Planck units are the most fundamental units. In this paper, a few more fundamental constants are derived from Planck units. These constants are permutations and combinations of Planck units and hence, by construction, they are also constants. The mass and radius of the photon are derived. The Avogadro constant, Boltzmann constant and unified mass unit are also derived. The structure of the photon is explained, along with the meaning of the Avogadro constant in terms of photon structure and the meaning of the Planck mass. As proof of the meaning of the Planck mass, the solar constant is derived; the solar constant is also derived by applying string theory. Finally, revised Planck current, Planck voltage and Planck impedance are derived. It is also proven that the Planck mass is the energy emitted by any star per second per ray of proper length c, whereas the energy emitted per second per ray of proper length c by a planet or communication antenna is not equal to the Planck mass.
[3627] vixra:1610.0054 [pdf]
Some Solution Strategies for Equations that Arise in Geometric (Clifford) Algebra
Drawing mainly upon exercises from Hestenes's New Foundations for Classical Mechanics, this document presents, explains, and discusses common solution strategies. Included are a list of formulas and a guide to nomenclature.
[3628] vixra:1610.0050 [pdf]
Spectra of New Join of Two Graphs
Let G1 and G2 be two graphs with vertex sets V(G1), V(G2) and edge sets E(G1), E(G2) respectively. The subdivision graph S(G) of a graph G is the graph obtained by inserting a new vertex into every edge of G. The SG-vertex join of G1 and G2 is the graph obtained from S(G1) ∪ G1 and G2 by joining every vertex of V(G1) to every vertex of V(G2). In this paper we determine the adjacency spectra (respectively, the Laplacian spectra and signless Laplacian spectra) of the SG-vertex join for a regular graph G1 and an arbitrary graph G2.
[3629] vixra:1610.0049 [pdf]
Spectra of a New Join in Duplication Graph
The duplication graph DG of a graph G is obtained by inserting a new vertex corresponding to each vertex of G, making each new vertex adjacent to the neighbourhood of the corresponding vertex of G, and deleting the edges of G. Let G1 and G2 be two graphs with vertex sets V(G1) and V(G2) respectively. The DG-vertex join of G1 and G2 is the graph obtained from DG1 and G2 by joining every vertex of V(G1) to every vertex of V(G2). The DG-add-vertex join of G1 and G2 is the graph obtained from DG1 and G2 by joining every additional vertex of DG1 to every vertex of V(G2). In this paper we determine the A-spectra and L-spectra of these two new joins of graphs for a regular graph G1 and an arbitrary graph G2. As an application we give the number of spanning trees, the Kirchhoff index and the Laplacian-energy-like invariant of the new joins. We also obtain an infinite family of a new class of integral graphs.
[3630] vixra:1610.0045 [pdf]
Proof that HST WFC3 UVIS and IR Channel nJy Measurements Are Wrong
The context of the paper is the flux densities of the order of nJy reported in recent papers. The aim is to prove that the reported flux densities of the order of nJy are wrong. A new table of flux densities of the order of mJy is created for both the IR and UVIS channels of HST/WFC3. This table should be used as a template for future projects related to HST/WFC3. Any new measurements below the mJy values reported in the table should be rejected for the reasons given in this paper. Such errors are made possible by the advent of algorithms and digital computing technology.
[3631] vixra:1610.0043 [pdf]
Spectrum of (k, r)-Regular Hypergraphs
We present a spectral theory of uniform, regular and linear hypergraphs. The main results concern the nature of the eigenvalues of (k, r)-regular linear hypergraphs and the relation between their duals and line graphs. We also discuss some properties of the Laplacian spectrum of (k, r)-regular hypergraphs.
[3632] vixra:1610.0041 [pdf]
Method for Organizing Wireless Computer Network in Biological Tissue
Method for organizing a wireless computer network in biological tissue. This invention relates to computer technology and biophysics, and can be used for the establishment and operation of a wireless computer network in biological tissue. The nodes of this network are computers connected to vibration meters and vibration generators, whose contact surfaces are brought into contact with the biological tissue. The invention is a method for organizing a wireless computer network in biological tissue in which electronic messages are transmitted from one node to another through communication channels created in the biological tissue. A channel is organized by connecting a source computer to a vibration generator, bringing the contact surface of the vibration generator into contact with the biological tissue, and transferring controlled mechanical motions to the tissue through that contact surface by operating the vibration generator according to a finite sequence of generator settings representing the electronic message received from the source computer. A vibration meter connected to the receiving computer registers the parameters of the mechanical motions received from the biological tissue through the contact surface of the vibration meter, which is likewise in contact with the tissue; the results of this registration are passed to the receiving computer, and the electronic message is restored from them.
In addition, each node of this wireless computer network is given the capability to receive electronic messages through its connected vibration meter from another node, and to transmit electronic messages through its connected vibration generator to another node, over the communication channels of this wireless computer network through the biological tissue. The technical result is that no radio systems are used in the wireless communication channels of this wireless computer network in the biological tissue.
[3633] vixra:1610.0028 [pdf]
A New Belief Entropy: Possible Generalization of Deng Entropy, Tsallis Entropy and Shannon Entropy
Shannon entropy is the mathematical foundation of information theory, Tsallis entropy is at the root of nonextensive statistical mechanics, and Deng entropy was proposed very recently to measure the uncertainty degree of a belief function. In this paper, a new entropy H is proposed to generalize Deng entropy, Tsallis entropy and Shannon entropy. The new entropy H degenerates to Deng entropy, Tsallis entropy and Shannon entropy under different conditions, and also preserves the mathematical properties of Deng entropy, Tsallis entropy and Shannon entropy.
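The abstract does not display the new entropy H, but one of the claimed degenerations is easy to check numerically from the standard definitions: Deng entropy of a basic probability assignment reduces to Shannon entropy when all focal elements are singletons. The sketch below is ours (function names and data layout are assumptions), using the usual formula for Deng entropy, not the paper's H.

```python
from math import log2

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment, given as a dict
    mapping frozenset focal elements A to masses m(A):
        E_d(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) ).
    For a Bayesian BPA (all focal elements singletons) the denominator
    2**|A| - 1 equals 1, so E_d reduces to Shannon entropy."""
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

def shannon_entropy(probs):
    """Shannon entropy (base 2) of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)
```

With `bpa = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}` the two functions agree (both give 1 bit), while a non-singleton focal element such as `{frozenset({'a','b'}): 1.0}` yields the larger value log2(3).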
[3634] vixra:1609.0397 [pdf]
Einstein Rebooted, Bell's Theorem Refuted, Etc.
Rebooting Einstein's ideas about local-causality, an engineer brings local-causality to quantum theory via operators and variables in 3-space. Taking realism to be the view that external reality exists and has definite properties, his core principle is common-sense local realism (CLR): the union of local-causality (no causal influence propagates superluminally) and physical-realism (some physical properties change interactively). Endorsing Einstein-separability — system X is independent of what is done with system Y that is spatially separated from X — Bell's famous mission is advanced. That is, by means of parameters λ, a more complete specification of EPRB's physics is successful. A consequent locally-causal refutation of Bell's theorem allows EPRB correlations to be explained in a classical way, in line with Einstein's ideas, without reference to Hilbert space, quantum states, etc. Conclusion: Bell's theorem is based on a mathematical error; an error in reduction is inconsistent with Bell's opening assumptions.
[3635] vixra:1609.0384 [pdf]
An Appell Series Over Finite Fields
In this paper we give a finite field analogue of one of the Appell series and obtain some transformation and reduction formulae and the generating functions for the Appell series over finite fields.
[3636] vixra:1609.0374 [pdf]
Collatz Conjecture for $2^{100000}-1$ is True - Algorithms for Verifying Extremely Large Numbers
Collatz conjecture (or the 3x+1 problem) has been open for about 80 years. The verification of the Collatz conjecture has so far reached numbers of about 60 bits. In this paper, we propose new algorithms that can verify whether a number of about 100000 bits (30000 digits) returns to 1 after repeated 3*x+1 and x/2 computations. This is the largest number that has been verified to date. The proposed algorithm replaces numerical computation with bit computation, so that extremely large numbers (without upper bound) can be verified. We discovered that $2^{100000}-1$ returns to 1 after 481603 applications of 3*x+1 and 863323 applications of x/2.
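The step-counting described here can be reproduced for smaller inputs with arbitrary-precision integers; the bitwise tests and shifts below stand in for the paper's bit-level computation, though this is our own sketch, not the authors' algorithm, and the helper name is an assumption.

```python
def collatz_counts(n):
    """Count the number of 3x+1 steps and x/2 steps taken before n
    reaches 1 under the Collatz map. Python's arbitrary-precision
    integers make this work for inputs of any bit length; parity is
    tested with n & 1 and halving is done with a right shift."""
    odd_steps = even_steps = 0
    while n != 1:
        if n & 1:            # odd: apply 3x+1
            n = 3 * n + 1
            odd_steps += 1
        else:                # even: halve via a right shift
            n >>= 1
            even_steps += 1
    return odd_steps, even_steps
```

For small inputs the counts can be checked by hand, e.g. 5 → 16 → 8 → 4 → 2 → 1 gives `collatz_counts(5) == (1, 4)`; the paper reports the pair (481603, 863323) for $2^{100000}-1$, which this routine would reproduce given enough time.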
[3637] vixra:1609.0373 [pdf]
Induction and Code for Collatz Conjecture or 3x+1 Problem
Collatz conjecture (or the 3x+1 problem) has not been proved true or false for about 80 years. Progress on this problem seems to call for a totally new method. In this paper, a mathematical induction method is proposed whose proof would lead to a proof of the conjecture. Based on the induction, a new representation (for dynamics) called ``code'' is introduced, to represent the $3*x+1$ and $x/2$ computations that occur during the process from the starting number to the first transformed number that is less than the starting number. In a code, $3*x+1$ is represented by 1 and $x/2$ is represented by 0. We find that code is a building block of the original dynamics from starting number to 1, and thus is more primitive for modeling quantitative properties. Some properties exist only in dynamics represented by code, not in the original dynamics. We discover and prove some inherent laws of code formally. Code as a whole is prefix-free, and has a unified form. Every code can be divided into code segments, and each segment has the form $\{10\}^{p \geq 0}0^{q \geq 1}$. Moreover, $p$ can be computed by judging whether $x \in[0]_2$ or $x\in[1]_4$, or computed from $t=(x-3)/4$, without any concrete computation of $3*x+1$ or $x/2$. In particular, starting numbers in certain residue classes have the same code, and their code has a short length. That is, $CODE(x \in [1]_4)=100,$ $CODE((x-3)/4 \in [0]_4)=101000,$ $CODE((x-3)/4 \in [2]_8)=10100100,$ $CODE((x-3)/4 \in [5]_8)=10101000,$ $CODE((x-3)/4 \in [1]_{32})=10101001000,$ $CODE((x-3)/4\in [3]_{32})=10101010000,$ $CODE((x-3)/4\in [14]_{32})=10100101000.$ The experimental results again confirm these discoveries. We also give a conjecture on $x \in [3]_4$ and an approach to the proof of the Collatz conjecture. These discoveries support the proposed induction and are helpful for the final proof of the Collatz conjecture.
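The "code" of a starting number, as defined in the abstract, can be computed directly: record 1 for each 3x+1 step and 0 for each x/2 step until the trajectory first drops below the starting number. This sketch (our own, with an assumed function name) reproduces the residue-class examples quoted above.

```python
def collatz_code(x):
    """Return the 'code' of starting number x: the string of moves
    (1 for 3x+1, 0 for x/2) made until the Collatz trajectory first
    falls below x. Follows the definition in the abstract; assumes x >= 2."""
    code, n = "", x
    while n >= x:
        if n % 2:
            n = 3 * n + 1
            code += "1"
        else:
            n //= 2
            code += "0"
    return code
```

For example, x = 5 is in [1]_4 and 5 → 16 → 8 → 4 gives code "100"; x = 3 has (x-3)/4 = 0 in [0]_4 and gives "101000"; x = 7 has (x-3)/4 = 1 in [1]_32 and gives "10101001000", matching the list in the abstract.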
[3638] vixra:1609.0348 [pdf]
An Attempt to Explain Flyby Anomaly And to Account for the Anomalous Torque of the Gravity Probe-b Gyroscopes Using LITG
To explain the flyby anomaly we fully apply the equivalence principle and allow the observer on board the free-falling craft to claim the state of rest. The telemetry photons are considered as particles possessing mass due to their movement at the speed of light, and are assumed to generate their respective gravitomagnetic fields according to LITG. The telemetry photons are emitted from the craft and can only be judged by an observer on board the craft. The observer on board the spacecraft will claim that the Earth is moving with the same velocity in the opposite direction, so the effect will be detected in the frame of reference attached to the craft: the Earth generates its respective gravitomagnetic field due to its relative motion as claimed by the observer on board the craft. As we show, the flyby effect is highly dependent on the way we observe it. As for the Gravity Probe B case, we insist that the equivalence principle must be fully applied. An observer in a free-falling frame therefore has the right to claim being at rest, while the rest of the Universe moves with the same velocity in the opposite direction. So from the point of view of the spinning gyroscope, the Earth orbits the gyroscope. We usually call this an apparent revolution, but for the gyroscope this apparent revolution of the Earth can produce measurable effects. Using this reasoning and applying LITG we obtain a field which is about 105.39 times greater than the expected one.
[3639] vixra:1609.0341 [pdf]
Modeling the "Falling Slinky"
This document attempted to obtain analytical solutions for the "Falling Slinky" in three ways. The first two, which used the wave equation, failed for different reasons. The first attempt used Fourier series, which could not satisfy the initial and boundary conditions. The second attempt used Laplace transforms; this method did give a solution, which correctly predicted the acceleration of the center of mass, but which predicted that the upper part of the Slinky should "fall through" the lower part, which is impossible. This false prediction is not a defect of the use of Laplace transforms, but an artifact of the use of the wave equation to treat the falling Slinky, which is a shock-wave phenomenon. The third attempt at a solution used the impulse-momentum theorem, obtaining a result whose predictions are internally consistent, as well as agreeing with empirical results. However, we must point out that the model used here treats only the Slinky's longitudinal behavior, ignoring its torsional behavior.
[3640] vixra:1609.0319 [pdf]
Silicene Superconductivity Due to the Kapitza-Dirac Effect
We consider the Kapitza-Dirac configuration for the generation of standing waves. Electrons are then diffracted by the standing waves and the Bragg equation is valid. The situation is also considered in the plane and in three dimensions. The electron-photon system forms an electron-photon superconductor. The Kapitza-Dirac effect is then applied to silicene.
[3641] vixra:1609.0318 [pdf]
Mechanical Behaviors of Banana Fibres with Different Mechanical Properties
The world is currently focusing on alternative material sources that are environmentally friendly and biodegradable in nature. Owing to increasing environmental concerns, biocomposites made of natural fibre and polymeric resin are one of the recent developments in industry and constitute the scope of the present experimental work. The use of composite materials in engineering is increasing gradually. A composite consists of two main phases, matrix and fibre. The availability of natural fibres and the simplicity of manufacture have tempted researchers worldwide to try locally available, inexpensive fibres and to study their feasibility as reinforcement, and the extent to which they satisfy the specifications required of good reinforced polymer composites for structural applications. Fibre-reinforced polymer composites have numerous advantages, such as relatively low production cost, ease of fabrication and better strength than neat polymer resins; for this reason they are used in a variety of applications as a class of structural material. This work describes the fabrication and mechanical behaviour of banana fibre reinforced polymer composites at varying fibre content (25%, 30%, 35%) with silicon carbide at 4%, 8% and 12% respectively. Tensile, hardness and bending tests are carried out and the mechanical properties of the composite material are studied.
[3642] vixra:1609.0309 [pdf]
Comment on the Isotropic Expansion of the Universe
Saadeh et al. recently reported in Phys. Rev. Lett. that "anisotropic expansion of the Universe is strongly disfavoured, with odds of 121,000:1 against", using CMB temperature and polarisation data from the WMAP and Planck satellites. However, it is impossible to determine anything about the expansion of the Universe from the WMAP and Planck datasets, for a number of reasons.
[3643] vixra:1609.0273 [pdf]
The Observer Effect
This paper discusses how an observer, when properly defined, can lead to a different interpretation of the universe by providing a way to connect General Relativity and Quantum Mechanics. First, this was used to investigate the mass of the Milky Way Galaxy, and the result suggests that dark matter does not exist. Then, Hubble's Law was examined, and we were led to the same conclusion, i.e., dark energy does not exist. Finally, the linkage of quantum mechanics and relativity provides another explanation of the cosmic microwave background (CMB) and casts doubt on the Big Bang theory. All of these observations and conclusions were made possible by examining the philosophical foundation provided by a better understanding of how intelligent life forms make sense of the physical world. First, we discuss the intelligent life forms that are responsible for all observations and theories related to the universe. With this new understanding of ourselves, from a physics perspective, a philosophy emerges that alters our understanding of space and time. A theory is developed on this philosophical foundation that provides a way to connect the background dependence of quantum mechanics and the background independence of general relativity.
[3644] vixra:1609.0272 [pdf]
Comment on the Black Hole in Markarian 1018
It has recently been reported in the journal 'Astronomy and Astrophysics' that the active galactic nucleus of Markarian 1018 has likely changed optical type due to the effects of a supermassive black hole or a binary system consisting of two such black holes. It is however impossible for any type or form of black hole to be involved with Mrk 1018 because the mathematical theory of black holes violates the rules of pure mathematics.
[3645] vixra:1609.0230 [pdf]
The Recycling Gibbs Sampler for Efficient Learning
Monte Carlo methods are essential tools for Bayesian inference. Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning, and statistics, employed to draw samples from complicated high-dimensional posterior distributions. The key point for the successful application of the Gibbs sampler is the ability to draw samples efficiently from the full-conditional probability density functions. Since in the general case this is not possible, in order to speed up the convergence of the chain, it is required to generate auxiliary samples whose information is eventually disregarded. In this work, we show that these auxiliary samples can be recycled within the Gibbs estimators, improving their efficiency with no extra cost. This novel scheme arises naturally after pointing out the relationship between the standard Gibbs sampler and the chain rule used for sampling purposes. Numerical simulations involving simple and real inference problems confirm the excellent performance of the proposed scheme in terms of accuracy and computational efficiency. In particular we give empirical evidence of performance in a toy example, inference of Gaussian process hyperparameters, and learning dependence graphs through regression.
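The recycling idea can be illustrated on a toy bivariate Gaussian target, where the full conditionals are known in closed form. The sketch below is ours, not the authors' scheme: each coordinate update draws several auxiliary samples, of which standard Gibbs would keep only the last, while here every draw is folded into the estimator at no extra sampling cost. The function name, the `aux` parameter and the target are all assumptions for illustration.

```python
import random

def recycling_gibbs(rho, iters=5000, aux=5, seed=1):
    """Estimate E[X] = 0 for a zero-mean bivariate Gaussian with unit
    variances and correlation rho, via Gibbs sampling with recycling.
    Full conditionals: X | Y=y ~ N(rho*y, 1-rho^2), and symmetrically.
    Each x-update draws `aux` samples from the full conditional; the
    chain keeps only the last draw, but ALL draws enter the estimator."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    sum_x, n_x = 0.0, 0
    for _ in range(iters):
        draws = [rng.gauss(rho * y, sd) for _ in range(aux)]
        sum_x += sum(draws)      # recycle every auxiliary draw of x
        n_x += aux
        x = draws[-1]            # the chain itself keeps only the last
        y = rng.gauss(rho * x, sd)
    return sum_x / n_x           # estimate of E[X] (true value 0)
```

Since the true mean is zero, the returned estimate should be close to 0 for moderate correlation, with the recycled draws reducing its variance relative to keeping one draw per sweep.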
[3646] vixra:1609.0151 [pdf]
The Lorentz Transformation at the Maximum Velocity for a Mass
Haug [1, 2] has recently shown that there is a speed limit for fundamental particles just below the speed of light. This speed limit means that the mass of a fundamental particle will not go towards infinity as v approaches c in Einstein's relativistic mass equation. The relativistic mass limit for a fundamental particle is the Planck mass. In this paper we use the same velocity limit in the Lorentz transformation. This leads to what we think could be significant results with some interesting interpretations. In addition we look at rapidity as well as relativity of simultaneity for subatomic particles at this maximum velocity for masses.
[3647] vixra:1609.0133 [pdf]
Five Hundred Deep Learning Papers, Graphviz and Python
I invested days creating a graph with PyGraphviz to represent the evolutionary process of deep learning's state of the art for the last twenty-five years. Through this paper I want to show you how and what I obtained.
[3648] vixra:1609.0119 [pdf]
Chemical Reaction Paths in Classical Potentials
Chemical reaction dynamics are usually tackled within the framework of Quantum Mechanics which can be computationally demanding. Here we suggest to use the energy-dependent Hamilton-Jacobi description with a classical reactive force field to obtain the most probable path. This may enable to calculate the reaction rate.
[3649] vixra:1609.0110 [pdf]
Quantum Hall Effect for Dyons
Considering the generalized charge and generalized four-potential associated with dyons as complex quantities, with their real and imaginary parts as electric and magnetic constituents, in the present discussion we construct a gauge-covariant and rotationally symmetric angular momentum operator for dyons in order to analyze the integer and fractional quantum Hall effect. It is shown that the commutation relations of the angular momentum operator possess a higher symmetry, reproducing the eigenvalues and eigenfunctions of the lowest Landau level (LLL) for the quantum Hall system. The LLL is also constructed in terms of the first Hopf map $\left(S^{3}\rightarrow S^{2}\right)$, and it is concluded that dyons are a suitable object with which to investigate the existence of the quantum Hall effect (both integer and fractional).
[3650] vixra:1609.0108 [pdf]
From Boundary Thermodynamics Towards a Quantum Gravity Approach
We examine how a thermodynamic model of the boundary of 4d-manifolds can be used for an approach to quantum gravity, to keep the number of assumptions low and the quantum degrees of freedom manageable. We start with a boundary action leading to Einstein's Equations under a restriction due to additional information from the bulk. Optionally, a modified form with torsion can be obtained. From the thermodynamic perspective, the number of possible microscopic states is evaluated for every macroscopic configuration, and this allows us to compute the transition probability between quantum states. The formalism does not depend on specific microscopic properties. The smoothness and the topological space condition of the manifold structure are viewed as a preferred representation of a macroscopic space on mathematical grounds. By construction, gravity may be interpreted as a thermodynamic model which is forced to be out of equilibrium depending on the restrictions imposed by matter. Instead of an ill-behaved path integral description of gravity, we obtain a non-divergent concept of sums over microstates.
[3651] vixra:1609.0092 [pdf]
Beyond Quantum Fields: A Classical Fields Approach to QED
A classical field theory is introduced that is defined on a tower of dimensionally increasing spaces and is argued to be equivalent to QED. The domain of dependence is discussed to show how an equal-times picture of the many-coordinate space gives QED results as part of a well-posed initial value formalism. Identical particle symmetries are not, a priori, required but when introduced are clearly propagated. This construction uses only classical fields to provide some explanation for why quantum fields and canonical commutation results have been successful. Some old and essential questions regarding causality of propagators are resolved. The problem of resummation, generally forbidden for conditionally convergent series, is discussed from the standpoint of particular truncations of the infinite tower of functions and a two-step adiabatic turn-on for scattering. As a result of this approach it is shown that the photon inherits its quantization $\hbar\omega$ from the free Lagrangian of the Dirac electrons despite the fact that the free electromagnetic Lagrangian has no $\hbar$ in it. This provides a possible explanation for the canonical commutation relations for quantum operators, $[P,Q] = i\hbar$, without ever needing to invoke such a quantum postulate. The form of the equal-times conservation laws in this many-particle field theory suggests a simplification of the radiation reaction process for fields that allows QED to arise from a sum of path integrals in the various particle time coordinates. A novel method of unifying this theory with gravity, but that has no obvious quantum field theoretic computational scheme, is introduced.
[3652] vixra:1609.0083 [pdf]
A New Solution to Einstein's Infinite Mass Challenge Based on Maximum Frequency
In 1905, Einstein presented his famous relativistic mass-energy expression, mc^2/sqrt(1 - v^2/c^2). When v approaches c, the expression containing the moving mass approaches infinity. Einstein interpreted this in the following way: since one needs an infinite amount of energy to accelerate even a small mass to the speed of light, it would appear that no mass can ever reach the speed of light. In this paper, we present a new solution to the infinite mass challenge based on combining special relativity with insights from Max Planck and a maximum frequency. By doing this we show that there is an exact limit on the speed v in Einstein's formula. This limit depends only on the Planck length and the reduced Compton wavelength of the mass in question. This also gives a limit on the maximum relativistic Doppler shift, which is derived and discussed.
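A hedged numerical sketch of what such a limit would imply. The exact formula below is an assumption inferred from the abstract's description (a maximum speed depending only on the Planck length l_p and the reduced Compton wavelength λ̄), namely v_max = c·sqrt(1 - l_p²/λ̄²); under that assumption the Lorentz factor at v_max is λ̄/l_p, and the relativistic mass of any particle at its limiting speed comes out as the Planck mass:

```python
import math

# Assumed limiting-speed formula (not taken verbatim from the paper):
#   v_max = c * sqrt(1 - l_p**2 / lam_bar**2)
hbar = 1.054571817e-34  # J*s, reduced Planck constant
c = 2.99792458e8        # m/s, speed of light
G = 6.67430e-11         # m^3/(kg*s^2), gravitational constant

l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.616e-35 m
m_e = 9.1093837015e-31             # electron mass, kg
lam_bar = hbar / (m_e * c)         # reduced Compton wavelength, ~3.86e-13 m

# Under the assumed formula, gamma at v_max is exactly lam_bar / l_p,
# so m_e * gamma equals the Planck mass sqrt(hbar*c/G).
gamma_max = lam_bar / l_p
m_planck = math.sqrt(hbar * c / G)

print(gamma_max)                    # ~2.39e22
print(m_e * gamma_max / m_planck)   # ~1.0
```

The algebra behind the last line is exact: m·γ_max = m·λ̄/l_p = ħ/(c·l_p) = √(ħc/G), independent of the particle's mass.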
[3653] vixra:1609.0062 [pdf]
Optimization of Supercritical Fluid Consecutive Extractions of Fatty Acids and Polyphenols from Vitis Vinifera Grape Wastes
In this study, supercritical fluid extraction has been successfully applied to a sequential fractionation of fatty acids and polyphenols from wine wastes (2 different Vitis vinifera grapes). To this aim, in a first step just the fatty acids were extracted, and in a second one the polyphenols. The variables that affected the extraction efficiency were separately optimized in both steps following an experimental design approach. The effects of extraction temperature, flow, pressure, and time were thoroughly evaluated for the extraction of fatty acids, whereas the addition of methanol was also considered in the case of the polyphenol extraction. A quantitative extraction with high efficiency was achieved in a very short time and at low temperatures. Concerning quantification, fatty acids were determined by means of gas chromatography coupled to mass spectrometry after a derivatization step, whereas the polyphenols were analyzed by means of high performance liquid chromatography coupled to tandem mass spectrometry and the Folin–Ciocalteu method.
[3654] vixra:1609.0019 [pdf]
Recognition and Tracking Analytics for Crowd Control
We explore and apply several forms of image analysis in order to monitor the condition and health of a crowd. Stampedes, congestion, and traffic all occur as a result of inefficient crowd management. Our software identifies congested areas and determines solutions to avoid congestion based on live data. The data is processed by a local device fed via camera. This method was tested in simulation and proved to create a more efficient and congestion-free scenario. Future plans include depth sensing for automatic calibration and suggested courses of action.
[3655] vixra:1609.0012 [pdf]
On Transformation and Summation Formulas for Some Basic Hypergeometric Series
In this paper, we give alternate and simple proofs of Sears' three-term 3φ2 transformation formula, Jackson's 3φ2 transformation formula, and a nonterminating form of the q-Saalschütz sum by using q-exponential operator techniques. We also give an alternate proof of a nonterminating form of the q-Vandermonde sum. We also obtain some interesting special cases of all three identities, some of which are analogous to identities stated by Ramanujan in his lost notebook.
[3656] vixra:1608.0449 [pdf]
The Proof of Fermat's Last Theorem
We first prove a weak form of Fermat's Last Theorem; this unique lemma is key to the entire proof. A corollary and lemma follow inter-relating Pythagorean and Fermat solutions. Finally, we prove Fermat's Last Theorem.
[3657] vixra:1608.0439 [pdf]
Cycle and the Collatz Conjecture
We study cycles in the Collatz conjecture, and something about them surprised us. Our goal is to show that there is no Collatz cycle other than the trivial one.
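The cycle question can be probed numerically; a minimal sketch (not the paper's argument) that checks every small starting value falls into the trivial 4 → 2 → 1 loop rather than some other cycle:

```python
def collatz_reaches_one(n, max_steps=10_000):
    """Iterate the Collatz map n -> n/2 (even) or 3n+1 (odd);
    return True if the trajectory reaches 1 within max_steps."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

# Every starting value tested reaches 1, i.e. no non-trivial cycle
# (and no divergence) shows up in this range.
assert all(collatz_reaches_one(n) for n in range(1, 10_000))
```

Of course, no finite search settles the conjecture; this only illustrates what a Collatz cycle would have to evade.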
[3658] vixra:1608.0429 [pdf]
Expansion of the Euler Zigzag Numbers
This article looks for a closed-form expression related to the odd zeta function values and explains the meaning of the expansion of the Euler zigzag numbers.
[3659] vixra:1608.0400 [pdf]
Sentiment Analysis of Twitter Data and the Efficient Market Hypothesis
This thesis discusses the claim of 16 computational finance articles according to which it is possible to predict the stock market using sentiment analysis of social media data. The purpose of this paper is to investigate whether this is indeed true or not. In economic theory, the efficient market hypothesis states that markets are not predictable, that they follow a random walk and that irrational behaviour cancels out in the aggregate. However, behavioural economics research shows that investors are in fact subject to predictable biases which affect the markets. This study uses data from the WeFeel project that analyses tweets in English to infer social mood on a world scale. It also uses data from the Wilshire 5000 index from June 2014 to March 2015. The hypothesis is that changes in aggregate mood arousal mediate stock market fluctuations. Yet linear regression shows that there is no relation between emotional arousal and the stock market, nor between primary emotions and the stock market. Hence, the conclusion is that global social sentiment as derived from social media has no relation with stock market fluctuations. Further research may better focus on social media specialised in the stock markets, such as finance micro-blogging data. Keywords: sentiment analysis, efficient market hypothesis, social networks, computational finance, behavioural finance, stock market, emotion recognition, stock market prediction, social sentiment, behavioural economics
[3660] vixra:1608.0395 [pdf]
The Topology on a Complete Semilattice
We define the topology atop(χ) on a complete upper semilattice χ = (M, ≤). The limit points are determined by the formula lim_D(X) = sup{a ∈ M | {x ∈ X | a ≤ x} ∈ D}, where X ⊆ M is an arbitrary set and D is an arbitrary non-principal ultrafilter on X. We investigate the properties of lim_D(X) and of the topology atop(χ). In particular, we prove the compactness of the topology atop(χ).
[3661] vixra:1608.0380 [pdf]
Tiling Hexagons with Smaller Hexagons and Unit Triangles
This is a numerical study of the combinatorial problem of packing hexagons of some equal size into a larger hexagon. The problem is well defined if all hexagon edges have integer length and if their centers and vertices share the common lattice points of a triangular grid with unit distances.
[3662] vixra:1608.0373 [pdf]
Astrological Darwinism
Astrological Darwinism centers around two axioms. The first is the axiom of `chosen wave function collapse' and its subset of `chosen mutations'. It is the Bergsonian `élan vital' of the soul that makes the choices by quantum looking and thus acts as a metaphysical or vital `hidden variable' causing genetic mutations. The second axiom is that the capacity of complex, multi-cellular life forms to apply the `chosen mutations' is cyclic, following the 26,000-year precession of the equinox. Astrological Darwinism's Cycle of Life can be projected upon the Zodiac of Dendera, as it can be found in the book of Schwaller de Lubicz. During the Age of Leo, the capacity to produce `chosen mutations' is at its peak, and during the Age of Aquarius this capacity is at its minimum. The evolution of humanity is cyclic and can be characterized by Great Years of cyclic appearance of creative genetic boosts and subsequent expansion of the fittest. The most obvious example is the Upper Paleolithic Great Year. The present one started with the Neolithic agrarian revolution and will have its expansionist peak during the upcoming Age of Aquarius. Astrological Darwinism will be put in contrast to Neodarwinism in its twenty-first-century version of the Everett Many Worlds Darwinism scenario, the last being part of the Anthropic/Multiverse narrative. Astrological Darwinism needs Quantum Biology and ultimately Quantum Gravity Biology as its natural environment. Astrological Darwinism is a metaphysical narrative with implications for biology and evolution but without any implications for physics, because it strictly follows Bohr's Copenhagen Interpretation in combination with his concept of complementary principles for animate and inanimate matter.
[3663] vixra:1608.0328 [pdf]
Additional Solutions of the Limiting Case "CLP" of the Problem of Apollonius via Vector Rotations using Geometric Algebra
This document adds to the collection of solved problems presented in References [1]-[6]. The solutions presented herein are not as efficient as those in [6], but they give additional insight into ways in which GA can be used to solve this problem. After reviewing, briefly, how reflections and rotations can be expressed and manipulated via GA, it solves the CLP limiting case of the Problem of Apollonius in three ways, some of which identify the solution circles' points of tangency with the given circle, and others of which identify the solution circles' points of tangency with the given line. For comparison, the solutions that were developed in [1] are presented in an Appendix.
[3664] vixra:1608.0317 [pdf]
Poly-Complex Clifford Algebra and Grand Unification
An algebra for unit multivector components for a manifold of five poly-complex dimensions is presented. The algebra has many properties that suggest it may provide a basis for a grand unification theory.
[3665] vixra:1608.0267 [pdf]
Characteristics of a One-Dimensional Universe Spanned Between a Local and a Non-Local Observer
Special and general relativity theories are critically evaluated regarding their contemporary role as a foundation for a cosmological world picture. It is argued that the rest frame, where all physical processes take place, is more important in this role than the various relativistic distortions of these processes seen by different remote observers. This idea was previously formulated quantitatively with numerical examples from the Bohr atom, quantum physics and astrophysical observations. The theory identifies an observer on one local spatial dimension via Lorentz transformations connected with a space-like separated perpendicular observer who is non-local and only measures time. It was shown that this geometrical construction, where each unit local length comes with a line increment, is relevant both to the atom and to the universe. For example, the Planck length obtained from the Bohr atom could be expressed in terms of the apparent local Hubble expansion rate and the latter substituted into the Schroedinger equation to yield a circular current surrounding a magnetic pole. The distant non-local observer sees the radius Lorentz-contracted at relativistic speeds ultimately so much as to be able to contribute dynamics to the local frame, which was exemplified numerically by the CMBR. Evidence was also presented indicating that the oscillating line increment is capable of contributing mass from vacuum via the resonance particles. Elaborating on the latter idea indicates energy contributions of around 80, 90 and 125 GeV embedded in a robustly defined geometrical framework that has relevance (and even precedence) also in classical physics. The apparent transition from one to several spatial dimensions is exemplified by reinterpreting Compton scattering. The emergence of additional spatial dimensions and tangible locality are also discussed in terms of the number pi which appears by applying the Wallis product to the 1-D universe. 
The presence of the number pi thus indicates the presence of local particles as further exemplified by the CMBR and Compton scattering. The mass of the 1-D universe is obtained by considering local as well as non-local contributions as prescribed on the basis of the geometry. This yields corrections to the ‘classical’ geometrised mass such that the universe’s baryon particle density visible on the local axis is close to its electron density. Several unrelated numerical approaches guided by the proposed geometry indicate that the particle density of a primordial universe is roughly 1/m^3 (1/m^2).
[3666] vixra:1608.0263 [pdf]
The Apollonius Circles of Rank k
In this paper, the notion of Apollonius circle of rank k is introduced and a number of results related to the classical Apollonius circles are generalized.
[3667] vixra:1608.0240 [pdf]
Combinatorial Preon Model for Matter and Unification
I consider a preon model for quarks and leptons based on constituents defined by mass, spin and charge. The preons form a finite combinatorial system for the standard model fermions. The color and weak interaction gauge structures can be deduced from the preon bound states. By applying the area eigenvalues of loop quantum gravity to black hole preons, one gets a preon mass spectrum starting from zero. A gravitational baryon number non-conservation mechanism is obtained. An argument is given for a unified field theory based only on gravitational and electromagnetic interactions of preons.
[3668] vixra:1608.0234 [pdf]
Infinitudinal Complexification
To the undoubted displeasure of very many detractors, this research program has heretofore focused on aspects of physics so fundamental that many of said detractors do not even acknowledge the program as physics. This paper responds to detractors' criticisms by continuing the program in the same direction and style as earlier work. We present one new quantitative result regarding the big bang and we find a particularly nice topic from fluid dynamics for qualitative treatment. A few other topics are discussed and we present quantitative results regarding the fine structure constant and the differential operator form of $\hat{M}^3$. This paper is somewhat reiterative as it calls attention to directions for further inquiry and continues to leave the hashing out of certain details to either a later effort or the eventual publication of results by those who have already hashed it out, possibly several years ago by now.
[3669] vixra:1608.0227 [pdf]
Generally Covariant Quantum Theory: Gravitons.
We finalize the project initiated in [1, 2, 3, 4, 7, 8, 9, 10] by studying graviton theory in our setting. Given the results in [1, 3, 9, 10], there is not so much left to accomplish, and we start by deepening our understanding of some points left open in [1, 10]. Perturbative finiteness of the theory follows verbatim from the analysis in [3, 9], and we do not bother here about writing it down explicitly. Rather, our aim is to provide a couple of new physical and mathematical insights regarding the genesis of the structure of the quantal graviton theory.
[3670] vixra:1608.0223 [pdf]
Computational Fluid Dynamic Analysis of Aircraft Wing with Assorted Flap Angles at Cruising Speed
An aircraft wing is manufactured from composite materials, with the fibres in each ply aligned in multiple directions. Different airfoil thicknesses and layer directions were considered to study the bending-torsion behaviour. These laminates are designed by choosing the layers, the stacking sequence, and the geometrical and mechanical properties; a finite number of layers can be combined to form many laminates. The wing loading due to self-weight, the weight of the propulsion systems, and gravitational acceleration was considered, and the resulting deflection determined; this is the subject of aeroelasticity. The aircraft wing is severely affected by loads in the spanwise and vertical directions. A NACA 2412 airfoil was taken for designing the wing, and it was scaled through a profile with a calculated wingspan to obtain the wing model. FLUENT and CFX were used for computational fluid dynamic analysis to determine the lift and drag of the wing with the flaps at zero degrees and at other angles. By this we intend to show how fast flap retraction affects the drag and lift of the aircraft at cruising speed.
[3671] vixra:1608.0217 [pdf]
Simplified Solutions of the CLP and CCP Limiting Cases of the Problem of Apollonius via Vector Rotations using Geometric Algebra
The new solutions presented herein for the CLP and CCP limiting cases of the Problem of Apollonius are much shorter and more easily understood than those provided by the same author in References 1 and 2. These improvements result from (1) a better selection of angle relationships as a starting point for the solution process; and (2) better use of GA identities to avoid forming troublesome combinations of terms within the resulting equations.
[3672] vixra:1608.0215 [pdf]
Theory of Gravity: a Classical Field Approach in Curved Space-Time
A new approach to the theory of gravity is proposed. A second-rank tensor field is chosen to be the potential of the gravitational field. The gravitational field is related to the metric tensor of space-time, and all phenomena occur in curved space-time. A variational principle is established, and the gravitational field equations are derived. The energy-momentum density tensor of the gravitational field and its conservation law are obtained. The source of the gravitational field is the energy-momentum density of all kinds of matter, including the gravitational field itself. A Lagrangian of the gravitational field is proposed that correctly describes local observable gravitational phenomena in the second-order approximation. The energy density of the gravitational field is positive. Estimates are obtained for the gravitational energy defect, the difference between the inertial and gravitational masses of a body, and the effect of an external gravitational field on the mass of a body. The new approach to the description of the gravitational field and its energy provides additional incentives to search for possibilities of experimental verification of gravitational phenomena in strong fields.
[3673] vixra:1608.0165 [pdf]
Generally Covariant Quantum Theory: Non-Abelian Gauge Theories.
We further investigate the new project initiated in [1, 2, 3, 4, 7, 8, 9] by generalizing non-abelian gauge theory to our setting. Given the results in [3, 9], there is not much left to do, and we shall deepen our understanding of some points left open in [1] regarding the nature and presence of ghosts. Perturbative finiteness of the theory follows verbatim from the analysis in [3, 9], and we shall not bother here about writing it down explicitly. Rather, our aim is to provide a couple of new physical and mathematical insights regarding the genesis of the structure of quantal non-abelian gauge theory.
[3674] vixra:1608.0155 [pdf]
Exact Solutions for Sine-Gordon Equations and F-Expansion Method
A large number of methods have been proposed for solving nonlinear differential equations. The Jacobi elliptic function method and the f-expansion method are generalizations of a few of them. These methods produce not only single-soliton but also multi-soliton solutions. In this work we applied the f-expansion method and found novel solutions, besides those already known, for three main equations of the sine-Gordon kind: Triple Sine-Gordon (TSG), Double Sine-Gordon (DSG) and Simple Sine-Gordon (SSG).
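For orientation, the simple sine-Gordon equation and its textbook single-kink solution, in dimensionless units (standard results; the paper's novel TSG/DSG solutions are not reproduced here):

```latex
u_{tt} - u_{xx} + \sin u = 0,
\qquad
u(x,t) = 4\arctan\!\left[\exp\!\left(\frac{x - vt}{\sqrt{1 - v^2}}\right)\right],
\quad |v| < 1 .
```

The kink interpolates between the vacua $u = 0$ and $u = 2\pi$; multi-soliton solutions of the f-expansion type generalize this basic profile.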
[3675] vixra:1608.0153 [pdf]
The Cartan Model for Equivariant Cohomology
In this article, we will discuss a new operator $d_{C}$ on $W(\mathfrak{g})\otimes\Omega^{*}(M)$ and construct a new Cartan model for equivariant cohomology. We use the new Cartan model to construct the corresponding BRST model and Weil model, and discuss the relations between them.
[3676] vixra:1608.0149 [pdf]
On a Global Relative Revolution of the Universe Around Earth Induced by its Spin and the Outlines for a New Mechanism for Magnetic Fields Generation
Relative motion, in the special theory of relativity, can have true and verifiable results. But we have ignored it in the case of the rotation of the Earth and other planets and cosmic objects around their own axes. My aim is to find the Earth's Resultant Inertial Rotation (ERIR). This ERIR results from the curved path due to gravity and the circular path of an observer due to rotation, out of the whole rotation. For this ERIR an observer can assume the state of rest, while the whole observable universe revolves relatively around him in the opposite direction. This Universe's Relative Revolution (URR) will be displayed in conformity with circular motion laws. I have found the equation to describe this type of ERIR. I used this equation and postulated that aberration of the light of distant objects would allow us to see a component of the tangential velocity produced by the URR along our line of sight. I reinterpreted the Hubble phenomenon, showed that the phenomenon is different for different cosmic objects, and predicted a blue-shift on the other side of the sky, mostly behind the zone of avoidance. The dependence of Hubble's constant on the aberration angle is emphasized. Accordingly, we conclude that the Great Attractor, the Virgo infall, the CMB dipole, dark energy, the Fingers of God and similar theories were based on illusions. All the anomalies of the CMB mapping, like the axis of evil, can be naturally explained. The Pioneer effect can also be explained, with the diurnal and annual variations of the effect accounted for. It is also possible, using this global URR, to find a universal mechanism for magnetic field generation which could be applied to all cosmic objects, from asteroids to magnetars and even galaxies. Using this idea I predicted a magnetic field on Ceres twice as strong as that of Mercury. Only the outlines of this new mechanism will be given, so that other investigators can develop it further.
[3677] vixra:1608.0144 [pdf]
On the Properties of the Generalized Multiplicative Coupled Fibonacci Sequence of Rth Order
Coupled Fibonacci sequences of lower order have been generalized in a number of ways. In this paper the multiplicative coupled Fibonacci sequence is generalized to the rth order, with some new interesting properties.
[3678] vixra:1608.0140 [pdf]
On the Properties of k-Fibonacci and k-Lucas Numbers
In this paper, some properties of k-Fibonacci and k-Lucas numbers are derived and proved using the matrices S and M. The identities we prove are not encountered in the k-Fibonacci and k-Lucas number literature.
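The matrix approach can be sketched as follows. The matrix S = [[k, 1], [1, 0]] is the standard generator of the k-Fibonacci numbers (F_{k,0} = 0, F_{k,1} = 1, F_{k,n+1} = k·F_{k,n} + F_{k,n-1}); the paper's specific matrices S and M and its new identities are not reproduced here, but the classical Cassini-style identity drops straight out of det(S)^n:

```python
def k_fib_matrix(k, n):
    """Return S**n = [[F_{k,n+1}, F_{k,n}], [F_{k,n}, F_{k,n-1}]]
    for S = [[k, 1], [1, 0]], using exact integer arithmetic."""
    a, b, c, d = 1, 0, 0, 1               # start from the identity matrix
    for _ in range(n):
        a, b, c, d = k * a + c, k * b + d, a, b   # left-multiply by S
    return [[a, b], [c, d]]

# Cassini-style identity F_{k,n+1}*F_{k,n-1} - F_{k,n}**2 = (-1)**n,
# which is just det(S)**n = (-1)**n.
for k in (1, 2, 3):
    for n in range(1, 15):
        M = k_fib_matrix(k, n)
        assert M[0][0] * M[1][1] - M[0][1] ** 2 == (-1) ** n
```

For k = 1 this reduces to the ordinary Fibonacci numbers, e.g. `k_fib_matrix(1, 10)[0][1]` is F_10 = 55.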
[3679] vixra:1608.0135 [pdf]
Determinantal Identities for the k-Lucas Sequence
In this paper, we define a new relationship between k-Lucas sequences and the determinants of their associated matrices; this approach is different and has not been tried in the k-Fibonacci sequence literature.
[3680] vixra:1608.0129 [pdf]
Quantum Interpretation of the Impedance Model as Informed by Geometric Clifford Algebra
Quantum interpretations seek to explain observables from formal theory. Impedances, which govern the flow of energy, are helpful in such attempts. An earlier note documented first efforts to resolve the interpretational ambiguities and contentions from the practical perspective of our model-based approach. In the interim, the discovery of deep connections between the impedance model and geometric Clifford algebra has shed new light on the measurement problem and its manifestations, which we revisit here.
[3681] vixra:1608.0125 [pdf]
Generally Covariant Quantum Theory: Quantum Electrodynamics.
We continue our investigation of the new project launched in [1, 2, 3, 4, 7, 8] by generalizing Quantum Electrodynamics, the theory of electrons and photons, to our setting. First, we deal with the respective two-point functions, define the correct interaction theory as a series of connected Feynman diagrams and, finally, show that for a certain class of spacetime metrics each diagram is finite and a modified perturbation series is analytic.
[3682] vixra:1608.0114 [pdf]
The Use of Political Preferences to Induce a Topology on a Set of Voters
These brief notes explore a method for inducing a topology on a set of voters from their public policy preferences. Some basic characteristics of this topological space and its connections with classical public choice theory are also evaluated.
[3683] vixra:1608.0101 [pdf]
KIC 8462852 Intrinsic Variability
The light curve of KIC 8462852 in the dips around days 1519 and 1568 shows features clearly matching the rotational period of the star. Changes in brightness (unevenly distributed around its surface in these two cases) are modulated by the rotational period. Therefore the probable explanation of this mysterious variability is some phenomenon of the star itself rather than occultations by external objects.
[3684] vixra:1608.0095 [pdf]
The Current Reversal Phenomenon of Brownian Particles in a Two-Dimensional Potential with Lévy Noise
The effects of Lévy noise on self-propelled particles in a two-dimensional potential are investigated. A current reversal phenomenon appears in the system. $V$ (the $x$-direction average velocity) changes from negative to positive with increasing asymmetry parameter $\beta$, and from positive to negative with increasing self-propelled velocity $v_0$. The $x$-direction average velocity $V$ has a maximum with increasing modulation constant $\lambda$.
[3685] vixra:1608.0082 [pdf]
Algorithm for Calculating Terms of a Number Sequence using an Auxiliary Sequence
A formula giving the $n$:th number of a sequence defined by a recursion formula plus an initial value is deduced using generating functions. Of particular interest is the possibility of getting an exact expression for the $n$:th term by means of a recursion formula of the same type as the original one. As for the sequence itself, it is of some interest that the original recursion is non-linear and that the sequence grows very fast, the number of digits increasing more or less exponentially. Other sequences with the same recursion span can be treated similarly.
[3686] vixra:1608.0064 [pdf]
Quantum Gravity from the Point of View of Covariant Relativistic Quantum Theory.
In the light of a recent novel definition of a relativistic quantum theory [1, 3, 4], we ask ourselves what it would mean to make the gravitational field itself dynamical. This could lead to a couple of different viewpoints upon quantum gravity which we shall explain carefully; this paper expands upon some ideas in [2] and again confirms one's thought that we are still far removed from a (type one) theory of quantum gravity.
[3687] vixra:1608.0057 [pdf]
Curry's Non-Paradox and Its False Definition
Curry's paradox is generally considered to be one of the hardest paradoxes to solve. It is shown here that the paradox can be arrived at in fewer steps and also for a different term of the original biconditional. Further, using different approaches, it is also shown that the conclusion of the paradox must always be false, and this is not paradoxical but is expected to be so. One of the approaches points out that the starting biconditional of the paradox amounts to a false definition or assertion, which consequently leads to a false conclusion. Therefore, the solution is trivial and the paradox turns out to be no paradox at all. Despite the fact that verifying the truth value of the first biconditional of the paradox is trivial, mathematicians and logicians have failed to do so and merely assumed that it is true. Taking into consideration that it is false, the paradox is dismissed. This conclusion puts to rest an important paradox that preoccupies logicians and points out the importance of verifying one's assumptions.
[3688] vixra:1608.0056 [pdf]
Material Bodies Moving at Superluminal Speed
As seen from a spaceship accelerating towards a star, the star approaches the ship at a speed that can exceed c, the speed of light in vacuum. For the particular case of constant acceleration with given final speed kc at the star, this speed is calculated as a function of the ship's proper time, and it is found that the upper bound of this speed is 1.5c. Some other cases are investigated, including hyperbolic motion.
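The "faster than c" closing speed here is naturally read as a celerity: distance in the star's frame per unit of the ship's proper time, dx/dτ = γv. That this exceeds c as soon as v > c/√2 is a general special-relativistic fact; the paper's 1.5c bound for its specific constant-acceleration scenario is not rederived in this sketch:

```python
import math

def celerity(beta):
    """gamma * v in units of c, for v = beta * c (0 <= beta < 1)."""
    return beta / math.sqrt(1.0 - beta**2)

# Already superluminal in this sense at v = 0.8c:
print(celerity(0.8))   # ~1.333
```

The crossover is exactly at β = 1/√2 ≈ 0.707, where γv = c.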
[3689] vixra:1608.0047 [pdf]
Free Fall Through the Earth
Free fall through the Earth, considered as a sphere of radially symmetric mass density, along the axis of rotation, is calculated using a general differential equation in Newtonian gravity theory. The passage time is calculated and, further, the shape of the tunnel required if the fall is started at an arbitrary point other than a pole, so that the rotation of the Earth comes into play, is determined. A general relativistic case with constant density along the axis is also considered. In this general form the article may also be of some pedagogical value.
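A minimal sketch of the classic uniform-density special case (a simplifying assumption; the paper treats general radially symmetric densities): inside a uniform sphere the pull is g(r) = g·r/R, so motion along the axis is simple harmonic with ω = √(g/R), and the pole-to-pole passage time is half a period:

```python
import math

G = 6.67430e-11     # m^3/(kg*s^2), gravitational constant
M = 5.972e24        # kg, Earth mass
R = 6.371e6         # m, Earth mean radius

g = G * M / R**2                      # surface gravity, ~9.8 m/s^2
T_passage = math.pi * math.sqrt(R / g)  # half-period of the SHM

print(T_passage / 60)   # ~42 minutes
```

The well-known "42-minute" figure is this uniform-density idealization; a realistic density profile (denser core) shortens the trip somewhat.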
[3690] vixra:1608.0043 [pdf]
Physics on the Adiabatically Changed Finslerian Manifold and Cosmology
In the present paper we confirm our previous result [4] that the Planck constant is an adiabatic invariant of the electromagnetic field propagating on an adiabatically changed Finslerian manifold. Direct calculation from cosmological parameters gives the value h = 6×10^(-27) erg·s. We also confirm that the Planck constant (and hence other fundamental constants which depend on h) varies in time due to the changing geometry. As an example, the variation of the fine structure constant is calculated: its relative variation (dα/dt)/α amounts to 1.0×10^(-18) 1/s. We show that on a Finsler manifold characterized by adiabatically changed geometry, the classical free electromagnetic field is quantized geometrically, from the properties of the manifold, in such a manner that the adiabatic invariant of the field is ET = 6×10^(-27) = h. Electrodynamic equations on the Finslerian manifold are suggested. It is stressed that quantization naturally appears from these equations and is provoked by the adiabatically changed geometry of the manifold. We consider in detail two direct consequences of the equations: i) the cosmological redshift of photons and ii) the Aharonov-Bohm effect, which immediately follow from the equations. It is shown that the quantization of a system consisting of an electromagnetic field and baryonic components (like atoms) is obvious and has a clear explanation.
[3691] vixra:1608.0041 [pdf]
Combining an Infinite Number of Neural Networks into One
One of the important aspects of a neural network is its generalization property, which is measured by its ability to make correct predictions on unseen samples. One option to improve generalization is to combine results from multiple networks, which is unfortunately a time-consuming process. In this paper, a new approach is presented to combine an infinite number of neural networks analytically to produce a small, fast and reliable neural network.
[3692] vixra:1608.0024 [pdf]
Generally Covariant Relativistic Quantum Theory: "Renormalization"
In a previous paper of this author [1], building upon insights reached in [2], we constructed the free theory on a rather general curved spacetime for spin-0, 1/2, 1 particles and wrote down the most general interaction vertices for the latter, leading to the principle of local gauge invariance. In this paper, we further define the interacting theory and study the behavior of modified particle propagators, leading to a finite theory.
[3693] vixra:1607.0561 [pdf]
About The Geometry Of Cosmos and Beyond
The current paper presents an attempt to express the mass problem in a more concrete mathematical scheme where all the physical quantities could come naturally. The Standard Model and its extensions were investigated, and SU(4) appeared as a conclusion of this attempt.
[3694] vixra:1607.0560 [pdf]
Conservation Laws and Energy Budget in a Static Universe
The universe is characterized by large concentrations of energy contained in small, dense areas such as galaxies, which radiate energy towards the surrounding space. However, no current theory balances the loss of energy of galaxies, a requirement for a conservative universe. This study is an investigation of the physics nature might use to maintain the energy differential between its dense parts and the vacuum. We propose time contraction as a principle to maintain this energy differential. Time contraction has the following effects: photons lose energy, while masses gain potential energy and lose kinetic energy. From the virial theorem, which applies to a system of bodies, we find that the net energy resulting from the gain in potential energy and the loss in kinetic energy remains unchanged, meaning that the orbitals of stars in galaxies remain unaffected by time contraction. However, each object in a galaxy has an internal potential energy leading to a surplus of energy within the object. This internal energy surplus should balance with the energy radiated at the level of a galaxy. We illustrate this principle with a calculation of the energy balance of the Milky Way.
[3695] vixra:1607.0556 [pdf]
A Subatomic Replica of Our Solar System. Macrocosmos and Microcosmos. As Above! So Below!
In this paper we show that each planet and sun (star) has a mathematical subatomic twin. Each planetary twin particle has exactly the same mathematical properties as its substantially larger twin planet or Sun. From a planet's twin particle we get the planet's escape velocity, its solar deflection, and its red shift. If we arrange these solar system twin particles with their relative distances as our real solar system then they will, based on Newton's law of gravitation, have the same orbital velocities as the true solar system. In other words, we have created a subatomic world that in many respects is a replica of the Macrocosmos.
[3696] vixra:1607.0496 [pdf]
The Planck Mass Particle Finally Discovered! The True God Particle! Good bye to the Point Particle Hypothesis!
In this paper we suggest that one single fundamental particle exists behind all matter and energy. We claim that this particle has a spatial dimension and diameter equal to the Planck length and a mass equal to half of the Planck mass. Further, we will claim this particle is indivisible, that is, it was never created and can never be destroyed. All other subatomic particles, in spite of having much lower masses than the Planck mass, are easily explained by the existence of such an indivisible particle. Isaac Newton stated that there had to be a fundamental particle, completely hard, that could not be broken down. He also claimed that light consisted of a stream of such particles. Newton's particle theory was very similar to that of the ancient atomists Democritus and Leucippus. However, the atomist view of an indivisible particle with spatial dimensions has generally been pushed aside by modern physics and replaced with hypothetical point particles and the mysterious wave-particle duality.
[3697] vixra:1607.0484 [pdf]
Active Appearance Model Construction: Implementation notes
Active Appearance Model (AAM) is a powerful object modeling technique and one of the best available ones in computer vision and computer graphics. This approach is however quite complex and various parts of its implementation were addressed separately by different researchers in several recent works. In this paper, we present systematically a full implementation of the AAM model with pseudo codes for the crucial steps in the construction of this model.
[3698] vixra:1607.0466 [pdf]
Variation of the Fine Structure Constant
In the present paper we evaluate the variation of the fine structure constant which should take place as the Universe expands and its curvature changes adiabatically. This change of the fine structure constant is attributed to the energy lost by a physical system (consisting of a baryonic component and the electromagnetic field) due to the expansion of our Universe. The obtained ratio $\dot{\alpha}/\alpha \approx 1 \times 10^{-18}$ per second is only five times smaller than the currently reported experimental limit on this value. For this reason the variation can probably be measured within a couple of years. To argue for the correctness of our approach we calculate the Planck constant, as an adiabatic invariant of the electromagnetic field, from the geometry of our Universe in the framework of pseudo-Riemannian geometry. Finally we discuss the double clock experiment based on Al+ and Hg+ clocks carried out by T. Rosenband et al. (Science 2008). We show that in this particular case there is an error in the method, so that the fine structure constant variation cannot be measured this way if the fine structure constant changes adiabatically.
[3699] vixra:1607.0438 [pdf]
Exact Diagonalization of the D-Dimensional Spatially Confined Quantum Harmonic Oscillator
In the existing literature various numerical techniques have been developed to quantize the confined harmonic oscillator in higher dimensions. In obtaining the energy eigenvalues, such methods often involve indirect approaches such as searching for the roots of hypergeometric functions or numerically solving a differential equation. In this paper, however, we derive an explicit matrix representation for the Hamiltonian of a confined quantum harmonic oscillator in higher dimensions, thus facilitating direct diagonalization.
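As a companion illustration (an editorial sketch, not the paper's explicit matrix elements), direct diagonalization of a hard-wall-confined oscillator can be set up with a standard finite-difference Hamiltonian; in natural units (hbar = m = omega = 1) and a wide box, the low eigenvalues should approach the free spectrum 0.5, 1.5, 2.5:

```python
import numpy as np

# Direct diagonalization of a 1-D harmonic oscillator confined to [-a, a]
# by hard walls (hbar = m = omega = 1).  Dirichlet boundaries are imposed
# by restricting the finite-difference grid to the box interior.
def confined_oscillator_levels(a=5.0, n_grid=600, n_levels=3):
    x, h = np.linspace(-a, a, n_grid + 2, retstep=True)
    x = x[1:-1]                          # interior points (psi = 0 at walls)
    # second-order finite-difference kinetic term -1/2 d^2/dx^2
    main = 1.0 / h**2 + 0.5 * x**2       # diagonal: kinetic + V(x) = x^2/2
    off = -0.5 / h**2 * np.ones(len(x) - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels]

levels = confined_oscillator_levels()
print(levels)   # for a wide box, close to 0.5, 1.5, 2.5
```

Shrinking `a` toward the oscillator's natural width pushes these levels up, which is the confinement effect the paper quantifies.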
[3700] vixra:1607.0437 [pdf]
Infinite Arctangent Sums Involving Fibonacci and Lucas Numbers
Using a straightforward elementary approach, we derive numerous infinite arctangent summation formulas involving Fibonacci and Lucas numbers. While most of the results obtained are new, a couple of celebrated results appear as particular cases of the more general formulas derived here.
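One celebrated identity of this type is $\sum_{n\ge 1}\arctan(1/F_{2n+1}) = \pi/4$, where $F_k$ are the Fibonacci numbers; a quick numerical check (an editorial illustration, not taken from the paper):

```python
import math

# Numerical check of the classic arctangent identity
#     sum_{n>=1} arctan(1/F_{2n+1}) = pi/4
# where F_k are the Fibonacci numbers (F_1 = F_2 = 1).
def fib(k):
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

partial = sum(math.atan(1.0 / fib(2 * n + 1)) for n in range(1, 40))
print(partial, math.pi / 4)   # agree to machine precision
```

The terms shrink geometrically (ratio about $1/\varphi^2$), so a few dozen terms already reach machine precision.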
[3701] vixra:1607.0435 [pdf]
On Approximating the Free Harmonic Oscillator by a Particle in a Box
The main purpose of this paper is to demonstrate and illustrate, once again, the potency of the variational technique as an approximation procedure for the quantization of quantum mechanical systems. By choosing particle-in-a-box wavefunctions as trial wavefunctions, with the size of the box as the variation parameter, approximate eigenenergies and the corresponding eigenfunctions are obtained for the one dimensional free harmonic oscillator.
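The stated procedure can be sketched numerically (an editorial illustration under the standard conventions, not the paper's own code): in natural units (hbar = m = omega = 1), the box ground state on [-L/2, L/2] has $\langle x^2\rangle = L^2(1/12 - 1/(2\pi^2))$, and minimizing the energy functional over the box size L gives a variational bound above the exact ground energy 1/2:

```python
import math

# Variational bound for the free harmonic oscillator (hbar = m = omega = 1)
# using the particle-in-a-box ground state on [-L/2, L/2] as trial function.
# For that state <x^2> = L^2 (1/12 - 1/(2 pi^2)), so
#     E(L) = pi^2 / (2 L^2) + (1/2) L^2 (1/12 - 1/(2 pi^2)).
def E(L):
    return math.pi**2 / (2 * L**2) + 0.5 * L**2 * (1/12 - 1/(2 * math.pi**2))

# crude scan over the variational parameter L
L_best = min((E(L), L) for L in [0.01 * k for k in range(100, 1000)])[1]
print(L_best, E(L_best))   # bound ~0.568, above the exact value 0.5
```

The closed-form minimum is $2\sqrt{(\pi^2/2)\,a}$ with $a = \tfrac12(1/12 - 1/(2\pi^2))$, about 0.568, illustrating the variational upper-bound property.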
[3702] vixra:1607.0390 [pdf]
The Evolution of the System of Gravitating Bodies
A natural physical approach to the analysis of the structure of closed gravitating systems has been formulated within the scope of classical mechanics. The approach relies on the interrelation between the densities of nested spheres inscribed in the circular orbits of the system bodies. An empirical law has been defined for the evolution of closed gravitating systems differing in mass, time scale and distance from the ground-based observer. The gravitating systems undergo modifications and evolve from their initial state, namely, a gas-and-dust formation of almost constant density over the entire volume, to a certain terminal phase of the process when the system structure becomes similar to a planetary system (like the Solar system) where almost all the gravitating mass is concentrated in the vicinity of the system's center of gravity. Using the proposed method of nested spheres, it is possible to reveal for a gravitating system the character of the radial distribution of matter density in the system's symmetry plane, quantitatively evaluate the density of the medium containing the gravitating system under consideration, and assess the current phase of the system's evolution. The research results have led us to the conclusion that the introduction into scientific practice of such an entity as "dark matter" has no physical background, since it is based on a wrong interpretation of an "unordinary" distribution of star orbital velocities in galaxies.
[3703] vixra:1607.0388 [pdf]
Why do We Live in a Quantum World?
Anybody who has ever studied quantum mechanics knows that it is a very counterintuitive theory, even though it has been an incredibly successful theory. This paper aims to remove this counterintuitiveness by showing that the laws of quantum mechanics are a natural consequence of classical Newtonian mechanics combined with the digital universe hypothesis of Konrad Zuse and Edward Fredkin. We also present a possible way to test the digital universe hypothesis.
[3704] vixra:1607.0373 [pdf]
Dark Matter, the Correction to Newton's Law in a Disk
The dark matter problem in the context of spiral galaxies refers to the discrepancy between the galactic mass estimated from luminosity measurements of galaxies with a given mass-to-luminosity ratio and the galactic mass measured from the rotational speed of stars using Newton's law. Newton's law fails when applied to a star in a spiral galaxy. The problem stems from the fact that Newton's law is applicable to masses represented as points by their barycenter. As spiral galaxies have shapes similar to a disk, we shall correct Newton's law accordingly. We found that the Newtonian force exerted by the interior mass of a disk on an adjacent mass shall be multiplied by the coefficient ηdisk, estimated to be 7.44±0.83 at a 99% confidence level. The corrective coefficient for the gravitational force exerted by a homogeneous sphere at its surface is 1.00±0.01 at a 99% confidence level, meaning that Newton's law is not modified for a spherical geometry. This result was proven a long time ago by Newton in the shell theorem.
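The quoted spherical coefficient of 1.00 is Newton's shell theorem; a direct numerical quadrature (an editorial sketch, not the paper's estimation procedure) confirms that a uniform sphere attracts an exterior point exactly like a point mass:

```python
import math

# Shell-theorem check: the axial gravitational field of a uniform unit-mass
# sphere (radius R = 1) evaluated at distance d = 2R equals the point-mass
# field G M / d^2 (take G = M = 1).  Midpoint rule in (r, theta).
def sphere_field(d=2.0, R=1.0, nr=200, nt=200):
    rho = 1.0 / (4.0 / 3.0 * math.pi * R**3)     # uniform density, M = 1
    F, dr, dt = 0.0, R / nr, math.pi / nt
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nt):
            t = (j + 0.5) * dt
            s2 = r * r + d * d - 2 * r * d * math.cos(t)   # source distance^2
            dm = rho * 2 * math.pi * r * r * math.sin(t) * dr * dt
            F += dm * (d - r * math.cos(t)) / s2**1.5      # axial component
    return F

print(sphere_field())   # ~= 1 / 2^2 = 0.25
```

Replacing the sphere by a flattened disk mass distribution breaks this coincidence, which is the geometric point the abstract makes.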
[3705] vixra:1607.0195 [pdf]
Parallel Universes and Causal Anomalies: Links Between Science and Religion
In [6] it was proposed to define "god" as a region of a universe that is subject to circular causality. While we do not adopt the exact definition of "god" introduced in that paper, we do accept the concept that "god" has something to do with causal anomalies: either circular causality or else two competing causal structures. We will show that the presence of a causal anomaly (whatever it happens to be) might allow us to define the trinity in a non-contradictory way. That is, we will show how the members of the trinity can be separate entities and, yet, have the same identity.
[3706] vixra:1607.0171 [pdf]
The David Bohm Pilot-Wave Interpretation is the Best Approach to Reality of Quantum Physics
The problem with introducing particle trajectories into quantum physics is the need to violate the law of energy conservation. The latter law must hold, because Noether's theorem requires it in the case of homogeneous time. Therefore a wonder happens, provided that David Bohm's theory is proved. That proof is given in [M. Ringbauer et al.: Nature Physics, 2015], together with my explanation in the present manuscript.
[3707] vixra:1607.0166 [pdf]
Three Solutions of the LLP Limiting Case of the Problem of Apollonius via Geometric Algebra, Using Reflections and Rotations
This document adds to the collection of solved problems presented in References 1-4. After reviewing, briefly, how reflections and rotations can be expressed and manipulated via GA, it solves the LLP limiting case of the Problem of Apollonius in three ways.
[3708] vixra:1607.0141 [pdf]
Kalman Folding 4: Streams and Observables
In Kalman Folding, Part 1, we present basic, static Kalman filtering as a functional fold, highlighting the unique advantages of this form for deploying test-hardened code verbatim in harsh, mission-critical environments. In that paper, all examples folded over arrays in memory for convenience and repeatability. That is an example of developing filters in a friendly environment. Here, we prototype a couple of less friendly environments and demonstrate exactly the same Kalman accumulator function at work. These less friendly environments are:
- lazy streams, where new observations are computed on demand but never fully realized in memory, thus not available for inspection in a debugger
- asynchronous observables, where new observations are delivered at arbitrary times from an external source, thus not available for replay once consumed by the filter
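The environment-independence claim can be sketched in Python (an editorial illustration, not the paper's own code; the accumulator here is a toy running mean rather than a full Kalman filter): the same accumulator function folds over an in-memory list and a lazy generator without change:

```python
from functools import reduce

# The same accumulator function folds over an in-memory list or a lazy
# generator without change -- the point made in the paper, sketched with
# a toy running-mean accumulator carrying (mean, count).
def running_mean(acc, z):
    mean, n = acc
    n += 1
    return (mean + (z - mean) / n, n)

data = [2.0, 4.0, 6.0, 8.0]
in_memory = reduce(running_mean, data, (0.0, 0))

def lazy_stream():           # observations computed on demand
    for z in data:
        yield z

on_stream = reduce(running_mean, lazy_stream(), (0.0, 0))
print(in_memory, on_stream)  # identical results: mean 5.0 over 4 points
```

Asynchronous observables need a push-based driver instead of `reduce`, but the accumulator function itself stays identical, which is the paper's claim.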
[3709] vixra:1607.0139 [pdf]
On a New Reaction-less Mechanism for Thrust Production and the Explanation of the Working of the EmDrive
In this paper, we propose a totally new mechanism for thrust production to explain the working of two reaction-less drives: the EmDrive invented by Roger Shawyer, and the Cannae drive invented by Guido Fetta. The explanation is based on a postulated potential, or viable, momentum deduced from the relativistic momentum-energy relation. We assume that by selectively adding energy to one object of a binary system made of two objects connected by a rigid massless rod, a viable momentum will be possessed by this object; we claim that this follows from inverting the momentum-energy relation. We use the relativistic equivalence of mass and energy to show that the mass of the object to which the energy is added will increase. This increase in mass will change the position of the center of mass of the binary system. We postulate that the claimed potential momentum will manifest as a true observable momentum in the direction of the center-of-mass change. The added energy will be absorbed by the object's atoms and manifest as kinetic energy, and according to the equivalence of all inertial frames of reference dictated by the special theory of relativity, the system will accelerate.
[3710] vixra:1607.0130 [pdf]
Attempts to Detect the Torsion Field Nature of Scalar Wave Generated by Dual Tesla Coil System
The scalar wave was first found and used by Nikola Tesla in his wireless energy transmission experiment. Prof. K. Meyl extended the Maxwell equations and recovered the lost scalar wave part. The scalar wave theory proposed by Prof. K. Meyl indicates that the torsion field is the nature of the scalar wave. This work attempts to detect the torsion field nature of the scalar wave generated by a dual Tesla coil system, using a torsion balance consisting of a wooden frame. The result is positive, and two kinds of torsion field, left-handed and right-handed, are detected in the dual Tesla coil system.
[3711] vixra:1607.0123 [pdf]
Massive Scalar Field Theory on Discrete N-Scales
$N$-scales are a generalization of time-scales that have been put forward to unify continuous and discrete analyses in higher dimensions. In this paper we investigate massive scalar field theory on $n$-scales. In the specific case of a regular 2-scale, we find that the IR energy spectrum is almost unmodified when there are enough spatial points. This is regarded as a good sign because the model reproduces the known results in the continuum approximation. Then we give the field equation on a general $n$-scale. It turns out that the field equation can only be solved via computer simulations. Lastly, we propose that $n$-scales might be a good way to model singularities encountered in the general theory of relativity.
[3712] vixra:1607.0121 [pdf]
A Speculative Note on How to Modify Einstein's Field Equation to Hold at the Quantum Scale. Gravity at the Quantum Scale = Strong Force?
In this short note, first we will show a few ways to rewrite Einstein's field equation without basically changing it. Then we speculate a bit more broadly on how to change the equation to make it hold at the quantum scale. More precisely, we modify it for bodies with masses less than the Planck mass.
[3713] vixra:1607.0107 [pdf]
Analog Computer Understanding of Hamiltonian Paths, and a Possible Digitization
This paper explores detecting the existence of undirected Hamiltonian paths in a graph using lumped/ideal circuits, specifically low-pass filters. While other alternatives are possible, a first-order RC low-pass filter is chosen to describe the process. The paper proposes a way of obtaining the time complexity for counting the number of Hamiltonian paths in a graph, and then shows that the time complexity of the circuits is around $O(n \log n)$ where $n$ is the number of vertices in the graph. Because analog computation is often undesirable due to several aspects, a possible digitization scheme is proposed in this paper.
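For checking any such scheme on small instances, a brute-force digital count (exponential-time, unlike the proposed circuits; an editorial sketch, not the paper's method) provides ground truth:

```python
from itertools import permutations

# Brute-force count of undirected Hamiltonian paths, usable as ground
# truth when validating a faster (analog or digital) scheme on small graphs.
def count_hamiltonian_paths(n, edges):
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    count = 0
    for perm in permutations(range(n)):
        if perm[0] < perm[-1]:  # count each undirected path once
            if all(perm[i + 1] in adj[perm[i]] for i in range(n - 1)):
                count += 1
    return count

# K4 (complete graph on 4 vertices): 4!/2 = 12 undirected Hamiltonian paths
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(count_hamiltonian_paths(4, k4))   # 12
```

On the 4-cycle the same routine returns 4 (one path per deleted edge), a handy second sanity check.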
[3714] vixra:1607.0101 [pdf]
The Superluminal Signal in Quantum Billiard and in the Casimir Configuration
The quantum energy levels of an electron inside a box with infinite barriers at points 0 and l are considered. The situation is then extended to three dimensions. The quantum mechanics of this so-called quantum billiard does not involve retarded wave functions (retarded Green functions), which means that the quantum pressure at the walls of the box is instantaneous. The instantaneous process is equivalent to action at a distance, or to the existence of superluminal signals inside the quantum box. The situation is similar in the case of the Casimir effect between two capacitor plates.
[3715] vixra:1607.0096 [pdf]
Consideration of Some Generalizations of Riemann-Liouville Integral in Physics
Generalization of fractal density on fractals for spaces with positive and negative fractal dimensions. Fractal-fractional generalized physics (i.e., classical or quantum physics). Generalized Hausdorff measures. Numbers and generalized functions as generalized logical values. Beyond logics and numbers. A generalized concept of physical field, i.e., generalized universes and multiverses. Fractional generalization of path integrals.
[3716] vixra:1607.0087 [pdf]
Induction and Analogy in a Problem of Finite Sums
What is a general expression for the sum of the first n integers, each raised to the mth power, where m is a positive integer? Answering this question will be the aim of the paper. We will take the unorthodox approach of presenting the material from the point of view of someone who is trying to solve the problem himself. Keywords: analogy, Johann Faulhaber, finite sums, heuristics, inductive reasoning, number theory, George Polya, problem solving, teaching of mathematics.
[3717] vixra:1607.0084 [pdf]
Kalman Folding 5: Non-Linear Models and the EKF
We exhibit a foldable Extended Kalman Filter that internally integrates non-linear equations of motion with a nested fold of generic integrators over lazy streams in constant memory. Functional form allows us to switch integrators easily and to diagnose filter divergence accurately, achieving orders of magnitude better speed than the source example from the literature. As with all Kalman folds, we can move the vetted code verbatim, without even recompilation, from the lab to the field.
[3718] vixra:1607.0073 [pdf]
Indian Buffet Process Deep Generative Models
Deep generative models (DGMs) have brought about a major breakthrough, as well as renewed interest, in generative latent variable models. However, an issue current DGM formulations do not address concerns the data-driven inference of the number of latent features needed to represent the observed data. Traditional linear formulations allow for addressing this issue by resorting to tools from the field of nonparametric statistics: Indeed, nonparametric linear latent variable models, obtained by appropriate imposition of Indian Buffet Process (IBP) priors, have been extensively studied by the machine learning community; inference for such models can be performed either via exact sampling or via approximate variational techniques. Based on this inspiration, in this paper we examine whether similar ideas from the field of Bayesian nonparametrics can be utilized in the context of modern DGMs in order to address the latent variable dimensionality inference problem. To this end, we propose a novel DGM formulation, based on the imposition of an IBP prior. We devise an efficient Black-Box Variational inference algorithm for our model, and exhibit its efficacy in a number of semi-supervised classification experiments. In all cases, we use popular benchmark datasets, and compare to state-of-the-art DGMs.
[3719] vixra:1607.0071 [pdf]
Is the Schwarzschild Radius Truly a Radius?
This paper questions the assumption that the Schwarzschild radius actually represents a radius. It has recently been shown by Haug (2016) that the Schwarzschild radius for any object can simply be written as 2N l_p, where N is the number of Planck masses into which we can "hypothetically" pack an object of interest and l_p is the well-known Planck length. The Schwarzschild radius seems to represent the length obtained if we "hypothetically" pack a planet or star into N Planck-mass objects and place them in perfect alignment along a single strand (of single particles) in a straight line.
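The rewriting rests on the exact identity G m_P / c^2 = l_p, so that 2GM/c^2 = 2(M/m_P) l_p; a numerical check with CODATA-style constants and the Earth's mass (an editorial illustration):

```python
# Check of the identity r_s = 2 N l_p with N = M / m_P:
# since l_p = sqrt(hbar G / c^3) and m_P = sqrt(hbar c / G),
# G m_P / c^2 = l_p exactly, so 2 G M / c^2 = 2 (M / m_P) l_p.
G    = 6.674e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8    # m/s
hbar = 1.054571817e-34 # J s

l_p = (hbar * G / c**3) ** 0.5
m_P = (hbar * c / G) ** 0.5

M_earth = 5.972e24                      # kg
r_s = 2 * G * M_earth / c**2            # usual Schwarzschild radius
N = M_earth / m_P                       # Planck masses in the Earth
print(r_s, 2 * N * l_p)                 # both ~8.9e-3 m
```

The two expressions agree to floating-point precision for any mass, since the identity is algebraic rather than empirical.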
[3720] vixra:1607.0070 [pdf]
The Existence of Quantum Computer
We develop an empirically grounded theory of the existence of the quantum computer. The main question we consider is whether a quantum computer can exist and be created. As empirical evidence we use logic, which we present from a cognitive perspective. For the definition of a computer we use the formal definition of a Turing machine. By formulating many definitions abstractly and phenomenologically, we bypass the areas of quantum physics, quantum computing and other quantum-related fields of science which could give an unambiguous answer to our question about the essence of the quantum computer but are not developed enough to do so. In many ways this makes our theory of the existence of the quantum computer universal for these areas, although less applied to them. We consider some corollaries of the essence of the quantum computer, including the possibility of a quantum computer for the human. References to research on the cognitive nature of logic provide the empirical basis that we follow in our theory.
[3721] vixra:1607.0059 [pdf]
Kalman Folding 3: Derivations
In Kalman Folding, Part 1, we present basic, static Kalman filtering as a functional fold, highlighting the unique advantages of this form for deploying test-hardened code verbatim in harsh, mission-critical environments. The examples in that paper are all static, meaning that the states of the model do not depend on the independent variable, often physical time. Here, we present mathematical derivations of the basic, static filter. These are semi-formal sketches that leave many details to the reader, but highlight all important points that must be rigorously proved. These derivations have several novel arguments and we strive for much higher clarity and simplicity than is found in most treatments of the topic.
[3722] vixra:1607.0055 [pdf]
Generalized Equations and Their Solutions in the (S,0)+(0,S) Representations of the Lorentz Group
In this talk I present three explicit examples of generalizations in relativistic quantum mechanics. First of all, I discuss the generalized spin-1/2 equations for neutrinos. They have been obtained by means of the Gersten-Sakurai method for the derivation of arbitrary-spin relativistic equations. Possible physical consequences are discussed. Next, it is easy to check that both Dirac algebraic equations Det (\hat p - m) =0 and Det (\hat p + m) =0 for u- and v- 4-spinors have solutions with p_0= \pm E_p =\pm \sqrt{{\bf p}^2 +m^2}. The same is true for higher-spin equations. Meanwhile, every book considers the equality p_0=E_p for both $u-$ and $v-$ spinors of the (1/2,0)+(0,1/2) representation only, thus applying the Dirac-Feynman-Stueckelberg procedure for the elimination of the negative-energy solutions. The recent Ziino works (and, independently, the articles of several others) show that the Fock space can be doubled. We re-consider this possibility on the quantum field level for both S=1/2 and higher-spin particles. The third example is: we postulate the non-commutativity of 4-momenta, and we derive the mass splitting in the Dirac equation. The applications are discussed.
[3723] vixra:1607.0048 [pdf]
A Radical Examination of the Equations Derived in Special Relativity Shows SR is Compatible with Quantum Gravity
Using the time dilation equation, the length contraction equation and the mass increase equation, all derived within Special Relativity, and a simple substitution, I convert the time dilation equation to a Pythagorean equation showing that the observed time consists of two time components, and I convert the length contraction equation to a Pythagorean equation showing that the total length of a moving object has two length components. These new forms of the time dilation and length contraction equations strongly indicate the presence of a new time dimension and a new space dimension. The additional time dimension explains exactly what time dilation is, and together with the mass increase equation I introduce the concept of a Newtonian velocity, an alternative velocity of a moving object. I propose that the mass of the moving object does not change, because we are in fact measuring momentum with the wrong velocity. The additional space dimension means that length contraction does not in fact occur: we are seeing the object at an angle within a four-dimensional space, so it looks shorter when viewed in three space dimensions only. These changes mean that Special Relativity IS compatible with Quantum Gravity and that Doubly-Special Relativity is not required.
[3724] vixra:1607.0025 [pdf]
Planck Dimensional Analysis of Big G
This is a short note to show how big G can be derived from dimensional analysis by assuming that the Planck length is much more fundamental than Newton's gravitational constant.
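The dimensional analysis in question amounts to inverting the Planck-length definition l_p = sqrt(hbar G / c^3) to get G = l_p^2 c^3 / hbar; a numerical check (an editorial illustration with CODATA-style values):

```python
# Dimensional-analysis check: inverting l_p = sqrt(hbar G / c^3) gives
#     G = l_p^2 c^3 / hbar.
l_p  = 1.616255e-35    # m   (Planck length)
c    = 2.99792458e8    # m/s
hbar = 1.054571817e-34 # J s

G = l_p**2 * c**3 / hbar
print(G)   # ~6.674e-11 m^3 kg^-1 s^-2
```

This recovers the measured value of big G by construction, since the tabulated Planck length is itself derived from G; the note's point is about which quantity is taken as fundamental.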
[3725] vixra:1606.0345 [pdf]
A Polynomial Recursion for Prime Constellations
An algorithm for recursively generating the sequence of solutions of a prime constellation is described. The algorithm is based on a polynomial equation formed from the first n elements of the constellation. A root of this equation is the next element of the sequence.
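The paper's polynomial recursion is not reproduced here; for comparison, a plain sieve-based search for a given constellation pattern (an editorial baseline sketch, not the paper's method) looks like this:

```python
# Baseline constellation search by sieve: list the first primes p such
# that p + d is prime for every offset d in the pattern, e.g. (0, 2) for
# twin primes.  Serves as ground truth for any faster generator.
def sieve(limit):
    is_p = [True] * (limit + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit + 1, i):
                is_p[j] = False
    return is_p

def constellation(pattern, limit, count):
    is_p = sieve(limit + max(pattern))
    out = []
    for p in range(2, limit):
        if all(is_p[p + d] for d in pattern):
            out.append(p)
            if len(out) == count:
                break
    return out

print(constellation((0, 2), 1000, 5))   # twin primes: [3, 5, 11, 17, 29]
```

The same routine handles longer patterns, e.g. `(0, 2, 6)` yields the triplets starting 5, 11, 17.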
[3726] vixra:1606.0328 [pdf]
Kalman Folding, Part 1 (Review Draft)
Kalman filtering is commonplace in engineering, but less familiar to software developers. It is the central tool for estimating states of a model, one observation at a time. It runs fast in constant memory. It is the mainstay of tracking and navigation, but it is equally applicable to econometrics, recommendations, control: any application where we update models over time. By writing a Kalman filter as a functional fold, we can test code in friendly environments and then deploy identical code with confidence in unfriendly environments. In friendly environments, data are deterministic, static, and present in memory. In unfriendly, real-world environments, data are unpredictable, dynamic, and arrive asynchronously. The flexibility to deploy exactly the code that was tested is especially important for numerical code like filters. Detecting, diagnosing and correcting numerical issues without repeatable data sequences is impractical. Once code is hardened, it can be critical to deploy exactly the same code, to the binary level, in production, because of numerical brittleness. Functional form makes it easy to test and deploy exactly the same code because it minimizes the coupling between code and environment.
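The idea can be sketched in a few lines of Python (an editorial illustration, not the paper's own code): the accumulator carries the estimate and its variance, and `functools.reduce` plays the role of the fold:

```python
from functools import reduce

# A scalar static Kalman filter written as a fold: the accumulator carries
# (estimate x, variance P); each step folds in one observation z with
# measurement-noise variance R.  Identical code runs over any iterable.
R = 1.0   # measurement-noise variance (assumed known)

def kalman_step(state, z):
    x, P = state
    K = P / (P + R)                 # Kalman gain
    return (x + K * (z - x), (1 - K) * P)

zs = [9.8, 10.2, 10.1, 9.9, 10.0, 10.3, 9.7, 10.0]
x, P = reduce(kalman_step, zs, (0.0, 1e6))   # diffuse prior
print(x, P)   # estimate near 10.0, shrinking variance
```

Because `kalman_step` is a pure function of (accumulator, observation), the same code folds over in-memory test arrays and over live observation sources, which is the deployment property the paper emphasizes.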
[3727] vixra:1606.0324 [pdf]
The Theory of N-Scales
We provide a theory of $n$-scales, previously called $n$-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken as the product space of two time scales \cite{bohner2005multiple,bohner2010surface}. $n$-scales make the mathematical structure more flexible and appropriate for real-world applications in physics and related fields. Here we define an $n$-scale as an arbitrary closed subset of $\mathbb R^n$. Modified forward and backward jump operators, $\Delta$-derivatives and multiple integrals on $n$-scales are defined.
[3728] vixra:1606.0322 [pdf]
Anxiety Emotion Affects Health in View of System Science
In this paper we discuss the effect of the anxiety emotion on health from the viewpoint of system science. With the help of a limit cycle model and non-equilibrium thermodynamics, we show that the effect of anxiety differs from person to person. Using the synthesis and metabolism frequency as the basis for categorizing the effects, we found that some kinds of people will become overweight while other kinds will become thinner and thinner, depending on whether the synthesis and metabolism frequency decreases or increases. We give some suggestions on how to counter the effects of this kind of emotion for the different kinds of people.
[3729] vixra:1606.0316 [pdf]
Hilbert’s Forgotten Equation, the Equivalence Principle and Velocity Dependence of Free Fall.
Referring to the behavior of accelerating objects in special relativity, and applying the principle of equivalence, one expects that the coordinate acceleration of point masses under gravity will be velocity dependent. Then, using the Schwarzschild solution, we analyze the similar case of masses moving on timelike geodesics, which reproduces a little known result by Hilbert from 1917, describing this dependence. We find that the relativistic correction term for the acceleration based on general relativity differs by a factor of two from the simpler acceleration arguments in flat space. As we might expect from the general theory, the velocity dependence can be removed by a suitable coordinate transformation, such as the Painlevé-Gullstrand coordinate system. The validity of this approach is supported by previous authors who have demonstrated vacuum solutions to general relativity producing true flat space metrics for uniform gravitational fields. We suggest explicit experiments could be undertaken to test the property of velocity dependence.
[3730] vixra:1606.0294 [pdf]
A Note About A Solution of Navier-Stokes Equations
This note represents an attempt to give a solution of Navier-Stokes equations under the assumptions $(A)$ of the problem as described by the Clay Institute \cite{bib1}. We give a proof of the condition of bounded energy when the velocity vector $u$ and vorticity vector $\Omega=curl(u)$ are collinear.
[3731] vixra:1606.0283 [pdf]
Can GR Lack "Dark Energy" or Abide a "Big Bang"?
In 1922 Alexandre Friedmann obtained, in the context of an unusual spherically-symmetric metric form, formal solutions of the Einstein equation for dust of uniform energy density which as well apply within spherically symmetric dynamic dust balls of uniform energy density. The resulting Friedmann equation for the dynamical behavior of these ostensibly general-relativistic dust-ball solutions exclusively reflects, however, completely non-relativistic Newtonian gravitational dynamics, with no trace at all of the purely relativistic phenomenon of gravitational time dilation, notwithstanding that gravitational time dilation inescapably accompanies gravitation's presence in GR. That paradox wasn't noticed by Friedmann, nor has it since been consciously addressed. As a consequence, accepted dust-ball behavior is Newtonian gravitational in every respect, notably including compulsory deceleration of dust-ball expansion, as well as compulsory assumption by every expanding dust ball of a singular, zero-radius "Big Bang" configuration at a finite earlier time -- despite both behaviors being incompatible with the implications of gravitational time dilation. The source of these inconsistencies is the GR-incompatible nature of Friedmann's unusual metric form, which extinguishes relativistic gravitational and speed time dilation by implicitly utilizing the GR-inaccessible set of clock readings of an infinite number of different observers. However in 1939 Oppenheimer and Snyder carried out a tour-de-force analytic space-time transformation of a Friedmann GR-unphysical dust-ball solution which satisfies a particular initial condition to fully GR-physical "standard" metric form. That Oppenheimer-Snyder transformation was recently extended to arbitrary dust-ball initial conditions, yielding the equation of motion in fully GR-physical "standard" coordinates of any dust ball's radius. 
This non-Newtonian GR-physical dust-ball radius equation of motion fully conforms to the implications of gravitational time dilation: it in no way forbids acceleration of dust-ball expansion, but it prevents, at any finite "standard" time whatsoever, any dust ball's radius from being smaller than or equal to its Schwarzschild radius-value. Full GR conformity thus needs no "dark energy", but can't support a "Big Bang".
[3732] vixra:1606.0282 [pdf]
On the Uncertainty Principle
We analyze the laws which form and direct the universe, and the elements interacting in the interactions that emerge from these laws. We form the theoretical and philosophical infrastructure of some physical concepts and phenomena, such as kinetic energy, uncertainty, length contraction, relative energy transformations, gravity, time and the speed of light, in order to understand the universe as well as possible. Almost any physical subject takes us easily to the same point by visiting the other subjects, because of the creation type of matter, as there is no alternative. Every mathematical equation is the product of a thought, so it does not always have to be right: it may have different meanings for different minds, and may rest on wrong thinking or wrongly assembled elements even if it sometimes carries certain information. If it is our portion, it is our portion: we take up some pickaxes and shovels and start mining science, as below, to understand the universe as well as possible, if it is possible, by strong evidence and by thinking as simply as possible. Only one pencil and one paper are enough. In this situation, the biggest problem is that one of them exists while the other does not.
[3733] vixra:1606.0258 [pdf]
The Holochronous Universe: Time to Shed Some Light on the Dark Sector?
We describe a novel interpretation of the time dimension in General Relativity: the Holochronous Principle, and show that the application of this principle in standard cosmological situations is able to fully account for the effects currently attributed to Dark Matter in observational phenomena such as galactic rotation curves and gravitational lensing. We re-evaluate the role of the Friedmann equations in defining a time-varying spacetime metric, and in their place postulate a model that is based on the `shrinkage' of baryons in a gravitational field to account for the dynamical behaviour of the cosmic scale factor. We show that integrating the Holochronous Principle into this model gives rise to a solution that takes the form of a resonant universe, in which the resultant damped oscillations can account for the observed accelerating expansion rate of the universe, to a greater level of precision than the standard ΛCDM model. The Holochronous model obviates the need for Dark Energy in the form of a cosmological constant, Λ, and also resolves other issues associated with the ΛCDM model, including the Ω=1 flatness problem.
[3734] vixra:1606.0253 [pdf]
A Very Brief Introduction to Reflections in 2D Geometric Algebra, and their Use in Solving "Construction" Problems
This document is intended to be a convenient collection of explanations and techniques given elsewhere in the course of solving tangency problems via Geometric Algebra.
[3735] vixra:1606.0246 [pdf]
Electrodynamics in Riemannian Space with Torsion
Based on the fact that electromagnetic radiation carries energy and momentum, and therefore creates curvature in spacetime, we use the covariant derivative of a second-rank tensor to show that it is possible to derive an explicit expression for Maxwell's equations in curved spacetime, with and without torsion; this constitutes a coupling of gravity to electromagnetism. We show that the coupling introduces an extra amount of charge and current density, electromagnetic and gravitoelectric, resulting in a non-vanishing divergence of the magnetic field tensor, which is equivalent to a magnetic monopole density. This is similar to the result found by Poplawski, which states that such a coupling breaks the symmetry of the $U(1)$ group, and is significant only at early times of the universe or inside black holes, where the energy is very high.
[3736] vixra:1606.0141 [pdf]
Some Definite Integrals Over a Power Multiplied by Four Modified Bessel Functions
The definite integrals $\int_0^\infty x^j\, I_0^s(x)\, I_1^t(x)\, K_0^u(x)\, K_1^v(x)\, dx$ are considered for non-negative integer $j$ and four integer exponents with $s+t+u+v=4$, where $I$ and $K$ are modified Bessel functions. There are essentially 15 types of the 4-fold product. Partial integration of each of these types leads to relations between these integrals. The main results are (forward) recurrences of the integrals with respect to the exponent $j$ of the power.
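Integrals of this family are easy to evaluate numerically with SciPy's modified Bessel functions, which gives a way to check any recurrence. A minimal sketch for the convergent case $s=t=0$, $u=4$, $v=0$ (the function name is ours; the closed form $7\zeta(3)/8$ for $j=1$ is a value reported in the Bessel-moment literature, quoted here as an assumption):

```python
# Numerical evaluation of int_0^inf x^j K_0(x)^4 dx with SciPy; the x*ln(x)^4
# behaviour at the origin is integrable and quad handles it directly.
import numpy as np
from scipy.integrate import quad
from scipy.special import k0, zeta

def bessel_moment(j):
    """Integral of x**j * K_0(x)**4 over (0, infinity)."""
    val, _ = quad(lambda x: x**j * k0(x)**4, 0.0, np.inf)
    return val

print(bessel_moment(1))  # close to 7*zeta(3)/8, as reported in the literature
```

Evaluating both sides of a candidate recurrence in $j$ this way gives a quick numerical consistency check before attempting a proof.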
[3737] vixra:1606.0109 [pdf]
Using Periodic Functions to Determine Primes, Composites, and Factors
This paper discusses connections between periodic functions and primes, composites, and factors. Specifically, it shows how to use periodic functions to construct formulas for the following: the number of factors of a number, the specific factors of a number, the exact prime counting function and distribution, the nth prime, primes of any size, "product polynomials" as periodic functions, primality and composite tests, prime gap finders, and "anti-pulses."
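The paper's specific constructions are not reproduced here, but the underlying idea can be illustrated with a standard periodic identity: the sum $(1/k)\sum_{m=0}^{k-1}\cos(2\pi m n/k)$ equals 1 when $k$ divides $n$ and 0 otherwise, so summing it over $k$ counts divisors. A minimal sketch (the function names are ours, not the paper's):

```python
# Divisor counting via periodic (cosine) indicator functions: the inner sum
# equals 1 exactly when k divides n and 0 otherwise, so the outer sum counts
# the divisors of n; a number is prime iff it has exactly two divisors.
import math

def divisor_count(n):
    total = 0.0
    for k in range(1, n + 1):
        total += sum(math.cos(2 * math.pi * m * n / k) for m in range(k)) / k
    return round(total)

def is_prime(n):
    return n > 1 and divisor_count(n) == 2

print([n for n in range(2, 31) if is_prime(n)])  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

This is far slower than trial division, of course; the point is only that divisibility is a purely periodic phenomenon, which is the premise the paper builds on.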
[3738] vixra:1606.0097 [pdf]
Illusory Signaling under Local Realism with Forecasts
G. Adenier and A.Y. Khrennikov (2016) show that a recent ``loophole free'' CHSH Bell experiment violates no-signaling equalities, contrary to the expected impossibility of signaling in that experiment. We show that a local realism setup, in which nature sets hidden variables based on forecasts, and which can violate a Bell Inequality, can also give the illusion of signaling where there is none. This suggests that the violation of the CHSH Bell inequality, and the puzzling no-signaling violation in the CHSH Bell experiment may be explained by hidden variables based on forecasts as well.
[3739] vixra:1606.0065 [pdf]
The Real Parts of the Nontrivial Riemann Zeta Function Zeros
This theorem is based on the holomorphy of the studied functions and on the fact that near a singular point the real part of some rational function can take an arbitrary preassigned value.
[3740] vixra:1606.0013 [pdf]
A Comment on "Family Ruptures, Stress, and the Mental Health of the Next Generation"
Persson and Rossin-Slater (2016b) claim to be the first to credibly estimate causal effects of fetal stress exposure on mental health in later life. They emphasize that their analysis is the first to control for non-random exposure to a relative’s death and non-random gestation length. In light of discoveries regarding prior literature, we find these claims to be exaggerated and misleading.
[3741] vixra:1606.0002 [pdf]
Holographic Tachyon in Fractal Geometry
The search for a consistent quantum gravity theory is one of the noteworthy issues in modern theoretical physics. It is known that most quantum gravity theories describe our universe as a dimensional flow. From this point of view, one can investigate whether and how these attractive properties are related to the ultraviolet-divergence problem. These important points motivated us to discuss the reconstruction of a scalar field problem in the fractal theory, a well-known quantum theory of gravity. Making use of a time-like fractal model and considering the holographic description of galactic dark energy, we implement a correspondence between the tachyon model of the galactic dark energy effect and holographic energy. Such a connection gives us an opportunity to redefine the fractal dynamics of the selected scalar field representation by considering the time evolution of holographic energy.
[3742] vixra:1605.0314 [pdf]
Solution of the Special Case "CLP" of the Problem of Apollonius via Vector Rotations using Geometric Algebra
NOTE: A new Appendix presents alternative solutions. The famous "Problem of Apollonius", in plane geometry, is to construct all of the circles that are tangent, simultaneously, to three given circles. In one variant of that problem, one of the circles has infinite radius (i.e., it's a line). The Wikipedia article that's current as of this writing has an extensive description of the problem's history, and of methods that have been used to solve it. As described in that article, one of the methods reduces the "two circles and a line" variant to the so-called "Circle-Line-Point" (CLP) special case: Given a circle C, a line L, and a point P, construct the circles that are tangent to C and L, and pass through P. This document has been prepared for two very different audiences: for my fellow students of GA, and for experts who are preparing materials for us, and need to know which GA concepts we understand and apply readily, and which ones we do not.
[3743] vixra:1605.0277 [pdf]
How to Treat Directly Magnetic Fields in First-Principle Calculations and the Possible Shape of the Lagrangian
This work checks the Pauli equation's description of the magnetic field and finds a possible missing term in it. We propose a corrected Pauli equation, whose application in density functional theory explains the observed magnetic susceptibilities of Al, Si, and Au under directly applied magnetic fields. The possible shape of the Lagrangian describing a charged particle in an external magnetic field is also discussed.
[3744] vixra:1605.0276 [pdf]
X-Particle as a Solution for the Cosmological Constant Problem
The cosmological constant is a fundamental problem in modern physics, and arises at the intersection between general relativity and quantum field theory. In this paper we show how the cosmological constant problem can be solved by X-particles of dark energy with the repulsive force proportional to energy density.
[3745] vixra:1605.0241 [pdf]
Asymptotic Behaviors of Normalized Maxima for Generalized Maxwell Distribution Under Nonlinear Normalization
In this article, high-order asymptotic expansions of the cumulative distribution function and probability density function of extremes of the generalized Maxwell distribution are established under nonlinear normalization. As corollaries, the convergence rates of the distribution and density of the maximum are obtained under nonlinear normalization.
[3746] vixra:1605.0240 [pdf]
The Second-order Local Formalism for Time Evolution of Dynamical Systems
The second-order approach to entropy gradient maximization for systems with many degrees of freedom provides dynamic equations of first order, and of light-like second order, without additional ergodicity conditions such as conservation laws. The first-order dynamics lead to the definition of the conserved kinetic energy and potential energy. In terms of proper degrees of freedom, the total energy conservation reproduces Einstein's mass-energy relation. The Newtonian interpretation of the second-order dynamic equations suggests a definition for general inertial mass and for the interaction potential.
[3747] vixra:1605.0233 [pdf]
The Problem of Apollonius as an Opportunity for Teaching Students to Use Reflections and Rotations to Solve Geometry Problems via Geometric (Clifford) Algebra
Note: The Appendix to this new version gives an alternate--and much simpler--solution that does not use reflections. The beautiful Problem of Apollonius from classical geometry ("Construct all of the circles that are tangent, simultaneously, to three given coplanar circles") does not appear to have been solved previously by vector methods. It is solved here via Geometric Algebra (GA, also known as Clifford Algebra) to show students how they can make use of GA's capabilities for expressing and manipulating rotations and reflections. As Viète did when deriving his ruler-and-compass solution, we first transform the problem by shrinking one of the given circles to a point. In the course of solving the transformed problem, guidance is provided to help students "see" geometric content in GA terms. Examples of the guidance that is given include (1) recognizing and formulating useful reflections and rotations that are present in diagrams; (2) using postulates on the equality of multivectors to obtain solvable equations; and (3) recognizing complex algebraic expressions that reduce to simple rotations of multivectors. As an aid to students, the author has prepared a dynamic-geometry construction to accompany this article.
[3748] vixra:1605.0232 [pdf]
Rotations of Vectors Via Geometric Algebra: Explanation, and Usage in Solving Classic Geometric "Construction" Problems (Version of 11 February 2016)
Written as somewhat of a "Schaum's Outline" on the subject, which is especially useful in robotics and mechatronics. Geometric Algebra (GA) was invented in the 1800s, but was largely ignored until it was revived and expanded beginning in the 1960s. It promises to become a "universal mathematical language" for many scientific and mathematical disciplines. This document begins with a review of the geometry of angles and circles, then treats rotations in plane geometry before showing how to formulate problems in GA terms, then solve the resulting equations. The six problems treated in the document, most of which are solved in more than one way, include the special cases that Viète used to solve the general Problem of Apollonius.
[3749] vixra:1605.0228 [pdf]
A Roadmap to the Quark and Lepton Mass Ratios
The last six years have seen great strides in measuring the neutrino squared-mass splittings and heavy quark masses. It is therefore timely to reconsider the mass formulas introduced by the author in 2010, which then disagreed with the ratio of the neutrino squared-mass splittings.
[3750] vixra:1605.0220 [pdf]
Matter Theory of Maxwell Equations
This article attempts to unify the four basic forces using the Maxwell equations, the only experimentally established theory. A self-consistent Maxwell equation, with the e-current derived from the matter current, is proposed and solved, yielding four kinds of electrons and the structures of particles. The static properties, decays, and scattering are reasoned out, all of which meet experimental data. The equation of general relativity with a purely electromagnetic field is discussed. Finally, the consistency of this theory with QED and the weak theory is discussed at an elementary level and found compatible, except for some bias in parts of the analysis.
[3751] vixra:1605.0212 [pdf]
Electromagnetic Force Modification in Fault Current Limiters under Short-Circuit Condition Using Distributed Winding Configuration
The electromagnetic forces caused by short circuits, consisting of radial and axial components, impose mechanical damage and failures on the windings. Engineers have tried to decrease these forces using different techniques and innovations. Utilization of various kinds of winding arrangements is one of these methods, which enables transformers and fault current limiters to tolerate higher forces without a substantial increase in construction and fabrication costs. In this paper, a distributed winding arrangement is investigated in terms of axial and radial forces during short-circuit conditions in a three-phase FCL. To calculate the force magnitudes of the AC- and DC-supplied windings, a model based on the finite element method with a time-stepping procedure is employed. The three-phase AC- and DC-supplied windings are split into multiple sections for greater accuracy in calculating the forces. The simulation results are compared with a conventional winding arrangement in terms of leakage flux and radial and axial force magnitudes. The comparisons show that the distributed winding arrangement mitigates radial and especially axial force magnitudes significantly.
[3752] vixra:1605.0196 [pdf]
Is "Dark Energy" Just an Effect of Gravitational Time Dilation?
When an expanding uniform-density dust ball's radius doesn't sufficiently exceed the Schwarzschild value, its expansion rate will actually be increasing because the dominant gravitational time dilation effect diminishes as the dust ball expands. But such acceleration of expansion is absent in "comoving coordinates" because the "comoving" fixing of the 00 component of the metric tensor to unity extinguishes gravitational time dilation, as is evidenced in the "comoving" FLRW dust-ball model by the Newtonian form of its Friedmann equation of motion. Therefore we extend to all dust-ball initial conditions the singular Oppenheimer-Snyder transformation from "comoving" to "standard" coordinates which they carried out for a particular initial condition. In "standard" coordinates relativistic time dilation is manifest in the equations of motion of the dynamical radii of all of the dust ball's interior shells; the acceleration of expansion of the surface shell peaks when its radius is only fractionally larger than the dust ball's Schwarzschild radius. Even so, for a range of initial conditions a dust ball's expansion continues accelerating at all "standard" times, although that acceleration asymptotically decreases toward zero. Attempts to account for the observed acceleration of the expansion of the universe by fitting a nonzero "dark energy" cosmological constant thus seem to be quite unnecessary.
[3753] vixra:1605.0190 [pdf]
The Algorithm of the Thinking Machine
In this article we consider the questions 'What is AI?' and 'How do we write a program that satisfies the definition of AI?' It deals with the basic concepts and modules that must be at the heart of such a program. The most interesting concept discussed here is that of abstract signals. Each of these signals is related to the result of a particular experiment. The abstract signal is a function that at any time point returns the probability that the corresponding experiment returns true.
[3754] vixra:1605.0189 [pdf]
Linear Temporal Interpolation Method in ETM+ Using MODIS Data
The main objective of the present work was to obtain synthetic ETM+ images with improved temporal resolution using MODIS radiometry to expand the applicability of this method to environmental issues that require detailed monitoring over time. To do this, we needed to verify the consistency between the data provided by ETM+ and MODIS. We used images from these sensors taken on different dates and in different test areas. We designed and validated a spatial resampling method based on statistical parameters and a linear interpolation method for diachronic data. The results confirm the consistency between MODIS and ETM+ data and their dependence on the spatial variability of the information. They also show that it is possible to obtain images derived from MODIS with the spatial resolution of ETM+ using a simple and robust linear interpolation method. Both results broaden the scope of these sensors’ application to environmental issues.
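The per-pixel linear interpolation at the heart of such a scheme can be sketched as follows (a minimal illustration, assuming co-registered, already-resampled images and ignoring the paper's statistical resampling step; the array names and values are ours):

```python
# Per-pixel linear interpolation between two co-registered acquisitions:
# I(t) = I(t1) + (I(t2) - I(t1)) * (t - t1) / (t2 - t1)
import numpy as np

def interpolate_image(img1, t1, img2, t2, t):
    """Synthesize an image at date t from images at dates t1 <= t <= t2."""
    w = (t - t1) / (t2 - t1)
    return (1.0 - w) * img1 + w * img2

# toy 2x2 "reflectance" grids at day 0 and day 16
a = np.array([[0.10, 0.20], [0.30, 0.40]])
b = np.array([[0.20, 0.40], [0.30, 0.20]])
mid = interpolate_image(a, 0, b, 16, 8)
print(mid)  # the halfway date gives the pixel-wise mean of a and b
```

In the actual method the dense temporal sampling comes from MODIS while the fine spatial grid comes from ETM+; the sketch only shows the temporal weighting itself.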
[3755] vixra:1605.0188 [pdf]
Ghost Dbi-Essence in Fractal Geometry
Focusing on a fractal geometric ghost dark energy, we reconstruct the Dirac-Born-Infeld (DBI)-essence-type scalar field and find exact solutions of the potential and warped brane tension. We also discuss statefinders for the selected dark energy description to make it distinguishable among others.
[3756] vixra:1605.0180 [pdf]
Energy-Momentum in General Relativity
It is first shown that there is no exchange of energy-momentum during planetary motion. Then, starting with the field equations, it is shown that the gravitational field does not exchange energy-momentum with any form of matter. Conclusion: In general relativity, there is no gravitational energy, momentum, stress, force or power.
[3757] vixra:1605.0168 [pdf]
Parallax Triangulation from Displacement in Spacetime
By extending the classic concept of parallax as a system to triangulate distant stars, I propose that a displacement in spacetime can be used to triangulate distant galaxies. Such an empirical experiment would also validate, or invalidate, conventional relativistic Doppler effect theory. The practical procedure would involve analyzing SNe Ia supernova data from two separate spacetime reference frames, using two (essentially) simultaneous observations from different inertial reference frames. Both reference frames observe the same two supernova events (a) and (b), such that (b) is twice the distance (x) from the Earth as (a). I named this experiment "Spacetime Parallax", because the triangulation of the distance x is performed from a displacement in spacetime and the resulting time dilations are compared. The same format is then used for a quadratically accelerating reference frame, mimicking the (g) force on the Earth's surface. If the difference in wavelengths (Δλ), as measured between the two reference frames, is not in proportion to the distance between the two events ($x_b = 2x_a$), it is then justified to assume some error is evident in conventional methods. Correcting for such skewed Doppler shift observations has implications for all parameters of cosmology. This might include: dark energy, accelerated expansion, average density, the cosmic event horizon, as well as rotational velocities in general.
[3758] vixra:1605.0166 [pdf]
On Certain Aspects of American Economics Relevant to 2016
This paper seeks to shine a light on some glaring economic problems of contemporary society. Too often economic issues are framed in the context of moral wedges that divide people. Here we select issues for discussion that likely can be solved and do not strictly require the resolution of any difficult moral quandaries. We show that certain popular debates are not so interesting because sufficient evidence exists to identify the relevant premises as true and false. We suggest an economic program based in part on hypothetical new energy resources that should guide the United States and the Earth's other national states towards a more equitable valley in the space of all economic configurations. This paper is intended to be persuasive and not purely expository.
[3759] vixra:1605.0161 [pdf]
Rewriting General Relativity Based on Dark Energy
There are strong indications that general relativity is incomplete. Observational data that are taken as evidence for dark energy and dark matter could indicate the need for new physics. In this paper, general relativity is rewritten based on the X-particles of dark energy. The goal is to help classical mechanics and general relativity reconcile with the laws of quantum physics.
[3760] vixra:1605.0158 [pdf]
Projective Unified Field Theory today
A short exposition of the author's Projective Unified Field Theory (including empirical predictions) is presented in the following order: fundamental 5-dimensional physical laws within the projective space, and the projection of these basic laws onto space-time; the transition from this 4-dimensional complex of physical laws to the (better understandable) 3-dimensional version with new additional physical terms, prepared for physical interpretation (influence of cosmological expansion); and a numerical presentation of the predicted astrophysical and cosmological effects: anomalous motion of bodies according to the Pioneer effect, and anomalous rotation curves of bodies rotating around a gravitational centre. One should take note of my hypothetical result that the true origin of both of these effects (the Pioneer effect and the rotation-curve effect) seems to be the same cause, namely the overwhelming dark matter existing in our Universe.
[3761] vixra:1605.0152 [pdf]
Sequential Ranking Under Random Semi-Bandit Feedback
In many web applications, a recommendation is not a single item suggested to a user but a list of possibly interesting contents that may be ranked in some contexts. The combinatorial bandit problem has been studied quite extensively over the last two years, and many theoretical results now exist: lower bounds on the regret and asymptotically optimal algorithms. However, because of the variety of situations that can be considered, results are designed to solve the problem for a specific reward structure such as the Cascade Model. The present work focuses on the problem of ranking items when the user is allowed to click on several items while scanning the list from top to bottom.
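This setting can be illustrated with a toy simulation (not the paper's algorithm): a UCB-style learner ranks K items into L slots each round and receives semi-bandit feedback, i.e. an independent click observation for every displayed item. All probabilities and constants below are illustrative assumptions.

```python
# Toy top-L ranking under random semi-bandit feedback: each round we display
# the L items with the highest UCB index and observe a Bernoulli click (with
# the item's hidden attraction probability) for every displayed item.
import math
import random

random.seed(0)
K, L, ROUNDS = 8, 3, 5000
p_true = [0.9, 0.8, 0.7, 0.3, 0.25, 0.2, 0.15, 0.1]  # hidden attractions

clicks = [0] * K   # observed clicks per item
shows = [0] * K    # display counts per item

for t in range(1, ROUNDS + 1):
    def ucb(i):
        if shows[i] == 0:
            return float("inf")        # force initial exploration
        return clicks[i] / shows[i] + math.sqrt(1.5 * math.log(t) / shows[i])
    ranking = sorted(range(K), key=ucb, reverse=True)[:L]
    for i in ranking:                  # semi-bandit: feedback on every slot
        shows[i] += 1
        clicks[i] += random.random() < p_true[i]

best = sorted(range(K), key=lambda i: clicks[i] / shows[i], reverse=True)[:L]
print(best)  # the learner's final top-L estimate
```

With enough rounds the empirical top-L coincides with the truly best items; analyses like the paper's quantify how fast this happens for a given list-scanning click model.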
[3762] vixra:1605.0150 [pdf]
Impedance Representation of the S-matrix: Proton Structure and Spin from an Electron Model
The possibility of electron geometric structure is studied using a model based upon quantized electromagnetic impedances, and written in the language of geometric Clifford algebra. In such a model the electron is expanded beyond the point, to include the simplest possible objects in one, two, and three dimensions. These point, line, plane, and volume elements, quantized at the scale of the electron Compton wavelength and given the attributes of electric and magnetic fields, comprise a minimally complete Pauli algebra of flat 3D space. One can calculate quantized impedances associated with elementary particle spectrum observables (the S-matrix) from interactions between the eight geometric objects of this algebra - one scalar, three vectors, three bivector pseudovectors, and one trivector pseudoscalar. The resulting matrix comprises a Dirac algebra of 4D spacetime. Proton structure and spin are extracted via the dual character of scalar electric and pseudoscalar magnetic charges.
[3763] vixra:1605.0149 [pdf]
A Wave-Particle Duality Interpretation Based on Dark Energy
Every elementary particle or quantum entity may be partly described in terms not only of particles, but also of waves. This expresses the inability of the classical concepts "particle" and "wave" to fully describe the behavior of quantum-scale objects. In this paper, we show that X-particles of dark energy can create the wave appearance of a quantum-scale particle, and argue that the particle is indeed a particle. Our theory is deterministic and local, and is based on classical mechanics. The double-slit experiment and quantum entanglement are explained by the X-particle interpretation.
[3764] vixra:1605.0148 [pdf]
On the a-Posteriori Fourier Method for Solving Ill-Posed Problems
The Fourier method is a convenient regularization method for solving a class of ill-posed problems. This class of ill-posed problems can also be formulated as an ill-posed multiplication operator equation in the frequency domain. Recent work on Morozov's discrepancy principle for the Fourier method is discussed in [2]. In this paper, we investigate the Fourier method thoroughly within the framework of regularization theory for solving severely ill-posed problems. Many ill-posed examples are provided.
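The Fourier method amounts to damping or truncating the high frequencies at which the multiplication operator's symbol is small and noise is amplified. A minimal sketch for a 1-D deconvolution-type problem (the symbol, noise level, and cutoff are our illustrative assumptions, not taken from the paper):

```python
# Fourier (spectral cutoff) regularization: the forward map is multiplication
# by a decaying symbol in frequency, so naive inversion amplifies noise;
# truncating the high frequencies stabilizes the reconstruction.
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f_true = np.sin(x) + 0.5 * np.sin(3 * x)            # smooth unknown

freq = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi  # integer frequencies here
symbol = 1.0 / (1.0 + freq**2)                       # decaying symbol
g = np.fft.ifft(symbol * np.fft.fft(f_true)).real    # exact data
g_noisy = g + 1e-3 * rng.standard_normal(n)          # measurement noise

G = np.fft.fft(g_noisy)
naive = np.fft.ifft(G / symbol).real                 # unregularized inversion

cutoff = np.abs(freq) <= 5.0                         # keep low frequencies only
reg = np.fft.ifft(np.where(cutoff, G / symbol, 0.0)).real

err_naive = np.linalg.norm(naive - f_true)
err_reg = np.linalg.norm(reg - f_true)
print(err_reg < err_naive)  # True: truncation beats naive inversion
```

The cutoff plays the role of the regularization parameter; a posteriori rules such as Morozov's discrepancy principle choose it from the noise level rather than fixing it in advance as done here.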
[3765] vixra:1605.0146 [pdf]
Intuitionistic Fuzzy Hypermodules
The relationship between intuitionistic fuzzy sets and algebraic hyperstructures is described in this paper. The concept of the quasi-coincidence of an interval-valued intuitionistic fuzzy point with an interval-valued intuitionistic fuzzy set is introduced; this is a natural generalization of the quasi-coincidence of an intuitionistic fuzzy point in intuitionistic fuzzy sets. By using this new idea, the concept of an interval-valued (α, β)-intuitionistic fuzzy sub-hypermodule of a hypermodule is defined. This newly defined interval-valued (α, β)-intuitionistic fuzzy sub-hypermodule is a generalization of the usual intuitionistic fuzzy sub-hypermodule.
[3766] vixra:1605.0145 [pdf]
Solving Coupled Hirota System by Using Reduced Differential Transform Method
In this paper, the Reduced Differential Transform Method (RDTM) has been successfully used to find numerical solutions of the coupled Hirota system (CHS). The results obtained by RDTM are compared with exact solutions to reveal that the RDTM is very accurate and effective. In our work, Maple 13 has been used for the computations.
[3767] vixra:1605.0133 [pdf]
Dark Energy Forms a Gravitational Field Resulting in the Uncertainty Principle
As an origin of dark energy, an X-particle with repulsive force proportional to energy density has been proposed [1]. In this paper, we develop the X-particle theory further, and postulate how dark energy could form a ubiquitous gravitational field and inertial frames of reference, and why they might be the reason for the uncertainty principle. Like the photon, an X-particle has only relativistic mass, and acts like a particle that has a definite position and momentum. X-particles create spaces between themselves by forces of gravitational attraction and repulsion. However, unlike the photon, which travels through space, an X-particle only needs to pass signals to its neighboring particles to form the ubiquitous gravitational field. This model could explain how gravitational signals propagate at the speed of light, how their values are stored in X-particles, and why the uncertainty principle could arise from this.
[3768] vixra:1605.0125 [pdf]
Failure Mode and Effects Analysis Based on D Numbers and Topsis
Failure mode and effects analysis (FMEA) is a widely used technique for assessing the risk of potential failure modes in designs, products, processes, systems, or services. One of the main problems of FMEA is to deal with the variety of assessments given by FMEA team members and to rank the failure modes according to the degree of risk. Traditional FMEA uses the risk priority number (RPN), the product of the occurrence (O), severity (S), and detection (D) of a failure, to determine the risk priority ranking of failure modes. However, this becomes impractical when multiple experts give different risk assessments for one failure mode, which may be imprecise or incomplete, or when the weighting of the risk factors is inconsistent. In this paper, a new risk priority model based on D numbers and the technique for order of preference by similarity to ideal solution (TOPSIS) is proposed to evaluate risk in FMEA. In the proposed model, the assessments given by FMEA team members are represented by D numbers, a method that can effectively handle uncertain information. TOPSIS, a multi-criteria decision making (MCDM) method, is used to rank the failure modes with respect to the risk factors. Finally, an application to the failure modes of the rotor blades of an aircraft turbine is provided to illustrate the efficiency of the proposed method.
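The TOPSIS ranking step alone (without the paper's D-number aggregation) can be sketched as follows; the score matrix and weights are illustrative assumptions, with rows as failure modes and columns as the risk factors O, S, D:

```python
# Classic TOPSIS: normalize, weight, measure distances to the ideal and
# anti-ideal alternatives, and rank by the closeness coefficient.
import numpy as np

def topsis(scores, weights):
    """scores: (alternatives x criteria); here higher O/S/D means higher risk,
    so the 'ideal' alternative is the riskiest failure mode."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    v = norm * weights                               # weighted matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)       # riskiest / safest profile
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness to the ideal

# three hypothetical failure modes scored on O, S, D (1-10 scales)
scores = np.array([[7.0, 8.0, 6.0],
                   [3.0, 4.0, 2.0],
                   [5.0, 9.0, 4.0]])
weights = np.array([0.4, 0.4, 0.2])

cc = topsis(scores, weights)
print(np.argsort(-cc))  # failure modes ordered from highest to lowest risk
```

Unlike the scalar RPN, this ranking uses weighted distances in the full O-S-D space, so two modes with the same O×S×D product need not tie.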
[3769] vixra:1605.0109 [pdf]
Optimising Linear Seating Arrangements with a First-Principles Genetic Algorithm
We discuss the problem of finding an optimal linear seating arrangement for a small social network, i.e. approaching the problem put forth in XKCD comic 173: for a small social network, how can one determine the seating order in a row (e.g. at the cinema) that corresponds to maximum enjoyment? We begin by improving the graphical notation of the network, and then propose a method through which the total enjoyment for a particular seating arrangement can be quantified. We then discuss genetic programming, and implement a first-principles genetic algorithm in Python, in order to find an optimal arrangement. While the method did produce acceptable results, outputting an optimal arrangement for the XKCD network, it was noted that genetic algorithms may not be the best way to find such an arrangement. The results of this investigation may have tangible applications in the organising of social functions such as weddings.
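A first-principles genetic algorithm for this task can be sketched as below (our own minimal reconstruction, not the paper's code: total enjoyment is assumed to be the sum of a symmetric pairwise-enjoyment score over adjacent seats, and the GA uses elitist truncation selection with swap mutation):

```python
# Minimal genetic algorithm for linear seating: a candidate is a permutation
# of people; fitness sums the pairwise enjoyment of every adjacent pair.
import random

random.seed(1)
N = 6
# symmetric toy matrix: enjoy[i][j] = how much i and j like sitting together
enjoy = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        enjoy[i][j] = enjoy[j][i] = random.randint(0, 9)

def fitness(seating):
    return sum(enjoy[a][b] for a, b in zip(seating, seating[1:]))

def mutate(seating):
    s = seating[:]
    i, j = random.sample(range(N), 2)   # swap two seats
    s[i], s[j] = s[j], s[i]
    return s

pop = [random.sample(range(N), N) for _ in range(60)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:20]                # elitist truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

For N = 6 the result can be checked against brute force over all 720 permutations; for realistic party sizes only the GA (or another heuristic) remains feasible, which is where the approach earns its keep.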
[3770] vixra:1605.0108 [pdf]
Repulsive Force Proportional to Energy Density as an Origin of Dark Energy
An X-particle with a repulsive force proportional to energy density is postulated as an origin of dark energy. Like the photon, the particle has only relativistic mass (zero rest mass), and acts like a particle that has a definite position and momentum. X-particles create spaces between themselves by forces of gravitational attraction and repulsion, where the repulsive force is postulated to be proportional to energy density. The model could be applied to explain the Lambda-CDM model of dark energy, which fills space homogeneously, or scalar fields such as quintessence, whose energy density can vary in time and space.
[3771] vixra:1605.0091 [pdf]
Energy Shift of H-Atom Electrons Due to the Blackbody Photons
The electromagnetic shift of energy levels of H-atom electrons is determined by calculating the mean square amplitude of oscillation of an electron coupled to the relic photon fluctuations of the electromagnetic field. The energy shift of electrons in the H-atom is determined in the framework of non-relativistic quantum mechanics.
[3772] vixra:1605.0086 [pdf]
Bare Charge and Bare Mass in Quantum Electrodynamics
The existence of the bare mass and the bare charge in Quantum Electrodynamics is analyzed in terms of the Standard Model of particle physics. QED arises as a renormalized theory as a consequence of spontaneous symmetry breaking by the Englert-Brout-Higgs mechanism as $SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y} \,\rightarrow \,SU(3)_{C} \otimes U(1)_{em}$.
[3773] vixra:1605.0057 [pdf]
The Return of Absolute Simultaneity and Its Relationship to Einstein’s Relativity of Simultaneity
In this paper we show how Einstein's relativity of simultaneity is fully consistent with an anisotropic one-way speed of light. We get the same end result and observations as predicted by Einstein, but with a very different interpretation. We show that the relativity of simultaneity is an apparent effect due to an Einstein clock synchronization error, which is rooted in assuming that the one-way speed of light is the same as the well-tested round-trip speed of light. Einstein's relativity of simultaneity leads to several bizarre paradoxes recently introduced by Haug (2016a, 2016b).
[3774] vixra:1605.0055 [pdf]
An Investigation of the Energy Storage Properties of 2D α-MoO3-SWCNT Composite Films
2D α-MoO3 was synthesized using a facile, inexpensive, and scalable liquid-phase exfoliation method. 2D α-MoO3/SWCNT (85 wt%/15 wt%) composite films were manufactured by vacuum filtration and their energy storage properties were investigated in a LiClO4/propylene carbonate electrolyte in a 1.5 V to 3.5 V vs. Li+/Li electrochemical window. Cyclic voltammetry showed typical ion intercalation peaks of α-MoO3, and a capacitance of 200 F/g was achieved at 10 mV/s and 82 F/g at 50 mV/s. The composite electrodes achieved a capacitive charge storage of 375 C/g and a diffusion-controlled maximum charge storage of 703 C/g. The latter is superior to the charge storage achieved by previously reported mesoporous α-MoO3 (produced using more cumbersome multi-step templating methods) and by α-MoO3 nanobelts. This superior Li-ion intercalation charge storage was attributed to the shorter ion-transport paths of 2D α-MoO3 as compared to other nanostructures. Galvanostatic charge-discharge experiments showed a maximum charge storage of 123.0 mAh/g at a current density of 100 mA/g.
[3775] vixra:1605.0038 [pdf]
The Return of Absolute Simultaneity? A New Way of Synchronizing Clocks Across Reference Frames
This paper introduces a new way to synchronize distant clocks and a way to synchronize clocks between reference frames. Based on a simple synchronization thought experiment, we claim that relativity of simultaneity must be incomplete. Einstein's special relativity theory predicts relativity of simultaneity: two events that happen simultaneously in one reference frame will not happen simultaneously as observed from another reference frame. Relativity of simultaneity is directly linked to a particular way of synchronizing clocks (Einstein-Poincar\'{e} synchronization) that in turn assumes that the one-way speed of light is isotropic and identical with the well-tested round-trip speed of light. In the new thought experiment introduced here, there is reason to believe that two distant events can happen simultaneously in both frames. Still, we agree that these events will not happen simultaneously as measured with Einstein synchronized clocks. The claim here is that Einstein synchronized clocks lead to an apparent relativity of simultaneity due to a clock synchronization error rooted in the assumption of an isotropic one-way speed of light.
[3776] vixra:1605.0036 [pdf]
Fermi's Golden Rule: Its Derivation and Breakdown by an Ideal Model
Fermi's golden rule is of great importance in quantum dynamics. However, in many textbooks on quantum mechanics, its contents and limitations are obscured by the approximations and arguments in the derivation, which are inevitable because of the generic setting considered. Here we propose to introduce it by an ideal model, in which the quasi-continuum band consists of equidistant levels extending from $-\infty $ to $+\infty $, each of which couples to the discrete level with the same strength. For this model, the transition probability in the first order perturbation approximation can be calculated analytically by invoking the Poisson summation formula. It turns out to be a \emph{piecewise linear} function of time, demonstrating on one hand the key features of Fermi's golden rule, and on the other hand that the rule breaks down beyond the Heisenberg time, even when the first order perturbation approximation itself is still valid.
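The piecewise-linear behaviour described in the abstract is easy to check numerically. Below is a minimal sketch (our own illustration, not the authors' code) that sums the first-order probabilities $|c_n(t)|^2 = 4g^2\sin^2(E_n t/2)/E_n^2$ over a truncated equidistant band $E_n = n\Delta$ ($\hbar = 1$), with uniform coupling $g$; the values of $g$, $\Delta$ and the truncation are hypothetical choices:

```python
import numpy as np

def transition_probability(t, g=0.01, delta=1.0, n_max=20000):
    """First-order transition probability out of a discrete level at E = 0,
    coupled with equal strength g to an equidistant band E_n = n*delta
    (hbar = 1), truncated at |n| = n_max."""
    n = np.arange(-n_max, n_max + 1)
    E = n * delta
    denom = np.where(n == 0, 1.0, E)          # avoid 0/0 at the n = 0 term
    amp2 = 4.0 * g**2 * np.sin(E * t / 2.0) ** 2 / denom**2
    amp2[n == 0] = (g * t) ** 2               # limiting value of the n = 0 term
    return amp2.sum()
```

For $t$ below the Heisenberg time $2\pi/\Delta$ the sum reproduces the golden-rule line $P(t) = 2\pi g^2 t/\Delta$, and the linearity in $t$ can be checked directly, e.g. $P(2) \approx 2P(1)$.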
[3777] vixra:1604.0392 [pdf]
The Multitude Behind the Buddhabrot
A terminological framework is proposed for the mathematical examination and analysis of the Mandelbrot set's correlative ectocopial set. The Apeiropolis and anthropobrot multisets are defined and explained to be the mathematical entities underlying the well-known Buddhabrot visualization. The definitions are presented as tools conducive to finding novel approaches and generating discoveries that might otherwise be missed via a primarily programmatic approach. The anthropobrot multisets are introduced as a new, infinite repository of unique pareidolic figures as richly diverse as the Julia sets.
[3778] vixra:1604.0382 [pdf]
Small Nonassociative Corrections to the Susy Generators and Cosmological Constant
Small nonassociative corrections for the SUSY operators $Q_{a, \dot a}$ are considered. The smallness is controlled by the ratio of the Planck length and a characteristic length $\ell_0 = \Lambda^{-1/2}$. Corresponding corrections of the momentum operator arising from the anticommutator of the SUSY operators are considered. The momentum operator corrections are defined via the anticommutator of the unperturbed SUSY operators $Q_{a, \dot a}$ and nonassociative corrections $Q_{1, a, \dot a}$. Choosing different anticommutators, one can obtain either a modified or $q$ -- deformed commutator of position $x^\mu$ and momentum operators $P_\nu$.
[3779] vixra:1604.0377 [pdf]
The Conscious Spiritual-Energy Quanta (CSE Quanta) - The Physico-Mathematical Formalism
Consciousness is projected as a property of a particle termed "The Conscious Spiritual-Energy Quanta (CSE Quanta)". This paper assumes a correspondence between the physical laws and meta-physical laws. Meta-wave functions of CSE Quanta, the meditative state, rapture, avatar, and creation and annihilation operators for CSE Quanta are proposed. The concept of Meta-Bose-Einstein condensation during meditation is presented. A fractal sub-structure of CSE Quanta and a mechanism for the storage of an infinite amount of information in it are proposed. This paper is part of a paper presented at the "Conference on Science of Consciousness, Tucson, Arizona, USA, April 2016". The original paper is entitled "REVEALING THE REAL SCIENCE OF CONSCIOUSNESS THROUGH A NOVEL DIVINE SACRED GEOMETRICAL STRUCTURE OF CONSCIOUS QUANTA".
[3780] vixra:1604.0365 [pdf]
A Simple Experiment That Unequivocally Verifies Relativity
Although many experiments have been cited as empirical verification of special relativity, none of them is beyond question as to its validity as proof. There is one simple experiment that could be carried out to put to rest, incontrovertibly, the controversies concerning the validity of special relativity: a direct measurement of the speed of beta particles from beta decay. If special relativity is correct, no electrons would be found to go faster than light speed. If special relativity is invalid, then kinetic energy obeys the old $1/2\,mv^2$ formula. Some beta particles could be ejected with kinetic energy greater than 1 MeV; such electrons would be detected to travel at about twice the speed of light. Though it is one of the simplest experiments that many laboratories over the world could perform, until now the experiment has yet to be carried out.
[3781] vixra:1604.0364 [pdf]
The G\"{o}del Universe and the Einstein-Podolsky-Rosen Paradox
The notion of causality in the G\"{o}del universe is compared with what is implied by Einstein-Podolsky-Rosen (EPR) type experiments in quantum mechanics. The redshift of light from distant galaxies is explained by employing Segal's compact time coordinate in the G\"{o}del universe, which indeed was considered by G\"{o}del in his seminal paper. Various possibilities for the rotation of the universe are discussed. It is noteworthy that in recent years the scope of research on the G\"{o}del universe has expanded to include topics in string theory, supersymmetry and the embedding of black holes.
[3782] vixra:1604.0363 [pdf]
A Note on The Reduced Mass
In this note we are rewriting the reduced mass formula into a form that potentially gives more intuition on what is truly behind the reduced mass.
[3783] vixra:1604.0322 [pdf]
The Identity of the Inertial Mass with the Gravity Mass
The motion of the mathematical extra-terrestrial pendulum is considered in a spherical gravitational field. The potential energy of the pendulum bob is approximated to give the nonlinear equation of motion. After solving it by the Landau-Migdal method we obtain the frequency of motion and the swing amplitude. The crucial point of our experiment is that if the inertial mass m depends on its distance from the reference body M, then the calculated frequency of our pendulum is not identical with the measured frequency. This crucial knowledge from the pendulum project can have a substantial influence on the future development of general relativity and the physics of elementary particles.
[3784] vixra:1604.0301 [pdf]
Counter-Examples to the Kruskal-Szekeres Extension: In Reply to Jason J. Sharples
Jason J. Sharples of the University of New South Wales wrote an undated article titled ‘On Crothers’ counter-examples to the Kruskal-Szekeres extension’, in which he asserted, “The claims relating to the counter-examples made by the author in [1] about the invalidity of the Kruskal-Szekeres extension and the Schwarzschild black hole are completely erroneous.” However, Sharples failed to understand the arguments I adduced and consequently committed serious errors in both mathematics and physics. In his endeavour to prove me ‘erroneous’, Sharples introduced what he calls “an inverted radial coordinate”. Contrary to Sharples’ allegation, there is no ‘inverted radial coordinate’ involved. Sharples simply failed to comprehend the geometry of the problem. Sharples’ mathematical proof that I am ‘erroneous’ violates the rules of pure mathematics. The Kruskal-Szekeres extension and the Schwarzschild black hole it facilitates are fallacious because they violate the rules of pure mathematics.
[3785] vixra:1604.0300 [pdf]
The Mathematical Foundations of Quantum Indeterminacy
In 2008, Tomasz Paterek et al published ingenious research, proving that quantum randomness is the output of measurement experiments whose input commands a logically independent response. Following up on that work, this paper develops a full mathematical theory of quantum indeterminacy. I explain how the Paterek experiments imply that the measurement of pure eigenstates and the measurement of mixed states cannot both be isomorphically and faithfully represented by the same single operator. Specifically, the unitary representation of pure states is contradicted by the Paterek experiments. Profoundly, this denies the axiomatic status of the Quantum Postulates that state that symmetries are unitary and observables Hermitian. Here, I show how indeterminacy is the information of transition from pure states to mixed. I show that the machinery of that transition is unpreventable, logically circular, unitary-generating self-reference: all logically independent. Profoundly, this indeterminate system becomes apparent as a visible feature of the mathematics when unitarity --- imposed by Postulate --- is given up and abandoned.
Keywords: foundations of quantum theory, quantum mechanics, quantum randomness, quantum indeterminacy, quantum information, prepared state, measured state, pure states, mixed states, unitary, redundant unitarity, orthogonal, scalar product, inner product, mathematical logic, logical independence, self-reference, logical circularity, mathematical undecidability.
[3786] vixra:1604.0297 [pdf]
Standard Model Matter Emerging from Spacetime Preons
I consider a statistical mechanical model for black holes as atoms of spacetime, with the partition function sum taken over area eigenvalues as given by loop quantum gravity. I propose a unified structure for matter and spacetime by applying the area eigenvalues to a black hole composite model for quarks and leptons. A gravitational baryon number non-conservation mechanism is predicted. An argument is given for a unified field theory based on gravitational and electromagnetic interactions only. The non-Abelian gauge interactions of the standard model are briefly discussed.
[3787] vixra:1604.0281 [pdf]
The Extra-Terrestrial Pendulum in the Gravity Field
The motion of the mathematical extra-terrestrial pendulum is considered in a spherical gravitational field. The potential energy of the pendulum bob is approximated by the linear term mgh and an additional quadratic term in h, where h is the height of the pendulum bob over the reference point. The nonlinear equation of motion of the pendulum is solved by the Landau-Migdal method to obtain the frequency of motion and the swing amplitude. While the Foucault pendulum bob moves over a sand surface, our pendulum bob moves in the ionosphere. It is not excluded that the pendulum project will be an integral part of NASA's cosmic physics.
[3788] vixra:1604.0251 [pdf]
Generalized Uncertainty Relations, Curved Phase-Spaces and Quantum Gravity
Modifications of the Weyl-Heisenberg algebra $ [ { \bf x}^i, {\bf p}^j ] = i \hbar g^{ij} ( {\bf p } ) $ are proposed where the classical limit $g_{ij} ( p ) $ corresponds to a metric in (curved) momentum spaces. In the simplest scenario, the $ 2D$ de Sitter metric of constant curvature in momentum space furnishes a hierarchy of modified uncertainty relations leading to a minimum value for the position uncertainty $ \Delta x $. The first uncertainty relation of this hierarchy has the same functional form as the $stringy$ modified uncertainty relation, with a Planck scale minimum value for $ \Delta x = L_P $ at $ \Delta p = p_{Planck} $. We proceed with a discussion of the most general curved phase space scenario (cotangent bundle of spacetime) and provide the noncommuting phase space coordinates algebra in terms of the symmetric $ g_{ ( \mu \nu ) } $ and nonsymmetric $ g_{ [ \mu \nu ] } $ metric components of a Hermitian complex metric $ g_{ \mu \nu} = g_{ ( \mu \nu ) } + i g_{ [ \mu \nu ] } $, such that $ g_{ \mu \nu} = (g_{ \nu \mu})^*$. Yang's noncommuting phase-space coordinates algebra, combined with the Schrodinger-Robertson inequalities involving angular momentum eigenstates, reveals how a quantized area operator in units of $ L_P^2$ emerges, as occurs in Loop Quantum Gravity (LQG). Some final comments are made about Fedosov deformation quantization, Noncommutative and Nonassociative gravity.
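For orientation, the Planck-scale minimum quoted in the abstract follows from a one-line minimization of a stringy-type relation; the normalization of the correction term below is our own illustrative choice, not necessarily the paper's:

```latex
% Stringy-type generalized uncertainty relation (illustrative normalization):
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}\Bigl(1 + \frac{L_P^2}{\hbar^2}(\Delta p)^2\Bigr)
\;\Longrightarrow\;
\Delta x \;\geq\; \frac{\hbar}{2\,\Delta p} + \frac{L_P^2}{2\hbar}\,\Delta p .
% Minimizing the right-hand side over \Delta p:
-\frac{\hbar}{2(\Delta p)^2} + \frac{L_P^2}{2\hbar} = 0
\;\Longrightarrow\;
\Delta p = \frac{\hbar}{L_P} = p_{Planck},
\qquad
\Delta x_{\min} = \frac{L_P}{2} + \frac{L_P}{2} = L_P .
```

This reproduces the abstract's statement that $\Delta x = L_P$ at $\Delta p = p_{Planck}$.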
[3789] vixra:1604.0228 [pdf]
Charged Particle Radiation Power at the Planck Scale: One Force and One Power?
In this paper we show that the Larmor formula at the Planck scale is simply the Planck power multiplied by $\frac{1}{2\pi}$. The Larmor formula is used to describe the total power radiated by charged particles that are accelerating or decelerating. Haug (2016) has recently shown that Coulomb's electrostatic force is the same (at least mathematically) as the gravitational force at the Planck scale. The findings in this paper strengthen the argument that electricity is not so special and that at the Planck scale we likely have only one force, and thereby only one power as well.
[3790] vixra:1604.0211 [pdf]
Do Quanta Violate the Equation 0 = 0 ?
Ever since the celebrated 1964 paper of John Bell, the statement that "Quantum systems violate the Bell inequalities", [1,2], has enjoyed very large support among quantum physicists, as well as among others claiming some knowledge about quanta. Amusingly, it has so far escaped general notice that if, indeed, quanta do violate the Bell inequalities, then - due to elementary facts of Logic - they must also violate {\it all} other valid mathematical relations, thus among them the equation 0 = 0. Here the respective elementary facts of Logic are presented.
[3791] vixra:1604.0208 [pdf]
Unification of Gravity and Electromagnetism: GravityElectroMagnetism: A Probability Interpretation of Gravity
In this paper I will first show that Coulomb's electrostatic force formula is mathematically exactly the same as Newton's universal gravitational force at the very bottom of the rabbit hole --- that is for two Planck masses. Still, the electrostatic force is much stronger than the gravity force when we are working with any non-Planck masses. We show that the difference in strength between the gravity and the electromagnetism is likely due to the fact that electromagnetism can be seen as aligned matter (``superimposed" gravity), and standard gravity is related to non-aligned matter (waves). Mathematically the difference between gravity and electromagnetism is simply linked to a joint probability factor; this is probably one for aligned matter (electromagnetism) and is close to zero for gravity. Actually, the dimensionless gravitational coupling constant is directly related to this gravitational probability factor. Based on this new view, we claim to have unified electromagnetism and gravity. This paper could have major implications for our entire view on physics from the largest to the smallest scales. For example, we show that electron voltage and ionization can basically be calculated from the Newtonian gravitational escape velocity when it is adjusted to take aligned matter (electromagnetism) into account. Up until now, we have had electromagnetism and gravity; from now on there is GravityElectroMagnetism!
[3792] vixra:1604.0204 [pdf]
On Zeros of Some Entire Functions
Let \begin{equation*} A_{q}^{(\alpha)}(a;z)=\sum_{k=0}^{\infty}\frac{(a;q)_{k}q^{\alpha k^2} z^k}{(q;q)_{k}}, \end{equation*} where $\alpha >0,~0<q<1.$ In a paper, Ruiming Zhang asked under what conditions the zeros of the entire function $A_{q}^{(\alpha)}(a;z)$ are all real, and established some results on the zeros of $A_{q}^{(\alpha)}(a;z)$ which give a partial answer to that question. In the present paper, we establish results on certain entire functions, including that $A_{q}^{(\alpha)}(q^l;z),~l\geq 2$ has only negative zeros, infinitely many in number, which gives a partial answer to Zhang's question. In addition, we establish some results on zeros of certain entire functions involving the Rogers-Szeg\H{o} polynomials and the Stieltjes-Wigert polynomials.
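A quick numerical sketch (our own illustration) of $A_q^{(\alpha)}(a;z)$: the factor $q^{\alpha k^2}$ makes the series converge extremely fast, so a short partial sum suffices. For the sample parameters $a = q^2$, $q = 1/2$, $\alpha = 1$, a sign change along the negative real axis exhibits one of the negative zeros the abstract describes:

```python
def qpoch(a, q, k):
    """q-Pochhammer symbol (a; q)_k = prod_{j=0}^{k-1} (1 - a*q^j)."""
    p = 1.0
    for j in range(k):
        p *= 1.0 - a * q**j
    return p

def A(a, z, q=0.5, alpha=1.0, terms=60):
    """Partial sum of A_q^(alpha)(a; z); q^(alpha*k^2) forces fast convergence."""
    return sum(qpoch(a, q, k) * q**(alpha * k * k) * z**k / qpoch(q, q, k)
               for k in range(terms))
```

With these parameters $A$ is positive at $z = -1.7$ and negative at $z = -1.8$, so a zero lies on the negative real axis between them, consistent with the case $a = q^l$, $l \geq 2$.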
[3793] vixra:1604.0200 [pdf]
The Density of Primes
The prime numbers have a very irregular pattern. The problem of finding a pattern in the prime numbers is a long-open problem in mathematics. In this paper, we try to approach the problem axiomatically, and we propose some natural properties of prime numbers.
[3794] vixra:1604.0198 [pdf]
A Note on The Dimensionless Gravitational Coupling Constant
In this paper we rewrite the gravitational coupling constant in a slightly different form than has been shown before (without changing its value). This makes it simpler to understand what is meant, and what is not meant, by a “dimensionless gravitational coupling constant”.
[3795] vixra:1604.0157 [pdf]
Bell Inequalities ?
Recently in [3] it was shown that the so-called Bell Inequalities are {\it irrelevant} in physics, to the extent that they are in fact {\it not} violated either by classical or by quantum systems. This, as is well known, is contrary to the claim of John Bell that the mentioned inequalities {\it would be} violated in certain quantum contexts. The relevant point to note in [3] in this regard is that Bell's claim, which enjoys wide acceptance among quantum physicists, is due to a most simple, elementary and trivial {\it mistake} in handling some of the involved statistical data. A brief presentation, simplified perhaps to the maximum extent that still preserves the essence of that mistake, can be found in [10]; see also [9]. The present paper tries to help quantum physicists find a way to the understanding of the above, an understanding which, typically, is obstructed by an instant and immense amount and variety of ``physical intuitions" with their mix of ``physics + philosophy" considerations which, like an unstoppable avalanche, end up making a hopeless situation from one which, on occasion, may in fact be quite simple and clear, as shown in [3] to actually happen with the Bell Inequalities story. The timeliness of such an attempt here, needless to say not the first regarding the Bell Inequalities story, is again brought to the fore by no less than {\it three} freshly claimed fundamental contributions to the Bell Inequalities story [4,5,13], described and commented upon in some detail in [6].
[3796] vixra:1604.0156 [pdf]
Divergence Free Non-Linear Scalar Model
We present results of applying our divergence-free effective action quantum field theory techniques to a scalar model with nonlinear interactions governed by a dimensional coupling constant. This gives an example of the applicability of our divergence-free methods, and the viability of theories that are often disregarded due to the outstanding problem of nonrenormalizable divergences. Our results demonstrate that the (Goldstone) scalar would remain massless in the effective quantum action, while the original vertices, governed by nonlinear invariance, would preserve their form.
[3797] vixra:1604.0116 [pdf]
Generally Covariant Quantum Theory: Examples.
In a previous paper of this author [1], I introduced a novel way of looking at and extending quantum field theory to a general curved spacetime satisfying mild geodesic conditions. The aim of this paper is to further extend the theory and clarify the construction from a physical point of view; in particular, we will study the example of a single particle propagating in a general external potential from two different points of view. The reason why we do this is mainly historical, given that the interacting theory is after all well defined by means of interaction vertices and the Feynman propagator and therefore also applicable to this range of circumstances. However, it is always a pleasure to study the same question from different points of view, and that is the aim of this paper.
[3798] vixra:1604.0115 [pdf]
Divergence-Free Quantum Gravity in a Scalar Model
We present results of applying our divergence-free effective action quantum field theory techniques to a scalar model with gravity-like, non-polynomial interactions characterized by a dimensional coupling constant. This treatment would give a clear perspective regarding the viability of applying the divergence-free approach to quantum gravity. Issues regarding the masslessness of the effective graviton, while the virtual counterpart is massive, as well as, regarding the invariance of the basic Lagrangian, are discussed.
[3799] vixra:1604.0080 [pdf]
On Neutrosophic Refined Sets and Their Applications in Medical Diagnosis
In this paper, we present some definitions of neutrosophic refined sets, such as union, intersection, convexity and strong convexity, in a new way to handle indeterminate information and inconsistent information. We also examine some desired properties of neutrosophic refined sets based on these definitions. Then, we give distance measures of neutrosophic refined sets together with their properties. Finally, an application of neutrosophic refined sets is given to a medical diagnosis problem (heart disease diagnosis) to illustrate the advantage of the proposed approach.
[3800] vixra:1604.0079 [pdf]
Neutrosophic Cubic Sets
The aim of this paper is to extend the concept of cubic sets to neutrosophic sets. The notions of truth-internal (indeterminacy-internal, falsity-internal) neutrosophic cubic sets and truth-external (indeterminacy-external, falsity-external) neutrosophic cubic sets are introduced, and related properties are investigated.
[3801] vixra:1604.0048 [pdf]
Soft Neutrosophic Semigroup and Their Generalization
Soft set theory is a general mathematical tool for dealing with uncertain, fuzzy, not clearly defined objects. In this paper we introduce the soft neutrosophic semigroup, soft neutrosophic bisemigroup and soft neutrosophic N-semigroup, with a discussion of some of their characteristics.
[3802] vixra:1604.0009 [pdf]
Estimating Spatial Averages of Environmental Parameters Based on Mobile Crowdsensing
Mobile crowdsensing can facilitate environmental surveys by leveraging sensor-equipped mobile devices that carry out measurements covering a wide area in a short time, without bearing the costs of traditional field work. In this paper, we examine statistical methods to perform an accurate estimate of the mean value of an environmental parameter in a region, based on such measurements. The main focus is on estimates produced by considering the mobile device readings at a random instant in time. We compare stratified sampling with different stratification weights to sampling without stratification, as well as an appropriately modified version of systematic sampling. Our main result is that stratification with weights proportional to stratum areas can produce significantly smaller bias, and gets arbitrarily close to the true area average as the number of mobiles increases, for a moderate number of strata. The performance of the methods is evaluated for an application scenario where we estimate the mean area temperature in a linear region that exhibits the so-called {\it Urban Heat Island} effect, with mobile users moving in the region according to the Random Waypoint Model.
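The bias reduction described above can be sketched in a few lines. The example below is our own toy illustration (hypothetical temperature field and user distribution, not the paper's evaluation scenario): users cluster away from a heat-island hot spot, so the plain sample mean is biased, while the area-weighted stratified estimator tracks the true area average:

```python
import numpy as np

rng = np.random.default_rng(0)

def temperature(x):
    # Hypothetical 1-D field: a heat-island bump at x = 0.25 on a cooler baseline.
    return 20.0 + 10.0 * np.exp(-((x - 0.25) / 0.1) ** 2)

# True area average over the linear region [0, 1] (fine uniform grid).
true_avg = temperature(np.linspace(0.0, 1.0, 200_001)).mean()

# Mobile users cluster towards x = 1 (density 2x), away from the hot spot.
x = rng.beta(2.0, 1.0, size=2000)
readings = temperature(x)

naive = readings.mean()  # plain sample mean, biased by the user distribution

# Stratified estimator: K equal-width strata, weights proportional to
# stratum area (1/K each), averaging the within-stratum sample means.
K = 10
strata = np.minimum((x * K).astype(int), K - 1)
stratified = np.mean([readings[strata == k].mean() for k in range(K)])
```

Here the naive mean underweights the hot spot, while the stratified estimate lands much closer to the true area average, mirroring the paper's main result.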
[3803] vixra:1604.0001 [pdf]
General One-Sided Clifford Fourier Transform, and Convolution Products in the Spatial and Frequency Domains
In this paper we use the general steerable one-sided Clifford Fourier transform (CFT) and relate the classical convolution of Clifford algebra-valued signals over $\R^{p,q}$ with the (equally steerable) Mustard convolution. A Mustard convolution can be expressed in the spectral domain as the pointwise product of the CFTs of the factor functions. In full generality, we express the classical convolution of Clifford algebra signals in terms of a linear combination of Mustard convolutions, and vice versa the Mustard convolution of Clifford algebra signals in terms of a linear combination of classical convolutions.
[3804] vixra:1603.0421 [pdf]
The Relativistic Solenoid
It is a mistake to believe that the primitive experiments known as the origin of the physical sciences have been sufficiently studied, and that it is therefore impossible to extract from them some new and important knowledge. This view has contributed to the perpetuation of some misconceptions. The study of such experiments from other points of view, applying new techniques, makes it possible to expand their meaning and understanding. Einstein must have thought this way, since in 1905 he decided to study the Faraday disk and, by doing so, discovered the theory of relativity, according to which the magnetic field is a consequence of the relative motion of electric charges of different signs. The verifications of the theory of general relativity by cosmological experiments have led to the belief that the special relativity theory is irrelevant at terrestrial dimensions and speeds. Therefore, it is important to correct this error by simple laboratory experiments whose explanation is only possible by using the special relativity theory.
[3805] vixra:1603.0416 [pdf]
Experiments with Powerful Neodymium Magnets: Magnetic Repulsion. Phenomenological Equivalences: The Reverse Casimir Effect (Repulsive Spherical Shell) and the Spherical Shell of the Macroscopic Universe (Observable Sphere of the Universe)
The experiments conducted by the author demonstrate the physical equivalence between the repulsion between two powerful neodymium magnets, the reverse Casimir effect (at the nanoscale) and the macroscopic scale (the spherical shell of the actual observable Universe). When the weight of this repulsive force is measured on an electronic balance, it causes the appearance of a fictitious mass dependent on the repulsive force between the two magnets. One of the magnets is positioned above the balance, while the other is slowly moved along the perpendicular axis linking the centers of both circular disk magnets. There is no difference between this experiment and the physical results of the experiments carried out at the microscopic level and measured experimentally: the reverse Casimir effect of a conducting spherical shell. The actual behaviour of the Universe at macroscopic scales shows the manifestation of an accelerated expansion and the emergence of a fictitious mass which does not exist: the so-called dark matter. The three physical phenomena with identical results are equivalent, so they could have a common physical origin. In the article we have inserted links to videos uploaded to YouTube that show the whole experimental process and its results. The last experiment is made with another balance, better shielded against magnetic interference, and with the magnet placed over the balance. The videos are explained in Spanish; English subtitles are welcome.
[3806] vixra:1603.0412 [pdf]
Reply to the Article ‘Watching the World Cup’ by Jason J. Sharples
In 2010 Jason J. Sharples, an Associate Professor at the University of New South Wales, wrote an article titled ‘Watching the World Cup’. Despite the title, the article addresses a number of papers and articles refuting the theories of black holes and Big Bang cosmology written by Stephen J. Crothers. In his article, Sharples has committed several major errors, and resorted to language unbefitting a publicly funded professorship when addressing the person of Crothers. After some rolling preamble, Sharples disputes two matters addressed by Crothers: (a) Einstein’s Principle of Equivalence, (b) Einstein’s pseudotensor. In the first case Sharples incorrectly argues that multiple arbitrarily large finite masses are not involved in its definition. In the second case he failed to understand the problem and thereby expounded upon an entirely different matter that was never contested by Crothers in the first place - Sharples confounded the Einstein tensor for Einstein’s pseudotensor and consequently did not even address the issue.
[3807] vixra:1603.0411 [pdf]
General Relativity and Gravity from Heisenberg's Potentia in Quantum Mechanics
Recently we have provided a physically consistent and a mathematically justified ontological model of Heisenberg's suggested "potentia" in quantum mechanics. What arises is that parallel to the real three dimensional $SO(3)_l$ space there is a coexisting dual space called potentia space $SO(3)_p$, wherein velocity $c \rightarrow \infty$. How does this affect gravity? We show here that gravity actually sits in the space of potentia. The space of potentia does not allow gauging. Thus gravity is not quantized.
[3808] vixra:1603.0404 [pdf]
General Covariance, a New Paradigm for Relativistic Quantum Theory.
We offer a new look at multiparticle theory, which was initiated in a recent philosophical paper [1] of the author. To accomplish this, we start with a revision and extension of the single particle theory, both relativistically and nonrelativistically. Standard statistics gets an interpretation in terms of symmetry properties of the two point function, and any reference to all existing quantization schemes is dropped. As I have repeatedly stated, and as was also beautifully explained by Weinberg, there is no a priori rationale why quantum field theory should take the form it does in a curved spacetime; there is no reason why the straightforward generalizations of the Klein-Gordon and Dirac theory should have something to do with the real world. Perhaps, if we were to look differently at the flat theory, a completely satisfactory class of relativistic quantum theories would emerge. These may not have anything to do with quantum fields at all, except in some limit.
[3809] vixra:1603.0392 [pdf]
A Computational Proof of Locality in Entanglement.
In this paper the design and coding of a local hidden-variables model is presented that violates the Clauser-Horne-Shimony-Holt (CHSH) inequality $|CHSH| \leq 2$. Numerically, we find with our local computer program CHSH $\approx 1 + \sqrt{2}$.
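For context, the CHSH statistic itself is straightforward to compute from recorded $\pm 1$ outcomes. The sketch below (our own illustration, not the authors' program) simulates a textbook deterministic local hidden-variables model; for such models the standard combination of correlators is known to stay within the $|S| \leq 2$ bound, and at the angles used here it sits exactly at the boundary, $S = -2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
lam = rng.uniform(0.0, 2.0 * np.pi, n)        # shared hidden variable per pair

def outcome(angle, lam, sign=1):
    # Deterministic +/-1 response depending only on the local setting
    # and the shared hidden variable (a textbook local model).
    return sign * np.sign(np.cos(angle - lam))

def E(a, b):
    """Correlator E(a, b) estimated from the simulated pairs."""
    return np.mean(outcome(a, lam) * outcome(b, lam, sign=-1))

# Standard CHSH setting angles.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
```

This model produces the sawtooth correlation $E(a,b) = -(1 - 2|a-b|/\pi)$, so each correlator equals $-1/2$ except the last, giving $S = -2$ up to sampling noise.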
[3810] vixra:1603.0390 [pdf]
The Collapse of the Schwarzschild Radius: The End of Black Holes
In this paper we introduce an exact escape velocity that also holds under very strong gravitational fields, even below the Schwarzschild radius. The standard escape velocity known from modern physics is only valid under weak gravitational fields. This paper strongly indicates that an extensive series of interpretations around the Schwarzschild radius are wrong, developed as a result of using an approximate escape velocity that is not accurate when we approach strong gravitational fields. Einstein's general relativity escape velocity, as well as the gravitational time dilation and gravitational redshift that are derived from the Schwarzschild metric, need to be modified; in reality, they are simply approximations that only give good predictions in low gravitational fields. This paper could have major implications for gravitational physics as well as for a long series of interpretations in cosmology.
[3811] vixra:1603.0389 [pdf]
Guide to Dynamical Space and Emergent Quantum Gravity: Experiments and Theory
This report provides a brief outline and literature listing dealing with the discovery of the existence of Dynamical Space, and the subsequent generalisation of Maxwell Electromagnetic Theory, Schrodinger and Dirac Quantum Theory, and the emergence of Gravity as a quantum effect. This amounts to a unified theory of gravity and quantum phenomena. All theory developments have been experimentally and observationally checked.
[3812] vixra:1603.0386 [pdf]
Communication Optimization of Parallel Applications in the Cloud
One of the most important aspects that influences the performance of parallel applications is the speed of communication between their tasks. To optimize communication, tasks that exchange lots of data should be mapped to processing units that have a high network performance. This technique is called communication-aware task mapping and requires detailed information about the underlying network topology for an accurate mapping. Previous work on task mapping focuses on network clusters or shared memory architectures, in which the topology can be determined directly from the hardware environment. Cloud computing adds significant challenges to task mapping, since information about network topologies is not available to end users. Furthermore, the communication performance might change due to external factors, such as different usage patterns of other users. In this paper, we present a novel solution to perform communication-aware task mapping in the context of commercial cloud environments with multiple instances. Our proposal consists of a short profiling phase to discover the network topology and speed between cloud instances. The profiling can be executed before each application start as it causes only a negligible overhead. This information is then used together with the communication pattern of the parallel application to group tasks based on the amount of communication and to map groups with a lot of communication between them to cloud instances with a high network performance. In this way, application performance is increased, and data traffic between instances is reduced. We evaluated our proposal in a public cloud with a variety of MPI-based parallel benchmarks from the HPC domain, as well as a large scientific application. In the experiments, we observed substantial performance improvements (up to 11 times faster) compared to the default scheduling policies.
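The mapping idea can be sketched in a few lines. The code below is a toy simplification under assumed data (the communication matrix, instance speeds, and the greedy pairing heuristic are all illustrative, not the authors' algorithm): tasks that exchange the most data are grouped, and heavy groups go to fast instances.

```python
# Toy communication-aware mapping (illustrative data and greedy heuristic,
# my own simplification -- not the authors' algorithm).

comm = {  # bytes exchanged between task pairs (symmetric, upper triangle)
    (0, 1): 900, (0, 2): 10, (0, 3): 20,
    (1, 2): 15, (1, 3): 30, (2, 3): 800,
}

def greedy_pairs(comm):
    """Greedily pair tasks by descending communication volume."""
    used, groups = set(), []
    for (i, j), vol in sorted(comm.items(), key=lambda kv: -kv[1]):
        if i not in used and j not in used:
            groups.append(((i, j), vol))
            used.update((i, j))
    return groups

def map_groups(groups, instance_speeds):
    """Map the heaviest-communicating group to the fastest instance,
    so intra-group traffic stays on the best links."""
    fastest_first = sorted(range(len(instance_speeds)),
                           key=lambda k: -instance_speeds[k])
    mapping = {}
    for rank, (pair, _vol) in enumerate(sorted(groups, key=lambda g: -g[1])):
        for task in pair:
            mapping[task] = fastest_first[rank]
    return mapping

groups = greedy_pairs(comm)
# In the paper's setting, speeds would come from the short profiling phase.
mapping = map_groups(groups, instance_speeds=[1.0, 2.5])
print(mapping)  # {0: 1, 1: 1, 2: 0, 3: 0}
```

The heavily communicating pairs (0,1) and (2,3) each land on a single instance, so their traffic never crosses the slow inter-instance link.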
[3813] vixra:1603.0378 [pdf]
A Review of Theoretical and Practical Challenges of Trusted Autonomy in Big Data
Despite the advances made in artificial intelligence, software agents, and robotics, there is little we see today that we can truly call a fully autonomous system. We conjecture that the main inhibitor for advancing autonomy is lack of trust. Trusted autonomy is the scientific and engineering field to establish the foundations and groundwork for developing trusted autonomous systems (robotics and software agents) that can be used in our daily life, and can be integrated with humans seamlessly, naturally and efficiently. In this paper, we review this literature to reveal opportunities for researchers and practitioners to work on topics that can create a leap forward in advancing the field of trusted autonomy. We focus the paper on the `trust' component as the uniting technology between humans and machines. Our inquiry into this topic revolves around three sub-topics: (1) reviewing and positioning the trust modelling literature for the purpose of trusted autonomy; (2) reviewing a critical subset of sensor technologies that allow a machine to sense human states; and (3) distilling some critical questions for advancing the field of trusted autonomy. The inquiry is augmented with conceptual models that we propose along the way by recompiling and reshaping the literature into forms that enable trusted autonomous systems to become a reality. The paper offers a vision for a Trusted Cyborg Swarm, an extension of our previous Cognitive Cyber Symbiosis concept, whereby humans and machines meld together in a harmonious, seamless, and coordinated manner.
[3814] vixra:1603.0371 [pdf]
On Wick Rotation
Wick rotation produces numbers that agree with experiment and yet the method is mathematically wrong and not allowed by any self-consistent rule. We explore a small slice of wiggle room in complex analysis and show that it may be possible to use QFT without reliance on Wick rotations.
[3815] vixra:1603.0364 [pdf]
The Sum of Pears and Apples: Analysis of College Admissions Tests
Rankings of all kinds are in vogue. They are backed by the belief that human activity can be measured, and that each and every institution must advance continuously to improve its position in the ranking. Education and research are among the activities most affected by the pressure of classification: the MIDE of the Ministry of Education of Colombia; the classification of research groups and researchers by Colciencias; rankings of journals, articles and books; the Saber and Saber Pro tests in Colombia; entrance exams to universities; accreditation of "high quality", qualified registration, curriculum changes, the Quality Assurance system... Internationally: Shanghai-type university rankings, the "impact" rankings of scientific journals, the Test of English as a Foreign Language (TOEFL), the Graduate Record Examination (GRE), PISA testing, etc. They assume that the intellect can be "measured," standardized, and branded with a unique number obtained by "summing pears and apples." The number is then used to classify, "in descending order of quality," whole institutions, academic programs, research groups and individuals. In this text we discuss the technical impossibility of such a pretension, with emphasis on the ranking developed to choose the people who deserve to enter the University of Antioquia and discard those who do not. We seek to answer the following questions: (1) Does the system allow an accurate ranking of the candidates based on their scores? (2) Does the system establish a solid border between the scores of the admitted and the rejected? (3) Are the alleged measurements of intellectual abilities the only measurements that escape the protocols of the science of measurement?
[3816] vixra:1603.0362 [pdf]
Various Arithmetic Functions and Their Applications
Over 300 sequences and many unsolved problems and conjectures related to them are presented herein. These notions, definitions, unsolved problems, questions, theorems, corollaries, formulae, conjectures, examples, mathematical criteria, etc. on integer sequences, numbers, quotients, residues, exponents, sieves, pseudo-primes, squares, cubes, factorials, almost primes, mobile periodicals, functions, tables, prime-square-factorial bases, generalized factorials, generalized palindromes, and so on, have been extracted from the Archives of American Mathematics (University of Texas at Austin) and Arizona State University (Tempe): "The Florentin Smarandache papers" special collections, and Arhivele Statului (Filiala Vâlcea & Filiala Dolj, Romania). This book was born from the collaboration of the two authors, which started in 2013. The first common work was the volume "Solving Diophantine Equations", published in 2014. The contribution of the authors can be summarized as follows: Florentin Smarandache came with his extraordinary ability to propose new areas of study in number theory, and Octavian Cira - with his algorithmic thinking and knowledge of Mathcad.
[3817] vixra:1603.0344 [pdf]
Heisenberg's Potentia in Quantum Mechanics and Discrete Subgroups of Lie Groups
The concept of "potentia", as proposed by Heisenberg to understand the structure of quantum mechanics, has so far remained a fanciful speculation. In this paper we provide a physically consistent and mathematically justified ontology for this model, based on a fundamental role played by the discrete subgroups of the relevant Lie groups. We show that the space of "potentia" arises as a coexisting dual space to the real three-dimensional space, while these two sit piggyback on each other, such that the collapse of the wave function can be understood in a natural manner. Quantum nonlocality and quantum jumps arise as natural consequences of this model.
[3818] vixra:1603.0335 [pdf]
Conditional Deng Entropy, Joint Deng Entropy and Generalized Mutual Information
Shannon entropy, conditional entropy, joint entropy and mutual information can estimate the chaotic level of information. However, these methods can only handle certain situations. Based on Deng entropy, this paper introduces several new entropies to estimate entropy under multiple interacting uncertain pieces of information: conditional Deng entropy calculates entropy under a conditional basic belief assignment; joint Deng entropy calculates entropy from a joint basic belief assignment distribution; and generalized mutual information estimates the uncertainty of one piece of information given knowledge of another. Numerical examples illustrate the use of the new entropies.
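The base quantity being generalized here is Deng entropy, $E_d(m) = -\sum_A m(A)\,\log_2\frac{m(A)}{2^{|A|}-1}$ for a basic belief assignment $m$ over focal elements $A$. A minimal sketch of that base formula only (the paper's conditional and joint variants are not reproduced):

```python
import math

def deng_entropy(bba):
    """Deng entropy E_d(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) ),
    where bba maps frozenset focal elements to masses summing to 1."""
    total = 0.0
    for focal, mass in bba.items():
        if mass > 0:
            total -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return total

# Singleton-only BBA: Deng entropy reduces to Shannon entropy (1 bit here).
m1 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(deng_entropy(m1))  # 1.0

# Mass on a compound focal element adds non-specificity: log2(2**2 - 1).
m2 = {frozenset({"a", "b"}): 1.0}
print(deng_entropy(m2))  # log2(3) ~ 1.585
```

When every focal element is a singleton the denominator $2^{|A|}-1$ equals 1 and the formula collapses to Shannon entropy, which is the sanity check performed above.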
[3819] vixra:1603.0332 [pdf]
Algebraic Quantum Thermodynamics
Density matrices can be used to describe ensembles of particles in thermodynamic equilibrium. We assume that the density matrices are a more natural description of quantum objects than state vectors. This suggests that we generalize density matrices from the usual operators on a Hilbert space to be elements of an algebra. Using density matrix renormalization to change the temperature of the ensembles, we show how the choice of algebra determines the symmetry and particle content of these generalized density matrices. The symmetries are of the form SU(N)xSU(M)x...U(1). We propose that the Standard Model of elementary particles should include a dark matter SU(2) doublet.
[3820] vixra:1603.0267 [pdf]
Planck Quantization of Newton and Einstein Gravitation
In this paper we rewrite the gravitational constant based on its relationship with the Planck length and, based on this, we rewrite the Planck mass in a slightly different form (that gives exactly the same value). In this way we are able to quantize a series of end results in Newton and Einstein's gravitation theories. The formulas will still give exactly the same values as before, but everything related to gravity will then come in quanta. Numerically this only has implications at the quantum scale; for macro objects the discrete steps are so tiny that they are close to impossible to notice. Hopefully this can give additional insight into how well or not so well (ad hoc) quantized Newton and Einstein's gravitation are potentially linked with the quantum world.
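The underlying algebra is standard Planck-unit bookkeeping: $l_P=\sqrt{\hbar G/c^3}$, hence $G=l_P^2 c^3/\hbar$, and the Planck mass can be rewritten as $m_P=\hbar/(l_P c)=\sqrt{\hbar c/G}$. A quick numerical check of that rewriting (CODATA-style constants; this illustrates the identities, not the paper's quantization procedure itself):

```python
import math

# CODATA-style values (approximate). The algebra below is standard
# Planck-unit bookkeeping, not the paper's specific derivation.
G = 6.67430e-11         # m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m s^-1

l_P = math.sqrt(hbar * G / c**3)        # Planck length
G_rewritten = l_P**2 * c**3 / hbar      # G expressed through l_P
m_P = hbar / (l_P * c)                  # Planck mass, "rewritten" form
m_P_standard = math.sqrt(hbar * c / G)  # standard form; same value

print(l_P)  # ~1.616e-35 m
print(m_P)  # ~2.176e-8 kg
```

Both forms of the Planck mass agree to machine precision, which is the abstract's point that the rewriting "gives exactly the same value."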
[3821] vixra:1603.0255 [pdf]
Causality of the Coulomb Field of Relativistic Electron Bunches
Recent experiments, performed by Prof. Pizzella's team with relativistic electron bunches, indicate that Coulomb field is rigidly attached to the charge's instantaneous position. Despite a widespread opinion, this fact does not violate causality in moving reference frames. To see that, one should apply the Wigner--Dirac theory of relativistic dynamics and take into account that the Lorentz boost generator depends on interaction.
[3822] vixra:1603.0254 [pdf]
A Novel Approach to the Discovery of Ternary BBP-Type Formulas for Polylogarithm Constants
Using a clear and straightforward approach, we prove new ternary (base 3) digit extraction BBP-type formulas for polylogarithm constants. Some known results are also rediscovered in a more direct and elegant manner. A previously unproved degree~4 ternary formula is also proved. Finally, a couple of ternary zero relations are established, which prove two known but hitherto unproved formulas.
[3823] vixra:1603.0239 [pdf]
On the Foundations of Physics.
Work on the foundations of science in general consists in (a) making precise what assumptions one makes as a result of our measurements, (b) holding a “good” balance between theoretical assumptions and genericity of predictions, and (c) saying as precisely as possible what you mean. Unfortunately, recent work where these three criteria are met is scarce, and I often encounter situations where physicists talk about different things in the same words or the other way around, identify distinct concepts (even without being aware of it), or introduce unnecessary hypotheses based upon too stringent a mathematical interpretation of some observation. In this work, I will be as critical as possible and set out those objections against modern theories of physics which have become clear in my mind and therefore transcend mere intuition. All these objections result from the use of unclear language or too stringent assumptions about the nature of reality. Next, we weaken the assumptions and discuss what I call process physics; it will turn out that Bell’s concerns find a natural solution within this framework.
[3824] vixra:1603.0232 [pdf]
LIGO Gravitational Wave Event as Observed by Network of Quantum Gravity Detectors
The LIGO team, operating two vacuum-mode Michelson interferometers, reported the detection, on September 14, 2015, of a gravitational wave event of some 0.2 sec duration, which was claimed to have been generated by two black holes merging a billion years ago. However, experimentally it has been shown that such vacuum-mode interferometers have zero sensitivity to gravitational waves, which have indeed been detected using other techniques over the last 100+ years. One such recently discovered technique uses quantum-barrier electron tunnelling current fluctuations in reverse-biased diodes, generated by dynamical 3-space fluctuations: gravitational waves. These are Quantum Gravity Detectors (QGD). There happens to be an international network of such detectors, and the data from this network shows a significant event at the same time as the LIGO event, but extending over some 4 sec duration. Previously, in 2014, such Quantum Gravity Detectors detected gravitational waves generated by resonant Earth vibrations, whose frequencies were known from seismology. It is suggested that the LIGO event may have been an Earth-generated gravitational wave event that was detected by the electronics of the LIGO measuring and recording system, an effect previously discovered in 2014 using time-delayed correlated fluctuations in data recorded by oscilloscopes located in Australia and London.
[3825] vixra:1603.0227 [pdf]
A New Binary BBP-type Formula for $\sqrt 5\,\log\phi$
Hitherto only a base 5 BBP-type formula is known for $\sqrt 5\log\phi$, where \mbox{$\phi=(\sqrt 5+1)/2$}, the golden ratio, ( i.e. Formula 83 of the April 2013 edition of Bailey's Compendium of \mbox{BBP-type} formulas). In this paper we derive a new binary BBP-type formula for this constant. The formula is obtained as a particular case of a BBP-type formula for a family of logarithms.
[3826] vixra:1603.0223 [pdf]
Comment on 5 Papers by Gandhi and Colleagues
The bio-heat transfer equation for a homogeneous material model can easily be solved by using a second-order finite difference approximation to discretize the spatial derivatives and an explicit finite-difference time-domain (FDTD) scheme for time-domain discretization. Mr. Gandhi and colleagues solved the bio-heat equation for inhomogeneous models utilizing an implicit finite-difference method. While we appreciate their research, we would like to address a few issues that may help further clarify or confirm the work.
[3827] vixra:1603.0208 [pdf]
Planck + Einstein on Pi-Day
This paper shows how we can “manipulate” physical fundamental constants such as the Planck constants in Euclidean space-time. For example, what velocity do we need to travel at for π to disappear from the Planck length as observed from another reference frame? Or what velocity do we need to travel at to replace π in the Planck energy with the golden ratio Phi? Or to turn the Planck mass into “Gold”? This paper provides the answers to these and similar questions, all quite natural to think about on Pi-day.
[3828] vixra:1603.0198 [pdf]
0-Branes of SU(2) Lattice Gauge Theory: First Numerical Results
The site reduction of SU(2) lattice gauge theory is employed to model the magnetic monopoles of SU(2) gauge theory. The site-reduced theory is a matrix model on a discrete world-line for the angle-valued coordinates of 0-branes. The Monte~Carlo numerical analysis yields the critical temperature $T_c\simeq 0.25~a^{-1}$ and the critical coupling $g_c\simeq 1.56$, above which the free energy does not exhibit a minimum, leading to a phase transition.
[3829] vixra:1603.0195 [pdf]
Relation of Physiological Variables and Health
Based on the non-equilibrium thermodynamics point of view that a biological system is sustained by a local potential provided by stable entropy production, we construct a mathematical model to describe the metabolism of the human body system. According to the stable and periodic properties of the human body system, the embryonic form of the model is constructed by dimensional analysis. Based on the mathematical model, stability analysis is used to discuss the response to perturbations, which corresponds to influences on human health. With the help of physiology and medical science, the parameters in the model are determined by empirical formulas from physiology. The correspondence between the parameters and observable variables such as body temperature, body weight, heart rate, etc. is established. As an example, an interesting result obtained from our model is that overweight adults, even if healthy in their medical examination reports, face the risk of becoming sick, because being overweight decreases the metabolic frequency yet drives the human body system "farther" from equilibrium (death). This result shows that the body weight of overweight individuals will gradually increase rather than staying within a stable interval. Our method provides a new approach to predicting human health from observable vital signs.
[3830] vixra:1603.0180 [pdf]
A Monte Carlo Scheme for Node-Specific Inference Over Wireless Sensor Networks
In this work, we design an efficient Monte Carlo scheme for a node-specific inference problem where a vector of global parameters and multiple vectors of local parameters are involved. This scenario often appears in inference problems over heterogeneous wireless sensor networks where each node performs observations dependent on a vector of global parameters as well as a vector of local parameters. The proposed scheme uses parallel local MCMC chains and then an importance sampling (IS) fusion step that leverages all the observations of all the nodes when estimating the global parameters. The resulting algorithm is simple and flexible. It can be easily applied iteratively, or extended in a sequential framework.
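A toy version of the scheme, under strong simplifying assumptions that are mine rather than the authors' (one scalar global parameter, no local parameters, Gaussian likelihoods, flat priors): each node runs a local Metropolis chain, and the fusion step importance-weights each local sample by the likelihood of the other nodes' observations, so the global estimate leverages every observation in the network.

```python
import math, random

rng = random.Random(1)
theta_true = 2.0
# Three nodes, each observing the shared global parameter with unit noise.
data = [[theta_true + rng.gauss(0.0, 1.0) for _ in range(20)] for _ in range(3)]

def loglik(theta, ys):
    return -0.5 * sum((y - theta) ** 2 for y in ys)

def metropolis(ys, n_iter=2000, step=0.5):
    """Local random-walk Metropolis chain targeting the node posterior
    (flat prior on theta)."""
    theta, samples = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        if math.log(rng.random()) < loglik(prop, ys) - loglik(theta, ys):
            theta = prop
        samples.append(theta)
    return samples[n_iter // 2:]  # drop burn-in

# Parallel local chains, then an IS fusion step: each local sample is
# reweighted by the likelihood of all *other* nodes' observations.
pooled, logw = [], []
for n, ys in enumerate(data):
    for th in metropolis(ys):
        pooled.append(th)
        logw.append(sum(loglik(th, data[m]) for m in range(len(data)) if m != n))
mx = max(logw)
w = [math.exp(lw - mx) for lw in logw]
est = sum(wi * th for wi, th in zip(w, pooled)) / sum(w)
print(est)  # close to theta_true
```

The weight of a sample drawn from node $n$'s local posterior is the ratio of the full-network posterior to that local posterior, which with flat priors reduces to the product of the other nodes' likelihoods used above.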
[3831] vixra:1603.0178 [pdf]
He's Variational Iteration Method for the Solution of Nonlinear Newell-Whitehead-Segel Equation
In this paper, we apply He's variational iteration method (VIM) for solving the nonlinear Newell-Whitehead-Segel equation. Using this method, three different cases of the Newell-Whitehead-Segel equation are discussed. Comparison of the obtained results with exact solutions shows that the method used is an effective and highly promising method for solving different cases of the nonlinear Newell-Whitehead-Segel equation.
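A single VIM correction step for the Newell-Whitehead-Segel equation $u_t = k\,u_{xx} + a\,u - b\,u^q$ can be sketched symbolically. The Lagrange multiplier $\lambda=-1$ is the standard choice for equations first order in time; the parameters and initial guess below are illustrative only, not one of the paper's three cases.

```python
import sympy as sp

x, t, s = sp.symbols("x t s")
k, a, b, q = 1, 1, 1, 2  # illustrative NWS parameters: u_t = k u_xx + a u - b u^q

def vim_step(u_n):
    """One VIM correction with Lagrange multiplier lambda = -1:
    u_{n+1} = u_n - Int_0^t (u_s - k u_xx - a u + b u^q) ds."""
    un_s = u_n.subs(t, s)
    residual = (sp.diff(un_s, s) - k * sp.diff(un_s, x, 2)
                - a * un_s + b * un_s**q)
    return sp.simplify(u_n - sp.integrate(residual, (s, 0, t)))

u0 = x               # illustrative initial approximation u_0(x, t) = x
u1 = vim_step(u0)    # equals x + t*(x - x**2)
u2 = vim_step(u1)    # further iterations refine the approximation
print(u1)
```

With $u_0=x$ the spatial derivative vanishes, so the first correction simply integrates the reaction terms: $u_1 = x + t\,(x - x^2)$.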
[3832] vixra:1603.0164 [pdf]
Solutions to the Gravitational Field Equations in Curved Phase-Spaces
After reviewing the basics of the geometry of the cotangent bundle of spacetime, via the introduction of nonlinear connections, we build an action and derive the generalized gravitational field equations in phase spaces. A nontrivial solution generalizing the Hilbert-Schwarzschild black hole metric in spacetime is found. The most relevant physical consequence is that the metric becomes momentum-dependent (observer dependent) which is what one should aim for in trying to $quantize$ geometry (gravity): the observer must play an important role in any measurement (observation) process of the spacetime he/she lives in.
[3833] vixra:1603.0134 [pdf]
The Leibniz Theorem in the Bohr Model and the Parity Oscillation
We unify the Bohr energy formula with the Leibniz continuity theorem in order to obtain the aufbau of the photon. In this model the photon is created in a continuous way during the electron transition. The parity oscillation of the K-meson is discussed.
[3834] vixra:1603.0130 [pdf]
The Beal Conjecture: A Complete Proof
In 1997, Andrew Beal announced the following conjecture: \textit{Let $A, B, C, m, n$, and $l$ be positive integers with $m,n,l > 2$. If $A^m + B^n = C^l$ then $A, B,$ and $C$ have a common factor.} We begin by constructing the polynomial $P(x)=(x-A^m)(x-B^n)(x+C^l)=x^3-px+q$, with $p,q$ integers depending on $A^m, B^n$ and $C^l$. We solve $x^3-px+q=0$ and obtain the three roots $x_1,x_2,x_3$ as functions of $p$, $q$ and a parameter $\theta$. Since $A^m,B^n,-C^l$ are the only roots of $x^3-px+q=0$, we discuss the conditions under which $x_1,x_2,x_3$ are integers. Three numerical examples are also given.
[3835] vixra:1603.0115 [pdf]
Observations of Structure of a Possible Unification Algebra
A C-loop algebra, designated U, is assembled as the product M4(C) x T. When M4(C) is assigned to represent Cl{1,3}(R) x C and the principle of spatial equivalence is invoked, a sub-algebra designated W is found to have features suggesting it could provide an underlying basis for the standard model of fundamental particles. U is of the same order as Cl{0,10}(R), but has a ``natural" partition into Cl{1,3}(R) x C x W, suggesting that its use in string/M theories in place of Cl{0,10}(R) may generate a description of reality.
[3836] vixra:1603.0052 [pdf]
High-Order Spectral Volume Scheme for Multi-Component Flows Using Non-Oscillatory Kinetic Flux
In this paper, an arbitrary high-order compact method is developed for compressible multi-component flows with a stiffened gas equation of state (EOS). The main contribution is combining the high-order, conservative, compact spectral volume scheme (SV) with the non-oscillatory kinetic scheme (NOK) to solve the quasi-conservative extended Euler equations of compressible multi-component flows. The new scheme consists of two parts: the conservative part and the non-conservative part. The original high-order compact SV scheme is used to discretize the conservative part directly. In order to treat the equation of state of the stiffened gas, the NOK scheme is utilized to compute the numerical flux. Then, careful analysis is made to satisfy the necessary condition to avoid unphysical oscillation near the material interfaces. After that, a high-order compact scheme for the non-conservative part is obtained. This new scheme has the following advantages for numerical simulations of compressible multi-component stiffened gas: high-order accuracy with a compact stencil and oscillation-free behavior near the material interfaces. Numerical tests demonstrate the good performance and efficiency of the new scheme for multi-component flow simulations.
[3837] vixra:1603.0045 [pdf]
Diffraction to De-Diffraction
De-diffraction (DD), a new procedure to totally cancel diffraction effects from wave-fields, is presented, whereby the full field from an aperture is utilized and a truncated geometrical field is obtained, allowing infinitely sharp focusing and non-diverging beams. This is done by reversing a diffracted wave-field’s direction. The method is derived from the wave equation and demonstrated in the case of Kirchhoff’s integral. An elementary bow-wavelet is described and the DD process is related to quantum and relativity theories.
[3838] vixra:1602.0364 [pdf]
Right-Handed Four-Fermion Condensation, LHC 750 GeV Diphoton Resonance, and Potential Dark Matter Candidate
We propose a Clifford algebra based model, which includes local gauge symmetries SO(1,3)*SU_L(2)*U_R(1)*U(1)*SU(3). There are two sectors of bosonic fields as electroweak and Majorana bosons. The electroweak boson sector is composed of scalar Higgs, pseudoscalar Higgs, and antisymmetric tensor components. The Majorana boson sector is responsible for flavor mixing and neutrino Majorana masses. The LHC 750 GeV diphoton resonance is identified as a Majorana sector quadruon, which is the pseudo-Nambu-Goldstone boson of $\bar{u}_Rs_R\bar{c}_Rd_R$ four-quark condensation. The quadruon results from spontaneous symmetry breaking of a flavor-related global U(1) symmetry involving right-handed up, down, charm, and strange quarks. In addition to $\bar{u}_Rs_R\bar{c}_Rd_R$, four-fermion condensations can also involve three other right-handed configurations $\bar{u}_R\tau_R\bar{\nu}_{\tau R}d_R$, $\bar{c}_R\mu_R\bar{\nu}_{\mu R}s_R$, and $\bar{\nu}_{\mu R}\tau_R\bar{\nu}_{\tau R}\mu_R$. Free from gauge interactions, these four-fermion condensations are potential dark matter candidates.
[3839] vixra:1602.0334 [pdf]
The P Versus NP Problem. Refutation.
In this article we respond to the problem of the equality of the classes P and NP, one of the Millennium Prize problems, and arrive at a complete result on their equality. For the refutation we use the method of reductio ad absurdum, together with tensor analysis to define objects considered relative to Turing machine computation. The goal is to answer the problem at the level of detail of the proof's calculations; beyond that, the approach offers an opportunity to explore the computational process further.
[3840] vixra:1602.0333 [pdf]
Weighting a Resampled Particle in Sequential Monte Carlo (Extended Preprint)
The Sequential Importance Resampling (SIR) method is the core of the Sequential Monte Carlo (SMC) algorithms (a.k.a., particle filters). In this work, we point out a suitable choice for properly weighting a resampled particle. This observation entails several theoretical and practical consequences, also allowing the design of novel sampling schemes. Specifically, we describe one theoretical result about the sequential estimation of the marginal likelihood. Moreover, we suggest a novel resampling procedure for SMC algorithms called partial resampling, involving only a subset of the current cloud of particles. Clearly, this scheme attenuates the additional variance in the Monte Carlo estimators generated by the use of resampling.
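A minimal bootstrap SIR filter illustrating the context, on a toy linear-Gaussian model of my own (not the paper's partial-resampling scheme): after each resampling the normalized weights reset to $1/N$, and the information they carried survives in the running marginal-likelihood estimate $\hat Z$, so a resampled particle's proper unnormalized weight can be read as $\hat Z/N$.

```python
import math, random

rng = random.Random(7)

# Toy linear-Gaussian state-space model (illustrative, not the paper's):
# x_t = 0.9 x_{t-1} + v_t,  y_t = x_t + w_t,  v_t, w_t ~ N(0, 1).
T, N = 50, 500
xs_true, ys, x = [], [], 0.0
for _ in range(T):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    xs_true.append(x)
    ys.append(x + rng.gauss(0.0, 1.0))

def log_g(y, x):
    """Observation log-density N(y; x, 1)."""
    return -0.5 * (y - x) ** 2 - 0.5 * math.log(2.0 * math.pi)

particles = [rng.gauss(0.0, 1.0) for _ in range(N)]
logZ, est = 0.0, []
for y in ys:
    # Bootstrap proposal: propagate through the transition prior.
    particles = [0.9 * p + rng.gauss(0.0, 1.0) for p in particles]
    logw = [log_g(y, p) for p in particles]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    sw = sum(w)
    # Running marginal-likelihood estimate; after resampling, each
    # particle's normalized weight is 1/N and Z_hat carries the rest.
    logZ += mx + math.log(sw / N)
    est.append(sum(wi * p for wi, p in zip(w, particles)) / sw)
    # Multinomial resampling.
    new = []
    for _ in range(N):
        r = rng.random() * sw
        acc, idx = 0.0, N - 1  # default to last index guards round-off
        for i, wi in enumerate(w):
            acc += wi
            if acc >= r:
                idx = i
                break
        new.append(particles[idx])
    particles = new

print(logZ)
```

Here full resampling is performed at every step for simplicity; the paper's partial resampling would touch only a subset of the cloud to reduce the added variance.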
[3841] vixra:1602.0325 [pdf]
Constructing de Broglie's Periodic Phenomena
De Broglie waves were originally derived from the Lorentz Transformation of a standing wave, $e^{-i \omega t}$, that has no space dependence. It is shown here that a suitable, physically reasonable, standing wave can be constructed from physical waves that propagate at c, subject to the condition that any field line of the wave vector exists on the surface of a sphere at rest in the comoving frame. This result contradicts the classical picture of a point particle emitting a far field that propagates radially away from it, and it is argued that, while the present construction of de Broglie waves is both local and realistic, Bell Inequalities cannot be derived in de Broglie's context.
[3842] vixra:1602.0322 [pdf]
Phase Transition by 0-Branes of U(1) Lattice Gauge Theory
The site reduction of U(1) lattice gauge theory is used to model the 0-branes in the dual theory. The reduced theory is the 1D plane-rotator model of the angle-valued coordinates on a discrete world-line. The energy spectrum is obtained exactly via the transfer-matrix method, with a minimum in the lowest energy as a direct consequence of the compact nature of the coordinates. Below the critical coupling $g_c=1.125$ and temperature $T_c=0.335$ the system undergoes a first-order phase transition between coexistent phases with lower and higher gauge couplings. The possible relation between the model and the proposed role of magnetic monopoles in the confinement mechanism based on the dual Meissner effect is pointed out.
[3843] vixra:1602.0271 [pdf]
LHC 750 GeV Diphoton Resonance from uscd Four-Quark Condensation
We propose a Clifford algebra based model, which includes local gauge symmetries SO(1,3)*SU_L(2)*U_R(1)*U(1)*SU(3). There are two sectors of bosonic fields as electroweak and Majorana bosons. The electroweak boson sector is composed of scalar Higgs, pseudoscalar Higgs, and antisymmetric tensor components. The Majorana boson sector is responsible for flavor mixing and neutrino Majorana masses. The LHC 750 GeV diphoton resonance is explained by a Majorana sector quadruon, which is the pseudo-Nambu-Goldstone boson of uscd four-quark condensation. The quadruon results from spontaneous symmetry breaking of a family-related global U(1) symmetry involving up, down, charm, and strange quarks. Being standard model singlets, four-fermion condensations are potential dark matter candidates.
[3844] vixra:1602.0265 [pdf]
Reality. Nature. The Universe. Planck Constants. The Yra-Concept.
This article presents the author's concept regarding certain questions and problems of modern physical knowledge about Nature and the Universe. A unique, universal cosmological constant of the Universe is determined. On the basis of this constant, the problem of constructing a natural axiomatic theory of the Planck constants is solved. The article presents the author's views of 2011-2012.
[3845] vixra:1602.0254 [pdf]
Note on a Possible Solution of Navier-Stokes Equations
The paper presents an attempt at the resolution of the Navier-Stokes equations under hypothesis (A) of the open problem posed by the Clay Institute (C.L. Fefferman, 2006).
[3846] vixra:1602.0252 [pdf]
Double Conformal Geometric Algebras
This paper gives an overview of two different, but closely related, double conformal geometric algebras. The first is the G(8,2) Double Conformal / Darboux Cyclide Geometric Algebra (DCGA), and the second is the G(4,8) Double Conformal Space-Time Algebra (DCSTA). DCSTA is a straightforward extension of DCGA. The double conformal geometric algebras that are presented in this paper have a large set of operations that are valid on general quadric surface entities. These operations include rotation, translation, isotropic dilation, spacetime boost, anisotropic dilation, differentiation, reflection in standard entities, projection onto standard entities, and intersection with standard entities. However, the quadric surface entities and other "non-standard entities" cannot be intersected with each other.
[3847] vixra:1602.0235 [pdf]
Performances Piecewise Defined Functions in Analytic Form, Prime-Counting Function
The article discusses the representation of discrete functions in an analytic form without the use of approximations, namely the Heaviside function, the identity function, the Dirac delta function, and the prime-counting function.
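The abstract does not state its particular representations. As one classical example of an exact, approximation-free analytic form for such a discrete function, the Wilson-theorem representation of the prime-counting function, $\pi(n)=\sum_{j=2}^{n}\left\lfloor\frac{(j-1)!+1}{j}-\left\lfloor\frac{(j-1)!}{j}\right\rfloor\right\rfloor$, can be checked directly:

```python
from math import factorial

def pi_wilson(n):
    """Exact prime-counting via Wilson's theorem: (j-1)! = -1 (mod j)
    iff j is prime, so each summand is 1 for prime j and 0 otherwise.
    Exact in analytic form, but factorially expensive in practice."""
    total = 0
    for j in range(2, n + 1):
        f = factorial(j - 1)
        total += (f + 1) // j - f // j
    return total

print([pi_wilson(n) for n in (10, 30, 100)])  # [4, 10, 25]
```

The formula is exact with no approximation, which is the spirit of the abstract's claim; its factorial growth is why such closed forms are of theoretical rather than computational interest.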
[3848] vixra:1602.0234 [pdf]
The Impossible is Possible! Squaring the Circle and Doubling the Cube in Space-Time
Squaring the Circle is a famous geometry problem going all the way back to the ancient Greeks. It is the great quest of constructing a square with the same area as a circle using a compass and straightedge in a finite number of steps. Since it was proven that π was a transcendental number in 1882, the task of Squaring the Circle has been considered impossible. Here, we will show it is possible to Square the Circle in space-time. It is not possible to Square the Circle in Euclidean space alone, but it is fully possible in space-time, and after all we live in a world with not only space, but also time. By drawing the circle from one reference frame and drawing the square from another reference frame, we can indeed Square the Circle. By taking into account space-time rather than just space, the Impossible is possible! However, it is not enough simply to understand math in order to Square the Circle, one must understand some “basic” space-time physics as well. As a bonus we have added a solution to the impossibility of Doubling the Cube. As a double bonus, we have also Boxed the Sphere! As one will see, one can claim we simply have bent the rules and moved a problem from one place to another. One of the main essences of this paper is that we can move challenging space problems out from space and into time, and vice versa.
[3849] vixra:1602.0218 [pdf]
Planck's Constants, Yra-Concept
In the present article the unique, universal, cosmological constant of the Universe is determined. On the basis of this constant the problem of building up a unique, natural, universal, axiomatic system of the Planck constants is solved. According to the concept of the author, the Universe space is considered as a three-dimensional, discrete, Euclidean spatial lattice with knots, and the motion of the material carriers (elementary particles or the particles transmitting interaction) is considered as transfer between the neighbouring knots of the lattice along loopback trajectories. The author also presents a solution to the question of the sense of the fine-structure constant. A solution of the problem of dark matter and dark energy is presented.
[3850] vixra:1602.0217 [pdf]
Coincidences on the Speed of Light
There are theories related to the time dependence of the speed of light that predict that the speed of light is not constant but depends on the time. But they are not verifiable or demonstrable physically or mathematically with concrete results. In this study, based on simple assumptions on the microwave background and thermal electrons, a model of the time varying speed of light is presented, which is fully compatible with currently accepted results for the speed of light and the Hubble constant theories.
[3851] vixra:1602.0192 [pdf]
LHC 750 GeV Diphoton Resonance and Flavor Mixing from Majorana Higgs Bosons
We propose a Clifford algebra based model, which includes the local gauge symmetries SO(1,3)*SU_L(2)*U_R(1)*U(1)*SU(3). There are two sectors of Higgs fields, Majorana and electroweak Higgs bosons. The Majorana Higgs sector is responsible for the 750 GeV diphoton resonance, flavor mixing, and right-handed neutrino Majorana masses. The electroweak Higgs sector, which induces Dirac masses, is composed of scalar, pseudoscalar, and antisymmetric tensor components.
[3852] vixra:1602.0189 [pdf]
Thermodynamics of Noncommutative Quantum Kerr Black Holes
Thermodynamic formalism for rotating black holes, characterized by noncommutative and quantum corrections, is constructed. From a fundamental thermodynamic relation, equations of state and thermodynamic response functions are explicitly given and the effect of noncommutativity and quantum correction is discussed. It is shown that the well known divergence exhibited in specific heat is not removed by any of these corrections. However, regions of thermodynamic stability are affected by noncommutativity, increasing the available states for which the system is thermodynamically stable.
[3853] vixra:1602.0167 [pdf]
The Lorentz Force Law And Kaluza Theories
Kaluza's 1921 theory of gravity and electromagnetism using a fifth wrapped-up spatial dimension is inspiration for many modern attempts to develop new physical theories. The original theory has problems which may well be overcome, and thus Kaluza theory should be looked at again: it is a natural, if not necessary, geometric unification of gravity and electromagnetism. Here a general demonstration that the Lorentz force law can be derived from a range of Kaluza theories is presented. This is investigated via non-Maxwellian kinetic definitions of charge that are divergence-free and relate Maxwellian charge to 5D components of momentum. The possible role of torsion is considered as an extension. It is shown, however, that symmetric torsion components are likely not admissible in any prospective theory. As a result Kaluza's original theory is rehabilitated and a call for deeper analysis made.
[3854] vixra:1602.0147 [pdf]
A Non-Equilibrium Extension of Quantum Gravity
A variety of quantum gravity models (including spin foams) can be described using a path integral formulation. A path integral has a well-known statistical mechanical interpretation in connection with a canonical ensemble. In this sense, a path integral describes the thermodynamic equilibrium of a local system in a thermal bath. This interpretation is in contrast to solutions of Einstein's Equations which depart from local thermodynamical equilibrium (one example is shown explicitly). For this reason, we examine an extension of the path integral model to a (locally) non-equilibrium description. As a non-equilibrium description, we propose to use a global microcanonical ensemble with constraints. The constraints reduce the set of admissible microscopic states to be consistent with the macroscopic geometry. We also analyse the relation between the microcanonical description and a statistical approach not based on dynamical assumptions which has been proposed recently. This analysis is of interest for the test of consistency of the non-equilibrium description with general relativity and quantum field theory.
[3855] vixra:1602.0145 [pdf]
Relating Spontaneous and Explicit Symmetry Breaking in the Presence of the Higgs Mechanism
One common way to define spontaneous symmetry breaking necessarily involves explicit symmetry breaking. We add explicit symmetry breaking terms to the Higgs potential, so that the spontaneous breaking of a global symmetry in multi-Higgs-doublet models is a particular case of explicit symmetry breaking. Then we show that it is possible to study the Higgs potential without assuming whether the local gauge $SU(2)_L$ symmetry is spontaneously broken or not (it is known that gauge symmetries may not be possible to break spontaneously). We also discuss the physical spectrum of multi-Higgs-doublet models and the related custodial symmetry. We review background symmetries: these are symmetries that, despite being already explicitly broken, can still be spontaneously broken. We show that the CP background symmetry is not spontaneously broken; based on this fact, we explain in part a recent conjecture relating spontaneous and explicit breaking of the charge-parity (CP) symmetry, and we also relate explicit and spontaneous geometric CP-violation.
[3856] vixra:1602.0114 [pdf]
Double Conformal Space-Time Algebra
This paper introduces the G(4,8) Double Conformal Space-Time Algebra (DCSTA). G(4,8) DCSTA is a straightforward extension of the G(2,8) Double Conformal Space Algebra (DCSA), which is a different form of the G(8,2) Double Conformal / Darboux Cyclide Geometric Algebra (DCGA). G(4,8) DCSTA extends G(2,8) DCSA with spacetime boost operations and differential operators for differentiation with respect to the pseudospatial time w=ct direction and time t. The spacetime boost operation can implement anisotropic dilation (directed non-uniform scaling) of quadric surface entities. DCSTA is a high-dimensional 12D embedding of the G(1,3) Space-Time Algebra (STA) and is a doubling of the G(2,4) Conformal Space-Time Algebra (CSTA). The 2-vector quadric surface entities of the DCSA subalgebra appear in DCSTA as quadric surfaces at zero velocity that can be boosted into moving surfaces with constant velocities that display the length contraction effect of special relativity. DCSTA inherits doubled forms of all CSTA entities and versors. The doubled CSTA entities (standard DCSTA entities) include points, hypercones, hyperplanes, hyperpseudospheres, and other entities formed as their intersections, such as planes, lines, spatial spheres and circles, and spacetime hyperboloids (pseudospheres) and hyperbolas (pseudocircles). The doubled CSTA versors (DCSTA versors) include rotor, hyperbolic rotor (boost), translator, dilator, and their compositions such as the translated-rotor, translated-boost, and translated-dilator. The DCSTA versors provide a complete set of spacetime transformation operators on all DCSTA entities. DCSTA inherits the DCSA 2-vector spatial entities for Darboux cyclides (incl. parabolic and Dupin cyclides, general quadrics, and ring torus) and gains Darboux pseudocyclides formed in spacetime with the pseudospatial time dimension. All DCSTA entities can be reflected in, and intersected with, the standard DCSTA entities. 
To demonstrate G(4,8) DCSTA as concrete mathematics with possible applications, this paper includes sample code and example calculations using the symbolic computer algebra system SymPy.
[3857] vixra:1602.0112 [pdf]
Effective Sample Size for Importance Sampling Based on Discrepancy Measures
The Effective Sample Size (ESS) is an important measure of efficiency of Monte Carlo methods such as Markov Chain Monte Carlo (MCMC) and Importance Sampling (IS) techniques. In the IS context, an approximation $\widehat{ESS}$ of the theoretical ESS definition is widely applied, involving the inverse of the sum of the squares of the normalized importance weights. This formula, $\widehat{ESS}$, has become an essential piece within Sequential Monte Carlo (SMC) methods, to assess the convenience of a resampling step. From another perspective, the expression $\widehat{ESS}$ is related to the Euclidean distance between the probability mass described by the normalized weights and the discrete uniform probability mass function (pmf). In this work, we derive other possible ESS functions based on different discrepancy measures between these two pmfs. Several examples are provided involving, for instance, the geometric mean of the weights, the discrete entropy (including the {\it perplexity} measure, already proposed in literature) and the Gini coefficient among others. We list five theoretical requirements which a generic ESS function should satisfy, allowing us to classify different ESS measures. We also compare the most promising ones by means of numerical simulations.
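The approximation named in the abstract, $\widehat{ESS} = 1/\sum_i \bar{w}_i^2$ over normalized importance weights $\bar{w}_i$, is compact enough to sketch directly; the function name and test weights below are illustrative, not from the paper:

```python
import numpy as np

def ess_hat(weights):
    """Approximate effective sample size: inverse of the sum of squared
    normalized importance weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()              # normalize so the weights form a pmf
    return 1.0 / np.sum(w ** 2)

# Uniform weights: every sample contributes equally, so ESS equals N.
print(ess_hat([1.0] * 100))      # ≈ 100
# Degenerate weights: one sample dominates, so ESS collapses toward 1.
print(ess_hat([1e6, 1, 1, 1]))   # ≈ 1
```

In SMC practice this quantity is compared against a threshold (e.g. N/2) to decide when to trigger a resampling step, matching the abstract's description of its role.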
[3858] vixra:1602.0060 [pdf]
Random Dynamics of Dikes
In this paper, random dynamic systems theory is applied to time series ($\Delta t=5$ minutes) of measurements of water level, $W$, temperature, $T$, and barometric pressure, $P$, in sea dikes. The time series were obtained from DDSC and are part of the DMC systems dike maintenance program of the Ommelanderzeedijk in the northern Netherlands. The result of numerical analysis of the dike $(W,T,P)$ time series is that after the onset of a more or less monotone increase in barometric pressure, an unexpected, relatively sharp increase or decrease in water level can occur. The direction of change is related to random factors shortly before the onset of the increase. From numerical study of the time series, we found that $\Delta W_{max}\approx \pm 0.5$ mNAP\footnote{NAP indicates New Amsterdam water level, which is a zero determining water level well known in the Netherlands.}. The randomness in the direction of change is most likely explained by the random outcome of two competitive processes shortly before the onset of a continuous barometric pressure increase. The two processes are pore pressure compaction and expulsion of water by air molecules. An important cause of a growing barometric pressure increase can be found in pressure subsidence following a decrease in atmospheric temperature. In addition, there is a diurnal atmospheric tide caused by UV radiation fluctuations. This can give an additional $\Delta P_{tide}\approx\pm 0.1$ kPa barometric fluctuation\footnote{1 Pa = 1 Pascal = $1\,Nm^{-2} = 1\,kg\,s^{-2}m^{-1}$.} in the mid latitudes ($30^{\circ}N-60^{\circ}N$).
[3859] vixra:1602.0055 [pdf]
A Categorical Approach for Relativity Theory
We provide a categorical interpretation for a model unifying Galilei relativity and special relativity, which is based on the introduction of two time variables, one associated to the absolute time of Galilei relativity, and the other to the local time of special relativity. The relation between these two time variables is the key point for the construction of a natural transformation relating two functors $\overline{G}$ and $\overline{L}$ that translates to the framework of category theory the role of the Galilei and the Lorentz transformations, bringing with them a decomposition of the Lorentz transformation in terms of the Galilei transformation, which in some sense unifies both relativities.
[3860] vixra:1602.0049 [pdf]
Mathematics in Physics
This book proposes a review and, on some important points, a new interpretation of the main concepts of Theoretical Physics. Rather than offering an interpretation based on exotic physical assumptions (additional dimension, new particle, cosmological phenomenon,…) or a brand new abstract mathematical formalism, it proceeds to a systematic review of the main concepts of Physics, as Physicists have always understood them: space, time, material body, force fields, momentum, energy… and proposes the right mathematical tools to deal with them, chosen among well known mathematical theories. After a short introduction about the place of Mathematics in Physics, a new interpretation of the main axioms of Quantum Mechanics is proposed. It is proven that these axioms come actually from the way mathematical models are expressed, and this leads to theorems which validate most of the usual computations and provide safe and clear conditions for their use, as is shown in the rest of the book. Relativity is introduced through the construct of the Geometry of General Relativity, based on 5 propositions and the use of tetrads and fiber bundles, which provide tools to deal with practical problems, such as deformable solids. A review of the concept of momenta leads to the introduction of spinors in the framework of Clifford algebras. It gives a clear understanding of spin and antiparticles. The force fields are introduced through connections, in the now well known framework of gauge theories, which is here extended to the gravitational field. It shows that this field has actually a rotational and a transversal component, which are masked under the usual treatment by the metric and the Levi-Civita connection. A thorough attention is given to the topic of the propagation of fields with interesting results, notably to explore gravitation. 
The general theory of Lagrangians in the application of the Principle of Least Action is reviewed, and two general models, incorporating all particles and fields, are explored and used for the introduction of the concepts of currents and the energy-momentum tensor. Precise guidelines are given to find operational solutions of the equations of the gravitational field in the most general case. The last chapter shows that bosons can be understood as discontinuities in the fields. In this 4th version of this book, changes have been made: in Relativist Geometry, the ideas are the same, but the chapter has been rewritten, notably to introduce the causal structure and explain the link with the practical measures of time and space; in Spinors, the relation with momenta has been introduced explicitly; in Force Fields, the section dedicated to the propagation of fields is new and is an important addition; in Continuous Models, the sections about currents and the energy-momentum tensor are new; in Discontinuous Processes, the section about bosons has been rewritten and the model improved.
[3861] vixra:1602.0048 [pdf]
Physical Law from Experimental Data
I opened a dusty old drawer and found this article, rejected by every journal with a complete, total loss of time; a thing I will never do again. I have changed only the bibliography from the one initially conceived. The old idea sounds interesting, and here on viXra there is no rejection. I don't remember the whole theory and all the programs, but it may be useful to others, so I share it with you. It seems that without the complication of the least common divisor, the calculation is simpler and more elegant.
[3862] vixra:1602.0044 [pdf]
General Two-Sided Clifford Fourier Transform, Convolution and Mustard Convolution
In this paper we use the general steerable two-sided Clifford Fourier transform (CFT), and relate the classical convolution of Clifford algebra-valued signals over $\mathbb{R}^{p,q}$ with the (equally steerable) Mustard convolution. A Mustard convolution can be expressed in the spectral domain as the pointwise product of the CFTs of the factor functions. In full generality we express the classical convolution of Clifford algebra signals in terms of finite linear combinations of Mustard convolutions, and vice versa the Mustard convolution of Clifford algebra signals in terms of finite linear combinations of classical convolutions.
[3863] vixra:1602.0031 [pdf]
Galactic Entropy in Extended Kaluza-Klein Cosmology
We use a Kaluza-Klein model with variable cosmological and gravitational terms to discuss the nature of galactic entropy function. For this purpose, we assume a universe filled with dark fluid and consider five-dimensional field equations using the Gamma law equation. We mainly discuss the validity of the first and generalized second laws of galactic thermodynamics for viable Kaluza-Klein models.
[3864] vixra:1602.0014 [pdf]
Determining the Radius of the Observable Universe Using a Time-Scaled 3-Sphere Model of the Universe
Observation: Treating the observable universe as the surface volume of a 3-sphere allows us to produce a simple equation for the radius of the observable universe. By taking the ratio of the standard three dimensional volumes for the observable universe to the Hubble volume, and setting it equal to the ratio of the corresponding volumes derived using the equations for the surface volume of a 3-sphere, we can solve for the radius of the observable universe. We find this radius to be the cube root of 12 pi times the Hubble radius, or 46.27 billion light years, which matches the accepted figures. The volume of the observable universe is shown to be larger than the Hubble volume by a factor of 12 pi. This gives us a simple expression for the radius and volume of the observable universe, derived from the geometry of a 3-sphere. Explanation: But the whole universe is vastly larger than the observable, and the observable universe is not a 3-sphere, forcing us to explain how this artificial 3-sphere, clearly smaller than what the true 3-sphere would be, can show such behavior. We use the argument from another hyperverse paper, on quantum time, that the relative increase in the unit of quantum time cancels the rapidly increasing growth rate of the whole hyperverse, resulting in a constant, 2c radial expansion rate. This allows the observable universe to be treated, in many respects, as a stand-alone hyperverse, a time-scaled version of the whole. This time-scaling gives the observable universe many of the properties of a 3-sphere.
[3865] vixra:1602.0010 [pdf]
The Theory of Idealiscience
The theory of idealiscience is an accurate theoretical model; from the model we can deduce the most important laws of Physics, explain a lot of physical mysteries, and even a lot of basic and important philosophical questions. We can also get the theoretical values of a lot of physical constants; some of these constants cannot be deduced by traditional physical theories, such as the neutron mass and magnetic moment, the Avogadro constant, and so on.
[3866] vixra:1602.0008 [pdf]
A Statistical Model of Spacetime, Black Holes and Matter
I propose first a simple model for quantum black holes based on a harmonic oscillator describing the black hole horizon covered by Planck length sized squares carrying soft hair. Secondly, I discuss a more involved statistical model with the partition function sum taken over black hole stretched horizon constituents which are black holes themselves. Attempting a unified quantum structure for spacetime, black holes and matter, I apply the statistical model picture also to matter particles using a composite model for quarks and leptons.
[3867] vixra:1601.0354 [pdf]
Numerical Solution for Solving Fuzzy Fredholm Integro-Differential Equations by Euler Method
Numerical algorithms for solving Fuzzy Integro-Differential Equations (FIDEs) are considered. A scheme based on the classical Euler method is discussed in detail and this is followed by a complete error analysis. The algorithm is illustrated by solving linear first-order fuzzy integro-differential equations.
[3868] vixra:1601.0352 [pdf]
Rebuttal of the Paper "Black-Body Laws Derived from a Minimum Knowledge of Physics"
Errors in the paper "Black-body laws derived from a minimum knowledge of Physics" are described. The paper claims that the density of the thermal current in any number of spatial dimensions is proportional to the temperature to the power of 2(n-1)/(n-2), where n represents the number of spatial dimensions. However, it is actually proportional to the temperature to the power of n + 1. The source of this error is in the claim that the known formula for the fine-structure constant is valid for any number of spatial dimensions, and in the subsequent error that the physical dimensions of Planck's constant become dependent on n.
[3869] vixra:1601.0351 [pdf]
Unsteady Non-Darcian Couette Flow in Porous Medium with Heat Transfer Subject to Uniform Suction or Injection
The unsteady non-Darcian Couette flow through a porous medium of a viscous incompressible fluid bounded by two parallel porous plates is studied with heat transfer. A non-Darcy model that obeys the Forchheimer extension is assumed for the characteristics of the porous medium. A uniform suction and injection are applied perpendicular to the plates while the fluid motion is subjected to a constant pressure gradient. The two plates are kept at different but constant temperatures while the viscous dissipation is included in the energy equation. The effects of the porosity of the medium, inertial effects and the uniform suction and injection velocity on both the velocity and temperature distributions are investigated.
[3870] vixra:1601.0350 [pdf]
Quasi-Interpolation Method for Numerical Solution of Volterra Integral Equations
In this article, a numerical method based on the quasi-interpolation method is used for the numerical solution of linear Volterra integral equations of the second kind. Also, we approximate the solution of Volterra integral equations by Nyström's method. Some examples are given and the errors are obtained for the sake of comparison.
[3871] vixra:1601.0349 [pdf]
A Fixed-Size Fragment Approach to Graph Mining
Many practical computing problems concern large graphs. Some examples include Web graphs, various social networks and molecular datasets. The scale of these graphs introduces challenges to their efficient processing. One of the main issues in such problems is that most of the mentioned datasets cannot fit in memory. In this paper, we present a new data fragment framework for graph mining. The original dataset is divided into a fixed number of fragments, associated with the number of graphs in each dataset. Then each fragment is mined individually using a well-known graph mining algorithm (e.g., FSG or gSpan) and the results are combined to generate global results. A major problem in fragmenting graphs concerns their similarity or dissimilarity. Another problem corresponds to the completeness of the output, which will be discussed in this paper.
[3872] vixra:1601.0348 [pdf]
Laplace Decomposition and Semigroup Decomposition Methods to Solve Glycolysis System in One Dimension
In this article, we formulate two methods to get approximate solutions of the Glycolysis system. The first is the Laplace decomposition method (a method combining the Laplace transform and Adomian polynomials) and the second is the semigroup decomposition method (a method combining the semigroup approach and Adomian polynomials). In both methods the nonlinear terms in the Glycolysis system are treated with the help of Adomian polynomials. One example is presented to illustrate the efficiency of the methods; this is done by writing computer programs with the aid of Maple 13.
[3873] vixra:1601.0347 [pdf]
Numerical Solutions of the Time Fractional Diffusion Equations by Using Quarter-Sweep SOR Iterative Method
The main objective of this paper is to describe the formulation of the Quarter-Sweep Successive Over-Relaxation (QSSOR) iterative method using the Caputo time fractional derivative together with the Quarter-Sweep implicit finite difference approximation equation for solving one-dimensional linear time-fractional diffusion equations. To solve the problems, a linear system will be constructed via discretization of the one-dimensional linear time-fractional diffusion equations by using the Caputo time fractional derivative. Then the generated linear system is solved by using the proposed QSSOR iterative method. Computational results are provided to demonstrate the effectiveness of the proposed method as compared with the FSSOR and HSSOR methods.
[3874] vixra:1601.0345 [pdf]
B-Spline Collocation Method for Numerical Solution of the Nonlinear Two-Point Boundary Value Problems with Applications to Chemical Reactor Theory
In this article, the cubic B-spline collocation method is implemented to find the numerical solution of the problem arising from chemical reactor theory. The method is tested on some model problems from the literature, and the numerical results are compared with other methods.
[3875] vixra:1601.0344 [pdf]
Application of Complex Analysis on Solving Some Definite Integrals
This paper studies two types of definite integrals and uses Maple for verification. The closed forms of the two types of definite integrals can be obtained mainly using the Cauchy integral theorem and the Cauchy integral formula. In addition, some examples are used to demonstrate the calculations.
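As a standard illustration of the technique (not one of the paper's own examples), closing the contour with a large upper semicircle and applying the residue form of the Cauchy integral formula gives, for instance,

```latex
\int_{-\infty}^{\infty}\frac{dx}{1+x^{2}}
 \;=\; 2\pi i\,\operatorname*{Res}_{z=i}\frac{1}{1+z^{2}}
 \;=\; 2\pi i\cdot\frac{1}{2i}
 \;=\; \pi ,
```

where the semicircular arc contributes nothing in the limit because the integrand decays like $1/|z|^{2}$.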
[3876] vixra:1601.0342 [pdf]
Double Fourier Harmonic Balance Method for Nonlinear Oscillators by Means of Bessel Series
The standard harmonic balance method consists in expanding the displacement of an oscillator as a Fourier cosine series in time. A key modification is proposed here, in which the conservative force is additionally expanded as a Fourier sine series in space. As a result, the steady-state oscillation frequency can be expressed in terms of a Bessel series, and the sums of many such series are known or can be developed. The method is illustrated for five different physical situations, including a ball rolling inside a V-shaped ramp, an electron attracted to a charged filament, a large-amplitude pendulum, and a Duffing oscillator. As an example of the results, the predicted period of a simple pendulum swinging between -90° and +90° is found to be only 0.4% larger than the exact value. Even better, the predicted frequency for the V-ramp case turns out to be exact.
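The exact value against which the quoted 0.4% figure is measured can be checked independently: the pendulum period ratio is $T/T_0 = (2/\pi)K(\sin(\theta_0/2))$, and the complete elliptic integral $K$ can be evaluated via the arithmetic-geometric mean. This is a generic check, not the paper's harmonic-balance code; the function names are illustrative.

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean, used to evaluate K(k) = pi / (2 * AGM(1, sqrt(1-k^2)))."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def period_ratio(theta0):
    """Exact T/T0 for a pendulum of amplitude theta0 (radians), T0 = 2*pi*sqrt(L/g)."""
    k = math.sin(theta0 / 2)
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))
    return 2 * K / math.pi

print(period_ratio(math.pi / 2))  # ≈ 1.1803 for ±90° swings
```

The exact period for ±90° swings is about 18.03% longer than the small-angle period, so a prediction within 0.4% of it is a close approximation.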
[3877] vixra:1601.0341 [pdf]
Comparison of Two Finite Difference Methods for Solving the Damped Wave Equation
In this work we present two finite-difference schemes for solving this equation with given initial and boundary conditions. We study the stability and consistency of these methods. Both methods are explicit and approximate the solutions of the wave equation. To examine the accuracy of the results, we compare them with the solution obtained by the method of separation of variables; a numerical example for each method is also presented and the methods are compared with each other. Finally, graphs of the error are plotted to show that the methods work with high accuracy.
[3878] vixra:1601.0340 [pdf]
A Reconstruction Method for the Gradient of a Function in Two-Dimensional Space
Numerical differentiation is a classical ill-posed problem. In image processing, sometimes we have to compute the gradient of an image. This involves a problem of numerical differentiation. In this paper we present a truncation method to compute the gradient of a two-variables function which can be considered as an image. A Hölder-type stability estimate is obtained. Numerical examples show that the proposed method is effective and stable.
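For comparison, the naive baseline that regularized schemes like the paper's truncation method improve upon is plain finite differencing of a sampled two-variable function; this sketch (not the paper's method) shows the setup with central differences via `np.gradient`:

```python
import numpy as np

# Sample a smooth two-variable function on a uniform grid, treating it as an image.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
F = X**2 + Y**3

# Central differences in the interior, one-sided at the boundary.
dFdx, dFdy = np.gradient(F, x, y)

# Compare against the exact partials 2x and 3y^2; the error is
# boundary-dominated and O(h) there, O(h^2) in the interior.
print(np.max(np.abs(dFdx - 2 * X)))
print(np.max(np.abs(dFdy - 3 * Y**2)))
```

On noise-free data this works well; the ill-posedness the abstract refers to appears when the sampled values carry noise, which finite differences amplify by a factor of 1/h.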
[3879] vixra:1601.0339 [pdf]
An O(h^10) Method for Numerical Solutions of Some Differential Equations Occurring in Plate Deflection Theory
A tenth-order non-polynomial spline method for the solution of the two-point boundary value problem $u^{(4)}(x) + f(x, u(x)) = 0$, $u(a) = \alpha_1$, $u''(a) = \alpha_2$, $u(b) = \alpha_3$, $u''(b) = \alpha_4$, is constructed. A tenth-order numerical method with end conditions of order 10 is derived. The convergence analysis of the method is discussed. Numerical examples are presented to illustrate applications of the method and to compare the computed results with other known methods.
[3880] vixra:1601.0338 [pdf]
Trapezoidal Method for Solving the First Order Stiff Systems on a Piecewise Uniform Mesh
In this paper, we introduce a method based on a modification of the Trapezoidal Method with a Piecewise Uniform Mesh proposed in Sumithra and Tamilselvan [1] for the numerical solution of first-order stiff systems of Ordinary Differential Equations (ODEs). Using this modification, the stiff ODEs were successfully solved, resulting in good solutions.
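The reason the trapezoidal rule suits stiff problems is its A-stability. A minimal sketch on the scalar stiff test equation $y' = -\lambda y$ (the generic uniform-mesh rule, not the piecewise-uniform modification of [1]; names are illustrative):

```python
import math

def trapezoidal_linear(lam, y0, h, n):
    """Implicit trapezoidal rule for y' = -lam*y.
    For a linear problem the implicit update solves in closed form:
    y_{k+1} = y_k * (1 - h*lam/2) / (1 + h*lam/2)."""
    growth = (1 - h * lam / 2) / (1 + h * lam / 2)  # |growth| < 1 for all h*lam > 0
    y = y0
    for _ in range(n):
        y *= growth
    return y

# Stiff case: lam = 1000 with step h = 0.01 stays stable (no blow-up),
# where an explicit method with the same step would diverge.
print(abs(trapezoidal_linear(1000.0, 1.0, 0.01, 100)) < 1.0)  # True
# Non-stiff accuracy check: approximates exp(-1) at t = 1.
print(trapezoidal_linear(1.0, 1.0, 0.1, 10))  # ≈ 0.3676
```

The amplification factor has magnitude below one for every positive step size, which is exactly the A-stability property that makes implicit trapezoidal schemes attractive for stiff systems.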
[3881] vixra:1601.0337 [pdf]
A New Method for Solving Fuzzy Linear Fractional Programs with Triangular Fuzzy Numbers
Several methods currently exist for solving fuzzy linear and non-linear programming problems. In this paper an efficient method for fuzzy linear fractional programming (FLFP) has been proposed, in order to obtain the fuzzy optimal solution. The proposed method is based on crisp linear programming and has a simple structure. It is easy to apply the proposed method compared to existing methods for solving FLFP problems in real life situations. To show the efficiency of our proposed method a numerical example has been illustrated with a practical problem.
[3882] vixra:1601.0335 [pdf]
A Numerical Investigation of the Significance of Non-Dimensional Numbers on the Oscillating Flow Characteristics of a Closed Loop Pulsating Heat Pipe
Pulsating Heat Pipe (PHP) is a two-phase passive heat transfer device for low temperature applications. Even though it has a simple, flexible and cheap structure, its complex physics has not been fully understood and requires a robust, validated simulation tool. In the present work the basic theoretical model by H.B. Ma et al. [11] has been updated with the inclusion of capillary forces in order to characterise the pulsating flow under the influence of various non-dimensional quantities. The mathematical model is solved using an explicit embedded Runge-Kutta method, and it is shown that the Poiseuille number considered in the numerical analysis assumes more significance since it includes the flow characteristics, geometry and fluid properties of a PHP.
[3883] vixra:1601.0334 [pdf]
Analytical Investigation and Numerical Prediction of Driving Point Mechanical Impedance for Driver Posture By Using ANN
In vibration, the human body is a unified and complex active dynamic system. Lumped parameters are often used to capture and evaluate the human dynamic properties. Whole-body vibration causes a multifaceted distribution of vibration within the body and disagreeable feelings, giving rise to discomfort or exasperation that can result in impaired performance and health effects. This distribution of vibration depends on intra-subject and inter-subject variability. For this study a multi-degree-of-freedom lumped parameter model is taken for analysis. The equations of motion are derived and the response functions such as seat-to-head transmissibility (STHT), driving point mechanical impedance (DPMI) and apparent mass (APMS) are determined. For this kind of study we can use an artificial neural network (ANN), which is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The goal of the ANN is to create a model that correctly maps the input to the output using historic data, so that the model can then be used to produce the output when the desired output is unknown.
[3884] vixra:1601.0333 [pdf]
Predicting and Analyzing the Efficiency of Portable Scheffer Reflector by Using Response Surface Method
Portable Scheffer reflector (PSR) is an important and useful mechanical device that uses solar energy for numerous applications. The present work considers a 2.7 square meter Scheffer reflector used for domestic applications in the Indian context, such as water heating. The parameters such as the position of the PSR surface with respect to the sun, i.e. the tilting angle (AR), the processing time (TM) measured on a 24-hour clock, and the water quantity (WT) are considered as independent parameters. The parameter related to the PSR performance is the efficiency of the PSR (EFF). The response surface methodology (RSM) was used to predict and analyze the performance of the PSR. The experiments were conducted based on a three-factor, three-level, central composite face-centered design with full replications technique, and a mathematical model was developed. Sensitivity analysis was carried out to identify critical parameters. The results obtained through response surface methodology were compared with the actual observed performance parameters. The results show that the RSM is an easy and effective tool for modelling and analyzing the performance of any mechanical system.
[3885] vixra:1601.0326 [pdf]
Time Really Passes, Science Can't Deny That
Today's science provides quite a lean picture of time as a mere geometric evolution parameter. I argue that time is much richer. In particular, I argue that besides geometric time there is creative time, when objective chance events happen. The existence of the latter follows directly from the existence of free will. Following the French philosopher Lequyer, I argue that free will is a prerequisite for the possibility of rational argumentation, hence cannot be denied. Consequently, science cannot deny the existence of creative time, and thus that time really passes.
[3886] vixra:1601.0324 [pdf]
Numerical Solution for Hybrid Fuzzy System by Adams Fourth Order Predictor-Corrector Method
In this paper three numerical methods for solving hybrid fuzzy differential equations are discussed: the Adams-Bashforth method, the Adams-Moulton method, and the Predictor-Corrector method obtained by combining the two. Convergence and stability of the proposed methods are proved in detail. In addition, the methods are illustrated by solving two Cauchy problems.
[3887] vixra:1601.0302 [pdf]
A Mathematical Approach to Simple Bulls and Cows
This document describes the game of Bulls and Cows and the research previously done on it. We then proceed to discuss our simplified algorithm, which can be used practically by humans during the course of play. An extended version of the algorithm, which leverages computational power to guess the code more quickly and efficiently, has also been explored. Lastly, extensive human trials have been conducted to study the effectiveness of the algorithm, and it has been shown that the algorithm results in a marked decrease in the average number of guesses in which a code is guessed by the code-breaker.
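The abstract does not spell out the game's feedback rule, but any code-breaking algorithm for Bulls and Cows rests on it. A minimal sketch of the scoring function (the function name and digit-string representation are illustrative assumptions):

```python
from collections import Counter

def score(secret: str, guess: str) -> tuple[int, int]:
    """Return (bulls, cows): bulls are correct digits in the correct
    position; cows are correct digits in the wrong position."""
    bulls = sum(s == g for s, g in zip(secret, guess))
    # Count digit matches irrespective of position, then subtract bulls.
    common = sum((Counter(secret) & Counter(guess)).values())
    return bulls, common - bulls

print(score("1234", "1325"))  # (1, 2): '1' is a bull; '3' and '2' are cows
```

Computer strategies like the extended algorithm mentioned above typically score every candidate code against the feedback history and keep only the consistent ones.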
[3888] vixra:1601.0300 [pdf]
Cusps in the Quench Dynamics of a Bloch State
We report some nonsmooth dynamics of a Bloch state in a one-dimensional tight binding model with the periodic boundary condition. After a sudden change of the potential of an arbitrary site, quantities like the survival probability of the particle in the initial Bloch state show cusps periodically, with the period being the Heisenberg time associated with the energy spectrum. This phenomenon is a nonperturbative counterpart of the nonsmooth dynamics observed previously (Zhang and Haque, arXiv:1404.4280) in a periodically driven tight binding model. Underlying the cusps is an exactly solvable model, which consists of equally spaced levels extending from $-\infty$ to $+\infty$, between which two arbitrary levels are coupled to each other by the same strength.
[3889] vixra:1601.0291 [pdf]
Higgs-Higgs Interaction
The amplitude of Higgs-Higgs interaction is calculated in the Standard Model in the framework of Sirlin's renormalization scheme in the unitary gauge. The one-loop corrections for $\lambda$, the constant of the $4\chi$ interaction, are compared with the previous results of L. Durand et al., obtained using the technique of the equivalence theorem, and in different gauges.
[3890] vixra:1601.0289 [pdf]
Korovkin-Type Theorems for Abstract Modular Convergence
We give some Korovkin-type theorems on convergence and estimates of rates of approximation of nets of functions, satisfying suitable axioms, whose particular cases are filter/ideal convergence, almost convergence and triangular A-statistical convergence, where A is a non-negative summability method. Furthermore, we give some applications to Mellin-type convolution and bivariate Kantorovich-type discrete operators.
[3891] vixra:1601.0285 [pdf]
A Roadmap to Some Dimensionless Constants of Physics
It is well known that nature's dimensionless constants variously take the form of mass ratios, coupling constants, and mixing angles. What is not generally known is that by considering a subset of these constants in a particular order (following a roadmap if you will) one can easily find accurate, but compact, approximations for each member of this subset, with each compact expression pointing the way to the next. Specifically, if the tau-muon mass ratio, the muon-electron mass ratio, the neutron-electron mass ratio, the fine structure constant, and the three largest quark and lepton mixing angles are considered in that order, one can readily find a way of compressing them into a closely-related succession of compact mathematical expressions.
[3892] vixra:1601.0283 [pdf]
Special Relativistic Fourier Transformation and Convolutions
In this paper we use the steerable special relativistic (space-time) Fourier transform (SFT), and relate the classical convolution of the algebra for space-time $Cl(3,1)$-valued signals over the space-time vector space $\R^{3,1}$ with the (equally steerable) Mustard convolution. A Mustard convolution can be expressed in the spectral domain as the pointwise product of the SFTs of the factor functions. In full generality, we express the classical convolution of space-time signals in terms of finite linear combinations of Mustard convolutions, and vice versa the Mustard convolution of space-time signals in terms of finite linear combinations of classical convolutions.
[3893] vixra:1601.0278 [pdf]
Gravicommunication (GC)
In this work gravicommunication (GC) is introduced as a new form of communication (different from gravitational waves) which involves gravitons (the elementary particles of gravitation). This research is based on a quantum modification of general relativity. The modification includes effects of production/absorption of gravitons, which turn out to have a small but finite mass and electric dipole moment. It is shown that such gravitons form a dipole Bose-Einstein condensate, even at high temperature. The theory (without fitting parameters) is in good quantitative agreement with cosmological observations. In this theory we obtain an interface between gravitons and ordinary matter, which very likely exists not only in the cosmos but everywhere, including our body and, especially, our brain. Subjective experiences are considered as a manifestation of that interface. A model of such an interface is presented and some new experimentally verifiable aspects of natural neural systems are considered. According to the model, GC can be superluminal, which would solve the problem of quantum entanglement. Probable applications of these ideas include health (brain stimulation), new forms of communication, computational capabilities, energy resources and weapons. Potential social consequences of these developments can be comparable with the effects of the discovery and applications of electricity. Some developed civilizations in the universe may already have mastered gravicommunication (with various applications), and so should we.
[3894] vixra:1601.0275 [pdf]
A Comparison Study on Original and Torrefied Hazelnut Shells using a Bubbling Fluidised Bed Gasifier
Torrefaction is a mild thermo-chemical process, similar to pyrolysis, that can be applied to biomass to improve its energy density and hydrophobicity. A comparison was made between the original and torrefied forms of hazelnut shell agricultural waste biomass when these materials were subjected to gasification in a bench-scale fluidised bed gasifier. Results indicated that a simplified torrefaction process succeeded in physically transforming the hazelnut shells, and that the resulting syn-gas had a relatively higher calorific value together with a lower tar content. Keywords: biomass, TGA, thermogravimetry, thermogravimetric analysis, gasification, fluidisation
[3895] vixra:1601.0237 [pdf]
Thermal Contributions in Divergence-Free Quantum Field Theory
In the framework of divergence-free quantum field theory, we demonstrate how to compute the thermal free energy of bosonic and fermionic fields. While our computations pertain to one loop, they do indicate the method to be applied in higher-loops. In the course of our derivations, use is made of Poisson's summation formula, and the resulting expressions involve the zeta function. We note that the logarithmic terms involve temperature as an energy scale term.
[3896] vixra:1601.0233 [pdf]
Numerical-Analytical Assessment on Solar Chimney Power Plant
This study considers an appropriate expression to estimate the output power of solar chimney power plant systems (SCPPS). Recently several mathematical models of a solar chimney power plant were derived, studied for a variety of boundary conditions, and compared against CFD calculations. An important concern in modeling SCPPS is the accuracy of the derived pressure drop and output power equations. To elucidate the matter, axisymmetric CFD analysis was performed to model the solar chimney power plant and calculate the output power for different available solar irradiation. Both analytical and numerical results were compared against the available experimental data from the Manzanares power plant. We also evaluated the fidelity of the assumptions underlying the derivation and present reasons to believe that some of the derived equations, especially the power equation in this model, may require a correction to be applicable in more realistic conditions. This paper provides an approach to estimate the output power with respect to the radiation available to the collector.
[3897] vixra:1601.0211 [pdf]
Baryogenesis Via the Packaged Entanglement States with C-Symmetry Breaking
Baryogenesis, or the origin of the matter-antimatter asymmetry of the universe, is one of the major unsolved problems in physical cosmology. Here we present a new interpretation of baryogenesis based on the theory of packaged entanglement states, in which the particles are indeterminate and hermaphroditic. A measurement or an external perturbation of these packaged entanglement states will cause the wave function to collapse and therefore break the system's C-symmetry. This process satisfies the Sakharov conditions. By further proposing an entanglement selection principle, we can give a self-consistent interpretation of the origin of the matter-antimatter asymmetry produced in the early universe. Thus, the collapse of packaged entanglement states with C-symmetry breaking could be, or at least contribute to, the origin of the matter-antimatter asymmetry.
[3898] vixra:1601.0207 [pdf]
Interpreting the Summation Notation When the Lower Limit is Greater Than the Upper Limit
In interpreting the sigma notation for finite summation, it is generally assumed that the lower limit of summation is less than or equal to the upper limit. This presumption has led to certain misconceptions, especially concerning what constitutes an empty sum. This paper addresses how to construe the sigma notation when the lower limit is greater than the upper limit.
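One convention often adopted for this situation (the abstract does not say which convention the paper settles on) extends the sum so that the splitting rule $\sum_{k=a}^{b} + \sum_{k=b+1}^{c} = \sum_{k=a}^{c}$ holds for all integer limits, which forces $\sum_{k=a}^{b} = -\sum_{k=b+1}^{a-1}$ when $a > b$ and makes $\sum_{k=b+1}^{b}$ the empty sum, equal to 0. A minimal sketch:

```python
def ext_sum(f, a, b):
    """Sum of f(k) for k = a..b, extended so that the splitting rule
    ext_sum(a..b) + ext_sum(b+1..c) == ext_sum(a..c) holds for all
    integer limits. For a > b this forces the value -sum(b+1..a-1);
    in particular a == b + 1 gives the empty sum, 0."""
    if a <= b:
        return sum(f(k) for k in range(a, b + 1))
    return -sum(f(k) for k in range(b + 1, a))

f = lambda k: k
assert ext_sum(f, 2, 1) == 0                                # empty sum
# Splitting rule holds even when the middle limit runs "backwards":
assert ext_sum(f, 1, 8) + ext_sum(f, 9, 5) == ext_sum(f, 1, 5)
```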
[3899] vixra:1601.0206 [pdf]
Split and Observing the Spin of Free Electrons in Action (Plasma Theory and Stern-Gerlach Experiment by Free Electron in Quantum Theory)
In this article, some objects observed in the experiment, and the compatibility of classical relationships with the empirical observations, are investigated from the viewpoint of plasma physics, whose equations are rooted in both classical physics and quantum mechanics. Given that the possibility of separating and directly observing the spin of a free electron has been one of the most debated issues in quantum philosophy over the last few decades, this paper studies some of the technical and scientific issues of the experiment.
[3900] vixra:1601.0184 [pdf]
Closed Loop Current Control of Three Phase Photovoltaic Grid Connected System
The paper presents a closed loop current control technique for three phase grid connected systems with a renewable energy source. The proposal optimizes the system design, permitting reduction of system losses and harmonics for the three phase grid connected system. The performance of the proposed controller of a grid connected PV array with DC-DC converter and multilevel inverter is evaluated through MATLAB simulation. The results obtained with the proposed method are compared with those obtained without a current controller for the three-phase photovoltaic multilevel inverter, in terms of THD and switching frequency. Experimental work was carried out with the PV module WAREE WS 100, which has a power rating of 10 W, 17 V output voltage and 1000 W/m² irradiance. The test results show that the proposed design exhibits good performance.
[3901] vixra:1601.0179 [pdf]
Efficient Linear Fusion of Distributed MMSE Estimators for Big Data
Many signal processing applications require performing statistical inference on large datasets, where computational and/or memory restrictions become an issue. In this big data setting, computing an exact global centralized estimator is often unfeasible. Furthermore, even when approximate numerical solutions (e.g., based on Monte Carlo methods) working directly on the whole dataset can be computed, they may not provide a satisfactory performance either. Hence, several authors have recently started considering distributed inference approaches, where the data is divided among multiple workers (cores, machines or a combination of both). The computations are then performed in parallel and the resulting distributed or partial estimators are finally combined to approximate the intractable global estimator. In this paper, we focus on the scenario where no communication exists among the workers, deriving efficient linear fusion rules for the combination of the distributed estimators. Both a Bayesian perspective (based on the Bernstein-von Mises theorem and the asymptotic normality of the estimators) and a constrained optimization view are provided for the derivation of the linear fusion rules proposed. We concentrate on minimum mean squared error (MMSE) partial estimators, but the approach is more general and can be used to combine any kind of distributed estimators as long as they are unbiased. Numerical results show the good performance of the algorithms developed, both in simple problems where analytical expressions can be obtained for the distributed MMSE estimators, and in a wireless sensor network localization problem where Monte Carlo methods are used to approximate the partial estimators.
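The classical inverse-variance rule for combining independent, unbiased estimators illustrates the kind of linear fusion described above (the paper derives its own rules; this sketch, with illustrative worker sizes and a toy Gaussian problem, only shows the principle):

```python
import numpy as np

def fuse(estimates, variances):
    """Minimum-variance unbiased linear fusion of independent, unbiased
    partial estimators: weights proportional to inverse variances,
    constrained to sum to one so the fused estimator stays unbiased."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / v)  # never larger than min(variances)
    return fused, fused_var

rng = np.random.default_rng(0)
theta = 3.0
sizes = [100, 400, 1600]          # unequal data shares per worker
ests = [rng.normal(theta, 1.0, n).mean() for n in sizes]
fused, fused_var = fuse(ests, [1.0 / n for n in sizes])
```

With these weights the fused variance is the harmonic-style combination 1/Σ(1/σᵢ²), so fusing can only tighten the estimate relative to the best single worker.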
[3902] vixra:1601.0174 [pdf]
Improving Population Monte Carlo: Alternative Weighting and Resampling Schemes
Population Monte Carlo (PMC) sampling methods are powerful tools for approximating distributions of static unknowns given a set of observations. These methods are iterative in nature: at each step they generate samples from a proposal distribution and assign them weights according to the importance sampling principle. Critical issues in applying PMC methods are the choice of the generating functions for the samples and the avoidance of the sample degeneracy. In this paper, we propose three new schemes that considerably improve the performance of the original PMC formulation by allowing for better exploration of the space of unknowns and by selecting more adequately the surviving samples. A theoretical analysis is performed, proving the superiority of the novel schemes in terms of variance of the associated estimators and preservation of the sample diversity. Furthermore, we show that they outperform other state of the art algorithms (both in terms of mean square error and robustness w.r.t. initialization) through extensive numerical simulations.
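The baseline PMC iteration that the abstract's schemes improve upon can be sketched as follows; the toy Gaussian target, the shared proposal scale, and all parameter values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Toy unnormalized target: a standard normal stand-in for a posterior.
    return -0.5 * x**2

N, sigma = 200, 2.0
mu = rng.uniform(-10.0, 10.0, N)      # initial proposal locations
for _ in range(20):
    x = rng.normal(mu, sigma)         # one draw per proposal
    # Importance weights: target over proposal. The Gaussian normalizing
    # constants cancel on normalization since all proposals share sigma.
    logw = log_target(x) + 0.5 * ((x - mu) / sigma) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resampling: surviving samples seed the next generation of proposals.
    mu = x[rng.choice(N, size=N, p=w)]

posterior_mean = float(np.sum(w * x))  # self-normalized IS estimate
```

The degeneracy the paper targets is visible here: multinomial resampling tends to duplicate a few high-weight samples, which is exactly what the proposed alternative weighting and resampling schemes are designed to mitigate.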
[3903] vixra:1601.0165 [pdf]
General Two-Sided Quaternion Fourier Transform, Convolution and Mustard Convolution
In this paper we use the general two-sided quaternion Fourier transform (QFT), and relate the classical convolution of quaternion-valued signals over $\R^2$ with the Mustard convolution. A Mustard convolution can be expressed in the spectral domain as the pointwise product of the QFTs of the factor functions. In full generality, we express the classical convolution of quaternion signals in terms of finite linear combinations of Mustard convolutions, and vice versa the Mustard convolution of quaternion signals in terms of finite linear combinations of classical convolutions.
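The defining property used here, that the Mustard convolution is a pointwise product in the transform domain, mirrors the classical convolution theorem. A sketch of that classical analogue with the ordinary discrete Fourier transform (the quaternionic case replaces the DFT with the two-sided QFT, where the identity no longer holds exactly for classical convolution, hence the linear combinations in the abstract):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
f, g = rng.standard_normal(n), rng.standard_normal(n)

# Circular convolution computed directly from its definition...
direct = np.array([sum(f[j] * g[(m - j) % n] for j in range(n))
                   for m in range(n)])
# ...equals the inverse DFT of the pointwise product of the DFTs.
spectral = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(direct, spectral)
```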
[3904] vixra:1601.0127 [pdf]
Create Polygon Through Fans Suitable for Parallel Calculations
There are many methods for finding whether a point is inside a polygon or not. The set of all points inside a polygon can be referred to as the point set of the polygon. Assume there are N points on a plane and that the polygon has M vertices; it then takes O(NM) calculations to create the point set of the polygon. Assuming N >> M, we offer a parallel calculation method suitable for GPU programming. Our method considers a polygon to consist of many fan regions, which can be positive or negative. We would like to extend this method to the 3D problem, where a polyhedron instead of a polygon is drawn using cones.
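A minimal sketch of a signed-fan test in this spirit (the apex choice, function names, and the winding-number formulation are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def inside(poly, pts, apex):
    """Signed-fan point-in-polygon test. Each polygon edge (V[i], V[i+1])
    spans a fan triangle with the apex; a triangle counts +1 when it
    contains a query point with positive (CCW) orientation and -1 with
    negative (CW) orientation. The signed counts sum to the winding
    number, which is nonzero exactly for interior points. Every triangle
    test is independent of the others, hence trivially parallelizable."""
    poly, pts = np.asarray(poly, float), np.asarray(pts, float)
    o = np.asarray(apex, float)
    wind = np.zeros(len(pts), dtype=int)
    for a, b in zip(poly, np.roll(poly, -1, axis=0)):
        # z-component of cross(edge direction, point - edge start) for
        # the three edges of the triangle (o, a, b).
        cr = np.stack([
            (t1 - t0)[0] * (pts - t0)[:, 1] - (t1 - t0)[1] * (pts - t0)[:, 0]
            for t0, t1 in ((o, a), (a, b), (b, o))
        ])
        wind += (cr > 0).all(axis=0).astype(int) - (cr < 0).all(axis=0).astype(int)
    return wind != 0

# L-shaped (non-convex) polygon; the apex may lie anywhere, even outside.
L = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
res = inside(L, [(0.4, 0.5), (1.5, 1.4), (-1.0, 0.5)], apex=(3.0, 3.0))
# res.tolist() == [True, False, False]
```

The positive and negative fan regions of the abstract appear here as the +1/-1 orientation signs: the radial edges shared by neighbouring fan triangles cancel, leaving only the polygon boundary's contribution to the winding number.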
[3905] vixra:1601.0045 [pdf]
The Formal Foundation of a Quaternionic Space-Time Calculus
The introduction of 'quaternionic differential forms' on the tangent space of a four-dimensional manifold yields a promising mathematical system for describing our physical (3+1) spacetime with only a minimum of basic assumptions. Here, the foundation of this model of a 'quaternionic spacetime' is presented.
[3906] vixra:1601.0036 [pdf]
Clifford Algebraic Unification of Conformal Gravity with an Extended Standard Model
A brief review of the basics of the Clifford $Cl(5, C)$ Unified Gauge Field Theory formulation of Conformal Gravity and $U(4) \times U(4) \times U(4)$ Yang-Mills in $4D$ is presented. A physically relevant subgroup is $SU(2, 2) \times SU(4)_C \times SU(4)_L \times SU(4)_R$, which is compatible with the Coleman-Mandula theorem (in the absence of a mass gap). This proposal for a Clifford Algebraic Unification of Conformal Gravity with an Extended Standard Model deals mainly with models of $four$ generations of fermions. Mirror fermions can be incorporated as well. Whether these mirror fermions are dark matter candidates is an open question. There are also residual $U(1)$ groups within this Clifford group unification scheme that should play an important role in Cosmology in connection to dark matter particles coupled to gravity via a Bimetric extension of General Relativity. Other four generation scenarios based on $Cl(6,R), Cl(8,R)$ algebras, Supersymmetric Field Theories and Quaternions are discussed.
[3907] vixra:1601.0031 [pdf]
A Remark by Atiyah on Donaldson's Theory, AP Theory and AdS/CFT Duality
Using Artin Presentation Theory, we mathematically augment a remark of Atiyah on physics and Donaldson's 4D theory which, conversely, explicitly introduces the theoretical physical relevance of AP Theory into Modern Physics. AP Theory is a purely discrete group-theoretic, in fact, a framed pure braid theory, which, in the sharpest possible holographic manner, encodes all closed, orientable 3-manifolds and their knot and linking theories, and a large class of compact, connected, simply-connected, smooth 4-manifolds with a connected boundary, whose physical relevance for Atiyah's remark we explain.
[3908] vixra:1512.0494 [pdf]
Packaged Entanglement States and Particle Teleportation. II. C-Symmetry Breaking
Packaged entanglement states encapsulate the necessary physical quantities, as an entirety, for completely identifying the particles. They are important for particle physics and matter teleportation. Here we propose new packaged entanglement states (of two particles and of more than two particles) in which charge is not conserved in the process of wave function collapse. We also discuss particle teleportation and entanglement transfer using the new packaged entanglement states. It is shown that a particle always converts into its conjugate particle during the particle teleportation process.
[3909] vixra:1512.0472 [pdf]
The Logical Self-Reference Inside the Fourier Transform
I show that, in general, the Fourier transform is necessarily self-referent and logically circular. Keywords: self-reference, logical circularity, mathematical logic, Fourier transform, vector space, orthogonality, orthogonal, unitarity, unitary, imaginary unit, foundations of quantum theory, quantum mechanics, quantum indeterminacy, quantum information, prepared state, pure state, mixed state, wave packet, scalar product, tensor product.
[3910] vixra:1512.0462 [pdf]
Galilean Relativity with the Relativistic Gamma Factor.
Special Relativity as derived by Einstein presents time and space distortions and paradoxes. This paper presents an approach where the Lorentz transformations are built on equations with speed variables instead of the space and time variables used by Einstein. The result is a set of transformation rules between inertial frames that are free of time dilation and length contraction at all relativistic speeds. Particles move according to Galilean relativity multiplied by the relativistic gamma factor. All the transformation equations already existing for the electric and magnetic fields, deduced on the basis of the invariance of the Maxwell wave equations, remain valid. The present work shows the importance of including the characteristics of the measuring equipment in the chain of physical interactions to avoid unnatural conclusions like time dilation and length contraction.
[3911] vixra:1512.0449 [pdf]
Energy in Special Relativity
I give new relativistic formulas for kinetic, rest and total energies. The change in kinetic energy of a particle is determined as the work done by the spatial part of the Minkowski four-force. I present a new relation between the relativistic kinetic energy and the spatial part of the four-momentum, as well as an interpretation of the temporal component of the Minkowski four-force.
[3912] vixra:1512.0433 [pdf]
The 2(2S + 1)–Formalism and Its Connection with Other Descriptions
In the framework of the Joos-Weinberg 2(2S+1) theory for massless particles, the dynamical invariants have been derived from the Lagrangian density, which is considered to be a 4-vector. An à la Majorana interpretation of the 6-component "spinors", the field operators of S=1 particles, as left- and right-circularly polarized radiation, leads us to conserved quantities analogous to those obtained by Lipkin and Sudbery. The scalar Lagrangian of the Joos-Weinberg theory is shown to be equivalent to the Lagrangian of a free massless field introduced by Hayashi. As a consequence of a new "gauge" invariance, this skew-symmetric field describes physical particles with longitudinal components only. The interaction of the spinor field with Weinberg's 2(2S+1)-component massless field is considered. A new interpretation of the Weinberg field function is proposed. KEYWORDS: quantum electrodynamics, Lorentz group representation, high-spin particles, bivector, electromagnetic field potential. PACS: 03.50.De, 11.10.Ef, 11.10.Qr, 11.17+y, 11.30.Cp
[3913] vixra:1512.0420 [pdf]
Cooperative Parallel Particle Filters for Online Model Selection and Applications to Urban Mobility
We design a sequential Monte Carlo scheme for the dual purpose of Bayesian inference and model selection. We consider the application context of urban mobility, where several modalities of transport and different measurement devices can be employed. Therefore, we address the joint problem of online tracking and detection of the current modality. For this purpose, we use interacting parallel particle filters, each one addressing a different model. They cooperate to provide a global estimator of the variable of interest and, at the same time, an approximation of the posterior density of each model given the data. The interaction occurs through a parsimonious distribution of the computational effort, with online adaptation of the number of particles of each filter according to the posterior probability of the corresponding model. The resulting scheme is simple and flexible. We have tested the novel technique in different numerical experiments with artificial and real data, which confirm the robustness of the proposed scheme.
[3914] vixra:1512.0369 [pdf]
A Memoir of Mathematical Cartography
It is a memoir of mathematical cartography concerning the translation of the paper "A Conformal Mapping Projection With Minimum Scale Error" by W.I. Reilly, published in Survey Review, Volume XXII, No. 168, April 1973.
[3915] vixra:1512.0348 [pdf]
Dark Matter and Weak Field Limit of General Relativity
We follow Dr. Cooperstock's idea of solving the dark matter problem by means of Einstein's General Relativity. It is then discovered that the problem of galaxies (even with the help of the proposed novel numerical algorithm) does not converge to the needed solution. The reason is obvious: small factors such as the non-azimuthal motion of stars have been neglected. The exact, non-approximative equations are, however, simple enough for a stationary rotating dust cylinder of great height. It turns out that the weak-field limit of General Relativity fully coincides with Newtonian gravity.
[3916] vixra:1512.0319 [pdf]
On Almost Sure Convergence Rates for the Kernel Estimator of a Covariance Operator Under Negative Association
Let $\{X_n;\, n \ge 1\}$ be a strictly stationary sequence of negatively associated random variables with common continuous and bounded distribution function $F$. We consider the estimation of the two-dimensional distribution function of $(X_1, X_{k+1})$ based on kernel-type estimators, as well as the estimation of the covariance function of the limit empirical process induced by the sequence $\{X_n;\, n \ge 1\}$, where $k \in \mathbb{N}_0$. We derive uniform strong convergence rates for the kernel estimator of the two-dimensional distribution function of $(X_1, X_{k+1})$; these rates were not previously available and require no conditions on the covariance structure of the variables. Furthermore, assuming a convenient decrease rate of the covariances $\mathrm{Cov}(X_1, X_{n+1})$, $n \ge 1$, we prove a uniform strong convergence rate for the covariance function of the limit empirical process based on kernel-type estimators. Finally, we use a simulation study to compare the estimators of the distribution function of $(X_1, X_{k+1})$.
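A minimal sketch of a kernel-type estimator of the two-dimensional distribution function from one realization of a stationary sequence; the integrated uniform kernel, the bandwidth, and the i.i.d. uniform test data are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def K(u):
    # Integrated uniform kernel on [-1, 1]: a smoothed step function.
    return np.clip((u + 1.0) / 2.0, 0.0, 1.0)

def kernel_cdf2(sample, k, x, y, h):
    """Kernel-smoothed estimate of F(x, y) = P(X_1 <= x, X_{k+1} <= y):
    average products of smoothed indicators over all lagged pairs
    (X_i, X_{i+k}) of the observed sequence."""
    s = np.asarray(sample, float)
    a, b = s[:len(s) - k], s[k:]
    return float(np.mean(K((x - a) / h) * K((y - b) / h)))

rng = np.random.default_rng(0)
u = rng.uniform(size=20000)            # i.i.d. uniforms: F(x, y) = x * y
est = kernel_cdf2(u, 1, 0.5, 0.5, 0.02)
# est should be close to 0.25 in this independent case
```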
[3917] vixra:1512.0317 [pdf]
A Gauge Theory of Gravity in Curved Phase-Spaces
After a cursory introduction of the basic ideas behind Born's Reciprocal Relativity theory, the geometry of the cotangent bundle of spacetime is studied via the introduction of nonlinear connections associated with certain $nonholonomic$ modifications of Riemann--Cartan gravity within the context of Finsler geometry. A novel gauge theory of gravity in the $8D$ cotangent bundle $ T^*M$ of spacetime is explicitly constructed and based on the gauge group $ SO (6, 2) \times_s R^8$ which acts on the tangent space to the cotangent bundle $ T_{ ( {\bf x}, {\bf p}) } T^*M $ at each point $ ({\bf x}, {\bf p})$. Several gravitational actions involving curvature and torsion tensors and associated with the geometry of curved phase spaces are presented. We conclude with a brief discussion of the field equations, the geometrization of matter, QFT in accelerated frames, {\bf T}-duality, double field theory, and generalized geometry.
[3918] vixra:1512.0303 [pdf]
Differential Operators in the G8,2 Geometric Algebra, DCGA
This paper introduces the differential operators in the G(8,2) Geometric Algebra, called the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA). The differential operators are three x, y, and z-direction bivector-valued differential elements and either the commutator product or the anti-commutator product for multiplication into a geometric entity that represents the function to be differentiated. The general form of a function is limited to a Darboux cyclide implicit surface function. Using the commutator product, entities representing 1st, 2nd, or 3rd order partial derivatives in x, y, and z can be produced. Using the anti-commutator product, entities representing the anti-derivation can be produced from 2-vector quadric surface and 4-vector conic section entities. An operator called the pseudo-integral is defined and has the property of raising the x, y, or z degree of a function represented by an entity, but it does not produce a true integral. The paper concludes by offering some basic relations to limited forms of vector calculus and differential equations that are limited to using Darboux cyclide implicit surface functions. An example is given of entity analysis for extracting the parameters of an ellipsoid entity using the differential operators.
[3919] vixra:1512.0297 [pdf]
General Classical Electrodynamics, a New Foundation of Modern Physics and Technology
Maxwell's Classical Electrodynamics (MCED) shows several related inconsistencies, as the consequence of a single false premise. The Lorentz force law of MCED violates Newton's Third Law of Motion (N3LM) in case of General Magnetostatics (GMS) current distributions that are not necessarily divergence free. A consistent GMS theory is defined by means of Whittaker's force law, which requires a scalar magnetic force field, $B_L$. The field $B_L$ mediates a longitudinal Ampère force, similar to the vector magnetic field, $B_T$, that mediates a transverse Ampère force. The sum of transverse and longitudinal Ampère forces obeys N3LM for stationary currents in general. The scalar field, $B_\Phi$, is also physical, as a consequence of charge continuity. MCED does not treat the induction of the electric field, $E_L$, by a time varying $B_L$ field, so MCED does not cover the reason for adding $E_L$ to the superimposed electric field, $E$. The exclusion of $E_L$ from $E$ simplifies MCED to Classical Electrodynamics (CED). The MCED Jefimenko fields show a far field contradiction that is not shown by the CED fields. CED is based on the Lorentz force and therefore violates N3LM as well. Hence, we define a General Classical Electrodynamics (GCED) as a generalization of GMS and CED. GCED describes three types of far field waves: the longitudinal $\Phi$-wave, the longitudinal electromagnetic (LEM) wave and the transverse electromagnetic (TEM) wave, with vacuum phase velocities respectively $a$, $b$ and $c$. GCED power and force theorems are derived. The general force theorem obeys N3LM only if the three phase velocities satisfy the Coulomb premise: $a \gg c$ and $b = c$. GCED with the Coulomb premise is far field consistent, and resolves the classical $\frac{4}{3}$ energy-momentum problem of a moving charged sphere. GCED with the Lorentz premise ($a = c$ and $b = c$) reduces to the inconsistent MCED. Many experimental results verify GCED with the Coulomb premise, and falsify MCED.
GCED can replace MCED as a new foundation of modern physics (relativity theory and wave mechanics). It might be the inspiration for new scientific experiments and electrical engineering, such as new wave-electronic effects based on $\Phi$-waves and LEM waves, and the conversion of natural $\Phi$-waves and LEM wave energy into useful electricity, in the footsteps of Nikola Tesla and Thomas Henry Moray.
[3920] vixra:1512.0286 [pdf]
Logical Independence of Imaginary and Complex Numbers in Elementary Algebra (Context: Theory of Indeterminacy and Quantum Randomness)
As opposed to the classical logic of true and false, when Elementary Algebra is treated as a formal axiomatised system, formulae in that algebra are either provable, disprovable or otherwise logically independent of axioms. This logical independence is well-known to Mathematical Logic. Here I show that the imaginary unit, and by extension, all complex numbers, exist in that algebra logically independently of the algebra's axioms. The intention is to cover the subject in a way accessible to physicists. This work is part of a project researching logical independence in quantum mathematics, for the purpose of advancing a complete theory of quantum randomness. Elementary Algebra is a theory that cannot be completed and is therefore subject to Gödel's Incompleteness Theorems. Keywords: mathematical logic, formal system, axioms, mathematical propositions, Soundness Theorem, Completeness Theorem, logical independence, mathematical undecidability, foundations of quantum theory, quantum mechanics, quantum physics, quantum indeterminacy, quantum randomness.
[3921] vixra:1512.0278 [pdf]
A Qualitative Analysis of Isotopic Changes Reported in LENR Experiments
A few patterns are identified in the isotopic changes seen in LENR experiments. These patterns are shown to be consistent with the parallel operation of several related processes: α decay, α capture, fragmentation of heavier nuclides following upon α capture, and β decay/electron capture. The results of several researchers working in the field are examined in the light of these processes. The analysis developed here is then applied to the 2014 report by Levi et al. on the test of Andrea Rossi’s E-Cat in Lugano, Switzerland, whose fuel and ash assays are found to be broadly consistent with the isotope studies. The different processes are seen, then, to operate in systems making use of palladium, nickel, electrolysis, gas diffusion and glow discharge. A suggestion is made as to what might be inducing these decays and capture and fragmentation reactions.
[3922] vixra:1512.0264 [pdf]
Emission Theories and Relativity
The present paper compares two different approaches to Relativity Theory. One is the approach made by Einstein at the beginning of the 20th century, which postulates that light moves with light speed c independent of its emitting source. The proposed approach postulates that light moves with light speed c relative to its emitting source, which also includes reflecting and refracting surfaces. Einstein interpreted the constancy of light speed in all inertial frames as a time and space problem, resulting in time dilation and length contraction, while the new approach considers it a speed problem where time and space are absolute variables. The result of the proposed approach is that particles move according to Galilean relativity multiplied by the relativistic gamma factor.
[3923] vixra:1512.0256 [pdf]
Particle Physics and Cosmology in the Microscopic Model
This review summarizes the results of a series of recent papers in which a microscopic structure underlying the physics of elementary particles has been proposed. The 'tetron model' relies on the existence of an internal isospin space, in which an independent physical dynamics takes place. This idea is critically reconsidered in the present work. As becomes evident in the course of the discussion, the model not only describes electroweak phenomena but also modifies our understanding of other physical topics, such as gravity, big bang cosmology and the nature of the strong interactions.
[3924] vixra:1512.0220 [pdf]
God and Explosions
Suppose, for simplicity, that God exists. The question is: why would He cause explosions? Why set a bad example for future terrorists on Earth?
[3925] vixra:1512.0214 [pdf]
Fitting the Hyperverse Model into the Friedmann Equation Gives a Constant Expansion Rate for the Universe
We show that a 3-sphere, whose surface volume matches the volume of the observable universe, has a radius of 27.6 billion light years, meaning it would be expanding into the fourth dimension at exactly twice the speed of light. We claim this observation has deep significance. Although the whole universe is much larger than this 'observable hyperverse', the 2c expansion rate is the actual expansion rate we experience, and this is due to the increase in the duration of the unit of quantum time as the universe expands. A model of the universe based on the 2c radial expansion rate produces equations stating that both energy and matter are continuously created with expansion, starting at the time of the Big Bang. Placing these equations into the Friedmann equation also gives a constant rate of expansion. The continuous creation of matter and energy, starting with the Big Bang, is distinct from the Steady State model, which has no Big Bang. The universe as an expanding 3-sphere is not an FRW model, as energy and matter both change with one over the scale factor squared. This negates most arguments against constant-rate expansion, as they are based on the FRW model. A 2c-hyperverse cosmology provides a new, powerful model for exploring the nature of the universe.
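The abstract's opening claim can be checked arithmetically: equating the surface volume of a 3-sphere, $2\pi^2 R^3$, to the volume of a ball, $\frac{4}{3}\pi r^3$, and solving for $R$. The sketch below assumes a comoving radius of the observable universe of about 46.3 billion light years, which is a standard figure but not stated in the abstract.

```python
import math

# Hedged numeric check: a 3-sphere whose surface volume 2*pi^2*R^3 equals
# the observable universe's volume (4/3)*pi*r^3 has radius R ~ 27.6 Gly.
r = 46.3  # billion light years (assumed comoving radius, not from the abstract)
R = r * (2.0 / (3.0 * math.pi)) ** (1.0 / 3.0)  # solve 2*pi^2*R^3 = (4/3)*pi*r^3
print(round(R, 1))  # close to the abstract's 27.6
```

The result matches the abstract's 27.6 billion light years to within rounding of the assumed input radius.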
[3926] vixra:1512.0211 [pdf]
Human Integration of Motion and Texture Information in Visual Slant Estimation
The present research aims to: (i) characterize the ability of the human visual system to determine object slant from combinations of visual stimulus characteristics that in general are uncertain and even conflicting; (ii) evaluate the influence of human age on visual cue assessment and processing; (iii) estimate the process of human visual cue integration based on the well-known Normalized Conjunctive Consensus and Averaging fusion rules, as well as on the more efficient probabilistic Proportional Conflict Redistribution rule no. 5 defined within Dezert-Smarandache Theory for plausible and paradoxical reasoning.
[3927] vixra:1512.0204 [pdf]
Intelligent Alarm Classification Based on DSmT
In this paper the critical issue of alarm classification and prioritization (in terms of degree of danger) is considered and realized on the basis of Proportional Conflict Redistribution rule no. 5, defined in Dezert-Smarandache Theory of plausible and paradoxical reasoning. The results obtained show the strong ability of this rule to account, in a coherent and stable way, for the evolution of all possible degrees of danger relating to a set of a priori defined, out-of-the-ordinary dangerous directions.
[3928] vixra:1512.0186 [pdf]
Multiple Camera Fusion Based on DSmT for Tracking Objects on Ground Plane
This paper presents comparative results of a model for multiple camera fusion based on Dezert-Smarandache theory of evidence. Our architecture works at the decision level to track objects on a ground plane using predefined zones, producing useful information for surveillance tasks such as behavior recognition. Decisions from cameras are generated by applying a perspective-based basic belief assignment function, which represents the uncertainty derived from camera perspective while tracking objects on the ground plane.
[3929] vixra:1512.0167 [pdf]
P-Union and P-Intersection of Neutrosophic Cubic Sets
Conditions for the P-union and P-intersection of falsity-external (resp. indeterminacy-external and truth-external) neutrosophic cubic sets to be a falsity-external (resp. indeterminacy-external and truth-external) neutrosophic cubic set are provided.
[3930] vixra:1512.0157 [pdf]
Rough Sets in Neutrosophic Approximation Space
A rough set is a formal approximation of a crisp set that gives lower and upper approximations of the original set in order to deal with uncertainty. The concept of a neutrosophic set is a mathematical tool for handling imprecise, indeterminate and inconsistent data. In this paper, we introduce the concept of neutrosophic rough sets and investigate some of its properties. Further, as a characterisation of the neutrosophic rough approximation operators, we introduce various notions of cut sets of neutrosophic rough sets.
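The classical (crisp) lower/upper approximations that the abstract generalizes can be sketched in a few lines; the function and example below are illustrative and not taken from the paper.

```python
def rough_approximations(blocks, target):
    """Lower/upper approximation of `target` w.r.t. a partition `blocks`
    of the universe (the equivalence classes of an indiscernibility relation)."""
    target = set(target)
    lower, upper = set(), set()
    for b in blocks:
        b = set(b)
        if b <= target:   # class entirely inside the target set
            lower |= b
        if b & target:    # class overlapping the target set
            upper |= b
    return lower, upper

# Universe {1..6} partitioned into {1,2}, {3,4}, {5,6}; target set {1,2,3}.
lo, up = rough_approximations([[1, 2], [3, 4], [5, 6]], [1, 2, 3])
print(lo, up)  # lower {1, 2}; upper {1, 2, 3, 4}
```

The gap between the two approximations (here {3, 4}) is the boundary region that expresses the uncertainty.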
[3931] vixra:1512.0153 [pdf]
Simulating Human Decision Making for Testing Soft and Hard/Soft Fusion Algorithms
Current methods for evaluating the effects of human opinions in data fusion systems are often dependent on human testing (which is logistically hard and difficult to arrange for repeated tests of the same population).
[3932] vixra:1512.0147 [pdf]
Smarandache-Lattice and Algorithms
In this paper we introduced algorithms for constructing Smarandache-lattice from the Boolean algebra through Atomic lattice, weakly atomic modular lattice, Normal ideals, Minimal subspaces, Structural matrix algebra, Residuated lattice. We also obtained algorithms for Smarandache-lattice from the Boolean algebra.
[3933] vixra:1512.0136 [pdf]
The Improvement of DS Evidence Theory and its Application in IR/MMW Target Recognition
ATR systems have broad application prospects in the military, especially in the field of modern defense technology. When paradoxes exist in an ATR system due to an adverse battlefield environment, integration cannot be carried out effectively and reliably by traditional DS evidence theory alone.
[3934] vixra:1512.0103 [pdf]
DSmT Based Scheduling Algorithm in Opportunistic Beamforming Systems
A novel approach based on Dezert-Smarandache Theory (DSmT) is proposed for scheduling in opportunistic beamforming (OBF) systems. By jointly optimizing among system throughput, fairness and time delay of each user, the proposed algorithm can achieve larger system throughput and lower average time delay with approximately the same fairness and acceptable complexity, as compared with the proportional fair scheduler (PFS).
[3935] vixra:1512.0049 [pdf]
Applying Extensions of Evidence Theory to Detect Frauds in Financial Infrastructures
The Dempster-Shafer (DS) theory of evidence has significant weaknesses when dealing with conflicting information sources, as demonstrated by preeminent mathematicians. This problem may invalidate its effectiveness when it is used to implement decision-making tools that monitor a great number of parameters and metrics.
[3936] vixra:1512.0036 [pdf]
Cautious OWA and Evidential Reasoning for Decision Making under Uncertainty
To make a decision under certainty, multicriteria decision methods aim to choose, rank or sort alternatives on the basis of quantitative or qualitative criteria and preferences expressed by the decision-makers. However, decisions are often made under uncertainty: choosing alternatives can have different consequences depending on the external context (or state of the world). In this paper, a new methodology called Cautious Ordered Weighted Averaging with Evidential Reasoning (COWA-ER) is proposed for decision making under uncertainty, to take into account imperfect evaluations of the alternatives and unknown beliefs about groups of the possible states of the world (scenarios). COWA-ER cautiously mixes the principle of Yager's Ordered Weighted Averaging (OWA) approach with the efficient fusion of belief functions proposed in Dezert-Smarandache Theory (DSmT).
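Yager's OWA operator, the building block named in the abstract, applies a fixed weight vector to the criterion scores after sorting them; a minimal sketch follows, with values and weights that are purely illustrative and not from the paper.

```python
def owa(values, weights):
    """Yager's Ordered Weighted Averaging: apply `weights` to `values`
    sorted in decreasing order. Weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Three criterion scores for one alternative, aggregated with weights
# that emphasize the best scores (an "optimistic" decision attitude).
score = owa([0.2, 0.8, 0.5], [0.5, 0.3, 0.2])
print(score)  # 0.5*0.8 + 0.3*0.5 + 0.2*0.2 = 0.59
```

Putting all the weight on the first (or last) position recovers the max (or min) operator, which is why OWA can interpolate between optimistic and cautious attitudes.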
[3937] vixra:1512.0035 [pdf]
Change Detection by New DSmT Decision Rule and ICM with Constraints: Application to Argan Land Cover
The objective of this work is, first, the integration into a fusion process, using a hybrid DSmT model, of both the contextual information obtained from a supervised ICM classification with constraints and the temporal information from two images taken at two different dates. Secondly, we propose a new decision rule based on the DSmP transformation to overcome the inherent limitations of decision rules that use the maximum of generalized belief functions. The approach is evaluated on two LANDSAT ETM+ images; the results are promising.
[3938] vixra:1512.0033 [pdf]
Change Detection in Heterogeneous Remote Sensing Images Based on Multidimensional Evidential Reasoning
We present a multidimensional evidential reasoning (MDER) approach to estimate change detection from the fusion of heterogeneous remote sensing images. MDER is based on a multidimensional (M-D) frame of discernment composed by the Cartesian product of the separate frames of discernment used for the classification of each image.
[3939] vixra:1512.0032 [pdf]
Characterizations of Normal Parameter Reductions of Soft Sets
In 2014, Wang et al. gave the reduct definition for fuzzy information systems. We observe that the reduct definition given by Wang et al. does not retain the optimal choice of objects. In this paper, we point out the drawbacks of Wang et al.'s reduct definition and give some characterizations of normal parameter reductions of soft sets. Also, we prove that the image and inverse image of a normal parameter reduction is a normal parameter reduction under a consistency map.
[3940] vixra:1512.0029 [pdf]
Comparing Performance of Interval Neutrosophic Sets and Neural Networks with Support Vector Machines for Binary Classification Problems
In this paper, the classification results obtained from several kinds of support vector machines (SVM) and neural networks (NN) are compared with our proposed classifier. Our approach is based on neural networks and interval neutrosophic sets which are used to classify the input patterns into one of the two binary class outputs.
[3941] vixra:1512.0007 [pdf]
Ontology, Evolving Under the Influence of the Facts
We propose an algebraic approach to building ontologies that are capable of evolving under the influence of new facts and that have internal mechanisms of validation. For this purpose we build a formal model of the interactions of objects based on cellular automata, and find the limitations on transactions with objects imposed by this model. Then, in the context of the formal model, we define the basic entities of the model of knowledge representation: concepts, samples, properties, and relationships. In this case the formal limitations are induced into the model of knowledge representation in a natural way.
[3942] vixra:1512.0006 [pdf]
An Asymptotic Robin Inequality
The conjectured Robin inequality for an integer $n>7!$ is $\sigma(n)<e^\gamma n \log \log n,$ where $\gamma$ denotes Euler's constant and $\sigma(n)=\sum_{d | n} d$. Robin proved that this conjecture is equivalent to the Riemann hypothesis (RH). Writing $D(n)=e^\gamma n \log \log n-\sigma(n)$ and $d(n)=\frac{D(n)}{n}$, we prove unconditionally that $\liminf_{n \rightarrow \infty} d(n)=0.$ The main ingredients of the proof are an estimate for the Chebyshev summatory function and an effective version of Mertens' third theorem due to Rosser and Schoenfeld. A new criterion for RH depending solely on $\liminf_{n \rightarrow \infty}D(n)$ is derived.
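The quantities in the abstract are all directly computable, so the inequality can be spot-checked numerically for small $n$; the sketch below is an illustration of the definitions, not part of the paper's proof.

```python
import math

def sigma(n):
    """Sum of divisors of n, by trial division up to sqrt(n)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

EULER_GAMMA = 0.5772156649015329

def D(n):
    """D(n) = e^gamma * n * log log n - sigma(n), as in the abstract."""
    return math.exp(EULER_GAMMA) * n * math.log(math.log(n)) - sigma(n)

# Robin's inequality D(n) > 0 holds for every n in (5040, 15000]
# (5040 = 7! is the threshold in the abstract's statement).
assert all(D(n) > 0 for n in range(5041, 15001))
print(D(10080))  # small but positive: 10080 is highly composite
```

Highly composite n such as 10080 give the tightest margins, consistent with the known fact that extreme values of $\sigma(n)/n$ occur at such integers.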
[3943] vixra:1512.0005 [pdf]
Sagnac Experiment Analyzed with the "Emission & Regeneration" UFT
The results of the Sagnac experiment analyzed with the Standard Model (SM) are easily explained with non-relativistic equations assuming that light moves with light speed independent of its source, but the results are not compatible with Special Relativity. The Sagnac results analyzed with the "Emission & Regeneration" UFT present no incompatibilities within the theory, which postulates that light is emitted with light speed relative to the emitting source. Electromagnetic waves that arrive with any speed at mirrors, optical lenses and electric antennas are absorbed by the level electrons and subsequently emitted with light speed "c" relative to their nuclei, explaining why light speed "c" is always measured in all inertial frames. Relativity derived in the frame of the "E & R" UFT has absolute time and absolute space, resulting in a theory without paradoxes.
[3944] vixra:1511.0302 [pdf]
The Quaternion Domain Fourier Transform and its Properties
So far quaternion Fourier transforms have been mainly defined over $\mathbb{R}^2$ as signal domain space. But it seems natural to define a quaternion Fourier transform for quaternion valued signals over quaternion domains. This quaternion domain Fourier transform (QDFT) transforms quaternion valued signals (for example electromagnetic scalar-vector potentials, color data, space-time data, etc.) defined over a quaternion domain (space-time or other 4D domains) from a quaternion position space to a quaternion frequency space. The QDFT uses the full potential provided by hypercomplex algebra in higher dimensions and may moreover be useful for solving quaternion partial differential equations or functional equations, and in crystallographic texture analysis. We define the QDFT and analyze its main properties, including quaternion dilation, modulation and shift properties, Plancherel and Parseval identities, covariance under orthogonal transformations, transformations of coordinate polynomials and differential operator polynomials, transformations of derivative and Dirac derivative operators, as well as signal width related to band width uncertainty relationships.
[3945] vixra:1511.0274 [pdf]
Determination of a High-Precision Geoid by the Approach of A. Ardalan, I: Review of the Pizzetti-Somigliana Theory
This paper reviews the theory of Pizzetti-Somigliana for the determination of a geoid. It is the first part of an investigation using the approach of A. Ardalan to compute a regional geoid of high resolution for the determination of a Tunisian geoid.
[3946] vixra:1511.0267 [pdf]
Algebra of Paravectors
Paravectors, just like integers, have a ring structure. By introducing an integrated product we get geometric properties which make paravectors similar to vectors. The concepts of parallelism, perpendicularity and angle are conceptually similar to their vector counterparts, known from Euclidean geometry. Paravectors satisfy the parallelogram law, the Pythagorean theorem and many other properties well known to everyone from school.
[3947] vixra:1511.0260 [pdf]
Quantum Corrections to Classical Kinetics: the Weight of Rotation
Hydrodynamics of gases in the classical domain are examined from the perspective that the gas has a well-defined wavefunction description at all times. Specifically, the internal energy and volume exclusion of decorrelated vortex structures are included so that quantum corrections and modifications to Navier-Stokes behavior can be derived. This leads to a small deviation from rigid body rotation for a cylindrically bound gas, and the internal energy changes associated with vorticity give deviations in the Reynolds transport theorem. Some macroscopic observable features arising from this include variations in the specific heat, an anisotropic correction to thermal conductivity and a variation in optical scattering as a function of the declination from the axis of local vorticity. The improvements in magneto-optical traps suggest some interesting experiments to be done in higher temperature regimes where they are not usually employed. It is argued that the finite lifetime of observed vortices in ultracold bosonic gases is only apparent and that these volume-excluding structures persist in generating angular momentum and pressure in the cloud in a non-imageable form.
[3948] vixra:1511.0253 [pdf]
Combination of Doppler and Terrestrial Observations in the Adjustment of Geodetic Networks
The investigation concerns the combination of Doppler data and classical terrestrial observations for the adjustment of geodetic networks and the determination of the 7 parameters (translation, rotation and scale) between terrestrial geodetic and Doppler networks. Models of adjustment are presented. They fall into two groups: 1) a combined adjustment, 2) a common adjustment. The Bursa-Wolf model and Molodensky formulas are used to determine the parameters. Systematic errors in the orientation and scale of terrestrial geodetic networks are studied.
[3949] vixra:1511.0247 [pdf]
General Method for Summing Divergent Series Using Mathematica and a Comparison to Other Summation Methods
We are interested in finding sums of some divergent series using the general method for summing divergent series discovered in our previous work and the symbolic mathematical computation program Mathematica. We make a comparison to five other summation methods implemented in Mathematica and show that our method is stronger than the methods of Abel, Borel, Cesaro, Dirichlet and Euler.
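For readers unfamiliar with the summation methods being compared, the textbook example is Grandi's series 1 - 1 + 1 - 1 + ..., whose partial sums do not converge but whose Cesàro value is 1/2. The sketch below illustrates only that standard Cesàro construction, not the paper's own Mathematica-based method.

```python
# Cesaro summation: average the partial sums instead of taking their limit.
def cesaro_mean(terms):
    """Average of the partial sums of `terms`."""
    partial = 0
    partials = []
    for t in terms:
        partial += t
        partials.append(partial)
    return sum(partials) / len(partials)

grandi = [(-1) ** k for k in range(10001)]  # 1, -1, 1, -1, ...
print(cesaro_mean(grandi))  # partial sums oscillate 1, 0, 1, 0, ...; mean -> 1/2
```

The partial sums alternate between 1 and 0, so their running average converges to 1/2, the value every regular summation method assigns to this series.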
[3950] vixra:1511.0246 [pdf]
A Course of Numerical Analysis and Applied Mathematics
It is a course of numerical analysis and applied mathematics for the students of the geomatic and topography option of the ESAT School, Tunis, Tunisia.
[3951] vixra:1511.0237 [pdf]
Radiation from Rotating Dielectric Disc
The power spectral formula of the radiation of an electron moving in a rotating dielectric disc is derived. We suppose the index of refraction is constant during the rotation. This is in accord with the Fermi dielectric rotating disc for the determination of the gyration of light polarization. While the well-known Cherenkov effect, the transition effect, and the Cherenkov-synchrotron effect due to the motion of particles in a magnetic field are experimentally confirmed, the new phenomena - the radiation due to charge motion in a rotating dielectric medium and the Cherenkov-synchrotron radiation due to superluminal particle motion in the rotating dielectric medium - are still at the stage of experiment preparation.
[3952] vixra:1511.0232 [pdf]
Generalized Multiple Importance Sampling
Importance Sampling methods are broadly used to approximate posterior distributions or some of their moments. In its standard approach, samples are drawn from a single proposal distribution and weighted properly. However, since the performance depends on the mismatch between the targeted and the proposal distributions, several proposal densities are often employed for the generation of samples. Under this Multiple Importance Sampling (MIS) scenario, many works have addressed the selection or adaptation of the proposal distributions, interpreting the sampling and the weighting steps in different ways. In this paper, we establish a general framework for sampling and weighting procedures when more than one proposal is available. The most relevant MIS schemes in the literature are encompassed within the new framework, and, moreover novel valid schemes appear naturally. All the MIS schemes are compared and ranked in terms of the variance of the associated estimators. Finally, we provide illustrative examples which reveal that, even with a good choice of the proposal densities, a careful interpretation of the sampling and weighting procedures can make a significant difference in the performance of the method.
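One concrete scheme covered by such MIS frameworks is the deterministic-mixture ("balance heuristic") weighting, where each sample is weighted against the mixture of all proposals. The sketch below estimates the mean of an unnormalized standard normal target from two Gaussian proposals; all densities and parameter values are illustrative assumptions, not taken from the paper.

```python
import math, random

random.seed(0)

def target(x):          # unnormalized N(0,1) density
    return math.exp(-0.5 * x * x)

def normal_pdf(x, mu):  # proposal density N(mu, 1)
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

mus = [-1.0, 1.0]       # two proposal means (illustrative)
xs, ws = [], []
for mu in mus:
    for _ in range(5000):
        x = random.gauss(mu, 1.0)
        mix = sum(normal_pdf(x, m) for m in mus) / len(mus)  # mixture density
        ws.append(target(x) / mix)   # balance-heuristic importance weight
        xs.append(x)

# Self-normalized estimate of E[X] under the target (true value 0).
est = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
print(est)  # close to 0
```

Weighting against the mixture keeps the weights bounded even where a single proposal would be a poor match for the target, which is one reason variance comparisons between MIS schemes favor mixture-type weights.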
[3953] vixra:1511.0230 [pdf]
P, C and T: Different Properties on the Kinematical Level
We study the discrete symmetries (P, C and T) at the kinematical level within the extended Poincare Group. On the basis of Silagadze's research, we investigate the question of the definitions of the discrete symmetry operators both at the classical level and in the second-quantization scheme. We study the physical content within several bases: the light-front formulation, the helicity basis, the angular momentum basis, and so on, on several practical examples. We analyze problems in the construction of neutral particles in the (1/2,0)+(0,1/2), (1,0)+(0,1) and (1/2,1/2) representations of the Lorentz Group. As is well known, the photon has the quantum numbers 1-, so the (1,0)+(0,1) representation of the Lorentz group is relevant to its description. We have ambiguities in the definitions of the corresponding operators P, C, T, which lead to different physical consequences. It appears that the answers are connected with the properties of the helicity basis and with the commutation/anticommutation properties of the corresponding operators C, P, T, and C^2, P^2, (CP)^2.
[3954] vixra:1511.0225 [pdf]
Counting 2-way Monotonic Terrace Forms over Rectangular Landscapes
A terrace form assigns an integer altitude to each point of a finite two-dimensional square grid such that the maximum altitude difference between a point and its four neighbors is one. It is 2-way monotonic if the sign of this altitude difference is zero or one for steps to the East or steps to the South. We provide tables for the number of 2-way monotonic terrace forms as a function of grid size and maximum altitude difference, and point out the equivalence to the number of 3-colorings of the grid.
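The 3-coloring count the abstract points at can be reproduced by brute force for small grids; the sketch below counts proper 3-colorings of the grid graph (adjacent cells get different colors) and is an illustration of the equivalent counting problem, not the paper's tabulation method.

```python
from itertools import product

def grid_3_colorings(rows, cols):
    """Count proper 3-colorings of the rows x cols grid graph by
    exhaustive search (exponential; small sizes only)."""
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    count = 0
    for colors in product(range(3), repeat=len(cells)):
        col = dict(zip(cells, colors))
        ok_h = all(col[(r, c)] != col[(r, c + 1)]
                   for r in range(rows) for c in range(cols - 1))
        ok_v = all(col[(r, c)] != col[(r + 1, c)]
                   for r in range(rows - 1) for c in range(cols))
        if ok_h and ok_v:
            count += 1
    return count

print(grid_3_colorings(2, 2))  # 2x2 grid graph is a 4-cycle: 18 colorings
```

The 2x2 value agrees with the chromatic polynomial of the 4-cycle, $(k-1)^4 + (k-1)$ at $k = 3$.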
[3955] vixra:1511.0224 [pdf]
Structural Analysis of a Primary Geodetic Network: the Three-Dimensional Aspect
This study analyzes the foundations of a basic, or primary, geodetic network and compares the deformations that the network undergoes at points separated by long distances.
[3956] vixra:1511.0219 [pdf]
A Course of Mathematical Cartography and of Transformations Between Geodetic Systems
A course of mathematical cartography and of transformations between geodetic systems, given to the students of the Geomatics department of the ESAT school during the first semester of 2015.
[3957] vixra:1511.0216 [pdf]
Fundamental Quantal Paradox and Its Resolution
The postulate that the coordinate and momentum representations are related to each other by the Fourier transform has been accepted from the beginning of quantum theory. As a consequence, coordinate wave functions of photons emitted by stars have cosmic sizes. This results in a paradox, because predictions of the theory contradict observations. The reason for the paradox and its resolution are discussed.
[3958] vixra:1511.0212 [pdf]
Note on Differential Geometry: Application of the Moving Frame Method to the Reference Ellipsoid
This note on differential geometry concerns the formulas of Elie Cartan for differential forms on a surface. We compute these formulas for an ellipsoid of revolution as used in geodesy.
[3959] vixra:1511.0205 [pdf]
Tensor Fields in Relativistic Quantum Mechanics
We re-examine the theory of antisymmetric tensor fields and 4-vector potentials. We discuss the corresponding massless limits. We analyze the quantum field theory taking into account the mass dimensions of the notoph and the photon. Next, we deduce the gravitational field equations from relativistic quantum mechanics.
[3960] vixra:1511.0194 [pdf]
Description of Elementary Particles by Stable Wave Packets - A New Attempt
The attempt of Schrödinger to describe elementary particles by wave packets is repeated by present-day means, that is to say, by applying the results of quantum field theory and especially by the explicit consideration of interaction.
[3961] vixra:1511.0182 [pdf]
Conic and Cyclidic Sections in the G8,2 Geometric Algebra, DCGA
The G(8,2) Geometric Algebra, also called the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA), has entities that represent conic sections. DCGA also has entities that represent planar sections of Darboux cyclides, which are called cyclidic sections in this paper. This paper presents these entities and many operations on them. Operations include reflection, projection, rejection, and intersection with respect to spheres and planes. Other operations include rotation, translation, and dilation. Possible applications are introduced that include orthographic and perspective projections of conic sections onto view planes, which may be of interest in computer graphics or other computational geometry subjects.
[3962] vixra:1511.0174 [pdf]
Note on the Method of Regional Inverse Reference Frames in Geodesy
In this paper, a review of regional inverse reference frames is presented, with an application to geodesy for the determination of the parameters of the passage from one geodetic system to another. A numerical example is also given.
[3963] vixra:1511.0168 [pdf]
The Constant Cavity Pressure Casimir Inaptly Discarded
Casimir's celebrated result that the conducting plates of an unpowered rectangular cavity attract each other with a pressure inversely proportional to the fourth power of their separation entails an unphysical unbounded pressure as the plate separation goes to zero. An unphysical result isn't surprising in light of Casimir's unphysical assumption of perfectly conducting plates that zero out electric fields regardless of their frequency, which he sought to counteract via a physically foundationless discarding of the pressure between the cavity plates when they are sufficiently widely separated. Casimir himself, however, emphasized that real metal plates are transparent to sufficiently high electromagnetic frequencies, which makes removal of the frequency cutoff that he inserted unjustifiable at any stage of his calculation. Therefore his physically groundless discarding of the large-separation pressure isn't even needed, and when it is left out a constant attractive pressure between cavity plates exists when their separation is substantially larger than the cutoff wavelength. The intact cutoff furthermore implies zero pressure between cavity plates when their separation is zero, and also that Casimir's pressure is merely the subsidiary lowest-order correction term to the constant attractive pressure between cavity plates that is dominant when their separation substantially exceeds the cutoff wavelength.
[3964] vixra:1511.0149 [pdf]
Equations for Generalized N-Point Information with Extreme and not Extreme Approximations in the Free Fock Space
The general n-point information (n-pi) quantities are introduced and equations for them are considered. The role of the right and left invertible interaction operators occurring in these equations, together with their interpretation, is discussed. Some comments on approximations to the proposed equations are given. The importance of positivity conditions, and a possible interpretation of n-pi in the case of their non-compliance for essentially nonlinear interactions (ENI), are proposed. A language of creation, annihilation and projection operators, applicable in the classical as well as the quantum case, is used. The role of complex numbers and functions in physics is also briefly elucidated.
[3965] vixra:1511.0146 [pdf]
Why and How do Black Holes Radiate?
The phenomenological model proposed in this note indicates that the black hole radiation consists of two components: the standard thermal Hawking radiation and an additional non-thermal baryonic/leptonic component due to quantum number neutralization by the no-hair theorem. The particle radiation grows relatively stronger than the Hawking radiation with increasing black hole mass, and it can be tested in principle.
[3966] vixra:1511.0145 [pdf]
Which is the Best Belief Entropy?
In this paper, many numerical examples are designed to compare the existing belief entropy functions with the new entropy, named Deng entropy. The results illustrate that, among the existing belief entropy functions, Deng entropy is the best alternative due to its reasonable properties.
[3967] vixra:1511.0144 [pdf]
Measure Divergence Degree of Basic Probability Assignment Based on Deng Relative Entropy
Dempster-Shafer evidence theory (D-S theory) is more and more extensively applied to information fusion because of its advantage in dealing with uncertain information. However, results opposite to common sense are often obtained when combining different evidence using Dempster's combination rule. How to measure the divergence between different pieces of evidence is still an open issue. In this paper, a new relative entropy, named Deng relative entropy, is proposed in order to measure the divergence between different basic probability assignments (BPAs). The Deng relative entropy is a generalization of the Kullback-Leibler divergence because, when the BPA is degenerated to a probability, Deng relative entropy is equal to the Kullback-Leibler divergence. Numerical examples are used to illustrate the effectiveness of the proposed Deng relative entropy.
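The degenerate case the abstract mentions is easy to make concrete: when each BPA assigns mass only to singletons it is an ordinary probability distribution, and the divergence reduces to Kullback-Leibler. The sketch below covers only that base case; the full Deng relative entropy over composite focal elements is defined in the paper itself.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    probability distributions given as aligned lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two BPAs degenerated to probabilities over three singleton hypotheses.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))  # nonnegative; zero iff p == q
```

Nonnegativity and the zero-iff-equal property are the minimal requirements any BPA divergence generalizing KL should preserve.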
[3968] vixra:1511.0136 [pdf]
A Merge of the Rideout-Sorkin Growth Process with Quantum Field Theory on Causal Sets.
I raise some issues that arise when one combines the dynamical causal sequential growth dynamics with the static approach towards quantum field theory. A proper understanding of these points is mandatory before one attempts to unite both approaches. The conclusions we draw, however, appear to transcend causal set theory and apply to any theory of spacetime and matter which involves topology change.
[3969] vixra:1511.0131 [pdf]
Elements of Geodesy and the Theory of Least Squares
This is a preliminary version of a book on "Geodesy and the Theory of Least Squares". The book contains two parts. The first part is about geometric and spatial geodesy. The second part concerns the theory of errors and least squares; we give an idea of the theory when non-linear models are used.
[3970] vixra:1511.0112 [pdf]
Coordinate/Field Duality in Gauge Theories: Emergence of Matrix Coordinates
The proposed coordinate/field duality [Phys. Rev. Lett. 78 (1997) 163] is applied to the gauge and matter sectors of gauge theories. In the non-Abelian case, due to indices originated from the internal space, the dual coordinates appear to be matrices. The dimensions and the transformations of the matrix coordinates of gauge and matter sectors are different and are consistent to expectations from lattice gauge theory and the theory of open strings equipped with the Chan-Paton factors. It is argued that in the unbroken symmetry phase, where only proper collections of field components as colorless states are detected, it is logical to assume that the same happens for the dual coordinates, making matrix coordinates the natural candidates to capture the internal dynamics of baryonic confined states. The proposed matrix coordinates happen to be the same appearing in the bound-state of D0-branes of string theory.
[3971] vixra:1511.0105 [pdf]
Wave Function Collapse in Linguistic Interpretation of Quantum Mechanics
Recently I proposed the linguistic interpretation of quantum mechanics, which is characterized as the linguistic turn of the Copenhagen interpretation of quantum mechanics. This turn from physics to language not only extends quantum theory to classical theory but also yields the quantum mechanical world view. Although wave function collapse is prohibited in the linguistic interpretation, in this paper I show that a phenomenon like wave function collapse can be realized. Hence, I propose a justification of the projection postulate in the linguistic interpretation.
[3972] vixra:1511.0102 [pdf]
On Generalized Harmonic Numbers, Tornheim Double Series and Linear Euler Sums
Direct links between generalized harmonic numbers, linear Euler sums and Tornheim double series are established in a more perspicuous manner than is found in the existing literature. We show that every linear Euler sum can be decomposed into a linear combination of Tornheim double series of the same weight. New closed-form evaluations of various Euler sums are presented. Finally, certain combinations of linear Euler sums that are reducible to Riemann zeta values are discovered.
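One of the classical linear Euler sums reducible to zeta values is Euler's own identity sum_{n>=1} H_n / n^2 = 2*zeta(3), where H_n is the n-th generalized harmonic number. It can be checked numerically with a brute-force partial sum (this is only a sanity check, not the paper's decomposition method; the cutoff N is an arbitrary choice):

```python
# Numerical check of Euler's classical linear Euler sum:
#   sum_{n>=1} H_n / n^2 = 2 * zeta(3),  with H_n = 1 + 1/2 + ... + 1/n.
ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)

def euler_sum(N):
    """Partial sum of H_n / n^2 up to n = N; the tail is O(log(N)/N)."""
    total, harmonic = 0.0, 0.0
    for n in range(1, N + 1):
        harmonic += 1.0 / n          # running harmonic number H_n
        total += harmonic / (n * n)
    return total

approx = euler_sum(200_000)
```

With N = 200000 the truncation error is below 1e-4, enough to see the agreement with 2*zeta(3).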
[3973] vixra:1511.0090 [pdf]
Calculation of the Atomic Nucleus Mass
In this paper, the mass derived from the g_equation is assumed to be the mass of the quark-lepton, and is used to calculate the masses of atomic nuclei.
[3974] vixra:1511.0083 [pdf]
Unitary Mixing Matrices and Their Parameterizations
We present a new decomposition of unitary matrices particularly useful for mixing matrices. The decomposition separates the complex phase information from the mixing angle information of the matrices and leads to a new type of parameterization. We show that the mixing angle part of U(n) is equivalent to U(n-1). We give closed-form parameterizations for 3x3 unitary mixing matrices (such as the CKM and MNS matrices) that treat the mixing angles equally. We show the relationship between the Berry-Pancharatnam (quantum) phase and the Jarlskog invariant J_CP that gives the CP violation in the standard model. We established the likely existence of the new decomposition by computer simulation in 2008. Philip Gibbs proved the n=3 case in 2009, and in 2011 Samuel Lisi proved the general case using Floer theory in symplectic geometry. We give an accessible version of Lisi's proof.
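The connection between mixing angles, phases and the Jarlskog invariant can be checked numerically. The sketch below builds a 3x3 unitary matrix in the standard (PDG-style) parameterization — not the paper's new decomposition — and compares J = Im(V_12 V_23 V_13* V_22*) against the closed form J = c12 c23 c13^2 s12 s23 s13 sin(delta). The angle values are arbitrary illustrative inputs, not fitted CKM values.

```python
import cmath
import math

def mixing_matrix(t12, t23, t13, delta):
    """3x3 unitary mixing matrix in the standard (PDG) parameterization."""
    c12, s12 = math.cos(t12), math.sin(t12)
    c23, s23 = math.cos(t23), math.sin(t23)
    c13, s13 = math.cos(t13), math.sin(t13)
    e = cmath.exp(1j * delta)
    return [
        [c12 * c13,                    s12 * c13,                    s13 * e.conjugate()],
        [-s12 * c23 - c12 * s23 * s13 * e,  c12 * c23 - s12 * s23 * s13 * e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e,  -c12 * s23 - s12 * c23 * s13 * e,  c23 * c13],
    ]

def jarlskog(V):
    """Jarlskog invariant J = Im(V_12 V_23 V_13* V_22*) (1-based indices)."""
    return (V[0][1] * V[1][2] * V[0][2].conjugate() * V[1][1].conjugate()).imag

t12, t23, t13, delta = 0.23, 0.70, 0.15, 1.2   # illustrative angles, radians
V = mixing_matrix(t12, t23, t13, delta)

# Closed form quoted in the abstract's notation: J = c12 c23 c13^2 s12 s23 s13 sin(delta)
J_closed = (math.cos(t12) * math.cos(t23) * math.cos(t13) ** 2 *
            math.sin(t12) * math.sin(t23) * math.sin(t13) * math.sin(delta))
```

The matrix is unitary by construction (a product of three rotations), so row orthonormality can serve as a self-check alongside the invariant.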
[3975] vixra:1511.0077 [pdf]
Abscissa and Weights for Gaussian Quadratures of Modified Bessel Functions Integrated from Zero to Infinity
We tabulate the abscissae and associated weights for numerical integration of integrals with kernels containing a power of x and modified Bessel functions K_nu(x). The first family of integrals contains the factor x^m K_nu(x) with integer index nu = 0 or 1 and integer powers nu <= m <= 3. The second family contains the factor x^m K_nu(x) K_nu'(x) with integer indices 0 <= nu, nu' <= 1 and integer powers nu + nu' <= m <= 3.
[3976] vixra:1511.0076 [pdf]
Note on a Clifford Algebra Based Grand Unification Program of Gravity and the Standard Model
Further evidence is provided for why a $Cl(5,C)$ gauge field theory in four dimensions furnishes the simplest Grand Unification model of Gravity and the Standard Model. In essence, we have four copies of $Cl(4,R)$, one copy per axis-direction in our observed $D = 4$-dim spacetime.
[3977] vixra:1511.0062 [pdf]
Translation of the Paper "Equilibrium Figures in Geodesy and Geophysics" of Helmut Moritz (1988)
This paper is a translation of Professor Helmut Moritz's (1988) lecture entitled "Equilibrium Figures in Geodesy and Geophysics". It studies the problem of terrestrial hydrostatic equilibrium figures. The author reviews the publications on this subject.
[3978] vixra:1511.0061 [pdf]
Calcul Des Lignes Géodésiques de L'ellipsoide de Révolution
After defining the geodesic lines of a surface, we establish the equations of the geodesics for a given surface. As an application, we detail those of the ellipsoid of revolution, and we carry out the integration of these equations.
[3979] vixra:1511.0057 [pdf]
Solution to Poisson Boltzmann Equation in Semi-Infinite and Cylindrical Geometries
The linearized Poisson-Boltzmann equation (PBE) gives simple expressions for the charge density distribution (ρ<sub>e</sub>) within fluids or plasma. A recent work of this author shows that the old boundary conditions (BCs), which are usually used to solve the PBE, have serious defects: the old solutions turned out to be non-unique and, in some cases, to violate the charge conservation principle. There we also derived the correct formula for ρ<sub>e</sub> in a finite rectangular geometry using appropriate BCs. Here we consider some other types of geometries and obtain formulas for ρ<sub>e</sub>, which may be useful for analysing different experimental conditions.
[3980] vixra:1511.0052 [pdf]
Packaged Entanglement States and Particle Teleportation
Entanglement states are important for both basic research and applied research. However, these entanglement states usually relate to one or several of the particles' physical quantities. Here we theoretically show that a particle-antiparticle pair can form packaged entanglement states which encapsulate all the physical quantities necessary for completely identifying the particles. The particles in the packaged entanglement states are hermaphroditic and indeterminate. We then give a possible experimental scheme for testing the packaged entanglement states. Finally, we propose a protocol for teleporting a particle over an arbitrarily large distance using the packaged entanglement states. These packaged entanglement states could be important for particle physics and useful in matter teleportation, medicine, remote control, and energy transfer.
[3981] vixra:1511.0024 [pdf]
The Initial Evidence for M-Theory: Fractal Nearly Tri-Bimaximal Neutrino Mixing and CP Violation
We propose an instructive possibility to generalize the tri-bimaximal neutrino mixing ansatz, such that leptonic CP violation and the fractal feature of the universe can naturally be incorporated into the resultant scenario of fractal nearly tri-bimaximal flavor mixing. The consequences of this new ansatz for the latest experimental data on neutrino oscillations are analyzed. This theory is perfectly matched with the current experimental data, and, surprisingly, we find that the existing neutrino oscillation experimental data is the initial experimental evidence supporting one kind of high-dimensional unified theories, such as M-theory. Besides, an interesting approach to constructing lepton mass matrices in a fractal universe under permutation symmetry is also discussed. This theory opens an unexpected window on the physics beyond the Standard Model.
[3982] vixra:1511.0015 [pdf]
A Class of Multinomial Permutations Avoiding Object Clusters
The multinomial coefficients count the number of ways (of permutations) of placing a number of partially distinguishable objects on a line, taking ordering into account. A well-known two-parametric family of counts arises if there are objects of c distinguishable colors and m objects of each color, m*c objects in total, to be placed on the line. In this work we propose an algorithm to count the permutations where no two objects of the same color appear side-by-side on the line. This eliminates all permutations with "clusters" of colors. Essentially we represent filling the line sequentially with objects as a tree of states where each node matches one partially filled line. Subtrees are merged if they have the same branching structure, and weights are assigned to nodes in the tree keeping track of how many mergers take place. This is implemented in a Java program; numerical results confirm Hardin's earlier counts for this kind of restricted permutation.
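The restricted counts can be cross-checked for small parameters by brute force over the distinct arrangements — feasible only for tiny m and c, which is exactly why the paper's tree-merging algorithm exists. A hedged sketch (not the paper's Java implementation), encoding the line as a tuple of color indices:

```python
from itertools import permutations

def count_cluster_free(c, m):
    """Count the arrangements of m objects of each of c colors in which no two
    objects of the same color are adjacent, by brute force over the distinct
    permutations of the multiset."""
    line = tuple(color for color in range(c) for _ in range(m))
    valid = 0
    for arr in set(permutations(line)):
        if all(a != b for a, b in zip(arr, arr[1:])):
            valid += 1
    return valid
```

For c = 3 colors and m = 2 objects each, inclusion-exclusion over the "pair adjacent" events gives 90 - 3*30 + 3*12 - 6 = 30 cluster-free arrangements, which the brute force reproduces.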
[3983] vixra:1511.0009 [pdf]
A Study on the Coffee Spilling Phenomena in the Low Impulse Regime
When a glass of wine is oscillated horizontally at 4 Hz, the liquid surface oscillates calmly. But when the same amount of liquid is contained in a cylindrical mug and oscillated under the same conditions, the liquid starts to oscillate aggressively against the container walls, resulting in significant spillage. This is a manifestation of the same principles that also cause coffee spillage when we walk. In this study, we experimentally investigate the cup motion and liquid oscillation during locomotion. The frequency spectrum of each motion reveals that the second harmonic mode of the hand motion corresponds to the resonance frequency of the first antisymmetric mode of coffee oscillation, resulting in maximum spillage. By applying these experimental findings, a number of methods to suppress resonance are presented. Then, we construct two mechanical models to rationalize our experimental findings and gain further insight; both models successfully predict actual hand behaviors.
[3984] vixra:1510.0504 [pdf]
New Sound in the Superfluid Helium
The friction sound is considered to be involved in the von Kármán vortex street. The non-relativistic and relativistic Strouhal numbers are derived from the von Kármán vortex street; the relativistic result follows from the relativistic addition formula for velocities. The friction tones generated by the von Kármán vortex street form the fifth sound in liquid helium II. This sound has not yet been experimentally observed in superfluid helium II, and so it is here predicted as a crucial step for the low-temperature helium II physics of low-temperature laboratories. Electron transport in graphene is supposed to be described by the hydrodynamic form of a viscous liquid, allowing the existence of the vortex street. It is not excluded that the discovery of the vortex street in graphene could be one of the crucial discoveries in graphene physics. By analogy with helium II, we propose that the photon is a quantum vortex, or Onsager vortexon.
[3985] vixra:1510.0503 [pdf]
The First Evidence for M Theory: Fractal Nearly Tri-Bimaximal Neutrino Mixing and CP Violation
We propose an instructive possibility to generalize the tri-bimaximal neutrino mixing ansatz, such that leptonic CP violation and the fractal feature of the universe can naturally be incorporated into the resultant scenario of fractal nearly tri-bimaximal flavor mixing. The consequences of this new ansatz for the latest experimental data on neutrino oscillations are analyzed. Our theory is perfectly matched with the current experimental data, and we are surprised and excited to find that the existing neutrino oscillation experimental data is the first experimental evidence supporting one kind of higher-dimensional unified theory, such as M theory. An interesting approach to constructing lepton mass matrices in a fractal universe under permutation symmetry is also discussed. Our theory opens an unexpected window on the physics beyond the Standard Model.
[3986] vixra:1510.0501 [pdf]
Exact Solutions of General States of Harmonic Oscillator in 1 and 2 Dimensions: Student's Supplement
The purpose of this paper is two-fold. First, we would like to write down an algebraic expression for the wave function of a general excited state of the harmonic oscillator which doesn't include derivative signs (this is to be contrasted with the typical physics textbook, which only gets rid of derivative signs for the first few excited states, while leaving derivatives in the Hermite polynomial for general n). Secondly, we would like to write a similar expression for the two-dimensional case as well. In the process of tackling two dimensions, we will highlight the interplay between Cartesian and polar coordinates in 2D in the context of an oscillator. All of the above-mentioned results have probably been derived by others, but unfortunately they are not easily available. The purpose of this paper is to make it easier for both students and the general public to look up said results and their derivations, should the need arise. We also attempt to illustrate different angles from which one could look at the problem and in this way encourage students to think more deeply about the material.
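Derivative-free evaluation of the general excited state can also be done numerically with the three-term Hermite recurrence H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x), giving psi_n(x) = (2^n n! sqrt(pi))^(-1/2) H_n(x) exp(-x^2/2) for any n without symbolic differentiation. This is the standard construction, not necessarily the paper's algebraic formula; units are hbar = m = omega = 1.

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term recurrence."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def psi(n, x):
    """Normalized 1D harmonic-oscillator eigenfunction psi_n(x)."""
    norm = 1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermite(n, x) * math.exp(-x * x / 2.0)

def norm_squared(n, L=10.0, steps=2000):
    """Integral of |psi_n|^2 over [-L, L] by the trapezoidal rule."""
    h = 2 * L / steps
    total = 0.0
    for i in range(steps + 1):
        x = -L + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * psi(n, x) ** 2
    return total * h
```

Because the integrand decays rapidly and is smooth, the trapezoidal rule over [-10, 10] reproduces the unit normalization to high accuracy for moderate n.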
[3987] vixra:1510.0473 [pdf]
A Still Simpler Way of Introducing Interior-Point Method for Linear Programming
Linear programming is now included in undergraduate and postgraduate algorithms courses for computer science majors. We give a self-contained treatment of an interior-point method that is particularly tailored to the typical mathematical background of CS students. In particular, only limited knowledge of linear algebra and calculus is assumed.
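A minimal illustration of the central-path idea behind interior-point methods (a generic log-barrier sketch, not necessarily the paper's specific method): maximize x subject to 0 <= x <= 1 via the barrier objective f_t(x) = -t*x - log(x) - log(1 - x), minimized by damped 1D Newton steps for an increasing sequence of t. As t grows, the minimizer follows the central path toward the optimum x* = 1.

```python
import math

def barrier_min(t, x0=0.5, iters=50):
    """Minimize f_t(x) = -t*x - log(x) - log(1-x) on (0, 1) by damped Newton."""
    x = x0
    for _ in range(iters):
        g = -t - 1.0 / x + 1.0 / (1.0 - x)           # f_t'(x)
        h = 1.0 / x ** 2 + 1.0 / (1.0 - x) ** 2      # f_t''(x) > 0 (convex barrier)
        step = g / h
        # Damp the step so the iterate stays strictly inside the interval.
        while not (0.0 < x - step < 1.0):
            step *= 0.5
        x -= step
    return x

def central_path(ts=(1.0, 10.0, 100.0, 1000.0)):
    """Follow the central path: warm-start each t from the previous minimizer."""
    x = 0.5
    path = []
    for t in ts:
        x = barrier_min(t, x0=x)
        path.append(x)
    return path
```

For this toy problem the barrier minimizer has the closed form x(t) = ((t - 2) + sqrt(t^2 + 4)) / (2t), so the Newton iterates can be verified exactly; warm-starting from the previous t is the same path-following trick real interior-point solvers use.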
[3988] vixra:1510.0446 [pdf]
The First Evidence for M-Theory: Fractal Nearly Tri-Bimaximal Neutrino Mixing and CP Violation
We propose an instructive possibility to generalize the tri-bimaximal neutrino mixing ansatz, such that leptonic CP violation and the fractal feature of the universe can naturally be incorporated into the resultant scenario of fractal nearly tri-bimaximal flavor mixing. The consequences of this new ansatz for the latest experimental data on neutrino oscillations are analyzed. This theory is perfectly matched with the current experimental data, and, surprisingly, we find that the existing neutrino oscillation experimental data is the first experimental evidence supporting one kind of high-dimensional unified theories, such as M-theory. Besides, an interesting approach to constructing lepton mass matrices in a fractal universe under permutation symmetry is also discussed. This theory opens an unexpected window on the physics beyond the Standard Model.
[3989] vixra:1510.0444 [pdf]
On the Sequestered Gabor Fekete Model, an Uplifted Nilpotent 12d Superconformal Sequence Compactification
As indicated in the seminal work of reformer physicist G. Fekete, all natural constants can be derived to at least eight-digit accuracy. In this work we show that his model can in fact be understood via membrane lattices in string theory, more specifically via fibred non-trivial conformal algebraic varieties sequenced from super-Hirzebruch curves that approximate Maxwell-Enriques surfaces. We propose that the structure of the Fekete matrix is parametrised by all possible toric condensates. By analytic continuation we extend our result to N=pi supersymmetry on a sphere. Thus the only physically sensible way to understand the Fekete model is through radiative fluxes in superstring theory.
[3990] vixra:1510.0443 [pdf]
GR as a Nonsingular Classical Field Theory
Thirring and Feynman showed that the Einstein equation is simply a partial differential classical field equation, akin to Maxwell's equation, but it and its solutions are required to conform to the GR principles of general covariance and equivalence. It is noted, with examples, that solutions of such equations can contravene required physical principles when they exhibit unphysical boundary conditions. Using the crucially important tensor contraction theorem together with the equivalence principle, it is shown that metric tensors are physical only where all their components, and also those of their inverse matrix, are finite real numbers, and their signature is that of the Minkowski metric. Thus the "horizons" of the empty-space Schwarzschild solution metrics are unphysical, which is traced to the boundary condition that arises from the minimum energetically-allowed radius of a positive effective mass. It is also noted that "comoving" ostensible "coordinate systems" disrupt physical boundary conditions in time via their artificial "composite time" which can't be registered by the clock of any observer because it is "defined" by the clocks of an infinite number of observers. Spurious singularities ensue in such unphysical "coordinates", which fall away on transformation of metric solutions to non-"comoving coordinates".
[3991] vixra:1510.0413 [pdf]
Talking Sense into Nonsense
``Then came the disciples to Jesus apart, and said, Why could not we cast him out? And Jesus said unto them, Because of your unbelief" (Matthew 17 KJV). The problem with proofs today is that they leave open the possibility of the No-God idol. A preacher says on YouTube that there is certainly a moral law. A good deed, this. But the preacher spoils it all by adding: the best explanation is God. No, brother, there is no room for ``No God" paganism! Our freedom is not the absence of sure proofs, but
[3992] vixra:1510.0401 [pdf]
Expanding Mond with Baryon Intrinsic Dark Matter, Helmholtz Work, an Entropic Force and a New Dimension Parameter
In this paper I present a baryon-intrinsic Dark Matter halo model. The model gives a correct first-order galactic rotation curve and leads to the baryonic Tully-Fisher relation and to the MOND force in the weak acceleration regime. I then show that the MOND force can be derived from the combination of my model's potential and the first law of thermodynamics in the Helmholtz energy formulation A = U - TS. In my model the MOND work is identical to the Helmholtz work. The entropy connected to the intrinsic Dark Matter halo allows the derivation of the Dark Matter force, the deviation from Newton, as an entropic force. The definition of the entropy leads to a new parameter, a dimensional degree of freedom, added to MOND. This new parameter solves the galaxy cluster mass discrepancy problem of MOND and produces an exact relationship between the MOND acceleration and the Hubble acceleration, with cosmological implications. In my model the cosmic structure formation degree-of-freedom value $N = \sqrt{cH_0/a_0} = 2.1$ is also the minimum mass discrepancy in the MOND cluster analysis. The realization that MOND is a theory based on Helmholtz work shifts the question regarding its relativistic formulation towards the larger problem of a relativistic formulation of thermodynamics, a widely discussed and acknowledged problem in physics. It touches upon the arrow-of-time issue.
[3993] vixra:1510.0390 [pdf]
On Maximal Acceleration, Born's Reciprocal Relativity and Strings in Tangent Bundle Backgrounds
Accelerated strings in tangent bundle backgrounds are studied in further detail than has been done in the past. The worldsheet associated with the accelerated open string described in this work envisages a continuum family of worldlines of accelerated points. It is when one embeds the two-dim string worldsheet into the tangent bundle $TM$ background (associated with a uniformly accelerated observer in spacetime) that the effects of the maximal acceleration are manifested. The induced worldsheet metric resulting from this embedding has a null horizon. It is the presence of this null horizon that limits the acceleration values of the points inside the string. If the string crosses the null horizon, some of its points will exceed the maximal acceleration value and that portion of the string will become causally disconnected from the rest of the string outside the horizon. It is explained why our results differ from those in the literature pertaining to maximal-acceleration modifications of the Rindler metric. We also find a modified Rindler metric which has a true curvature singularity at the location of the null horizon due to a finite maximal acceleration. One of the salient features of studying the geometry of the tangent bundle is that the underlying spacetime geometry becomes observer-dependent, as Gibbons and Hawking envisioned long ago. We conclude with some remarks about generalized QFT in accelerated frames and the black hole information paradox.
[3994] vixra:1510.0377 [pdf]
The New Parameter for Mond and the Mond Cosmic Structure Formation Entropic Degree of Freedom
In a previous paper I showed how a new parameter added to MOND, the entropic degree of freedom N, exactly solved the MOND galaxy cluster mass discrepancy problem. In this paper I show that the same entropic degree of freedom produces an exact interpretation of Milgrom's approximate relation 5 a_0 ≈ c H_0. The new relation gives N^2 a_0 = c H_0. With present-day values, N = 2.13, the cosmic degree of freedom of the entropic force in relation to cosmic structure formation.
[3995] vixra:1510.0337 [pdf]
The Dark Matter Entropic Force and Newton's Energetic Force as a Complete First Law of Thermodynamics Set of Gravitational Forces
In this paper I derive an emergent Dark Matter force using the virial theorem in the context of the Dark Matter halo model. This emergent force is then used to inductively derive a Dark Matter entropy S and a Dark Matter number of microstates W. I then show that this emergent force can be interpreted as an entropic force. Using the first law of thermodynamics, a set of two forces can be derived from my model's potential function, with the Newtonian force of gravity, derived from the energy, as the first, and the emergent Dark Matter force, derived from the entropy, as the second.
[3996] vixra:1510.0329 [pdf]
Is the Energy of Universe Above the Zero?!
In Møller's formula, dust-cloud collapse yields E = 0 as the energy. Therefore, if such a cloud flies toward a wall, no thermal energy would be released after the impact. But heat must be released (consider a cloud of machine-gun bullets).
[3997] vixra:1510.0325 [pdf]
Multi-label Methods for Prediction with Sequential Data
The number of methods available for classification of multi-label data has increased rapidly over recent years, yet relatively few links have been made with the related task of classification of sequential data. If label indices are considered as time indices, the problems can often be seen as equivalent. In this paper we detect and elaborate on connections between multi-label methods and Markovian models, and study the suitability of multi-label methods for prediction in sequential data. From this study we draw upon the most suitable techniques from the area and develop two novel competitive approaches which can be applied to either kind of data. We carry out an empirical evaluation investigating performance on real-world sequential-prediction tasks: electricity demand, and route prediction. As well as showing that several popular multi-label algorithms are in fact easily applicable to sequencing tasks, our novel approaches, which benefit from a unified view of these areas, prove very competitive against established methods.
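The equivalence the abstract leans on — label indices read as time indices — can be made concrete with a first-order Markov predictor: the "label" at step t is predicted from the label at step t-1, much as a classifier-chain style multi-label method conditions each label on the previous one. A toy sketch (class and symbol names are illustrative, not from the paper):

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    """First-order Markov model: predict the next symbol from the current one."""

    def __init__(self):
        self.trans = defaultdict(Counter)  # trans[prev][next] = observed count

    def fit(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.trans[prev][nxt] += 1
        return self

    def predict(self, symbol):
        """Most frequent successor of `symbol` seen during training, or None."""
        if not self.trans[symbol]:
            return None
        return self.trans[symbol].most_common(1)[0][0]

model = MarkovPredictor().fit("abcabcabcabc")
```

Trained on the periodic sequence above, the model recovers the deterministic successor of each symbol.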
[3998] vixra:1510.0323 [pdf]
Fly Me to the Moon - For All Mankind
The NASA Apollo programme landed men on the Moon and returned them safely to Earth. In support of these achievements NASA presented, among others, two pieces of evidence which are the subject of this report, namely the photographs of the Apollo 11 landing site, and the video recording of the Apollo 17 lift-off. Starting from post-landing NASA documents, an Apollo 11 landing sequence is proposed in which the Lunar Module cruises at the height of the Lunar Surface Sensing Probes (LSSP, some 1.7 m above the ground) for as much as ten seconds before touchdown, and it is the -Y/Left and +Y/Right landing gears that touch the surface first. This is then compared to a pre-landing NASA experimental investigation of landing gear performance in simulated lunar conditions, according to which the deformation energy DE ≳ KE, the impact kinetic energy, while the potential energy from settling is the smallest, PE ≪ KE; and the one or two gears touching the surface first absorb most of KE. Contrary to expectations, NASA reported that the -Z/Aft landing gear absorbed as much energy as all the other gears combined, and that DE ≈ KE/2. It is shown that this outcome is consistent with the dry Lunar Module being lowered to an uneven surface at near-zero vertical velocity and then released to settle down in Earth-like gravity. Next, we examine the behavior of the LSSPs in the 360° yaw that the Apollo 11 Lunar Module performed during the Inspection and Separation Stage in the lunar circular orbit. Contrary to NASA's own reference drawings of the fully deployed LSSPs, we find that during the maneuver the LSSPs are always flexed mildly inwards, as if the Lunar Module were suspended in the presence of gravity rather than weightless in lunar orbit. Lastly, a detailed analysis of the Apollo 17 lift-off video recording is presented.
It is shown that the vessel trajectory implies additional propulsion in the form of an explosion, while the video frames flicker at 5 Hz and 10 Hz rates and carry an artefact strongly resembling an edge of film stock. An analysis of the illumination of the ascending Lunar Module is also presented, which suggests that the vessel is approaching a nearby light source rather than being lit by the Sun (at infinity). A discussion of the entire scene follows, and an explanation for the explosion is proposed. Overall, it is concluded that the photographs and the video recording depict scenes that were staged here on Earth, rather than on the way to the Moon.
[3999] vixra:1510.0319 [pdf]
A Note on the Mass Origin in the Relativistic Theories of Gravity
We investigate the most general Lagrangian in Minkowski space for the symmetric tensor field of the second rank. Then, we apply the Higgs mechanism to provide mass to the appropriate components.
[4000] vixra:1510.0316 [pdf]
A Note on the Q-Analogue of Sándor's Functions
The additive analogues of the Pseudo-Smarandache and Smarandache-simple functions and their duals have recently been studied by J. Sándor. In this note, we obtain q-analogues of Sándor's theorems.
[4001] vixra:1510.0315 [pdf]
A Pair Of Smarandachely Isotopic Quasigroups And Loops Of The Same Variety
The isotopic invariance or universality of types and varieties of quasigroups and loops described by one or more equivalent identities has been of interest to researchers in loop theory in the recent past.
[4002] vixra:1510.0312 [pdf]
Comparative Review of Some Properties of Fuzzy and Anti Fuzzy Subgroups
This paper comparatively reviews some works in fuzzy and anti-fuzzy group theory. The aim is to provide anti-fuzzy versions of some existing theorems in fuzzy group theory and to see how similar they are to their fuzzy versions. The research therefore focuses on the properties of fuzzy subgroups, fuzzy cosets, fuzzy conjugacy and fuzzy normal subgroups of a group, which are mimicked in anti-fuzzy group theory.
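The fuzzy/anti-fuzzy mirroring described here can be made concrete with the standard duality: a membership function mu is a fuzzy subgroup of G (mu(xy) >= min(mu(x), mu(y)) and mu(x^-1) >= mu(x)) exactly when its complement 1 - mu is an anti-fuzzy subgroup (the inequalities flip to <= with max). A small sketch on the cyclic group Z6 with an illustrative membership function whose level sets form the subgroup chain {0} ⊂ {0,2,4} ⊂ Z6:

```python
# Duality check on G = Z6 (addition mod 6): mu is a fuzzy subgroup iff
# 1 - mu is an anti-fuzzy subgroup. The membership values are illustrative.
G = range(6)
mu = {0: 1.0, 1: 0.3, 2: 0.7, 3: 0.3, 4: 0.7, 5: 0.3}

def is_fuzzy_subgroup(m):
    """mu(x+y) >= min(mu(x), mu(y)) and mu(-x) >= mu(x) for all x, y in Z6."""
    return (all(m[(x + y) % 6] >= min(m[x], m[y]) for x in G for y in G)
            and all(m[(-x) % 6] >= m[x] for x in G))

def is_anti_fuzzy_subgroup(m):
    """mu(x+y) <= max(mu(x), mu(y)) and mu(-x) <= mu(x) for all x, y in Z6."""
    return (all(m[(x + y) % 6] <= max(m[x], m[y]) for x in G for y in G)
            and all(m[(-x) % 6] <= m[x] for x in G))

mu_c = {x: 1.0 - mu[x] for x in G}  # complement membership function
```

Checking mu, its complement mu_c, and mu against the anti-fuzzy condition exhibits exactly the mirrored behavior the review discusses.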
[4003] vixra:1510.0309 [pdf]
Contributii in Dezvoltarea Sistemelor de Control Neuronal al Miscarii Robotilor Mobili Autonomi
Robotics currently represents one of humanity's greatest achievements and its greatest attempt to produce an artificial being capable of feeling and transmitting emotions; in recent years, robot manufacturers have produced extremely complex series models available to the general public.
[4004] vixra:1510.0302 [pdf]
Fuzzy Crossed Product Algebras
We introduce fuzzy groupoid graded rings and, as a particular case, fuzzy crossed product algebras. We show that there is a bijection between the set of fuzzy graded isomorphism equivalence classes of fuzzy crossed product algebras and the associated second cohomology group.
[4005] vixra:1510.0278 [pdf]
Smarandache Curves of Some Special Curves in the Galilean 3-Space
In the present paper, we consider a position vector of an arbitrary curve in the three-dimensional Galilean space G3. Furthermore, we give some conditions on the curvatures of this arbitrary curve to study special curves and their Smarandache curves. Finally, in the light of this study, some related examples of these curves are provided and plotted.
[4006] vixra:1510.0275 [pdf]
Smarandache Isotopy Theory Of Smarandache: Quasigroups And Loops
The concept of Smarandache isotopy is introduced and its study is explored for Smarandache: groupoids, quasigroups and loops just like the study of isotopy theory was carried out for groupoids, quasigroups and loops. The exploration includes: Smarandache; isotopy and isomorphy classes, Smarandache f, g principal isotopes and G-Smarandache loops.
[4007] vixra:1510.0272 [pdf]
Smarandache Multi-Space Theory (III)
A Smarandache multi-space is a union of n different spaces equipped with some different structures for an integer n ≥ 2, which can be used both for discrete and for connected spaces, particularly for geometries and spacetimes in theoretical physics.
[4008] vixra:1510.0271 [pdf]
Smarandache Multi-Space Theory (IV)
A Smarandache multi-space is a union of n different spaces equipped with some different structures for an integer n ≥ 2, which can be used both for discrete and for connected spaces, particularly for geometries and spacetimes in theoretical physics.
[4009] vixra:1510.0266 [pdf]
Special Smarandache Curves According To Darboux Frame In E3
In this study, we determine some special Smarandache curves according to Darboux frame in E3 . We give some characterizations and consequences of Smarandache curves.
[4010] vixra:1510.0260 [pdf]
A Two-Step Fusion Process for Multi-Criteria Decision Applied to Natural Hazards in Mountains
Mountain river torrents and snow avalanches generate human and material damage with dramatic consequences. Knowledge about natural phenomena is often lacking, and expertise is required for decision and risk management purposes using multi-disciplinary quantitative or qualitative approaches. Expertise is considered as a decision process based on imperfect information coming from more or less reliable and conflicting sources.
[4011] vixra:1510.0247 [pdf]
Human Experts Fusion for Image Classification
In image classification, merging the opinions of several human experts is very important for different tasks such as evaluation or training. Indeed, the ground truth is rarely known before the scene is imaged.
[4012] vixra:1510.0244 [pdf]
Inductive Classification Through Evidence-Based Models and Their Ensembles
In the context of the Semantic Web, one of the most important issues related to the class-membership prediction task (through inductive models) on ontological knowledge bases concerns the imbalance of the training example distribution, mostly due to the heterogeneous nature and the incompleteness of the knowledge bases.
[4013] vixra:1510.0241 [pdf]
New Ahp Methods for Handling Uncertainty Within the Belief Function Theory
As society becomes more complex, people are faced with many situations in which they have to make a decision among different alternatives. However, the most preferable one is not always easily selected.
[4014] vixra:1510.0239 [pdf]
Order in DSmT; Definition of Continuous DSm Models
When implementing DSmT, a difficulty may arise from the possibly huge dimension of hyperpower sets, which are indeed free structures. However, it is possible to reduce the dimension of these structures by introducing logical constraints.
[4015] vixra:1510.0238 [pdf]
Probabilistische Fahrzeugumfeldschätzung Für Fahrerassistenzsysteme
Many current driver assistance systems, such as adaptive cruise control, lane change assistants and systems for shortening stopping distances, depend on reliable detection of other road users and obstacles. Future assistance systems, such as systems for automated driving, raise this reliability requirement even further.
[4016] vixra:1510.0211 [pdf]
A Study on Symptoms of Stress on College Students Using Combined Disjoint Block Fuzzy Cognitive Maps (CDBFCM)
Going through college is stressful for everybody. Arising from many causes, the stress is present whether one is in the first year of college or the last. However, most seniors have an easier time dealing with stress because they have experience handling it. Most of the reasons for so much stress fall into one of three categories: academic stress, that is, anything to do with studying for classes; financial stress, which has to do with paying for school; and personal stress, which is stress associated with personal problems in college. College students experience many effects of stress and depression.
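The fuzzy cognitive map machinery underlying a CDBFCM study can be sketched generically: concepts are on/off nodes, directed edge weights encode causal influence, and the state vector is passed through the connection matrix with a threshold until it settles. The three-node example below is an assumed toy causal chain, not the paper's stress map; as is usual in FCM analysis, the initially activated concept is kept switched on.

```python
def fcm_fixed_point(W, state, keep_on, max_iters=20):
    """Iterate an FCM: new_state = threshold(state . W), with the concepts in
    `keep_on` clamped to 1. Returns the settled (fixed-point) state vector."""
    n = len(state)
    for _ in range(max_iters):
        raw = [sum(state[i] * W[i][j] for i in range(n)) for j in range(n)]
        new = [1 if x > 0 else 0 for x in raw]   # threshold activation
        for i in keep_on:
            new[i] = 1                            # clamp the triggered concept
        if new == state:
            return state
        state = new
    return state

# Toy causal chain: academic stress -> lost sleep -> poor grades.
W = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
settled = fcm_fixed_point(W, [1, 0, 0], keep_on=[0])
```

Switching on the first concept propagates along the chain until all three concepts are active, which is the map's fixed point.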
[4017] vixra:1510.0207 [pdf]
Correlated Aggregating Operators for Simplified Neutrosophic Set and their Application in Multi-attribute Group Decision Making
The simplified neutrosophic set (SNS) is a generalization of the fuzzy set that is designed for some incomplete, uncertain and inconsistent situations in which each element has different truth membership, indeterminacy membership and falsity membership functions.
[4018] vixra:1510.0196 [pdf]
Interval Neutrosophic Sets
Neutrosophic set is a part of neutrosophy which studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra.
[4019] vixra:1510.0195 [pdf]
Interval-Valued Neutrosophic Soft Sets and Its Decision Making
In this paper, the notion of interval-valued neutrosophic soft sets (ivn−soft sets) is defined, which is a combination of interval-valued neutrosophic sets and soft sets.
[4020] vixra:1510.0183 [pdf]
On Some Similarity Measures and Entropy on Quadripartitioned Single Valued Neutrosophic Sets
A notion of Quadripartitioned Single Valued Neutrosophic Sets (QSVNS) is introduced and a theoretical study on various set-theoretic operations on them has been carried out.
[4021] vixra:1510.0169 [pdf]
A Double Cryptography Using The Smarandache Keedwell Cross Inverse Quasigroup
The present study further strengthens the use of the Keedwell CIPQ against attack on a system by using the Smarandache Keedwell CIPQ for cryptography, in a similar spirit to that in which the cross inverse property has been used by Keedwell.
[4022] vixra:1510.0166 [pdf]
A New View of Combinatorial Maps by Smarandache's Notion
From a geometrical viewpoint, the conception of map geometries is introduced, which is a nice model of the Smarandache geometries and also a new and more general kind of intrinsic geometry of surfaces. Some open problems relating combinatorial maps to Riemann geometry and Smarandache geometries are presented.
[4023] vixra:1510.0165 [pdf]
An Holomorphic Study Of Smarandache Automorphic and Cross Inverse Property Loops
By studying the holomorphic structure of automorphic inverse property quasigroups and loops [AIPQ and AIPL] and cross inverse property quasigroups and loops [CIPQ and CIPL], it is established that the holomorph of a loop is a Smarandache AIPL, CIPL, K-loop, Bruck-loop or Kikkawa-loop if and only if its Smarandache automorphism group is trivial and the loop itself is a Smarandache AIPL, CIPL, K-loop, Bruck-loop or Kikkawa-loop.
[4024] vixra:1510.0164 [pdf]
An Holomorphic Study of the Smarandache Concept in Loops
If two loops are isomorphic, then it is shown that their holomorphs are also isomorphic. Conversely, it is shown that if their holomorphs are isomorphic, then the loops are isotopic. It is shown that a loop is a Smarandache loop if and only if its holomorph is a Smarandache loop.
[4025] vixra:1510.0145 [pdf]
Intuitive Curvature: No Relation to the Riemann Tensor
Merriam-Webster's Collegiate Dictionary, Eleventh Edition, gives a technical definition of curvature, "the rate of change of the angle through which the tangent to a curve turns in moving along the curve and which for a circle is equal to the reciprocal of the radius". That precisely describes a curve's intuitive curvature, but the Riemann "curvature" tensor is zero for all curves! We work out the natural extension of intuitive curvature to hypersurfaces, based on the rates that their tangents develop components which are orthogonal to the local tangent hyperplane. Intuitive curvature is seen to have the form of a second-rank symmetric tensor which cannot be algebraically expressed in terms of the metric tensor and a finite number of its partial derivatives. The Riemann "curvature" tensor contrariwise is a fourth-rank tensor with both antisymmetric and symmetric properties that famously is algebraically expressed in terms of the metric tensor and its first and second partial derivatives. Thus use of the word "curvature" with regard to the Riemann tensor is misleading, and since it can't encompass intuitive curvature, Gauss-Riemann "geometry" oughtn't be termed differential geometry either. That "geometry" is no more than the class of the algebraic functions of the metric and any finite number of the metric's partial derivatives, which it is convenient to organize into generally covariant entities such as the Riemann tensor because those potentially play a role in generally-covariant metric-based field theories.
[4026] vixra:1510.0131 [pdf]
Problem of Thermally Driven Diffusion in Terms of Occupation Numbers
In the new approach to the diffusion problem, the conventional statistical derivation is reconsidered deterministically using the partition function for thermal velocities. The resulting relation for the time evolution of the particle distribution is an integro-differential equation. Its first approximation provides the conventional partial differential equation, Fick's second law, with the diffusion transport coefficient proportional to the temperature.
[4027] vixra:1510.0114 [pdf]
Gravity of Subjectivity
This work is based on a quantum modification of the general relativity which includes effects of production/absorption of gravitons by the vacuum. It turns out that gravitons have created and continue to influence the universe, including people. The theory (without fitting parameters) is in good quantitative agreement with cosmological observations. In this theory we get an interface between gravitons and ordinary matter, which very likely exists not only in cosmos but everywhere, including our body and, especially, our brain. Subjective experiences are considered as a manifestation of that interface. This opens a possibility of a "communication" with gravitons. Probable applications of these ideas include health (brain stimulation), communication, computational capabilities and energy resources. Social consequences of these ideas can be comparable with the effects of the invention and application of electricity.
[4028] vixra:1510.0107 [pdf]
The Decay of a Black Hole in a GUT Model
I propose a phenomenological model for the decay of black holes near the Planck mass. The decay takes place via a quantum state between general relativity and a grand unified field theory like SO(10). This group is also favored by a no-scale SUGRA GUT model for Starobinsky inflation by other authors.
[4029] vixra:1510.0084 [pdf]
E8: A Gauge Group for the Electron?
The eight geometric objects of the electron impedance model, as fortuitous happenstance would have it, are those of the 3D Pauli subalgebra of the geometric interpretation of Clifford algebra. Given that impedance is a measure of the amplitude and phase of opposition to the flow of energy, and that quantum phase is the gauge parameter in quantum mechanics, one might consider an approach in which elements of an electron gauge group would be the phase shifters, the impedances of interactions between these geometric objects. The resulting 4D Dirac algebra is briefly examined in relation to the E8 exceptional Lie group.
[4030] vixra:1510.0082 [pdf]
Robert Dicke's Momentous Error - A Comment on Rev.Mod.Phys. 29 (1957), p.363
It is shown that the paper `Gravitation without a principle of equivalence' by American Astrophysicist Robert Dicke (1916-1997) contains a simple, but consequential, technical mistake. The purpose of this comment however is not to blame Dicke, but to bring to mind the intriguing idea exposed in his article. The cosmology proposed by Dicke would have been in full agreement with Dirac's Large Number Hypothesis, had Dicke not gone astray at that decisive step. Instead of igniting the dispute with Dirac that followed (R. Dicke, Nature 192 (1961), p. 440; P. A. M. Dirac, Nature 192 (1961) p.441), the two researchers could have joined forces in creating an alternative cosmology that incorporated Mach's principle.
[4031] vixra:1510.0075 [pdf]
Ultralight Gravitons with Tiny Electric Dipole Moment Are Seeping from the Vacuum
The mass and electric dipole moment of the graviton, which is identified as the dark matter particle, are estimated. This changes the concept of dark matter and can help to explain the baryon asymmetry of the universe. The calculations are based on a quantum modification of the general relativity with two additional terms in the Einstein equations, which take into account production/absorption of gravitons. In this theory, there is no Big Bang at the beginning (some local bangs during the evolution of the universe are probable), no critical density of the universe, no dark energy (no need for a cosmological constant) and no inflation. The theory (without fitting) is in good quantitative agreement with cosmic data. Key words: graviton; cosmology; age of the universe; interface between gravitons and ordinary matter.
[4032] vixra:1510.0057 [pdf]
Clearest Proof of Poincare Conjecture or Is Grisha Perelman Right?
There is a Prize committee (claymath.org) which requires publication in a worldwide reputable mathematics journal and at least two years of subsequent scientific admiration. Why then has the God-less Grisha Perelman published only in a God-less forum (arXiv), in a publication as unclear as a crazy sketch, yet mummy's child "Grisha" has been forced to accept the Millennium Prize? Am I simply ugly or poor? Please respect my copyrights!
[4033] vixra:1510.0034 [pdf]
Quantum System Symmetry is not the Source of Unitary Information in Wave Mechanics, in Context of Quantum Randomness
The homogeneity symmetry is re-examined and shown to be non-unitary, with no requirement for the imaginary unit. This removes symmetry, as reason, for imposing unitarity (or self-adjointness) -- by Postulate. The work here is part of a project researching logical independence in quantum mathematics, for the purpose of advancing a full and complete theory of quantum randomness. Keywords: foundations of quantum theory, quantum physics, quantum mechanics, wave mechanics, Canonical Commutation Relation, symmetry, homogeneity of space, unitary, non-unitary, unitarity, mathematical logic, formal system, elementary algebra, information, axioms, mathematical propositions, logical independence, quantum indeterminacy, quantum randomness.
[4034] vixra:1510.0033 [pdf]
Chronodynamics, Cosmic Space-Time Bubbles and the Entropic Dark Matter Force as a Galactic Stabilizer
In this paper I continue with the elementary particle Dark Matter halo model. The first few sections will shortly repeat the basics of this model. In section II I take a better look at the modified Newtonian potential as a consequence of the changed source mass. In section VI the effect of the modified potential on Einstein lensing is touched upon briefly. In sections VII and VIII the order stored in the frequency gauge of the de Broglie time devices in the outer galactic disks due to Dark Matter is related to entropy. The study of order stored in frequency-synchronized or desynchronized time devices is called chronodynamics, and this is added to the thermodynamics part of entropy. We show that in the inner galactic range thermodynamic entropy dominates chronodynamic entropy, but that in the outer flat rotation curve part of the galactic disks the chronodynamic entropy dominates by far. We show that the chronodynamic entropy of a galaxy is lowest in its outer fringes and highest in its luminous center. This creates an inward entropic force, and that is how the Dark Matter halo, through the intermediary of chronodynamic entropy, stabilizes galaxies. The frequency gauge of the de Broglie elementary time devices in the outer range of galactic disks creates a sort of gauged time bubble in a cosmic time sea, a gauge regulated by the BTF relation. In the last section we relate this galactic gauged time bubble to GR as a reference frame independent theory of gravity.
[4035] vixra:1510.0029 [pdf]
Quantum Modification of General Relativity (Qmoger)
This work is based on a modification of the general relativity, which includes effects of production/absorption of matter by the vacuum. The theory (without fitting parameters) is in good quantitative agreement with cosmological observations (SnIa, SDSS-BAO and reduction of acceleration of the expanding universe). In this theory, there is no Big Bang at the beginning, but some local bangs during the evolution are probable. Also, there is no critical density of the universe and, therefore, no dark energy. Based on an exact Gaussian solution for the scale factor, it is shown that an effective age of the universe is about 327 billion years. Production of primary dark matter particles started 43 billion years later. It is shown that the characteristic distance between particles is 30 times smaller than the thermal de Broglie wavelength, so that quantum effects, including formation of the Bose-Einstein condensate, can dominate. "Ordinary" matter was synthesized from dark matter in galaxies. Supplementary exact solutions are obtained for various ranges of parameters. From the theory we get an interface between dark and ordinary matter (IDOM), which very likely exists not only in cosmos but everywhere, including our body and our brain. Key words: cosmology; age of the universe; dark matter; interface between dark and ordinary matter.
[4036] vixra:1510.0028 [pdf]
Quantum Modification of General Relativity
This work is based on a modification of the general relativity with two additional terms in the Einstein equations. The additional terms give a macroscopic description of the quantum effects of production/absorption of matter by the vacuum. The theory (without fitting parameters and without the hypothesis of inflation) is in good quantitative agreement with cosmological observations (SnIa, SDSS-BAO and reduction of acceleration of the expanding universe). In this theory, there is no Big Bang at the beginning, but some local bangs during the evolution are probable. Also, there is no critical density of the universe and, therefore, no dark energy (no need for the cosmological constant). Based on an exact Gaussian solution for the scale factor, it is shown that an effective age of the universe is about 327 billion years. Production of primary dark matter particles (possibly gravitons) started 43 billion years later. It is shown that the characteristic distance between particles is much smaller than the thermal de Broglie wavelength, so that quantum effects, including formation of the Bose-Einstein condensate, can dominate even at high temperature. "Ordinary" matter was synthesized from dark matter (with an estimated small electric dipole moment (EDM)) in galaxies. Supplementary exact solutions are obtained for various ranges of parameters. From the theory we get an interface between dark and ordinary matter (IDOM), which very likely exists not only in cosmos but everywhere, including our body and our brain. Key words: modification of general relativity; cosmology; age of the universe; dark matter (gravitons); interface between dark and ordinary matter.
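The degeneracy criterion invoked in this abstract is the standard one: quantum effects dominate when the interparticle distance falls below the thermal de Broglie wavelength, lambda = h / sqrt(2*pi*m*kB*T). A minimal sketch of this textbook formula follows; the particle mass and temperature used below are illustrative assumptions, not values taken from the paper.

```python
from math import pi, sqrt

# Thermal de Broglie wavelength lambda = h / sqrt(2*pi*m*kB*T).
# Quantum effects (e.g. Bose-Einstein condensation) dominate when the
# mean interparticle distance is smaller than lambda.
h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K

def thermal_de_broglie(m, T):
    """Wavelength in metres for a particle of mass m (kg) at temperature T (K)."""
    return h / sqrt(2 * pi * m * kB * T)

# Illustrative example (assumed values): a hydrogen-mass particle at 2.7 K
lam = thermal_de_broglie(1.67e-27, 2.7)
print(lam)  # roughly 1 nm
```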
[4037] vixra:1510.0022 [pdf]
Nonextensive Deng Entropy
In this paper, a generalized Tsallis entropy, named Nonextensive Deng entropy, is presented. When the basic probability assignment degenerates to a probability distribution, Nonextensive Deng entropy is identical to Tsallis entropy.
[4038] vixra:1510.0020 [pdf]
A Complete Proof of Beal Conjecture-Final Version
In 1997, Andrew Beal announced the following conjecture: \textit{Let $A, B, C, m, n$, and $l$ be positive integers with $m,n,l > 2$. If $A^m + B^n = C^l$ then $A, B,$ and $C$ have a common factor.} We begin by constructing the polynomial $P(x)=(x-A^m)(x-B^n)(x+C^l)=x^3-px+q$ with $p,q$ integers depending on $A^m, B^n$ and $C^l$. We solve $x^3-px+q=0$ and obtain the three roots $x_1,x_2,x_3$ as functions of $p,q$ and a parameter $\theta$. Since $A^m,B^n,-C^l$ are the only roots of $x^3-px+q=0$, we discuss the conditions under which $x_1,x_2,x_3$ are integers and have or do not have a common factor. Three numerical examples are given.
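The polynomial construction described in this abstract can be checked numerically. The sketch below uses the well-known identity $3^3 + 6^3 = 3^5$ as an illustrative example (not necessarily one of the paper's three examples): because $A^m + B^n - C^l = 0$, the quadratic term of $(x-A^m)(x-B^n)(x+C^l)$ vanishes, leaving exactly the stated form $x^3 - px + q$.

```python
from math import gcd

# Known Beal-type identity: 3^3 + 6^3 = 3^5 (27 + 216 = 243),
# used here as an illustrative example, not taken from the paper.
A, m, B, n, C, l = 3, 3, 6, 3, 3, 5
r1, r2, r3 = A**m, B**n, -C**l
assert r1 + r2 + r3 == 0  # A^m + B^n = C^l makes the x^2 term vanish

# Expand (x - r1)(x - r2)(x - r3) = x^3 - p*x + q
p = -(r1*r2 + r1*r3 + r2*r3)
q = -(r1*r2*r3)

# Each root satisfies x^3 - p*x + q = 0
for x in (r1, r2, r3):
    assert x**3 - p*x + q == 0

print(p, q, gcd(gcd(A, B), C))  # 53217 1417176 3
```

The final gcd confirms the conjecture's conclusion for this example: $A$, $B$ and $C$ share the common factor 3.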
[4039] vixra:1510.0010 [pdf]
Damped Harmonic Oscillator with Time-Dependent Frictional Coefficient and Time-Dependent Frequency
In this paper we extend the so-called dual or mirror image formalism and the Caldirola-Kanai formalism for the damped harmonic oscillator to the case that both the frictional coefficient and the frequency depend on time explicitly. As a solvable example, we consider the case of frictional coefficient $\gamma(t) = \frac{\gamma_0}{1 + q t}, (q > 0)$ and angular frequency function $\omega(t) = \frac{\omega_0}{1 + q t}$. For this choice, we construct the quantum harmonic Hamiltonian and express it in terms of $su(2)$ algebra generators. Using the exact invariant for the Hamiltonian and its unitary transform, we solve the time-dependent Schr\"odinger equation with time-dependent frictional coefficient and time-dependent frequency.
[4040] vixra:1510.0009 [pdf]
Qualia (Subjectivity) as a Manifestation of an Interface Between Dark and Ordinary Matter
This work is based on a modification of the general relativity, which includes effects of production/absorption of matter by the vacuum. The theory (without fitting parameters) is in good quantitative agreement with cosmological observations. In this theory we get an interface between dark and ordinary matter, which very likely exists not only in cosmos but everywhere, including our body and, especially, our brain. Subjective experiences are considered as a manifestation of that interface. This opens a possibility of a "communication" with dark matter. Probable applications of these ideas include health (brain stimulation), communication, computational capabilities and energy resources.
[4041] vixra:1510.0002 [pdf]
Further Insights on the New Concept of Heat for Open Systems
A new definition of heat for open systems, with a number of advantages over previous definitions, was introduced in [2013; Int. J. Therm., 16(3), 102--108]. We extend the previous work by analyzing the production of entropy and showing that the new definition of heat appears naturally as the proper flow (flux density) conjugate to the gradient of temperature, with the previous definitions only considering a subset of the physical effects associated with this gradient. We also revisit the transfer of heat in multicomponent systems, confirming the identity derived in the previous work for the identification of thermal effects associated with each one of the chemical potentials in the system. The new definition of heat was previously obtained within the scope of the traditional thermodynamics of irreversible processes (TIP), which has a limited field of applicability: macroscopic systems with not too strong gradients and not too fast processes. We now extend the new definition of heat to more general situations and to the quantum level of description using a standard non-commutative phase space, with the former TIP-level definition recovered by partial integration.
[4042] vixra:1509.0285 [pdf]
Kaluza-Klein Nature of Entropy Function
In the present study, we mainly investigate the nature of the entropy function in a non-flat Kaluza-Klein universe. We prove that the first and generalized second laws of gravitational thermodynamics are valid on the dynamical apparent horizon.
[4043] vixra:1509.0280 [pdf]
Non Relativistic Quantum Mechanics and Classical Mechanics as Special Cases of the Same Theory.
We start by rewriting classical mechanics in a quantum mechanical fashion and point out that the only difference with quantum theory resides at one point. There is a classical analogue of the collapse of the wavefunction, and an extension of the usual Born rule is proposed which might solve this problem. We work only with algebras over the real (complex) numbers; general number fields of finite characteristic allowing for finite-dimensional representations of the commutation relations are not considered, given that such fields are not well ordered and do not give rise to a well-defined probability interpretation. Our theory generalizes, however, to discrete spacetimes and finite-dimensional algebras. Looking at physics this way, spacetime itself distinguishes itself algebraically by means of well-chosen commutation relations, and there is further nothing special about it, meaning it also has a particle interpretation just like any other dynamical variable. Likewise, there is no reason for the dynamics to be Hamiltonian, and therefore we have a nonconservative formulation of quantum physics at hand. The harmonic oscillator (amongst a few others) distinguishes itself because the algebra forms a finite-dimensional Lie algebra; the classical and quantum (discrete) harmonic oscillators are studied in some more generality, and examples are given which are neither classical nor quantum.
[4044] vixra:1509.0261 [pdf]
Age of the Universe and More
This work is based on a modification of the general relativity, which includes effects of production/absorption of matter by the vacuum. The theory (without fitting parameters) is in good quantitative agreement with cosmological observations (SnIa, SDSS-BAO and reduction of acceleration of the expanding universe). In this theory, there is no Big Bang at the beginning, but some local bangs during the evolution are probable. Also, there is no critical density of the universe and, therefore, no dark energy. Based on an exact Gaussian solution for the scale factor, it is shown that an effective age of the universe is about 327 billion years. Production of primary dark matter particles started 43 billion years later. It is shown that the characteristic distance between particles is 30 times smaller than the thermal de Broglie wavelength, so that quantum effects, including formation of the Bose-Einstein condensate, can dominate. "Ordinary" matter was synthesized from dark matter in galaxies. Supplementary exact solutions are obtained for various ranges of parameters. From the theory we get an interface between dark and ordinary matter (IDOM), which very likely exists not only in cosmos but everywhere, including our body and our brain. Key words: cosmology; age of the universe; dark matter; interface between dark and ordinary matter.
[4045] vixra:1509.0260 [pdf]
Subjective Experiences as a Manifestation of an Interface Between Dark and Ordinary Matter
This work is based on a modification of the general relativity, which includes effects of production/absorption of matter by the vacuum. The theory (without fitting parameters) is in good quantitative agreement with cosmological observations. In this theory we get an interface between dark and ordinary matter, which very likely exists not only in cosmos but everywhere, including our body and, especially, our brain. Subjective experiences are considered as a manifestation of that interface. This opens a possibility of a "communication" with dark matter. Probable applications of these ideas include health (brain stimulation), communication, computational capabilities and energy resources.
[4046] vixra:1509.0250 [pdf]
A Critical Review of MOND from the Perspective of the 'Dark Matter Halo for Every Elementary Particle' Model
In this paper I look at MOND from the perspective of my elementary particle Dark Matter halo hypothesis. First I repeat the core elements of my model, in order for the paper to be self-contained. Then I show how the energy equation for the rotation curve with an extra constant term can give the natural but deceptive impression that Newton's laws have to be corrected in the ultra-low acceleration regime, if this energy perspective is missing. Special attention is given to the virtual aspect of the acceleration, virtual as in not caused by Newton's force of gravity, due to a constant kinetic energy caused by the Dark Matter halo at large distances. The rotation curve equations are discussed and the one from my model is given. Conclusions are drawn from the $\Lambda$CDM core-cusp problem in relation to this new perspective on MOND as hiding a Dark Matter model. My model might well be the bridge between MOND at the galactic scale and $\Lambda$CDM at the cosmic level.
[4047] vixra:1509.0236 [pdf]
The Strouhal Numbers from Von Karman's Vortex Street
The non-relativistic and relativistic Strouhal numbers are derived from the so-called von Karman vortex street. The relativistic derivation of this formula follows from the addition formula for velocities. The Strouhal friction tones are also generated during the motion of cosmic rays in the relic photon sea, during the motion of bolides in the atmosphere, during the motion of Saturn's rings in the relic black-body sea, during the motion of bodies in superfluid helium, and so on.
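For reference, the non-relativistic Strouhal relation underlying the vortex street is f = St * v / d, with St roughly 0.2 for a circular cylinder over a wide range of Reynolds numbers; the relativistic correction derived in the paper is not reproduced here. A minimal illustrative sketch (the flow speed and diameter below are assumed example values):

```python
# Classical vortex-street shedding frequency f = St * v / d,
# with St ~ 0.2 for a circular cylinder over a wide Reynolds-number range.
def shedding_frequency(v, d, St=0.2):
    """Return the vortex shedding frequency in Hz for flow speed v (m/s)
    past an obstacle of diameter d (m)."""
    return St * v / d

# Assumed example: 10 m/s wind past a 5 cm cylinder
f = shedding_frequency(v=10.0, d=0.05)
print(f)  # about 40 Hz
```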
[4048] vixra:1509.0233 [pdf]
Quantum Fortunetelling
Recently it was proposed that quantum mechanics, if applied to macroscopic systems, would necessarily include a form of fortune telling or psychic phenomena. In this article, this claim is presented using formal quantum mechanics methods, and the results are analysed and found to be possible.
[4049] vixra:1509.0232 [pdf]
The Logical Standing of Unitarity in Wave Mechanics in Context of Quantum Randomness
The homogeneity symmetry is re-examined and shown to be non-unitary. This is motivated by the prospect that logical independence in elementary algebra, entering quantum mathematics, will constitute the basis for a theory explaining quantum randomness. Keywords: foundations of quantum theory, quantum physics, quantum mechanics, wave mechanics, Canonical Commutation Relation, symmetry, homogeneity of space, unitary, non-unitary, unitarity, mathematical logic, formal system, elementary algebra, information, axioms, mathematical propositions, logical independence, quantum indeterminacy, quantum randomness.
[4050] vixra:1509.0217 [pdf]
Metric Operator Equations for Quantum Gravity
The search for a consistent theory of quantum gravity has motivated the development of radically different approaches. This search consists of constructing a mathematical apparatus that encapsulates both the concepts of quantum theory and general relativity. However, no approach has been definitive and the problem remains open. As the quantization of the metric is an alternative, this paper shows how a metric operator may be explicitly obtained by introducing a temporal operator, defining an induced metric and invoking some spacetime symmetries. This makes it possible to relate the effective acoustic metric to the model proposed here. The metric operator equations are expressed in terms of a Hamiltonian operator describing the degrees of freedom of the quantum vacuum whose dynamics gives rise to the metric field. These findings may help understand and study the quantum vacuum at the Planck scale, providing one more tool for the community working on the quantization of gravity.
[4051] vixra:1509.0215 [pdf]
About The Geometry of Cosmos
The current paper presents a new idea that might lead us to the Grand Unified Theory. A concrete mathematical framework has been provided that could be appropriate for one to work with. Possible answers are given concerning the problems of dark matter and dark energy, as well as the "penetration" to the vacuum-dominant epoch, combining Quantum Physics with Cosmology through the existence of the Higgs boson. A value for the Higgs mass around 125.179345 GeV/c^2 and a value for the vacuum density around 4.41348x10^-5 GeV/cm^3 were derived. Via Cartan's theorem, a proof regarding the number of bosons existing in nature (28) has been presented. Additionally, the full Lagrangian of our Cosmos (including Quantum Gravity) was accomplished.
[4052] vixra:1509.0213 [pdf]
The Baryonic Tully-Fisher Relation Combined with the Elementary Particle Dark Matter Halo Hypothesis Leads to a Universal Dark Matter Gravitational Acceleration Constant for Galaxies
In this paper I combine the elementary particle Dark matter halo hypothesis with the Baryonic Tully-Fisher relation. It results in a universal Dark Matter galaxy gravitational centripetal acceleration and connects the galaxy specific Dark Matter radius uniquely to the galaxy rotation curve's final velocity. This allows the precise operational definition of the galaxy specific Dark Matter density function and mass function.
[4053] vixra:1509.0209 [pdf]
On the Millennium Prize Problems
There is a Prize committee (claymath.org) which requires publication in a worldwide reputable mathematics journal and at least two years of subsequent scientific admiration. Why then has Grisha Perelman published only in a forum (arXiv), in a publication as unclear as a crazy sketch, yet mummy's child ``Grisha'' has been forced to accept the Millennium Prize? Am I simply ugly or poor? If the following text is not accepted by the committee as payable proofs (but I hope it is), then let it at least build your confidence to refer to these conjectures and problems (which now have my answers) as achieved facts. I see no logical problems with all these plain facts; are you with me at last? It is your free choice to be a blind and discriminative ignorant or to be a better one. One can even ignore one's own breathing and, thus, die. One can ignore whatever one likes in this world. But it is not always recommended. Please respect my copyrights!
[4054] vixra:1509.0197 [pdf]
Logical Independence Inherent in Elementary Algebra Seen in Context of Quantum Randomness
As opposed to the classical logic of true and false, when elementary algebra is treated as a formal axiomatised system, formulae in that algebra are either provable, disprovable or otherwise logically independent of axioms. This logical independence is well-known to Mathematical Logic. The intention here is to cover the subject in a way accessible to physicists. This work is part of a project researching logical independence in quantum mathematics, for the purpose of advancing a complete theory of quantum randomness. Keywords: mathematical logic, formal system, axioms, mathematical propositions, Soundness Theorem, Completeness Theorem, logical independence, mathematical undecidability, foundations of quantum theory, quantum mechanics, quantum physics, quantum indeterminacy, quantum randomness.
[4055] vixra:1509.0188 [pdf]
Gravity in Curved Phase-Spaces : Towards Geometrization of Matter
After reviewing the basic ideas behind Born's Reciprocal Relativity theory, the geometry of the (co) tangent bundle of spacetime is studied via the introduction of nonlinear connections associated with certain $nonholonomic$ modifications of Riemann--Cartan gravity within the context of Finsler geometry. The curvature tensors in the (co) tangent bundle of spacetime are explicitly constructed leading to the analog of the Einstein vacuum field equations. The geometry of Hamilton Spaces associated with curved phase spaces follows. An explicit construction of a gauge theory of gravity in the $8D$ co-tangent bundle $ T^*M$ of spacetime is provided, and based on the gauge group $ SO (6, 2) \times_s R^8$ which acts on the tangent space to the cotangent bundle $ T_{ ( x, p) } T^*M $ at each point $ ({\bf x}, {\bf p})$. Several gravitational actions associated with the geometry of curved phase spaces are presented. We conclude with a discussion about the geometrization of matter, QFT in accelerated frames, {\bf T}-duality, double field theory, and generalized geometry.
[4056] vixra:1509.0182 [pdf]
A Dark Matter Halo for Every Elementary Particle in a Zwicky de Broglie Synthesis
In this paper I introduce a new Dark Matter hypothesis. I assume that every elementary particle has a Dark Matter halo. Given a rest mass m_0 at r=0, it will have an additional spherical Dark Matter halo containing an extra mass in the sphere with radius r as m_DM = (r/R_DM)m_0, with the constant Dark Matter radius R_DM having a measured value somewhere between 10 kpc and 20 kpc, so approximately once or twice the radius of an average luminous galaxy. The total rest mass of an elementary particle contained within a sphere with radius r will then be given by m = m_0 + (r/R_DM)m_0. The correlated mass density is \rho_DM = m_0/(4 pi r^2 R_DM). The new Newtonian gravitational energy will be U_g = - G M_0 m_0/r - G M_0 m_0/R_DM, resulting in an unchanged Newtonian force of gravity but with a correct galaxy velocity rotation curve, due to the still applicable virial energy theorem. The axiom is neutral with respect to the theory of gravity, because it is a statement about mass and mass density distribution only. But it implies that WIMPs and the like aren't necessary to explain Dark Matter; my proposal isn't WIMP neutral. Beyond the scale of galaxy clusters the model becomes problematic due to an extra halo-halo interaction term becoming active at that scale.
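The density and enclosed-mass formulas quoted in this abstract are mutually consistent: integrating rho_DM = m_0/(4 pi r^2 R_DM) over a sphere of radius r returns m_DM = (r/R_DM)m_0, since the r^2 in the shell volume cancels the r^-2 in the density. A quick numerical check of this (the parameter values below are illustrative assumptions, not fitted values from the paper):

```python
from math import pi

# Check: integrating rho_DM = m0 / (4*pi*r^2 * R_DM) over a sphere of
# radius r should reproduce the claimed enclosed halo mass (r/R_DM)*m0.
m0 = 1.0       # particle rest mass, arbitrary units (assumed)
R_DM = 15.0    # Dark Matter radius in kpc (the paper quotes 10-20 kpc)
r = 9.0        # integration radius in kpc (assumed)

N = 10000
dr = r / N
halo_mass = 0.0
for i in range(N):
    ri = (i + 0.5) * dr                      # midpoint rule
    rho = m0 / (4 * pi * ri**2 * R_DM)       # proposed halo density
    halo_mass += rho * 4 * pi * ri**2 * dr   # spherical shell mass

print(round(halo_mass, 6), round(r / R_DM * m0, 6))  # both print as 0.6
```

The shell integrand is constant (m0/R_DM per unit radius), so even this crude quadrature recovers the closed-form result essentially exactly.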
[4057] vixra:1509.0149 [pdf]
Fundamental Nature of the Fine-Structure Constant
Arnold Sommerfeld introduced the fine-structure constant that determines the strength of the electromagnetic interaction. Following Sommerfeld, Wolfgang Pauli left several clues to calculating the fine-structure constant with his research on Johannes Kepler's view of nature and Pythagorean geometry. The Laplace limit of Kepler's equation in classical mechanics, the Bohr-Sommerfeld model of the hydrogen atom and Julian Schwinger's research enable a calculation of the electron magnetic moment anomaly. Considerations of fundamental lengths such as the charge radius of the proton and mass ratios suggest some further foundational interpretations of quantum electrodynamics.
[4058] vixra:1509.0142 [pdf]
Quantum Gravity Experiments: No Magnetic Effects
A quantum gravity experiment was reported in Cahill, Progress in Physics 2015, v.11(4), 317-320, with the data confirming the generalisation of the Schrödinger equation to include the interaction of the wave function with dynamical space. Dynamical space, via this interaction process, raises and lowers the energy of the electron wave function, which is detected by observing consequent variations in the electron quantum barrier tunnelling rate in reverse-biased Zener diodes. This process has previously been reported and enabled the measurement of the speed of the dynamical space flow, which is consistent with numerous other detection experiments. However, Vrba, Progress in Physics, 2015, v.11(4), 330, has suggested the various experimental results may have been caused by magnetic field effects, but without any experimental evidence. Here we show experimentally that there is no such magnetic field effect in the Zener Diode Dynamical Space Quantum Detectors.
[4059] vixra:1509.0140 [pdf]
A Computer Program to Solve Water Jug Pouring Puzzles
We provide a C++ program which searches for the smallest number of pouring steps that convert a set of jugs with fixed (integer) capacities and some initial known (integer) water contents into another state with some other prescribed water contents. Each step requires pouring one jug into another without spilling until either the source jug is empty or the drain jug is full, because the model assumes the jugs have irregular shapes and no further marks. The program simply places the initial jug configuration at the root of the tree of state diagrams and deploys the branches (avoiding loops) recursively by generating all possible states reachable from known states in one pouring step.
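The loop-free, level-by-level tree expansion the abstract describes is essentially breadth-first search over jug states. A minimal Python sketch of that idea (not the paper's C++ program; names are ours) might look like:

```python
from collections import deque

def min_pour_steps(caps, start, goal):
    """Breadth-first search over jug states: each move pours one jug
    into another until the source is empty or the target is full."""
    start, goal = tuple(start), tuple(goal)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        n = len(state)
        for i in range(n):            # source jug
            for j in range(n):        # drain jug
                if i == j or state[i] == 0 or state[j] == caps[j]:
                    continue
                amount = min(state[i], caps[j] - state[j])
                nxt = list(state)
                nxt[i] -= amount
                nxt[j] += amount
                nxt = tuple(nxt)
                if nxt not in seen:   # avoid revisiting states (loops)
                    seen.add(nxt)
                    queue.append((nxt, steps + 1))
    return None  # goal unreachable

# classic 8-5-3 puzzle: split 8 units into 4+4
print(min_pour_steps([8, 5, 3], [8, 0, 0], [4, 4, 0]))  # -> 7
```

Because BFS explores states in order of pouring count, the first time the goal state is dequeued the step count is guaranteed minimal.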
[4060] vixra:1509.0131 [pdf]
Holism and the Geometrization and Unification of Fundamental Physical Interactions (An Essay on Philosophy of Physics)
We are looking for a paradigm of modern physics and this essay is devoted to this problem. Thus in the essay we consider very important problems of philosophy of physics: a unification and a geometrization of fundamental physical interactions and holism. We are looking for holistic approaches in many different domains of physics. Simultaneously we look for some holistic approaches in the history of philosophy and compare them to approaches in physics. We consider a unification and a geometrization of physical interactions in contemporary physics in order to find some connections with many philosophical approaches known in the history of philosophy. In this way we want to connect humanity to natural sciences, i.e. physics. In the history of philosophy there are several basic principles, so-called “arche”. Our conclusion is that a contemporary “arche” is geometry as in the Einstein programme connected to a holistic approach. In our meaning, geometry as “arche” of physics will be a leading idea in fundamental physics. Physics itself will be a leading force in philosophy of science and philosophy itself. Cultural quasi-reality (in R. Ingarden’s meaning) and also biology, medicine and social sciences will be influenced by physics. In this way a cultural quasi-reality will be closer to physical reality.
[4061] vixra:1509.0119 [pdf]
The Maximum Deng Entropy
Dempster-Shafer evidence theory has been widely used in many applications due to its advantages in handling uncertainty. Deng entropy has been proposed to measure the uncertainty degree of a basic probability assignment (BPA) in evidence theory. It is a generalization of Shannon entropy: when the BPA degenerates to a probability distribution, Deng entropy is identical to Shannon entropy. However, the maximal value of Deng entropy has not been discussed until now. In this paper, the condition for the maximum of Deng entropy is discussed and proved, which is useful for the application of Deng entropy.
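A small numeric sketch may make the claim concrete. It assumes the usual definition of Deng entropy, E_d(m) = -sum_A m(A) log2( m(A) / (2^|A| - 1) ), and the maximising BPA m(A) proportional to 2^|A| - 1 reported in the literature (these formulas are background assumptions, not quoted from this abstract):

```python
import math
from itertools import combinations

def deng_entropy(bpa):
    """Deng entropy of a BPA given as {frozenset focal element: mass}."""
    return -sum(m * math.log2(m / (2**len(A) - 1))
                for A, m in bpa.items() if m > 0)

# frame of discernment {a, b}; focal elements = all non-empty subsets
subsets = [frozenset(s) for k in (1, 2) for s in combinations("ab", k)]

# candidate maximiser: m(A) proportional to 2^|A| - 1
weights = {A: 2**len(A) - 1 for A in subsets}
total = sum(weights.values())
m_star = {A: w / total for A, w in weights.items()}

# compare against the uniform BPA over the same focal elements
uniform = {A: 1 / len(subsets) for A in subsets}
print(deng_entropy(m_star), deng_entropy(uniform))
```

For this two-element frame the candidate maximiser evaluates to log2(5) ≈ 2.32, which indeed exceeds the uniform BPA's entropy (≈ 2.11).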
[4062] vixra:1509.0112 [pdf]
Hubble Redshift Revisited
In 1907 Einstein used special relativity to prove that vacuum permittivity is a function of gravity by assuming that acceleration and gravity are equivalent. Vacuum permittivity is the scalar in Maxwell's equations that determines the speed of light and the strength of electrical fields. Predictably, vacuum permittivity also changes with spacetime curvature in general relativity. When spacetime curvature changes, the wavelengths of both photons and atomic emissions shift. In Friedmann geometry, curvature changes in time. A photon today has a different wavelength than it did yesterday. Yesterday, an atom emitted a photon with a different wavelength than it emits today. Considered together, the evolution of atoms and photons reverse the interpretation of Hubble redshift. Hubble redshift implies that the Friedmann universe is closed and collapsing. During collapse, both atomic emissions and photons blueshift. Atomic emissions blueshift about twice as much as photons blueshift. This means that blueshifted photons seen in a telescope today are redder than blueshifted reference photons emitted by atoms today. With this insight, supernovae redshift observations are fit simply using the physics of Maxwell, Einstein, Bohr, and Friedmann from the 1920's. There is no need to postulate dark energy. Supernovae redshift data imply that the universe is very nearly flat and will collapse in about 9.6 billion years. High-z redshift observations up to 11.9 suggest that the universe is at least 2000 billion years old. This is more than a hundred times greater than a typical star's lifetime. It is probable that most dark matter is the residue of stellar evolution. The changes in atoms and photons derived here confirm Schrödinger's 1939 proof that quantum wave functions expand and contract proportionally to the radius of a closed Friedmann universe.
[4063] vixra:1509.0106 [pdf]
Crush-down of One World Trade Center: Conditions in the Building from Roof-line Motion Data
We analyze the crush-down collapse of One World Trade Center (1 WTC, North Tower) in the framework of the National Institute of Standards and Technology (NIST, 2005) collapse hypothesis. The main feature of crush-down is that a moving part of the building - the top section - falls onto the stationary base and absorbs mass on the way. We extend the Bažant-Verdure-Seffen (BVS) model of crush-down (Bažant and Verdure, 2002; Seffen, 2008), in which we split the crushing front in two, one at the core and the other at the perimeter of the building. We fit the BVS and the split-front crush-down models to recently published roofline motion data (MacQueen and Szamboti, 2009) to find the detailed variation of the crushing force $F^C(Z)$ in storeys 97 and 96, and the average crushing force $\langle F^C \rangle$ in the remainder of the impact zone (storeys 95 to 93) and in the base below for the remainder of the roofline data (storeys 92 through 87). We show how, within the NIST hypothesis and the BVS model, the ratio $\varepsilon = \langle F^C \rangle / F^C(0)$ requires a correction factor of 1/6 to match the data. We construct a Controlled Demolition (CD) hypothesis which avoids this and other correction factors through two assumptions: $(i)$ the top section is twice as massive as it appears to be, with its core initially stretching down to the 75th storey; and $(ii)$ the collapse starts with a wave of massive destruction which annihilates the core below the 75th storey and separates the top section from the base below the impact zone, following which the top section falls to the ground opposed mostly by the perimeter columns, whose strength is approximately a third of the total strength. Within the CD hypothesis we achieve excellent agreement between the Bažant-Verdure model of crushing force and the data.
[4064] vixra:1509.0104 [pdf]
A Formula Based Approach to Arithmetic Coding
The Arithmetic Coding process involves recalculation of intervals for each symbol that needs to be encoded. This article derives a formula-based approach for calculating compressed codes and provides a proof deriving the formula from the usual approach. A spreadsheet is also provided for verification of the approach. Consequently, the similarities between Arithmetic Coding and Huffman coding are also visually illustrated.
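The per-symbol interval recalculation the abstract refers to is the standard arithmetic-coding procedure. A toy float-based Python sketch of that usual approach (not the paper's formula-based method; adequate only for short messages) illustrates it:

```python
def cum_intervals(symbols, probs):
    """Map each symbol to its cumulative (low, high) slice of [0, 1)."""
    cum, lo = {}, 0.0
    for s in symbols:
        cum[s] = (lo, lo + probs[s])
        lo += probs[s]
    return cum

def arithmetic_encode(symbols, probs, message):
    """Narrow [low, high) once per symbol; any point inside the final
    interval identifies the message."""
    cum = cum_intervals(symbols, probs)
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        c_lo, c_hi = cum[s]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2

def arithmetic_decode(symbols, probs, code, length):
    cum = cum_intervals(symbols, probs)
    out, low, high = [], 0.0, 1.0
    for _ in range(length):
        span = high - low
        for s in symbols:
            c_lo, c_hi = cum[s]
            if low + span * c_lo <= code < low + span * c_hi:
                out.append(s)
                low, high = low + span * c_lo, low + span * c_hi
                break
    return "".join(out)

probs = {"a": 0.5, "b": 0.3, "c": 0.2}
code = arithmetic_encode("abc", probs, "abca")
print(arithmetic_decode("abc", probs, code, 4))  # -> "abca"
```

A production coder would use integer renormalisation instead of floats; this sketch only shows the interval bookkeeping the formula-based approach replaces.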
[4065] vixra:1509.0091 [pdf]
An Elementary Proof of Beal's Conjecture
In 1997, Andrew Beal announced the following conjecture: let A, B, C, m, n, and l be positive integers with m, n, l > 2. If A^m + B^n = C^l then A, B, and C have a common factor. We begin by constructing the polynomial P(x) = (x - A^m)(x - B^n)(x + C^l) = x^3 - px + q, with p and q integers depending on A^m, B^n and C^l. We solve the equation x^3 - px + q = 0 and obtain the three roots x_1, x_2, x_3 as functions of p, q and a parameter µ. Since A^m, B^n, -C^l are the only roots of x^3 - px + q = 0, we discuss the conditions under which x_1, x_2, x_3 are integers.
[4066] vixra:1509.0088 [pdf]
Cognitive Architecture for Personable and Human-Like AI: A Perspective
In this article we will introduce a cognitive architecture for creating a more human-like and personable artificial intelligence. Recent works such as those by Marvin Minsky, Google DeepMind and cognitive models like AMBR and DUAL that aim to propose/discover an approach to commonsense AI have been promising, since they show that human intelligence can be emulated with a divide-and-conquer approach on a machine. These frameworks work with a universal model of the human mind and do not account for the variability between human beings. It is these differences between human beings that make communication possible and give them a sense of identity. Thus, this work, despite being grounded in these methods, will differ in hypothesizing machines that are diverse in their behavior compared to each other and have the ability to express a dynamic personality like a human being. To achieve such individuality in machines, we characterize the various aspects that can be dynamically programmed onto a machine by its human owners. In order to ensure this on a scale parallel to how humans develop their individuality, we first assume a child-like intelligence in a machine that is more malleable and which then develops into a more concrete, mature version. By having a set of tunable inner parameters called aspects, which respond to external stimuli from their human owners, machines can achieve personability. The result of this work would be that we will not only be able to bond with intelligent machines and relate to them in a friendly way, but also perceive them as having a personality, and as having their limitations. Just as each human being is unique, we will have machines that are unique and individualistic. We will see how they can achieve intuition, and a drive to find meaning in life, all of which are considered aspects unique to the human mind.
[4067] vixra:1509.0048 [pdf]
Adaptive Rejection Sampling with Fixed Number of Nodes
The adaptive rejection sampling (ARS) algorithm is a universal random generator for drawing samples efficiently from a univariate log-concave target probability density function (pdf). ARS generates independent samples from the target via rejection sampling with high acceptance rates. Indeed, ARS yields a sequence of proposal functions that converge toward the target pdf, so that the probability of accepting a sample approaches one. However, sampling from the proposal pdf becomes more computationally demanding each time it is updated. In this work, we propose a novel ARS scheme, called Cheap Adaptive Rejection Sampling (CARS), where the computational effort for drawing from the proposal remains constant, decided in advance by the user. For generating a large number of desired samples, CARS is faster than ARS.
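ARS and CARS both refine an envelope over the target; the accept/reject core they rely on can be illustrated with plain fixed-envelope rejection sampling (an illustrative sketch of that core, not the ARS/CARS adaptation step; the example target, proposal and bound are ours):

```python
import math, random

def rejection_sample(target_logpdf, proposal_sample, proposal_logpdf,
                     log_M, n, rng=random.Random(0)):
    """Plain rejection sampling: accept x ~ proposal with probability
    target(x) / (M * proposal(x)), all computed in log space."""
    out = []
    while len(out) < n:
        x = proposal_sample(rng)
        log_accept = target_logpdf(x) - log_M - proposal_logpdf(x)
        if math.log(rng.random()) < log_accept:
            out.append(x)
    return out

# target: standard normal; proposal: Laplace(0, 1); envelope M = sqrt(2e/pi)
target = lambda x: -0.5 * x * x - 0.5 * math.log(2 * math.pi)
prop_pdf = lambda x: -abs(x) - math.log(2)
def prop_draw(rng):
    u = rng.random() - 0.5                  # inverse-CDF Laplace draw
    return -math.copysign(math.log(1 - 2 * abs(u)), u)
log_M = 0.5 * math.log(2 * math.e / math.pi)

samples = rejection_sample(target, prop_draw, prop_pdf, log_M, 5000)
print(sum(samples) / len(samples))  # sample mean, near 0
```

ARS replaces the fixed Laplace envelope with piecewise-exponential proposals that tighten as nodes are added; CARS, per the abstract, caps the number of nodes so that drawing from the proposal stays O(1).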
[4068] vixra:1509.0047 [pdf]
Improved Numerical Analysis of Thermal WIMP model of Dark Matter
This report contains a short review of the thermal WIMP model of Dark matter, followed by an improved numerical analysis that takes into account the evolution of effective degrees of freedom for Standard Model particles to solve the evolution equation. The result dictates that for low-mass WIMPs, the cross section required to match the current Dark Matter relic density is significantly different from the oft-quoted value.
[4069] vixra:1509.0027 [pdf]
The Mixing Matrices and the Cubic Equation
This article begins by defining two sets of related constants: The first set is used to define a nonstandard cubic equation. The second set is used to define equations that constrain a pair of rotation matrices. Curiously, these sets each contain an angle, each defined much like the other, where these angles separately play a central role in identifying solutions to their respective, very different equations. In this way these similarly-defined angles provide good evidence of a non-coincidental relationship between the nonstandard cubic equation and the pair of matrices. Moreover, a special case of this nonstandard cubic neatly combines with these matrices to produce values that map over to multiple physical constants (the fine structure constant reciprocal, the Weinberg angle, and the quark and lepton mixing angles), providing further evidence of a non-coincidental relationship.
[4070] vixra:1509.0025 [pdf]
An Application According to Spatial Quaternionic Smarandache Curve
In this paper, we find the Darboux vector of the spatial quaternionic curve according to the Frenet frame. Then the curvature and torsion of the spatial quaternionic Smarandache curve formed by the unit Darboux vector with the normal vector are calculated. Finally, these values are expressed in terms of the spatial quaternionic curve.
[4071] vixra:1509.0019 [pdf]
Extended Banach G Flow Spaces on Differential Equations with Applications
The main purpose of this paper is to extend Banach spaces on topological graphs with operator actions and show that all of these extensions are also Banach spaces, with a bijection between linear continuous functionals and elements, which enables one to solve linear functional equations in such extended spaces; particularly, to solve algebraic, differential or integral equations on a topological graph and find multi-space solutions of equations, for instance, Einstein's gravitational equations.
[4072] vixra:1509.0014 [pdf]
Parameters for Minimal Unsatisfiability: Smarandache Primitive Numbers and Full Clauses
We establish a new bridge between propositional logic and elementary number theory. A full clause contains all variables, and we study them in minimally unsatisfiable clause-sets (MU); such clauses are strong structural anchors, when combined with other restrictions.
[4073] vixra:1508.0431 [pdf]
Cosmic Expansion As a Virtual Problem
The fact that the red shift of galaxies is observed to increase as the galaxies approach the cosmic horizon is taken as plain evidence that the universe is expanding at an accelerating rate, hopefully perpetually. That implication does not, however, take into account the gravitational effects of the cosmic horizon on the universe; the cosmic horizon is the center of mass of our universe, and taking this into account, a very different picture of the state of our universe results: based on [1], it is shown that the observed red shift already implies a huge deceleration of the universe. Whether the cosmos is currently before, at, or after the turn from expansion to collapse can only be answered by repeated red shift measurements over the next decades.
[4074] vixra:1508.0427 [pdf]
Surfaces Family With Common Smarandache Asymptotic Curve According To Bishop Frame In Euclidean Space
In this paper, we analyze the problem of constructing a family of surfaces from given special Smarandache curves in Euclidean 3-space. Using the Bishop frame of the curve in Euclidean 3-space, we express the family of surfaces as a linear combination of the components of this frame, and derive the necessary and sufficient conditions for the coefficients to satisfy both the asymptotic and isoparametric requirements. Finally, examples are given to show the family of surfaces with a common Smarandache asymptotic curve.
[4075] vixra:1508.0425 [pdf]
Fusing Uncertain Knowledge and Evidence for Maritime Situational Awareness Via Markov Logic Networks
The concepts of event and anomaly are important building blocks for developing a situational picture of the observed environment. We here relate these concepts to the JDL fusion model and demonstrate the power of Markov Logic Networks (MLNs) for encoding uncertain knowledge and compute inferences according to observed evidence.
[4076] vixra:1508.0422 [pdf]
Grid Occupancy Estimation for Environment Perception Based on Belief Functions and PCR6
In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster’s rule of combination. Grid map offers a useful representation of the perceived world for mobile robotics navigation.
[4077] vixra:1508.0421 [pdf]
Performance Evaluation of Fuzzy-Based Fusion Rules for Tracking Applications
The objective of this paper is to present and evaluate the performance of particular fusion rules based on fuzzy T-Conorm/T-Norm operators for two tracking applications: (1) tracking object type changes, supporting the process of object identification (e.g. fighter against cargo, friendly aircraft against hostile ones), which consequently is essential for improving the quality of generalized data association for target tracking; (2) alarm identification and prioritization in terms of degree of danger relating to a set of a priori defined, out-of-the-ordinary dangerous directions.
[4078] vixra:1508.0420 [pdf]
Dual-Complex Numbers and Their Holomorphic Functions
The purpose of this paper is to contribute to the development of a general theory of dual-complex numbers. We start by defining the notion of dual-complex numbers and their algebraic properties. In addition, we develop a simple mathematical method based on matrices, simplifying the manipulation of dual-complex numbers. Inspired by complex analysis, we generalize the concept of holomorphicity to dual-complex functions. Moreover, a general representation of holomorphic dual-complex functions is obtained. Finally, as concrete examples, some usual complex functions are generalized to the algebra of dual-complex numbers.
[4079] vixra:1508.0408 [pdf]
Correlation Measure for Neutrosophic Refined Sets and Its Application in Medical Diagnosis
In this paper, the correlation measure of neutrosophic refined (multi-) sets is proposed. The concept of this correlation measure of neutrosophic refined sets is an extension of the correlation measure of neutrosophic sets and intuitionistic fuzzy multi-sets. Finally, using the correlation measure of neutrosophic refined sets, applications to medical diagnosis and pattern recognition are presented.
[4080] vixra:1508.0399 [pdf]
Some Weighted Geometric Operators with SVTrN-Numbers and Their Application to Multi-Criteria Decision Making Problems
The single valued triangular neutrosophic number (SVTrN-number) is simply an ordinary number whose precise value is somewhat uncertain from a philosophical point of view, which is a generalization of triangular fuzzy numbers and triangular intuitionistic fuzzy numbers.
[4081] vixra:1508.0389 [pdf]
Comparative Study of Intuitionistic and Generalized Neutrosophic Soft Sets
The aim of this paper is to define several operations such as Intersection, Union, OR, AND operations of intuitionistic (resp. generalized) neutrosophic soft sets in the sense of Maji and compare these with intuitionistic (resp. generalized) neutrosophic soft sets in the sense of Said et al via examples.
[4082] vixra:1508.0388 [pdf]
Correlation Measure for Neutrosophic Refined Sets and Its Application in Medical Diagnosis
In this paper, the correlation measure of neutrosophic refined (multi-) sets is proposed. The concept of this correlation measure of neutrosophic refined sets is an extension of the correlation measure of neutrosophic sets and intuitionistic fuzzy multi-sets. Finally, using the correlation measure of neutrosophic refined sets, applications to medical diagnosis and pattern recognition are presented.
[4083] vixra:1508.0360 [pdf]
Introduction to Neutrosophic Nearrings
The objective of this paper is to introduce the concept of neutrosophic nearrings. The concept of a neutrosophic N-group of a neutrosophic nearring is introduced. We study neutrosophic subnearrings of neutrosophic nearrings and also neutrosophic N-subgroups of neutrosophic N-groups.
[4084] vixra:1508.0358 [pdf]
On Single Valued Neutrosophic Relations
Smarandache initiated neutrosophic sets (NSs) which can be used as a mathematical tool for dealing with indeterminate and inconsistent information. In order to apply NSs conveniently, single valued neutrosophic sets (SVNSs) were proposed by Wang et al.
[4085] vixra:1508.0351 [pdf]
Simplified Neutrosophic Linguistic Normalized Weighted Bonferroni Mean Operator and Its Application to Multi-Criteria Decision-Making Problems
The main purpose of this paper is to provide a method of multi-criteria decision-making that combines simplified neutrosophic linguistic sets and normalized Bonferroni mean operator to address the situations where the criterion values take the form of simplified neutrosophic linguistic numbers and the criterion weights are known.
[4086] vixra:1508.0330 [pdf]
Smarandache Curves and Spherical Indicatrices in the Galilean 3-Space
In the present paper, Smarandache curves for some special curves in the three-dimensional Galilean space G3 are investigated. Moreover, spherical indicatrices for the helix as well as the circular helix are introduced. Furthermore, some properties of these curves are given. Finally, in the light of this study, some related examples of these curves are provided.
[4087] vixra:1508.0329 [pdf]
Spinor Darboux Equations of Curves in Euclidean 3-Space
In this paper, the spinor formulation of Darboux frame on an oriented surface is given. Also, the relation between spinor formulation of Frenet frame and Darboux frame is obtained.
[4088] vixra:1508.0328 [pdf]
Smarandache Curves In Terms of Sabban Frame of Fixed Pole Curve
In this paper, we study the special Smarandache curve in terms of Sabban frame of Fixed Pole curve and we give some characterization of Smarandache curves. Besides, we illustrate examples of our results.
[4089] vixra:1508.0313 [pdf]
DSmH Evidential Network for Target Identification
This paper proposes a model of evidential network based on Hybrid Dezert-Smarandache theory (DSmH) to improve target identification with multiple sensors. In the classification simulation, we compare the results obtained at the Target Type node and the Foe-Ally node in the evidential network using Dempster-Shafer theory (DS) and using DSmH. The comparisons show that, when we use DSmH in the evidential network, we can assign more Basic Belief Assignment (BBA) mass to the focal element the target belongs to. Experiments confirm that the model of evidential network using DSmH is better than the one using DS.
[4090] vixra:1508.0310 [pdf]
United Dipole Field
The field of an electromagnetic (E) dipole has been examined from general relativistic (R) and quantum mechanical (Q) points of view, and an E=Q=R equivalence principle is presented, whereby the curvature of the electromagnetic streamlines of the field is taken to be evidence of the distortion of spacetime, and hence of the presence of a gravitational field surrounding the dipole. Using a quasi-refractive index function N, with the streamlines and equipotential surfaces as coordinates, a new dipole relativistic metric is described, replacing Schwarzschild's for a point mass. The same principle equates the curvature and other physical features of the field with fundamental quantum concepts such as the uncertainty principle, the probability distribution and the wave packet. The equations of the dipole field therefore yield the three fields emerging naturally one from the other and unified without resorting to any new dimensions. It is speculated whether this model can be extended to dipolar matter-antimatter pairs.
[4091] vixra:1508.0266 [pdf]
Assessing the Performance of Data Fusion Algorithms Using Human Response Models
There is ongoing interest in designing data fusion systems that make use of human opinions (i.e., "soft" data) alongside readings from various sensors that use mechanical, electromagnetic, optical, and acoustic transducers (i.e., "hard" data). One of the major challenges in the development of these hard/soft fusion systems is to determine accurately and flexibly the impact of human responses on the performance of the fusion operator.
[4092] vixra:1508.0258 [pdf]
Critical Review
In this paper we initiate the concept of neutrosophic codes, which are better codes than other types of codes. We first construct linear neutrosophic codes and give illustrative examples. This neutrosophic algebraic structure is richer for codes, and we also establish the containment of the corresponding code in the neutrosophic code. We further find new types of codes, namely pseudo neutrosophic codes and strong (or pure) neutrosophic codes, which we illustrate in a simple way with the help of examples. We establish the basic results for neutrosophic codes. Finally, we develop decoding procedures for neutrosophic codes.
[4093] vixra:1508.0252 [pdf]
Fuzzy Abel Grassmann Groupoids, Second Updated and Enlarged Version
Usually the models of real-world problems in almost all disciplines, like engineering, medical sciences, mathematics, physics, computer science, management sciences, operations research and artificial intelligence, are full of complexities and involve several types of uncertainty when dealing with them on various occasions.
[4094] vixra:1508.0238 [pdf]
Neutrosophic Code
The idea of neutrosophic codes came to my mind while I was reading the literature about linear codes, where data transmission occurs between a sender and a receiver. Suppose they want to send 11 and 00 as codewords, with 11 standing for true and 00 for false. When the sender sends these two codewords and an error occurs, the receiver receives 01 or 10 instead of 11 and 00. This observation gives way to neutrosophic codes, and thus we introduce neutrosophic codes over finite fields in this paper.
[4095] vixra:1508.0206 [pdf]
Explicit Matrix Representation for the Hamiltonian of the One Dimensional Spin $1/2$ Ising Model in Mutually Orthogonal External Magnetic Fields
We give an explicit matrix representation for the Hamiltonian of the Ising model in mutually orthogonal external magnetic fields, using as basis the eigenstates of a system of non-interacting spin~$1/2$ particles in external magnetic fields. We subsequently apply our results to obtain an analytical expression for the ground state energy per spin, to the fourth order in the exchange integral, for the Ising model in perpendicular external fields.
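The kind of explicit matrix representation described can be sketched numerically. The following assumes the common Pauli-matrix convention (spin operators without the 1/2 factor) for a chain in longitudinal field B_z and transverse field B_x; it is an illustrative construction, not the paper's basis or perturbative expansion:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

def op_at(op, site, n):
    """Embed a single-site operator at `site` in the n-spin product basis."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_hamiltonian(n, J, bx, bz, periodic=False):
    """H = -J sum_i sz_i sz_{i+1} - bx sum_i sx_i - bz sum_i sz_i."""
    dim = 2**n
    H = np.zeros((dim, dim))
    bonds = n if periodic else n - 1
    for i in range(bonds):
        H -= J * op_at(sz, i, n) @ op_at(sz, (i + 1) % n, n)
    for i in range(n):
        H -= bx * op_at(sx, i, n) + bz * op_at(sz, i, n)
    return H

# non-interacting check (J = 0): each spin contributes -sqrt(bx^2 + bz^2)
H = ising_hamiltonian(4, J=0.0, bx=3.0, bz=4.0)
print(np.linalg.eigvalsh(H)[0])  # -> -20.0 (= -4 * 5)
```

In the J = 0 limit the matrix decouples into independent spins, so the exact ground-state energy per spin is -sqrt(bx^2 + bz^2), a convenient sanity check before studying the interacting expansion.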
[4096] vixra:1508.0204 [pdf]
New Finite and Infinite Summation Identities Involving the Generalized Harmonic Numbers
We state and prove a general summation identity. The identity is then applied to derive various summation formulas involving the generalized harmonic numbers and related quantities. Interesting results, mostly new, are obtained for both finite and infinite sums. The high points of this paper are perhaps the discovery of several previously unknown infinite summation results involving {\em non-linear} generalized harmonic number terms and the derivation of interesting alternating summation formulas involving these numbers.
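As a flavour of identities in this family, the classical finite identity sum_{k=1}^n H_k = (n+1) H_n - n (a well-known result, not one of the paper's new formulas) can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

def gen_harmonic(n, m=1):
    """Generalized harmonic number H_n^(m) = sum_{k=1}^n 1/k^m."""
    return sum(Fraction(1, k**m) for k in range(1, n + 1))

# classical finite identity: sum_{k=1}^n H_k = (n+1) H_n - n
n = 25
lhs = sum(gen_harmonic(k) for k in range(1, n + 1))
rhs = (n + 1) * gen_harmonic(n) - n
print(lhs == rhs)  # -> True (exact rational arithmetic, no rounding)
```

Using Fraction instead of floats makes the verification a genuine equality check rather than an approximate one.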
[4097] vixra:1508.0199 [pdf]
WEP and SR on a Global Free Fall Grid Used to Derive Gravitational Relativistic Corrections to GPS/GNSS Clocks on the Level of the Training of Standard GPS/GNSS Engineers
Using frequency-gauged clocks on a free fall grid, we look at gravitational phenomena as they appear for observers on a stationary grid in a central field of gravity. With an approach based on Special Relativity, the Weak Equivalence Principle and Newton's gravitational potential, we derive first-order correct expressions for the gravitational red shift of stationary clocks and of satellites. We also derive a second-order correction of a satellite's clock frequency, related to the geodetic precession. In the derivation of the apparent velocity of light in a field of gravity, a Lorentz symmetry breaking occurred. The derived change in the radial velocity of light is at the basis of the Shapiro delay and the gravitational index of refraction, phenomena connected to the curvature of the metric. The advantage of the free fall grid SR-WEP approach is that it is less advanced and thus far less complicated than the GR approach, but still accurate enough for all GPS purposes for the next few decades. Also important: our approach is never in conflict with GR, because we do not introduce axioms additional to the ones already in use in GR; we only use fewer axioms. For GPS engineers, our approach will give a deeper insight into problems concerning clock synchronization in a grid around the Earth, without the complex mathematics needed in GR. The free fall grid SR-WEP approach can be taught to GPS engineers in an achievable and economically affordable time. It will considerably reduce the communication gap between those engineers and the GR experts.
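The first-order corrections discussed here can be reproduced with textbook numbers. This sketch (not from the paper) combines the gravitational potential difference and the orbital time dilation for a GPS clock; it ignores Earth's rotation, orbit eccentricity and geoid details, and the orbital radius is an approximate standard value:

```python
# Net daily clock offset of a GPS satellite relative to a ground clock.
GM = 3.986004418e14      # Earth's GM, m^3/s^2
c  = 299792458.0         # speed of light, m/s
R_earth = 6.371e6        # mean Earth radius, m
r_orbit = 2.6561e7       # GPS semi-major axis, m (~20,200 km altitude)

v_orbit = (GM / r_orbit) ** 0.5                   # circular orbital speed
grav = GM * (1 / R_earth - 1 / r_orbit) / c**2    # gravitational blueshift
sr   = -v_orbit**2 / (2 * c**2)                   # velocity time dilation
rate = grav + sr                                  # fractional rate difference
print(rate * 86400 * 1e6)  # microseconds gained per day, ~38
```

The two terms pull in opposite directions: the satellite clock gains from sitting higher in the potential well and loses from its orbital speed, leaving the familiar net offset of roughly 38 microseconds per day that GPS must correct for.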
[4098] vixra:1508.0180 [pdf]
From Elementary Particles to Early Universe in the Ultra Relativistic Limits
Using a phenomenological model in the ultra-relativistic limit, we suggest that elementary particles, including photons, transform into micro black holes subject to the following critical conditions: (i) when the de Broglie wavelength of an elementary particle becomes equal to its Schwarzschild radius, its energy reaches an upper limit (Em) given by the relation Em = hc^3/2Gm, where m is the rest mass of the elementary particle; (ii) particle black holes will have a mass equal to the limiting relativistic mass of the elementary particle and a Schwarzschild radius equal to the Compton wavelength of the particle. Lorentz invariance of the Compton wavelength of elementary particles at trans-Planckian scales is suggested by this result. Photon black holes are found to be similar to massive elementary particles, resembling the Planck particles discussed in cosmology. We find that the known physical properties of elementary particles may be a window to the early universe, since they provide clues about the density distribution and nature of primordial black holes formed during the post-Planck era after the Big Bang.
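Condition (ii) follows algebraically from (i): at E_max = hc^3/(2Gm) the relativistic mass is M = E_max/c^2, and its Schwarzschild radius 2GM/c^2 reduces to h/(mc), the Compton wavelength. A quick numeric check for the electron (CODATA constants; a sketch, not from the paper):

```python
h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
m_e = 9.1093837015e-31   # electron rest mass, kg

E_max = h * c**3 / (2 * G * m_e)      # limiting energy from condition (i)
M_rel = E_max / c**2                  # limiting relativistic mass
r_s = 2 * G * M_rel / c**2            # Schwarzschild radius of M_rel
lam_compton = h / (m_e * c)           # Compton wavelength of the electron
print(E_max, r_s / lam_compton)       # ratio is exactly 1 by construction
```

For the electron E_max comes out near 1.5e32 J, far above the Planck energy, consistent with the electron mass lying far below the Planck mass.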
[4099] vixra:1508.0170 [pdf]
On the Origin of the Constants C and H
It is argued that the speed of light $c$ and Planck's quantum $h$ are anomalies that undermine the basis of Newtonian physics, the existence of space and time. In a Kuhnian sense, $c$ and $h$ were unpredicted parameters, extraneous to Newton's physics. Relativity and quantum mechanics, despite their obvious success, can be seen as modifications of Newtonian physics that hid the possibility that space and time are inappropriate concepts for describing reality. Rather than being fundamental, space and time might just be the most suitable frames for human perception. $c$ indicates a failure of the Newtonian space-time paradigm on large scales, while $h$ indicates a failure on small scales. At the same time, $c$ and $h$ are related to light and matter, two phenomenologies Newtonian physics cannot explain as such. There is no a priori reason why reality should present itself in this particular fashion, and there is no reason for the existence of 3+1 dimensions either. It is further suggested that reality might be truly three-dimensional, the fourth dimension being an illusion created by navigating through a sequence of tangent spaces of a three-dimensional manifold. All physical laws would then be encoded in a connection on this manifold. The most simple three-dimensional manifold, endowed with unique properties, is $S^3$. From the point of view of natural philosophy, there must be a reason for the existence of constants of nature. If $S^3$ is indeed a description of reality, then it should provide a reason for the existence of $c$ and $h$. It is suggested that $c$ is related to the fact that $S^3$ has a tangent space and $h$ is related to the noncommutativity of $SU(2)$, the group acting on $S^3$.
[4100] vixra:1508.0168 [pdf]
Oscillations and Superluminosity of Neutrinos
Two conflicting theoretical structures, one using the non-relativistic Schroedinger equation and the other using relativistic energy, are used simultaneously to derive the expression, for example for $P_{\nu_\mu \rightarrow \nu_\tau}(t)$, used to study neutrino oscillations. This has been confirmed experimentally. Here we try to resolve the above theoretical inconsistency. We show that this can be done in a single consistent theoretical framework, which demands that the neutrinos be superluminal. We therefore predict that in neutrino appearance experiments (for example $P_{\nu_\mu \rightarrow \nu_\tau}(t)$) the neutrinos shall be seen to travel with velocities faster than that of light. Experimentalists are urged to try to confirm this prediction.
[4101] vixra:1508.0163 [pdf]
Relation Between the Newton Principle of Action and Reaction and Gravitational Waves, Together with the Heisenberg Indeterminacy Principle, as a Possible Key to an Explanation of the Quantum Nature of Our World
Initially it is proposed to interpret the fact that the delay of gravitational interaction in a binary system causes the emanation of gravitational waves as a generalization of the Newtonian principle of action and reaction. Then the impact of such a concept is shown for a situation with mutually and simultaneously acting electromagnetic and inertial forces, as well as consequences for a possible validity of the Mach principle. Further, the same phenomenon of retardation of physical interactions is applied to the mechanistic Bohr model of the hydrogen atom together with the indeterminacy principle, which results in a realistically adequate quantum description of the atom involving the pertinent de Broglie wave. Finally, the physical conclusions made before are discussed from a philosophical point of view; a new `para-deterministic' concept is presented as an alternative to both the deterministic and holistic views of our world.
[4102] vixra:1508.0150 [pdf]
An Elementary Demonstration of the Beal Conjecture
In 1997, Andrew Beal \cite{B1} announced the following conjecture: \textit{Let $A, B, C, m, n$, and $l$ be positive integers with $m, n, l > 2$. If $A^m + B^n = C^l$ then $A, B,$ and $C$ have a common factor.} We begin by constructing the polynomial $P(x)=(x-A^m)(x-B^n)(x+C^l)=x^3-px+q$, with $p,q$ integers depending on $A^m, B^n$ and $C^l$. We solve $x^3-px+q=0$ and obtain the three roots $x_1,x_2,x_3$ as functions of $p,q$ and a parameter $\theta$. Since $A^m, B^n, -C^l$ are the only roots of $x^3-px+q=0$, we discuss the conditions under which $x_1,x_2,x_3$ are integers. Numerical examples are presented.
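The polynomial construction described in the abstract can be checked numerically (a sketch; the example solution $3^3 + 6^3 = 3^5$ is my choice and, as the conjecture requires, has common factor 3). Because the roots $A^m$, $B^n$, $-C^l$ sum to zero when $A^m + B^n = C^l$, the $x^2$ term vanishes and the cubic is depressed:

```python
# Construct P(x) = (x - A^m)(x - B^n)(x + C^l) for a solution of
# A^m + B^n = C^l; the roots sum to zero, so P(x) = x^3 - p*x + q.
A, m = 3, 3
B, n = 6, 3
C, l = 3, 5
assert A**m + B**n == C**l   # 27 + 216 = 243

r1, r2, r3 = A**m, B**n, -C**l
p = -(r1*r2 + r1*r3 + r2*r3)   # from Vieta's formulas
q = -r1 * r2 * r3

P = lambda x: x**3 - p*x + q
print(p, q, P(r1), P(r2), P(r3))   # the three roots annihilate P
```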
[4103] vixra:1508.0149 [pdf]
HFVS: Arbitrary High Order Flux Vector Splitting Method
In this paper, a new scheme of arbitrarily high-order accuracy in both space and time is proposed to solve hyperbolic conservation laws. Based on the idea of the flux vector splitting (FVS) scheme, we split all the space and time derivatives in the Taylor expansion of the numerical flux into two parts: one part with positive eigenvalues and another part with negative eigenvalues. Following a Lax-Wendroff procedure, all the time derivatives are then replaced by space derivatives, and the space derivatives are calculated by WENO reconstruction polynomials. One of the most important advantages of this new scheme is that it is easy to implement. In addition, it should be pointed out that the procedure of calculating the space and time derivatives in the numerical flux can be used as a building block to extend current first-order schemes to very high-order accuracy in both space and time. Numerous numerical tests on linear and nonlinear hyperbolic conservation laws demonstrate that the new scheme is robust and achieves high-order accuracy in both space and time.
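The basic FVS idea, splitting the flux by eigenvalue sign and upwinding each part, can be illustrated on first-order linear advection (a minimal sketch, not the paper's high-order WENO scheme; the grid, wave speed, and initial pulse are my choices):

```python
import numpy as np

# First-order flux vector splitting for u_t + a*u_x = 0:
# split f(u) = a*u into f+ = max(a,0)*u and f- = min(a,0)*u,
# then upwind each part separately.
a = 1.0
N, L, cfl = 200, 1.0, 0.5
dx = L / N
dt = cfl * dx / abs(a)
x = (np.arange(N) + 0.5) * dx
u = np.exp(-200 * (x - 0.25)**2)   # initial Gaussian pulse

ap, am = max(a, 0.0), min(a, 0.0)
for _ in range(int(0.5 / dt)):     # advance to t = 0.5
    fp = ap * u                    # right-going flux part
    fm = am * u                    # left-going flux part
    dfp = fp - np.roll(fp, 1)      # backward difference for f+ (periodic)
    dfm = np.roll(fm, -1) - fm     # forward difference for f-
    u = u - dt / dx * (dfp + dfm)

print("peak now near x =", x[np.argmax(u)])
```

The pulse advects from x = 0.25 to about x = 0.75, with the diffusion expected of a first-order scheme; the paper's contribution is raising this building block to arbitrary order via the Lax-Wendroff/WENO procedure.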
[4104] vixra:1508.0142 [pdf]
Issues in the Multiple Try Metropolis Mixing
The multiple try Metropolis (MTM) algorithm is an advanced MCMC technique based on drawing and testing several candidates at each iteration of the algorithm. One of them is selected according to certain weights and then tested according to a suitable acceptance probability. Since the computational cost increases as the number of tries grows, one expects the performance of an MTM scheme to improve correspondingly as the number of tries increases. However, there are scenarios where increasing the number of tries does not produce a corresponding enhancement of performance. In this work, we describe these scenarios and then introduce possible solutions for resolving these issues.
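A minimal sketch of the MTM iteration itself (with a symmetric Gaussian proposal and weights w(y) = pi(y); the standard-normal target and all tuning values are my choices, not the paper's):

```python
import random, math

# Multiple-try Metropolis: draw k trials, select one by weight,
# then balance against a reference set drawn around the selection.
def target(x):
    return math.exp(-0.5 * x * x)   # unnormalized N(0,1) density

def mtm_step(x, k=5, scale=1.0, rng=random):
    ys = [x + rng.gauss(0, scale) for _ in range(k)]   # k candidates
    wy = [target(y) for y in ys]
    y = rng.choices(ys, weights=wy)[0]                 # weighted pick
    xs = [y + rng.gauss(0, scale) for _ in range(k - 1)] + [x]
    wx = [target(z) for z in xs]                       # reference set
    if rng.random() < min(1.0, sum(wy) / sum(wx)):     # MTM acceptance
        return y
    return x

rng = random.Random(0)
x, chain = 0.0, []
for _ in range(20000):
    x = mtm_step(x, rng=rng)
    chain.append(x)

mean = sum(chain) / len(chain)
var = sum((c - mean)**2 for c in chain) / len(chain)
print(f"mean={mean:.3f}, std={var**0.5:.3f}")   # close to N(0,1)
```

The sampler targets N(0,1); the issue the abstract raises is that pushing k higher here buys diminishing returns relative to the extra target evaluations per step.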
[4105] vixra:1508.0135 [pdf]
Testing Electrodynamics and Verification of the Results of Michelson and Morley by Laser Beam Aberration Measurement
The International Year of Light 2015 is a welcome opportunity to look back at observations and famous experiments around the phenomenon of light that led to revolutionary theories in physics. The observed aberration of starlight, and later the experiment of Michelson and Morley, were in contradiction with the electrodynamic theory of light until the special theory of relativity resolved the conflict by proposing time and space effects in moving systems. Today, laser and CCD technologies enable much more precise measurements than in former times. The most accurate test of electrodynamics is achievable by aberration measurement of a laser beam, because it offers an effect of first order in v/c. The results of the experiment verify the findings of Michelson and Morley but are, surprisingly, in contradiction with electrodynamics and with special relativity theory. Our picture of the properties of light is still imperfect.
[4106] vixra:1508.0131 [pdf]
Quantum Gravity Experiments
A new quantum gravity experiment is reported with the data confirming the generalisation of the Schr\"{o}dinger equation to include the interaction of the wave function with dynamical space. Dynamical space turbulence, via this interaction process, raises and lowers the energy of the electron wave function, which is detected by observing consequent variations in the electron quantum barrier tunnelling rate in reverse-biased Zener diodes. This process has previously been reported and enabled the measurement of the speed of the dynamical space flow, which is consistent with numerous other detection experiments. The interaction process is dependent on the angle between the dynamical space flow velocity and the direction of the electron flow in the diode, and this dependence is experimentally demonstrated. This interaction process explains gravity as an emergent quantum process, so unifying quantum phenomena and gravity. Gravitational Waves are easily detected.
[4107] vixra:1508.0110 [pdf]
Estimating the PML Risk on Natalizumab: a Simple Approach
In this short note, we show how to quickly verify the correctness of the estimates of the PML risk on natalizumab established in [Borchardt 2015]. Our approach is simple and elementary in that it requires virtually no knowledge of either statistics or probability theory. A Kaplan-Meier curve of the PML incidence may be found in [O'Connor et al 2014], based on postmarketing data as of early August 2013, and using just the information from that chart it is possible to directly derive estimates of the risk of PML in JCV-seropositive natalizumab-treated patients according to prior or no prior immunosuppression. The resulting figures are almost identical to the ones in [Borchardt 2015], even though the latter were obtained in a very different fashion.
[4108] vixra:1508.0101 [pdf]
Controlling Planetary Movements: Displacement of Earth to Prevent the Extinction of Life Due to Increments of Solar Irradiance
Via a decrement of the mass of planets, we can send entire planets to far space orbital allocations. We can convert physical matter into energy, which can either be irradiated to outer space, transmitted back into Earth's potential energy, used as a self-propellant, or used in a complex model of these systems; this decreases the mass of the planet. By conversion of matter to energy, Earth will lose some mass, which decreases Earth's gravitational field; the formulas of the current research are deduced to control the movements of the planets. Celestial bodies, like any other mechanical systems, follow the physical laws of mechanics and dynamical systems. So when celestial object "A" exerts a force "F" on celestial object "B", celestial object "B" exerts an interactive force "F" on celestial object "A" as well. All celestial objects exert gravitational influences on each other. Scientists expect that, at some point, the Sun will be much hotter than it is today; this high temperature would lead to the extinction of all life on Earth. When the gravitational force changes, a space particle may either depart from the other particle or come closer to it.
[4109] vixra:1508.0099 [pdf]
Encrypted Transmission of a PGP Public Key to Destinations
To protect your private information, you may use a data encryption and decryption computer program like PGP. But for an espionage agency, even a PGP public key is not completely unbreakable. So you may prefer to encipher the public key before you send it to the destination; it would then probably become an impossible goal for Internet fraud operatives to decipher the contents.
[4110] vixra:1508.0098 [pdf]
Generalized Bohr’s Principle of Complementarity.
We show that Bohr's complementarity principle can be generalized to all phenomena of reality. The generalized Bohr complementarity principle can be formulated as follows: the rational side of reality and the conjugate irrational side of reality are complementary to each other. This raises the question of the relation between science and mysticism.
[4111] vixra:1508.0090 [pdf]
Exposure of Charged Particle Beam on the Brain of the Humans Leads to a Painless Death
With the progress of technology, following advancements in directed-energy systems and magnetic resonance, and building on an introductory study of the potential application of a charged particle beam to the human brain, we investigate the possibility of preventing the extreme pain that a volunteer patient may suffer before death, treating death as a state that relieves the pain itself. Exposure of the human brain to a charged particle beam can potentially lead to a painless death. The basal ganglia, S1 and BA3, S2, BA46, BA10, BA9, BA5, the pretectal area, the hippocampus (and the other parts of the limbic system), and the thalamus are the most important locations in the human brain; targeting them destroys the home of our personality and the control/attention center of the brain. Via NDE (near-death experience), this finishes the life process of a patient suffering intolerable pain, but the patient will never sense the pain. In conclusion, distinct parts of the cerebrum and the thalamus are active locations of self-personality and attention. For elderly patients with extreme pain from untreatable illnesses, this may be considered voluntarily as a final decision by the patient and the patient's relatives. This experiment thus has several potential applications; more research must be conducted for an in-depth understanding of these neural networks.
[4112] vixra:1508.0089 [pdf]
A Probabilistic Proof of the Existence of Extraterrestrial Life
Until the current moment, mankind has not realized that there is a diverse population of intelligent civilizations living in our universe. In the current article we deduce the occurrence/existence of extraterrestrial life by mathematical proof. I show that even inside our galaxy, the Milky Way, a sufficient number of alien creatures are living. The first section includes an algebraic probabilistic proof for the case when the event of life is not highly biased, and the second section includes a proof by contradiction that describes the event fundamentally. It is a mathematical proof for the extraterrestrial life debate, for the first time in mankind's history.
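A toy version of the underlying probabilistic point can be sketched (all numbers here are hypothetical placeholders, not the paper's): if each candidate system hosts life independently with some small probability, the chance that at least one does grows rapidly with the number of systems.

```python
# P(at least one occurrence) = 1 - (1 - p)^N for N independent trials.
p = 1e-9              # hypothetical per-system probability of life
N = 2 * 10**11        # rough star count of the Milky Way
p_any = 1 - (1 - p)**N
print(f"P(at least one) = {p_any:.6f}")
```

Even with a tiny per-system probability, the compounded probability over hundreds of billions of stars is effectively 1 under this independence assumption.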
[4113] vixra:1508.0086 [pdf]
G8,2 Geometric Algebra, DCGA
This paper introduces the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA), based on the G8,2 Clifford geometric algebra. DCGA is an extension of CGA and has entities representing points and general Darboux cyclide surfaces in Euclidean 3D space. The general Darboux cyclide is a quartic surface. Darboux cyclides include circular tori and all quadrics, and also all surfaces formed by their inversions in spheres. Dupin cyclide surfaces can be formed as inversions in spheres of circular toroid, cylinder, and cone surfaces. Parabolic cyclides are cubic surfaces formed by inversion spheres centered on other surfaces. All DCGA entities can be conformally transformed in 3D space by rotors, dilators, translators, and motors, which are all types of versors. All entities can be inversed in general spheres and reflected in general planes. Entities representing the intersections of surfaces can be created by wedge products. All entities can be intersected with spheres, planes, lines, and circles. DCGA provides a higher-level algebra for working with 3D geometry in an object/entity-oriented system of mathematics above the level of the underlying implicit surface equations of algebraic geometry. DCGA could be used in the study of geometry in 3D, and also for some applications.
[4114] vixra:1508.0078 [pdf]
The Pentaquark and the Pauli Exclusion Principle
A subtle but very real difference in how the Pauli exclusion principle applies to baryons in three-quark systems and to those in multiquark systems is presented. This distinction creates no important physical manifestations for structures with light quarks, as in the case of the SU(3)-flavour group with (u,d,s)-quarks. However, it does produce significant effects for multiquark systems containing one or more heavy quarks like the c- and b-quarks. In fact, these consequences permit us to comprehend the structure of the two pentaquark states at 4.38 and 4.45 GeV, which were discovered recently by the LHCb Collaboration at CERN. This model makes a unique prediction of the existence of two similar new pentaquarks with structure (uudb$\bar{b}$) and with similar spin assignments as above.
[4115] vixra:1508.0070 [pdf]
Hip2wrl: a Java Program to Represent a Hipparcos Star Collection as a VRML97 File.
A Java program is presented which extracts star positions from the Hipparcos main catalogue and places them into a sphere collection rendered in a VRML97 file. The main options to the executable are a cut-off distance to some center of the scene (the sun by default) and a density of labeling some or all of the spheres with common names, Henry Draper numbers or Hipparcos ID's.
[4116] vixra:1508.0053 [pdf]
Model-Based Analysis of Hypothalamus Controlled Fever: the Non-Equilibrium Thermodynamic Aspect
We focus on the symptom of hypothalamus-controlled fever, which is in fact a problem of a non-equilibrium system. Since the living human body has a constant temperature, whose dissipation is easy to determine by observation, it is a suitable candidate non-equilibrium system to study. In our paper, the human body is regarded as a two-compartment system: one compartment is the chemical reaction network; the other is observed through mechanical motion, meaning the vital signs apart from body temperature. A Van der Pol model is used to describe the overall effect of the chemical reaction network in the human body. When the parameters of the mathematical model are set to guarantee that it is in a limit-cycle oscillation state, the energy absorption and release are computed. With the help of the body temperature, which can be observed, the energy metabolism of the overall effect of the chemical reaction network is determined. We have worked out how the mathematical model responds under both healthy and fever conditions; this response is just the overall effect of the chemical reaction network. This research may be capable of answering the question of whether fever is a kind of illness or a response of the body to maintain its life. From our study, hypothalamus-controlled fever is beneficial to maintaining life.
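As a toy illustration of the limit-cycle regime invoked above (a sketch; the parameter mu = 1 and the integration settings are my choices, not the paper's fitted model):

```python
import numpy as np

# Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0, integrated with
# RK4; for mu > 0 trajectories settle onto a limit cycle of amplitude ~2.
def f(state, mu=1.0):
    x, v = state
    return np.array([v, mu * (1 - x * x) * v - x])

dt, state = 0.01, np.array([0.1, 0.0])   # start near the unstable origin
traj = []
for _ in range(10000):                   # integrate to t = 100
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    state = state + dt / 6 * (k1 + 2*k2 + 2*k3 + k4)
    traj.append(state[0])

# amplitude over the tail of the trajectory, after transients decay
amp = max(abs(min(traj[-3000:])), abs(max(traj[-3000:])))
print(f"limit-cycle amplitude near {amp:.2f}")
```

The small initial perturbation is pulled onto the stable limit cycle, the self-sustained oscillation the abstract uses to model the overall chemical reaction network.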
[4117] vixra:1508.0048 [pdf]
The Analysis of Gianluca Perniciano Applied to the Natario Warp Drive Spacetime in Both the Original and Parallel $3+1$ $ADM$ Formalisms
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at present two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, the major drawback concerning warp drives is the huge amount of negative energy density needed to sustain the warp bubble. In order to perform an interstellar space travel to a "nearby" star $20$ light-years away in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is about $10^{24}$ times bigger in magnitude than the mass of the planet Earth! With the correct form of the shape function the Natario warp drive can overcome this obstacle, at least in theory. Other drawbacks that affect the warp drive geometry are the collisions with hazardous interstellar matter (asteroids, comets, interstellar dust, etc.) that will unavoidably occur when a ship travels at superluminal speeds, and the problem of the horizons (causally disconnected portions of spacetime). The geometrical features of the Natario warp drive are the ones required to overcome these obstacles, also at least in theory. Recently Gianluca Perniciano, a physicist from Italy, appeared with a very interesting idea for the Alcubierre warp drive spacetime: he introduced in the Alcubierre equations a coefficient which is $1$ inside and outside the warp bubble but possesses large values in the Alcubierre warped region, thereby effectively reducing the negative energy density requirements and making the warp drive more "affordable" even at $200$ times light speed. In this work we reproduce the Perniciano analysis for the Natario warp drive spacetime in both the original and parallel $ADM$ formalisms.
[4118] vixra:1508.0046 [pdf]
The Bianchi Identities in Weyl Space
As far as the writer is aware, the Bianchi identities associated with a Weyl space have never been presented. That space was discovered by the noted German mathematical physicist Hermann Weyl in 1918, and represented the geometry underlying a tantalizing theory that appeared to successfully unify the gravitational and electromagnetic fields. One of the theory's problems involved one form of the Bianchi identities, which in Riemannian space are used to derive the divergenceless Einstein tensor. Such a derivation is generally not applicable in a non-Riemannian geometry like Weyl's, in which the covariant derivative of the metric tensor is non-zero. But it turns out that such a derivation is not only possible but straightforward, with a result that hints at a fundamental relationship between Weyl's geometry and electromagnetism.
[4119] vixra:1508.0032 [pdf]
Energy Conservation in Monopole Theories
The paper discusses a monopole theory based on the assumption that electromagnetic fields of charges are identical to electromagnetic fields of monopoles. It proves that this theory violates energy conservation. This result is consistent with the absence of a regular Lagrangian density for this theory as well as with the systematic failure of experiments aiming to detect such monopoles. By contrast, a monopole theory that is derived from a regular Lagrangian density is free of these problems.
[4120] vixra:1508.0018 [pdf]
How to Construct Self/anti-Self Charge Conjugate States for Higher Spins? Significance of the Spin Bases, Mass Dimension, and All that
We construct self/anti-self charge conjugate (Majorana-like) states for the (1/2, 0)⊕(0, 1/2) representation of the Lorentz group, and their analogs for higher spins, within quantum field theory. The problems of the basis rotations and of the selection of phases in the Dirac-like and Majorana-like field operators are considered. The discrete symmetry properties (P, C, T) are studied. The corresponding dynamical equations are presented. In the (1/2, 0) ⊕ (0, 1/2) representation they obey the Dirac-like equation with eight components, which was first introduced by Markov. Thus, the Fock space for the corresponding quantum fields is doubled (as shown by Ziino). Particular attention has been paid to the questions of chirality and helicity (two concepts which are frequently confused in the literature) for Dirac and Majorana states, and to the normalization (“the mass dimension”). We further review several experimental consequences which follow from the previous works of M. Kirchbach et al. on neutrinoless double beta decay, and G.J. Ni et al. on meson lifetimes. The results are generalized for spins 1, 3/2 and 2.
[4121] vixra:1508.0015 [pdf]
Event-Based and LHV Simulation of an EPR-B Experiment: Epr-Simple and Epr-Clocked
In this note, I analyse the code and the data generated by M. Fodje's simulation programs (written in Python, published in 2013 on Github) epr-simple and epr-clocked using appropriately modified Bell-CHSH type inequalities: the Larsson detection-loophole adjusted CHSH, and the Larsson-Gill coincidence-loophole adjusted CHSH. The experimental efficiencies turn out to be approximately eta = 81% and gamma = 55% respectively, and the observed value of CHSH is (of course) well within the adjusted bounds. Fodje's detection-loophole model turns out to be very, very close to Pearle's famous 1970 model, so the efficiency is very close to optimal, but the model shares the same defect as Pearle's: the joint detection rates exhibit signalling. His coincidence-loophole model is actually a clever modification of his detection-loophole model, and the trick he uses is rather simple, but it does not lead to the optimal efficiency. Note: this is version 5 of a paper originally written in 2014. I recently submitted version 4 to the journal "Entropy", where it got rejected, rightly so. It has the status of "lab notes": a documentation of one or two experiments whose results are interesting but not worth publishing on their own. I will extract the few jewels in this work later and use them in a more ambitious paper about the results of the bigger research project of which these experiments were a small part.
[4122] vixra:1508.0005 [pdf]
Two Conjectures in Number Theory
In this note, I propose a conjecture generalizing the Lander, Parkin, and Selfridge conjecture, and a conjecture generalizing Beal's conjecture.
[4123] vixra:1507.0222 [pdf]
A Computational Violation of the CHSH with a Local Model
In this paper, the design and coding of a local hidden variables model is presented that violates the CHSH criterion with a value larger than $1+\sqrt{2}$.
[4124] vixra:1507.0192 [pdf]
A Kappa Deformed Clifford Algebra, Hopf Algebras and Quantum Gravity
Explicit deformations of the Lorentz (Conformal) algebra are performed by recurring to Clifford algebras. In particular, deformations of the boosts generators are possible which still retain the form of the Lorentz algebra. In this case there is an invariant value of the energy that is set to be equal to the Planck energy. A discussion of Clifford-Hopf $\kappa$-deformed quantum Poincare algebra follows. To finalize we provide further deformations of the Clifford geometric product based on Moyal star products associated with noncommutative spacetime coordinates.
[4125] vixra:1507.0169 [pdf]
Lectures on Affine, Hyperbolic and Quantum Algebras
These introductory lectures on affine and quantum algebras are motivated by the idea that triality and exceptional structures are crucial to gravity and color symmetry. We start with the ADE classification.
[4126] vixra:1507.0153 [pdf]
Inflation sans Singularity in "Standard" Transformed FLRW
The calculations of Oppenheimer and Snyder showed that quasi-Newtonian cycloidal metric and energy density singularities in the behavior of an initially stationary uniform dust ball in "comoving" coordinates fail to carry over to "standard" coordinates, where that contracting dust ball at no finite time attains a radius (quite) as small as its Schwarzschild radius. This physical behavior disparity reflects the singular nature of the "comoving" to "standard" transformation, whose cause is that "comoving time" requires the clocks of an infinite number of different observers, making that "time" inherently physically unobservable. Notwithstanding the warning implicit in the Oppenheimer-Snyder example, checking other "comoving" dust ball results by transforming them to physically reliable coordinates is seldom emulated. We here consider the analytically simplest case of a dust ball whose energy density always decreases; its "comoving" result has a well known singularity at a sufficiently early time. But after transformation to "standard" coordinates, that singularity no longer occurs at any finite time, nor is this expanding dust ball at any finite time (quite) as small as its Schwarzschild radius. But this dust ball's expansion rate peaks at a substantial fraction of the speed of light when its radius equals a few times the Schwarzchild value, and the "standard" time when this inflationary expansion peak occurs is roughly equal to the "comoving" time of the "occurrence" of the unphysical "comoving" singularity.
[4127] vixra:1507.0152 [pdf]
In God We Mind or Physical Considerations of Divine
It is possible to buy this paper; your money will not be spent on entertainment, and you will be rewarded in Heaven for spending your time and money to consume and promote (among your contacts and friends) the product of the crippled author. The points for God are called not proofs but ``arguments'', because they are illustrations of the divine. As an example: God exists because the word ``God'' means ``exists''; He has more right to exist than anyone else. Therefore the criticism against the arguments (main modern arguers: S. Hawking, R. Dawkins) is pointless. Dr. Marcelo Gleiser, in his article ``Hawking And God: An Intimate Relationship'', wrote: ``Maybe Hawking should leave God alone.'' The Universe could have been any universe, but it is the most complex in the face of humans. The probability of such a ``random'' event is zero. For sure, without God the complexity would be average, not the top one.
[4128] vixra:1507.0149 [pdf]
To the Quantum Theory of Gravity
We discuss the gravitational collapse of a photon. It is shown that when the photon reaches the Planck energy, it turns into a black hole (as a result of interaction with the object to be measured). It is shown that three-dimensional space is a consequence of the energy advantage in the formation of Planck black holes. New uncertainty relations are established on the basis of Einstein's equations. It is shown that the curvature of space-time is quantized.
[4129] vixra:1507.0145 [pdf]
Author Attribution in the Bitcoin Blocksize Debate on Reddit
The block size debate has been a contentious issue in the Bitcoin community on the social media platform Reddit. Many members of the community suspect there have been organized attempts to manipulate the debate, from people using multiple accounts to over-represent and misrepresent some sides of the debate. The following analysis uses techniques from authorship attribution and machine learning to determine whether comments from user accounts that are active in the debate are from the same author. The techniques used are able to recall over 90% of all instances of multiple account use and achieve up to 72% for the true positive rate.
[4130] vixra:1507.0122 [pdf]
Theory of Abel Grassmann's Groupoids
An AG-groupoid is an algebraic structure that lies between a groupoid and a commutative semigroup. It has many characteristics similar to those of a commutative semigroup. Consider the identity x^2y^2 = y^2x^2, which holds for all x, y in a commutative semigroup; one can easily see that it also holds in an AG-groupoid with left identity e and in AG**-groupoids. This simply shows how closely an AG-groupoid is connected with commutative algebras. We now extend, for the first time, the AG-groupoid to the neutrosophic AG-groupoid. A neutrosophic AG-groupoid is a neutrosophic algebraic structure that lies between a neutrosophic groupoid and a neutrosophic commutative semigroup.
[4131] vixra:1507.0110 [pdf]
Orthogonal Parallel MCMC Methods for Sampling and Optimization
Monte Carlo (MC) methods are widely used for Bayesian inference and optimization in statistics, signal processing and machine learning. A well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms. In order to foster better exploration of the state space, especially in high-dimensional applications, several schemes employing multiple parallel MCMC chains have recently been introduced. In this work, we describe a novel parallel interacting MCMC scheme, called {\it orthogonal MCMC} (O-MCMC), where a set of ``vertical'' parallel MCMC chains share information using some ``horizontal'' MCMC techniques working on the entire population of current states. More specifically, the vertical chains are led by random-walk proposals, whereas the horizontal MCMC techniques employ independent proposals, thus allowing an efficient combination of global exploration and local approximation. The interaction is contained in these horizontal iterations. Within the analysis of different implementations of O-MCMC, novel schemes for reducing the overall computational cost of parallel multiple try Metropolis (MTM) chains are also presented. Furthermore, a modified version of O-MCMC for optimization is provided by considering parallel simulated annealing (SA) algorithms. Numerical results show the advantages of the proposed sampling scheme in terms of efficiency in the estimation, as well as robustness in terms of independence with respect to initial values and the choice of the parameters.
[4132] vixra:1507.0109 [pdf]
Generalized Neutrino Equations by the Sakurai-Gersten Method
I discuss generalized spin-1/2 massless equations for neutrinos. They have been obtained by Gersten's method for derivation of arbitrary-spin equations. Possible physical consequences are discussed.
[4133] vixra:1507.0108 [pdf]
Some Mathematical Bases for Non-Commutative Field Theories
Misconceptions have recently been found in the definition of a partial derivative (in the case of the presence of both explicit and implicit dependencies of the function subjected to differentiation) in classical analysis. We investigate the possible influence of this discovery on quantum mechanics and classical/quantum field theory. Surprisingly, some commutators of the operators of the space-time 4-coordinates are not equal to zero. Thus, we provide the bases for a new-fashioned noncommutative field theory.
[4134] vixra:1507.0103 [pdf]
Time Perspective Bias Apparent Decreasing of Time Intervals, Over Large Scales (TPB)
TPB postulates that time is actually observed and measured with a perspective, analogous to 2D linear perspective in architecture. Accelerated expansion is therefore an illusion (of perspective). Photons travelling to an observer from remote past events will appear to arrive with successively decreased time intervals. However, the difference is minute and only significant over scales measured in light-years. Note: TPB does not contradict time dilation, GR, or expansion. In TPB, corrections of distorted time intervals are first calculated ($t'$). All classical and relativistic physics should follow subsequently.
[4135] vixra:1507.0101 [pdf]
Introducing Integral Geometry: Are Notational Flaws Responsible For The Inability To Combine General Relativity And Quantum Mechanics?
Parallel line segments are the basic graphical foundation for geometrical field theories such as General Relativity. Although the concept of parallel and curved lines has been well researched for over a century as a description of gravity, certain controversial issues have persisted, namely point singularities (Black Holes) and the physical interpretation of a scalar multiple of the metric, commonly known as a Cosmological Constant. We introduce a graphical and notational analysis system which we will refer to as Integral Geometry. Through variational analysis of perpendicular line segments we derive equations that ultimately result from the changes in the area bounded by them. Based upon the changing area bounded by relative and absolute line segments, we attempt to prove the following hypothesis: General Relativity cannot be derived from Integral Geometry. We submit that examination of the notational differences between GR and IG, in order to accept the hypothesis, could lead to evidence that the inability to merge General Relativity and Quantum Physics may be due to notational and conceptual flaws concerning area inherent in the equations describing them.
[4136] vixra:1507.0065 [pdf]
Gauge Invariance of Sedeonic Klein-Gordon Equation
We discuss the sedeonic Klein-Gordon wave equation based on sedeonic space-time operators and wave function. The generalization of the gauge invariance for a wider class of scalar-vector substitutions is demonstrated.
[4137] vixra:1507.0060 [pdf]
Galactic Rotational Velocities Explained by Relativistically Stable Orbits that Spiral Outward at Increasing Distance as Predicted by Explanation for Gravity
It is proposed that the strong force is the force of space. Development of this concept leads to the prediction that the mass of all matter increases as the universe expands. This mass (energy) increase is absorbed from space and leads to the force of gravity. The rate of mass increase necessary to bring about the known force of gravity is calculated. A relationship between matter mass increase and matter length increase is developed and then used to calculate the rate of increase of matter length. This same rate of increase in length applies to all other lengths and orbits including galactic orbital distance. This fractional expansion rate is determined to be Gm/c(r)^2 where m and r are the mass and radius of some smallest particle of definable dimensions. If the m and r of a proton are chosen, this equation predicts that gravitational orbits double in length approximately every 45-85 million years. Orbits within galaxies, with orbital periods of hundreds of millions of years, will therefore be outward spirals as measured by time zero length, though orbital distances will always be measured as unchanging. Higher orbital speeds are required to maintain these spiral orbits than the orbital speeds required to maintain circular orbits at the same orbital radii. Calculations within show that for a typical galaxy (M31) at typical galactic distances of about 15 - 30 kpc, the galactic orbits increase in radius, or can be considered to "accelerate" outward, at approximately the same acceleration rate as the gravitational acceleration rate required to hold stars in a circular orbit at the observed rotational velocity. These equivalent accelerations are near the MOND critical value of 10^-10 m/sec^2. This may explain the anomaly of galactic rotational velocities without dark matter.
The concepts proposed here require that the fundamental constants change with the expanding universe; however, if the principle of relativity (not the theory) is embraced, then this requires that these physical and fundamental constants are linked such that they appear to remain unchanged.
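As a hedged numeric check, one can read the quoted fractional expansion rate "Gm/c(r)^2" as G·m/(c·r²) per second and compute the orbit-doubling time for a proton. The constants below are standard textbook values chosen here for illustration, not taken from the paper; the two radii bracket common choices of proton radius.

```python
import math

# Reading the quoted rate "Gm/c(r)^2" as G*m / (c * r**2), per second.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_p = 1.6726e-27     # proton mass, kg
year = 3.156e7       # seconds per year

doublings = []
for r in (0.877e-15, 1.2e-15):       # two common proton-radius choices, m
    rate = G * m_p / (c * r**2)      # fractional expansion per second
    t_years = math.log(2) / rate / year
    doublings.append(t_years)
    print(f"r = {r:.3e} m  ->  doubling time ~ {t_years/1e6:.0f} Myr")
```

Under this reading, the doubling times come out near 45 and 85 million years, consistent with the 45-85 million year range quoted in the abstract.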
[4138] vixra:1507.0058 [pdf]
An Algorithm for Producing Benjamin Franklin's Magic Squares
An algorithm is presented that produces, in six steps, Benjamin Franklin's best known magic squares, one 8 x 8, and one 16 x 16. This same algorithm is then used to produce three related magic squares, dimensioned 4 x 4, 32 x 32, and 64 x 64.
[4139] vixra:1507.0042 [pdf]
Sakata Model of Hadrons Revisited. II. Nuclei and Scattering
This article continues our previous study in arXiv:1010.0458. Sakaton interaction potentials are re-optimized. Masses of mesons, baryons, light nuclei and hypernuclei are obtained in fair agreement with experiment. Total elastic scattering cross sections of protons, antiprotons, neutrons and lambda hyperons are also close to experimental data in a broad range of momenta, 0.1 - 1000 GeV/c. Our results suggest that the Sakata model could be a promising alternative to the quark model of hadrons.
[4140] vixra:1507.0041 [pdf]
Why One Can Maintain that there is a Probability Loophole in the CHSH.
In this paper it is demonstrated that the particular form of the CHSH, S=E{A(1)[B(1)-B(2)]-A(2)[B(1)+B(2)]}, with S maximally 2 and minimally -2 for A and B functions in {-1,1}, is not generally valid for local models. The nonzero probability that local hidden extra parameters violate the CHSH is not eliminated by basic principles derived from the CHSH.
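The bound referred to can be checked by enumerating the deterministic local strategies — a standard exercise, independent of the paper's probabilistic argument about hidden extra parameters:

```python
from itertools import product

# Enumerate deterministic local assignments A(1), A(2), B(1), B(2) in {-1, 1}
# and evaluate the quoted combination S = A1*[B1 - B2] - A2*[B1 + B2].
# Since either B1 - B2 or B1 + B2 vanishes, every assignment yields S = +/-2,
# so any mixture (expectation) satisfies |E[S]| <= 2.
S_values = {a1 * (b1 - b2) - a2 * (b1 + b2)
            for a1, a2, b1, b2 in product((-1, 1), repeat=4)}
print(sorted(S_values))   # [-2, 2]
```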
[4141] vixra:1507.0038 [pdf]
Unreduced Complex Dynamics of Real Computer and Control Systems
The unreduced dynamic complexity of modern computer, production, communication and control systems has become essential and cannot be efficiently simulated any more by traditional, basically regular models. We propose the universal concept of dynamic complexity and chaoticity of any real interaction process based on the unreduced solution of the many-body problem by the generalised effective potential method. We show then how the obtained mathematically exact novelties of system behaviour can be applied to the development of qualitatively new, complex-dynamical kind of computer and control systems.
[4142] vixra:1507.0035 [pdf]
Quantum Time is the Time it Takes for an Elementary Particle to Absorb a Quantum of Energy
This paper is a continuation of a series of papers on the universe as the surface volume of a four dimensional, expanding hyperverse. We argue that the whole universe is undergoing a geometric mean expansion, and is larger than the observable hyperverse by a factor of (R_H / 2 Planck lengths)^4, and its radius is larger by a factor of (R_H / 2 Planck lengths)^(4/3). The growth rate of the whole is actually accelerating, compared to the constant, 2c velocity we measure for the observable universe. We show that the ratio of the length of the small energy quantum (SEQ), to the small radius quantum (SRQ), values discussed at length in earlier hyperverse papers, is increasing at the same rate as the whole radius is increasing. We also show that, depending on the type of particle, the amount of time it takes for a particle to travel the distance of one SEQ length, approximately 10^-23 seconds, matches the time it takes for the particle to absorb one SEQ of energy. The quantum of time is the time it takes for an elementary particle to absorb one SEQ quantum of energy. The unit of quantum time is not a constant, but increases in duration at the same rate as the increase in the velocity of the whole hyperverse, canceling it, giving us the constant, 2c radial expansion rate. Significantly, our equation for the quantum of time, derived from the hyperverse model, using only the values of c, G, h-bar and the radius of the observable hyperverse, matches the quantized time interval calculated for the electron by Piero Caldirola, using classical electron theory. His 'chronon', and our quantum absorption time, are identical values. Equating the two quantum time equations produces the correct equation of electric charge, further supporting the validity of the hyperverse model and the unit of quantum time.
We continue by showing that the relation between particle mass and quantum absorption time is governed by the time-energy uncertainty relationship, allowing easy calculation of the quantum time values for all elementary particles, and supporting the concept of the geometric mean expansion of space.
[4143] vixra:1507.0028 [pdf]
Pathchecker: an RFID Application for Tracing Products in Supply-Chains
In this paper, we present an application of RFIDs for supply-chain management. In our application, we consider two types of readers. On one part, we have readers that will mark tags at given points. After that, these tags can be checked by another type of reader to tell whether a tag has followed the correct path in the chain. We formalize this notion and define adequate adversaries. Moreover, we derive requirements in order to meet security against counterfeiting, cloning and impersonation attacks.
[4144] vixra:1507.0023 [pdf]
A Standard Model at Planck Scale
To extend the standard model to Planck scale energies I propose a phenomenological model of quantum black holes and dark matter. I assume that inside any black hole there is a core object of length scale L_Planck. The core is proposed to replace the singularity of general relativity. A simple phenomenological schematic model is presented for the core as quantum fields of SO(10) grand unified theory. A survey is made of calculational models that could support or supplement the present scheme and of theoretical frameworks for future developments.
[4145] vixra:1507.0015 [pdf]
The Incompleteness of the Universe
With a simple but vivid argument I prove that the Universe has a beginning, will have an end, and is finite in volume. Tell others on Facebook, Twitter, etc., to make this world a better place to live and think.
[4146] vixra:1507.0005 [pdf]
The Ultimate Modification of Einstein's Gravity
All possible experimental misses of General Relativity are satisfied by a simple and logical modification of the Einstein equations: besides the cosmological constant, a Dark Matter tensor is added to the left-hand side. Hereby Dark Matter is not discovered as material matter, because it is phantom matter: just a modification of the geometry rules. Tell others on Facebook, Twitter, etc., to make this world a better place to live and think.
[4147] vixra:1506.0215 [pdf]
Quantizing Gauge Theory Gravity
The shared background independence of spacetime algebra and the impedance approach to quantization, coupled with the natural gauge invariance of phase shifts introduced by quantum impedances, opens the possibility that identifying the geometric objects of the impedance model with those of spacetime algebra will permit a more intuitive understanding of the equivalence of gauge theory gravity in flat space with general relativity in curved space.
[4148] vixra:1506.0213 [pdf]
The Psychology of The Two Envelope Problem
This article concerns the psychology of the paradoxical Two Envelope Problem. The goal is to find instructive variants of the envelope switching problem that are capable of clear-cut resolution, while still retaining paradoxical features. By relocating the original problem into different contexts involving commutes and playing cards the reader is presented with a succession of resolved paradoxes that reduce the confusion arising from the parent paradox. The goal is to reduce confusion by understanding how we sometimes misread mathematical statements; or, to completely avoid confusion, either by reforming language, or adopting an unambiguous notation for switching problems. This article also suggests that an illusion close in character to the figure/ground illusion hampers our understanding of switching problems in general and helps account for the intense confusion that switching problems sometimes generate.
[4149] vixra:1506.0212 [pdf]
The Standard Model for Everything
To complete the standard model I propose a phenomenological model of quantum black holes and dark matter. I assume that at the center of any black hole there is a Kerr (Schwarzschild) core object of size Planck length. The core replaces the general relativity singularity of the black hole. A simple phenomenological model is presented for the core. During very early inflation the overlapping wave functions of the cores caused rapid expansion of the universe. Gravitons condensed around the cores to form primordial black holes which evolve into dark matter in the big bang together with the standard model particles.
[4150] vixra:1506.0194 [pdf]
Is Gravity Control Propulsion Viable?
In 2015 the answer is still no. However this paper will look at what current physics has to say on this topic and what further questions need to be put forward to advance our enquiries. This work is a modified compilation of several posts that were originally published in the author's blogsite [1] on Gravity Control Propulsion (GCP) looking at several papers that deal with related topics with some ideas and speculations for further research.
[4151] vixra:1506.0193 [pdf]
Electrodynamics on the Threshold of the Fourth Stage of Its Development
Development of electrodynamics during the last century and a half is discussed, and it is shown that every half-century its content changes drastically. In fact, during each of the three past stages three distinct doctrines were reigning in the field, which can safely be viewed as three different theories. Each of these three theories is critically analyzed, and it is shown that the third stage is over and electrodynamics has reached the threshold of the fourth stage of its development.
[4152] vixra:1506.0169 [pdf]
Possible Common Solution to the Problems of Dark Energy and Dark Matter in the Universe
We discuss the principal results of the method of Causal dynamical triangulations, when applied under the assumption of topology S^3 of our world, i.e., assuming the closedness of the Universe. Then it can be concluded that the resulting space-dimensionality three, being equal to what we consider to be the naturally optimal dimensionality of the real space, implies the existence of a certain deviation from this optimal value on the cosmological scale-level (i.e. a deviation from the space-'Euclidicity' there), since a fourth space-dimension is explicitly or implicitly necessary (depending only on what form of the spacetime-metric one has used) in order to allow the Universe to be closed. As a consequence, the bent space (considered to be the component of the curved spacetime), together with the real cosmic stratum there, struggles to arrive at the state with the optimal dimensionality, i.e., it struggles to expand, while the 'pseudo-pressure' is the carrier of the 'dimensionally-elastic' energy, which appears as the dark energy (on the global cosmological scale) and the dark matter (on the scale-level of cosmic inhomogeneities). The basic rules for their appearance are presented, and the pertaining questions are discussed: the feedback of the proposed mechanism, the problem of the entropy and self-organization of the cosmic stratum, and the evolution of the phenomenon.
[4153] vixra:1506.0160 [pdf]
A Graviton Condensate Model of Quantum Black Holes and Dark Matter
We propose a model of microscopic black holes and dark matter to reinforce the standard model. We assume that at the center of a black hole there is a spin 1/2 neutral core field. The core is proposed to replace the singularity of the hole. During Starobinsky inflation gravitons condense around the core to form primordial quantum black holes which evolve naturally into an abundant dark matter universe.
[4154] vixra:1506.0146 [pdf]
Quaternions and Clifford Geometric Algebras
This book provides an introduction to quaternions and Clifford geometric algebras. Quaternion rotations are covered extensively. A reference manual on the entities and operations of conformal (CGA) and quadric geometric algebras (QGA) is given. The Space-Time Algebra (STA) is introduced and offers an example of its use for special relativity velocity addition. Advanced algebraic techniques for the symbolic expansion of the geometric product of blades are explained with numerous examples.
[4155] vixra:1506.0119 [pdf]
Viterbi Classifier Chains for Multi-Dimensional Learning
Multi-dimensional classification (MDC, also known variously as multi-target, multi-objective, and multi-output classification) is the supervised learning problem where an instance is associated with multiple qualitative discrete variables (a.k.a. labels), rather than with a single class, as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance -- at the expense of an increased computational cost. A popular method for multi-label classification is classifier chains (CC), in which the predictions of individual classifiers are cascaded along a chain, thus taking into account inter-label dependencies. Different variants of CC methods have been introduced, and many of them perform very competitively across a wide range of benchmark datasets. However, scalability limitations become apparent on larger datasets when modeling a fully-cascaded chain. In this work, we present an alternative model structure among the labels, such that Bayesian optimal inference is computationally feasible. The inference is efficiently performed using a Viterbi-type algorithm. As an additional contribution to the literature we analyze the relative advantages and interaction of three aspects of classifier chain design with regard to predictive performance versus efficiency: finding a good chain structure vs. a random structure, carrying out complete inference vs. approximate or greedy inference, and a linear vs. non-linear base classifier. We show that our Viterbi CC can perform best on a range of real-world datasets.
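The Viterbi recursion over a chain of labels can be sketched as follows. This is a generic illustration, not the paper's model: the unary and pairwise log-scores here are hand-picked numbers standing in for classifier outputs and learned label couplings.

```python
import numpy as np

def viterbi_chain(unary, pairwise):
    """Exact MAP decoding over a chain of labels by the Viterbi recursion.
    unary[t, k]       : log-score for label t taking value k
    pairwise[t, j, k] : log-score for label t = j followed by label t+1 = k
    (toy stand-ins for the classifier outputs along a classifier chain)."""
    T, K = unary.shape
    delta = unary[0].copy()                 # best score ending in each value
    back = np.zeros((T, K), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + pairwise[t - 1] + unary[t][None, :]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):           # trace back the optimal path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Three binary labels; the coupling rewards consecutive equal labels, so the
# MAP path [1, 1, 1] differs from the per-label argmax [1, 1, 0].
unary = np.array([[0.0, 1.0], [0.0, 0.2], [1.0, 0.0]])
pairwise = np.array([np.eye(2) * 1.5, np.eye(2) * 1.5])
labels = viterbi_chain(unary, pairwise)
print(labels)   # [1, 1, 1]
```

The cost is O(T·K²) rather than the exponential cost of enumerating all K^T label combinations, which is what makes Bayes-optimal inference feasible under a chain structure.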
[4156] vixra:1506.0114 [pdf]
Games People Play: an Overview of Strategic Decision-Making Theory in Conflict Situations
In this paper, a gentle introduction to Game Theory is presented in the form of basic concepts and examples. Minimax and Nash's theorem are introduced as the formal definitions for optimal strategies and equilibria in zero-sum and nonzero-sum games. Several elements of cooperative gaming, coalitions, voting ensembles, voting power and collective efficiency are described in brief. Analytical (matrix) and extended (tree-graph) forms of game representation are illustrated as the basic tools for identifying optimal strategies and “solutions” in games of any kind. Next, a typology of four standard nonzero-sum games is investigated, analyzing the Nash equilibria and the optimal strategies in each case. Signaling, stance and third-party intermediates are described as very important properties when analyzing strategic moves, while credibility and reputation are described as crucial factors when signaling promises or threats. Utility is introduced as a generalization of typical cost/gain functions and it is used to explain the incentives of irrational players under the scope of “rational irrationality”. Finally, a brief reference is presented for several other more advanced concepts of gaming, including emergence of cooperation, evolutionary stable strategies, two-level games, metagames, hypergames and the Harsanyi transformation.
[4157] vixra:1506.0101 [pdf]
Thermalization of Gases: A First Principles Approach
Previous approaches of emergent thermalization for condensed matter based on typical wavefunctions are extended to generate an intrinsically quantum theory of gases. Gases are fundamentally quantum objects at all temperatures, by virtue of rapid delocalization of their constituents. When there is a sufficiently broad spread in the energy of eigenstates, a well-defined temperature is shown to arise by photon production when the samples are optically thick. This produces a highly accurate approximation to the Planck distribution, so that thermalization arises from the initial data as a consequence of purely quantum and unitary dynamics. These results are used as a foil for some common hydrodynamic theory for ultracold gases. It is suggested here that strong history dependence typically remains in these gases and so limits the validity of thermodynamics in their description. These problems are even more profound in the extension of hydrodynamics to such gases when they are optically thin, even when their internal energy is not low. We investigate rotation of elliptically trapped gases and consistency problems with deriving a local hydrodynamic approach. The presence of vorticity that is “hidden” from order parameter approaches is discussed, along with some buoyancy intrinsically associated with vorticity that gives essential quantum corrections to gases in the regimes where standard perturbation approaches to the Boltzmann equations are known to fail to converge. These results suggest that the study of trapped gases in far-from-ultracold regimes may yield interesting results not described by classical hydrodynamics.
[4158] vixra:1506.0089 [pdf]
New Mathematics of Complexity and Its Biomedical Applications
We show that the unreduced, mathematically rigorous solution of the many-body problem with arbitrary interaction, avoiding any perturbative approximations and "exact" models, reveals qualitatively new mathematical properties of thus emerging real-world structures (interaction products), including dynamic multivaluedness (universal non-uniqueness of ordinary solution) giving rise to intrinsic randomness and irreversible time flow, fractally structured dynamic entanglement of interaction components expressing physical quality, and dynamic discreteness providing the physically real space origin. This unreduced interaction problem solution leads to the universal definition of dynamic complexity describing structure and properties of all real objects. The united world structure of dynamically probabilistic fractal is governed by the universal law of the symmetry (conservation and transformation) of complexity giving rise to extended versions of all particular (correct) laws and principles. We describe then the unique efficiency of this universal concept and new mathematics of complexity in application to critical problems in life sciences and related development problems, showing the urgency of complexity revolution.
[4159] vixra:1506.0080 [pdf]
Local Vacuum Pressure
The existence of negative pressure of the vacuum follows from cosmological models based on the results of observations. But is it possible to detect the pressure of the vacuum from the geometry of the space around local bodies? The gravitational mass of bodies placed in a confined volume is less than the sum of the gravitational masses of these bodies dispersed over infinite distance. This translates into a transfer of energy to the vacuum, which becomes apparent through its deformation. We determine the gravitational impact of matter on the vacuum, equal in value and opposite in sign of pressure, in the case of weakly gravitating spherical bodies. We have evaluated a possibility to extend the obtained result to arbitrary gravitational systems.
[4160] vixra:1506.0069 [pdf]
Dynamical Analysis of Grover's Search Algorithm in Arbitrarily High-Dimensional Search Spaces
We discuss at length the dynamical behavior of Grover's search algorithm for which all the Walsh-Hadamard transformations contained in this algorithm are exposed to their respective random perturbations inducing the augmentation of the dimension of the search space. We give the concise and general mathematical formulations for approximately characterizing the maximum success probabilities of finding a unique desired state in a large unsorted database and their corresponding numbers of Grover iterations, which are applicable to the search spaces of arbitrary dimension and are used to answer a salient open problem posed by Grover [L. K. Grover, Phys. Rev. Lett. \textbf{80}, 4329 (1998)].
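The textbook success-probability formula underlying such analyses — for the unperturbed algorithm with a single marked item; the perturbed, higher-dimensional case treated in the paper is not reproduced here — can be evaluated directly:

```python
import math

def grover_success(N):
    """Standard Grover analysis for one marked item among N:
    success probability after k iterations is p(k) = sin^2((2k+1)*theta),
    with theta = arcsin(1/sqrt(N)); the optimal k is about pi/(4*theta)."""
    theta = math.asin(1.0 / math.sqrt(N))
    k = round(math.pi / (4 * theta) - 0.5)   # optimal iteration count
    return k, math.sin((2 * k + 1) * theta) ** 2

k, p = grover_success(2**20)
print(k, p)   # ~804 iterations, success probability close to 1
```

The O(sqrt(N)) iteration count and near-unit success probability are the baseline against which perturbed Walsh-Hadamard transformations degrade the maximum achievable probability.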
[4161] vixra:1506.0055 [pdf]
Quantum Gravity
This paper uses a small set of mathematical principles to describe a very wide swath of physics. These principles define a new theory of quantum gravity called the theory of infinite complexity. The main result is that Einstein's equation for general relativity can be derived from unrelated, mathematically novel quantum phenomena. That the theory takes no free parameters should be considered strong evidence in favor of a real connection between physics and mathematics.
[4162] vixra:1506.0052 [pdf]
The Quark Gluon Plasma Conundrum - Liquid or Gas ?
The experimental determination that the Quark Gluon Plasma (QGP) is a (perfect) liquid, rather than a gas, creates a crisis for theoretical models. So is the QGP a liquid or a gas? Here we provide a resolution of this puzzle through a consistent application of the symmetry structure of the full SU(3)_c group itself, rather than just its local Lie group algebra. Hence this paper provides a novel resolution of the above puzzle.
[4163] vixra:1506.0040 [pdf]
Rotation Curves of Spiral Galaxies as a Consequence of a Natural Physical Mechanism?
Rotation curves of spiral galaxies for baryonic masses, which are inconsistent with the law of gravity, constitute one of the pillars of the dark matter concept. This publication shows that the effects attributed to the influence of dark matter in spiral galaxies, are extremely similar to effects of a certain physical mechanism, which has been noticed during the analysis of this problem.
[4164] vixra:1506.0019 [pdf]
Analysis of General Algorithms for the Numerical Solution of Ordinary Differential Equations of Order Higher than One with Initial Conditions
In this paper, several existing numerical methods for the solution of ordinary differential equations of first order with initial conditions are modified so that they can be generalized to methods for ordinary differential equations of order n, in accordance with the theory.
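The standard reduction that makes such generalizations possible — rewriting an order-n equation as a system of n first-order equations and feeding it to any one-step method — can be sketched with a generic RK4 integrator (an illustration of the reduction, not the paper's modified methods):

```python
import math

def rk4_system(f, y0, t0, t1, steps):
    """Classical RK4 on a first-order system y' = f(t, y), y a list.
    An order-n ODE y^(n) = g(t, y, y', ..., y^(n-1)) is handled by the
    standard reduction u0 = y, u1 = y', ..., giving n coupled equations."""
    h = (t1 - t0) / steps
    t, y = t0, list(y0)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Second-order example y'' = -y, y(0) = 0, y'(0) = 1, reduced to
# u0' = u1, u1' = -u0; the exact solution is y(t) = sin(t).
f = lambda t, u: [u[1], -u[0]]
y = rk4_system(f, [0.0, 1.0], 0.0, math.pi / 2, 100)
print(y[0])   # ~1.0, i.e. sin(pi/2)
```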
[4165] vixra:1505.0233 [pdf]
Informational Money, Islamic Finance, and the Dismissal of Negative Interest Rates
The so-called Islamic Finance Requirements induce the notion of an IFR-compliant financial system. IFR-compliance provides an axiomatic approach to Islamic finance. In order to deal with potential mismatches between IFR-compliance and Islamic principles, IFR-compliant financial systems are referred to as Crescent-Star finances and IFR serves as axioms for Crescent-Star finance (CSF). Literally following that approach negative interest rates are to be avoided in a Crescent-Star financial system just as well as positive interest rates. W.r.t. Islamic finance Crescent-Star finance may admit false positives, i.e. IFR-compliant models of finance that nevertheless fail to qualify as sound from an Islamic perspective, but there won't be false negatives. A weaker version CSFn of CSF is formulated which formally permits negative interest rates, and a strengthening CSFpls of CSF is defined in which it is always required that profit and loss sharing takes place in connection with lending. It is argued that if only informational money is taken into account, CSFn finance prevents the occurrence of negative interest rates. It is shown that the situation is quite different for physical monies. CSFpls finance excludes negative interest rates as well as positive interest rates.
[4166] vixra:1505.0226 [pdf]
Whywhere2.0: an R Package for Modeling Species Distributions on Big Environmental Data
Previous studies have indicated that multi-interval discretization (segmentation) of continuous-valued attributes for classification learning might provide a robust machine learning approach to modelling species distributions. Here we apply a segmentation model to \textit{Bradypus variegatus} -- the brown-throated three-toed sloth -- using the species occurrence and climatic data sets provided in the niche modelling R package \texttt{dismo} and a set of 940 global data sets of mixed type on the Global Ecosystems Database. The primary measure of performance was the area under the curve of the receiver operating characteristic (AUC) on a k-fold validation of predictions of the segmented model and a third order generalized linear model (GLM). This paper also presents further advances in the \texttt{WhyWhere} algorithm available as an R package from the development site at http://github.com/davids99us/whywhere.
[4167] vixra:1505.0215 [pdf]
A Derivation of the Etherington's Distance-Duality Equation
The Etherington's distance-duality equation is the relationship between the luminosity distance of standard candles and the angular-diameter distance. This relationship has been validated from astronomical observations based on the X-ray surface brightness and the Sunyaev-Zel'dovich effect of galactic clusters. In the present study, we propose a derivation of the Etherington's reciprocity relation in the dichotomous cosmology.
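For reference, the distance-duality relation in question reads, in standard notation (with $d_L$ the luminosity distance, $d_A$ the angular-diameter distance, and $z$ the redshift):

```latex
d_L = (1+z)^2 \, d_A
```

This is the standard form of Etherington's reciprocity relation; the paper's contribution is a derivation of it within the dichotomous cosmology.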
[4168] vixra:1505.0203 [pdf]
A Prospect Proof of the Goldbach's Conjecture
Based on the well-ordering (N,<) of the set of natural numbers N and some basic concepts of number theory, and using proof by contradiction and inductive proof on N, we prove the validity of Goldbach's statement: every even integer 2n > 4, with n > 2, is the sum of two primes. This result confirms the Goldbach conjecture, which allows inserting it as a theorem in number theory. Key Words: Well-ordering (N,<), basic concepts and theorems of number theory, indirect and inductive proofs on natural numbers. AMS 2010: 11AXX, 11P32, 11B37.
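The statement itself is easy to check by brute force for small even numbers — a finite sanity check, of course, not a proof:

```python
def is_prime(n):
    # Trial division; adequate for the small range checked here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(even):
    """Return one decomposition even = p + q with p, q prime, or None."""
    for p in range(2, even // 2 + 1):
        if is_prime(p) and is_prime(even - p):
            return p, even - p
    return None

# Finite check of the statement for all even integers 2n > 4 up to 1000.
assert all(goldbach_pair(e) for e in range(6, 1001, 2))
print(goldbach_pair(100))   # (3, 97)
```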
[4169] vixra:1505.0175 [pdf]
A Note on Erdős-Szekeres Theorem
Erdős-Szekeres Theorem is proven. The proof is very similar to the original given by Erdős and Szekeres. However, it explicitly uses properties of binary trees to prove and visualize the existence of a monotonic subsequence. It is hoped that this presentation is helpful for pedagogical purposes.
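The theorem can also be verified exhaustively for small cases — an independent check using a standard O(n²) subsequence recursion, not the binary-tree argument of the note:

```python
from itertools import permutations

def longest_monotone(seq, increasing=True):
    """O(n^2) dynamic program for the longest increasing (or decreasing)
    subsequence length of a sequence of distinct values."""
    n = len(seq)
    best = [1] * n
    for i in range(n):
        for j in range(i):
            ok = seq[j] < seq[i] if increasing else seq[j] > seq[i]
            if ok:
                best[i] = max(best[i], best[j] + 1)
    return max(best) if best else 0

# Erdős–Szekeres: every sequence of (r-1)(s-1)+1 distinct reals contains an
# increasing subsequence of length r or a decreasing one of length s.
r = s = 3   # so sequences of length 5 suffice
assert all(
    longest_monotone(p, True) >= r or longest_monotone(p, False) >= s
    for p in permutations(range(5))
)
print("verified for all permutations of length 5")
```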
[4170] vixra:1505.0146 [pdf]
Thoughts on Qualia for Machines
I speculate upon the idea that qualia comprise quanta or packets, and that each packet is generated by physical processes within a neuron, possibly at the quantum level. Pattern-specific neuronal activation causes wave-like interactions among the packets, leading to phenomenal sensation. In essence, I provide in this paper a new panpsychist interpretation of the hard problem of consciousness.
[4171] vixra:1505.0141 [pdf]
PCT, Spin, Lagrangians, Part II
It is shown that the electromagnetic field is not a U(1) gauge theory, and it is shown that the Wightman axioms are inconsistent with the principle of conservation of energy and momentum.
[4172] vixra:1505.0135 [pdf]
Layered Adaptive Importance Sampling
Monte Carlo methods represent the \textit{de facto} standard for approximating complicated integrals involving multidimensional target distributions. In order to generate random realizations from the target distribution, Monte Carlo techniques use simpler proposal probability densities to draw candidate samples. The performance of any such method is strictly related to the specification of the proposal distribution, such that unfortunate choices easily wreak havoc on the resulting estimators. In this work, we introduce a \textit{layered} (i.e., hierarchical) procedure to generate samples employed within a Monte Carlo scheme. This approach ensures that an appropriate equivalent proposal density is always obtained automatically (thus eliminating the risk of a catastrophic performance), although at the expense of a moderate increase in the complexity. Furthermore, we provide a general unified importance sampling (IS) framework, where multiple proposal densities are employed and several IS schemes are introduced by applying the so-called deterministic mixture approach. Finally, given these schemes, we also propose a novel class of adaptive importance samplers using a population of proposals, where the adaptation is driven by independent parallel or interacting Markov Chain Monte Carlo (MCMC) chains. The resulting algorithms efficiently combine the benefits of both IS and MCMC methods.
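The deterministic mixture weighting at the heart of such schemes can be sketched as follows. This is a minimal static illustration with Gaussian proposals and a one-dimensional target chosen here for brevity; the layered, adaptive part — where MCMC chains move the proposal means — is omitted.

```python
import numpy as np

def dm_importance_sampling(log_target, mus, sigma=1.0, per_prop=500, seed=0):
    """Deterministic-mixture IS sketch: draw from each Gaussian proposal,
    then weight every sample by target(x) / mixture(x), with the FULL
    mixture in the denominator (the hallmark of the deterministic mixture
    approach, which stabilizes the weights)."""
    rng = np.random.default_rng(seed)
    mus = np.asarray(mus, dtype=float)
    xs = (mus[:, None] + sigma * rng.normal(size=(len(mus), per_prop))).ravel()
    # Mixture density q(x) = (1/M) * sum_m N(x; mu_m, sigma^2)
    q = np.mean(
        np.exp(-0.5 * ((xs[None, :] - mus[:, None]) / sigma) ** 2)
        / (sigma * np.sqrt(2 * np.pi)),
        axis=0,
    )
    w = np.exp(log_target(xs)) / q        # unnormalized importance weights
    w /= w.sum()                          # self-normalization
    return np.sum(w * xs)                 # estimate of E[x] under the target

# Target N(3, 1); proposals centered away from it (as parallel chains might
# leave them) still yield a usable self-normalized estimator of the mean.
log_target = lambda x: -0.5 * (x - 3.0) ** 2   # log-density up to a constant
est = dm_importance_sampling(log_target, mus=[0.0, 2.0, 4.0, 6.0])
print(est)   # close to 3.0
```

Dividing by the full mixture rather than each sample's own proposal is what prevents a single poorly placed proposal from producing catastrophically large weights.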
[4173] vixra:1505.0134 [pdf]
Direct and Quantitative Verifications of Energy Nonconservation by Urbach Tail of Light Absorption in Semiconductors
Based on an exact theory of quantum transitions and precise numerical calculations, this paper demonstrates quantitatively that the Urbach tail in the diagram of a semiconductor's light absorption coefficient versus photon energy is caused by energy nonconservation (ENC). This paper also points out that light absorption is a counterexample to the Fermi golden rule; that, due to ENC, the estimations of the dark energy and dark mass in our universe might no longer have great significance; and that ENC is a counterexample to the first and second laws of thermodynamics.
[4174] vixra:1505.0132 [pdf]
Classical Electrodynamics in Agreement with Newton's Third Law of Motion
The force law of Maxwell's classical electrodynamics does not agree with Newton's third law of motion (N3LM) in the case of open circuit magnetostatics. Initially, a generalized magnetostatics theory is presented that includes two additional physical fields $B_\Phi$ and $B_l$, defined by scalar functions. The scalar magnetic field $B_l$ mediates a longitudinal Ampère force that balances the transverse Ampère force (aka the magnetic field force), such that the sum of the two forces agrees with N3LM for all stationary current distributions. Secondary field induction laws are derived; a secondary curl-free electric field $\E_l$ is induced by a time varying scalar magnetic field $B_l$, which is not described by Maxwell's electrodynamics. The Helmholtz decomposition is applied to exclude $\E_l$ from the total electric field $\E$, resulting in a simpler Maxwell theory. Decoupled inhomogeneous potential equations and their solutions follow directly from this theory, without having to apply a gauge condition. Field expressions are derived from the potential functions that are simpler than, and far-field consistent with, the Jefimenko fields. However, our simple version of Maxwell's theory does not satisfy N3LM. Therefore we combine the generalized magnetostatics with the simple version of Maxwell's electrodynamics, via a generalization of Maxwell's speculative displacement current. The resulting electrodynamics describes three types of vacuum waves: the $\Phi$ wave, the longitudinal electromagnetic (LEM) wave and the transverse electromagnetic (TEM) wave, with phase velocities a, b and c respectively. Power and force theorems are derived, and the force law agrees with Newton's third law only if the phase velocities satisfy the following condition: a>>b and b=c. The retarded potential functions can be found without gauge conditions, and four retarded field expressions are derived that have three near-field terms and six far-field terms.
All six far-field terms are explained as the mutual induction of two free fields. Our theory supports Rutherford's solution of the 4/3 problem of electromagnetic mass, which requires an extra longitudinal electromagnetic momentum. Our generalized classical electrodynamics might spawn new physics experiments and electrical engineering applications, such as new photoelectric effects based on $\Phi$ or LEM radiation, and the conversion of natural $\Phi$ or LEM radiation into useful electricity, in the footsteps of Nikola Tesla and T. Henry Moray.
[4175] vixra:1505.0131 [pdf]
Infinitely Complex Topology Changes with Quaternions and Torsion
We develop some ideas that can be used to show relationships between quantum state tensors and gravitational metric tensors. After firmly grasping the math by $\alpha$ and Einstein's equation, this is another attempt to shake it and see what goes and what stays. We introduce slightly more rigorous definitions for some familiar objects and find an unexpected connection between the chirological phase $\Phi^n$ and the quaternions $\bm{q}\in\mathbb{H}$. Torsion, the only field in string theory not already present in the theory of infinite complexity, is integrated. We propose a solution to the Ehrenfest paradox and a way to prove the twin primes conjecture. The theory's apparent connections to negative frequency resonant radiation and time reversal symmetry violation are briefly treated.
[4176] vixra:1505.0092 [pdf]
Duane Hunt Relation Improved
In the present paper the Duane-Hunt relation for direct measurement of the Planck constant is improved by including relativistic corrections. The new relation for determining the Planck constant suggested in this paper contains the Duane-Hunt relation as its first term and can be applied over a wide range of energies.
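The classical (first-term) part of the relation is easy to check numerically. The sketch below uses a hypothetical tube voltage and cutoff wavelength to recover the Planck constant from eV = hc/λ_min; the paper's relativistic correction terms are not reproduced here:

```python
# Physical constants (CODATA values)
e = 1.602176634e-19   # elementary charge, C
c = 2.99792458e8      # speed of light in vacuum, m/s

# Hypothetical measurement: tube voltage and short-wavelength cutoff
V = 10.0e3            # accelerating voltage, V
lam_min = 1.2398e-10  # measured cutoff wavelength, m

# Duane-Hunt relation: eV = h c / lam_min  =>  h = e V lam_min / c
h = e * V * lam_min / c
print(h)  # close to 6.626e-34 J*s
```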
[4177] vixra:1505.0091 [pdf]
Space-time Interaction Principle as a Description of Quantum Dynamics of Particle
We propose a space-time interaction principle (StIP), which states that any particle with mass m will undergo a frictionless random motion due to random impacts from space-time, each impact changing the particle's action by the amount $\hbar$. According to the principle, we first prove that the interaction coefficient must be $\Re = \frac{\hbar}{2 m_{ST}}$, deriving from Langevin's equation the corresponding Fokker-Planck Hamiltonian, where $m_{ST}$ is a space-time sensible mass of the particle. We derive that the equation of motion for the particle is the Schrödinger equation, and prove that the space-time sensible mass $m_{ST}$ reduces to the inertial mass in non-relativistic quantum mechanics. Secondly, we show that there must exist a smallest mass $\bar{m}_{ST}$ as the minimum of the space-time sensible mass, given the speed of light in vacuum as the maximum speed postulated by special relativity; furthermore, we estimate the magnitude of $\bar{m}_{ST}$ from the microwave background radiation. Thirdly, an interpretation of Heisenberg's uncertainty principle is suggested, together with a stochastic origin of Feynman's path integral formalism. It is shown that we can construct a physical picture distinct from the Copenhagen interpretation, reinvestigate the nature of space-time, and reveal the origin of quantum behaviour from the materialistic point of view.
[4178] vixra:1505.0086 [pdf]
On the Ehrenfest Paradox and the Expansion of the Universe
This work presents a formalism of the notions of space and time which contains that of special relativity, which is compatible with the quantum theories, and which distinguishes itself from general relativity by the fact that it allows us to define the possible states of motion between two observers arbitrarily chosen in nature. Before calculating the advance of the perihelion of an orbit, it is necessary to define the existence of a perihelion and its possible movement. In other words, it is necessary to specify the use of a physical space, which is a set of spatial positions: a set of world lines constantly at rest according to a unique observer. This document defines all the physical spaces of nature (some compared with the others) by noting that, to choose a temporal variable in one of these spaces, it is enough to choose a particular parametrization along each of its points. If the world lines of a family of observers are not elements of a unique physical space, then, even in classical physics, how can they manage to put their rulers end to end to determine the measure of a segment of curve of their reference frame (each will have to ask his neighbour: a little seriousness please, do not move until the measurement is ended)? This question is the basis of the solution which will be proposed to the Ehrenfest paradox. A notion of expansion of the universe is established as being a structural reality, and a rigorous theoretical formulation of Hubble's experimental law is proposed. We shall highlight the fact that a relative motion occurs only along specific trajectories; this notion of authorized trajectories is not a novelty in physics, as it is stated in the Bohr atomic model. We shall also highlight the fact that a non-uniform rectilinear motion possesses a horizon having the structure of a plane.
[4179] vixra:1505.0079 [pdf]
On a Formalization of Natural Arithmetic Theory
In this paper we define an arithmetic theory PAM, which is an extension of Peano arithmetic PA, and prove that theory PAM has only one (up to isomorphism) model, which is the standard PA–model.
[4180] vixra:1505.0077 [pdf]
Set of All Pairs of Twin Prime Numbers is Infinite
In this paper we formulate an intuitive hypothesis about a new aspect of the well-known method called the “Sieve of Eratosthenes” and then prove that the set of natural numbers N = {1, 2, . . .} contains infinitely many pairs of twin primes.
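For concreteness, a minimal sketch of the sieve mentioned above, used here simply to enumerate twin prime pairs below a bound (the paper's claim itself concerns the asymptotic behaviour, which no finite computation can establish):

```python
def twin_prime_pairs(limit):
    """Sieve of Eratosthenes, then collect pairs (p, p+2) with both entries prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [(p, p + 2) for p in range(2, limit - 1) if is_prime[p] and is_prime[p + 2]]

pairs = twin_prime_pairs(100)
print(pairs)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```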
[4181] vixra:1505.0068 [pdf]
Moyal Deformations of Clifford Gauge Theories of Gravity
A Moyal deformation of a Clifford $Cl(3,1)$ gauge theory of (conformal) gravity is performed for canonical noncommutativity (constant $\Theta^{\mu \nu }$ parameters). In the very special case when one imposes certain constraints on the fields, there are $no$ first order contributions in the $\Theta^{\mu \nu }$ parameters to the Moyal deformations of Clifford gauge theories of gravity. However, when one does $not$ impose constraints on the fields, there are first order contributions in $\Theta^{\mu \nu }$ to the Moyal deformations, at variance with previous results obtained by other authors and based on different gauge groups. Although the generators of $U(2,2), SO(4,2), SO(2,3)$ can be expressed in terms of the Clifford algebra generators, this does $not$ imply that these algebras are isomorphic to the Clifford algebra; therefore one should not expect identical results to those obtained by other authors. In particular, there are Moyal deformations of the Einstein-Hilbert gravitational action with a cosmological constant to first order in $\Theta^{\mu \nu }$. Finally, we provide a mechanism which furnishes a plausible cancellation of the huge vacuum energy density.
[4182] vixra:1505.0053 [pdf]
Comment on the Article ``Extended Linear and Nonlinear Lorentz Transformations and Superluminality''
In this comment some incorrect results published in the article "Extended Linear and Nonlinear Lorentz Transformations and Superluminality" are refuted. The article "Extended Linear and Nonlinear Lorentz Transformations and Superluminality" can be found in the journal "Advances in High Energy Physics", Volume 2013 (2013), article ID 760916.
[4183] vixra:1505.0051 [pdf]
Black Holes Without Singularity?
We propose a model scheme of microscopic black holes. We assume that at the center of the hole there is a spin 1/2 core field. The core is proposed to replace the singularity of the hole. Possible frameworks for non-singular models are discussed briefly.
[4184] vixra:1505.0049 [pdf]
Thermodynamics in F(T;Q)-Gravity
In the present study, we discuss a non-equilibrium picture of thermodynamics at the apparent horizon of a flat Friedmann-Robertson-Walker universe in f(T,Q) theory of gravity, where T is the torsion scalar and Q is the trace of the energy-momentum tensor. Mainly, we investigate the validity of the first and second laws of thermodynamics in this scenario. We consider two descriptions of the energy-momentum tensor of dark energy density and pressure, and show that an equilibrium picture of gravitational thermodynamics cannot be given in either case. Furthermore, we conclude that the second law of gravitational thermodynamics can be achieved in both the phantom and quintessence phases of the universe.
[4185] vixra:1505.0048 [pdf]
Ghost Quintessence in Fractal Gravity
In the present study, using the time-like fractal theory of gravity, we mainly focus on the ghost dark energy model which was recently suggested to explain the present acceleration of the cosmic expansion. Next, we establish a connection between the quintessence scalar field and fractal ghost dark energy density. This correspondence allows us to reconstruct the potential and the dynamics of a fractal canonical scalar field (the fractal quintessence) according to the evolution of ghost dark energy density.
[4186] vixra:1505.0047 [pdf]
Local Quantum Measurement Discrimination Without Assistance of Classical Information
The discrimination of quantum operations is an important subject of quantum information processing. For local distinction, existing research has pointed out that, since any operation performed on a quantum system must be compatible with the no-signaling constraint, local discrimination between quantum operations of two spacelike separated parties cannot be realized. We found, however, that local discrimination of quantum measurements may not be restricted by no-signaling if more multi-qubit entanglement and selective measurements are employed. In this paper we report that local quantum measurement discrimination (LQMD) can be completed via selective projective measurements and numerous seven-qubit GHZ states without the help of classical communication, provided both observers agree in advance that one of them should measure her/his qubits before an appointed time. As an application, it is shown that teleportation can be completed via the LQMD without classical information. This means that superluminal communication could be realized by using the LQMD.
[4187] vixra:1505.0043 [pdf]
The Maxwell Demon in the Osmotic Membrane
A dielectric with index of refraction n is inserted in the Planck blackbody. The spectral formula for photons in such a dielectric medium and the equation for the temperature of the photons are derived. The new equation is solved for a constant index of refraction. The photon flow initiates the osmotic pressure of the Debye phonons. The dielectric crystal surface works as an osmotic membrane with a Maxwell demonic refrigerator. Key words: thermodynamics, blackbody, photons, phonons, dielectric medium, dispersion.
[4188] vixra:1505.0041 [pdf]
Orbiting Particles’ Analytic Time Dilations Correlated with the Sagnac Formula and a General ‘Versed Sine’ Satellite Clock Absolute Dilation Factor
Eastward and westward orbiting plane time dilation formulae envisaged for Hafele and Keating's 1971 equatorial clocks experiment were incorrectly derived in the 2004 textbook Relativity in Rotating Frames, although exact analytic expressions actually result directly from velocity composition, provided gravitational effects are disregarded. Nevertheless, the same idealised equations together yield the classic formula for Sagnac's analogous 1913 experiment, where interference fringe patterns from monochromatic light waves emitted in opposite directions around a rotating wheel shifted in accordance with the rotation rate, an observation misinterpreted by some as challenging special relativity theory. Although only approximately correct for the 1971 experiment, the resulting formulae also yield, independently of general relativity theory, a notable exact formula for a rotating satellite's clock dilation. The factor's inverse equals the versed sine of the angle whose sine equals the satellite's peripheral speed scaled by the limit speed: the cube root of the product of the Earth's mass, the universal gravitational constant and the orbit's rate of rotation.
[4189] vixra:1505.0029 [pdf]
Energy Generation by Precession
We study here a possible cause of the intrinsic precession of the planetary orbits in terms of the sets of stars ["Reality elements", viXra:1407.0107], comparing it with the classical atomic model, which, together with wave-particle duality, leads to the conclusion that thermodynamic energy is continuously being created and destroyed. On this basis, two machines capable of generating electric power by precession in resonant circuits are presented.
[4190] vixra:1505.0003 [pdf]
Review of the Microscopic Approach to the Higgs Mechanism and to Quark and Lepton Masses and Mixings
This review summarizes the results of a series of recent papers, where a microscopic model underlying the physics of elementary particles has been proposed. The model relies on the existence of an internal isospin space, in which an independent physical dynamics takes place. This idea is critically re-considered in the present work. As becomes evident in the course of discussion, the model not only describes electroweak phenomena but also modifies our understanding of other physical topics, like the big bang cosmology and the nature of the strong interactions.
[4191] vixra:1504.0246 [pdf]
Quantum Group $su_q(2)$ as the Proper Group to Describe the Symmetry Structure of the Nucleus
The nucleus, displaying both single particle and collective aspects simultaneously, does not seem amenable to a simple group theoretical structure that explains its existence. The isospin group SU(2) accounts for the single particle aspects quite well, but the collectivity is basically put in by hand. The question is whether there is some inherent symmetry connecting the single particle and the collective aspects through some group theoretical structure. We perform consistent and exact matching of the deformed and superdeformed bands in various nuclei, and thus show that the quantum group $SU_q(2)$ fulfills this requirement, not in any approximate manner but exactly.
[4192] vixra:1504.0240 [pdf]
Right and Wrong in the Conduct of Science
Science, in particular physics, is a collective enterprise; a fruit of the exquisitely social nature of human living. So it is inevitable to encounter ethical issues in the natural sciences, since the contest of differing interests and views is perennial in its practice, indeed essential to its momentum. The crucial ethical question always hangs in the air: How is the truth best served? This is a very limited imperative for science to follow, excluding as it does most questions of meaning and valuation. For example, in science one does not normally ask: Why is the truth to be served? In this paper we describe some ethical aspects of our own discipline of science: their cultural context and the bounds which they delineate for themselves, sometimes in transgression. We argue that the minimalist ethic espoused in science, namely loyalty to truth, is a bellwether for the much wider, more problematic, and more vital consequences of ethics – and its failure – in human relationships at large.
[4193] vixra:1504.0239 [pdf]
A Note About Power Function
This paper describes some new views on and properties of the power function; the main aim of the work is to introduce some new ideas. An expansion of the power function, based on the research carried out, is also described. The expansion resembles the binomial theorem in form, but the algorithm is not the same.
[4194] vixra:1504.0207 [pdf]
Quantum Games of Opinion Formation Based on the Marinatto-Weber Quantum Game Scheme
Quantization has become a new way to study classical game theory since quantum strategies and quantum games were proposed. In previous studies, many typical game models, such as the prisoner's dilemma, the battle of the sexes, and the Hawk-Dove game, have been investigated using quantization approaches. In this paper, several game models of opinion formation are quantized based on the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can strikingly change the properties of some classical opinion formation game models, so as to generate win-win outcomes.
[4195] vixra:1504.0203 [pdf]
A Rotating Gravitational Ellipse
A gravitational ellipse is the mathematical result of Newton's law of gravitation [Ref.1]. The equation describing such an ellipse is obtained by differentiating space by time twice. Le Verrier [Ref.2] stated that rotating gravitational ellipses are observed in the solar system. One may therefore ask for the existing gravitational equation to be adjusted in such a way that a rotating gravitational ellipse is obtained. The additional rotation is an extra variable, so the equation will be a three-times space-by-time differentiated equation; to obtain it, we need to differentiate space by time a third time. Differentiating space by time twice gives the following result [Ref.3]: \begin{equation} (\ddot{X})^2 + (\ddot{Y})^2 = (\ddot{R} - R \dot{a}^2 )^2 + (R\ddot{a} + 2 \dot{R} \dot{a} )^2 \end{equation} A third differentiation of space by time gives: \begin{equation} (\dddot X )^2 + (\dddot{Y})^2 = (\dddot{R} - 3\dot{R} \dot{a}^2 - 3 R \dot{a} \ddot{a} )^2 + (R\dddot{a} + 3 \dot{R} \ddot{a} + 3 \ddot{R} \dot{a} - R\dot{a}^3 )^2 \end{equation} We now simply perform the necessary mathematical exercise to produce the new equation, which describes rotating gravitational ellipses. [Figure: rotating ellipse] I assume that the reader accepts the mathematical differential equation which defines a rotating gravitational motion as observed. But we now have two equations defining rotating gravitational ellipses as observed in nature: the EIH equations [Ref.4] and the above equation 2, which obeys the Euclidean space premises.
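Both displayed identities follow from X = R cos a, Y = R sin a by repeated differentiation, and can be checked symbolically; a short verification with SymPy (assumed available) is:

```python
import sympy as sp

t = sp.symbols('t')
R = sp.Function('R')(t)   # radial coordinate R(t)
a = sp.Function('a')(t)   # polar angle a(t)
X, Y = R * sp.cos(a), R * sp.sin(a)

D = lambda f, n: sp.diff(f, t, n)

# Equation (1): second time derivatives
lhs2 = D(X, 2)**2 + D(Y, 2)**2
rhs2 = (D(R, 2) - R * D(a, 1)**2)**2 + (R * D(a, 2) + 2 * D(R, 1) * D(a, 1))**2
ok2 = sp.simplify(lhs2 - rhs2) == 0

# Equation (2): third time derivatives
lhs3 = D(X, 3)**2 + D(Y, 3)**2
rhs3 = (D(R, 3) - 3 * D(R, 1) * D(a, 1)**2 - 3 * R * D(a, 1) * D(a, 2))**2 \
     + (R * D(a, 3) + 3 * D(R, 1) * D(a, 2) + 3 * D(R, 2) * D(a, 1) - R * D(a, 1)**3)**2
ok3 = sp.simplify(lhs3 - rhs3) == 0

print(ok2, ok3)  # True True
```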
[4196] vixra:1504.0192 [pdf]
Fuzzy Abel Grassmann Groupoids
In this book, we introduce the concept of $(\in, \in \vee q_k)$-fuzzy ideals and $(\in_\gamma, \in_\gamma \vee q_\delta)$-fuzzy ideals in a non-associative algebraic structure called an Abel Grassmann's groupoid, discuss several important features of a regular AG-groupoid, investigate some characterizations of regular and intra-regular AG-groupoids, and so on.
[4197] vixra:1504.0188 [pdf]
Erratum: Single and Cross-Channel Nonlinear Interference in the Gaussian Noise Model with Rectangular Spectra
We correct a typo in the key equation (20) of reference [Opt.Express 21(26), 32254–32268 (2013)] that shows an upper bound on the cross-channel interference nonlinear coefficient in coherent optical links for which the Gaussian Noise model applies.
[4198] vixra:1504.0182 [pdf]
On the Geometry of Space
<p>I explore the possibility that black holes and Space could be the geometrically Compactified Transverse Slices ("CTS"s) of their higher (+1) dimensional space. My hypothesis is that we might live somewhere in between partially compressed regions of space, namely 4d<sub>L+R</sub> hyperspace compactified to its 3d transverse slice, and fully compressed dark regions, i.e. black holes, still containing all <sub>L</sub>d432-1-234d<sub>R</sub> dimensional fields. This places the DGP, ADD, Kaluza-Klein, Randall-Sundrum, Holographic and Vanishing Dimensions theories in a different perspective.</p> <p>I first postulate that a black hole could be the result of the compactification (fibration) of a 3d burned up S<sup>2</sup> star to its 2d transverse slice; the 2d dimensional discus itself further spiralling down into a bundle of one-dimensional fibres.</p> <p>Similarly, Space could be the compactified transverse slice (fibration) of its higher 4d<sub>L+R</sub> S<sup>3</sup> hyper-sphere to its 3d transverse slice, the latter adopting the topology of a closed and flat left+right handed trefoil knot. By further extending these two ideas, we might consider that the Universe in its initial state was a "Matroska" 4d<sub>L+R</sub> hyperspace compactified, in cascading order, to a bundle of one-dimensional fibres. The Big Bang could be an explosion from within that broke the cascadingly compressed Universe open.</p>
[4199] vixra:1504.0157 [pdf]
A Quantum Extension to the Inspection Game
Quantum game theory is a new interdisciplinary field between game theory and physical research. In this paper, we extend the classical inspection game into a quantum version by quantizing the strategy space and introducing entanglement between the players. The quantum inspection game has various Nash equilibria depending on the initial quantum state of the game. Our results also show that quantization can help each player individually to increase his own payoff, but cannot simultaneously improve the collective payoff in the quantum inspection game.
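A minimal numerical sketch of the Marinatto-Weber scheme for a generic 2x2 game is given below; the payoff matrices are hypothetical illustrative numbers, not taken from the paper. It shows how the entanglement parameter of the initial state changes the payoffs even for pure strategies:

```python
import numpy as np

# Hypothetical 2x2 inspection-game payoff matrices (illustrative numbers only):
# rows index the inspector's strategy, columns the inspectee's.
A = np.array([[ 0.0, -1.0],
              [ 4.0, -2.0]])   # inspector's payoffs
B = np.array([[ 0.0,  4.0],
              [-3.0, -1.0]])   # inspectee's payoffs

def mw_payoffs(p, q, gamma):
    """Marinatto-Weber scheme: start from cos(gamma)|00> + sin(gamma)|11>;
    player 1 (2) leaves their qubit alone with probability p (q), else flips it."""
    c, s = np.cos(gamma), np.sin(gamma)
    psi = np.array([c, 0.0, 0.0, s])          # amplitudes of |00>,|01>,|10>,|11>
    rho = np.outer(psi, psi)
    I2, X = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])
    ops = [(p * q,             np.kron(I2, I2)),
           (p * (1 - q),       np.kron(I2, X)),
           ((1 - p) * q,       np.kron(X, I2)),
           ((1 - p) * (1 - q), np.kron(X, X))]
    rho_f = sum(w * U @ rho @ U.T for w, U in ops)
    P = np.diag(rho_f).reshape(2, 2)          # joint outcome probabilities P(i, j)
    return float(np.sum(P * A)), float(np.sum(P * B))

# Pure strategies (p = q = 1): without entanglement (gamma = 0) the classical
# outcome is reproduced; maximal entanglement (gamma = pi/4) shifts the payoffs.
pa_cl, pb_cl = mw_payoffs(1.0, 1.0, 0.0)
pa_q,  pb_q  = mw_payoffs(1.0, 1.0, np.pi / 4)
print((pa_cl, pb_cl), (pa_q, pb_q))  # (0.0, 0.0) and approximately (-1.0, -0.5)
```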
[4200] vixra:1504.0148 [pdf]
Carefully Estimating the Incidence of Natalizumab-Associated PML
We show that the quarterly updates about the risk of PML during natalizumab therapy, while in principle helpful, underestimate the real incidences systematically and significantly. Calculating the PML incidences using an appropriate method and on realistic assumptions, we obtain estimates that are up to 80% higher. In fact, with the recent paper [Plavina et al 2014], our approximate incidences are up to ten times as high. The present article describes the shortcomings of the methods used in [Bloomgren et al 2012] and by Plavina et al for computing incidences, and demonstrates how to properly estimate the true (prospective) risk of developing PML during natalizumab treatment. One application is that the newest data concerning the advances in risk-mitigation through the extension of dosing intervals, although characterised as not quite statistically significant, are in fact significant. Lastly, we discuss why the established risk-stratification algorithms, even on assessing the PML incidences correctly, are no longer state-of-the-art; in the light of all the progress that has been made so far, already today it is possible to reliably identify over 95% of patients in whom (a personalised regimen of) natalizumab should be very safe.
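The core methodological point, that dividing cases by all patients ever exposed dilutes the incidence relative to a person-time estimate restricted to the at-risk epoch, can be illustrated with entirely hypothetical numbers (these are not the paper's data):

```python
# Hypothetical cohort snapshot (illustrative numbers only):
# cases occurring in a late-exposure risk epoch, e.g. months 25-48 of therapy.
cases = 40
patients_ever_started = 20000   # all patients ever exposed to the drug
patients_reached_epoch = 5000   # patients who actually entered the risk epoch
mean_years_in_epoch = 1.0       # average follow-up inside the epoch, per such patient

# Naive estimate: divides by everyone ever exposed, including patients who
# never reached the risk epoch, and therefore dilutes the incidence.
naive_risk = cases / patients_ever_started

# Person-time estimate: cases per patient-year actually spent at risk.
incidence_rate = cases / (patients_reached_epoch * mean_years_in_epoch)

ratio = incidence_rate / naive_risk   # how much the naive method understates risk
print(naive_risk, incidence_rate, ratio)  # 0.002 0.008 4.0
```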
[4201] vixra:1504.0133 [pdf]
Local Quantum Measurement Discrimination Without Assistance of Classical Communication
The discrimination of quantum operations is an important subject of quantum information processing. For local distinction, existing research has pointed out that, since any operation performed on a quantum system must be compatible with the no-signaling constraint, local discrimination between quantum operations of two spacelike separated parties cannot be realized. We found, however, that local discrimination of quantum measurements may not be restricted by no-signaling if more multi-qubit entanglement and selective measurements are employed. In this paper we report that local quantum measurement discrimination (LQMD) can be completed via selective projective measurements and numerous seven-qubit GHZ states without the help of classical communication, provided both observers agree in advance that one of them should measure her/his qubits before an appointed time. As an application, it is shown that teleportation can be completed via the LQMD without classical information. This means that superluminal communication could be realized by using the LQMD.
[4202] vixra:1504.0128 [pdf]
Theorem of the Keplerian Kinematics
As described in the literature, the velocity of a Keplerian orbiter on a fixed orbit is always the sum of a uniform rotation velocity and a uniform translation velocity, both coplanar. This property is stated here as a theorem and proved. The consequences are investigated, among which: Newton's gravitational acceleration appears as its derivative with respect to time, the classical mechanical energy is deduced, and Galileo's equivalence principle is respected. However, Newton's factor $GM$ appears as a kinematic factor, the angular momentum multiplied by the rotation velocity, and this makes it possible to consider a kinematic reason for the rotation of the galaxies, with no need for dark matter. Furthermore, the kinematics demonstrate that the gravitational acceleration causes the rotation but not the attraction, while the mechanical acceleration can only cause a translation. These two accelerations being of different natures, Einstein's equivalence principle cannot be correct.
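The decomposition stated above, a constant translation velocity plus a rotating velocity of constant magnitude GM/h, is the classical circular-hodograph property of Keplerian motion, and can be checked numerically. The following sketch (with assumed units GM = 1 and arbitrary initial conditions) integrates an elliptical orbit and verifies that the rotational part of the velocity keeps constant magnitude:

```python
import numpy as np

mu = 1.0                      # gravitational parameter G*M (assumed units)
r0 = np.array([1.0, 0.0])     # start at perihelion
v0 = np.array([0.0, 1.2])     # speed chosen so the orbit is an ellipse

h = r0[0] * v0[1] - r0[1] * v0[0]                       # specific angular momentum
ecc_vec = np.array([v0[1] * h, -v0[0] * h]) / mu - r0 / np.linalg.norm(r0)
# The claimed uniform translation velocity: magnitude (mu/h)*e,
# directed perpendicular to the eccentricity vector.
trans = (mu / h) * np.array([-ecc_vec[1], ecc_vec[0]])

def accel(r):
    return -mu * r / np.linalg.norm(r)**3

# Integrate the orbit with leapfrog and record |v - trans| along the way.
r, v, dt = r0.copy(), v0.copy(), 1e-3
speeds_about_center = []
for _ in range(20000):        # covers more than one orbital period
    v_half = v + 0.5 * dt * accel(r)
    r = r + dt * v_half
    v = v_half + 0.5 * dt * accel(r)
    speeds_about_center.append(np.linalg.norm(v - trans))

spread = max(speeds_about_center) - min(speeds_about_center)
print(spread)  # tiny: the rotational part has constant magnitude mu/h
```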
[4203] vixra:1504.0127 [pdf]
Process Physics: Emergent Unified Dynamical 3-Space, Quantum and Gravity: a Review
Experiments have repeatedly revealed the existence of a dynamical structured fractal 3-space, with a speed relative to the Earth of some 500km/s from a southerly direction. Experiments have ranged from optical light speed anisotropy interferometers to zener diode quantum detectors. This dynamical space has been missing from theories from the beginning of physics. This dynamical space generates a growing universe, and gravity when included in a generalised Schrodinger equation, and light bending when included in generalised Maxwell equations. Here we review ongoing attempts to construct a deeper theory of the dynamical space starting from a stochastic pattern generating model that appears to result in 3-dimensional geometrical elements, “gebits”, and accompanying quantum behaviour. The essential concept is that reality is a process, and geometrical models for space and time are inadequate.
[4204] vixra:1504.0126 [pdf]
Dynamical 3-Space and the Earth’s Black Hole: An Expanding Earth Mechanism
During the last decade the existence of space as a quantum-dynamical system was discovered, being first indicated by the measured anisotropy of the speed of EM radiation. The dynamical theory for space has been under development during that period, and has now been successfully tested against experiment and astronomical observations, explaining, in particular, the observed characteristics of galactic black holes. The dynamics involves G and alpha, the fine structure constant. Applied to the earth this theory gives two observed predictions: (i) the bore hole g anomaly, and (ii) the space-inflow effect. The bore hole anomaly is caused by a black hole (a dynamical space in-flow effect) at the centre of the earth. This black hole will be associated with space-flow turbulence, which, it is suggested, may lead to the generation of new matter, just as such turbulence created matter in the earliest moments of the universe. This process may offer a dynamical mechanism for the observed expanding earth.
[4205] vixra:1504.0125 [pdf]
Review of Experiments that Contradict Special Relativity and Support Neo-Lorentz Relativity: Latest Technique to Detect Dynamical Space Using Quantum Detectors
The anisotropy of the velocity of EM radiation has been repeatedly detected, including the Michelson-Morley experiment of 1887, using a variety of techniques. The experiments reveal the existence of a dynamical space that has a velocity of some 500km/s from a southerly direction. These consistent experiments contradict the assumptions of Special Relativity, but are consistent with the assumptions of neo-Lorentz Relativity. The existence of the dynamical space has been missed by physics since its beginnings. Novel and checkable phenomena then follow from including this space in Quantum Theory, EM Theory, Cosmology, etc, including the derivation of a more general theory of gravity as a quantum wave refraction effect. The corrected Schrodinger equation has resulted in a very simple and robust quantum detector, which easily measures the speed and direction of the dynamical space. This report reviews the key experimental evidence.
[4206] vixra:1504.0124 [pdf]
Dynamical 3-Space: Energy Non-Conservation, Anisotropic Brownian Motion Experiment and Ocean Temperatures
In 2014 Jiapei Dai reported evidence of anisotropic Brownian motion of a toluidine blue colloid solution in water. In 2015 Felix Scholkmann analysed the Dai data and detected a sidereal time dependence, indicative of a process driving the preferred Brownian motion diffusion direction to a star-based preferred direction. Here we further analyse the Dai data and extract the RA and Dec of that preferred direction, and relate the data to previous determinations from NASA spacecraft Earth-flyby Doppler shift data, and other determinations. It is shown that the anisotropic Brownian motion is an anisotropic “heating” generated by 3-space fluctuations: gravitational waves, an effect previously detected in correlations between ocean temperature fluctuations and solar flare counts, with the latter shown to be a proxy for 3-space fluctuations. The dynamical 3-space does not have a measure of energy content, but can generate energy in matter systems, which amounts to a violation of the 1st Law of Thermodynamics.
[4207] vixra:1504.0117 [pdf]
The Minimal Non-Realistic Modification of Quantum Mechanics
In this article we consider a variant of quantum mechanics (QM) based on non-realism. There exists a theory of modified QM introduced in [1] and [2] which is based on non-realism, but it also contains other changes with respect to the standard QM (stQM). We introduce here another non-realistic modification of QM (n-rQM) which contains minimal changes with respect to stQM. The change consists in the replacement of von Neumann's axiom (ensembles which are in the pure state are homogeneous) by the anti-von Neumann axiom (any two different individual states must be orthogonal). This introduces non-realism into n-rQM. We shall show that the experimental consequences of n-rQM are the same as in stQM, but these two theories are substantially different. In n-rQM it is not possible to derive (using locality) the Bell inequalities; thus n-rQM does not imply non-locality (in contrast with stQM). Because of this, locality in n-rQM can be restored. The main purpose of this article is to show what could be the minimal modification of QM based on non-realism, i.e. that the realism of stQM is completely contained in von Neumann's axiom.
[4208] vixra:1504.0112 [pdf]
A Segmented DAC based Sigma-Delta ADC by Employing DWA
Data-weighted averaging (DWA) algorithms work well for relatively low quantization levels, but they begin to present significant problems when internal quantization levels are extended further. Each additional bit of internal quantization causes an exponential increase in the complexity, size, and power dissipation of the DWA logic and DAC. This is because DWA algorithms work with unit-element DACs: the DAC must have 2^N − 1 elements (where N is the number of bits of internal quantization), and the DWA logic must deal with the control signals feeding those 2^N − 1 unit elements. This paper discusses the prospect of using a segmented feedback path with coarse and fine signals to reduce DWA complexity for modulators with large internal quantizers. However, segmentation also creates additional problems. A mathematical analysis of the problems involved with segmenting the digital word in a ΣΔ ADC feedback path is presented, along with a potential solution that frequency-shapes this mismatch error. A potential circuit design for the frequency-shaping method is presented in detail. Mathematical analysis and behavioral simulation results are presented.
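As a rough illustration of the DWA rotation scheme described in this abstract, the sketch below (not the paper's implementation; the 3-bit quantizer and code sequence are assumed for illustration) shows how a rotating pointer cycles through the 2^N − 1 unit elements so that element mismatch averages out:

```python
# Minimal sketch of data-weighted averaging (DWA) element selection for a
# unit-element feedback DAC; element count and codes are illustrative.
def dwa_select(codes, n_elements):
    """For each quantizer code, return the unit elements used, rotating the
    start pointer so every element is exercised equally often over time."""
    pointer = 0
    selections = []
    for code in codes:
        chosen = [(pointer + k) % n_elements for k in range(code)]
        selections.append(chosen)
        pointer = (pointer + code) % n_elements  # rotate past used elements
    return selections

# 3-bit internal quantizer -> 2**3 - 1 = 7 unit elements
sel = dwa_select([3, 5, 2, 4], 7)
usage = [sum(s.count(e) for s in sel) for e in range(7)]  # per-element usage
```

With these codes (summing to 14 = 2 × 7) every element is used exactly twice, which is the first-order mismatch shaping DWA provides.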
[4209] vixra:1504.0111 [pdf]
Analysis Bio-Potential to Ascertain Movements for Prosthetic Arm with Different Weights Using Labview
Prosthetics is a branch of biomedical engineering that deals with replacing missing human body parts with artificial ones. SEMG-powered prosthetics require SEMG signals; SEMG is a common method of measurement of muscle activity. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. In the present work, SEMG signals are studied at different locations, below the elbow and at the biceps brachii muscles, for two hand operations: gripping different weights and lifting different weights. SEMG signals are extracted using a single-channel SEMG amplifier. A Biokit Datascope is used to acquire the SEMG signals from the hardware. After acquiring the data from the two selected locations, analyses are done to estimate the parameters of the SEMG signal using LabVIEW 2012 (evaluation copy). An interpretation of grip/lift operations using time-domain features like root mean square (rms) value, zero crossing rate, mean absolute value and integrated value of the EMG signal is carried out. For this study 30 university students (12 female and 18 male) served as subjects, which will be very helpful for research in understanding the behavior of SEMG for the development of the prosthetic hand.
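The time-domain features named here (rms, zero crossing rate, mean absolute value, integrated EMG) have standard definitions; a small sketch of those definitions, not the LabVIEW implementation used in the study:

```python
import math

def semg_features(x):
    """Standard time-domain features used in SEMG analysis (illustrative).
    Returns RMS, mean absolute value, zero-crossing count and integrated EMG."""
    n = len(x)
    rms = math.sqrt(sum(v * v for v in x) / n)        # root mean square
    mav = sum(abs(v) for v in x) / n                  # mean absolute value
    zc = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)  # sign changes
    iemg = sum(abs(v) for v in x)                     # summed rectified EMG
    return rms, mav, zc, iemg

# Toy alternating signal, standing in for a windowed SEMG recording
rms, mav, zc, iemg = semg_features([0.5, -0.5, 0.5, -0.5])
```

In practice these features would be computed over sliding windows of the amplified SEMG signal and fed to a classifier for grip/lift discrimination.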
[4210] vixra:1504.0110 [pdf]
Design and Control of Grid Interfaced Voltage Source Inverter with Output LCL Filter
This paper presents the design and analysis of an LCL-based voltage source converter used for delivering power from a distributed generation source to the power utility and a local load. The LCL filter at the output of the converter is designed analytically, and its different transfer functions are obtained to assess the elimination of any probable parallel resonance in the power system. The power converter uses a controller system to work in two modes of operation, stand-alone and grid-connected, and also has a seamless transfer between these two modes. Furthermore, a fast semiconductor-based protection system is designed for the power converter. Performance of the designed grid interface converter is evaluated using an 85 kVA industrial setup.
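The parallel-resonance assessment mentioned here centres on the standard undamped LCL resonance frequency; a small sketch with assumed component values (not taken from the paper):

```python
import math

def lcl_resonance_hz(L1, L2, Cf):
    """Undamped resonance frequency of an LCL filter:
    f_res = (1 / (2*pi)) * sqrt((L1 + L2) / (L1 * L2 * Cf))."""
    return math.sqrt((L1 + L2) / (L1 * L2 * Cf)) / (2 * math.pi)

# Illustrative values only: 2 mH converter-side, 0.5 mH grid-side, 10 uF
f = lcl_resonance_hz(2e-3, 0.5e-3, 10e-6)  # ~2.5 kHz
```

A designer would check that this frequency sits well between the grid frequency and the converter switching frequency, so neither excites the resonance.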
[4211] vixra:1504.0109 [pdf]
FF Algorithm for Design of SSSC-Based Facts Controller
Power-system stability improvement by a static synchronous series compensator (SSSC)-based damping controller considering dynamic power system load is thoroughly investigated in this paper. Only a remote input signal is used as input to the SSSC-based controller. For the controller design, the Firefly algorithm is used to find the optimal controller parameters. To check the robustness and effectiveness of the proposed controller, the system is subjected to various disturbances for both a single-machine infinite-bus power system and a multi-machine power system. Detailed analysis regarding dynamic load is done taking practical power system loads into consideration. Simulation results are presented.
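The Firefly algorithm used for the parameter search follows Yang's standard scheme: dimmer fireflies move toward brighter ones with distance-attenuated attractiveness plus a shrinking random step. A generic minimizer sketch (all parameters are illustrative defaults, unrelated to the paper's SSSC tuning problem):

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=0):
    """Minimal firefly-algorithm sketch: each firefly moves toward every
    brighter one with attractiveness beta0*exp(-gamma*r^2), plus noise."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        a = alpha * (1 - t / iters)          # shrink the random step over time
        for i in range(n):
            for j in range(n):
                if f(pop[j]) < f(pop[i]):    # j is brighter: move i toward j
                    r2 = sum((xi - xj) ** 2 for xi, xj in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [xi + beta * (xj - xi) + a * rng.uniform(-0.5, 0.5)
                              for xi, xj in zip(pop[i], pop[j])]
    return min(pop, key=f)

# Demo objective: a simple quadratic bowl standing in for the damping criterion
best = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

In the controller-design setting, the objective would instead be a time-domain damping index computed from a simulation of the disturbed power system.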
[4212] vixra:1504.0106 [pdf]
Analysis of Histogram Based Shot Segmentation Techniques for Video Summarization
Content-based video indexing and retrieval has its foundations in the analysis of the prime video temporal structures. Thus, technologies for video segmentation have become important for the development of such digital video systems. Dividing a video sequence into shots is the first step towards video content analysis (VCA) and content-based video browsing and retrieval. This paper presents an analysis of histogram-based techniques on compressed video features. A graphical user interface is also designed in MATLAB to demonstrate the performance using common performance parameters like precision, recall and F1.
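The core of histogram-based shot segmentation is a thresholded frame-to-frame histogram difference; a minimal sketch of that idea (the bin count, threshold and toy frames are assumptions, not the paper's settings):

```python
def hist_diff_shot_boundaries(frames, bins=8, threshold=0.5):
    """Sketch of histogram-based shot detection: declare a boundary where the
    normalized bin-wise histogram difference between consecutive frames
    exceeds a threshold. Frames are flat lists of pixel intensities (0-255)."""
    def hist(frame):
        h = [0] * bins
        for p in frame:
            h[p * bins // 256] += 1
        return [c / len(frame) for c in h]   # normalize by pixel count
    hs = [hist(f) for f in frames]
    return [i + 1 for i in range(len(hs) - 1)
            if sum(abs(a - b) for a, b in zip(hs[i], hs[i + 1])) / 2 > threshold]

# Two dark frames followed by two bright frames -> one cut at frame index 2
cuts = hist_diff_shot_boundaries([[10] * 100, [12] * 100,
                                  [240] * 100, [242] * 100])
```

Precision, recall and F1 would then be computed by comparing detected cut indices against a ground-truth annotation.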
[4213] vixra:1504.0105 [pdf]
Sliding Mode Based D.C. Motor Position Control Using Multirate Output Feedback Approach
The paper presents discrete-time sliding mode position control of a DC motor using multirate output feedback (MROF). A discrete state-space model is obtained from the continuous-time system of the DC motor. Discrete state variables and control inputs are used for sliding mode controller design using the multirate output feedback approach with fast output sampling, in which the output is sampled at a faster rate than the control input. This approach does not use the present output or input. In this paper simulations are carried out for separately excited DC motor position control.
[4214] vixra:1504.0102 [pdf]
Does Geometric Algebra Provide a Loophole to Bell's Theorem?
In 2007, and in a series of later papers, Joy Christian claimed to refute Bell's theorem, presenting an alleged local realistic model of the singlet correlations using techniques from geometric algebra (GA). Several authors published papers refuting his claims, and Christian's ideas did not gain acceptance. However, he recently succeeded in publishing yet more ambitious and complex versions of his theory in fairly mainstream journals. How could this be? The mathematics and logic of Bell's theorem is simple and transparent and has been intensely studied and debated for over 50 years. Christian claims to have a mathematical counterexample to a purely mathematical theorem. Each new version of Christian's model used new devices to circumvent Bell's theorem or depended on a new way to misunderstand Bell's work. These devices and misinterpretations are in common use by other Bell critics, so it is useful to identify and name them. I hope that this paper can serve as a useful resource to those who need to evaluate new "disproofs of Bell's theorem". Christian's fundamental idea is simple and quite original: he gives a probabilistic interpretation of the fundamental GA equation a·b = (ab + ba)/2. After that, ambiguous notation and technical complexity allow sign errors to be hidden from sight, and new mathematical errors can be introduced.
[4215] vixra:1504.0101 [pdf]
Unitarity in the Canonical Commutation Relation Does not Derive from Homogeneity of Space
Symmetry information beneath wave mechanics is re-examined. Homogeneity of space is the symmetry fundamental to the quantum free particle. The unitary information of the Canonical Commutation Relation is shown not to be implied by that symmetry. Keywords: quantum mechanics, wave mechanics, Canonical Commutation Relation, symmetry, homogeneity of space, unitary, non-unitary.
[4216] vixra:1504.0089 [pdf]
MEMS Microcantilevers Sensor Modes of Operation and Transduction Principles
A MEMS-based microcantilever is a microfabricated, mostly rectangular bar-shaped structure, longer than it is wide, with a thickness much smaller than its length or width. Microfabricated silicon cantilever sensor arrays represent a powerful platform for sensing applications in physics, chemistry, material science, biology and medicine. A microcantilever senses even a few molecules or atoms: a small change in mass causes a greater displacement. It is important to note that, due to the micron size of the cantilever, its bending or displacement is caused by a small amount of mass rather than weight. For application in biomedical diagnostics this device plays an important role in the identification of disease detection particles. In this paper we review the cantilever principle, modes of operation, transduction principles and the application of the cantilever as a sensor. MEMS applications operate the cantilever in either a static or a dynamic mode of operation. The concept of a stress concentration region (SCR) is used to increase the stress occurring in the cantilever.
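The static and dynamic modes mentioned here rest on textbook relations for a rectangular cantilever: spring constant k = Ewt³/(4L³) and dynamic-mode resonance f = (1/2π)√(k/m_eff). A minimal numeric sketch (all material and geometry values are assumed for illustration, not taken from the paper):

```python
import math

def cantilever_k(E, w, t, L):
    """Spring constant of an end-loaded rectangular cantilever:
    k = E * w * t^3 / (4 * L^3)."""
    return E * w * t ** 3 / (4 * L ** 3)

def resonance_hz(k, m_eff):
    """Dynamic-mode resonance frequency: f = (1/(2*pi)) * sqrt(k / m_eff)."""
    return math.sqrt(k / m_eff) / (2 * math.pi)

# Illustrative silicon microcantilever: E = 169 GPa, 20 um wide, 1 um thick,
# 100 um long, effective mass ~1e-11 kg (assumed values)
k = cantilever_k(E=169e9, w=20e-6, t=1e-6, L=100e-6)   # ~0.85 N/m
f0 = resonance_hz(k, m_eff=1e-11)                      # bare resonance
f1 = resonance_hz(k, m_eff=1e-11 + 1e-13)              # after added mass
shift = f0 - f1    # adsorbed mass lowers the resonance frequency
```

This downward frequency shift is what dynamic-mode mass sensing measures; static-mode sensing instead reads the deflection caused by surface stress.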
[4217] vixra:1504.0073 [pdf]
Catalysis, Heavy Fermions, Solitons, Cold Fusion, Low Energy Nuclear Reactions (LENR) and all that
We consider in this paper an idea of a soliton and heavy fermion catalysis for cold fusion, similar to muon catalysis. This catalysis is achieved via quasi-chemical bonds for heavy fermions and solitons as well. We also consider a soliton catalysis (for KP-solutions), which is quite different; this kind of catalysis is similar to enzymatic catalysis. In the paper we construct a model for a cold fusion reactor based on Onsager-Prigogine irreversible thermodynamics. We give examples of several compounds with heavy fermions (heavy electrons) which are hydrogen storages. Samples of those compounds can (in principle) be cold fusion reactors if filled with deuterium. It is necessary to do several experiments (described in the paper) in order to find a proper compound which will be a base for a battery device. We also consider a case with cold plasma (e.g. in metals) filled with deuterium. Solitons in a plasma can catalyse fusion in two regimes: as quasiparticles and in an enzymatic-like regime.
[4218] vixra:1504.0067 [pdf]
Fitting Galaxy Rotation Curves Without Dark Matter
The notion is presented that fitting galaxy rotation curves is possible without the invocation of dark matter. Equations of motion similar to those used to describe the internal motions of terrestrial weather systems are applied to the rotation curves of galaxies. However, due to the four-dimensional nature of the equations, additional terms arise. One of these additional terms is particularly useful in matching the rotation curves of galaxies. The extra term presumably describes internal properties of the central galactic black hole. It also determines, in part, the galactic rotation curve, just as the motion of winds in a terrestrial weather system is determined, in part, by the rotation of the Earth.
[4219] vixra:1504.0057 [pdf]
Static Process Algebra as Pre-arithmetical Content for School Arithmetic
Parallel composition in a static setting introduces algebra, in the form of static process algebra, as a modelling tool at the level of primary school mathematics. Static process algebra may play the role of a pre-arithmetical algebra. Multi-dimensional counters can be used to measure the number of components in a static process expression.
[4220] vixra:1504.0043 [pdf]
Unifying the Galilei and the Special Relativity II: the Galilei Electrodynamics
Using the concept of absolute time introduced in a previous work [Carvalho], we define two coordinate systems for spacetime, the Galilean and the Lorentzian systems. The relation between those systems allows us to develop a tensor calculus that transfers Maxwell electrodynamics to the Galilean system. Then, by using a suitable Galilean limit, we show how this transformed Maxwell theory in the Galilei system results in the Galilei electrodynamics formulated by Lévy-Leblond and Le Bellac.
[4221] vixra:1504.0041 [pdf]
The Eightfold Way Model and the Cartan Subalgebra Revisited and its Implications for Nuclear Physics
It was shown recently by the author [1] that a proper study of the Eightfold Way model vis-a-vis the SU(3) model shows that the adjoint representation has certain unique features which provide it with a basic fundamentality that was missed in earlier interpretations. That paper [1] also showed that the Lie algebra gives a more basic and complete description of the particle physics reality than the corresponding group does. In this paper we revisit the Eightfold Way model and provide further support for the conclusions arrived at in Ref. [1]. This demands that a proper Cartan subalgebra be used for the description of the adjoint representation. This in turn allows us to make non-trivial statements about how the nucleus may be understood as being made up not only of protons and neutrons treated as indistinguishable particles, as in the SU(2)-isospin group, but also as another independent structure in which the nucleus behaves as if it is made up of protons and neutrons treated as distinguishable fermions.
[4222] vixra:1504.0032 [pdf]
Smallest Symmetric Supergroups Of the Abstract Groups Up to Order 37
Each finite group is a subgroup of some symmetric group; this is Cayley's theorem. We find the symmetric group of smallest order which hosts the finite groups in that sense for most groups of order less than 37. For each of these small groups this is made concrete by providing a permutation group with a minimum number of moved elements, in terms of a list of generators of the permutation group in reduced cycle notation.
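For cyclic groups the answer is classical: if n = p1^a1 · ... · pk^ak, the smallest symmetric supergroup of Z_n is S_m with m = p1^a1 + ... + pk^ak, realized by a product of disjoint cycles of coprime prime-power lengths. A sketch restricted to that cyclic case (not the paper's general search over all small groups):

```python
def min_sym_degree_cyclic(n):
    """Smallest m with Z_n embedding in S_m: for n = p1^a1 * ... * pk^ak the
    classical answer is p1^a1 + ... + pk^ak, since a product of disjoint
    cycles of coprime prime-power lengths has order n. Cyclic groups only."""
    if n == 1:
        return 0   # the trivial group needs no moved points
    total, d = 0, 2
    while n > 1:
        if n % d == 0:
            pk = 1
            while n % d == 0:   # extract the full prime power d^a
                pk *= d
                n //= d
            total += pk
        d += 1
    return total

# Z_6 = Z_2 x Z_3 embeds in S_5 via (12)(345); Z_8 needs a full 8-cycle.
degrees = {n: min_sym_degree_cyclic(n) for n in (6, 8, 12, 30)}
```

Non-cyclic groups need a case-by-case analysis of faithful actions on unions of coset spaces, which is what makes the paper's tabulation non-trivial.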
[4223] vixra:1504.0009 [pdf]
Two New Warp Drive Equations Based on Parallel $3+1$ Adm Formalisms in Contravariant and Covariant Forms Applied to the Natario Spacetime Geometry
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in 1994 and the Natario warp drive discovered in 2001. However, the major drawback concerning warp drives is the huge amount of negative energy density needed to sustain the warp bubble. In order to perform an interstellar space travel to a "nearby" star 20 light-years away in a reasonable amount of time, a ship must attain a speed of about 200 times faster than light. However, the negative energy density at such a speed is directly proportional to the factor 10^48, which is 1,000,000,000,000,000,000,000,000 times bigger in magnitude than the mass of the planet Earth! With the correct form of the shape function the Natario warp drive can overcome this obstacle, at least in theory. Other drawbacks that affect the warp drive geometry are the collisions with hazardous interstellar matter (asteroids, comets, interstellar dust etc.) that will unavoidably occur when a ship travels at superluminal speeds, and the problem of the horizons (causally disconnected portions of spacetime). The geometrical features of the Natario warp drive are the ones required to overcome these obstacles, also at least in theory.
Some years ago, from 2012 to 2014, a set of works appeared in the current scientific literature covering the Natario warp drive with an equation intended to be the original Natario equation; however, this equation does not obey the original 3+1 Arnowitt-Deser-Misner (ADM) formalism and hence cannot be regarded as the original Natario warp drive equation. Nevertheless, this new equation satisfies the Natario criteria for a warp drive spacetime, and as a matter of fact it must be analyzed under a new and parallel contravariant 3+1 ADM formalism. In this work we also introduce a second new Natario equation, using a parallel covariant 3+1 ADM formalism. We compare both the original and parallel 3+1 ADM formalisms, whether in contravariant or covariant form, using the approach of Misner-Thorne-Wheeler (MTW) and Alcubierre; while in the 3+1 spacetime the parallel equations differ radically from the original one, when we reduce the equations to a 1+1 spacetime all the equations become equivalent. We discuss the possibilities in General Relativity for these new equations.
[4224] vixra:1504.0007 [pdf]
Photonic Temperature & Realistic Non-Singular Cosmology
In our framework for 'realistic non-singular cosmology' we consider a more radical situation where the present photonic temperature of the universe is higher (for instance ~16 K) than that suggested by the microwave background. This leads to a minimal scale of ~0.08, and a maximum temperature of ~192 K, for an oscillating cosmology whose main protagonists are whole galaxies rather than stars, with an age of the current expansion of ~8.9 Gyr. We show that this scheme is quite compatible with the supernovae data for magnitudes and redshifts, provided that the Hubble fraction is ~0.55.
[4225] vixra:1503.0270 [pdf]
Supersymmetrization of Quaternionic Quantum Mechanics
Keeping in view the applications of SUSY and quaternion quantum mechanics, in this paper we have made an attempt to develop a complete theory for quaternionic quantum mechanics. We have discussed the N = 1, N = 2 and N = 4 supersymmetry in terms of one, two and four supercharges respectively, and it has been shown that N = 4 SUSY is the quaternionic extension of N = 2 complex SUSY.
[4226] vixra:1503.0262 [pdf]
Identifying the Gauge Fields of Gauge Theory Gravity
Geometric algebra is universal, encompassing all the tools of the mathematical physics toolbox, is background independent, and is the foundation of gauge theory gravity. Similarly, impedance is a fundamental concept of universal validity, is background independent, and the phase shifts generated by impedances are at the foundation of gauge theory. Impedance may be defined as a measure of the amplitude and phase of opposition to the flow of energy. Generalizing quantum impedances from photon and quantum Hall to all forces and potentials generates a network of both scale dependent and scale invariant impedances. This essay conjectures that these quantum impedances can be identified with the gauge fields of gauge theory gravity, scale dependent with the translation field and scale invariant with rotation.
[4227] vixra:1503.0257 [pdf]
A Quantum Logical Understanding of Bound States
This short note presents the structures of lattices and continuous geometries in the energy spectrum of a quantum bound state. Quantum logic, in von Neumann's original sense, is used to construct these structures. Finally, a quantum logical understanding of the emergence of discreteness is suggested.
[4228] vixra:1503.0246 [pdf]
School Algebra as a Surrounding Container for School Arithmetic
Algebra and arithmetic are contrasted from a perspective of school mathematics. The surrounding container view regarding algebra and arithmetic is formulated and defended. It is argued that an encounter with algebra may precede the introduction to arithmetic in school. In particular static process algebra is suggested as a theme which may play a useful role early on in the educational process heading towards the development of skills and competences in arithmetic.
[4229] vixra:1503.0240 [pdf]
Immunization Strategy Based on the Critical Node in Percolation Transition
The problem of finding a better immunization strategy for controlling the spread of an epidemic with limited resources has attracted much attention owing to its great theoretical significance and wide application. In this paper, we propose a novel and successful targeted immunization strategy based on percolation transition. Our strategy immunizes the fraction of critical nodes which lead to the emergence of the giant connected component. To test the effectiveness of the proposed method, we conduct experiments on several artificial networks and real-world networks. The results show that the proposed method outperforms the existing well-known methods, with 18% to 50% fewer immunized nodes for the same degree of immunization.
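A naive greedy variant of the idea, not the paper's algorithm, can be sketched as follows: repeatedly vaccinate the node whose removal most shrinks the giant connected component:

```python
def giant_size(adj, removed):
    """Size of the largest connected component, ignoring removed nodes (BFS)."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def immunize(adj, budget):
    """Greedy sketch of percolation-style targeted immunization: at each step
    vaccinate the node whose removal shrinks the giant component the most."""
    removed = set()
    for _ in range(budget):
        removed.add(min((n for n in adj if n not in removed),
                        key=lambda n: giant_size(adj, removed | {n})))
    return removed

# Two triangles joined through bridge node 'c': removing 'c' fragments the graph
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd', 'e'},
       'd': {'c', 'e'}, 'e': {'c', 'd'}}
vaccinated = immunize(adj, budget=1)
```

The greedy recomputation is expensive on large networks; identifying the critical nodes directly from the percolation transition, as the abstract describes, is precisely what makes a practical strategy out of this idea.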
[4230] vixra:1503.0218 [pdf]
Image Fusion Using the Dezert-Smarandache Theory (DSmT) for Remote Sensing Applications (Doctoral Thesis)
Thesis supervised by Pr. Driss Mammass, prepared at the Laboratoire Image et Reconnaissance de Formes-Systèmes Intelligents et Communicants (IRF-SIC), defended on 22 June 2013, Agadir, Morocco. The main objective of this thesis is to provide remote sensing with automatic tools for classification and for the detection of land-cover changes, useful for many purposes. In this context, we have developed two general fusion methods, used for image classification and change detection, that jointly use the spatial information obtained by the supervised ICM classification and the Dezert-Smarandache theory (DSmT), with new decision rules to overcome the inherent limitations of the decision rules existing in the literature. All programs of this thesis have been implemented in MATLAB and C, and preprocessing and visualization of results were carried out in ENVI 4.0; this allowed an accurate validation of the results in concrete cases. Both approaches are evaluated on LANDSAT ETM+ and FORMOSAT-2 images and the results are promising.
[4231] vixra:1503.0201 [pdf]
Lunar Drift Explains Lunar Eccentricity Rate
In this short letter, we argue that the observed +38 mm/yr secular Lunar drift from the Earth explains, to an admirable degree of agreement between theory and observations, the observed secular increase in the Lunar eccentricity. At present, the recession of the Moon from the Earth is no longer considered an anomaly, as it is believed to be well explained by the conventional physics of Lunar-Earth tides. However, the same is not true when it comes to the observed increase in the Lunar eccentricity, which is considered an anomaly requiring an explanation of the cause behind this phenomenon. We not only demonstrate an intimate connection between these two seemingly unrelated phenomena, but show that the relationship we deduce fits so well with observations that logic dictates the Lunar drift must surely be the cause of the secular increase in the Lunar eccentricity.
[4232] vixra:1503.0172 [pdf]
A Note on Quantum Entanglement in Dempster-Shafer Evidence Theory
Dempster-Shafer evidence theory is an efficient mathematical tool to deal with uncertain information. In this theory, the basic probability assignment (BPA) is the basic structure for the expression and inference of uncertainty. In this paper, quantum entanglement involved in Dempster-Shafer evidence theory is studied. A criterion is given to determine whether a BPA is in an entangled state or not. Based on that, the information volume involved in a BPA is discussed. The discussion shows that a non-quantum strategy (or observation) cannot obtain all the information contained in a BPA which is in an entangled state.
[4233] vixra:1503.0159 [pdf]
Simplified Calculation of Component Number in the Curvature Tensor
The number of independent components in the Riemann-Christoffel curvature tensor, being composed of the metric tensor and its first and second derivatives, varies considerably with the dimension of space. Since few texts provide an explicit derivation of the component number, we present here a simplified method using only the curvature tensor's antisymmetry property and the cyclicity condition. For generality and comparison, the method for computing the component number in both Riemannian and non-Riemannian space is presented.
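The Riemannian count can be checked numerically. The sketch below tallies the standard symmetries (antisymmetry in each index pair gives m = n(n−1)/2 independent pairs, symmetry under pair exchange gives m(m+1)/2, and the cyclic identity removes C(n,4) further components), reproducing the familiar n²(n²−1)/12:

```python
from math import comb

def riemann_components(n):
    """Independent components of the Riemann tensor in n dimensions:
    pair-antisymmetry -> m = n(n-1)/2 index pairs, pair-exchange symmetry
    -> m(m+1)/2, minus the C(n,4) cyclic (first Bianchi) constraints."""
    m = n * (n - 1) // 2
    return m * (m + 1) // 2 - comb(n, 4)   # equals n^2*(n^2-1)/12

counts = {n: riemann_components(n) for n in (2, 3, 4)}   # 1, 6, 20
```

So a 2D space has a single curvature component (the Gaussian curvature), while 4D spacetime has 20, of which the Ricci tensor captures only 10.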
[4234] vixra:1503.0150 [pdf]
Taming the Probability Amplitude
We show that the complex number structure of the probability allows one to express explicitly the relationship between the energy function H and the Laplace principle of equal ignorance (LPEI). This nonlinear relationship, reflecting the measurement properties of the considered systems, together with the principle of causality and the Newton principle separating the dynamics from initial conditions, leads to the linear Schrödinger equation with the Max Born interpretation, for micro and macro systems!
[4235] vixra:1503.0139 [pdf]
Galileo's Belated Gravity Experiment: The Small Low-Energy Non-Collider
Galileo proposed a simple gravity experiment that has yet to be performed. Suppose we drop a test mass into a hole through the center of a larger source mass. What happens? Using a modified Cavendish balance or an orbiting satellite, modern technology could have revealed the answer decades ago. General Relativity is widely regarded as being supported by empirical evidence throughout its accessible range. Not commonly realized is that, with regard to gravity-induced motion, this evidence excludes the interior regions of material bodies over this whole range. If only to fill this huge gap in our empirical knowledge of gravity, Galileo’s experiment ought to be performed without further delay.
[4236] vixra:1503.0138 [pdf]
Space Generation Model of Gravity, Cosmic Numbers, and Dark Energy
This is an updated and augmented version of the previously published paper, Space Generation Model of Gravitation and the Large Numbers Coincidences. The basis of the gravity model is that motion sensing devices—most notably accelerometers and clocks—consistently tell the truth about their state of motion. When the devices are attached to a uniformly rotating body this is undoubtedly true. Uniform rotation is sometimes referred to as an example of stationary motion. It is proposed here, by analogy, that gravitation is also an example of stationary motion. Einstein used the rotation analogy to deduce spacetime curvature. Similar logic suggests that in both cases the effects of curvature are caused by motion. A key distinction is that, unlike rotation, gravitational motion is not motion through space, but rather motion of space. Extending the analogy further, gravitation is conceived as a process involving movement into a fourth space dimension. Space and matter are dynamic, continuous extensions of each other, which implies that the average cosmic density is a universal constant. Assuming this to be the case leads to a cosmological model according to which ratios such as the gravitational to electrostatic force, electron mass to proton mass, Bohr radius to cosmic radius, and constants such as the fine structure constant, Hubble constant, the saturation density of nuclear matter and the energy density of the cosmic background radiation are all very simply related to one another. Measured values of these numbers are discussed in sufficient detail to facilitate judging whether or not the found and predicted relationships are due to chance. The notorious "cosmological constant" (dark energy) problem is also addressed in light of the new gravity model. Finally, it is emphasized that the model lends itself to a relatively easy laboratory test.
[4237] vixra:1503.0136 [pdf]
Rekenen-Informatica: Informatics for Primary School Mathematics
A number of issues are listed which arise within primary school mathematics and where a perspective from informatics may shed new light on the matter. Together these points show that there are many different possible connections between informatics and primary school mathematics, each of which merits further investigation and clarification. A rationale for further investigation of these issues is given.
[4238] vixra:1503.0131 [pdf]
A New Information Unit
It is well known that the "bit" is the unit in information theory to measure information volume with Shannon entropy. However, one assumption in using the bit as the information unit is that the hypotheses are mutually exclusive. This is also the basic assumption in probability theory, meaning that two events cannot happen simultaneously. However, the assumption is violated in cases such as the "entangled state". A typical example is Schrödinger's cat, where a cat may be simultaneously both alive and dead. In this situation, the bit is not suitable to measure the information volume. To address this issue, a new information unit, called "Deng" and abbreviated as "D", is proposed based on Deng entropy. The proposed information unit may be used in entangled information processing and quantum information processing.
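Assuming the definition of Deng entropy commonly given in the literature, E_d = −Σ m(A) log₂(m(A)/(2^|A| − 1)), the contrast with the bit can be sketched as follows: on singleton hypotheses it coincides with Shannon entropy, but mass on a multi-element set carries extra volume:

```python
import math

def shannon_bits(p):
    """Shannon entropy in bits for a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment (frozenset -> mass),
    assuming the usual definition from the literature:
    E_d = -sum m(A) * log2( m(A) / (2^|A| - 1) )."""
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

s = shannon_bits([0.5, 0.5])                                   # 1 bit
h1 = deng_entropy({frozenset('a'): 0.5, frozenset('b'): 0.5})  # also 1
# Mass on a two-element set is worth log2(3), not 0 as Shannon would give
# a certain outcome:
h2 = deng_entropy({frozenset('ab'): 1.0})
```

The nonzero h2 for a "certain" but non-exclusive assignment is the situation the abstract argues the bit cannot measure.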
[4239] vixra:1503.0130 [pdf]
New Version of General Relativity that Unifies Mass and Gravity in a Common 4D Higgs Compatible Theory
Recent enigmas of astrophysics such as dark energy and the accelerating universe make it necessary to update General Relativity. A thorough examination of the original Einstein Field Equations (EFE) highlights three inconsistencies concerning the nature of spacetime. Here we solve these inconsistencies. As a consequence, this article proposes a Higgs-compatible 4D expression of mass, m = f(x,y,z,t), and a new explanation of gravity based on Le Sage push gravity. This paper is interesting and important because it touches the weakest nerve of General Relativity by asking "how does mass curve spacetime?". Moreover, this article is supported by several mathematical demonstrations, such as a new version of the Newton law, a new version of the Schwarzschild metric, and a 4D rewriting of the energy-momentum tensor and the Einstein constant.
[4240] vixra:1503.0109 [pdf]
The Restoration of Locality: the Axiomatic Formulation of the Modified Quantum Mechanics
From the dichotomy "nonlocality vs non-realism", which is the consequence of the Bell inequalities (BI), we choose non-realism. We present here the modified quantum mechanics (modQM) in axiomatic form. ModQM was introduced in [5] and we shall show its non-realism in the description of an internal measurement process. ModQM allows the restoration of locality, since BI cannot be derived in it. In modQM it is possible to solve the measurement problem, the collapse problem, and the problem of a local model for EPR correlations (see [5]). ModQM is a unique explicit realization of non-realism in QM. ModQM should be preferred as an alternative to the standard QM mainly because it restores locality.
[4241] vixra:1503.0103 [pdf]
Consistent Faith of a Physicist: God's Grace Within Physics
Without forcing anyone to accept my points, I present a glimpse of my consistent faith to the scientific community of orthodox believers. Because I stay within the dogmas of the Orthodox Christian Church, I suggest reading the text without criticism. It is simply the beautiful and meaningful picture of my personal world. Please enjoy it.
[4242] vixra:1503.0088 [pdf]
Tail Properties and Asymptotic Distribution for Extreme of LGMD
We introduce the logarithmic generalized Maxwell distribution, an extension of the generalized Maxwell distribution. Some interesting properties of this distribution are studied, and the asymptotic distribution of the partial maximum of an independent and identically distributed sequence from the logarithmic generalized Maxwell distribution is obtained. The expansion of the limit distribution of the normalized maxima is established under the optimal norming constants, which shows the rate of convergence of the distribution of the normalized maximum to the extreme limit.
[4243] vixra:1503.0075 [pdf]
The Logical Difference in Quantum Mathematics Separating Pure States from Mixed States
<b>Abstract:</b> I give a short explanation of how the quantum mathematics representing pure states is logically distinct from the mathematics of mixed states, and further, how standard quantum theory easily shows itself to contain logical independence. This work is part of a project researching logical independence in quantum mathematics, for the purpose of advancing a complete theory of quantum randomness.<br><br><b>Keywords:</b> quantum mechanics, quantum indeterminacy, quantum information, prepared state, wave packet, unitary, orthogonal, scalar product, mathematical logic, arithmetic, formal system, axioms, Soundness Theorem, Completeness Theorem, logical independence, mathematical undecidability, semantics, syntax.
[4244] vixra:1503.0074 [pdf]
Evidence Combination from an Evolutionary Game Theory Perspective
Dempster-Shafer evidence theory is a primary methodology for multi-source information fusion, since it allows one to deal with uncertain information. The theory is based on Dempster's rule of combination for synthesizing multiple evidences from various information sources. In some cases, however, counter-intuitive results may be obtained from Dempster's rule of combination. Many improved or new methods have been proposed to suppress these counter-intuitive results, based on a physical perspective that minimizes the loss or distortion of the original information. In this paper, inspired by evolutionary game theory, a biological and evolutionary perspective is taken to study the combination of evidences. An evolutionary combination rule (ECR) is proposed to mimic the evolution of propositions in a given population and finally find the biologically most supported proposition, called here the evolutionarily stable proposition (ESP). Our proposed ECR provides new insight into the combination of multi-source information. Experimental results show that the proposed method is rational and effective.
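For context, the classical Dempster's rule that ECR is compared against normalizes the conjunctive combination of two basic probability assignments by the non-conflicting mass. A minimal sketch (the example masses are invented for illustration):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs (dicts mapping frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            conflict += a * b  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Normalize by the non-conflicting mass 1 - K.
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"a"}): 0.7, frozenset({"b"}): 0.3}
print(dempster_combine(m1, m2))  # {'a'}: 0.70/0.82, {'b'}: 0.12/0.82
```

High-conflict inputs are exactly where this normalization produces the counter-intuitive results the abstract mentions, which is what motivates alternatives such as ECR.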
[4245] vixra:1503.0063 [pdf]
Comments on Overdetermination of Maxwell's Equations
Maxwell's equations appear overdetermined: they have six unknowns but eight equations. It is generally believed that Maxwell's divergence equations are redundant, both being regarded as initial conditions for the curl equations. Because of this explanation, the two divergence equations are usually not solved in computational electromagnetics. We identify a circular logical fallacy in this explanation: the two divergence equations are not redundant but fundamental, and they cannot be ignored in computational electromagnetics.
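The redundancy argument under dispute is the standard one: taking the divergence of each curl equation, and using charge conservation $\partial_t\rho + \nabla\cdot\mathbf{J} = 0$, one finds that the divergence constraints propagate in time,

```latex
\nabla\cdot(\nabla\times\mathbf{E}) = 0
  \;\Longrightarrow\; \frac{\partial}{\partial t}\,(\nabla\cdot\mathbf{B}) = 0,
\qquad
\nabla\cdot(\nabla\times\mathbf{B}) = 0
  \;\Longrightarrow\; \frac{\partial}{\partial t}\!\left(\nabla\cdot\mathbf{E}-\frac{\rho}{\varepsilon_0}\right) = 0,
```

so $\nabla\cdot\mathbf{B}=0$ and $\nabla\cdot\mathbf{E}=\rho/\varepsilon_0$ hold for all time if they hold at $t=0$. It is precisely this "initial condition" reading that the paper argues is circular.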
[4246] vixra:1503.0058 [pdf]
On the Natural Logarithm Function and its Applications
In the present article, we derive new integral representations for the natural logarithm function, the Euler-Mascheroni constant, the natural logarithm of the Riemann zeta function, and the first derivative of the Riemann zeta function.
[4247] vixra:1503.0032 [pdf]
New Integral Representation for the Inverse Sine Function, the Ratio of Catalan's Constant to Archimedes' Constant, and Other Functions
In the present article, we develop infinite series representations for the inverse sine function and other functions. Our main goal is to obtain the hypergeometric representation of Catalan's constant and the hyperbolic sine function, and a new integral representation for the inverse sine function.
[4248] vixra:1503.0024 [pdf]
Switch or Not? A Simulation of the Monty Hall Problem
The Monty Hall problem is a brain teaser originally posed in a letter by Steve Selvin to The American Statistician in 1975. To find out the principle behind the conclusion given by Marilyn vos Savant, and to determine whether contestants always gain an advantage by switching their choice, we have made a simulation of this problem.
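A minimal version of such a simulation, assuming the standard rules (the host always opens a non-winning door other than the contestant's pick), might look like this:

```python
import random

def monty_hall(switch, trials=100_000, rng=random.Random(0)):
    """Estimate the win probability of the stay or switch strategy."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's initial pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```

The simulation reproduces vos Savant's answer: switching wins about two times out of three, staying only one time out of three.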
[4249] vixra:1503.0006 [pdf]
On the Antipodal Symmetry and Seismic Activity.
The article discusses the global aspects of the (almost) antipodal symmetry on Earth, which should have been widely known by now but has somehow managed to stay unnoticed.
[4250] vixra:1502.0246 [pdf]
Second Time Dimension, Hidden in Plain Sight
In this paper I postulate the existence of a second time dimension, making five dimensions: three space dimensions and two time dimensions. I postulate some basic properties based on a smoking gun and then use these basic properties to derive the time dilation equations of Special Relativity, which helps define additional properties of the second time dimension. The conclusion is that the Universe has five dimensions but that we only perceive four. Furthermore, I demonstrate that Newton's second law of motion still holds if you ignore the time contribution from the second time dimension, and as a result of this paper we understand a little more about the very nature of time. I believe this to be quite a significant paper.
[4251] vixra:1502.0245 [pdf]
A Microscopic Theory of the Neutron (I)
A microscopic theory of the neutron, consisting of a neutron model constructed from key relevant experimental observations together with first-principles solutions for the basic properties of the model neutron, is proposed within a framework consistent with the Standard Model. The neutron is composed of an electron e and a proton p that are separated at a distance r_1\sim 10^{-18} m, and are in relative orbital angular motion and Thomas precession highly relativistically, with their reduced mass moving along a quantised l=1 circular orbit of radius r_1 about their instantaneous mass centre. The associated rotational energy flux or vortex has an angular momentum (1/2)\hbar and is identifiable as a (confined) antineutrino. The particles e,p are attracted to one another predominantly by a central magnetic force produced as a result of the particles' relative orbital, precessional and intrinsic angular motions. The interaction force (resembling the weak force), potential (resembling the Higgs field), and a corresponding excitation Hamiltonian (H_I), among others, are derived directly from first-principles laws of electromagnetism, quantum mechanics and relativistic mechanics within a unified framework. In particular, the equation for (4/3)\pi r_1^3 H_I, which is directly comparable with the Fermi constant G_F, is predicted as G_F=(4/3)\pi r_1^3 H_I =A_o C_{01} /\gamma_e \gamma_p, where A_o=e^2 \hbar^2/12\pi\epsilon_0 m_e^0 m_p^0 c^2, m_e^0, m_p^0 are the e,p rest masses, C_{01} is a geometric factor, and \gamma_e, \gamma_p are the Lorentz factors. A quantitative solution for a stationary meta-stable neutron is found to exist at the extremal point r_{1m}=2.513 \times 10^{-18} m, at which G_F is a minimum (whence the neutron lifetime is a maximum) and is equal to the experimental value. Solutions for the neutron spin (1/2), apparent magnetic moment, and the intermediate vector boson masses are also given in this paper.
[4252] vixra:1502.0242 [pdf]
Conundrums Overlooked in Physics for Evermore ...
Another ... friendly and creative ... author-editor interaction is presented in which several basic conundrums in physics are mentioned, conundrums no physicist seems to care about ...
[4253] vixra:1502.0236 [pdf]
Impact of Preference and Equivocators on Opinion Dynamics with Evolutionary Game Framework
Opinion dynamics, which aims to understand the evolution of collective behavior through various interaction mechanisms of opinions, represents one of the greatest challenges in natural and social science. To elucidate this issue, the binary opinion model, in which each agent holds one of two opinions, provides a useful framework. Inspired by realistic observations, here we propose two basic interaction mechanisms for the binary opinion model: the so-called BSO model, in which players benefit from holding the same opinion, and the BDO model, in which players benefit from holding different opinions. In terms of these two basic models, the combined effect of opinion preference and equivocators on the evolution of binary opinions is studied under the framework of evolutionary game theory (EGT), where the replicator equation (RE) is employed to mimic the evolution of opinions. By means of numerous simulations, we show the theoretical equilibrium states of binary opinion dynamics and mathematically analyze the stability of each equilibrium state.
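The replicator-equation framework can be sketched for two opinions; the payoff matrices below are illustrative two-strategy stand-ins for the BSO and BDO interactions, not the paper's exact models:

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the two-strategy replicator equation
    dx/dt = x(1-x)(f_A - f_B), where x is the fraction holding opinion A."""
    fA = payoff[0][0] * x + payoff[0][1] * (1 - x)  # fitness of opinion A
    fB = payoff[1][0] * x + payoff[1][1] * (1 - x)  # fitness of opinion B
    return x + dt * x * (1 - x) * (fA - fB)

def evolve(x0, payoff, steps=5000):
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return x

BSO = [[1, 0], [0, 1]]  # reward for holding the same opinion
BDO = [[0, 1], [1, 0]]  # reward for holding different opinions
print(evolve(0.6, BSO))  # majority opinion takes over, x -> 1
print(evolve(0.6, BDO))  # opinions coexist, x -> 1/2
```

Under the same-opinion payoff the initial majority absorbs the population, while under the different-opinion payoff the dynamics settle at an even split: the two qualitative equilibrium behaviors the abstract contrasts.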
[4254] vixra:1502.0231 [pdf]
A Swot Analysis of Instruction Sequence Theory
After 15 years of development of instruction sequence theory (IST), a SWOT analysis of that project is long overdue. The paper provides a comprehensive SWOT analysis of IST based on a recent proposal concerning the terminology for the theory and applications of instruction sequences.
[4255] vixra:1502.0228 [pdf]
A Terminology for Instruction Sequencing
Instruction sequences play a key role in computing and have the potential of becoming more important in the conceptual development of informatics, in addition to their existing role in computer technology and machine architectures. After 15 years of development of instruction sequence theory, a more robust and outreaching terminology is needed for it, one which may support further development. Instruction sequencing is the central concept around which a new family of terms and phrases is developed.
[4256] vixra:1502.0225 [pdf]
A Mathematical Approach to Physical Realism
I propose to ask mathematics itself for the possible behaviour of nature, with the focus on starting with a most simple realistic model, employing a philosophy of investigation rather than invention when looking for a unified theory of physics. Performing a 'mathematical experiment' of putting a minimal set of conditions on a general time-dependent manifold results in mathematics itself inducing a not too complex 4-dimensional object similar to our physical spacetime, with candidates for gravitational and electromagnetic fields emerging on the tangent bundle. This suggests that the same physics might govern spacetime not only on a macroscopic scale, but also on the microscopic scale of elementary particles, with possible junctions to quantum mechanics.
[4257] vixra:1502.0222 [pdf]
Deng Entropy: a Generalized Shannon Entropy to Measure Uncertainty
Shannon entropy is an efficient tool to measure uncertain information. However, it cannot handle the more uncertain situation in which uncertainty is represented by a basic probability assignment (BPA) instead of a probability distribution, under the framework of Dempster-Shafer evidence theory. To address this issue, a new entropy, named Deng entropy, is proposed. Deng entropy is a generalization of Shannon entropy: if the uncertain information is represented by a probability distribution, the degree of uncertainty measured by Deng entropy is the same as that measured by Shannon entropy. Some numerical examples are given to show the efficiency of Deng entropy.
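Deng entropy has the closed form $E_d(m) = -\sum_A m(A)\,\log_2\!\big(m(A)/(2^{|A|}-1)\big)$, summed over focal elements $A$. A small sketch (the BPA values are made-up examples):

```python
from math import log2

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment.
    bpa maps each focal element (a frozenset) to its mass m(A)."""
    return -sum(m * log2(m / (2**len(A) - 1))
                for A, m in bpa.items() if m > 0)

# Singleton focal elements: Deng entropy reduces to Shannon entropy.
probabilistic = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(deng_entropy(probabilistic))  # 1.0 bit, as for Shannon entropy

# Mass on a composite set yields a larger uncertainty measure.
vacuous = {frozenset({"a", "b"}): 1.0}
print(deng_entropy(vacuous))  # log2(3), about 1.585
```

The second example shows the generalization at work: a probability distribution cannot even represent the vacuous assignment, while Deng entropy assigns it a strictly larger uncertainty than any distribution over the two singletons.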
[4258] vixra:1502.0215 [pdf]
The Origin of the Solar System in the Field of a Standing Sound Wave
According to the planetary origin conceptual model proposed in this paper, the protosun centre of the pre-solar nebula exploded, resulting in a shock wave that passed through it and then returned to the centre, generating a new explosion and shock wave. Recurrent explosions in the nebula resulted in a spherical standing sound wave, whose antinodes concentrated dust into rotating rings that transformed into planets. The extremely small angular momentum of the Sun and the tilt of its equatorial plane were caused by the asymmetry of the first, most powerful explosion. Differences between inner and outer planets are explained by the migration of solid matter, while the Oort cloud is explained by the division of the pre-solar nebula into a spherical internal nebula and an expanding spherical shell of gas. The proposed conceptual model can also explain the origin and evolution of exoplanetary systems and may be of use in searching for new planets.
[4259] vixra:1502.0204 [pdf]
Personal Multithreading: Account Snippet Proposals and Missing Account Indications
A modular way of making progress concerning personal multithreading is suggested: collecting account snippet proposals and missing account indications without an immediate need for integration into a coherent account. Six account snippets for personal multithreading are proposed, and four options for further contributions, that is, missing account indications, on personal multithreading are listed.
[4260] vixra:1502.0192 [pdf]
Physical Dimension of Sciences
I propose a classification of scientific fields by the place that their typical objects occupy in a three-dimensional space of physical dimensions, length, mass and time, on a logarithmic scale. The classification includes some areas of physics, chemistry, biology and geology, as well as history. Natural interdisciplinary connections are established, as well as gaps: regions of the space in which there are no objects of modern science.
[4261] vixra:1502.0174 [pdf]
The Units of Planck's Constant are not [ J x s ].
The challenge of this essay was to demonstrate that the units of Planck's constant are not [J x s]. Borrowing from the logic of calibration, an attempt was made to find a complete set of small-scale measuring sticks for each of time, space, mass, charge and temperature. This however was not possible unless we let the units of Planck's constant be [J]. It appears that Planck et al. forgot to incorporate measure-time into the famous energy equation, E = hν. The extra unit of [s] that is normally assigned to h actually belongs to a previously hidden measure-time variable. This logic suggests that Planck's constant is an energy constant and not an action constant. After correcting this error, a complete set of unit measuring sticks, calibrated to the time scale of the cycle, was calculated. A self-similar unit set was then calibrated to the time scale of the second. The scalability and self-similarity of these unit sets opens the door to the fractal paradigm, one of the main motivations for this research. This small change to the units of Planck's constant has far-reaching implications. All equations that contain h need to be reevaluated. All interpretations founded in unit analysis need to be reexamined. Much work still needs to be done to vindicate this approach.
[4262] vixra:1502.0171 [pdf]
From Physicality to Mathematicality, to Informaticality, to Consciousness, and Ontology, Or, What Mathematics Cannot Describe is Ontology
The extraordinary mathematicality of physics is also shown by the dimensionlessness of the Planck spacetime and mass. At the same time, the Planck granularity of spacetime shows that physics can be simulated by a binary computer, so physics is informational. But mathematics is not everything in physics: consciousness cannot be explained by mathematics alone. The reason that quantum gravity (QG) does not yet exist is the lack of knowledge about spacetime as background. Quantum mechanics is not complete, because its foundational principle is not yet known, and because consciousness and QG are not yet explained. Free will and quantum randomness are similarly unexplained phenomena. Even philosophy is important in physics, because what mathematics cannot describe in physics is ontology. And intuition affects what is mainstream physics. Simplicity and clarity in physics and in the mathematics of physics are important not only for beginners, but also for the development of fundamental physics. The uncertainty principle is so simple that maybe it could be derived without the use of wave functions. Much can still be done regarding the simplicity and clarity of fundamental physics.
[4263] vixra:1502.0160 [pdf]
History of Problem, Cooperstock is Wrong.
Dear readers, the picture of physics leaves you in confusion. The prime example is the refutation of black holes in 2014, Phys. Lett. B 738, 61-7, by Laura, a Professor. I have arguments against her paper, but perhaps I am the only one who is worried. They keep bringing forward things which were thought to be refuted and over-refuted. Another example of mind blowing is Dr. Cooperstock. First his attempt was to deny the Standards of Metrology (within the "Energy Localization hypothesis"). I have arguments against his idea. Then he came up with another mind abuse: the absence of the long-detected Dark Matter. In the following I am defending Dark Matter from the nihilistic aggression of Dr. Cooperstock. Speaking of nihilism, the grimmest picture is in the Quantum Mechanics of Niels Bohr. In 2015 they "proved" in the elitist "Nature" that Schrödinger's cat is real. Thus, the world does not exist: a thing cannot both be and not be. It is very convenient now: if even a grain of sand is a crazy hallucination (like the "proven" "reality" of the undead cat), then this non-existent grain needs no divine (loved, but more often hated) Creator. The reason for the delusion: they have missed intelligent factors, e.g. evil spirits, which very often act on the measuring device. Recall the false alarms in atomic armies.
[4264] vixra:1502.0133 [pdf]
An Accumulative Model for Quantum Theories
For a general quantum theory that is describable by a path integral formalism, we construct a mathematical model of an accumulation-to-threshold process whose outcomes give predictions nearly identical to those of the given quantum theory. The model is neither local nor causal in spacetime, but is both local and causal in a non-observable path space. The probabilistic nature of the squared wavefunction is a natural consequence of the model. We verify the model with simulations, and we discuss possible discrepancies from conventional quantum theory that might be detectable via experiment. Finally, we discuss the physical implications of the model.
[4265] vixra:1502.0127 [pdf]
Muonic Hydrogen and the Proton Radius Puzzle. Theoretical Model of Repulsive Interaction, Yukawa Type. Mediation of Gravitino by Decay, a W Boson and Leptons (Virtual Particles). Exact Theoretical Calculation of the Proton Radius of Muonic Hydrogen at
The extremely precise extraction of the proton radius by Pohl et al. from the measured energy difference between the 2P and 2S states of muonic hydrogen disagrees significantly with that extracted from electronic hydrogen or elastic electron-proton scattering. This is the proton radius puzzle (R. Pohl, R. Gilman, G. A. Miller, K. Pachucki, arXiv:1301.0905v2 [physics.atom-ph], 30 May 2013). In this paper we use the fundamental equation that equates the electromagnetic force and gravity, obtained in one of our previous works, which derives from gravity the elementary electric charge and the masses of the electron and gravitino, as an equation dependent on the canonical partition function of the imaginary parts of the nontrivial zeros of the Riemann zeta function and the Planck mass. This equation directly implies the existence of a repulsive gravitational force at very short distances. The decay of gravitinos into a W boson and a lepton (muon) would be the phenomenon responsible for this repulsive force, which would make the radius of the proton in the muonic hydrogen atom the one obtained experimentally, 8.4087 x 10^-16 m. Likewise, the long half-life of the very massive gravitinos would allow them to penetrate the proton, where they finally decay into the W boson and the lepton. The invariance of the proton, i.e. its non-transformation into a neutron, would be the consequence of an effect of virtual particles: gravitinos, the W boson, the muon, and even the X, Y bosons of SU(5) grand unification theories. The canonical partition function of the zeros of the Riemann zeta function is itself a sum of Yukawa-type potentials, and therefore repulsive by the exchange of a vector boson.
The absence of singularities of black holes, surely, is an effect of this repulsive force. For this reason, increasing the area of a black hole can be interpreted physically as the action of this repulsive force.
[4266] vixra:1502.0122 [pdf]
Extending du Bois-Reymond’s Infinitesimal and Infinitary Calculus Theory Part 1 Gossamer Numbers
The discovery of what we call the gossamer number system ∗G, as an extension of the real numbers, includes an infinitesimal and infinitary number system; by using 'infinite integers', an isomorphic construction to the reals by solving algebraic equations is given. We believe this is a totally ordered field. This could be an equivalent construction of the hyperreals. The continuum is partitioned: 0 < Φ+ < R+ + Φ < +Φ^{-1} < ∞.
[4267] vixra:1502.0121 [pdf]
Extending du Bois-Reymond’s Infinitesimal and Infinitary Calculus Theory Part 2 the Much Greater Than Relations
An infinitesimal and infinitary number system, the gossamer numbers, is fitted to du Bois-Reymond's infinitary calculus, redefining the magnitude relations. We connect the past symbol relations much-less-than and much-less-than-or-equal-to with the present little-o and big-O notation, which have identical definitions. As these definitions are extended, we also extend little-o and big-O, which are defined in the gossamer numbers. Notation for a reformed infinitary calculus, calculation at a point, is developed. We proceed with the introduction of an extended infinitary calculus.
[4268] vixra:1502.0120 [pdf]
Extending du Bois-Reymond’s Infinitesimal and Infinitary Calculus Theory Part 3 Comparing Functions
An algebra for comparing functions at infinity with infinireals, comprising infinitesimals and infinities, is developed, in which the unknown relation is solved for. Generally, we consider positive monotonic functions f and g, arbitrarily small or large, with relation z: f z g. In general we require f, g, f − g and f/g to be ultimately monotonic.
[4269] vixra:1502.0119 [pdf]
Extending du Bois-Reymond’s Infinitesimal and Infinitary Calculus Theory Part 4 the Transfer Principle
Between the gossamer numbers and the reals, an extended transfer principle founded on approximation is described, with transference between different number systems in both directions, and within the number systems themselves. As a great variety of transfers is possible, a mapping notation is given. In ∗G we find equivalence between a limit with division and a transfer ∗G → R with comparison.
[4270] vixra:1502.0118 [pdf]
Extending du Bois-Reymond’s Infinitesimal and Infinitary Calculus Theory Part 5 Non-Reversible Arithmetic and Limits
We investigate and define non-reversible arithmetic in ∗G and the real numbers: approximation of an argument of magnitude is arithmetic. For non-reversible multiplication we define a logarithmic magnitude relation. We apply the much-greater-than relation in the evaluation of limits, and consider L'Hopital's rule with infinitesimals and infinities, in a comparison f z g form.
[4271] vixra:1502.0117 [pdf]
Extending du Bois-Reymond’s Infinitesimal and Infinitary Calculus Theory Part 6 Sequences and Calculus in ∗G
With the partition of positive integers and positive infinite integers, it follows naturally that sequences are similarly partitioned, as sequences are indexed on integers. General convergence of a sequence at infinity is investigated, and monotonic sequences are tested by comparison. Promotion of a ratio of infinite integers to non-rational numbers is conjectured. Primitive calculus definitions within infinitary calculus, and epsilon-delta proofs involving arguments of magnitude, are considered.
[4272] vixra:1502.0115 [pdf]
Convergence Sums at Infinity with New Convergence Criteria
We develop sum and integral convergence criteria, leading to a representation of the sum or integral as a point at infinity, and apply du Bois-Reymond's comparison-of-functions theory where it was previously thought to have no application. Known convergence tests are alternatively stated and some are reformed. Several new convergence tests are developed, including an adaptation of L'Hopital's rule. The most general, the boundary test, is stated. We thereby give an overview of a new field we call 'convergence sums'. A convergence sum is essentially a strictly monotonic sum or integral where one of the end points after integrating is deleted, resulting in a sum or integral at a point.
[4273] vixra:1502.0112 [pdf]
Rearrangements of Convergence Sums at Infinity
Convergence sums theory is concerned with monotonic series testing. At face value this may seem a limitation but, by applying rearrangement theorems at infinity, non-monotonic sequences can be rearranged into monotonic sequences. The resultant monotonic series are convergence sums. The classes of convergence sums are greatly increased by the additional versatility applied to the theory.
[4274] vixra:1502.0111 [pdf]
Ratio Test and a Generalization with Convergence Sums
For positive series convergence sums we generalize the ratio test in ∗G, the gossamer numbers. Via a transfer principle, we construct variations within the tests. Most significantly, we connect the generalization to the boundary test and show the two to be equivalent. Hence the boundary test includes the generalized tests: the ratio test, Raabe's test, Bertrand's test and others.
[4275] vixra:1502.0110 [pdf]
The Boundary Test for Positive Series
With convergence sums, a universal comparison test for positive series is developed, which compares a positive monotonic series with an infinity of generalized p-series. The boundary between convergence and divergence is an infinity of generalized p-series. This is a rediscovery and reformation of a 175-year-old convergence/divergence test.
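The 175-year-old test being rediscovered is presumably the classical de Morgan-Bertrand logarithmic scale, in which the generalized p-series family

```latex
\sum_{n \ge n_0} \frac{1}{n \,\ln n \,\ln\ln n \cdots \ln^{(k-1)} n \,\bigl(\ln^{(k)} n\bigr)^{p}}
```

converges if and only if $p > 1$, for every depth $k$ of iterated logarithm $\ln^{(k)}$. The boundary between convergence and divergence is approached, but never attained, by this infinite family, which is what makes it a natural comparison scale for a universal test.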
[4276] vixra:1502.0102 [pdf]
On the Two Possible Interpretations of Bell Inequalities.
It is argued that the lesson we should learn from the Bell inequalities (BI) is not that Quantum Mechanics (QM) is nonlocal, but that QM contains an error which must be corrected.
[4277] vixra:1502.0088 [pdf]
The Principle of Anti-Superposition in QM and the Local Solution of the Bell’s Inequality Problem
In this paper we identify the superposition principle as a main source of problems in QM (measurement, collapse, non-locality, etc.). The superposition principle for individual systems is replaced here by the anti-superposition principle: no non-trivial superposition of states is a possible individual state (for ensembles the superposition principle remains true). The modified QM is based on the anti-superposition principle and on a new type of probability theory (Extended Probability Theory [1]), which allows reversible Markov processes as models for QM. In the modified QM the measurement is a process inside QM, and the concept of an observation of the measuring system is defined. The outcome value is an attribute of the ensemble of measured systems. The collapse of the state is replaced by the selection process. We show that the derivation of Bell's inequalities is then impossible, and thus QM remains a local theory. Our main results are: the locality of the modified QM, the local explanation of EPR correlations, the non-existence of wave-particle duality, and the solution of the measurement problem. We show that QM can be understood as a new type of statistical mechanics of many-particle systems.
[4278] vixra:1502.0082 [pdf]
Cooperstock is Wrong: the Dark Matter is Necessary
An example of mind blowing is Dr. Cooperstock. First his attempt was to deny the Standards of Metrology (within the "Energy Localization hypothesis"). I have arguments against his idea. Then he came up with another mind abuse: the absence of the long-detected Dark Matter. In the following I am defending Dark Matter from the nihilistic aggression of Dr. Cooperstock. Speaking of nihilism, the grimmest picture is in the Quantum Mechanics of Niels Bohr. In 2015 they "proved" in the elitist "Nature" that Schrödinger's cat is real. Thus, the world does not exist: a thing cannot both be and not be. It is very convenient now: if even a grain of sand is a crazy hallucination (like the "proven" "reality" of the undead cat), then this non-existent grain needs no divine (loved, but more often hated) Creator. The reason for the delusion: they have missed intelligent factors, e.g. evil spirits, which very often act on the measuring device. Recall the false alarms in atomic armies.
[4279] vixra:1502.0079 [pdf]
Channel Access-Aware User Association with Interference Coordination in Two-Tier Downlink Cellular Networks
The diverse transmit powers of the base stations (BSs) in a multi-tier cellular network, on one hand, lead to uneven distribution of the traffic loads among different BSs when received signal power (RSP)-based user association is used. This causes underutilization of the resources at low-power BSs. On the other hand, strong interference from high-power BSs affects the downlink transmissions to the users associated with low-power BSs. In this context, this paper proposes a channel access-aware (CAA) user association scheme that can simultaneously enhance the spectral efficiency (SE) of downlink transmission and achieve traffic load balancing among different BSs. The CAA scheme is a network-assisted user association scheme that requires traffic load information from different BSs in addition to the channel quality indicators. We develop a tractable analytical framework to derive the SE of downlink transmission to a user who associates with a BS using the proposed CAA scheme. To mitigate the strong interference, almost blank subframe (ABS)-based interference coordination is exploited first in the macrocell tier and then in the smallcell tier. The performance of the proposed CAA scheme is analyzed in the presence of these two interference coordination methods. The derived expressions provide approximate solutions of reasonable accuracy compared to the results obtained from Monte-Carlo simulations. Numerical results comparatively analyze the gains of the CAA scheme over conventional RSP-based association and biased RSP-based association, with and without the interference coordination method. The results also reveal insights regarding the selection of the proportion of ABS in the macrocell/smallcell tiers for various network scenarios.
[4280] vixra:1502.0074 [pdf]
General Method for Summing Divergent Series. Determination of Limits of Divergent Sequences and Functions at Singular Points
In this work I mention the historical development of divergent series theory and give a number of different examples, as well as some of the methods for summing them. After that, I introduce the general method, which I discovered, for summing divergent series, which we can also consider a method for computing the limits of divergent sequences and functions at singular points, in this case the limits of the sequences of their partial sums. Through the exercises, I apply this method to the given examples and prove its validity. I then apply the method to compute the value of some divergent integrals.
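The paper's general method is its own; as a standard point of comparison, Cesàro summation assigns a divergent series the limit of the running averages of its partial sums:

```python
from itertools import accumulate

def cesaro_sum(terms):
    """Cesàro (C,1) sum over a finite prefix: the average of the partial sums.
    For a Cesàro-summable series this average converges to the Cesàro sum."""
    partials = list(accumulate(terms))
    return sum(partials) / len(partials)

# Grandi's series 1 - 1 + 1 - 1 + ... diverges, but its Cesàro sum is 1/2:
# the partial sums alternate 1, 0, 1, 0, ..., so their average tends to 1/2.
grandi = [(-1)**n for n in range(10_000)]
print(cesaro_sum(grandi))  # 0.5
```

Any method claiming to sum divergent series can be sanity-checked against such classical results, since regular summation methods agree on series like Grandi's.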
[4281] vixra:1502.0072 [pdf]
Cross-Correlation in Cricket Data and RMT
We analyze the cross-correlation between runs scored over a time interval in cricket matches of different teams using methods of random matrix theory (RMT). We obtain an ensemble of cross-correlation matrices $C$ from runs scored by eight cricket-playing nations for (i) test cricket from 1877-2014, (ii) one-day internationals from 1971-2014, and (iii) seven teams participating in the Indian Premier League T20 format (2008-2014), respectively. We find that a majority of the eigenvalues of $C$ fall within the bounds of random matrices having joint probability distribution $P(x_1,\ldots,x_n)=C_{N \beta} \, \prod_{j<k}w(x_j)\left | x_j-x_k \right |^\beta$ where $w(x)=x^{N\beta a}\exp\left(-N\beta b x\right)$ and $\beta$ is the Dyson parameter. The corresponding level density gives the Marchenko-Pastur (MP) distribution, while the fluctuations of every participating team agree with the universal behavior of the Gaussian Unitary Ensemble (GUE). We analyze the components of the deviating eigenvalues and find that the largest eigenvalue corresponds to an influence common to all matches played during these periods.
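The null model behind this comparison can be checked numerically: for purely random data, all eigenvalues of the cross-correlation matrix should fall within the Marchenko-Pastur bounds. The sketch below uses an i.i.d. Gaussian surrogate (eight "teams", not the paper's actual runs data).

```python
import numpy as np

rng = np.random.default_rng(42)

# Surrogate for the runs data: N independent "teams", T "matches" of
# i.i.d. Gaussian scores -- pure noise, so the eigenvalues of the
# cross-correlation matrix should obey Marchenko-Pastur (MP) statistics.
N, T = 8, 2000
X = rng.standard_normal((N, T))
# Standardize each row so C has unit diagonal (a true correlation matrix).
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
C = X @ X.T / T

# MP bulk edges for aspect ratio q = N/T.
q = N / T
lam_min, lam_max = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
eigs = np.linalg.eigvalsh(C)
print("eigenvalues in [%.3f, %.3f], MP bounds [%.3f, %.3f]"
      % (eigs.min(), eigs.max(), lam_min, lam_max))
```

Real data with a common "market mode" would instead show a largest eigenvalue well outside the MP band, which is the signature the paper reports.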
[4282] vixra:1502.0064 [pdf]
Uncertainty Principle and Position Operator in Quantum Theory
The Heisenberg uncertainty principle is a consequence of the postulate that the coordinate and momentum representations are related to each other by the Fourier transform. This postulate has been accepted from the beginning of quantum theory by analogy with classical electrodynamics. We argue that the postulate is based neither on strong theoretical arguments nor on experimental data. A position operator proposed in our recent publication resolves inconsistencies of the standard approach and sheds new light on important problems of quantum theory. We do not assume that the reader is an expert in the field, and the content of the paper can be understood by a wide audience of physicists.
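The postulate at issue can be stated concretely: if the momentum-space wave function is the Fourier transform of the position-space one, a Gaussian wave packet saturates the bound Δx·Δk = 1/2. The numerical check below illustrates this standard relation (it takes no position on the paper's alternative operator).

```python
import numpy as np

# Position-space Gaussian and its discrete Fourier transform; widths are
# computed from normalized probability densities, so overall normalization
# constants of the amplitudes can be ignored.
x = np.linspace(-40, 40, 16384)
dx = x[1] - x[0]
sigma = 1.3
psi = np.exp(-x ** 2 / (4 * sigma ** 2))  # |psi|^2 has standard deviation sigma

phi = np.fft.fftshift(np.fft.fft(psi))    # momentum-space amplitude (up to phase)
k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi

def width(grid, amp):
    """Standard deviation of grid weighted by the normalized density |amp|^2."""
    p = np.abs(amp) ** 2
    p = p / p.sum()
    m = (grid * p).sum()
    return np.sqrt(((grid - m) ** 2 * p).sum())

print(width(x, psi) * width(k, phi))  # ~0.5 for a Gaussian
```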
[4283] vixra:1502.0051 [pdf]
Debye Length Cannot be Interpreted as Screening or Shielding Length
We show that the existing solution of the Poisson-Boltzmann equation (PBE) violates the charge conservation principle, and then derive the correct formula for the charge density distribution $(\rho_e)$ in a fluid. We replace the unphysical old boundary conditions with conditions that have not been used before. Our result demonstrates that the PBE cannot explain the formation of the `Electric Double Layer' (EDL); it follows that the present physical interpretation of the `Debye length' $(\lambda_D)$ is wrong, too.
[4284] vixra:1502.0048 [pdf]
PCT, Spin, Lagrangians
In this paper I invite you to take a step aside from current quantum field theory (QFT): QFT has been said to be "well-established" since the 1980s by its foremost theorists, and the majority of physicists consider it essentially complete since the discovery of the Higgs particle. It will be interesting to see what that really means: what are the problems left over for the younger generations? I will show that, among other things, QFT fails in its Lagrangian formalism and in its postulate of the positivity of energy; I will show the uselessness of the uncertainty principle as applied to electromagnetic fields; and we will see that there are serious doubts about its conception of the photonic nature of electromagnetic fields, which a simple experiment could test.
[4285] vixra:1502.0033 [pdf]
The Mathematical Structure of Quantum Nambu Mechanics and Neutrino Oscillations
Some Lie-algebraic structures of three-dimensional quantum Nambu mechanics are studied. From our result, we argue that the three-dimensional quantum Nambu mechanics is a natural extension of the ordinary Heisenberg quantum theory, and we give our insight that we can construct several candidates "beyond the Heisenberg quantum theory".
[4286] vixra:1502.0032 [pdf]
A Manifestation Toward the Nambu-Goldstone Geometry
Various geometric aspects of the Nambu-Goldstone (NG) type symmetry breakings (normal, generalized, and anomalous NG theorems) are summarized, and their relations are discussed. From the viewpoint of Riemannian geometry, the Laplacian, curvature and geodesics are examined. The theory of Ricci flow is investigated in the complex geometry of the NG-type theorems, and its diffusion and stochastic forms are derived. In our anomalous NG theorems, the structure of symplectic geometry is emphasized, and Lagrangian submanifolds and mirror duality are noted. Possible relations between the Langlands correspondence, the Riemann hypothesis and the geometric nature of NG-type theorems are given.
[4287] vixra:1502.0012 [pdf]
Nonlinear Electrodynamics and Modification of Initial Singularities, and Dark Matter and Dark Energy Affecting Structure Formation in the Early and Later Universe
We find that having the scale factor close to zero, due to a given magnetic field value in an early universe, affects how we would interpret Mukhanov's chapter on "self reproduction" of the universe in his reference. The stronger an early-universe magnetic field is, the greater the likelihood of production of about 20 new domains of size 1/H, with H the early-universe Hubble constant, per Planck time interval in evolution. We form DM from considerations as to a minimum time step, and then generate DM via axions. Through Ng's quantum infinite statistics, we compare a DM count, giving entropy. The remainder of the document is in terms of DE, as well as comparing entropy in galaxies versus entropy in the universe, through the lens of Mistra's quantum theory of the big bang.
[4288] vixra:1502.0004 [pdf]
The Hubble Constant, Length and Surface
We review the formulation of Hubble's law, reduce Ho to a combination of three fundamental constants, and define the Hubble surface σH.
[4289] vixra:1502.0003 [pdf]
An Interesting Perspective to the P Versus NP Problem
We discuss the P versus NP problem from the perspective of the addition of polynomial functions. Two contradictory propositions about this addition operation are presented. With the proposition that the sum of k (k <= n+1) polynomial functions in n always yields a polynomial function, we prove that P = NP, considering the maximum clique problem. With the proposition that the sum of k polynomial functions may yield an exponential function, we prove that P != NP by constructing an abstract decision problem. Furthermore, we conclude that P = NP and P != NP hold if and only if the respective propositions hold.
[4290] vixra:1501.0235 [pdf]
A Note on the Definitions of Discrete Symmetries Operators
On the basis of Silagadze's research [1], we investigate the question of the definitions of the discrete symmetry operators, both on the classical level and in the second-quantization scheme [2,3]. We study the physical content within several bases: the light-front form formulation [4], the helicity basis, the angular momentum basis, and so on, using several practical examples. The conclusion is that there are ambiguities in the definitions of the corresponding operators P, C, T, which lead to different physical consequences [5,6].
[4291] vixra:1501.0231 [pdf]
A Nopreprint on the Pragmatic Logic of Fractions
A survey of issues concerning the pragmatic logic of fractions is presented, including a seemingly paradoxical calculation. The presence of nested ambiguity in the language of fractions is documented. Careful design of fraction-related datatypes, and of logics appropriate for such datatypes, is proposed as a path towards a novel resolution of these complications. The abstract datatype of splitting fractions is informally described, and a rationale for its design is provided. A multi-threaded research plan on fractions is outlined.
[4292] vixra:1501.0223 [pdf]
Finite and Infinite Basis in P and NP
This article provides a new approach to the P vs NP problem using the cardinality of bases of functions. NP-complete problems can be divided into an infinite disjunction of P-complete problems, and these P-complete problems are independent of each other within the disjunction. That is, an NP-complete problem lies in an infinite-dimensional function space whose bases are P-complete. On the other hand, any P-complete problem has at most a finite number of P-complete basis elements, because each problem in P has at most a finite number of least-fixed-point operators. Therefore, we cannot describe NP-complete problems in P. We can also obtain this result from the incompleteness of P.
[4293] vixra:1501.0217 [pdf]
Energy-Momentum Tensor in Electromagnetic Theory and Gravitation from Relativistic Quantum Equations
Recently, several discussions on the possible observability of 4-vector fields have been published in the literature. Furthermore, several authors have recently claimed the existence of a helicity-0 fundamental field. We re-examine the theory of antisymmetric tensor fields and 4-vector potentials, and we study the massless limits. The theoretical motivation for this venture is the old papers of Ogievetskii and Polubarinov, Hayashi, and Kalb and Ramond, who proposed the concept of the notoph, whose helicity properties are complementary to those of the photon. We analyze the quantum field theory taking into account the mass dimensions of the notoph and the photon. We also proceed to derive equations for the symmetric tensor of the second rank on the basis of the Bargmann-Wigner formalism; they are consistent with general relativity. Particular attention has been paid to the correct definitions of the energy-momentum tensor and other Noether currents. We estimate possible interactions: fermion-notoph, graviton-notoph, photon-notoph. PACS numbers: 03.65.Pm, 04.50.-h, 11.30.Cp
[4294] vixra:1501.0216 [pdf]
Electromagnetism & Solar System
It is described here how the electric charge is not invariant but depends on the solar orbit. It is further identified with what could be considered a gravitational charge, the two being related through the diamond parameters. It is also shown that, from the variation of the frequencies in the different orbits, the actual temperature of the Sun can be deduced. It is argued that the luminosity per unit surface area is constant. Finally, it is concluded that both sunspots and magnetic reversals are caused by the permutation of links between gravitons and the sets of matter and antimatter stars.
[4295] vixra:1501.0212 [pdf]
Quantum Mechanical Biology
This article focuses on approaching biology in terms of quantum mechanics. Quantum biology is a hypothesis that allows experimental verification and aims to be a further refinement of the known gene-centric model. The state of a species is represented as a state vector in Hilbert space, so that the evolution of this vector is described by means of quantum mechanics. Experimental verification of this hypothesis relies on the accuracy of quantum theory and the ability to quickly gather statistics when working with populations of bacteria. A positive result of such an experiment would allow the computational methods of quantum theory, which so far have not gone beyond particular ''quantum effects'', to be applied to living systems.
[4296] vixra:1501.0210 [pdf]
Rotating Space of the Universe, as a Source of Dark Energy and Dark Matter
The sources and physical nature of dark energy and dark matter can be explained and determined if it is assumed that, after the Big Bang, the expanding spherical space of the Universe rotates around one of its central axes. Under this condition the concept of dark energy loses its meaning, as does the apparent recession of the objects of the Universe, which is registered as a redshift interpreted via the Doppler effect: the linear velocity of these objects increases with distance from the observer in a rotating spherical space. The kinetic energy of the rotating Universe may be the source of dark matter. The energy of the accelerated expansion of the Universe driven by vacuum pressure depends not on the absolute value of the vacuum pressure but on its relative value, equal to the difference between the vacuum pressure at the boundary of the expanding Universe and beyond it. Since this difference is zero, dark energy does not exist.
[4297] vixra:1501.0208 [pdf]
The First Zeptoseconds: A Template for the Big Bang
Impedance may be defined as a measure of the amplitude and phase of opposition to the flow of energy. Impedance is a fundamental concept, of universal validity. Whether classical or quantum, geometric or topological, scale invariant or scale dependent, fermionic or bosonic, mechanical, electromagnetic or gravitational, impedance matching governs the flow of energy. This is a universal principle. As such, it is not surprising to find that quantized impedances provide an interesting sensibility when taken as a template for the Big Bang, presenting a detailed perspective on the first few zeptoseconds.
[4298] vixra:1501.0205 [pdf]
Observation on the Paper: Logical Independence and Quantum Randomness
I comment on the background meaning, beneath Boolean encodings, used in the paper by Tomasz Paterek et al. Keywords: foundations of quantum theory, quantum mechanics, quantum randomness, quantum indeterminacy, quantum information, prepared state, measured state, unitary, orthogonal, scalar product, mathematical logic, logical independence, mathematical undecidability. DOI: 10.13140/2.1.4703.4883
[4299] vixra:1501.0203 [pdf]
A Nopreprint on Algebraic Algorithmics: Paraconsistency as an Afterthought
Algebraic Algorithmics, a phrase taken from G. E. Tseitlin, is given a specific interpretation for the line of work in the tradition of program algebra and thread algebra. An application to algebraic algorithmics of preservationist paraconsistent reasoning in the style of chunk and permeate is suggested and discussed. In the first appendix, "nopreprint" is coined as a tag for a new publication category, and a rationale for its use is given. In a second appendix some rationale is provided for the affiliation from which the paper is written and posted.
[4300] vixra:1501.0201 [pdf]
High Degree Diophantine Equation by Classical Number Theory
The main idea of this article is simply calculating integer functions in modules. The algebra of the integer modules is studied in a completely new style. By analysis in modules and a careful construction, the following condition for non-solvability of the Diophantine equation $a^p+b^p=c^q$ is proved: $(a,b)=(b,c)=1$, $a,b>0$, $p,q>12$, $p$ prime. The proof of this result is mainly in the last two sections.
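The claimed non-solvability can at least be spot-checked numerically. The sketch below brute-forces one instance of the stated family (p = q = 13, both exponents prime and greater than 12); for this particular case Fermat's Last Theorem independently guarantees there are no solutions, so an empty result is expected.

```python
from math import gcd

def is_qth_power(n, q):
    """Exact integer test: is n a perfect q-th power?

    Uses a float q-th root as a starting guess, then verifies with
    exact integer arithmetic (checking neighbors guards against
    floating-point rounding of the root).
    """
    r = round(n ** (1.0 / q))
    return any(s >= 0 and s ** q == n for s in (r - 1, r, r + 1))

def search(p, q, limit):
    """Brute-force search for coprime a, b > 0 with a^p + b^p = c^q."""
    return [(a, b)
            for a in range(1, limit + 1)
            for b in range(a, limit + 1)
            if gcd(a, b) == 1 and is_qth_power(a ** p + b ** p, q)]

# One instance of the stated family: p = q = 13.
print(search(13, 13, 60))  # -> []
```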
[4301] vixra:1501.0183 [pdf]
An Alternative Kaluza Theory Identifying 5D Momentum and Charge
Kaluza's 1921 theory of gravity and electromagnetism, using a fifth wrapped-up spatial dimension, is the inspiration for many modern attempts to develop new physical theories. Here an alternative approach is presented that more fully unifies gravity and electromagnetism. Emphasis is placed on admitting, without constraints, important electromagnetic fields not present in Kaluza's original theory, and on deriving a Lorentz force law. This is done by identifying 5D momentum with a kinetic charge. In doing so, the usual assumption of Ricci flatness corresponding to sourceless electromagnetic fields is replaced by the weaker constraint of vanishing 5D momentum outside of charge models. A weak-field limit is also used. An electromagnetic limit is imposed by assuming a constant scalar field. A further extended postulate set involving a super-energy divergence law and a conformal factor is also suggested that allows for a varying scalar field, within what then becomes a type of geometrical conformal gauge theory.
[4302] vixra:1501.0158 [pdf]
Question of Scaling of Gravitational Quanta in Gravitational Wave Detection Experiments
The limits of applicability of Planck's constant are brought into question within the framework of quantum mechanics. The possibility is raised that gravitational quanta may be scaled by a more diminutive "action" whose detection requires sensitivities beyond the standard quantum limit. An experiment that could unequivocally test this possibility is suggested.
[4303] vixra:1501.0156 [pdf]
The Lorentz Transformation Cannot Be Physical
The Lorentz transformation will always remain only an abstract mathematical transformation that cannot be incorporated into any theory of physics. The reason is that there is no natural principle by which a mathematical transformation carries the association of physical units with real numbers over from the domain space to the image space. Any application of the Lorentz transformation will only result in space and time that have no relation to our physical world. On the other hand, there is no such issue with the Galilean transformation, as rulers and clocks calibrated at time zero read in the same non-distorted units at all times. All physical theories founded on the Lorentz transformation are invalid. These include Einstein's special relativity, particle physics, and the electromagnetism of the Maxwell-Heaviside equations.
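For reference, the two transformations under discussion can be stated numerically (units with c = 1). The mathematical property at issue is that the Lorentz map preserves the interval t^2 - x^2, while the Galilean map preserves t itself; the snippet below only verifies these textbook facts and takes no position on the abstract's thesis.

```python
import math

def lorentz(t, x, v):
    """Lorentz boost with velocity v along x (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

def galilean(t, x, v):
    """Galilean transformation: time is untouched, position shifts."""
    return t, x - v * t

t, x, v = 3.0, 2.0, 0.6
tp, xp = lorentz(t, x, v)
print("Lorentz  :", (tp, xp), "interval:", tp**2 - xp**2)   # interval: 5.0
print("Galilean :", galilean(t, x, v))                      # time unchanged
```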
[4304] vixra:1501.0129 [pdf]
The Prime Number Formulas
There are many proposed partial prime number formulas; however, no single formula was known to generate all the prime numbers. Here we show three formulas which obtain the entire set of prime numbers from the positive integers, based on the Möbius function combined with the "omega" function, the Omega function, or the divisor function.
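The standard fact behind such characterizations can be checked directly: a positive integer n > 1 is prime exactly when it is squarefree (Möbius function nonzero) and has exactly one distinct prime factor (omega(n) = 1). The sketch below implements this textbook criterion; it is not necessarily identical to any of the paper's three formulas.

```python
def factor_counts(n):
    """Return (omega, squarefree): the number of distinct prime factors
    of n, and whether n is squarefree (i.e. mu(n) != 0)."""
    omega, squarefree, d = 0, True, 2
    while d * d <= n:
        if n % d == 0:
            omega += 1
            n //= d
            if n % d == 0:          # d divides n at least twice
                squarefree = False
                while n % d == 0:
                    n //= d
        d += 1
    if n > 1:                       # leftover prime factor
        omega += 1
    return omega, squarefree

def is_prime_by_formula(n):
    """n > 1 is prime iff omega(n) = 1 and mu(n) != 0: a squarefree
    number with a single prime factor is that prime itself."""
    if n < 2:
        return False
    omega, squarefree = factor_counts(n)
    return omega == 1 and squarefree

primes = [n for n in range(2, 50) if is_prime_by_formula(n)]
print(primes)  # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```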
[4305] vixra:1501.0095 [pdf]
Towards A Quaternionic Spacetime Tensor Calculus
By introducing a special quaternionic vector calculus on the tangent bundle of a 4-dimensional space, and by forcing a condition of holomorphism, a Minkowski-type spacetime emerges, from which special relativity, gravitation and also the whole Maxwell theory of electromagnetic fields arise.
[4306] vixra:1501.0094 [pdf]
Quantized Capacitance and Energy of the Atom and Photon
By modeling both the atom and the photon as capacitors, the correct energy levels are easily produced via extrapolation from Maxwell's, Gauss's, Coulomb's and Ohm's laws, without the need to inject Planck's constant into the equation ad hoc. In the case of the photon, Einstein's photoelectric equation is formulated as a result, with Planck's constant consequently occurring as an aggregate of fundamental constants. Analysis of these equations lends credence to Planck's fervent and controversial personal conviction that the constant he himself discovered is nothing but "a mathematical trick". Further analysis shows that this model reconciles the wave-particle duality, wherein the wave properties of light and matter produce the particle-like aspects as a result of the laws of electrical engineering in conjunction with the uncertainty principle and Schrödinger's wave equations.
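A simple numerical instance of a capacitor model of the atom (a toy stand-in for the abstract's construction, which may differ in detail): treating the atom as a conducting sphere of Bohr-radius size, the capacitor energy Q^2/2C with Q = e reproduces the hydrogen ionization energy.

```python
import math

# CODATA values, SI units.
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
a0 = 5.29177210903e-11     # Bohr radius, m

# Self-capacitance of a conducting sphere of radius a0: C = 4*pi*eps0*a0.
C = 4 * math.pi * eps0 * a0

# Capacitor energy U = Q^2 / (2C) with Q = e equals e^2 / (8*pi*eps0*a0),
# which is exactly the Rydberg energy.
U_eV = e ** 2 / (2 * C) / e
print(f"{U_eV:.2f} eV")  # -> 13.61 eV, the hydrogen ionization energy
```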
[4307] vixra:1501.0089 [pdf]
A New Warp Drive Equation Based on a Parallel $3+1$ Adm Formalism Applied to the Natario Spacetime Geometry
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at present two known solutions: the Alcubierre warp drive, discovered in 1994, and the Natario warp drive, discovered in 2001. However, the major drawback of warp drives is the huge amount of negative energy density required to sustain the warp bubble. In order to perform an interstellar journey to a "nearby" star 20 light-years away in a reasonable amount of time, a ship must attain a speed of about 200 times the speed of light. However, the negative energy density at such a speed is directly proportional to a factor of $10^{48}$, which is $10^{24}$ times bigger in magnitude than the mass of the planet Earth. With the correct form of the shape function, the Natario warp drive can overcome this obstacle, at least in theory. Other drawbacks that affect the warp drive geometry are the collisions with hazardous interstellar matter (asteroids, comets, interstellar dust, etc.) that will unavoidably occur when a ship travels at superluminal speeds, and the problem of horizons (causally disconnected portions of spacetime). The geometrical features of the Natario warp drive are the ones required to overcome these obstacles, also at least in theory. Between 2012 and 2014 a set of works appeared in the scientific literature covering the Natario warp drive with an equation intended to be the original Natario equation; however, this equation does not obey the original $3+1$ Arnowitt-Deser-Misner ($ADM$) formalism and hence cannot be regarded as the original Natario warp drive equation. This new equation nevertheless satisfies the Natario criteria for a warp drive spacetime, and as a matter of fact it must be analyzed under a new, parallel $3+1$ $ADM$ formalism. We compare the original and parallel $3+1$ $ADM$ formalisms using the approach of Misner-Thorne-Wheeler ($MTW$) and Alcubierre. While in the $3+1$ spacetime the parallel equation differs radically from the original one, when both equations are reduced to a $1+1$ spacetime they become equivalent. We discuss the possibilities in General Relativity for this new equation.
[4308] vixra:1501.0088 [pdf]
Decision Taking Avoiding Agency
In the setting of outcome oriented decision taking (OODT), decisions are scarce events occurring in between many other events which are not decisions. Activities of and agency by an agent may occur without any decisions being taken by that agent, with choice and action determination as the only mechanisms for actively resolving uncertainty about future behavior. Such behavior will be referred to as decision taking avoiding agency. A model, or rather a preliminary qualitative description, of decision taking avoiding agency is provided for systems consisting of or controlled by a solitary natural or artificial agent, as well as for a group of agents.
[4309] vixra:1501.0079 [pdf]
Is Axial Anomaly Really an Anomaly?
The concept of the axial anomaly is fully accepted by modern physics, but nearly all physicists find it weird. Once we know from experiment that the axial current is not conserved, why do we start with an equation that preserves axial symmetry? Do we intentionally "make an even number of mistakes" to obtain the correct result? Is there a field equation allowing one to directly derive the correct expression for the divergence of the axial current? In this paper we will find such an equation and describe some amazing consequences emerging from it.
[4310] vixra:1501.0051 [pdf]
A Unified Description of Particles, Strings and Branes in Clifford Spaces and P-Brane/polyparticle Duality
It is proposed how the Extended Relativity Theory in $C$-spaces (Clifford spaces) allows a unified formulation of point particles, strings, membranes and $p$-branes, moving in ordinary target spacetime backgrounds, within the description of a single $polyparticle$ moving in $C$-spaces. The degrees of freedom of the latter are provided by Clifford polyvector-valued coordinates (antisymmetric tensorial coordinates). A correspondence between the $p$-brane ($p$-loop) wave functional ``Schroedinger-like" equations of Ansoldi-Aurilia-Spallucci and the polyparticle wave equation in $C$-spaces is found via the polyparticle/$p$-brane duality/correspondence. The crux of exploiting this correspondence is that it might provide another unexplored avenue to quantize $p$-branes (a notoriously difficult and unsolved problem) from the more straightforward quantization of the polyparticle in $C$-spaces, even in the presence of external interactions. We conclude with some comments about the $compositeness$ nature of the polyvector-valued coordinate operators in terms of ordinary $p$-brane coordinates via the evaluation of $n$-ary commutators.
[4311] vixra:1501.0025 [pdf]
From Software Crisis to Informational Money
Some comments are made on the societal impact of software engineering, with a focus on the impact of software engineering concepts on the development of informational money as well as on the concept of money at large.
[4312] vixra:1501.0023 [pdf]
The Classical Maxwell Electrodynamics and the Electron Inertia Problem Within the Feynman Proper Time Paradigm
The Maxwell electromagnetic and Lorentz-type force equations are derived in the framework of R. Feynman's proper time paradigm and the related vacuum field theory approach. The electron inertia problem is analyzed within the Lagrangian and Hamiltonian formalisms and the related pressure-energy compensation principle. The modified Abraham-Lorentz damping radiation force is derived, and the electromagnetic origin of the electron mass is argued.
[4313] vixra:1501.0021 [pdf]
Bitcoin and Islamic Finance
It is argued that a Bitcoin-style money-like informational commodity may constitute an effective instrument for the further development of Islamic Finance. The argument involves the following elements: (i) an application of circulation theory to Bitcoin with the objective to establish the implausibility of interest payment in connection with Bitcoin, (ii) viewing a Bitcoin-like system as a money-like exclusively informational commodity with the implication that such a system need not support debt, (iii) the idea that Islamic Finance imposes different requirements compared to conventional financial policies on a money concerning its use as a tool for achieving social and economic objectives, and (iv) identification of two aspects of mining, gambling and lack of trust, that may both be considered problematic from the perspective of compliance with the rules of Islamic Finance and a corresponding proposal to modify the architecture of mining in order to improve compliance with these rules.
[4314] vixra:1501.0018 [pdf]
More Precise View about Remote Replication
The works of Luc Montagnier and Peter Gariaev suggest that remote replication of DNA is possible. Developments in the model of dark DNA make it possible to imagine a detailed mechanism for how water can represent DNA and how DNA could be transcribed to dark DNA; essentially the analog of DNA-RNA transcription would be in question. The transcription/association represents a rule, and rules are represented in terms of negentropic entanglement in the TGD framework, with pairs of states in superposition representing the instances of the rule. A transition energy serves to characterize a molecule, say a DNA codon, and the entangled state is a superposition of pairs in which either the molecule is excited or the dark DNA codon is excited to a higher cyclotron state with the same energy: this requires tuning of the magnetic field and a sufficiently large value of h<sub>eff</sub> at the flux tube. Negentropic entanglement would be due to the exchange of dark photons: this corresponds to the wave-DNA aspect. Dark cyclotron photons also generate the negatively charged exclusion zones (EZs) discovered by Pollack, and in this process part of the protons transform into dark ones residing at the magnetic flux tubes associated with EZs and forming dark proton sequences. This allows the identification of a mechanism of remote replication.
[4315] vixra:1501.0017 [pdf]
Induced Second Quantization
The notion of induced second quantization is introduced as an unavoidable aspect of the induction procedure for the metric and spinor connection, which is the key element of TGD. Induced second quantization provides insights about the QFT limit, about generalized Feynman diagrammatics, and about the TGD counterpart of the second quantization of strings, which appear in TGD as emergent objects. Zero energy ontology (ZEO) naturally restricts the anti-commutation relations to the interiors of causal diamonds defining quantum coherence regions, so that the counterintuitive implication that all identical particles of the Universe are in a totally symmetric/antisymmetric state is avoided. The relation of statistics to negentropic entanglement and the new view about position measurement provided by ZEO are discussed.
[4316] vixra:1501.0016 [pdf]
Psychedelic Induced Experiences as Key to the Understanding of the Connection Between Magnetic Body and Information Molecules?
There is a book about psychedelics titled "Inner Paths to Outer Space: Journeys to Alien Worlds through Psychedelics and Other Spiritual Technologies", written by Rick Strassman, Slawek Wojtowicz, Luis Eduardo Luna and Ede Frecska. The basic message of the book is that psychedelics might make possible instantaneous remote communications with distant parts of the Universe. The basic objection is that the velocity of light sets stringent limits on classical communications. In the TGD framework this argument does not apply. In the article a model for remote mental interactions is constructed using basic notions of TGD-inspired quantum biology such as the magnetic body, dark photons, and Zero Energy Ontology.
[4317] vixra:1501.0015 [pdf]
Geometric Theory of Harmony
In an earlier article I introduced the notion of the Hamiltonian cycle as a mathematical model for musical harmony and also proposed a connection with biology. The motivations came from two observations: the number of icosahedral vertices is 12, corresponding to the number of notes in the 12-note system, and the number of triangular faces of the icosahedron is 20, the number of amino acids. This led to a group-theoretical model of the genetic code and to the replacement of the icosahedron with a tetraicosahedron, in order to also explain the 21st and 22nd amino acids and to solve a problem with the simplest model, namely that the required Hamiltonian cycle does not exist. This led also to the notion of bioharmony. This article was meant to be a continuation of the mentioned article, providing a proposal for a theory of harmony together with detailed calculations. It turned out, however, that the proposed notion of bioharmony was too restricted: all icosahedral Hamiltonian cycles with symmetries turned out to be possible, rather than only the 3 cycles forced by the assumption that the polarity characteristics of the amino acids correlate with the properties of the Hamiltonian cycle. In particular, it turned out that the symmetries of the Hamiltonian cycles are the icosahedral symmetries needed to predict the basic numbers of the genetic code and its extension to include the 21st and 22nd amino acids. One also ends up with a proposal for what harmony is, leading to non-trivial predictions both at the DNA and the amino-acid level.
[4318] vixra:1501.0014 [pdf]
How Are Visual Percepts Constructed
How does the visual system analyze incoming visual information and reconstruct from it a (highly artistic) picture of the external world? I encountered this problem for the first time about 35 years ago while listening to a lecture about what happens in the retina. Although I have also written about visual qualia and visual perception, I have not considered this particular problem. Here a rather simple model is proposed, suggesting why and how visual perception first builds a simple sketch, analogous to a cartoon, of the visual field, with saccadic motion playing a key role in the process.
[4319] vixra:1501.0013 [pdf]
Criticality and Dark Matter
Quantum criticality is one of the cornerstone assumptions of TGD. The value of the Kähler coupling strength fixes quantum TGD and is analogous to a critical temperature. The TGD Universe would be quantum critical. What this means is, however, far from obvious, and I have pondered the notion repeatedly, both from the point of view of mathematical description and from that of phenomenology. Superfluids exhibit rather mysterious-looking effects, such as the fountain effect and what looks like quantum coherence between superfluid containers which should be classically isolated. These findings serve as a motivation for the proposal that the genuine superfluid portion of a superfluid corresponds to a large-h<sub>eff</sub> phase, near criticality at least, and that also in other phase-transition-like phenomena a transition to a dark phase occurs in the vicinity of criticality.
[4320] vixra:1501.0012 [pdf]
TGD Inspired Model for the Formation of Exclusion Zones from Coherence Regions
Emilio del Giudice et al have proposed that so-called coherent domains (CDs) of size about 1 micrometer in water are crucial for living systems, and there is empirical evidence for them. Gerald Pollack and collaborators have discovered what they call exclusion zones (EZs), with sizes up to about 200 micrometers. In the sequel a model for the formation of EZs, assuming that CDs are their predecessors, is proposed. The model also involves hydrogen bonds in an essential manner. The basic prediction is that the formation of CDs requires UV irradiation at an energy around 12.06 eV. Solar radiation at this energy cannot propagate through the atmosphere as ordinary photons but could arrive along magnetic flux tubes as dark photons.
[4321] vixra:1501.0011 [pdf]
Pioneer and Flyby Anomalies for Almost Decade Later
The Pioneer and Flyby anomalies are astrophysical anomalies in our solar system. Standard physics explanations for the Pioneer anomaly have been proposed but certainly fail for the Flyby anomalies. In this article I update the almost decade-old TGD inspired model for these anomalies as a direct demonstration of the existence of spherical dark matter shells associated with planets and with the radii of planetary orbits. The dark matter density would be universal, as would the acceleration anomaly, equal to the Hubble acceleration. A possible test for the model is provided by the Earth-Moon system.
[4322] vixra:1501.0010 [pdf]
The Classical Part of the Twistor Story
The twistor Grassmannian formalism has made a breakthrough in N=4 supersymmetric gauge theories, and the Yangian symmetry suggests that much more than a mere technical breakthrough is in question. Twistors seem to be tailor-made for TGD, but it seems that the generalisation of the twistor structure to that for the 8-D imbedding space H=M<sup>4</sup>× CP<sub>2</sub> is necessary. M<sup>4</sup> (and S<sup>4</sup> as its Euclidian counterpart) and CP<sub>2</sub> are indeed unique in the sense that they are the only 4-D spaces allowing a twistor space with Kähler structure. The Cartesian product of the twistor spaces P<sub>3</sub>=SU(2,2)/SU(2,1)× U(1) and F<sub>3</sub> defines the twistor space for the imbedding space H, and one can ask whether this generalized twistor structure could allow one to understand both quantum TGD and classical TGD defined by the extremals of Kähler action. In the following I summarize the background and develop a proposal for how to construct extremals of Kähler action in terms of the generalized twistor structure. One ends up with a scenario in which space-time surfaces are lifted to twistor spaces by adding a CP<sub>1</sub> fiber, so that the twistor spaces give an alternative representation for generalized Feynman diagrams. There is also a very close analogy with superstring models. Twistor spaces replace Calabi-Yau manifolds, the modification recipe for Calabi-Yau manifolds by removal of singularities can be applied to remove self-intersections of twistor spaces, and mirror symmetry emerges naturally. The overall important implication is that the methods of algebraic geometry used in superstring theories should apply in the TGD framework. The physical interpretation is totally different in TGD. The landscape is replaced with twistor spaces of space-time surfaces having an interpretation as generalized Feynman diagrams, and twistor spaces as sub-manifolds of P<sub>3</sub>× F<sub>3</sub> replace Witten's twistor strings.
[4323] vixra:1501.0004 [pdf]
Open Letter on Hilbert's Fifth Problem
Hilbert's Fifth Problem, in English translation [1], is as follows: ``How far Lie's concept of continuous groups of transformations is approachable in our investigations without the assumption of the differentiability of the functions?" followed by: ``In how far are the assertions which we can make in the case of differentiable functions true under proper modifications without this assumption?" Lately, in the American mathematical literature, for unclear reasons, it has often been distorted and truncated as follows [3]: ``Hilbert's fifth problem, like many of Hilbert's problems, does not have a unique interpretation, but one of the most commonly accepted interpretations ..." A recent letter in this regard, sent to Terence Tao and the editors of [3], Dan Abramovich, Daniel S. Freed, Rafe Mazzeo and Gigliola Staffilani, can be found below.
[4324] vixra:1412.0277 [pdf]
Analysis of Histogram Based Shot Segmentation Techniques for Video Summarization
Content-based video indexing and retrieval has its foundations in the analysis of the prime temporal structures of video. Thus, technologies for video segmentation have become important for the development of such digital video systems. Dividing a video sequence into shots is the first step towards video content analysis (VCA) and content-based video browsing and retrieval. This paper presents an analysis of histogram-based techniques on compressed video features. A graphical user interface is also designed in MATLAB to demonstrate the performance using the common performance parameters: precision, recall and F1 score.
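The evaluation metrics mentioned in this abstract can be sketched in a few lines. The following is a minimal illustration (not the paper's MATLAB code), assuming shot boundaries are given as frame indices and a detected boundary within `tolerance` frames of a ground-truth boundary counts as a match:

```python
def shot_detection_metrics(detected, ground_truth, tolerance=0):
    """Precision, recall and F1 score for detected shot boundaries.

    Each ground-truth boundary can be matched by at most one detection.
    """
    matched = 0
    remaining = list(ground_truth)
    for d in detected:
        for g in remaining:
            if abs(d - g) <= tolerance:
                matched += 1
                remaining.remove(g)  # each boundary matched only once
                break
    precision = matched / len(detected) if detected else 0.0
    recall = matched / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, with detections at frames 10, 50 and 90 against true boundaries at 10, 50 and 70, two of three detections are correct, giving precision, recall and F1 all equal to 2/3.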
[4325] vixra:1412.0276 [pdf]
On Asymptotic of Extremes from Generalized Maxwell Distribution
In this paper, with optimal normalizing constants, the asymptotic expansions of the distribution and density of the normalized maxima from the generalized Maxwell distribution are derived. For the distributional expansion, it is shown that the convergence rate of the normalized maxima to the Gumbel extreme value distribution is proportional to $1/\log n$. For the density expansion, on the one hand, the main result is applied to establish the convergence rate of the density of the extreme to its limit. On the other hand, the main result is applied to obtain the asymptotic expansion of the moment of the maximum.
[4326] vixra:1412.0275 [pdf]
Higher-Order Expansion for Moment of Extreme for Generalized Maxwell Distribution
In this paper, the higher-order asymptotic expansion of the moment of the extreme from the generalized Maxwell distribution is obtained, by which one establishes the rate of convergence of the moment of the normalized partial maximum to the moment of the associated Gumbel extreme value distribution.
[4327] vixra:1412.0259 [pdf]
Underlying Symmetry Among the Quark and Lepton Mixing Angles (Seven Year Update)
In 2007 a single mathematical model encompassing both quark and lepton mixing was described. This model exploited the fact that when a 3 x 3 rotation matrix whose elements are squared is subtracted from its transpose, a matrix is produced whose non-diagonal elements have a common absolute value, where this value is an intrinsic property of the rotation matrix. For the traditional CKM quark mixing matrix with its second and third rows interchanged (i.e., c - t interchange) this value equals one-third the corresponding value for the leptonic matrix (roughly, 0.05 versus 0.15). This model was distinguished by three such constraints on mixing. As seven years have elapsed since its introduction, it is timely to assess the model's accuracy. Despite large conflicts with experiment at the time of its introduction, and significant improvements in experimental accuracy since then, the model's six angles collectively fit experiment well; but one angle, incorrectly forecast, did require toggling (in 2012) the sign of an integer exponent. The model's mixing angles in degrees are 45, 33.210911, 8.034394 (the angle affected) for leptons; and 12.920966, 2.367442, 0.190986 for quarks.
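The matrix property underlying the model is easy to check numerically. The sketch below (an illustration, not the author's code) squares the elements of a rotation matrix built from arbitrary Euler angles and confirms that the off-diagonal elements of M - M^T share a common absolute value; this holds because the matrix of squared elements of any orthogonal matrix is doubly stochastic, and its row/column sum constraints force the three off-diagonal differences to coincide up to sign:

```python
import math

def rotation_matrix(a, b, c):
    """3x3 rotation matrix R = Rz(a) @ Ry(b) @ Rx(c) from Euler angles."""
    ca, sa = math.cos(a), math.sin(a)
    cb, sb = math.cos(b), math.sin(b)
    cc, sc = math.cos(c), math.sin(c)
    return [
        [ca * cb, ca * sb * sc - sa * cc, ca * sb * cc + sa * sc],
        [sa * cb, sa * sb * sc + ca * cc, sa * sb * cc - ca * sc],
        [-sb, cb * sc, cb * cc],
    ]

def offdiagonal_value(R):
    """Common absolute value of the off-diagonal elements of M - M^T,
    where M is R with each element squared."""
    M = [[x * x for x in row] for row in R]
    vals = [abs(M[i][j] - M[j][i]) for i, j in ((0, 1), (0, 2), (1, 2))]
    # All three values coincide for any orthogonal R, up to rounding.
    assert max(vals) - min(vals) < 1e-12
    return vals[0]
```

Any choice of Euler angles gives a single intrinsic value, e.g. `offdiagonal_value(rotation_matrix(0.3, 1.1, -0.7))`; the model's claim is that this value is 3 times larger for the leptonic matrix than for the (row-interchanged) CKM matrix.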
[4328] vixra:1412.0258 [pdf]
Longitudinal Waves in Scalar, Three-Vector Gravity
The linear field equations are solved for the metrical component $g_{00}$. The solution is applied to the question of gravitational energy transport. The Hulse-Taylor binary pulsar is treated in terms of the new theory. Finally, the detection of gravitational waves is discussed.
[4329] vixra:1412.0252 [pdf]
A Substitution Map Applied to the Simplest Algebraic Identities
A substitution map applied to the simplest algebraic identities is shown to yield second- and third-order equations that share an interesting property at the minimum 137.036.
[4330] vixra:1412.0239 [pdf]
Doppler Boosting a Doublet Version of the Dirac Equation from a Free Fall Grid Onto a Stationary Grid in a Central Field of Gravity.
This paper is a sequel to ``Doppler Boosting a de Broglie Electron from a Free Fall Grid Into a Stationary Field of Gravity''. We Doppler boost a de Broglie particle from a free fall grid onto a stationary grid in a central field of gravity. This results in an identification of the two Doppler boost options with an electron energy double-valuedness similar to electron spin. It seems that, within the limitations of our approach to gravity, we found a bottom-up version of a possible theory of Quantum Gravity, one that connects the de Broglie hypothesis to gravity. This paper finishes and adapts ``Towards a 4-D Extension of the Quantum Helicity Rotator with a Hyperbolic Rotation Angle of Gravitational Nature'' for the quantum gravity part. We try to boost the de Broglie particle's quantum wave equation from the free fall grid to the stationary grid. We find that this is impossible on the Klein-Gordon level, the Pauli level and the Dirac level. But when we double the Dirac level and thus realize a kind of Yang-Mills doublet level, we can formulate a doublet version of the Weyl-Dirac equation that can be Doppler boosted from the free fall grid onto the stationary grid in a central field of gravity. In the end we add a quantitative prediction for the gravitational Doppler shift of the matter wave, or probability density, of an electron-positron pair. Our free-fall-grid-to-stationary-grid approach is ad hoc and does not present a fundamental theory, but is a pragmatic attempt to formulate quantum mechanics outside the Poincaré group environment and beyond Lorentz symmetry.
[4331] vixra:1412.0222 [pdf]
On Self-Collapsing Wavefunctions and the Fine Tuning of the Universe
A new variation on the Copenhagen interpretation of quantum mechanics is introduced, and its effects on the evolution of the Universe are reviewed. It is demonstrated that this modified form of quantum mechanics will produce a habitable Universe with no required tuning of the parameters, and without requiring multiple Universes or external creators.
[4332] vixra:1412.0221 [pdf]
On The Potential Hostility of Alien Species
In this article we discuss the possibility that an extraterrestrial species could be hostile to humanity, and present estimates for the probability that visitors to the Earth will be aggressive. For this purpose we develop a generic model of multiple civilizations which are permitted to interact, and using randomized parameters we simulate thousands of potential worlds through several millennia of their development. By reviewing the species which survive the simulation, we can estimate the fraction of species which are hostile and the fraction which are supportive of other cultures.
[4333] vixra:1412.0217 [pdf]
Doppler Boosting a de Broglie Electron from a Free Fall Grid Into a Stationary Field of Gravity.
This paper is a sequel to ``Frequency Gauged Clocks on a Free Fall Grid and Some Gravitational Phenomena''. We Doppler boost a de Broglie particle from a free fall grid into a stationary field of gravity. First we do this for a photon and then for a particle with non-zero rest mass. This results in an identification of the two Doppler boost options with electron spin or with electron energy double-valuedness. It seems that, within the limitations of our approach to gravity, we found a bottom-up version of a possible theory of Quantum Gravity, one that connects the de Broglie hypothesis to gravity. This paper realizes the connection between our papers ``Frequency Gauged Clocks on a Free Fall Grid and Some Gravitational Phenomena'' and ``Towards a 4-D Extension of the Quantum Helicity Rotator with a Hyperbolic Rotation Angle of Gravitational Nature''.
[4334] vixra:1412.0213 [pdf]
Frequency Gauged Clocks on a Free Fall Grid and Some Gravitational Phenomena
Using frequency gauged clocks on a free fall grid, we look at gravitational phenomena as they appear for observers on a stationary grid in a central field of gravity. With an approach based on Special Relativity, the Weak Equivalence Principle and Newton's gravitational potential, we derive first-order-correct expressions for the gravitational redshift of stationary clocks and of satellites. We also derive first-order-correct expressions for the geodetic precession, the Shapiro delay and the gravitational index of refraction, i.e., phenomena connected to the curvature of the metric. Our approach is pragmatic and inherently limited but, due to its simplicity, it might be useful as an intermediate between SR and GR.
[4335] vixra:1412.0209 [pdf]
An Eternal Steady State Universe
Some cosmological theories, such as many versions of eternal inflation and ΛCDM involve creation processes which continue indefinitely with no defined termination. Such processes can only occur in a temporally unbounded but finite universe. This requirement imposes serious constraints on many theories but the issue is often ignored. I propose an eternal steady state cosmological model in which past- or future-incomplete processes with no defined beginning or end are not permitted. Much well regarded theory is incompatible with this model; however there are viable alternatives.
[4336] vixra:1412.0201 [pdf]
Proof of the Existence of Transfinite Cardinals Strictly Smaller Than Aleph Zero with an Ensuing Solution to the Twin Prime Conjecture
In this paper the author submits a proof using the Power Set relation for the existence of a transfinite cardinal strictly smaller than Aleph Zero, the cardinality of the Naturals. Further, it can be established taking these arguments to their logical conclusion that even smaller transfinite cardinals exist. In addition, as a lemma using these new found and revolutionary concepts, the author conjectures that some outstanding unresolved problems in number theory can be brought to heel. Specifically, a proof of the twin prime conjecture is given.
[4337] vixra:1412.0165 [pdf]
Regge Trajectories by 0-Brane Matrix Dynamics
The energy spectrum of two 0-branes for fixed angular momentum in 2+1 dimensions is calculated by the Rayleigh-Ritz method. The basis function used for each angular momentum consists of 80 eigenstates of the harmonic oscillator problem on the corresponding space. It is seen that the spectrum exhibits a definite linear Regge trajectory behavior. It is argued how this behavior, together with other pieces of evidence, suggests the picture by which the bound-states of quarks and QCD-strings are governed by the quantum mechanics of matrix coordinates.
[4338] vixra:1412.0159 [pdf]
Current-Field Equations Including Charge Creation-Annihilation Fields and Derivation of Klein-Gordon and Schrödinger Equations and Gauge Transformation
We have found new current-field equations that include charge creation-annihilation fields. Although it is difficult to treat the creation and annihilation of charge pairs within Maxwell's equations, the new equations treat them easily. The equations imply the confinement of charge creation and annihilation centers, which means charge conservation for this model. The equations can treat not only the electromagnetic field but also the weak and strong force fields. A weak gravitational field can also be treated by the equations, where the four-current represents energy and momentum. It is shown that the Klein-Gordon and Schrödinger equations and the gauge transformation can be derived directly from the equations, where the wave function is defined as a complex exponential function of the energy creation-annihilation field.
[4339] vixra:1412.0151 [pdf]
Moving Into a Black Hole: Is There a Wall?
Much has been said in the media about how an observer on Earth never sees a body B fall into a black hole. The reason: time dilation. But a researcher A with a rocket has full control of the situation; he can come close to the black hole, almost to contact, and observe everything. Therefore, the distance between A and B may be zero. This does not depend on when in the past the body B fell into the black hole. Therefore, at the black hole horizon there is one big collision of bodies.
[4340] vixra:1412.0148 [pdf]
Alternative Classical Mechanics IV
This paper presents an alternative classical mechanics which is invariant under transformations between reference frames and which can be applied in any reference frame without the necessity of introducing fictitious forces. Additionally, a new principle of conservation of energy is also presented.
[4341] vixra:1412.0126 [pdf]
Space-Time Structure and Fields of Bound Charges
An exact solution for the field of a charge in a uniformly accelerated noninertial frame of reference (NFR), alongside the "Equivalent Situation Postulate", allows one to find the space-time structure as well as the fields of arbitrarily shaped charged conductors, without using Einstein's equations. In particular, the space-time metric over a charged plane can be related to the metric obtained from an exact solution to the Einstein-Maxwell equations. This solution describes an equilibrium of charged dust in parallel electric and gravitational fields. The field and metric outside a conducting ball have been found. The proposed method eliminates the divergence of the proper energy and makes classical electrodynamics consistent at any sufficiently small distances. An experiment is proposed to verify the suggested approach.
[4342] vixra:1412.0107 [pdf]
Toward a Theory of Consciousness
More effort is needed to arrive at a complete theory of consciousness that provides a satisfactory explanation of conscious experience. Some considerations that may inspire and contribute to such a theory are described. The hypothesis that consciousness can result in evolutionary advantages only as a consequence of its participation in and its effects on decision making is considered. The hypotheses that consciousness is an intrinsic property of reality and that encoded information gives rise to a unique conscious experience are also considered, as well as whether evolutionary processes shape the consciousness/system interface, and an abstract information-based reality. It is suggested that implementation of the non-event-based conscious decision making strategy may be the only possibility for creating artificial intelligent systems that have free will.
[4343] vixra:1412.0088 [pdf]
Is Entropy Enough to Evaluate the Probability Transformation Approach of Belief Function?
In the Dempster-Shafer Theory (DST) of evidence and the transferable belief model (TBM), the probability transformation is necessary and crucial for decision-making. The evaluation of the quality of a probability transformation is usually based on entropy or probabilistic information content (PIC) measures, which are questioned in this paper. An alternative probability transformation approach based on uncertainty minimization is proposed to verify the rationality of entropy or PIC as evaluation criteria for the probability transformation. Based on experimental comparisons among different probability transformation approaches, the rationality of using entropy or PIC measures to evaluate probability transformation approaches is analyzed and discussed.
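For reference, the classical pignistic transformation BetP used in the TBM, one of the transformations such evaluation criteria are applied to, can be sketched as follows (a minimal illustration, not this paper's own approach), with a basic belief assignment represented as a dict from `frozenset` focal elements to masses:

```python
def betp(bba):
    """Pignistic transformation: split each focal element's mass
    equally among its singletons (mass on the empty set is ignored)."""
    result = {}
    for focal, mass in bba.items():
        if not focal:
            continue  # open-world mass assigned to the empty set
        share = mass / len(focal)
        for element in focal:
            result[element] = result.get(element, 0.0) + share
    return result
```

For example, `betp({frozenset({'a'}): 0.4, frozenset({'a', 'b'}): 0.6})` splits the 0.6 equally between 'a' and 'b', yielding BetP('a') = 0.7 and BetP('b') = 0.3.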
[4344] vixra:1412.0084 [pdf]
Application of Referee Functions to the Vehicle-Born Improvised Explosive Device Problem
We propose a solution to the Vehicle-Born Improvised Explosive Device problem. This solution is based on a modelling by belief functions and involves the construction of a combination rule dedicated to this problem. The construction of the combination rule is made possible by a tool developed in previous works, a generic framework dedicated to the construction of combination rules. This tool implies a tripartite architecture, with respective parts implementing the logical framework, the combination definition (referee function) and the computation processes. Referee functions provide decisional arbitration conditional on the basic decisions provided by the sources of information, and allow rule definitions at the logical level adapted to the application. We construct a referee function for the Vehicle-Born Improvised Explosive Device problem and compare it to reference combination rules.
[4345] vixra:1412.0083 [pdf]
Change Detection from Remote Sensing Images Based on Evidential Reasoning
Theories of evidence have already been applied more or less successfully to the fusion of remote sensing images. In classical evidential reasoning, all the sources of evidence and their fusion results are related to the same invariable (static) frame of discernment. Nevertheless, changes can occur across multi-temporal remote sensing images, and these changes need to be detected efficiently in some applications. The invariable frame of classical evidential reasoning cannot efficiently represent or detect change occurrences in heterogeneous remote sensing images. To overcome this limitation, Dynamical Evidential Reasoning (DER) is proposed for the sequential fusion of multi-temporal images. A new state transition frame is defined in DER, and change occurrences can be precisely represented by introducing a state transition operator. The belief functions used in DER are defined similarly to those of the Dempster-Shafer Theory (DST). Two kinds of dynamical combination rules, working in the free model and the constrained model, are proposed in this new framework for dealing with the different cases. Finally, an experiment using three real satellite images acquired before and after an earthquake is provided to show the interest of the new approach.
[4346] vixra:1412.0081 [pdf]
Edge Detection in Color Images Based on DSmT
In this paper, we present a non-supervised methodology for edge detection in color images based on belief functions and their combination. Our algorithm is based on the fusion of local edge detector results expressed as basic belief assignments thanks to a flexible modeling, and on the proportional conflict redistribution rule developed in the DSmT framework. The application of this new belief-based edge detector is tested both on the original (noise-free) Lena picture and on a modified image including artificial pixel noise, to show the ability of our algorithm to work on noisy images too.
[4347] vixra:1412.0075 [pdf]
A Fuzzy-Cautious OWA Approach with Evidential Reasoning
Multi-criteria decision making (MCDM) means making decisions in the presence of multiple criteria. To make decisions in the framework of MCDM under uncertainty, a novel Fuzzy-Cautious OWA with evidential reasoning (FCOWA-ER) approach is proposed in this paper. A payoff matrix and belief functions of the states of nature are used to generate the expected payoffs, based on which two Fuzzy Membership Functions (FMFs), representing optimistic and pessimistic attitudes respectively, can be obtained. Two basic belief assignments (bba's) are then generated from the two FMFs. By evidence combination, a combined bba is obtained, which can be used to make the decision. There is no problem of weight selection in FCOWA-ER as in traditional OWA. When compared with other evidential-reasoning-based OWA approaches such as COWA-ER, FCOWA-ER has lower computational cost and clearer physical meaning. Some experiments and related analyses are provided to justify the proposed FCOWA-ER.
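As background, the classical OWA operator of Yager, whose weight-selection problem FCOWA-ER avoids, aggregates a set of values by weighting them by rank rather than by source. A minimal sketch (an illustration of standard OWA, not the paper's FCOWA-ER algorithm):

```python
def owa(values, weights):
    """Ordered Weighted Averaging (Yager): sort the inputs in
    descending order, then take the weighted sum by position."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have the same length")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
```

The weight vector encodes the decision attitude: [1, 0, ..., 0] recovers the optimistic max, [0, ..., 0, 1] the pessimistic min, and intermediate vectors interpolate between them; choosing these weights is exactly the step that evidential-reasoning-based variants try to sidestep.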
[4348] vixra:1412.0074 [pdf]
Hierarchical DSmP Transformation for Decision-Making Under Uncertainty
Dempster-Shafer evidence theory is widely used for approximate reasoning under uncertainty; however, decision-making is more intuitive and easier to justify when made in a probabilistic context. Thus the transformation approximating a belief function by a probability measure is crucial and important for decision-making within the evidence theory framework. In this paper we present a new transformation of any general basic belief assignment (bba) into a Bayesian belief assignment (or subjective probability measure) based on a new proportional and hierarchical principle of uncertainty reduction. Some examples are provided to show the rationality and efficiency of our proposed probability transformation approach.
[4349] vixra:1412.0069 [pdf]
On The Validity of Dempster-Shafer Theory
We challenge the validity of Dempster-Shafer Theory by using an emblematic example to show that the DS rule produces counter-intuitive results. Further analysis reveals that the result comes from an understanding of evidence pooling which goes against the common expectation of this process. Although DS theory has attracted some interest from the scientific community working in information fusion and artificial intelligence, its validity for solving practical problems is problematic, because it is not applicable to evidence combination in general, but only to certain types of situations which still need to be clearly identified.
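Dempster's rule and the kind of emblematic example at issue can be sketched as follows; the scenario below follows the pattern of Zadeh's classical counter-example (the specific example used in the paper may differ):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two bbas
    (dicts from frozenset focal elements to masses), with the
    conflicting mass renormalized away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Zadeh-style scenario: two experts almost fully disagree, yet the
# diagnosis both consider nearly impossible ends up with all the mass.
m1 = {frozenset({'meningitis'}): 0.99, frozenset({'tumor'}): 0.01}
m2 = {frozenset({'concussion'}): 0.99, frozenset({'tumor'}): 0.01}
result = dempster_combine(m1, m2)
```

Here 99.99% of the product mass is conflicting and is normalized away, so the combined bba assigns mass 1 to 'tumor', the hypothesis each expert rated at only 1%: the counter-intuitive behavior the abstract refers to.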
[4350] vixra:1412.0054 [pdf]
Characterization of Hard and Soft Sources of Information: a Practical Illustration
Physical sensors (hard sources) and humans (soft sources) have complementary features in terms of perception, reasoning and memory. It is thus natural to combine their associated information for a wider coverage of the diversity of the available information and thus provide an enhanced situation awareness for the decision maker. While the fusion domain mainly (although not only) considers the processing and combination of information from hard sources, conciliating these two broad areas is gaining more and more interest in the domain of hard and soft fusion. In order to better understand the diversity and specificity of sources of information, we propose a functional model of a source of information, and a structured list of dimensions along which a source of information can be qualified. We illustrate some properties on real data gathered from an experiment on light detection in a fog chamber involving both automatic and human detectors.
[4351] vixra:1412.0047 [pdf]
Translational and Rotational Properties of Tensor Fields in Relativistic Quantum Mechanics
Recently, several discussions on the possible observability of 4-vector fields have been published in the literature. Furthermore, several authors have recently claimed the existence of a helicity=0 fundamental field. We re-examine the theory of antisymmetric tensor fields and 4-vector potentials, and study the massless limits. In fact, a theoretical motivation for this venture is the old papers of Ogievetskii and Polubarinov, Hayashi, and Kalb and Ramond. They proposed the concept of the notoph, whose helicity properties are complementary to those of the photon. We analyze the quantum field theory taking into account the mass dimensions of the notoph and the photon. We also proceed to derive equations for the symmetric tensor of the second rank on the basis of the Bargmann-Wigner formalism. They are consistent with general relativity. Particular attention has been paid to the correct definitions of the energy-momentum tensor and other Noether currents. We estimate possible interactions: fermion-notoph, graviton-notoph, photon-notoph. PACS numbers: 03.65.Pm, 04.50.-h, 11.30.Cp
[4352] vixra:1412.0029 [pdf]
Consecutive, Reversed, Mirror, and Symmetric Smarandache Sequences of Triangular Numbers
We use the Maple system to check the investigations of S. S. Gupta regarding the Smarandache consecutive and the reversed Smarandache sequences of triangular numbers [Smarandache Notions Journal, Vol. 14, 2004, pp. 366–368]. Furthermore, we extend previous investigations to the mirror and symmetric Smarandache sequences of triangular numbers.
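The two sequences being checked can be generated in a few lines; this sketch (in Python rather than the Maple used in the paper) concatenates the decimal digits of the triangular numbers in forward and reverse order:

```python
def triangular(n):
    """n-th triangular number: 1, 3, 6, 10, 15, ..."""
    return n * (n + 1) // 2

def smarandache_consecutive(n):
    """n-th term of the Smarandache consecutive sequence:
    the concatenation T1 T2 ... Tn of triangular numbers."""
    return int("".join(str(triangular(k)) for k in range(1, n + 1)))

def smarandache_reversed(n):
    """n-th term of the reversed Smarandache sequence:
    the concatenation Tn ... T2 T1."""
    return int("".join(str(triangular(k)) for k in range(n, 0, -1)))
```

For instance, the fourth consecutive term concatenates 1, 3, 6, 10 into 13610, while the third reversed term concatenates 6, 3, 1 into 631; the investigations referred to above concern which such terms are prime or have other notable properties.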
[4353] vixra:1412.0023 [pdf]
Menelaus’s Theorem for Hyperbolic Quadrilaterals in The Einstein Relativistic Velocity Model of Hyperbolic Geometry
In this study, we present (i) a proof of the Menelaus theorem for quadrilaterals in hyperbolic geometry, (ii) a proof of the transversal theorem for triangles, and (iii) the Menelaus theorem for n-gons.
[4354] vixra:1412.0020 [pdf]
Smarandache Filters in Smarandache Residuated Lattice
In this paper we define the Smarandache residuated lattice, Smarandache filter, Smarandache implicative filter and Smarandache positive implicative filter, and we obtain some related results. Then we determine relationships between Smarandache filters in Smarandache residuated lattices.
[4355] vixra:1412.0007 [pdf]
The Electromagnetic Wave Evolution on Very Long Distance
We lay down the fundamental hypothesis that any electromagnetic radiation transforms progressively, evolving towards and finally reaching, after an appropriate distance, the value of the cosmic microwave background radiation, a 1.873 mm wavelength. In this way we explain the cosmic redshift Z of faraway galaxies using only Maxwell's equations and the quantum principle of photon energy. Hubble's law emerges naturally as the consequence of this transformation. According to this hypothesis we compute the constant Ho (84.3 km/s/Mpc) using data from the Pioneer satellite, thereby deciphering the enigma of its anomalous behaviour. This hypothesis is confirmed by solving some cases that are still enigmatic for standard cosmology. We review the distance modulus formula and comment on the limits of cosmological observations.
[4356] vixra:1412.0001 [pdf]
Improvement On Operator Axioms And Fundamental Operator Functions
The Operator axioms have been constructed to deduce number systems. In this paper, we slightly improve the syntax of the Operator axioms and construct a semantics for them. Then, on the basis of the improved Operator axioms, we define two fundamental operator functions to study the analytic properties of the Operator axioms. Finally, we prove two theorems about the fundamental operator functions and pose some conjectures. Real operators can give new equations and inequalities so as to precisely describe the relations of mathematical objects or scientific objects.
[4357] vixra:1411.0580 [pdf]
Alternative Classical Mechanics III
This paper presents an alternative classical mechanics which establishes the existence of a new universal force of interaction (called kinetic force) and which can be applied in any reference frame without the necessity of introducing fictitious forces.
[4358] vixra:1411.0567 [pdf]
L'évolution de L'onde électromagnétique Sur de Très Grandes Distances
We lay down the fundamental hypothesis that any electromagnetic radiation transforms progressively, evolving towards and finally reaching, after an appropriate distance, the value of the cosmic microwave background radiation, a 1.873 mm wavelength. In this way we explain the redshift Z of the radiation coming from distant galaxies using Maxwell's classical equations and the quantum principle of photon energy. Hubble's law emerges quite naturally as a consequence of this transformation. Following this hypothesis, we evaluate the constant Ho (84.3 km/s/Mpc) using the data provided by the Pioneer satellite, thereby explaining the behavioural anomaly attributed to this satellite. This hypothesis is likewise confirmed by the resolution of some situations unexplained by current cosmology. We continue by presenting the modified expression of the distance modulus and discuss the limiting distance for the observation of cosmological phenomena.
[4359] vixra:1411.0549 [pdf]
'Spooky Action at a Distance' in the Micropolar Electromagnetic Theory
There is still no theoretical background for the explanation of the physical phenomenon of 'spooky action at a distance' as a quantum superposition of quantum particles. Several experiments show that the speed of this phenomenon is at least four orders of magnitude greater than the speed of light in vacuum. The classical electromagnetic field theory is based on similarity to the classical dynamics of solid continuum media. Today's experimental data on the spin of the photon are not reasonably reflected in Maxwell's equations of electromagnetism. Thus, the newly proposed micropolar extensions of the electromagnetic field equations could explain the observed rotational speed of the electromagnetic field, which experimentally exceeds the speed of light by at least four orders of magnitude.
[4360] vixra:1411.0513 [pdf]
Automatic Aircraft Recognition using DSmT and HMM
In this paper we propose a new method for solving the Automatic Aircraft Recognition (AAR) problem from a sequence of images of an unknown observed aircraft. Our method exploits the knowledge extracted from a training image data set (a set of binary images of different aircraft observed under three different poses) together with the fusion of information from multiple features drawn from the image sequence, using Dezert-Smarandache Theory (DSmT) coupled with Hidden Markov Models (HMM).
[4361] vixra:1411.0509 [pdf]
Algorithm of Nature
Numerical simulations of elementary gravitational and electromagnetic fields are carried out with an amazingly simple algorithm, yet this algorithm renders nature correctly. Moreover, the material world is revealed to be completely Riemannian-geometrical, without exception. The mathematics is based on the geometric theory of fields, which goes back to Einstein and Rainich. The correctness of the theory is manifested in the fact that known particles appear as discrete solutions of the geometric field equations. The results involve a new understanding of mathematical principles.
[4362] vixra:1411.0499 [pdf]
Interval-Valued Neutrosophic Soft Sets and Its Decision Making
In this paper, the notion of interval-valued neutrosophic soft sets (ivn-soft sets) is defined as a combination of interval-valued neutrosophic sets [36] and soft sets [30]. Our ivn-soft sets generalize the concepts of the soft set, fuzzy soft set, interval-valued fuzzy soft set, intuitionistic fuzzy soft set, interval-valued intuitionistic fuzzy soft set and neutrosophic soft set. We then introduce some definitions and operations on ivn-soft sets, and establish some properties of ivn-soft sets connected to these operations. A further aim of this paper is to investigate decision making based on ivn-soft sets by means of level soft sets; we therefore develop a decision making method and give an example to illustrate the developed approach.
[4363] vixra:1411.0491 [pdf]
Neutrosophic Ideals of Γ-Semirings
Neutrosophic ideals of a Γ-semiring are introduced and studied in the sense of Smarandache [14], along with some operations on them such as intersection, composition and cartesian product. Among the other results/characterizations, it is shown that all the operations are structure preserving.
[4364] vixra:1411.0488 [pdf]
Neutrosophic Soft Relations and Some Properties
In this work, we first define a relation on neutrosophic soft sets which allows two neutrosophic soft sets to be composed; it is devised to derive useful information through the composition of two neutrosophic soft sets. Then we examine symmetric, transitive and reflexive neutrosophic soft relations, and many related concepts, such as equivalence neutrosophic soft set relations, partitions of neutrosophic soft sets, equivalence classes, quotient neutrosophic soft sets and neutrosophic soft composition, are given and their propositions discussed. Finally, a decision making method on neutrosophic soft sets is presented.
[4365] vixra:1411.0485 [pdf]
Possibility Neutrosophic Soft Sets with Applications in Decision Making and Similarity Measure
In this paper, the concept of a possibility neutrosophic soft set and its operations are defined, and their properties are studied. An application of this theory to decision making is investigated. A similarity measure of two possibility neutrosophic soft sets is also introduced and discussed. Finally, an application of this similarity measure to personnel selection for a firm is given.
[4366] vixra:1411.0479 [pdf]
On Sub-Implicative (α, β)-Fuzzy Ideals of BCH-Algebras
The theory of fuzzy sets, which was initiated by Zadeh in his seminal paper [33] in 1965, was applied to generalize some of the basic concepts of algebra. The fuzzy algebraic structures play a vital role in mathematics with wide applications in many other branches such as theoretical physics, computer sciences, control engineering, information sciences, coding theory, logic, set theory, real analysis, measure theory etc.
[4367] vixra:1411.0478 [pdf]
Surfaces Family With Common Smarandache Asymptotic Curve
In this paper, we analyze the problem of constructing a family of surfaces from given special Smarandache curves in Euclidean 3-space. Using the Frenet frame of a curve in Euclidean 3-space, we express the family of surfaces as a linear combination of the components of this frame, and derive the necessary and sufficient conditions for the coefficients to satisfy both the asymptotic and isoparametric requirements. Finally, examples are given to show families of surfaces with a common Smarandache curve.
[4368] vixra:1411.0462 [pdf]
Neutrosophic Soft Semirings
The purpose of this paper is to study semirings and their ideals by means of neutrosophic soft sets. After noting some preliminary ideas for subsequent use in Sections 1 and 2, I introduce and study the neutrosophic soft semiring, neutrosophic soft ideals, the idealistic neutrosophic soft semiring and the regular (intra-regular) neutrosophic soft semiring, along with some of their characterizations, in Sections 3 and 4. In Section 5, I illustrate all the necessary definitions and results with examples.
[4369] vixra:1411.0461 [pdf]
Neutrosophic Soft Sets and Neutrosophic Soft Matrices Based on Decision Making
Maji [32] first proposed that neutrosophic soft sets can handle the indeterminate and inconsistent information which exists commonly in belief systems. In this paper, we first redefine the complement and union, and compare our definitions for neutrosophic soft sets with the definitions given by Maji. Then we introduce the concept of the neutrosophic soft matrix and its operators, which are more functional for theoretical studies in neutrosophic soft set theory.
[4370] vixra:1411.0460 [pdf]
Neutrosophic Soft Sets with Applications in Decision Making
We first present the definitions and properties from the study of Maji [11] on neutrosophic soft sets, and give a few notes on his study. Next, based on Çağman [4], we redefine the notion of a neutrosophic soft set and of neutrosophic soft set operations to make them more functional. Using these new definitions we construct a decision making method and a group decision making method which select a set of optimum elements from the alternatives. We finally present examples which show that the methods can be successfully applied to many problems that contain uncertainties.
[4371] vixra:1411.0454 [pdf]
Similarity Measure Between Possibility Neutrosophic Soft Sets and Its Applications
In this paper, a similarity measure between possibility neutrosophic soft sets (PNS-sets) is defined, and its properties are studied. A decision making method is established based on the proposed similarity measure. Finally, an application of this similarity measure to a real-life problem is given.
[4372] vixra:1411.0449 [pdf]
Subsethood Measure for Single Valued Neutrosophic Sets
The main aim of this paper is to introduce a neutrosophic subsethood measure for single valued neutrosophic sets. For this purpose, we first introduce a system of axioms for subsethood measures of single valued neutrosophic sets. Then we give a simple subsethood measure based on a distance measure. Finally, to show the effectiveness of the intended subsethood measure, an application to a multicriteria decision making problem is presented and the results obtained are discussed. Though simple to calculate, the subsethood measure presents a new approach to dealing with neutrosophic information.
[4373] vixra:1411.0444 [pdf]
Florentin Smarandache: A Celebration
We celebrate Prof. Florentin Smarandache, the Associate Editor and co-founder of Progress in Physics, who is a prominent mathematician of the 20th/21st centuries. Prof. Smarandache is best known as the founder of neutrosophic logic, a modern extension of fuzzy logic that introduces neutralities and denials (such as “neutral A” and “non-A” between “A” and “anti-A”). He is also known for his many discoveries in the field of pure mathematics, such as number theory, set theory, functions, etc. (see the many items connected with his name in the CRC Encyclopedia of Mathematics). A multi-talented person, Prof. Smarandache is also known for his achievements in other fields of science, and as a poet and writer. He still works in science and continues his creative research activity.
[4374] vixra:1411.0443 [pdf]
Generalized Exponential Type Estimator for Population Variance in Survey Sampling
In this paper, a generalized exponential-type estimator is proposed for estimating the population variance using a mean auxiliary variable in single-phase sampling. Some special cases of the proposed generalized estimator are also discussed.
[4375] vixra:1411.0432 [pdf]
Advances in DS Evidence Theory and Related Discussions
Based on a review of the development and recent advances in models, reasoning, decision and evaluation in evidence theory, some analyses and discussions of problems, confusions and misunderstandings in evidence theory are provided, together with related numerical examples. The relations between evidence theory and probability theory, evidence conflict and the related counter-intuitive results, some definitions of the distance of evidence, and the evaluation criteria in evidence theory are covered in this paper. The future developing trends of evidence theory are also analyzed. This paper aims to provide a reference for correctly understanding and using evidence theory.
[4376] vixra:1411.0431 [pdf]
An Airplane Image Target's Multi-Feature Fusion Recognition Method
This paper proposes an image target's multi-feature fusion recognition method based on probabilistic neural networks (PNN) and Dezert-Smarandache theory (DSmT). The information from multiple features extracted from an image is fused. Firstly, the image is preprocessed with binarization, and multiple features are then extracted, such as Hu moments, the normalized moment of inertia, affine invariant moments, discrete outline parameters and singular values.
[4377] vixra:1411.0428 [pdf]
Combining Sources of Evidence with Reliability and Importance for Decision Making
The combination of sources of evidence with reliability has been widely studied within the framework of Dempster-Shafer theory (DST), which has been employed as a major method for integrating multiple sources of evidence with uncertainty. Since sources of evidence may also differ in importance, for example in multi-attribute decision making (MADM), we propose an importance discounting and combination method within the framework of DST to combine sources of evidence with importance, composed of an importance discounting operation and an extended Dempster's rule of combination.
[4378] vixra:1411.0425 [pdf]
D Numbers Theory: a Generalization of Dempster-Shafer Theory
Dempster-Shafer theory is widely applied to uncertainty modelling and knowledge reasoning due to its ability to express uncertain information. However, some conditions, such as the exclusiveness hypothesis and the completeness constraint, limit its development and application to a large extent. To overcome these shortcomings of Dempster-Shafer theory and enhance its capability of representing uncertain information, a novel theory called D numbers theory is systematically proposed in this paper.
[4379] vixra:1411.0422 [pdf]
Generalized Evidence Theory
Conflict management is still an open issue in the application of Dempster-Shafer evidence theory, and many works have been presented to address it. In this paper, a new theory, called generalized evidence theory (GET), is proposed. Compared with existing methods, GET assumes that the general situation is an open world, due to uncertainty and incomplete knowledge.
[4380] vixra:1411.0415 [pdf]
Performance of M-ary Soft Fusion Systems Using Simulated Human Responses
A major hurdle in the development of soft and hard/soft data fusion systems is the inability to determine the practical performance gains between fusion operators without the burdens associated with human testing. Drift diffusion models of human responses (i.e., decisions, confidence assessments, and response times) from cognitive psychology can be used to gain a sense of the performance of a fusion system during the design phase without the need for human testing.
[4381] vixra:1411.0414 [pdf]
Performance of Probability Transformations Using Simulated Human Opinions
Probability transformations provide a method of relating Dempster-Shafer sources of evidence to subjective probability assignments. These transforms are constructed to facilitate decision making over a set of mutually exclusive hypotheses.
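The standard transform of this kind is Smets's pignistic transformation, BetP(x) = Σ_{A ∋ x} m(A)/|A|, which spreads each focal element's mass uniformly over its singletons. A minimal sketch (the frame and mass values below are illustrative, not from the paper):

```python
def pignistic(masses):
    """Smets's pignistic transform: BetP(x) = sum over A containing x of m(A)/|A|.

    `masses` maps frozensets (focal elements) to basic belief masses
    summing to 1, with no mass on the empty set.
    """
    betp = {}
    for focal, m in masses.items():
        share = m / len(focal)          # split the focal mass evenly
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

# Illustrative bba on the frame {a, b}
m = {frozenset({'a'}): 0.4, frozenset({'a', 'b'}): 0.6}
print(pignistic(m))   # BetP(a) = 0.7, BetP(b) = 0.3 (up to float rounding)
```

The result is an additive probability over the singletons, suitable for expected-utility decision making over mutually exclusive hypotheses.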
[4382] vixra:1411.0408 [pdf]
Determining the Solutions of Diophantine Equations 2069-2080
The function that associates to each natural number n the smallest natural number m with the property that m! is a multiple of n was first considered by Lucas in 1883.
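This is the function now usually denoted S(n), the Smarandache function. A direct sketch of the definition (taking the smallest m ≥ 1 with n dividing m!):

```python
def S(n):
    """Smarandache/Lucas function: smallest m >= 1 such that m! is a multiple of n."""
    m, fact = 1, 1
    while fact % n != 0:
        m += 1
        fact *= m          # fact == m! at all times
    return m

print([S(n) for n in range(1, 11)])  # [1, 2, 3, 4, 5, 3, 7, 4, 6, 5]
```

For example S(6) = 3 because 3! = 6 is the first factorial divisible by 6.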
[4383] vixra:1411.0402 [pdf]
An Introduction to the Theory of Algebraic Multi-Hyperring Spaces
A Smarandache multi-space is a union of n different spaces equipped with different structures, for an integer n ≥ 2, which can be used both for discrete and for connected spaces, particularly for geometries and spacetimes in theoretical physics. In this paper, applying Smarandache's notion and combining it with hyperrings from hyperring theory, we introduce the notion of a multi-hyperring space and initiate a study of multi-hyperring theory. Some characterizations and properties of multi-hyperring spaces are investigated and obtained. Some open problems are suggested for further study and investigation.
[4384] vixra:1411.0389 [pdf]
Smarandache Curves According to Curves on a Spacelike Surface in Minkowski 3-Space R^3_1
In this paper, we introduce Smarandache curves according to the Lorentzian Darboux frame of a curve on a spacelike surface in Minkowski 3-space R^3_1. We also obtain the Sabban frame and the geodesic curvature of the Smarandache curves and give some characterizations of the curves when the curve is an asymptotic curve or a principal curve. Finally, we give an example to illustrate these curves.
[4385] vixra:1411.0388 [pdf]
Smarandache Curves According to Bishop Frame in Euclidean 3-Space
In this paper, we investigate special Smarandache curves according to Bishop frame in Euclidean 3-space and we give some differential geometric properties of Smarandache curves. Also we find the centers of the osculating spheres and curvature spheres of Smarandache curves.
[4386] vixra:1411.0387 [pdf]
Smarandache Curves According to Sabban Frame on S2
In this paper, we introduce special Smarandache curves according to the Sabban frame on S2 and we give some characterizations of Smarandache curves. Besides, we illustrate examples of our results.
[4387] vixra:1411.0384 [pdf]
Smarandache Lattice and Pseudo Complement
In this paper, we introduce the Smarandache-2-algebraic structure of a lattice, namely the Smarandache lattice. A Smarandache 2-algebraic structure on a set N means a weak algebraic structure A0 on N such that there exists a proper subset M of N which is embedded with a stronger algebraic structure A1. A stronger algebraic structure means a structure which satisfies more axioms; by a proper subset one understands a subset different from the empty set, from the unit element if any, and from the whole set. We define the Smarandache lattice and obtain some of its characterizations through pseudo complements. For the basic concepts, we refer to Padilla Raul [4].
[4388] vixra:1411.0383 [pdf]
Smarandache N-Structure on CI-Algebras
The Smarandache algebraic structures theory was introduced in 1998 by Padilla [11]. In [6], Kandasamy studied Smarandache groupoids, sub-groupoids, ideals of groupoids, seminormal subgroupoids, Smarandache Bol groupoids and strong Bol groupoids, and obtained many interesting results about them.
[4389] vixra:1411.0382 [pdf]
Smarandache R Module and Morita Context
In this paper we introduce the Smarandache-2-algebraic structure of an R-module, namely the Smarandache-R-module. A Smarandache-2-algebraic structure on a set N means a weak algebraic structure A0 on N such that there exists a proper subset M of N which is embedded with a stronger algebraic structure A1; a stronger algebraic structure means one satisfying more axioms, and by a proper subset one understands a subset different from the empty set, from the unit element if any, and from the whole set. We define the Smarandache-R-module and obtain some of its characterizations through S-algebras and the Morita context. For basic concepts we refer to Raul Padilla.
[4390] vixra:1411.0378 [pdf]
Some Normal Congruences in Quasigroups Determined by Linear-Bivariate Polynomials Over the Ring ZN
In this work, two normal congruences are built on two quasigroups with underlying set Z_n^2 relative to the linear-bivariate polynomial P(x, y) = a + bx + cy that generates a quasigroup over the ring Z_n. Four quasigroups are built using the normal congruences, and these are shown to be homomorphic to the quasigroups with underlying set Z_n^2. Some subquasigroups of the quasigroups with underlying set Z_n^2 are also found.
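For illustration (the parameter values below are ours, not the paper's): (Z_n, *) with x * y = P(x, y) mod n is a quasigroup precisely when both b and c are invertible modulo n, i.e. gcd(b, n) = gcd(c, n) = 1, which can be checked directly via the Latin-square property of the Cayley table:

```python
from math import gcd

def quasigroup_table(a, b, c, n):
    """Cayley table of x * y = (a + b*x + c*y) mod n."""
    return [[(a + b * x + c * y) % n for y in range(n)] for x in range(n)]

def is_latin_square(table):
    """Every row and every column is a permutation of {0, ..., n-1}."""
    n = len(table)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in table)
    cols_ok = all({table[x][y] for x in range(n)} == symbols for y in range(n))
    return rows_ok and cols_ok

# gcd(2, 5) = gcd(3, 5) = 1, so P(x, y) = 1 + 2x + 3y generates a quasigroup on Z_5
print(is_latin_square(quasigroup_table(1, 2, 3, 5)))   # True
# gcd(2, 4) != 1: columns repeat values, so no quasigroup on Z_4
print(is_latin_square(quasigroup_table(1, 2, 3, 4)))   # False
```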
[4391] vixra:1411.0377 [pdf]
On Some Smarandache Determinant Sequences
Murthy introduced the concepts of the Smarandache Cyclic Determinant Natural Sequence, the Smarandache Cyclic Arithmetic Determinant Sequence, the Smarandache Bisymmetric Determinant Natural Sequence, and the Smarandache Bisymmetric Arithmetic Determinant Sequence, and in [2] Majumdar derived the n-th terms of these four sequences. In this paper, we present some of the results found by Majumdar in [2], but using a different approach.
[4392] vixra:1411.0376 [pdf]
Bi-Strong Smarandache BL-Algebras
A Smarandache structure on a set A means a weak structure W on A such that there exists a proper subset B of A which is embedded with a strong structure S.
[4393] vixra:1411.0362 [pdf]
The Orthogonal Planes Split of Quaternions and Its Relation to Quaternion Geometry of Rotations
Recently the general orthogonal planes split with respect to any two pure unit quaternions $f,g \in \mathbb{H}$, $f^2=g^2=-1$, including the case $f=g$, has proved extremely useful for the construction and geometric interpretation of general classes of double-kernel quaternion Fourier transformations (QFT) [E. Hitzer, S.J. Sangwine, The orthogonal 2D planes split of quaternions and steerable quaternion Fourier Transforms, in E. Hitzer, S.J. Sangwine (eds.), "Quaternion and Clifford Fourier Transforms and Wavelets", TIM \textbf{27}, Birkhauser, Basel, 2013, 15--39.]. Applications include color image processing, where the orthogonal planes split with $f=g=$ the grayline naturally splits a pure quaternionic three-dimensional color signal into luminance and chrominance components. Yet it is found independently in the quaternion geometry of rotations [L. Meister, H. Schaeben, A concise quaternion geometry of rotations, MMAS 2005; \textbf{28}: 101--126] that the pure quaternion units $f,g$ and the analysis planes which they define play a key role in the spherical geometry of rotations, and in the geometrical interpretation of integrals related to the spherical Radon transform of probability density functions of unit quaternions, as relevant for texture analysis in crystallography. In our contribution we further investigate these connections.
[4394] vixra:1411.0358 [pdf]
A Note on Computing Lower and Upper Bounds of Subjective Probability from Masses of Belief
This short note shows, on a very simple example, the consistency of the free DSm model encountered in Dezert-Smarandache Theory (DSmT) [2] with a refined model for computing lower and upper probability bounds from basic belief assignments (bba). Belief functions were introduced in 1976 by Shafer in Dempster-Shafer Theory (DST); see [1] and [2] for definitions and examples.
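In DST, the lower and upper probability bounds derived from a bba m are the belief and plausibility functions, Bel(A) = Σ_{B⊆A} m(B) and Pl(A) = Σ_{B∩A≠∅} m(B). A minimal sketch (the frame and mass values below are illustrative, not taken from the note):

```python
def bel(masses, event):
    """Lower bound: total mass of focal elements contained in `event`."""
    return sum(m for focal, m in masses.items() if focal <= event)

def pl(masses, event):
    """Upper bound: total mass of focal elements intersecting `event`."""
    return sum(m for focal, m in masses.items() if focal & event)

# Illustrative bba on the frame {x, y, z}
m = {frozenset({'x'}): 0.5,
     frozenset({'x', 'y'}): 0.3,
     frozenset({'x', 'y', 'z'}): 0.2}
A = frozenset({'x', 'y'})
print(bel(m, A), pl(m, A))   # 0.8 1.0
```

Any subjective probability consistent with m must assign P(A) in the interval [Bel(A), Pl(A)].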
[4395] vixra:1411.0335 [pdf]
Neutrosophic Crisp Sets & Neutrosophic Crisp Topological Spaces
In this paper, we generalize the crisp topological space to the notion of a neutrosophic crisp topological space, and we construct the basic concepts of neutrosophic crisp topology. In addition, we introduce the definitions of neutrosophic crisp continuous functions and neutrosophic crisp compact spaces. Finally, some characterizations concerning neutrosophic crisp compact spaces are presented and several properties obtained. A possible application to GIS topology rules is touched upon.
[4396] vixra:1411.0332 [pdf]
Neutrosophic Multi Relations and Their Properties
In this paper, the neutrosophic multi relation (NMR), defined on neutrosophic multisets [18], is introduced. Various properties like reflexivity, symmetry and transitivity are studied.
[4397] vixra:1411.0329 [pdf]
Neutrosophic Refined Relations and Their Properties
In this paper, the neutrosophic refined relation (NRR), defined on neutrosophic refined sets (multisets) [13], is introduced. Various properties like reflexivity, symmetry and transitivity are studied.
[4398] vixra:1411.0319 [pdf]
Rough Neutrosophic Sets
Both neutrosophic set theory and rough set theory are emerging as powerful tools for managing uncertain, indeterminate, incomplete and imprecise information. In this paper we develop a hybrid structure called rough neutrosophic sets and study its properties.
[4399] vixra:1411.0316 [pdf]
Soft Neutrosophic Loop, Soft Neutrosophic Biloop and Soft Neutrosophic N-Loop
Soft set theory is a general mathematical tool for dealing with uncertain, fuzzy, not clearly defined objects. In this paper we introduce the soft neutrosophic loop, soft neutrosophic biloop and soft neutrosophic N-loop, with a discussion of some of their characteristics. We also introduce a new type of soft neutrosophic loop, the so-called soft strong neutrosophic loop, which is of purely neutrosophic character; this notion is also found in all the other corresponding notions of soft neutrosophic theory. We also give some properties of this newly born soft structure related to the strong part of neutrosophic theory.
[4400] vixra:1411.0300 [pdf]
A New Wave Quantum Relativistic Equation from Quaternionic Representation of Maxwell-Dirac Isomorphism as an Alternative to Barut-Dirac Equation
It is known that Barut's equation can predict lepton and hadron masses with remarkable precision. Recently some authors have extended this equation, resulting in the Barut-Dirac equation. In the present article we argue that it is possible to derive a new wave equation, as an alternative to the Barut-Dirac equation, from the known exact correspondence (isomorphism) between the Dirac equation and Maxwell's electromagnetic equations via the biquaternionic representation. Furthermore, we submit the viewpoint that it would be more conceivable to interpret the vierbein of this equation in terms of superfluid velocity, which in turn brings us to the notion of a topological electronic liquid. Some implications of this proposition include the quantization of celestial systems. We also argue that it is possible to find some signatures of Bose-Einstein cosmology, which thus far has not been explored sufficiently in the literature. Further experimental observation to verify or refute this proposition is recommended.
[4401] vixra:1411.0266 [pdf]
Eccentricity, Space Bending, Dimension
The main goal of this paper is to present new transformations, previously non-existent in traditional mathematics, that we call centric mathematics (CM), which became possible due to the newborn eccentric mathematics and, implicitly, supermathematics (SM).
[4402] vixra:1411.0260 [pdf]
Luhn Prime Numbers
The first prime number with the special property that its addition with its reversal gives as a result a prime number too is 229 (229 + 922 = 1151, which is prime). The prime numbers with this property will be called Luhn prime numbers. In this article we intend to present a performing algorithm for determining the Luhn prime numbers.
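The search can be sketched as follows (our own naive trial-division version, adequate for small ranges; the paper's performant algorithm may differ):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def luhn_primes(limit):
    """Primes p < limit such that p + reverse(p) is also prime."""
    return [p for p in range(2, limit)
            if is_prime(p) and is_prime(p + int(str(p)[::-1]))]

print(luhn_primes(300))   # [229, 239, 241, 257, 269, 271, 277, 281]
```

Note that no two-digit prime qualifies, since p + reverse(p) is then a multiple of 11 greater than 11.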
[4403] vixra:1411.0257 [pdf]
A New Proof of Menelaus’s Theorem of Hyperbolic Quadrilaterals in the Poincaré Model of Hyperbolic Geometry
Hyperbolic geometry appeared in the first half of the 19th century as an attempt to understand Euclid's axiomatic basis of geometry. It is also known as a type of non-Euclidean geometry, being in many respects similar to Euclidean geometry. Hyperbolic geometry includes such concepts as distance and angle.
[4404] vixra:1411.0235 [pdf]
Application of New Absolute and Relative Conditioning Rules in Threat Assessment
This paper presents new absolute and relative conditioning rules as possible solution of multi-level conditioning in threat assessment problem. An example of application of these rules with respect to target observation threat model has been provided. The paper also presents useful directions in order to manage the implemented multiple rules of conditioning in the real system.
[4405] vixra:1411.0233 [pdf]
Computational-Communicative Actions of Informational Processing
This study is circumscribed to Information Science. The zetetic aim of the research is twofold: a) to define the concept of an action of informational processing and b) to design a taxonomy of actions of informational processing.
[4406] vixra:1411.0232 [pdf]
Alternative Classical Mechanics II
This paper presents an alternative classical mechanics which is invariant under transformations between reference frames and which can be applied in any reference frame without the necessity of introducing fictitious forces.
[4407] vixra:1411.0218 [pdf]
Faith in God and the Light on the Paradoxes of Einstein
Speaking of Truth, till 2014 the paradoxes of Physics are not solved without the God's Grace. One must return to Holy Trinity. The development of physics was guided by the strange idea, that God is absent. Namely, He gave the laws, gave the matter and left for rest, for vacation till the Judgement Day. That is wrong, because Jesus Christ is the God, Who made miracles among us (see the Bible).
[4408] vixra:1411.0144 [pdf]
On the Nature of the Newton Gravitational Constant
A definition of G is derived using the product of two Planck point masses and a definition of hbar based on the speed of light in vacuum and geometry. The theoretical value of G is found to be 6.74981057667161 x 10^-11 m^3 kg^-1 s^-2 yielding a relative accuracy error of the CODATA 2010 G-value of -1.1255%. One experiment resulted in a value with a smaller relative accuracy error than the CODATA 2010 G-value of -0.5098%. Both rest and relativistic mass product equations are derived. These equations relate the relative spacetime spin frequency w_s, the relative orbital frequency w_o and (relativistic equation only) the Lorentz factor y describing relative linear speed of two bodies to the mass product. The Planck mass is a special case mass with w_sw_o = w_planck^2 = 1 s^-2. The theoretical value of the Planck mass was found to be 2.16039211144077 x 10^-8 kg. The relative accuracy error of the CODATA 2010 Planck mass value is 0.7461%. This error is attributed to use of the different definition of hbar. When derived from both hbar and G constants as well as the rest mass product equation, three kilogram unit definition candidates are all inconsistent. The candidate derived from the rest mass product equation is the only candidate that has equal second and meter exponents suggesting a kind of symmetry. This definition is considered the nominal kilogram unit definition. The other two candidates are considered to be artifacts of the hbar and G constants.
[4409] vixra:1411.0143 [pdf]
The 3D Visualization of E8 Using an H4 Folding Matrix, Math Version
This paper will present various techniques for visualizing a split real even $E_8$ representation in 2 and 3 dimensions using an $E_8$ to $H_4$ folding matrix. This matrix is shown to be useful in providing direct relationships between $E_8$ and the lower dimensional Dynkin and Coxeter-Dynkin geometries contained within it, geometries that are visualized in the form of real and virtual 3 dimensional objects.
[4410] vixra:1411.0130 [pdf]
The 3D Visualization of E8 using an H4 Folding Matrix
This paper will present various techniques for visualizing a split real even E8 representation in 2 and 3 dimensions using an E8 to H4 folding matrix. This matrix is shown to be useful in providing direct relationships between E8 and the lower dimensional Dynkin and Coxeter-Dynkin geometries contained within it, geometries that are visualized in the form of real and virtual 3 dimensional objects. A direct linkage between E8, the folding matrix, fundamental physics particles in an extended Standard model, quaternions, and octonions is introduced, and its importance is investigated and described.
[4411] vixra:1411.0090 [pdf]
Light Cone Gauge Quantization of String, Dynamics of D-Brane and String Dualities
This review aims to show the light cone gauge quantization of strings. It is divided into three parts. The first consists of an introduction to bosonic and superstring theories and a brief discussion of Type II superstring theories. The second part deals with different configurations of D-branes, their charges and tachyon condensation. The third part contains the compactification of an extra dimension, the dual picture of D-branes carrying electric as well as magnetic fields, and the different dualities in string theories. In ten dimensions there exist five consistent string theories, and in eleven dimensions there is a unique M-Theory; under these dualities, the different superstring theories are the same underlying M-Theory.
[4412] vixra:1411.0074 [pdf]
On Some Novel Consequences of Clifford Space Relativity Theory
Some of the novel physical consequences of the Extended Relativity Theory in $C$-spaces (Clifford spaces) are presented. In particular, generalized photon dispersion relations which allow for energy-dependent speeds of propagation while still $retaining$ the Lorentz symmetry in ordinary spacetimes, while breaking the $extended$ Lorentz symmetry in $C$-spaces. We analyze in further detail the extended Lorentz transformations in Clifford Space and their physical implications. Based on the notion of ``extended events" one finds a very different physical explanation of the phenomenon of ``relativity of locality" than the one described by the Doubly Special Relativity (DSR) framework. We finalize with a discussion of the modified dispersion relations, rainbow metrics and generalized uncertainty relations in $C$-spaces which are extensions of the stringy uncertainty relations.
[4413] vixra:1411.0072 [pdf]
Derivation of the Recurrence Relation for Orthogonal Polynomials and Usage.
Derivation of the recurrence relation for orthogonal polynomials and its usage: the recurrence relation for orthogonal polynomials is derived from the Gram-Schmidt orthogonalization process, and a scheme for applying the resulting recurrence relation is given.
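As a concrete instance of such a recurrence (our example, not taken from the paper): Gram-Schmidt applied to 1, x, x², … with the Legendre weight on [-1, 1] yields the monic three-term recurrence p_{k+1}(x) = x·p_k(x) − b_k·p_{k−1}(x) with b_k = k²/(4k²−1), the standard coefficients for this weight, stated here without derivation. A sketch in exact arithmetic:

```python
from fractions import Fraction

def monic_legendre(n):
    """Coefficients (lowest degree first) of the monic Legendre polynomial p_n,
    built from the three-term recurrence p_{k+1} = x*p_k - b_k*p_{k-1}."""
    p_prev, p_cur = [Fraction(1)], [Fraction(0), Fraction(1)]   # p_0, p_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        b_k = Fraction(k * k, 4 * k * k - 1)
        shifted = [Fraction(0)] + p_cur                          # x * p_k
        sub = [b_k * c for c in p_prev] + [Fraction(0)] * (len(shifted) - len(p_prev))
        p_prev, p_cur = p_cur, [s - t for s, t in zip(shifted, sub)]
    return p_cur

print(monic_legendre(2))   # x^2 - 1/3
print(monic_legendre(3))   # x^3 - (3/5) x
```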
[4414] vixra:1411.0064 [pdf]
Estimation of the Probability of Transition Between Phases
The purpose of this paper is to present a general method to estimate the probability of transitions of a system between phases. The system must be represented in a quantitative model, with vectorial variables depending on time, satisfying general conditions which are usually met. The method can be implemented in Physics, Economics or Finances.
[4415] vixra:1411.0037 [pdf]
Precise Model of Hawking Radiation from the Tunnelling Mechanism
We recently improved the famous result of Parikh and Wilczek, who found a probability of emission of Hawking radiation which is compatible with a non-strictly thermal spectrum, by showing that such a probability of emission is really associated with two non-strictly thermal distributions, for bosons and fermions. Here we finalize the model by finding the correct value of the pre-factor of the Parikh and Wilczek probability of emission; in fact, their expression has the ∼ sign instead of an equality. In general, in this kind of leading-order tunnelling calculation, the exponent indeed arises from the classical action and the pre-factor is an order-Planck-constant correction. But in the case of emission of Hawking quanta, the variation of the Bekenstein-Hawking entropy is of order 1 for an emitted particle with energy of order the Hawking temperature. As a consequence, the exponent in the Parikh and Wilczek probability of emission is of order unity, and one asks what the real significance of that scaling is if the pre-factor is unknown. Here we solve the problem by assuming the unitarity of black hole (BH) quantum evaporation and considering the natural correspondence between Hawking radiation and the quasi-normal modes (QNMs) of excited BHs, in a “Bohr-like model” that we recently discussed in a series of papers. In those papers, QNMs are interpreted as natural BH quantum levels (the “electron states” of the “Bohr-like model”). Here we find the intriguing result that, although in general it is well approximated by 1, the pre-factor of the Parikh and Wilczek probability of emission depends on the BH quantum level n. We also write down an elegant expression of the probability of emission in terms of the BH quantum levels.
[4416] vixra:1411.0025 [pdf]
On the Nature of the Planck Constants
A deeper understanding is developed of why the reduced Planck constant and the Planck constant ("Planck constants") have the values determined by experiments. New definitions of the Planck constants are arrived at using the speed of light in vacuum and geometric considerations. The kilogram SI base unit is found to be derived from the SI base units second and meter. The values of the Planck constants determined by experiments and published by CODATA (2010) are both found to have a relative error of 0.3552%. A new kilogram definition is proposed, and it is argued that since the kilogram will then be a derived SI unit, the kilogram should no longer be considered an SI base unit.
[4417] vixra:1411.0006 [pdf]
Weighted Neutrosophic Soft Sets Approach in a Multi-criteria Decision Making Problem
The decision making problem in an imprecise environment has become very significant in recent years. In this paper we study weighted neutrosophic soft sets, which are a hybridization of neutrosophic sets with soft sets corresponding to weighted parameters. As an application of weighted neutrosophic soft sets, we consider a multi-criteria decision making problem.
[4418] vixra:1410.0206 [pdf]
The Two-Dimensional Vavilov-Cherenkov Radiation in LEDs
We derive, by the Schwinger source theory method, the power spectrum of photons generated by a charged particle moving within a 2D sheet with index of refraction n. Some graphene-like structures, for instance graphene with implanted ions, or also 2D glasses, are dielectric media enabling the experimental realization of the Vavilov-Cherenkov radiation. The relation of the Vavilov-Cherenkov radiation to LEDs, where the additional 2D dielectric sheet is an integral part of the LED, is discussed. It is not excluded that LEDs with 2D dielectric sheets will be crucial components of detectors in experimental particle physics.
[4419] vixra:1410.0203 [pdf]
Bohr-Like Model for Black Holes
It is an intuitive but general conviction that black holes (BHs) result in highly excited states representing both the “hydrogen atom” and the “quasi-thermal emission” in quantum gravity. Here we show that such an intuitive picture is more than a picture, discussing a model of quantum BH somewhat similar to the historical semi-classical model of the structure of a hydrogen atom introduced by Bohr in 1913. Our model has important implications on the BH information puzzle and on the non-strictly random character of Hawking radiation. It is also in perfect agreement with existing results in the literature, starting from the famous result of Bekenstein on the area quantization. This paper improves, clarifies and finalizes some recent results that, also together with collaborators, we published in various peer reviewed journals. Preliminary results on the model in this paper have been recently discussed in an Invited Lecture at the 12th International Conference of Numerical Analysis and Applied Mathematics.
[4420] vixra:1410.0193 [pdf]
High Availability-Aware Optimization Digest for Applications Deployment in Cloud
Cloud computing is continuously growing as a business model for hosting information and communication technology applications. Although on-demand resource consumption and faster deployment time make this model appealing for the enterprise, other concerns arise regarding the quality of service offered by the cloud. One major concern is the high availability of applications hosted in the cloud. This paper demonstrates the tremendous effect that the placement strategy for virtual machines hosting applications has on the high availability of the services provided by these applications. In addition, a novel scheduling technique is presented that takes into consideration the interdependencies between applications components and other constraints such as communication delay tolerance and resource utilization. The problem is formulated as a linear programming multi-constraint optimization model. The evaluation results demonstrate that the proposed solution improves the availability of the scheduled components compared to OpenStack Nova scheduler.
[4421] vixra:1410.0191 [pdf]
Introduction to Ammi Methodology
This work is based on the short course “A Metodologia AMMI: Com Aplicacão ao Melhoramento Genético” taught during the 58a RBRAS and 15o SEAGRO held in Campina Grande - PB, and aims to introduce the AMMI method both for those who have mathematical training and for those who do not. We do not intend to present a detailed work; the intention is to serve as a guide for researchers and for graduate and postgraduate students. In other words, it is a work meant to stimulate research and the quest for knowledge in an area of statistical methods. For this purpose we review the genotype-by-environment interaction, the definition of the AMMI models, some selection criteria, and the biplot graphic. More details can be found in the material produced for the short course.
[4422] vixra:1410.0182 [pdf]
Comment on ``Backreaction of Hawking Radiation on a Gravitationally Collapsing Star I: Black Holes?''
I present objections to the popular press statement ``the Big Bang was not, and black holes do not exist - proved mathematically''. Sadly, this was deduced from the Houghton paper, which has a different value.
[4423] vixra:1410.0174 [pdf]
Solving Diophantine Equations
In this book a multitude of Diophantine equations and their partial or complete solutions are presented. How should we solve, for example, the equation η(π(x)) = π(η(x)) in the set of natural numbers, where η is the Smarandache function and π is the prime-counting function, giving the number of primes up to x? If an analytical method is not available, an idea would be to resort to an empirical search for solutions. We establish a domain in which to search for solutions and then check all possible situations, retaining of course only those solutions that verify our equation. In other words, we say either that the equation has no solutions in the search domain, or that the equation has n solutions in this domain. This mode of solving is called partial resolution. Partially solving a Diophantine equation may be a good start towards a complete solution of the problem. The authors have identified 62 Diophantine equations that require such an approach and have partially solved them. For an efficient resolution it was necessary to construct many useful "tools" for partially solving the Diophantine equations in a reasonable time. The computer programs serving as tools were written in Mathcad, because this is good mathematical software in which many mathematical functions are implemented. Transposing the programs into another computer language is easy, and such algorithms can be put to use on other calculation systems with various processors.
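The empirical-search strategy described above can be illustrated with a short sketch. This is not the authors' Mathcad code but a hypothetical Python equivalent, with naive implementations of the Smarandache function η and the prime-counting function π that are only practical for small search domains:

```python
def eta(n):
    """Smarandache function: the smallest m such that n divides m!."""
    f, m = 1, 1
    while f % n != 0:
        m += 1
        f *= m
    return m

def primepi(x):
    """Prime-counting function pi(x), by trial division (small x only)."""
    return sum(1 for k in range(2, x + 1)
               if all(k % d for d in range(2, int(k ** 0.5) + 1)))

# Partial resolution: search a fixed domain and keep verified solutions.
domain = range(2, 200)
solutions = [x for x in domain if eta(primepi(x)) == primepi(eta(x))]
print(solutions)
```

Within the chosen domain the search either exhibits solutions or certifies that none exist there, which is exactly the "partial resolution" notion of the abstract.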
[4424] vixra:1410.0173 [pdf]
On Quasi-Normal Modes, Area Quantization and Bohr Correspondence Principle
In Int. Journ. Mod. Phys. D 14, 181 (2005) Khriplovich verbatim claims that “the correspondence principle does not dictate any relation between the asymptotics of quasinormal modes and the spectrum of quantized black holes” and that “this belief is in conflict with simple physical arguments”. In this paper we stress that Khriplovich's criticisms work only for the original proposal by Hod, while they do not work for the improvements suggested by Maggiore and recently finalized by the author and collaborators through a connection between Hawking radiation and black hole (BH) quasi-normal modes (QNMs). Thus, QNMs can be really interpreted as BH quantum levels.
[4425] vixra:1410.0154 [pdf]
In/Equivalence of Klein-Gordon and Dirac Equation
It will be proven that the Klein-Gordon and Dirac equations, when defined on an F-space of distributions, have the same set of solutions, which makes the two equations equivalent on that vector space of distributions. Some consequences of this for quantum field theory are briefly discussed.
[4426] vixra:1410.0153 [pdf]
Ives-Stilwell Time Dilation Li^+ Esr Darmstadt Experiment and Neo-Lorentz Relativity
Botermann, {\it et al.} in {\it Test of Time Dilation Using Stored Li$^+$ Ions as Clocks at Relativistic Speed}, {\it Physical Review Letters}, 2014, 113, 120405, reported results from an Ives-Stilwell-type time dilation experiment using $Li^+$ ions at speed 0.338c in the ESR storage ring at Darmstadt, and concluded that the data verifies the Special Relativity time dilation effect. However numerous other experiments have shown that it is only neo-Lorentz Relativity that accounts for all data, and all detect a 3-space speed $V\approx 470$km/s essentially from the south. Here we show that the ESR data confirms both Special Relativity and neo-Lorentz Relativity, but that a proposed different re-analysis of the ESR data should enable a test that could distinguish between these two theories.
[4427] vixra:1410.0149 [pdf]
The Cosmological Constant from the Extended Theory of Gravitation in Clifford Spaces
The exploration of the novel physical consequences of the Extended Theory of Gravity in $C$-spaces (Clifford spaces) is continued. One of the most salient physical features of the extended gravitational theory in $C$-spaces is that one can generate an $effective$ stress energy tensor mimicking the effects of ``dark" matter/energy. In particular, it is found that the presence of the cosmological constant, along with a plausible mechanism to explain its extremely small value and/or its cancellation, can be understood entirely from a purely Clifford algebraic and geometric perspective. For this reason we believe that this theory may have important consequences in Cosmology and in further research in Gravitation and Particle Physics.
[4428] vixra:1410.0110 [pdf]
The Analysis of Harold Puthoff Applied to the Natario Warp Drive Spacetime: Can Spacetime Metric Engineering Really Be Used for Superluminal Interstellar Spaceflight?
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at present two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, the major drawback concerning warp drives is the huge amount of negative energy density needed to sustain the warp bubble. In order to perform an interstellar space travel to a "nearby" star $20$ light-years away in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is $10^{24}$ times bigger in magnitude than the mass of the planet Earth! With the correct form of the shape function the Natario warp drive can overcome this obstacle, at least in theory. Other drawbacks that affect the warp drive geometry are collisions with hazardous interstellar matter (asteroids, comets, interstellar dust, etc.) that will unavoidably occur when a ship travels at superluminal speeds, and the problem of the horizons (causally disconnected portions of spacetime). The geometrical features of the Natario warp drive are the ones required to overcome these obstacles, also at least in theory.
Some years ago the American physicist Harold Puthoff published a very interesting work in the Journal of the British Interplanetary Society. He theorized about the possibility of the modification of the spacetime geometry by arbitrarily advanced civilizations able to perform so-called metric engineering; such a modification would supposedly "allow" the propulsion of spaceships at superluminal velocities. However, Puthoff used only diagonalized metrics for his analysis, and he even quotes the Schwarzschild metric. In this work we reproduce the Puthoff analysis for the Natario warp drive spacetime, and because the Natario warp drive is a non-diagonalized metric, due to the presence of both the shift and Natario vectors, our results differ from the ones obtained by Puthoff. However, his idea of spacetime metric engineering able to distort the spacetime geometry, "allowing" superluminal interstellar spaceflight, remains perfectly possible.
[4429] vixra:1410.0086 [pdf]
Reply to "Phantom Energy and Cosmic Doomsday"
Perhaps a mistake has been found in a most reputable journal (after Nature); therefore, Doomsday may be farther away. The objection "there is Dark Matter, so General Relativity is wrong" is rejected in: Dmitri Martila, "Simplest Explanation of Dark Matter and Dark Energy", LAP LAMBERT, 2013.
[4430] vixra:1410.0070 [pdf]
Convergence Rate of Extreme of Skew Normal Distribution Under Power Normalization
Let $\{X_n,~n\geq1\}$ be independent and identically distributed random variables with each $X_n$ following skew normal distribution. Let $M_n=\max\{X_k,~1\leq k\leq n\}$ denote the partial maximum of $\{X_n,~n\geq1\}$. Liao et al. (2014) considered the convergence rate of the distribution of the maxima for random variables obeying the skew normal distribution under linear normalization. In this paper, we obtain the asymptotic distribution of the maximum under power normalization and normalizing constants as well as the associated pointwise convergence rate under power normalization.
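As background (standard definitions from the literature on p-max stable laws, stated here for orientation rather than as this paper's results), the contrast between linear and power normalization can be written as:

```latex
% Linear normalization (classical extreme value theory):
\[
  \lim_{n\to\infty} P\!\left( \frac{M_n - b_n}{a_n} \le x \right) = G(x),
  \qquad a_n > 0,
\]
% Power normalization (Pancheva): with constants $a_n, b_n > 0$,
\[
  \lim_{n\to\infty} P\!\left( \left( \frac{|M_n|}{b_n} \right)^{1/a_n}
      \operatorname{sgn}(M_n) \le x \right) = H(x),
\]
% where $G$ is a max stable law and $H$ is one of the p-max stable laws.
```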
[4431] vixra:1410.0069 [pdf]
Reconstruction of Quantum Field Theory as Extension of Wave Mechanics
The task to be carried out should be clear from the title. One motivation for this endeavour comes from the fact that the usual version of quantum field theory is not acceptable. In the paper, above all, three intentions are pursued: (a) an adequate consideration of the interaction; (b) a proof that the means of classical field theory are sufficient; (c) a new attempt to describe particles by stable wave packets.
[4432] vixra:1410.0066 [pdf]
Notes on the Proof of Second Hardy-Littlewood Conjecture
In this paper a slightly stronger version of the Second Hardy-Littlewood Conjecture, namely the inequality $\pi(x)+\pi(y) > \pi(x+y)$, is examined, where $\pi(x)$ denotes the number of primes not exceeding $x$. It is shown that the inequality holds for all sufficiently large $x$ and $y$. It is also shown that for a given value of $y \geq 55$ the inequality $\pi(x)+\pi(y) > \pi(x+y)$ holds for all sufficiently large $x$. Finally, in the concluding section an argument is given to completely settle the conjecture.
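The inequality is easy to probe numerically. The sketch below (my own illustration, not the paper's argument) checks the non-strict form $\pi(x+y) \le \pi(x)+\pi(y)$ on a small range using a sieve; note that the strict inequality already fails for tiny values such as $x = y = 2$, where both sides equal $2$:

```python
def prime_pi_table(limit):
    """Return a list pi with pi[n] = number of primes <= n, via a sieve."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    pi, count = [0] * (limit + 1), 0
    for n in range(limit + 1):
        count += sieve[n]
        pi[n] = count
    return pi

pi = prime_pi_table(1000)
# Non-strict Second Hardy-Littlewood inequality on 2 <= x, y < 500:
violations = [(x, y) for x in range(2, 500) for y in range(2, 500)
              if pi[x + y] > pi[x] + pi[y]]
print(violations)
```

In this range the violations list comes back empty, consistent with the conjecture holding for all values checked numerically to date.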
[4433] vixra:1410.0058 [pdf]
Existence of Antiparticles as an Indication of Finiteness of Nature
It is shown that in a quantum theory over a Galois field, Dirac's famous result about antiparticles is generalized such that a particle and its antiparticle are combined already at the level of irreducible representations of the symmetry algebra, without assuming the existence of a local covariant equation. We argue that the very existence of antiparticles is a strong indication that nature is described by a finite field rather than by complex numbers.
[4434] vixra:1410.0049 [pdf]
Conductivity Equations Based on Rate Process Theory and Free Volume Concept for Addressing Low Temperature Conductive Behaviors like Superconductivity
New conduction equations are derived on the basis of Eyring's rate process theory and the free volume concept. The basic assumptions are that electrons traveling from one equilibrium position to another obey Eyring's rate process theory, and that the traveling distance is governed by the free volume available to each electron, assuming that electrons have a spherical physical shape with an imaginative effective radius. The obtained equations predict that superconductivity occurs only when electrons form certain structures of relatively small coordination number, like the Cooper pair, at low temperatures. If each electron has a large coordination number, such as 8 when electrons form the body-centered-cubic (bcc) lattice structure like a Wigner crystal, the predicted conductivity decreases rather than increases as temperatures approach zero. The electron condensation structures have a big impact on the conductivity. A sharp conductivity decrease at low temperatures, probably due to an Anderson transition, is predicted even when the Cooper pair is formed and the electrons can only travel short distances, while the Mott transition appears when crystalline structures like the Wigner crystal form. On the other hand, electron pairing, also called strong spin-spin coupling, is predicted to induce the Kondo effect when electrons are assumed to travel a very short distance. Anderson localization seems to have many similarities with the Kondo effect, such as electron pairing and short traveling distances of electrons at low temperatures. The Cooper pair that is the essence of BCS theory for superconductivity and the spin-spin coupling that is the cause of the Kondo effect seem to contradict each other, but they are seamlessly united in our conductivity equations.
Topological insulators become natural occurrences of our equations, as both the Kondo insulator and superconductivity share the same physical origin, the electron pairs; the electrons just travel different distances in the two cases. A material containing an element of high electro-negativity (or high ionization energy) and an element of low electro-negativity (or low ionization energy) may form a good topological insulator and superconductor. Any magnetic element, like iron, nickel, or cobalt, that has unpaired electrons and can induce the Kondo effect as a dopant could be a very good superconductor candidate once it is synthesized together with other proper elements of low electro-negativity (for example, forming pnictide superconductors). The numbers of both conduction and valence electrons and the volume of the material under investigation have positive impacts on the conductivity. Any method that increases the numbers of both conduction and valence electrons may move the superconductivity transition temperatures to higher regions. Any method that reduces the volume of the material, like external pressure, seems to lower transition temperatures, unless the applied pressure is so high that the electron density between the chemical bonds increases. The derived equations are in good agreement with currently observed experimental phenomena. The current work may shed light on the mechanisms of superconductivity, presenting clues on how to move the superconductivity transition temperatures to higher regions.
[4435] vixra:1410.0035 [pdf]
Efficient Linear Fusion of Partial Estimators
Many signal processing applications require performing statistical inference on large datasets, where computational and/or memory restrictions become an issue. In this big data setting, computing an exact global centralized estimator is often either unfeasible or impractical. Hence, several authors have considered distributed inference approaches, where the data are divided among multiple workers (cores, machines or a combination of both). The computations are then performed in parallel and the resulting partial estimators are finally combined to approximate the intractable global estimator. In this paper, we focus on the scenario where no communication exists among the workers, deriving efficient linear fusion rules for the combination of the distributed estimators. Both a constrained optimization perspective and a Bayesian approach (based on the Bernstein-von Mises theorem and the asymptotic normality of the estimators) are provided for the derivation of the proposed linear fusion rules. We concentrate on finding the minimum mean squared error (MMSE) global estimator, but the developed framework is very general and can be used to combine any type of unbiased partial estimators (not necessarily MMSE partial estimators). Numerical results show the good performance of the algorithms developed, both in problems where analytical expressions can be obtained for the partial estimators, and in a wireless sensor network localization problem where Monte Carlo methods are used to approximate the partial estimators.
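As a toy illustration of the no-communication fusion setting (a minimal sketch under Gaussian assumptions, not the paper's general rules), unbiased partial estimators can be fused with inverse-variance weights, which is the MMSE linear rule when the partial estimators are independent:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 3.0  # unknown parameter to estimate (ground truth for the demo)

# Three workers, each holding a disjoint chunk of the data.
chunks = [rng.normal(theta, 1.0, size=n) for n in (50, 200, 1000)]
estimates = np.array([c.mean() for c in chunks])      # unbiased partial estimators
variances = np.array([1.0 / len(c) for c in chunks])  # variance of each sample mean

# MMSE linear fusion for independent unbiased estimators:
# weights proportional to inverse variances, normalized to sum to one.
w = (1.0 / variances) / np.sum(1.0 / variances)
fused = float(w @ estimates)
fused_var = 1.0 / np.sum(1.0 / variances)  # never worse than the best worker
print(fused, fused_var)
```

The fused variance equals the harmonic combination of the partial variances, so fusion always improves on the single best worker; the paper's constrained-optimization and Bayesian derivations generalize this beyond the independent Gaussian toy case.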
[4436] vixra:1410.0029 [pdf]
Differential and Integral Calculus in Ultrapower Fields, Without the Transfer Principle of Nonstandard Analysis, or Any Topological Type Structures
It has long been overlooked that, quite easily, infinitely many {\it ultrapower} field extensions $\mathbb{F}_{\cal U}$ can be constructed for the usual field $\mathbb{R}$ of real numbers, using only elementary algebra. This allows simple and direct access to the benefit of both infinitely small and infinitely large scalars, {\it without} the considerable usual technical difficulties involved in setting up and then using the Transfer Principle in Nonstandard Analysis. A natural Differential and Integral Calculus - which extends the usual one on the field $\mathbb{R}$ - is set up in these fields $\mathbb{F}_{\cal U}$, without any use of the Transfer Principle in Nonstandard Analysis or of any topological type structure. Instead, in the case of the Riemann type integrals introduced, three simple and natural axioms in Set Theory are assumed. The case when these three axioms may be inconsistent with the Zermelo-Fraenkel Set Theory is discussed in section 5.
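For orientation, the elementary construction the abstract refers to can be compressed as follows (standard ultrapower algebra, stated here as background rather than as the paper's specific axioms):

```latex
% Fix a free ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and define
\[
  \mathbb{F}_{\mathcal{U}} \;=\; \mathbb{R}^{\mathbb{N}} / \!\sim_{\mathcal{U}},
  \qquad
  (x_n) \sim_{\mathcal{U}} (y_n)
  \;\Longleftrightarrow\;
  \{\, n \in \mathbb{N} : x_n = y_n \,\} \in \mathcal{U}.
\]
% Pointwise operations descend to the quotient; since $\mathcal{U}$ is an
% ultrafilter, every nonzero class is invertible, so $\mathbb{F}_{\mathcal{U}}$
% is a totally ordered field containing $\mathbb{R}$ via constant sequences.
% The class of $(1/n)_n$ is a nonzero infinitesimal, and the class of
% $(n)_n$ is infinitely large.
```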
[4437] vixra:1410.0014 [pdf]
Solar Flare Five-Day Predictions from Quantum Detectors of Dynamical Space Fractal Flow Turbulence: Gravitational Wave Diminution and Earth Climate Cooling
Space speed fluctuations, which have a 1/f spectrum, are shown to be the cause of solar flares. The direction and magnitude of the space flow have been detected by numerous different experimental techniques, and the direction is close to the normal to the plane of the ecliptic. Zener diode data show that the fluctuations in the space speed closely match the Sun's Solar Cycle 23 flare count, and reveal that major solar flares follow major space speed fluctuations by some 6 days. This implies that a warning period of some 5 days in predicting major solar flares is possible using such detectors. This has significant consequences for the ability to protect various spacecraft and Earth-located electrical systems from the subsequent arrival of plasma ejected by a solar flare. These space speed fluctuations are the actual gravitational waves, and have a significant magnitude. This discovery is a significant application of the dynamical space phenomenon and theory. We also show that space flow turbulence impacts the Earth's climate, as such turbulence can input energy into systems, which is the basis of the Zener diode quantum detector. Large-scale space fluctuations impact both the Sun and the Earth, and also explain temperature correlations with solar activity, although the Earth's temperatures are not caused by that solar activity. This implies that the Earth climate debate has been missing a key physical process. Observed diminishing gravitational waves imply a cooling epoch for the Earth for the next 30 years.
[4438] vixra:1410.0005 [pdf]
Integral Calculus in Ultrapower Fields
Infinitely many {\it ultrapower} field extensions $\mathbb{F}_{\cal U}$ are constructed for the usual field $\mathbb{R}$ of real numbers by using only elementary algebra, thus allowing for the benefit of both infinitely small and infinitely large scalars, and doing so {\it without} the considerable usual technical difficulties involved in setting up the Transfer Principle in Nonstandard Analysis. A natural Integral Calculus - which extends the usual one on the field $\mathbb{R}$ - is set up in these fields $\mathbb{F}_{\cal U}$. A separate paper presents the same for the Differential Calculus.
[4439] vixra:1410.0001 [pdf]
Are Tachyons Governed by an Upper Bound Uncertainty Principle?
In an earlier reading, we argued from a physical and number theoretic standpoint that an upper bound speed limit such as the speed of light implies the existence of a lower limit to the duration of events in the Universe. Consequently, this leads to a minimum characteristic length separation for events in the Universe. Herein, we argue that matter and energy that is in compliance with and in observance of the upper bound light speed limit is governed by the lower limiting uncertainty principle of Professor Werner Heisenberg. If there is a lower limiting uncertainty principle, we ask the natural and logical question 'What would an upper bound uncertainty principle mean?' We come to the interesting conclusion that an upper bound uncertainty principle must apply to particles that travel at speeds, equal to, or greater than the speed of light. Further, we argue that consequently, a tachyon must exist in a permanent state of confinement and must be intrinsically and inherently unstable in which event it oscillates between different states. These two requirements place quarks in a position to be good candidates for tachyons.
[4440] vixra:1409.0208 [pdf]
On the Preponderance of Matter Over Antimatter (Symmetry Properties of the Curved Spacetime Dirac Equations)
Quantum Electrodynamics (QED) is built on the original Dirac equation, an equation that exhibits perfect symmetry in that it is symmetric under charge conjugation (C), space (P) and time (T) reversal and any combination of these discrete symmetries. We demonstrate herein that the proposed Lorentz invariant Curved Spacetime Dirac Equations (CSTD-Equations), while they obey CPT- and PT-Symmetries, readily violate C, P, T, CP and CT-Symmetries. Realizing this violation, namely the C-Violation, we take this golden opportunity to suggest that the Curved Spacetime Dirac Equations may help in solving the long-standing riddle and mystery of the preponderance of matter over antimatter. We come to the tentative conclusion that if these CSTD-Equations are to explain the preponderance of matter over antimatter, then photons are to be thought of as described by the flat version of this set of equations, while ordinary matter is to be explained by the positive and negatively curved spacetime versions of this same set of equations.
[4441] vixra:1409.0199 [pdf]
Does Constant Torque Induce Phase Transition Increasing the Value of Planck Constant?
The hierarchy of phases with an effective value of Planck constant coming as an integer multiple of the ordinary Planck constant, and interpreted as dark matter, is crucial in the TGD inspired model of living matter. The challenge is to identify physical mechanisms forcing the increase of the effective Planck constant h<sub>eff</sub> (whether to call it effective or not is to some extent a matter of taste). Work with certain potential applications of TGD led to the discovery of a new mechanism possibly achieving this. The method would be simple: apply a constant torque to a rotating system. I will leave it to the reader to rediscover how this can be achieved. The importance of the result is that it provides strong mathematical motivations for zero energy ontology (ZEO), causal diamonds (CDs), and the hierarchy of (effective) Planck constants. Quite generally, the results apply to systems with an external energy feed inducing a generalized force acting in some compact degrees of freedom. Living matter represents a basic example of this kind of system. Amazingly, the ATP synthase enzyme contains a generator with a rotating shaft: a possible TGD based interpretation is that the associated torque forces the generation of large h<sub>eff</sub> phases.
[4442] vixra:1409.0198 [pdf]
Scattering Amplitudes in Positive Grassmannian: TGD Perspective
A generalization of the twistor Grassmannian approach defines a very promising vision for the construction of generalized Feynman diagrams. Since particles are replaced with 3-D surfaces, and since string-like objects emerge naturally in the TGD framework, one expects that scattering amplitudes define a generalization of twistor Grassmannian amplitudes with a generalized Yangian symmetry. The realization of this approach has however been plagued by long-standing problems. SUSY in some form seems to be strongly supported by theoretical elegance, and TGD indeed suggests a good candidate for a broken SUSY realized in terms of a covariantly constant right-handed neutrino and not requiring Majorana spinors. Separate conservation of baryon and lepton number implies that super-generators carry quark or lepton number. This has been the main obstacle in attempts to construct stringy amplitudes. In this article it is found that these obstacles can be overcome and that the stringy approach is forced both by the TGD view of physical particles and by the cancellation of UV and IR divergences. The planarity restriction also emerges automatically in the stringy approach. An absolutely essential ingredient is that fundamental fermions can be regarded as massless on-shell fermions having non-physical helicity, with the propagator replaced by its inverse: this representation follows by performing the integration over the virtual four-momentum squared using residue calculus.
[4443] vixra:1409.0197 [pdf]
Implications of Strong Gravimagnetism for TGD Inspired Quantum Biology
Physicists M. Tajmar and C. J. Matos and their collaborators working in ESA (European Space Agency) have made an amazing claim of having detected strong gravimagnetism, with a gravimagnetic field having a magnitude about 20 orders of magnitude higher than predicted by General Relativity.
Tajmar et al. have proposed the gravimagnetic effect as an explanation of an anomaly related to superconductors. The measured value of the mass of the Cooper pair is slightly larger than the sum of the masses, whereas theory predicts that it should be smaller. The explanation would be that the actual Thomson field is larger than it should be because of a gravimagnetic contribution to the quantization rule used to deduce the value of the Thomson field. The required value of the gravimagnetic Thomson field is however 28 orders of magnitude larger than General Relativity suggests. The TGD inspired proposal is based on the notion of a gravitational Planck constant assignable to the flux tubes connecting massive objects. It turns out that the TGD estimate for the Thomson field has the correct order of magnitude. The identification h<sub>eff</sub> = h<sub>gr</sub> at particle physics and atomic length scales emerges naturally.
A vision of the fundamental role of quantum gravitation in living matter emerges. The earlier hypothesis that dark EEG photons decay to biophotons with energies in the visible and ultraviolet range receives strong quantitative support. A mechanism for how magnetic bodies couple to bio-chemistry also emerges. The vision conforms with Penrose's intuitions about the role of quantum gravity in biology.
[4444] vixra:1409.0196 [pdf]
Further Progress Concerning the Relationship Between TGD and GRT and Kähler Dirac Action
The earlier attempts to understand the relationship between TGD and GRT have been in terms of solutions of Einstein's equations imbeddable to M<sup>4</sup> ×CP<sub>2</sub>, instead of introducing GRT space-time as a fictive notion naturally emerging from TGD as a simplified concept replacing many-sheeted space-time. This also resolves the worries related to the Equivalence Principle. TGD can be seen as a "microscopic" theory behind GRT, and the understanding of the microscopic elements becomes the main focus of theoretical, and hopefully some day also experimental, work.
The understanding of Kähler Dirac action has been a second long-term project. How can one guarantee that em charge is well-defined for the spinor modes when classical W fields are present? How to avoid large parity breaking effects due to classical Z<sup>0</sup> fields? How to avoid the problems due to the fact that color rotations induce a vielbein rotation of weak fields? The common answer to these questions is the restriction of the modes of the induced spinor field to 2-D string world sheets (and possibly also partonic 2-surfaces) such that the induced weak fields vanish. This makes the string picture a part of TGD.
[4445] vixra:1409.0195 [pdf]
Recent View About Kähler Geometry and Spin Structure of "World of Classical Worlds"
The construction of the Kähler geometry of WCW ("world of classical worlds") is fundamental to the TGD program. I ended up with the idea about physics as WCW geometry around 1985 and made a breakthrough around 1990, when I realized that the Kähler function for WCW could correspond to the Kähler action for its preferred extremals defining the analogs of Bohr orbits, so that classical theory with Bohr rules would become an exact part of quantum theory and the path integral would be replaced with a genuine integral over WCW. The motivating construction was that for loop spaces, leading to a unique Kähler geometry. The geometry for the space of 3-D objects is even more complex than that for loops, and the vision still is that the geometry of WCW is fixed uniquely by the mere existence of the Riemann connection. </p><p> This article represents the updated version of the construction providing a solution to the problems of the previous construction. The basic formulas remain as such, but the expressions for the WCW super-Hamiltonians defining WCW Hamiltonians (and matrix elements of the WCW metric) as their anticommutators are replaced with those following from the dynamics of the modified Dirac action.
[4446] vixra:1409.0194 [pdf]
TGD Variant of the Model of Widom and Larsen for Cold Fusion
Widom and Larsen (for articles see the Widom Larsen LENR Theory Portal) have proposed a theory of cold fusion (LENR), which claims to predict correctly the various isotope ratios observed in cold fusion and the accompanying nuclear transmutations. The ability to predict correctly the isotope ratios suggests that the model is on the right track. A further finding is that the predicted isotope ratios correspond to those appearing in Nature, which suggests that LENR is perhaps more important than hot fusion in the solar interior as far as nuclear abundances are concerned. TGD leads to the same proposal, and the Lithium anomaly could be understood as one implication of LENR. The basic step of the reaction would rely on weak interactions: the proton of the hydrogen atom would transform to a neutron by capturing the electron and would therefore overcome the Coulomb barrier. This transformation is extremely slow unless the value of Planck constant is so large that weak bosons have Compton lengths of the order of the atomic length scale.
[4447] vixra:1409.0193 [pdf]
Morphogenesis, Morphostasis, and Learning in TGD Framework
According to Michael Levin, the basic challenge concerning morphogenesis and morphostasis is to understand how the shape of the organism is generated and how it is preserved. The standard local approach based on the belief in genetic determinism does not allow one to answer these questions satisfactorily. </p><p> The first approach to this problem relies on the self-organization paradigm, in which the local dynamics of cells leads to large scale structures as self-organization patterns. In TGD framework 3-D self-organization is replaced with 4-D self-organization (the failure of strict determinism of the classical dynamics is essential, motivating zero energy ontology (ZEO)). One can speak about 4-D healing: expressing it in a somewhat sloppy manner, the space-time surface serving as a classical correlate for the patient is as a whole replaced with the healed one: after the 4-D healing process the organism was never ill in the geometrical sense! Note that in the quantal formulation one must speak of a quantum superposition of space-time surfaces. </p><p> The second approach could be seen as computational. The basic idea is that the process is guided by a template of the target state and that morphogenesis and healing are computational processes. What Levin calls morphogenetic fields would define this template. It is known that organisms display a kind of coordinate grid providing positional information that allows cells to "decide" about the profile of genetic expression. In TGD framework the magnetic body, forming a coordinate grid of flux tubes, is a natural candidate for this structure. The grids would also realize topological quantum computation (TQC), with the basic computational operations realized at the nodes of flux tubes, to which it is natural to associate some biological sub-structures. 
</p><p> The assumption about a final goal defining a template can be argued to be too strong: a much weaker principle defining a local direction of dynamics and leading automatically to the final state, as something analogous to a free energy minimum in thermodynamics, might be enough. Unfortunately, the second law is the only such principle that standard physics can offer. Negentropy Maximization Principle (NMP) provides the desired principle in TGD framework. Also the approach of the WCW spinor field to the maximum of the vacuum functional (or equivalently that of Kähler function) gives a goal for the dynamics after a perturbation of the organism causing "trauma". If Kähler function is a classical space-time correlate for entanglement negentropy, these two views are equivalent. </p><p> TGD thus suggests an approach which could be seen as a hybrid of the approaches based on self-organization and computationalism. The magnetic body becomes the key notion and codes also for learned behaviors as TQC programs coded by the braiding of flux tubes. The replication of the magnetic body means also the replication of the programs behind behavioral patterns (often somewhat misleadingly regarded as synonymous with long term memories): both structure and function are replicated. This hypothesis survives the killer tests provided by the strange findings about planaria cut into two and developing a new head or tail while retaining their learned behaviors: the findings indicate that the behavioral programs are preserved although the planaria develops a new brain.
[4448] vixra:1409.0192 [pdf]
Bicep2 Might Have Detected Gravitational Waves
The BICEP2 team has announced a detection of gravitational waves via their effects on the polarization spectrum of the cosmic microwave background (CMB). The findings - if true - have powerful implications for cosmological models. In this article the findings are discussed in the framework of TGD based cosmology, in which the flatness of 3-space is interpreted in terms of quantum criticality rather than inflation. The key role is played by gradually thickening cosmic strings carrying magnetic monopole flux, with dark energy as magnetic energy and dark matter as large h<sub>eff</sub> phases at cosmic strings. Very thin cosmic strings dominate the cosmology before the emergence of space-time as we know it, and quantum criticality is associated with the phase transition between these two phases. Later, cosmic strings serve as seeds of various cosmological structures by decaying partially into ordinary matter, somewhat like inflaton fields in inflationary cosmology. Cosmic strings also explain the presence of magnetic fields in the cosmos, difficult to understand in the standard approach. The crucial point is that - in contrast to ordinary magnetic fields - monopole fluxes do not require for their creation any currents coherent over long length scales.
[4449] vixra:1409.0191 [pdf]
Pollack's Findings About Fourth Phase of Water: TGD View
The discovery of a negatively charged exclusion zone formed in water bounded by a gel phase has led Pollack to propose the notion of a gel-like fourth phase of water. In this article this notion is discussed in TGD framework. The proposal is that the fourth phase corresponds to negatively charged regions - exclusion zones - with size up to 100-200 microns, generated when energy is fed into the water - say as radiation, in particular solar radiation. The stoichiometry of the exclusion zone is H<sub>1.5</sub>O and can be understood if every fourth proton is a dark proton residing at the flux tubes of the magnetic body assignable to the exclusion zone and outside it. This leads to a model for the prebiotic cell as an exclusion zone. Dark protons are proposed to form dark nuclei whose states can be grouped to groups corresponding to DNA, RNA, amino-acids, and tRNA and for which the vertebrate genetic code is realized in a natural manner. The voltage associated with the system defines the analog of membrane potential, and serves as a source of metabolic energy as in the case of ordinary metabolism. The energy is liberated in a reverse phase transition in which dark protons transform to ordinary ones. Dark proton strings serve as analogs of basic biopolymers and one can imagine an analog of bio-catalysis with enzymes replaced with their dark analogs. The recent discovery that metabolic cycles emerge spontaneously in the absence of cells supports this view.
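The H<sub>1.5</sub>O stoichiometry claim reduces to simple counting, which can be sketched as follows (an illustrative check only; the variable names are mine, not from the article):

```python
# Minimal arithmetic check of the H1.5O stoichiometry claim: if every fourth
# proton in a block of water molecules becomes dark (invisible to ordinary
# chemistry), the visible hydrogen-to-oxygen ratio drops from 2 to 1.5.
n_molecules = 4            # smallest block in which "every fourth proton" is whole
protons = 2 * n_molecules  # two hydrogens per H2O
dark = protons // 4        # every fourth proton resides at the magnetic body
visible_ratio = (protons - dark) / n_molecules
print(visible_ratio)  # 1.5, i.e. the exclusion-zone stoichiometry H1.5O
```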
[4450] vixra:1409.0190 [pdf]
What is EEG made of?
A model for EEG as a communication tool of the magnetic body is developed. The basic assumption is that communications from the cell membrane occur by Josephson radiation inducing the analogs of cyclotron transitions: this leads to resonance conditions forcing the Josephson frequencies to be equal to cyclotron frequencies in the simplest situation. Music metaphor - in particular the metaphor "right brain sings, left brain talks" - allows one to further develop the earlier model. One must generalize the original assumption concerning the allowed values of the magnetic field $B_{end}$ at the flux tubes of the magnetic body: this generalization was forced already by the quantum model of hearing. </p><p> The model leads to a detailed identification of the sub-bands of EEG in terms of cyclotron frequencies assignable to bosonic ions. One can understand the basic features of the various EEG bands: why conscious experiences possibly occurring during sleep are not remembered, the four stages of sleep, why beta amplitudes are low and tend to be chaotic, and the origin of the resonance frequencies of EEG. Also a model for how Schumann resonances could affect consciousness emerges. </p><p> Music metaphor allows one to develop in more detail the earlier proposal that nerve pulse patterns define a language with "phonemes" having a duration of 0.1 seconds and obeying genetic code with 6 bits. Also the "right brain sings" metaphor can be given a detailed quantitative content in terms of the analog of a music scale associated with the resting potential.
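The resonance condition above rests on the textbook cyclotron formula f<sub>c</sub> = qB/(2πm). A minimal sketch, assuming the value B<sub>end</sub> = 0.2 Gauss often used in this model (the field value and the choice of Ca<sup>2+</sup> as example ion are assumptions of the illustration, not results of the abstract):

```python
import math

# Cyclotron frequency f_c = q*B / (2*pi*m). B_END = 0.2 Gauss (2e-5 T) is the
# model's assumed "endogenous" magnetic field, not a laboratory value.
E_CHARGE = 1.602176634e-19    # elementary charge, C
AMU = 1.66053906660e-27       # atomic mass unit, kg
B_END = 2e-5                  # T (= 0.2 Gauss)

def cyclotron_hz(charge_units, mass_amu, b_tesla=B_END):
    return charge_units * E_CHARGE * b_tesla / (2 * math.pi * mass_amu * AMU)

# Ca(2+), A ~ 40: the frequency lands inside the EEG range (about 15 Hz)
print(round(cyclotron_hz(2, 40.078), 1))
```

Other bosonic ions slot into other EEG sub-bands in the same way, which is how the identification of sub-bands with cyclotron frequencies is carried out.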
[4451] vixra:1409.0189 [pdf]
Negentropic Entanglement, NMP, Braiding and Topological Quantum Computation
Negentropic entanglement, for which the number theoretic entropy characterized by a p-adic prime is negative so that entanglement carries information, is in a key role in TGD inspired theory of consciousness and quantum biology. </p><p> <OL> <LI> The key feature of negentropic entanglement is that the density matrix is proportional to the unit matrix, so that the assumption that state function reduction corresponds to the measurement of the density matrix does not imply state function reduction to a one-dimensional sub-space. This special kind of degenerate density matrix emerges naturally for the hierarchy h<sub>eff</sub>=nh interpreted in terms of a hierarchy of dark matter phases. I have earlier considered explicit realizations of negentropic entanglement assuming that E is invariant under the group of unitary or orthogonal transformations (also subgroups of the unitary group can be considered - say the symplectic group). One can however consider much more general options, and this leads to a connection with topological quantum computation (TQC). <LI> An entanglement matrix E equal to a 1/n<sup>1/2</sup> factor times a unitary matrix U (as a special case an orthogonal matrix O) defines a density matrix given by ρ=UU<sup>†</sup>/n= Id<sub>n</sub>/n, which is group invariant. One has NE respected by state function reduction if NMP is assumed. This would give a huge number of negentropically entangled states providing a representation for some unitary group or its subgroup (such as the symplectic group). In principle any unitary representation of any Lie group would allow a representation in terms of NE. <LI> In the physics as generalized number theory vision, a natural condition is that the matrix elements of E belong to the algebraic extension of p-adic numbers used, so that discrete algebraic subgroups of the unitary or orthogonal group are selected. 
This realizes evolutionary hierarchy as a hierarchy of p-adic number fields and their algebraic extensions, and one can imagine that the evolution of cognition proceeds by the generation of negentropically entangled systems of increasing algebraic dimension, reflecting itself as an increase of the largest prime power dividing n and defining the p-adic prime in question. <LI> One fascinating implication is the ability of the TGD Universe to emulate itself like a Turing machine: the unitary S-matrix codes for scattering amplitudes and therefore for physics, and a negentropically entangled subsystem could represent a sub-matrix of the S-matrix as rules representing "the laws of physics" in the approximation that the world corresponds to an n-dimensional Hilbert space. Also the limit n→ ∞ makes sense, especially so in the p-adic context where real infinity can correspond to a finite number in the sense of the p-adic norm. Here also dimensions n given as products of powers of infinite primes can be formally considered. </OL>
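The linear-algebra claim in item 2 - that E = U/n<sup>1/2</sup> with U unitary yields the maximally degenerate density matrix ρ = Id<sub>n</sub>/n - can be checked numerically (a sketch with my own variable names; the number theoretic entropy itself is not computed here, only the ordinary entanglement entropy):

```python
import numpy as np

# Item 2 of the abstract: E = U/sqrt(n), U unitary, gives rho = E E^dagger = Id/n.
rng = np.random.default_rng(0)
n = 4
# Random unitary U via QR decomposition of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
E = Q / np.sqrt(n)
rho = E @ E.conj().T
assert np.allclose(rho, np.eye(n) / n)   # density matrix proportional to identity

# The ordinary entanglement entropy is then maximal, log(n); in the abstract's
# setting the number theoretic entropy of such a state can instead be negative.
evals = np.linalg.eigvalsh(rho)
entropy = -np.sum(evals * np.log(evals))
print(np.isclose(entropy, np.log(n)))
```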
[4452] vixra:1409.0188 [pdf]
General Ideas About Octonions, Quaternions, and Twistors
An updated view about M<sup>8</sup>-H duality is discussed. M<sup>8</sup>-H duality allows one to deduce M<sup>4</sup>× CP<sub>2</sub> via number theoretical compactification. One important correction is that octonionic spinor structure makes sense only for M<sup>8</sup>, whereas for M<sup>4</sup>× CP<sub>2</sub> complexified quaternions characterize the spinor structure. </p><p> Octonions, quaternions, quaternionic space-time surfaces, octonionic spinors and twistors and twistor spaces are highly relevant for quantum TGD. In the following some general observations distilled during years are summarized. </p><p> There is a beautiful pattern present suggesting that H=M<sup>4</sup>× CP<sub>2</sub> is completely unique on number theoretical grounds. Consider only the following facts. M<sup>4</sup> and CP<sub>2</sub> are the unique 4-D spaces allowing a twistor space with Kähler structure. The octonionic projective space OP<sub>2</sub> appears as the octonionic twistor space (there are no higher-dimensional octonionic projective spaces). Octotwistors generalise the twistorial construction from M<sup>4</sup> to M<sup>8</sup>, and octonionic gamma matrices make sense also for H, with the quaternionicity condition reducing OP<sub>2</sub> to the 12-D G<sub>2</sub>/U(1)× U(1) having the same dimension as the twistor space CP<sub>3</sub>× SU(3)/U(1)× U(1) of H assignable to the complexified quaternionic representation of gamma matrices. </p><p> A further fascinating structure related to octo-twistors is the non-associative analog of a Lie group defined by automorphisms by octonionic imaginary units: this group is topologically a six-sphere. Also the analogy of the quaternionicity of preferred extremals in TGD with the Majorana condition central in super string models is very thought-provoking. All this suggests that associativity could indeed define the basic dynamical principle of TGD. 
</p><p> Number theoretical vision about quantum TGD involves both p-adic number fields and classical number fields and the challenge is to unify these approaches. The challenge is non-trivial since the p-adic variants of quaternions and octonions are not number fields without additional conditions. The key idea is that TGD reduces to the representations of Galois group of algebraic numbers realized in the spaces of octonionic and quaternionic adeles generalizing the ordinary adeles as Cartesian products of all number fields: this picture relates closely to Langlands program. Associativity would force sub-algebras of the octonionic adeles defining 4-D surfaces in the space of octonionic adeles so that 4-D space-time would emerge naturally. M<sup>8</sup>-H correspondence in turn would map the space-time surface in M<sup>8</sup> to M<sup>4</sup>× CP<sub>2</sub>.
[4453] vixra:1409.0187 [pdf]
Why TGD and What TGD Is?
This piece of text was written as an attempt to provide a popular summary of TGD. This is of course mission impossible, since TGD is something at the top of centuries of evolution which has led from Newton to the standard model. This means that there is a background of highly refined conceptual thinking about the Universe, so that even the best computer graphics and animations fail to help. One can still try to create some inspiring impressions at least. This chapter approaches the challenge by answering the most frequently asked questions. Why TGD? How could TGD help to solve the problems of present-day theoretical physics? What are the basic principles of TGD? What are the basic guidelines in the construction of TGD? </p><p> These are examples of the kind of questions which I try to answer using the only language that I can talk. This language is a dialect of the language used by elementary particle physicists, quantum field theorists, and other people applying modern physics. At the level of practice TGD involves technically heavy mathematics, but since it relies on very beautiful and simple basic concepts, one can do with a minimum of formulas, and the reader can always go to Wikipedia if it seems that more details are needed. I hope that the reader could catch the basic principles and concepts: technical details are not important. And I almost forgot: problems! TGD itself and almost every new idea in the development of TGD has been inspired by a problem.
[4454] vixra:1409.0186 [pdf]
The Notion of Four-Momentum in TGD
One manner to see TGD is as a solution of the energy problem of General Relativity in terms of sub-manifold gravity. The translations now act as translations of the 8-D imbedding space M<sup>4</sup>×CP<sub>2</sub> rather than in space-time itself, and four-momentum can be identified as a Noether charge. The detailed realization of this vision however involves several conceptual delicacies. What does Equivalence Principle mean in this framework: the equivalence of gravitational and inertial momenta, or just Einstein's equations or their generalization? What is the precise definition of the inertial and gravitational four-momenta? What does quantum classical correspondence mean and could Equivalence Principle reduce to it? p-Adic mass calculations and the notion of generalised conformal invariance provide strong constraints on the attempts to answer these questions. This article provides the most recent view about the most plausible answers to these questions. Twistor Grassmann approach relies on the Yangian variant of 4-D conformal symmetry, generalizing in TGD framework to Yangian variants of huge conformal symmetry algebras due to the effective 2-dimensionality of light-like 3-surfaces. This also suggests a generalization of four-momentum bringing in multilocal contributions analogous to interaction energy, and also this is discussed in some detail.
[4455] vixra:1409.0185 [pdf]
TGD View About Homeopathy, Water Memory, and Evolution of Immune System
This article represents a brief sketch of the TGD based model of water memory and homeopathy as it is after two steps of progress. The first one was due to Pollack's findings about exclusion zones of water, explained in terms of the fourth phase of water. The second step of progress was inspired by an anomaly claimed by Tajmar et al and known as strong gravimagnetism. The attempt to understand the claim led to the h<sub>eff</sub> = h<sub>gr</sub>=h<sub>em</sub> hypothesis unifying two TGD views about the notion of hierarchy of Planck constants proposed to characterize the phases of dark matter. </p><p> In this framework the attempt to understand homeopathy leads to additional insights about water as a living system and about prebiotic life as being based on the dark realization of the genetic code in terms of dark proton strings, which are nothing but dark variants of nuclei. Formation of exclusion zones would be formation of primitive lifeforms and a primitive form of metabolism. Homeopathy could be seen as a manifestation of a fundamental form of immune system based on the recognition of invader molecules using the reconnection mechanism for magnetic flux tubes, on mimicking the braiding of the magnetic bodies of invader molecules using dark variants of proteins (later proteins), and eventually on representing them symbolically in terms of dark DNA (later ordinary DNA) coding for the dark proteins. Genetic code might have a geometric interpretation as coding for the 2-braiding of 3-D coordinate grids represented by magnetic flux tubes serving as the 4-D template coding not only for the structure of the organism but also for its functions as spatio-temporal patterns. Protein folding would represent a behavior of the protein, and DNA would code also for it.
[4456] vixra:1409.0184 [pdf]
Pythagoras, Music, Sacred Geometry, and Genetic Code
The idea that the 12-note scale could allow a mapping to a closed path covering all vertices of the icosahedron, which has 12 vertices, and not intersecting itself is attractive. Also the idea that the triangles defining the faces of the icosahedron could have an interpretation as 3-chords defining the notion of harmony for a given chord deserves study. The paths in question are known as Hamiltonian cycles and there are 1024 of them. These paths can be classified topologically by the numbers of triangles containing 0, 1, or 2 edges belonging to the cycle representing the scale. Each topology corresponds to a particular notion of harmony and there are several topological equivalence classes. </p><p> I have also played with the idea that the 20 amino-acids could somehow correspond to the 20 triangles of the icosahedron. The combination of this idea with the idea of mapping the 12-tone scale to a Hamiltonian cycle on the icosahedron leads to the question whether amino-acids could be assigned to a topological equivalence class of Hamiltonian cycles and whether the topological characteristics could correspond to physical properties of amino-acids. It turns out that the identification of the 3 basic polar amino-acids with triangles containing no edges of the scale path, the 7 polar and acidic polar amino-acids with those containing 2 edges of the scale path, and the 10 non-polar amino-acids with triangles containing 1 edge of the scale path would be consistent with the constraints on the Hamiltonian cycles. One could of course criticize the lumping of acidic polar and polar amino-acids into the same group. </p><p> The number of DNAs coding for a given amino-acid could also be seen as such a physical property. The model for dark nucleons leads to the vertebrate genetic code with the correct numbers of DNAs coding for amino-acids. It is however far from clear how to interpret DNAs geometrically, and the problem whether one could understand the genetic code geometrically remains open.
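Hamiltonian cycles on a 12-vertex graph are small enough to enumerate exhaustively by backtracking, so counts like the one quoted above can be checked directly. A sketch (the vertex labeling is my own choice, not from the paper; the code prints whatever count the search finds rather than asserting a value):

```python
# Backtracking enumeration of Hamiltonian cycles on the icosahedron graph.
# Illustrative labeling: 0 = top, 1..5 = upper ring, 6..10 = lower ring, 11 = bottom.
def icosahedron_adjacency():
    adj = {v: set() for v in range(12)}
    def add(a, b):
        adj[a].add(b); adj[b].add(a)
    for i in range(1, 6):
        add(0, i)                # top cap
        add(i, 1 + i % 5)        # upper ring
        add(5 + i, 6 + i % 5)    # lower ring
        add(11, 5 + i)           # bottom cap
        add(i, 5 + i)            # struts between the rings
        add(i, 6 + i % 5)
    return adj

def hamiltonian_cycle_count(adj):
    n = len(adj)
    found = 0
    def extend(path, visited):
        nonlocal found
        v = path[-1]
        if len(path) == n:
            if 0 in adj[v]:      # path closes back to the start: a cycle
                found += 1
            return
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                path.append(w)
                extend(path, visited)
                path.pop()
                visited.remove(w)
    extend([0], {0})
    return found // 2  # each undirected cycle is found in both directions

adj = icosahedron_adjacency()
print(hamiltonian_cycle_count(adj))
```

Classifying each cycle by how many faces carry 0, 1, or 2 of its edges would then reproduce the topological equivalence classes discussed in the abstract.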
[4457] vixra:1409.0183 [pdf]
New Results About Microtubules as Quantum Systems
The latest news in quantum biology is the observation by the group led by Anirban Bandyopadhyay of quantum vibrations at the microtubule scale - microtubule lengths vary up to 50 μm. If this observation can be replicated, one can speak about a breakthrough in quantum consciousness. </p><p> The findings reported in an earlier talk of Bandyopadhyay give support for the general TGD inspired view about topological quantum computation (TQC) and allow for a rather detailed model in the case of microtubules. The idea is that flux tubes form a 2-D coordinate grid consisting of parallel flux tubes in two different directions. Crossing points would be associated with tubulins, and the conformational state of a tubulin could define a bit coding whether the braid strands defining the coordinate lines are braided or not (swap or no swap). In this manner any bit pattern at the microtubule defines a particular TQC program. If also conformations are quantum superposed, one would have "quantum-quantum computation". It however seems that the conformational change is an irreversible chemical reaction, so that this option is not feasible. </p><p> The TGD inspired modification of the proposal in terms of flux tube coordinate grids, making possible TQC architectures with tubulin dimers defining bits that in turn define the TQC program, looks rather natural. The coordinate grids can be fixed on the basis of the experimental findings and there are 8 of them. The interpretation is in terms of different resolutions. The grids for A and B type lattices are related by a 2π twist for the second end of the basic 13-unit of the microtubule. An attractive interpretation for the resonance frequencies is in terms of phase transitions between A and B type lattices. If A type lattices can be generated only in phase transitions induced by an AC stimulus at resonance frequencies, one could understand their experimental absence, which is a strong objection against the Penrose-Hameroff model. 
</p><p> TGD suggests also a generalization of the very notion of TQC to 2-braid TQC with 2-D string world sheets becoming knotted in 4-D space-time. Now qubits (or their generalizations) could correspond to states of flux tubes defining braid strands as Penrose and Hameroff seem to suggest and the emergence of MTs could be seen as an evolutionary leap due to the emergence of a new abstraction level in cognitive processing.
[4458] vixra:1409.0182 [pdf]
General Model for Metabolism
The general strategy in attempts to understand metabolism is based on the assumption that a very large class of anomalous phenomena rely on the same basic mechanism. This includes life as a phenomenon, water memory and homeopathy, free energy phenomena involving over-unity phenomena related to the dissociation of water, lightning and ball lightning, anomalous effects associated with rotating magnetic systems, phenomena related to UFOs (light balls), even remote mental interactions. One must have a unified explanation for all these phenomena based on a real theory. Plasmoids are a TGD inspired proposal for prebiotic lifeforms, and the input from anomalies related to the electrolysis of water together with the TGD based proposal that sequences of dark protons define dark nuclei realizing the vertebrate genetic code leads to the vision that the biochemical metabolic machinery, including photosynthesis, has a simple analog realized in terms of "polymers" of water molecules with one dark proton, with protons bound to the sequence by color bonds. </p><p> The old view about the metabolic energy quanta as energies liberated as a particle "drops" to a larger space-time sheet is modified. Metabolic energy quanta are liberated when the space-time sheet at which the particles reside expands in a phase transition increasing its p-adic prime and reducing the value of Planck constant correspondingly, so that the net result is that the size of the space-time sheet remains the same. This condition implies a close relationship between the p-adic and dark matter hierarchies. This process is automatically coherent since all particles suffer the change simultaneously. It applies also to a situation in which the particles are in a magnetic field: in this case the scale of cyclotron energies changes, since the strength of the magnetic field is scaled down to guarantee the conservation of magnetic flux. 
This transition is not a cyclotron transition but liberates essentially the same energy as a coherent cyclotron transition, so that magnetic fields (their "motor actions") become essential players also in metabolic activities.
[4459] vixra:1409.0179 [pdf]
P-Adic Length Scale Hypothesis
The book is devoted to the applications of the p-adic length scale hypothesis and the dark matter hierarchy. <OL> <LI>The p-adic length scale hypothesis states that primes p≈ 2<sup>k</sup>, k integer, in particular prime, define preferred p-adic length scales. Physical arguments supporting this hypothesis are based on the generalization of Hawking's area law for blackhole entropy so that it applies in the case of elementary particles. <LI>A much deeper number theory based justification for this hypothesis is based on the generalization of the number concept fusing real number fields and p-adic number fields along common rationals or numbers in their non-trivial algebraic extensions. This approach also justifies the notion of multi-p-fractality and allows one to understand the scaling law in terms of simultaneous p≈ 2<sup>k</sup>- and 2-fractality. <LI>Certain anomalous empirical findings inspire in TGD framework the hypothesis about the existence of an entire hierarchy of phases of matter identifiable as dark matter. The levels of the dark matter hierarchy are labeled by the values of a dynamical quantized Planck constant. The justification for the hypothesis is provided by quantum classical correspondence and the fact that the sizes of space-time sheets identifiable as quantum coherence regions can be arbitrarily large. </OL> The organization of the book is the following. <OL> <LI>The first part of the book is devoted to the description of elementary particle massivation in terms of p-adic thermodynamics. <LI>The second part is devoted to the detailed calculation of the masses of elementary particles and hadrons, and to various new physics suggested or predicted by the resulting scenario. </OL>
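The extreme case of primes p≈ 2<sup>k</sup> with k prime is the Mersenne primes 2<sup>k</sup>−1, which play a distinguished role in the p-adic mass calculations (in TGD, M<sub>127</sub> = 2<sup>127</sup>−1 is assigned to the electron). The relevant exponents are easy to list with the Lucas-Lehmer test; a sketch (function names are mine):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer primality test for the Mersenne number M_p = 2**p - 1."""
    if p == 2:
        return True            # M_2 = 3 is prime; the test below needs odd p
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def small_primes(limit):
    """Sieve of Eratosthenes up to and including limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

mersenne_k = [k for k in small_primes(127) if lucas_lehmer(k)]
print(mersenne_k)  # [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
```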
[4460] vixra:1409.0178 [pdf]
Hyper-Finite Factors, P-Adic Length Scale Hypothesis, and Dark Matter Hierarchy
The book is devoted to hyper-finite factors and the hierarchy of Planck constants. <OL> <LI>Configuration space spinors indeed define a canonical example of a hyper-finite factor of type II<sub>1</sub>. The work with a TGD inspired model for quantum computation led to the realization that von Neumann algebras, in particular hyper-finite factors of type II<sub>1</sub>, could provide the mathematics needed to develop a more explicit view about the construction of M-matrix. This has turned out to be the case to the extent that a general master formula for M-matrix, with interactions described as a deformation of the ordinary tensor product to Connes tensor products, emerges. <LI>The idea about the hierarchy of Planck constants emerged from anomalies of biology and the strange finding that planetary orbits could be regarded as Bohr orbits but with a gigantic value of Planck constant. This led to the vision that dark matter corresponds to ordinary particles but with a non-standard value of Planck constant, and to a generalization of the 8-D imbedding space to a book like structure with pages partially characterized by the value of Planck constant. Using the intuition provided by the inclusions of hyper-finite factors of type II<sub>1</sub>, one ends up with a prediction for the spectrum of Planck constants associated with M<sup>4</sup> and CP<sub>2</sub> degrees of freedom. This inspires the proposal that dark matter could be in a quantum Hall like phase localized at light-like 3-surfaces with macroscopic size and behaving in many respects like black hole horizons. </OL>
[4461] vixra:1409.0177 [pdf]
Can Niels Bohr's Philosophy be Wrong?
I take an exciting excursion into a hypothesis that has arisen at the intersection of General Relativity and the quantum field description (in the form of the electromagnetic wave). Emeritus Dr. Cooperstock has derived the absence of energy harvesting from gravitational waves using the wave description of light; however, the latter must be seen as a photon gas. I show this necessity in the paper. The paper explains the known result of Dr. Cooperstock, thereby defending the previous authors (whom Dr. Cooperstock criticizes). I show that they do not contradict Dr. Cooperstock's result, but strongly support it. I also present my attempt to generalize Dr. Cooperstock's result to higher nonlinearity and higher precision.
[4462] vixra:1409.0165 [pdf]
Advances in Graph Algorithms
This is a course on advances in graph algorithms that we taught in Taiwan. The topics included are exact algorithms, graph classes, fixed-parameter algorithms, and graph decompositions.
[4463] vixra:1409.0162 [pdf]
The Proof for Non-existence of Magic Square of Squares in Order Three
This paper shows the non-existence of a magic square of squares of order three by means of two new tools: the first is representing three perfect squares in arithmetic progression by two numbers, and the second is realizing the impossibility of two similar equations holding for the same problem at the same time in different ways, with the variables of one relatively smaller than those of the other.
[4464] vixra:1409.0149 [pdf]
On the Luminosity Distance and the Hubble Constant (Revised)
By differentiating the standard formula for the luminosity distance with respect to time, we find that the equation is inconsistent with light propagation. Therefore, a new definition of the luminosity distance is provided for an expanding Universe. From supernovae observations, using this definition we find that the Hubble parameter is a constant of physics equal to H0 = 63.2 km/s/Mpc.
[4465] vixra:1409.0129 [pdf]
Sparse Representations and Their Applications in Digital Communication
Sparse representations are representations that account for most or all information of a signal with a linear combination of a small number of elementary signals called atoms. Often, the atoms are chosen from a so called over-complete dictionary. Formally, an over-complete dictionary is a collection of atoms such that the number of atoms exceeds the dimension of the signal space, so that any signal can be represented by more than one combination of different atoms. Sparseness is one of the reasons for the extensive use of popular transforms such as the Discrete Fourier Transform, the wavelet transform and the Singular Value Decomposition. The aim of these transforms is often to reveal certain structures of a signal and to represent these structures using a compact and sparse representation. Sparse representations have therefore increasingly become recognized as providing extremely high performance for applications as diverse as: noise reduction, compression, feature extraction, pattern classification and blind source separation. Sparse representation ideas also build the foundations of wavelet denoising and methods in pattern classification, such as in the Support Vector Machine and the Relevance Vector Machine, where sparsity can be directly related to learnability of an estimator. The technique of finding a representation with a small number of significant coefficients is often referred to as Sparse Coding. Decoding merely requires the summation of the relevant atoms, appropriately weighted. However, unlike a transform coder with its invertible transform, the generation of the sparse representation with an over-complete dictionary is non-trivial. Indeed, the general problem of finding a representation with the smallest number of atoms from an arbitrary dictionary has been shown to be NP-hard. This has led to considerable effort being put into the development of many sub-optimal schemes. 
These include algorithms that iteratively build up the signal approximation one coefficient at a time, e.g. Matching Pursuit, Orthogonal Matching Pursuit, and those that process all the coefficients simultaneously, e.g. Basis Pursuit, Basis Pursuit De-Noising and the Focal Underdetermined System Solver family of algorithms.
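The greedy pursuit schemes named in this abstract can be sketched compactly. Below is a minimal Orthogonal Matching Pursuit, assuming a dictionary with unit-norm columns; this is our own illustrative code, not taken from the paper.

```python
import numpy as np

def omp(D, x, n_atoms):
    """Orthogonal Matching Pursuit sketch.

    D: dictionary whose unit-norm columns are the atoms; x: signal.
    Returns (support, coeffs) for an n_atoms-sparse approximation.
    """
    residual = x.copy()
    support = []
    coeffs = None
    for _ in range(n_atoms):
        # Greedy step: pick the atom most correlated with the residual.
        correlations = D.T @ residual
        support.append(int(np.argmax(np.abs(correlations))))
        # Orthogonal step: least-squares re-fit over all selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    return support, coeffs
```

With a random overcomplete dictionary and an exactly sparse, noise-free signal, this routine typically recovers the true support; in general it is one of the sub-optimal schemes the abstract mentions, since exact sparsest recovery is NP-hard.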
[4466] vixra:1409.0127 [pdf]
Rates of Convergence of Lognormal Extremes Under Power Normalization
Let $\{X_n,n\geq1\}$ be an independent and identically distributed random sequence with common distribution $F$ obeying the lognormal distribution. In this paper, we obtain the exact uniform convergence rate of the distribution of maxima to its extreme value limit under power normalization.
[4467] vixra:1409.0125 [pdf]
Extreme Values of the Sequence of Independent and Identically Distributed Random Variables with Mixed Asymmetric Distributions
In this paper, we derive the extreme value distributions of independent identically distributed random variables with mixed distributions of two and finite components, which include generalized logistic, asymmetric Laplace and asymmetric normal distributions.
[4468] vixra:1409.0124 [pdf]
Homology Classes of Generalised Triangulations Made up of a Small Number of Simplexes
By means of a computer, all the possible homogeneous compact generalised triangulations made up of a small number of 3-simplexes (from 1 to 3) have been classified in homology classes. The analysis shows that, with a small number of simplexes, it is already possible to build quite a large number of separate topological spaces.
[4469] vixra:1409.0122 [pdf]
The Proof of Dirichlet's Assertion on Celestial Mechanics
Since the proof given by Wang Qiu-Dong of Dirichlet's assertion is based on successive approximations, objections to the assumptions underlying the global solution of this problem have to be raised.
[4470] vixra:1409.0120 [pdf]
Examples of Solving PDEs by Order Completion
So far, the order completion method for solving PDEs, introduced in 1990, can solve by far the most general linear and nonlinear systems of PDEs, with possible initial and/or boundary data. Examples of solving various PDEs with the order completion method are presented. Some of these PDEs do not have global solutions by any other known method, or are even proved not to have such global solutions. The presentation aims to be as summary, and in fact as sketchy, as possible, even if that may create some difficulty. However, nowadays, being subjected to an ever growing ``information overload", that approach may turn out to be the less bad of two alternatives. Details can be found in [1], while alternative longer "short presentations" are in [6-8].
[4471] vixra:1409.0119 [pdf]
Tail Behavior of the Generalized Exponential and Maxwell Distributions
Motivated by Finner et al. (2008), the asymptotic behavior of the probability density function (pdf) and the cumulative distribution function (cdf) of the generalized exponential and Maxwell distributions is studied. Specifically, we consider the asymptotic behavior of the ratio of the pdfs (cdfs) of the generalized exponential and Student's $t$-distributions (likewise for the Maxwell and Student's $t$-distributions) as the degrees of freedom parameter approaches infinity in an appropriate way. As by-products, Mills' ratios for the generalized exponential and Maxwell distributions are obtained. Moreover, we give some examples to indicate the application of our results in extreme value theory.
[4472] vixra:1409.0105 [pdf]
A Proof of Nonexistence of Green's Functions for the Maxwell Equations
Arguments in favor of the existence of Green's functions for all linear equations are analyzed. In the case of the equations for the electromagnetic field, these arguments have been widely used through formal considerations according to which the electromagnetic field equations are nothing but non-covariant scalar equations. We criticize these considerations and show that the justifications for applying the method of Green's functions to the equations of classical electrodynamics are invalid. Straightforward calculations are presented which show that in the case of dipole radiation the method gives incorrect results.
[4473] vixra:1409.0097 [pdf]
Electro-Osmosis With Corrected Solution of Poisson-Boltzmann Equation That Satisfies Charge Conservation Principle
We derive the electro-osmotic velocity profile in a micro-channel using a recently corrected charge density distribution within an electrolytic solution. The previous distribution did not respect the charge conservation principle while solving the Poisson-Boltzmann equation and needed modification; hence the velocity profile also needs modification, which we carry out here. The Helmholtz-Smoluchowski velocity scale is redefined so as to accommodate the Debye length parameter, unlike the old definition.
[4474] vixra:1409.0084 [pdf]
Extended Lorentz Transformations in Clifford Space Relativity Theory
Some novel physical consequences of the Extended Relativity Theory in $C$-spaces (Clifford spaces) were explored recently. In particular, generalized photon dispersion relations allowed for energy-dependent speeds of propagation while still $retaining$ the Lorentz symmetry in ordinary spacetimes, but breaking the $extended$ Lorentz symmetry in $C$-spaces. In this work we analyze in further detail the extended Lorentz transformations in Clifford Space and their physical implications. Based on the notion of ``extended events" one finds a very different physical explanation of the phenomenon of ``relativity of locality" than the one described by the Doubly Special Relativity (DSR) framework. A generalized Weyl-Heisenberg algebra, involving polyvector-valued coordinates and momenta operators, furnishes a realization of an extended Poincare algebra in $C$-spaces. In addition to the Planck constant $\hbar$, one finds that the commutator of the Clifford scalar components of the Weyl-Heisenberg algebra requires the introduction of a $dimensionless$ parameter which is expressed in terms of the ratio of two length scales : the Planck and Hubble scales. We finalize by discussing the concept of ``photons", null intervals, effective temporal variables and the addition/subtraction laws of generalized velocities in $C$-space.
[4475] vixra:1409.0071 [pdf]
Anima: Adaptive Personalized Software Keyboard
We present a Software Keyboard for smart touchscreen devices that learns its owner's unique dictionary in order to produce personalized typing predictions. The learning process is accelerated by analysing the user's past typed communication. Moreover, personal temporal user behaviour is captured and exploited in the prediction engine. Computational and storage issues are addressed by dynamically forgetting words that the user no longer types. A prototype implementation is available at Google Play Store.
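The forgetting mechanism this abstract describes can be illustrated with a tiny frequency model. This is entirely our own hypothetical construction; the class, method and parameter names are not from the paper. Counts decay each epoch and words falling below a threshold are pruned, bounding storage as an on-device keyboard requires.

```python
class AdaptiveDictionary:
    """Toy word-frequency store with exponential forgetting."""

    def __init__(self, decay=0.95, drop_below=0.01):
        self.decay = decay            # multiplicative forgetting per epoch
        self.drop_below = drop_below  # prune threshold
        self.freq = {}

    def observe(self, text):
        # Learn from the user's typed text by counting words.
        for word in text.lower().split():
            self.freq[word] = self.freq.get(word, 0.0) + 1.0

    def end_epoch(self):
        # Decay all counts; drop words that fell below the threshold.
        self.freq = {w: c * self.decay for w, c in self.freq.items()
                     if c * self.decay >= self.drop_below}

    def predict(self, prefix, k=3):
        # Rank completions of the prefix by (decayed) frequency.
        matches = [w for w in self.freq if w.startswith(prefix)]
        return sorted(matches, key=lambda w: -self.freq[w])[:k]
```

A word the user stops typing loses weight every epoch and is eventually deleted, which is one simple way to realize the "dynamically forgetting words" behaviour the abstract mentions.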
[4476] vixra:1409.0060 [pdf]
Paths of Least Time for Quantum Scale and Geometrical Interpretation of Light Diffraction
In this paper, a geometrical interpretation of light diffraction is given using an infinity of fluctuating geodesics at the small scale (the quantum scale) that represent paths of least time in a homogeneous space. Without using the wave theory, we provide a geometrical explanation of the deviation of light's overall direction from the rectilinear when light encounters edges, apertures and screens.
[4477] vixra:1409.0059 [pdf]
Thomas Precession and Acceleration
We determine nonlinear transformations between coordinate systems which are mutually in constant symmetrical accelerated motion. The maximal acceleration limit follows from its kinematical origin and is an analogue of the maximal velocity in special relativity. We derive the dependence of mass, length, time, the Doppler effect, the Cherenkov effect and the transition radiation angle on acceleration, as analogue phenomena to those of the special theory of relativity. The last application of our method is the Thomas precession under uniform acceleration, which can play a crucial role in modern particle physics and cosmology.
[4478] vixra:1409.0052 [pdf]
When π(N) Does not Divide N
Let $\pi(n)$ denote the prime-counting function and let $$f(n)=\left|\left\lfloor\log n-\lfloor\log n\rfloor-0.1\right\rfloor\right|\left\lfloor\frac{\left\lfloor n/\lfloor\log n-1\rfloor\right\rfloor\lfloor\log n-1\rfloor}{n}\right\rfloor\text{.}$$ In this paper we prove that if $n$ is an integer $\ge 60184$ and $f(n)=0$, then $\pi(n)$ does not divide $n$. We also show that if $n\ge 60184$ and $\pi(n)$ divides $n$, then $f(n)=1$. In addition, we prove that if $n\ge 60184$ and $n/\pi(n)$ is an integer, then $n$ is a multiple of $\lfloor\log n-1\rfloor$ located in the interval $[e^{\lfloor\log n-1\rfloor+1},e^{\lfloor\log n-1\rfloor+1.1}]$. This allows us to show that if $c$ is any fixed integer $\ge 12$, then in the interval $[e^c,e^{c+0.1}]$ there is always an integer $n$ such that $\pi(n)$ divides $n$. Let $S$ denote the sequence of integers generated by the function $d(n)=n/\pi(n)$ (where $n\in\mathbb{Z}$ and $n>1$) and let $S_k$ denote the $k$th term of sequence $S$. Here we ask the question whether there are infinitely many positive integers $k$ such that $S_k=S_{k+1}$.
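The definition of $f(n)$ translates directly into code. The following sketch, our own illustration rather than the paper's, checks the claimed implication numerically on the range $[60184, 10^5]$ using a prime sieve.

```python
import math

def f(n):
    """The indicator f(n) from the abstract, built from floor functions."""
    log_n = math.log(n)
    # 1 iff the fractional part of log n is below 0.1, else 0.
    frac_term = abs(math.floor(log_n - math.floor(log_n) - 0.1))
    # 1 iff floor(log n - 1) divides n, else 0.
    m = math.floor(log_n - 1)
    div_term = (n // m) * m // n
    return frac_term * div_term

# Sieve of Eratosthenes and the prime-counting function pi(n) up to 10^5.
LIMIT = 100_000
is_prime = bytearray([1]) * (LIMIT + 1)
is_prime[0] = is_prime[1] = 0
for p in range(2, int(LIMIT ** 0.5) + 1):
    if is_prime[p]:
        is_prime[p * p::p] = bytearray(len(range(p * p, LIMIT + 1, p)))
prime_pi = [0] * (LIMIT + 1)
count = 0
for n in range(2, LIMIT + 1):
    count += is_prime[n]
    prime_pi[n] = count
```

Looping over the range and asserting that $\pi(n) \mid n$ forces $f(n)=1$ reproduces the paper's claim empirically on this slice.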
[4479] vixra:1409.0051 [pdf]
On Multiple Try Schemes and the Particle Metropolis-Hastings Algorithm
Markov Chain Monte Carlo (MCMC) algorithms and Sequential Monte Carlo (SMC) methods (a.k.a., particle filters) are well-known Monte Carlo methodologies, widely used in different fields for Bayesian inference and stochastic optimization. The Multiple Try Metropolis (MTM) algorithm is an extension of the standard Metropolis-Hastings (MH) algorithm in which the next state of the chain is chosen among a set of candidates, according to certain weights. The Particle MH (PMH) algorithm is another advanced MCMC technique, specifically designed for scenarios where the multidimensional target density can be easily factorized as a product of conditional densities. PMH combines the SMC and MCMC approaches. Both MTM and PMH have been widely studied and applied in the literature. PMH variants have often been applied for the joint purpose of tracking dynamic variables and tuning constant parameters in a state space model. Furthermore, PMH can also be considered as an alternative particle smoothing method. In this work, we investigate connections, similarities and differences among MTM schemes and PMH methods. This study allows the design of novel efficient schemes for filtering and smoothing purposes in state space models. More specifically, one of them, called Particle Multiple Try Metropolis (P-MTM), obtains very promising results in different numerical simulations.
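As a concrete illustration of the MTM step described above (our own sketch in one dimension, not the authors' code), using a symmetric Gaussian proposal so that the selection weights reduce to the target density itself:

```python
import numpy as np

def mtm_step(x, log_target, rng, n_tries=5, scale=1.0):
    """One Multiple Try Metropolis transition for a 1-D target.

    Symmetric Gaussian proposal, weights w(y) = target(y).
    """
    # 1) Draw several candidates around the current state.
    ys = x + scale * rng.standard_normal(n_tries)
    wy = np.exp([log_target(y) for y in ys])
    # 2) Select one candidate with probability proportional to its weight.
    j = rng.choice(n_tries, p=wy / wy.sum())
    y = ys[j]
    # 3) Draw reference points around y; the current state fills the last slot.
    xs = y + scale * rng.standard_normal(n_tries - 1)
    wx = np.exp([log_target(z) for z in xs] + [log_target(x)])
    # 4) Generalized MH acceptance ratio over the two weight sums.
    if rng.random() < min(1.0, wy.sum() / wx.sum()):
        return y
    return x
```

Running this chain on a standard normal target recovers the correct mean and variance, which is a quick sanity check that the generalized acceptance ratio leaves the target invariant.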
[4480] vixra:1409.0030 [pdf]
On Application of Green Function Method to the Solution of 3D Incompressible Navier-Stokes Equations
The fluid equations, named after Claude-Louis Navier and George Gabriel Stokes, describe the motion of fluid substances. These equations arise from applying Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term - hence describing viscous flow. Due to the specific form of the NS equations, they can be transformed into fully/partially inhomogeneous parabolic differential equations: partial differential equations with respect to the space variables and a full differential equation with respect to the time variable, with a time-dependent inhomogeneous part. The velocity and outer force density components are expressed in the form of a curl in order to obtain solutions satisfying the continuity condition, which requires the divergence of the velocity field to vanish. Finally, the solution in 3D space for any shape of boundary is expressed in terms of the 3D Green function and the inverse Laplace transform accordingly.
[4481] vixra:1409.0028 [pdf]
The Proof for Non-existence of Perfect Cuboid
This paper shows the non-existence of a perfect cuboid by using two tools: the first is representing Pythagorean triplets by two numbers, and the second is realizing the impossibility of two similar equations holding for the same problem at the same time in different ways, with the variables of one relatively smaller than those of the other. When we express all the Pythagorean triplets in the perfect cuboid problem and rearrange them, we can get a single equation that expresses the perfect cuboid. Unfortunately, the perfect cuboid has more than two similar equations that express it, and they contradict one another.
[4482] vixra:1409.0024 [pdf]
New Version of General Relativity that Unifies Mass and Gravitation in a Common 4D Theory
Einstein's General Relativity does not explain recent enigmas of astrophysics such as dark energy or the accelerating universe. A thorough examination of the Einstein Field Equations (EFE) highlights four inconsistencies. Solving these inconsistencies and bringing the EFE closer to the Higgs mechanism fully explains the mass and gravity phenomena. The main interest of this study is to propose a formula for mass in 4D, m=f(x,y,z,t), that solves several basic enigmas fully demonstrated with mathematics but still not explained with logic and good sense, such as "How can spacetime be curved by mass?", "What is the nature of the gravitational force?", or "What is the mechanism of conversion of mass into energy (E=mc^2)?". This paper also solves the main enigmas of astrophysics and leads to interesting explanations of quantum mechanics.
[4483] vixra:1409.0006 [pdf]
What Mathematics Is The Most Fundamental?
Standard mathematics involves such notions as the infinitely small/large, continuity and standard division. This mathematics is usually treated as fundamental, while finite mathematics is treated as inferior. Standard mathematics has foundational problems (as follows, for example, from G\"{o}del's incompleteness theorems), but it is usually believed that this is less important than the fact that it describes many experimental data with high accuracy. We argue that the situation is the opposite: standard mathematics is only a degenerate case of the finite one in the formal limit when the characteristic of the ring or field used in finite mathematics goes to infinity. Therefore the foundational problems in standard mathematics are not fundamental.
[4484] vixra:1408.0240 [pdf]
Nonuniform Dust, Oppenheimer-Snyder, and a Singular Detour to Nonsingular Physics
Oppenheimer and Snyder treated in "comoving coordinates" a finite-radius ball of self-gravitationally contracting dust whose energy density is initially static; this is incisively dealt with by use of Tolman's rarely-cited closed-form "comoving" metric solutions for all spherically-symmetric nonuniform dust distributions. Unaware of Tolman's general solutions, Oppenheimer and Snyder assumed that the uniform space-filling dust solution applies without modification to the interior of their dust ball, which is validated by Tolman's solutions. We also find that all nonuniform dust solutions which adhere to the Oppenheimer-Snyder initial conditions have a time-cycloid character that strikingly parallels Newtonian particle gravitational infall, and as well renders those solutions periodically singular. The highly intricate, and thus easily misapprehended, singular transformation of the Oppenheimer-Snyder dust-ball solution from "comoving" to "standard" coordinates is re-derived in detail; it reveals the completely nonsingular nature of the dust-ball metric in "standard" coordinates. Thus the periodically-singular quasi-Newtonian character of the "comoving" dust-ball metric is an artifact of the perceptibly unphysical "synthetic" nature of "comoving coordinates", whose definition requires the clocks of an infinite number of observers.
[4485] vixra:1408.0203 [pdf]
Helical Model of the Electron
A semiclassical model of the electron is presented based on the Principle of Helical Motion ("A free electron always moves at the speed of light following a helical motion, with a constant radius, and with the direction of movement perpendicular to the rotation plane"). This model interprets the Zitterbewegung as a real motion that causes the rotation of the electron spin and its magnetic moment. Based on this model, the quantum magnetic flux and quantum Hall resistance are obtained as parameters of the electron, and special relativity theory is derived from the helical motion of the electron. Finally, a fix is proposed for De Broglie's wavelength that questions the very validity of the Dirac equation.
[4486] vixra:1408.0200 [pdf]
Sistemas C-Ortocéntricos Y Circunferencia de Feuerbach Para Cuadriláteros en Planos de Minkowski (C-Orthocentric Systems and Feuerbach Circle for Quadrangles in Minkowski Planes)
Se presenta el estudio de propiedades geométricas de un cuadrilátero inscrito en una circunferencia, en un plano de Minkowski. Se estudian las relaciones entre los cuatro triángulos formados por los vértices del cuadrilátero, sus antitriángulos y puntos de simetría, sus baricentros y otros puntos asociados con dichos triángulos, respectivamente. Se introduce la noción de anticuadrilátero y se extiende la noción de circunferencia de Feuerbach de un cuadrilátero, inscrito en una circunferencia, a planos de Minkowski en general. --- The study of the geometric properties of a quadrangle inscribed in a circle, in a Minkowski plane, is presented. We study the relations between the four triangles formed by the vertices of the quadrangle, their anti-triangles and points of symmetry, their barycenters, and other points associated with these triangles, respectively. The notion of an anti-quadrangle is introduced, and the notion of the Feuerbach circle of a quadrangle inscribed in a circle is extended to Minkowski planes in general.
[4487] vixra:1408.0196 [pdf]
Bootstrapping Generations
A supersymmetric version of Chew's "democratic bootstrap" argument predicts the existence of three generations of particles, with a quark, of type "up", more massive than the other five.
[4488] vixra:1408.0195 [pdf]
Comments on Recent Papers by S. Marshall Claiming Proofs of Several Conjectures in Number Theory
In a recent series of preprints S. Marshall claims to give proofs of several famous conjectures in number theory, among them the twin prime conjecture and Goldbach's conjecture. A claimed proof of Beal's conjecture would even imply an elementary proof of Fermat's Last Theorem. It is the purpose of this note to point out serious errors. In the opinion of this author, it is safe to say that the claims of the above-mentioned papers lack any basis.
[4489] vixra:1408.0191 [pdf]
C-Ortocentros Y Sistemas C-Ortocéntricos en Planos de Minkowski (C-Orthocenters and C-Orthocentric Systems in Minkowski Planes)
Usando la noción de C-ortocentro se extienden, a planos de Minkowski en general, nociones de la geometría clásica relacionadas con un triángulo, como por ejemplo: puntos de Euler, triángulo de Euler, puntos de Poncelet. Se muestran propiedades de estas nociones y sus relaciones con la circunferencia de Feuerbach. Se estudian sistemas C-ortocéntricos formados por puntos presentes en dichas nociones y se establecen relaciones con la ortogonalidad isósceles y cordal. Además, se prueba que la imagen homotética de un sistema C-ortocéntrico es un sistema C-ortocéntrico. --- Using the notion of the C-orthocenter, notions of classical Euclidean geometry related to a triangle, such as the Euler points, the Euler triangle, and the Poncelet points, are extended to Minkowski planes in general. Properties of these notions and their relations with the Feuerbach circle are shown. C-orthocentric systems formed by points arising in the above notions are studied, and relations with isosceles and chordal orthogonality are established. In addition, it is proved that the homothetic image of a C-orthocentric system is a C-orthocentric system.
[4490] vixra:1408.0186 [pdf]
Mystery of Missing Co-ions Solved
The presence of a charged wall distributes like charges (co-ions) and unlike charges (counter-ions) differently within an electrolytic solution. It is reasonable to expect that counter-ions are more populous near the wall, while co-ions are abundant away from it; experiments and simulations support this. An analytical formula for the net charge-density distribution, obtained by solving the Poisson-Boltzmann equation, has been widely used for almost a hundred years. However, the old formula shows excess counter-ions everywhere, cannot account for the missing co-ions satisfactorily, and clearly violates the charge conservation principle. Here, I correct the distribution formula from fundamental considerations. The old derivation expresses the charge-density distribution as a function of the electrostatic potential through the Boltzmann distribution, but missed a crucial point: the indefinite nature of the electrostatic potential makes the charge density indefinite as well. We must tune the electrostatic potential by adding a suitable constant until the integral of the charge density becomes consistent with the net charge present in the solution; the old theory did not do this, which I do here. This result demonstrates how to reconcile a definite quantity with an indefinite one, when they are related. I anticipate that this result will have far-reaching impacts on many fields, such as colloid science, electro-kinetics and bio-technology, that use the old theory.
[4491] vixra:1408.0178 [pdf]
Matter Creation
English (translation): The influence of electricity on gravity is shown here. This can be verified with a simple experiment: charging and discharging a capacitor. By modifying the equation employed, the mass number of atoms is partly justified. It is also verified that the pressure necessary to create atoms is that of planetary cores. Spanish (original): Se muestra aquí la influencia de la electricidad en la gravedad. Pudiendo constatarlo mediante un sencillo experimento, cargando y descargando un condensador. Modificando la ecuación empleada se consigue justificar, en parte, el número másico de los átomos. Y se comprueba, además, que la presión necesaria para crear átomos es la de los núcleos planetarios.
[4492] vixra:1408.0165 [pdf]
Structural Unification of Newtonian and Rational Gravity
[The paper is precisely the preprint viXra:1407.0070, which was withdrawn as suggested by a well-known mainstream journal. However, the editors simply played games with me. I am afraid I could not find a job throughout my life. I am over 53 years old and sleepless each day. Life as a creative scientist is very hard, and the academic circle has turned into a common profitable business. Galilei, Newton, Maxwell, Planck could not have had a job if they had lived today.] If there were no electromagnetic interaction in the Solar system from the beginning, then no heavy bodies like the Earth or the Sun would have existed, and elementary particles would miss collisions with each other due to the sparse population of particles. The solar system would have been a tiny ``spiral galaxy'' or ``elliptical galaxy'' because of the structuring nature of Newton's gravity.
[4493] vixra:1408.0162 [pdf]
The Higgs Vacuum.The Particles Of The Standard Model And The Compliance With the Energy-Momentum Equation. The Necessary Existence Of the Axion. Stop Quark Mass
In this paper it is demonstrated that all the masses of the standard model particles with nonzero rest mass, including the Higgs boson h, comply with the energy-momentum equation. The model Higgs vacuum corresponds to a virtual vacuum, for which the contribution of particles with zero rest mass (photon, gluon, graviton) is null. With the best values of the masses of the particles (Particle Data Group), it is found that the axion mass has to be extremely small. A completely new theoretical model of the Higgs vacuum is also presented, based on the lattice R8 and sixteen matrix elements of energy corresponding to the four solutions of the energy-momentum equation (an isomorphism with the four components of the scalar field, one complex doublet), factored into two factors with real and imaginary components of energy. From this new model of the Higgs vacuum the beta angle is obtained naturally. The stop quark mass is obtained by solving the equation for the one-loop radiative mass correction of the Higgs boson h. The exact determination of the beta angle allows calculating a stop mass of about 1916 GeV. Similarly, a mass of around 110 micro-eV is proposed for the axion.
[4494] vixra:1408.0153 [pdf]
Absence of Non-Trivial Supersymmetries and Grassmann Numbers in Physical State Spaces
This paper reviews the well-known fact that nilpotent Hermitian operators on physical state spaces are zero, thereby indicating that the supersymmetries and ``Grassmann numbers" are also zero on these spaces. Next, a positive definite inner product of a Grassmann algebra is demonstrated, constructed using a Hodge dual operator which is similar to that of differential forms. From this example, it is shown that the Hermitian conjugates of the basis do not anticommute with the basis and, therefore, the property that ``Grassmann numbers" commute with ``bosonic quantities" and anticommute with ``fermionic quantities", must be revised. Hence, the fundamental principles of supersymmetry must be called into question.
[4495] vixra:1408.0143 [pdf]
Sistemas C-Ortocéntricos, Bisectrices Y Euclidianidad en Planos de Minkowski (C-Orthocentric Systems, Angular Bisectors and Euclidianity in Minkowski Planes)
Mediante el estudio de ciertas propiedades geométricas de los sistemas C-ortocéntricos, relacionadas con las nociones de ortogonalidad (Birkhoff, isósceles, cordal), bisectriz (Busemann, Glogovskij) y línea soporte a una circunferencia, se muestran nueve caracterizaciones de euclidianidad para planos de Minkowski arbitrarios. Tres de estas generalizan caracterizaciones dadas para planos de Minkowski estrictamente convexos en [8, 9], y las otras seis son nuevos aportes sobre el tema. -- By studying certain geometric properties of C-orthocentric systems related to the notions of orthogonality (Birkhoff, isosceles, chordal), angular bisectors (Busemann, Glogovskij) and the support line to a circle, we show nine characterizations of Euclidicity for arbitrary Minkowski planes. Three of these generalize characterizations given for strictly convex Minkowski planes in [8, 9], and the other six are new contributions on the subject.
[4496] vixra:1408.0131 [pdf]
New Hamiltonian and Cooper Pair's Origin of Pseudogap and Colossal Magnetoresistance in Manganites
Based on the thirteen similarities in the lattice structure, electronic structure, and strong-correlation Hamiltonian between the CMR (colossal magnetoresistance) manganites and the high-Tc cuprates, this paper concludes that the Hamiltonians of the high-Tc cuprates and the CMR manganites are the same. Based on uniform and quantitative explanations of fifteen experimental facts, this paper concludes that the pseudogap and CMR of manganites are caused entirely by the formation of Cooper pairs consisting of two oxygen 2pσ holes in the MnO2 plane.
[4497] vixra:1408.0125 [pdf]
Reflection of Plane Electromagnetic Wave from Conducting Plane
The phenomenon of reflection from a conducting surface is considered in terms of exact solutions of the Maxwell equations. Matching of the waves and the current density at the plane is carried out. The amplitudes of the reflected and transmitted waves are found as functions of the incident wave and the conductivity of the plane. This is also done for a conducting plane lying between two distinct media. It is shown that, in the case of a conducting interface, waves with certain parameters (polarization, incidence angle and frequency) transform completely into waves of current density, whereas the amplitude of the reflected wave is equal to zero, which is equivalent to total absorption.
[4498] vixra:1408.0109 [pdf]
Historical Perspective on Energy Flow in Quantum Field Theory
The absence of impedances from the Standard Model is most remarkable. Impedance is a fundamental concept, of universal validity. Impedances govern the flow of energy. In particular, the coupling of the photon to matter happens in the near field. The absence of the photon near-field impedance in photon-electron interactions is the most basic and profound example of this remarkable circumstance, sitting unnoticed in the foundation of quantum electrodynamics. One cannot obtain a complete understanding of such interactions without examining the role of impedances. How this essential principle escaped notice in the development of quantum field theory is outlined, and consequences of its inclusion in our present understanding are explored.
[4499] vixra:1408.0106 [pdf]
Two-Level Mass Model of the Milky Way
The observed absence of a cuspy halo in the centers of galaxies implies a certain mechanism which scatters DM. We consider this mechanism to be the annihilation of galactic antineutrino DM with neutrino DM of stellar origin. The annihilation intensity increases towards the center with the increasing concentration of stars and density of DM; however, the scattering effect of annihilation begins to manifest mainly in the bulge. Based on such a hysteresis of the scattering effect, we construct a two-level mass model of the Milky Way, where the mass distribution is regulated at two levels of concentration by one and the same law of decreasing density, inversely proportional to the 2.5th power of the distance from the center. The first level starts from the surface of the central neutron collapsar and ends at the border of the bulge, and the second level extends from the bulge to the edge of the Galaxy.
[4500] vixra:1408.0093 [pdf]
Physical Solution of Poisson-Boltzmann Equation
We correct the solution of the Poisson-Boltzmann equation for the charge distribution in an electrolytic solution bounded by walls. By properly accounting for the charge conservation principle, we show that the gradients of the electrostatic potential at the different walls are strictly related and cannot be assigned independent values, unlike in the old theory. This clarifies some cause-and-effect ideas: the distribution turns out to be independent of the initial polarity of the walls; the accumulated charges in the liquid usually induce the opposite polarity on the wall surface, forming an `Electric Double Layer' (EDL), contrary to the common belief that a charged wall attracts counter-ions to form the EDL. Apart from the Debye length, the distribution depends only on the potential difference between the walls and the net charge present in the solution.
[4501] vixra:1408.0086 [pdf]
The Definition of Density in General Relativity
According to general relativity the geometry of space depends on the distribution of matter or energy fields. The relation between the locally defined geometry parameters and the volume elements depends on curvature. Thus integration of local properties like energy density, defined in the Euclidean tangent space, does not lead to correct integral data like total energy. To obtain integral conservation, a correction term must be added to account for the curvature of space. This correction term is the equivalent of potential energy in Newtonian gravitation. With this correction the formation of singularities by gravitational collapse no longer occurs, and the so-called dark energy finds its natural explanation as the potential energy of matter itself.
[4502] vixra:1408.0084 [pdf]
Analytic Functions for Clifford Algebras
Cauchy theory is applied and extended to n-dimensional functions in Clifford algebras, showing the existence of integrals that do not exist in Euclidean spaces. This celebrates the depth of Cauchy's lecture, held on the 22nd of August, 1814, 200 years ago, in times of bitter warfare. (I should like to recommend reading about his life, e.g. in Wikipedia.)
[4503] vixra:1408.0078 [pdf]
Special Relativity Fails to Resolve Cosmic Muon Decay
The Special Theory of Relativity does not explain the cosmic muon decay phenomenon; the phenomenon is not evidence of time dilation. The "dilated time" of special relativity has no physical interpretation as time in the sense used in physics; therefore, the extended lifetime of the muon, arrived at through the use of time dilation, cannot be used in the simple formula: distance = speed x time. So far, no one seems to have pointed out this simple fact. An invalid argument that purportedly resolves the cosmic muon decay phenomenon has been repeated and propagated for decades. Furthermore, the best experiment, by CERN [3], measured the average lifetime of the muon as 64.368(29) μs (γ = 29.33, v = 0.9994 c). The now accepted mean proper lifetime for μ−, 2.19489(10) μs, is a value computed from the relativistic time dilation formula using the figure of 64.368(29) μs, assuming the validity of Special Relativity; it is not an empirical value from experiments.
[4504] vixra:1408.0072 [pdf]
A Physical Axiom System Based on the Spirit
This paper establishes a physical axiom system. From the axiom system we can derive important physical laws such as the momentum conservation law, Newton's second law, Newton's law of gravity, the Schrodinger equation and Maxwell's equations, simplify existing physical theories, and explain some physical phenomena left unresolved by traditional physical theories. We can also derive the Schwarzschild solution for the external spherically symmetric gravitational field and the gravitational redshift equation, and show that at large distances Newton's law of gravity and the redshift equation must be corrected; the data given by the corrected formulas agree well with astronomical observations.
[4505] vixra:1408.0064 [pdf]
Four-Center Integral of a Dipolar Two-Electron Potential Between S-Type GTOs
We reduce two-electron 4-center products of Cartesian Gaussian Type Orbitals with Boys' contraction to 2-center products of the form $\psi_\alpha(r_i-A)\, \psi_\beta(r_j-B)$, and compute the 6-dimensional integral $\int d^3r_i\, d^3r_j$ over these with the effective potential $V_{ij} = (r_i-r_j) \cdot r_j/|r_i-r_j|^3$ in terms of Boys' confluent hypergeometric functions.
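The Boys functions that such integrals reduce to can be checked numerically. The sketch below (an illustration of the standard definition F_n(x) = ∫₀¹ t^{2n} e^{−x t²} dt, not the paper's contraction scheme) evaluates F_n by Simpson quadrature and compares F_0 against its closed form in terms of the error function.

```python
import math

def boys_quad(n, x, steps=2000):
    """Boys function F_n(x) = integral_0^1 t^(2n) * exp(-x t^2) dt,
    evaluated by composite Simpson quadrature (steps must be even)."""
    h = 1.0 / steps
    def f(t):
        return t ** (2 * n) * math.exp(-x * t * t)
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

def boys0_closed(x):
    """Closed form F_0(x) = (1/2) sqrt(pi/x) erf(sqrt(x)), valid for x > 0."""
    return 0.5 * math.sqrt(math.pi / x) * math.erf(math.sqrt(x))

# The quadrature agrees with the closed form to high accuracy.
for x in (0.1, 1.0, 5.0):
    assert abs(boys_quad(0, x) - boys0_closed(x)) < 1e-10
```

In practice F_n is computed by downward recursion rather than quadrature; the quadrature version is just the easiest form to verify.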
[4506] vixra:1408.0063 [pdf]
Bell's Inequality Loophole: Precession
Justifying a local hidden variable theory requires an explanation of Bell's inequality violation. Ever since Bell derived the inequality to test the classical prediction for the correlation of two spin-1/2 particles, many experiments have observed the violation, and thus concluded against local realism while validating the non-locality of quantum entanglement. Still, many scientists remain unconvinced of quantum entanglement because the experiments have loopholes that could potentially allow a local realistic explanation. Upholding local realism, this paper introduces how a precession of the spin would produce a cosine-like correlation function, and furthermore how it would also contribute to a fair-sampling loophole. Simulating the precession with a Monte Carlo method reveals that it can explain the observed Bell violation using only classical mechanics.
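The precession model itself is not reproduced here, but the baseline it argues against is easy to simulate: in the textbook local hidden-variable toy model (a shared random axis, outcomes given by the sign of the projection onto each detector setting) the correlation is the straight line C(θ) = 1 − 2θ/π rather than a cosine. A minimal seeded Monte Carlo sketch:

```python
import math, random

def classical_correlation(theta, trials=40000, seed=1):
    """Local hidden-variable toy model: both detectors share a random
    axis lam (isotropic); each outputs sign(setting . lam).
    The expected product is 1 - 2*theta/pi, linear in theta."""
    rng = random.Random(seed)
    a = (1.0, 0.0, 0.0)
    b = (math.cos(theta), math.sin(theta), 0.0)
    total = 0
    for _ in range(trials):
        # isotropic random axis via Gaussian components
        # (normalization is irrelevant for the signs)
        lam = [rng.gauss(0, 1) for _ in range(3)]
        sa = 1 if sum(x * y for x, y in zip(a, lam)) >= 0 else -1
        sb = 1 if sum(x * y for x, y in zip(b, lam)) >= 0 else -1
        total += sa * sb
    return total / trials

# At theta = pi/4 this model gives about 0.5, whereas the quantum
# prediction has cosine shape (cos(pi/4) is about 0.707).
```

The gap between the linear and cosine curves at intermediate angles is exactly what a successful classical mechanism, such as the precession proposed in the abstract, has to close.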
[4507] vixra:1408.0060 [pdf]
Maxwell Equations and Total Internal Reflection
The phenomenon of total internal reflection is considered in terms of exact solutions of the Maxwell equations. Matching of the plane and evanescent waves at the interface is carried out. It is shown that the amplitude of the reflected wave cannot be obtained from the matching alone: it can differ from that of the incident wave due to a possible energy loss in the evanescent-wave zone when another layer of optically dense medium is present, so this loss must be specified via the amplitude of the reflected wave. Moreover, the reflected wave potential has a phase shift which also depends on this specification.
[4508] vixra:1408.0056 [pdf]
On the Origin and Physical Nature of the Cosmological Vacuum V.4
In the present manuscript, we consider the origin of the cosmological space, and the Cosmological Constant as a consequence of the annihilation of matter and antimatter at the very beginning of the big bang. Since the cosmological expansion creates the space, the cosmological vacuum is considered as a very hegemonic entity which is the locus where all objects create events. Certainly, N units of mass in conjunction are required to produce an equilibrium between the gravitational phenomena of matter in bulk and each coulombic interaction between protons and electrons within the nuclear and atomic contour in all astrophysical (e.g. stars) or cosmological entities (e.g. galaxies). The "dark matter" is considered to be formed by highly excited H and HeI Rydberg atoms in equilibrium with the CMB radiation.
[4509] vixra:1408.0048 [pdf]
Trillion by Trillion Matrix Inverse: not Actually that Crazy
A trillion by trillion matrix is almost unimaginably huge, and finding its inverse seems to be a truly impossible task. However, given current trends in computing, it may actually be possible to achieve such a task around 2040 — if we were willing to devote the entirety of human computing resources to a single computation. Why would we want to do this? Perhaps, as Mallory said of Everest: “Because it’s there”.
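The back-of-envelope arithmetic behind such a claim can be parameterized. The numbers below (a ~2n³ flop count for dense LU-based inversion, and a set of assumed sustained aggregate compute rates) are illustrative knobs to vary, not figures taken from the paper:

```python
def inversion_cost_seconds(n, flops_per_sec):
    """Dense inversion via LU factorization costs about 2*n**3 floating-point
    operations; return the wall time at a given sustained aggregate rate."""
    return 2 * n**3 / flops_per_sec

n = 10**12                       # trillion-by-trillion matrix
year = 365.25 * 24 * 3600
# Hypothetical sustained rates, purely for scale:
for rate in (1e18, 1e21, 1e24):  # exaflop, zettaflop, yottaflop
    print(f"{rate:.0e} flop/s -> {inversion_cost_seconds(n, rate) / year:.2e} years")
```

Memory, communication, and numerical stability dominate long before raw flops do at this scale, which is presumably where the paper's actual analysis lives.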
[4510] vixra:1408.0026 [pdf]
Maximal Acceleration Perspective Problems.
We determine nonlinear transformations between coordinate systems which are mutually in constant symmetrical accelerated motion. The maximal acceleration limit follows from the kinematical origin. Maximal acceleration is an analogue of the maximal velocity in special relativity. We derive the dependence of mass, length, time, the Doppler effect, the Cherenkov angle and the transition radiation angle on acceleration as analogues of phenomena in the special theory of relativity. The derived addition theorem for acceleration can play a crucial role in modern particle physics and cosmology.
[4511] vixra:1408.0024 [pdf]
Dimension of Physical Space
Each state vector has its own corresponding element of the Cayley–Dickson algebra. The properties of a state vector require this algebra to be a normed division algebra. By the Hurwitz and Frobenius theorems, the maximal dimension of such an algebra is 8. Consequently, the dimension of the corresponding complex state vectors is 4, and the dimension of the Clifford set elements is 4x4. Such a set contains 5 matrices, 3 of them diagonal. Hence, the dimension of the space of point events is equal to 3+1.
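The Hurwitz bound invoked above can be illustrated directly: iterating the Cayley–Dickson doubling produces algebras of dimension 1, 2, 4, 8, 16, ..., and the norm stops being multiplicative exactly after dimension 8 (the octonions). A small sketch of the doubling, illustrative and not taken from the paper:

```python
import math, random

def conj(x):
    """Cayley-Dickson conjugate: (a, b)* = (a*, -b); identity on reals."""
    if len(x) == 1:
        return x[:]
    n = len(x) // 2
    return conj(x[:n]) + [-t for t in x[n:]]

def mul(x, y):
    """Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    sub = lambda u, v: [p - q for p, q in zip(u, v)]
    add = lambda u, v: [p + q for p, q in zip(u, v)]
    return sub(mul(a, c), mul(conj(d), b)) + add(mul(d, a), mul(b, conj(c)))

def norm(x):
    return math.sqrt(sum(t * t for t in x))

rng = random.Random(0)
for dim in (1, 2, 4, 8, 16):
    x = [rng.uniform(-1, 1) for _ in range(dim)]
    y = [rng.uniform(-1, 1) for _ in range(dim)]
    defect = abs(norm(mul(x, y)) - norm(x) * norm(y))
    print(dim, defect)  # rounding-level up to dim 8; generically nonzero at 16
```

The dimension-16 algebra (sedenions) has zero divisors, so |xy| = |x||y| fails there, which is the content of the Hurwitz theorem the abstract relies on.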
[4512] vixra:1408.0015 [pdf]
Launching the Six-Coloring Baryon-Antibaryon Antisymmetric Iso-Wavefunctions and Iso-Matrices
In this work, we upgrade the Inopin-Schmidt quark confinement and baryon-antibaryon duality proof with Santilli's new iso-mathematics. For a baryon-antibaryon pair confined to the six-coloring kagome lattice of the Inopin Holographic Confinement Ring (IHCR), we construct a cutting-edge procedure that iso-topically lifts the antisymmetric wavefunctions and matrices to iso-wavefunctions and iso-matrices, respectively. The initial results support our hypothesis that transitions between the energy and resonance states of the hadronic spectra may be rigorously characterized by properly-calibrated iso-topic liftings. In total, these rich developments suggest a promising future for this emerging iso-confinement framework, which must be subjected to additional scientific inquiry, scrutiny, and exploration.
[4513] vixra:1408.0013 [pdf]
A Simple Theory of Gravity Based on Mach's Principle
A simple and intuitive alternative theory of gravity based on Mach's Principle is proposed. At any location, the total gravitational potential from the Universe's matter distribution is c<sup>2</sup>. This Universal background potential constitutes the unit rest energy of matter and provides its unit mass, which is the essence behind E=mc<sup>2</sup>. The background gravity creates a local sidereal inertial frame at every location. A velocity increases the gravitational potential through a net blue-shift of the Universe's background gravity, causing kinematic time dilation, which is a form of gravitational time dilation. Matter and energy follow different rules of motion, as matter does not undergo Shapiro delay. As a consequence, the speed of matter may exceed the speed of light. The theory is consistent with existing relativity experiments, and is falsifiable by experiments whose predictions differ from General Relativity. The theory also explains why all the ICARUS and corrected OPERA experiments still show mean neutrino velocities slightly above the speed of light (early arrival of neutrinos by 0.1-6.5 ns), even after correcting the issues that had led the original OPERA experiment to erroneously report faster-than-light neutrinos (early arrival by ~60 ns).
[4514] vixra:1408.0011 [pdf]
Biquaternion em Forces with Hidden Momentum and an Extended Lorentz Force Law, the Lorentz-Larmor Law
In this paper we apply our version of biquaternion math-phys to electrodynamics, especially to moving electromagnetic dipole moments. After a terminological introduction and applying the developed math-phys to the Maxwell environment, we propose to fuse the Larmor angular velocity with the Lorentz Force Law, producing the biquaternion Lorentz-Larmor Law. This might be a more economic way to deal with cyclotron and tokamak related physics, where the Lorentz Force Law and the Larmor angular velocity actively coexist. Then we propose a biquaternion formulation for the energy-torque-hidden-momentum product related to a moving electromagnetic dipole moment. This expression is then used to get a relativistically invariant force equation on moving electromagnetic dipole moments. This equation is then used to get at the force on a hidden electromagnetic dipole moment and for a derivation of the Aharonov-Casher force on a magnetic dipole moving in an external electric field. As a conclusion we briefly relate our findings to Mansuripur's recent critique of the Lorentz Force Law and the electrodynamic expressions for the energy-momentum tensor.
[4515] vixra:1408.0008 [pdf]
The Grow-Shrink Strategy for Learning Markov Network Structures Constrained by Context-Specific Independences
Markov networks are models for compactly representing complex probability distributions. They are composed of a structure and a set of numerical weights. The structure qualitatively describes independences in the distribution, which can be exploited to factorize the distribution into a set of compact functions. A key application of learning structures from data is to automatically discover knowledge. In practice, structure learning algorithms focused on "knowledge discovery" present a limitation: they use a coarse-grained representation of the structure. As a result, this representation cannot describe context-specific independences. Very recently, an algorithm called CSPC was designed to overcome this limitation, but it has a high computational complexity. This work tries to mitigate this downside by presenting CSGS, an algorithm that uses the Grow-Shrink strategy to reduce unnecessary computations. In an empirical evaluation, the structures learned by CSGS achieve competitive accuracies and lower computational complexity with respect to those obtained by CSPC.
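The Grow-Shrink strategy the abstract builds on can be sketched with a perfect conditional-independence oracle: in a Markov network, X ⊥ Y | Z holds exactly when Z separates X and Y in the graph. The sketch below recovers a node's neighborhood from such an oracle; it illustrates the two-phase GS idea only, while CSGS itself additionally handles context-specific independences:

```python
from collections import deque

def separated(graph, x, y, z):
    """Graph separation: no path from x to y avoids the blocking set z.
    In a Markov network this is exactly the independence X | Y given Z."""
    seen, queue = {x} | set(z), deque([x])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v == y:
                return False
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return True

def grow_shrink(graph, x):
    """Recover the neighborhood (Markov blanket) of x with a perfect
    independence oracle, following the two-phase Grow-Shrink strategy."""
    s = []
    changed = True
    while changed:                      # grow: add apparently dependent nodes
        changed = False
        for y in graph:
            if y != x and y not in s and not separated(graph, x, y, s):
                s.append(y)
                changed = True
    for y in list(s):                   # shrink: drop false positives
        rest = [v for v in s if v != y]
        if separated(graph, x, y, rest):
            s.remove(y)
    return sorted(s)

# Chain 0 - 1 - 2 - 3: the neighborhood of node 1 is {0, 2}.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

Real learners replace the oracle with statistical independence tests on data, which is where the computational cost discussed in the abstract arises.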
[4516] vixra:1408.0003 [pdf]
Co-Prime Gap N-Tuples that Sum to a Number and Other Algebraic Forms
We study the spacings of the numbers co-prime to an even consecutive product of primes, $P_m\#$, and the structure exposed by the fundamental theorem of prime sieving (FTPS). We extend this to prove some parts of the general Hardy-Littlewood prime density conjecture for all finite multiplicative groups modulo a primorial. We then use the FTPS to prove that such groups have gap spacings which form arithmetic progressions as long as we wish. We also establish their densities and provide prescriptions to find them.
[4517] vixra:1407.0225 [pdf]
A Model of Global Instructions, from Classical to Quantum Mechanics, and Its Application to the Measurement Problem and Entanglement
In this work the usual formulation of the variational methods of Classical Mechanics is slightly modified by describing space as an interface implementing instructions: these instructions, in the form of bit strings, determine the existence and the dynamics of classical systems and are global – that is, their information content is present at every point of space. These changes are then carried over to Feynman’s path integral formulation of non-relativistic Quantum Mechanics by appealing to the quantum superposition principle. The information content of the instructions is expanded to include spin; there then follows an interpretation within this framework of the collapse of the wave function in terms of splitting and merging of information and, as an illustration, of Wheeler’s delayed choice experiment.
[4518] vixra:1407.0217 [pdf]
The BICEP2 Experiment And The Inflationary Model: Dimensionless Quantization of Gravity. Predictive Theory of Quantum Strings. Quantum Wormholes and Nonlocality of QM. The Absence Of Dark Matter
In this paper it is shown, on the one hand, that the tensor-to-scalar ratio of the B polarization modes is derived from the initial properties of the vacuum, due to the unification of gravitation and electromagnetism. This ratio is suggested to be 2/Pi^2 (0.20262423673) as an upper bound. Secondly, it is demonstrated that it is not necessary to introduce any inflaton scalar field or similar ad-hoc fields; rather, it is the very structure of the vacuum and the quantization of gravity that perfectly explain the initial exponential expansion of the universe. In some respects this exponential emptying of the vacuum has certain similarities with the emission of radiation by a black hole. This quantization of gravity and its unification with the electromagnetic field, shown in previous work, allows deriving with complete accuracy the exponential factor of inflation, and therefore calculating accurately the Hubble constant, the mass of the universe, the matter density, the value of the vacuum energy density, the GUT mass scale (X, Y bosons), the gravitino mass and more. The method of quantizing gravity used in this work is based on dimensionless constants that must be enforced in accordance with general relativity. We demonstrate the existence of quantum wormholes as the basic units of space-time energy, as an inseparable system. These quantum wormholes explain the instantaneous propagation speed of entangled particles, or what is the same: an infinite speed, under the condition of zero net energy. Another consequence of the dimensionless quantization of gravity is the existence of a constant gravitational acceleration that permeates all space. Its nature is quantum mechanical, and inseparable from the Hubble constant. This work is not mere speculation, since applying this vacuum gravitational acceleration first allows us to explain and accurately calculate the anomaly of the orbital eccentricity of the Moon.
This anomaly was detected and accurately measured with the laser-ranging experiment. This same constant acceleration in vacuum (at every coordinate of space), which interacts with the masses, explains the nearly constant rotation curves of galaxies and clusters of galaxies. Therefore there is no dark matter. The current interpretation of quantum mechanics is completely erroneous: the de Broglie–Bohm theory, also known as the pilot-wave theory, is a much more realistic and correct interpretation of quantum mechanics. The current assumption that there is no defined reality until the act of observation occurs is an aberrant, illogical and false assertion derived from the obsolete current interpretation of quantum mechanics. The age of the universe derived from the Hubble constant is a wrong estimate, due to absolute ignorance of the true nature of this constant and its physical implications. The universe acquired its current size in a very short period of time, one unit of Planck time. We understand that this work is dense and completely revolutionary in its consequences. Laser-reflection experiments of the laser-ranging type will undoubtedly confirm one of the main results: the existence of an intrinsic vacuum acceleration of gravitational quantum-mechanical nature, which explains the rotation curves of galaxies and clusters of galaxies, and which thus makes the existence of dark matter unnecessary.
[4519] vixra:1407.0214 [pdf]
A Fundamental Theorem of Prime Sieving
We introduce a fundamental theorem of prime sieving (FTPS) and show how it illuminates structure on numbers co-prime to a random product of unique prime numbers. This theorem operates on the transition between the set of numbers co-prime to any product of unique prime numbers and the new set when another prime number is introduced in the product.
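The transition the theorem operates on — from the residues co-prime to one product of primes to those co-prime to the next — is easy to tabulate. This small sketch (an illustration of the setting, not of the theorem itself) lists the totatives and their cyclic gaps for the primorials 6 and 30:

```python
from math import gcd

def totatives(n):
    """Residues in [1, n] that are co-prime to n."""
    return [k for k in range(1, n + 1) if gcd(k, n) == 1]

def gaps(res, n):
    """Cyclic gaps between consecutive co-prime residues; they sum to n."""
    return [(res[(i + 1) % len(res)] - res[i]) % n or n
            for i in range(len(res))]

t6, t30 = totatives(6), totatives(30)
print(t6)             # [1, 5]
print(t30)            # [1, 7, 11, 13, 17, 19, 23, 29]
print(gaps(t30, 30))  # [6, 4, 2, 4, 2, 4, 6, 2]
```

Introducing the prime 5 into the product 2*3 = 6 multiplies the count of co-prime residues by 5 − 1 = 4 (from 2 to 8) and produces the symmetric gap pattern shown, which is the kind of structure the FTPS tracks across such transitions.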
[4520] vixra:1407.0206 [pdf]
Unified Field Theory in a Nutshell (Elicit Dreams of a Final Theory)
The present reading is part of our on-going attempt at the foremost endeavour of physics since man began to comprehend the heavens and the earth. We present a much improved unified field theory of all the forces of Nature, i.e. the gravitational, the electromagnetic, the weak and the strong nuclear forces. The proposed theory is a radical improvement of Professor Hermann Weyl's supposedly failed attempt at a unified theory of gravitation and electromagnetism. As in Professor Weyl's theory, unit vectors in the proposed theory vary from one point to the next, albeit in a manner such that they are compelled to yield tensorial affinities. In a separate reading, the Dirac equation is shown to emerge as part of the description of these variable unit vectors. The nuclear force fields -- i.e., the electromagnetic, weak and strong -- together with the gravitational force field are seen to be described by a four-vector field, which forms part of the body of the variable unit vectors and hence of the metric of spacetime. The resulting theory very strongly appears to be a logically consistent and coherent unification of classical and quantum physics and at the same time a grand unity of all the forces of Nature. Unlike most unification theories, the present proposal is unique in that it achieves unification on a four-dimensional continuum of spacetime without the need for extra dimensions.
[4521] vixra:1407.0205 [pdf]
An Application of Hardy-Littlewood Conjecture
In this paper, assuming a weaker form of the Hardy-Littlewood Conjecture, we obtain a better upper bound on the exceptional real zero for a class of prime moduli.
[4522] vixra:1407.0198 [pdf]
Launching the Chaotic Realm of Iso-Fractals: a Short Remark
In this brief note, we introduce the new, emerging sub-discipline of iso-fractals by highlighting and discussing the preliminary results of recent works. First, we note the abundance of fractal, chaotic, non-linear, and self-similar structures in nature while emphasizing the importance of studying such systems because fractal geometry is the language of chaos. Second, we outline the iso-fractal generalization of the Mandelbrot set to exemplify the newly generated Mandelbrot iso-sets. Third, we present the cutting-edge notion of dynamic iso-spaces and explain how a mathematical space can be iso-topically lifted with iso-unit functions that (continuously or discretely) change; for the discrete case, we mention that iteratively generated sequences like Fibonacci's numbers and (the complex moduli of) Mandelbrot's numbers can supply a deterministic chain of iso-units to construct an ordered series of (magnified and/or de-magnified) iso-spaces that are locally iso-morphic. Fourth, we consider the initiation of iso-fractals with Inopin's holographic ring (IHR) topology and fractional statistics for 2D and 3D iso-spaces. In total, the reviewed iso-fractal results are a significant improvement over traditional fractals because the application of Santilli's iso-mathematics arms us with an extra degree of freedom for attacking problems in chaos. Finally, we conclude by proposing some questions and ideas for future research work.
[4523] vixra:1407.0197 [pdf]
Gravitational Binding Energy in Charged Cylindrical Symmetry
We consider a static, cylindrically symmetric, charged gravitating object with perfect fluid and investigate its gravitational binding energy. It is found that only the localized part of the mass function provides the gravitational binding energy, whereas the non-localized part generated by the electric coupling does not contribute to this energy.
[4524] vixra:1407.0196 [pdf]
Cylindrical Thin-Shell Wormholes in $f(R)$ Gravity
In this paper, we employ cut and paste scheme to construct thin-shell wormhole of a charged black string with $f(R)$ terms. We consider $f(R)$ model as an exotic matter source at wormhole throat. The stability of the respective solutions are analyzed under radial perturbations in the context of $R+{\delta}R^2$ model. It is concluded that both stable as well as unstable solutions do exist for different values of $\delta$. In the limit $\delta{\rightarrow}0$, all our results reduce to general relativity.
[4525] vixra:1407.0169 [pdf]
New Developments in Clifford Fourier Transforms
We show how real and complex Fourier transforms are extended to W.R. Hamilton's algebra of quaternions and to W.K. Clifford's geometric algebras. This was initially motivated by applications in nuclear magnetic resonance and electrical engineering, and was followed by an ever wider range of applications in color image and signal processing. Clifford's geometric algebras are complete algebras, algebraically encoding a vector space and all its subspace elements. Applications include electromagnetism, and the processing of images, color images, vector field and climate data. Further developments of Clifford Fourier transforms include operator exponential representations, and extensions to wider classes of integral transforms, like Clifford algebra versions of linear canonical transforms and wavelets.
[4526] vixra:1407.0147 [pdf]
Bimodal Quantum Theory
Some variants of quantum theory posit dogmatic “unimodal” states-of-being, and are based on a hodge-podge classical-quantum language. They are based on <i>ontic</i> syntax, but <i>pragmatic</i> semantics. This error has been termed semantic inconsistency [1]. Measurement seems to be the central problem of these theories, and is widely discussed in their interpretation. The Copenhagen theory deviates from this prescription, which is modeled on experience. A <i>complete</i> quantum experiment is “<i>bimodal</i>”: an experimenter <i>creates</i> the system-under-study in the <i>initial</i> mode of the experiment, and <i>annihilates</i> it in the <i>final</i> one. The experimental <i>intervention</i> lies beyond the theory. I theorize the most rudimentary bimodal quantum experiments studied by Finkelstein [2], and deduce the “bimodal probability density” π = |ψin>⊗<φfin| to represent <i>complete</i> quantum experiments. It resembles core insights of the Copenhagen theory.
[4527] vixra:1407.0133 [pdf]
Extremely Efficient Acceptance-Rejection Method for Simulating Uncorrelated Nakagami Fading Channels
Multipath fading is one of the most common distortions in wireless communications. The simulation of a fading channel typically requires drawing samples from a Rayleigh, Rice or Nakagami distribution. The Nakagami-m distribution is particularly important due to its good agreement with empirical channel measurements, as well as its ability to generalize the well-known Rayleigh and Rice distributions. In this paper, a simple and extremely efficient rejection sampling (RS) algorithm for generating independent samples from a Nakagami-m distribution is proposed. This RS approach is based on a novel hat function composed of three pieces of well-known densities from which samples can be drawn easily and efficiently. The proposed method is valid for any combination of parameters of the Nakagami distribution, without any restriction in the domain and without requiring any adjustment from the final user. Simulations for several parameter combinations show that the proposed approach attains acceptance rates above 90% in all cases, outperforming all the RS techniques currently available in the literature.
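The paper's three-piece hat function is not reproduced here, but the rejection-sampling principle it rests on can be shown with a simpler one-piece envelope: for m > 1, a Rayleigh proposal dominates the Nakagami-m density with bound M = m^m e^(1-m) / Γ(m), giving acceptance rate 1/M (about 68% for m = 2 — well below the 90%+ the paper reports, which is precisely what a better-fitting hat buys). A minimal sketch:

```python
import math, random

def nakagami_pdf(x, m, omega):
    return (2 * m**m / (math.gamma(m) * omega**m)) \
        * x**(2 * m - 1) * math.exp(-m * x * x / omega)

def rayleigh_pdf(x, omega):
    return (2 * x / omega) * math.exp(-x * x / omega)

def sample_nakagami_rs(m, omega, rng):
    """Rejection sampling with a Rayleigh envelope; valid for m > 1.
    Envelope bound: f(x) <= M * g(x) with M = m**m * exp(1 - m) / gamma(m),
    attained at x = sqrt(omega)."""
    M = m**m * math.exp(1 - m) / math.gamma(m)
    while True:
        u = 1.0 - rng.random()                    # in (0, 1]
        x = math.sqrt(-omega * math.log(u))       # Rayleigh draw by inversion
        if rng.random() * M * rayleigh_pdf(x, omega) <= nakagami_pdf(x, m, omega):
            return x

rng = random.Random(42)
samples = [sample_nakagami_rs(2.0, 1.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
# Theoretical mean: gamma(m + 1/2) / gamma(m) * sqrt(omega / m),
# about 0.9400 for m = 2, omega = 1.
```

Note this envelope degrades as m grows and fails for m < 1, which is the regime where a multi-piece hat such as the one proposed in the paper becomes necessary.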
[4528] vixra:1407.0107 [pdf]
Reality Elements
Half of the stars in the Universe are composed of ordinary matter, the other half of antimatter. Each atomic particle has the same number of gravitons as half of the stars in the Universe. The links of the gravitons with stars of one type or the other determine gravity and inertia. The blueshift phenomenon is explained, and it is also shown how energy can be obtained from gravity on the basis of the above. (Abstract translated from the Spanish original.)
[4529] vixra:1407.0084 [pdf]
On the Confinement of Superluminal Quarks Without Applying the Bag Pressure
We explain herein a fatal error, or at least an ironic and questionable parallel method, in the formulation of the strong interaction (quantum chromodynamics). We postulate that quarks are tachyons and do not obey Yang-Mills theory. By applying this correction to the dynamics of quarks, we can confine quarks in hadrons. We seek to show why quarks do not obey the Pauli exclusion principle and why we cannot observe free quarks. In addition, we obtain the correct sizes of hadrons and derive straightforward formulations of the strong interaction. Instead of several discrete QCD methods, we derive a unified formulation that enables us to solve the strong interaction for all energy values. Finally, we discuss some experimental evidence, such as the chiral magnetic effect, scattering angular distributions, Cherenkov gluon radiation, the hadron mass gap, the nucleon spin crisis and CP violation in the Standard Model, that may result from this assumption.
[4530] vixra:1407.0074 [pdf]
Estimating the Hubble Constant on the Basis of Observed Values of the Hubble Parameter $H(z)$ in a Model Without Expansion
In the model of low-energy quantum gravity by the author, the ratio ${H(z) / (1+z)}$ should be equal to the Hubble constant. Here, the weighted average value of the Hubble constant has been found using 29 observed values of the Hubble parameter $H(z)$: $\langle H_{0}\rangle \pm \sigma_{0}=(64.40 \pm 5.95) \ km \ s^{-1} \ Mpc^{-1}$.
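One plausible reading of "weighted average" here is the standard inverse-variance combination: each $H(z_i)/(1+z_i)$ estimates $H_0$ with uncertainty $\sigma_i/(1+z_i)$. The sketch below shows the mechanics on two made-up data points, not the paper's 29 observed $H(z)$ values:

```python
import math

def weighted_h0(data):
    """data: list of (z, H, sigma_H) triples. Returns (H0 estimate, its
    1-sigma error) using inverse-variance weights on the H/(1+z) estimates."""
    w_sum = x_sum = 0.0
    for z, h, sigma in data:
        x = h / (1 + z)             # one estimate of H0
        s = sigma / (1 + z)         # its propagated uncertainty
        w = 1.0 / (s * s)           # inverse-variance weight
        w_sum += w
        x_sum += w * x
    return x_sum / w_sum, 1.0 / math.sqrt(w_sum)

# Two illustrative points giving H/(1+z) = 64 +/- 2 and 66 +/- 1:
h0, err = weighted_h0([(1.0, 128.0, 4.0), (1.0, 132.0, 2.0)])
print(round(h0, 2), round(err, 3))   # 65.6 0.894
```

Whether the quoted $\sigma_0$ is this formal error or the weighted scatter of the 29 estimates is not stated in the abstract; the formula above covers only the former.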
[4531] vixra:1407.0067 [pdf]
The Analysis of Lobo and Visser Applied to Both Natario and Casimir Warp Drives. Physical Reactions of Gravitational Repulsive Behavior Between the Positive Mass of the Spaceship and the Negative Mass of the Warp Bubble
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive, discovered in $1994$, and the Natario warp drive, discovered in $2001$. The major drawback concerning warp drives is the huge amount of negative energy needed to sustain the warp bubble. In order to perform an interstellar space travel to a "nearby" star $20$ light-years away in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is $1,000,000,000,000,000,000,000,000$ times bigger in magnitude than the mass of the planet Earth! Although the energy conditions of General Relativity forbid the existence of negative energy, the Casimir effect, first predicted by Casimir in $1948$ and verified experimentally by Lamoreaux in $1997$, allows sub-microscopic amounts of it. We introduce in this work a shape function that lowers the negative energy density requirements in the Natario warp drive from $10^{48} \frac{Joules}{Meter^{3}}$ to $10^{-7} \frac{Joules}{Meter^{3}}$, a low and affordable level.
However, reducing the negative energy density requirements of the warp drive to arbitrarily low levels works only for empty bubbles, not for bubbles with real spaceships inside, because the positive mass of the spaceship exerts a gravitational repulsive force on the negative mass of the bubble, and a spaceship with a large positive mass inside a bubble of small negative mass destroys the bubble. According to Lobo and Visser, we can reduce the negative energy density of the warp bubble only to the limit where the negative energy becomes a reasonable fraction of the positive mass/energy of the spaceship, and no less; otherwise the bubble is destroyed. The analysis of Lobo and Visser must be taken into account when considering bubbles with real spaceships inside; otherwise the warp drive may not work. We reproduce in this work the analysis of Lobo and Visser for the Natario and Casimir warp drives. The work of Lobo and Visser is the third most important work in warp drive science, immediately after the works of Alcubierre and Natario, and the Lobo-Visser paper must also be considered a seminal paper like those of Alcubierre and Natario.
[4532] vixra:1407.0051 [pdf]
Extension of Maxwell's Equations for Charge Creation and Annihilation
An extension of Maxwell's equations is proposed to describe charge creation and annihilation. The proposed equations include the Nakanishi-Lautrup (NL) field, which was introduced to construct a Lorentz-covariant electromagnetic field model for quantum electrodynamics (QED). The necessity of the extension of Maxwell's equations is shown by comparing the current values given by Maxwell's equations and by the proposed equations in a simple structure consisting of a silicon sphere surrounded by SiO2. Maxwell's equations give unreasonable currents in the SiO2, whereas the proposed equations give a reasonable result. The electromagnetic field energy density is increased by the presence of the NL field.
[4533] vixra:1407.0041 [pdf]
Gravity Experiment in Waiting
In 1632 Galileo proposed an extremely simple gravity experiment that has yet to be carried out. Its essence is to determine what happens when a test mass is dropped into a hole through the center of a larger source mass. It is a common problem in first year physics courses. Using a modified Cavendish balance or an orbiting satellite, with modern technology the experiment could have been done decades ago. In a seemingly unrelated context, many modern theories in physics have been criticized for their lack of connection with empirical evidence. One of the critics, Jim Baggott, has expounded on the problem in a book and more recently in an article, <em>The Evidence Crisis</em>, posted to the weblog, <em>Scientia Salon</em>. Einstein’s theory of gravity is widely regarded as being supported by empirical evidence throughout its accessible range, from the scale of millimeters to Astronomical Units. Not commonly realized, however, is that, with regard to gravity-induced motion, the evidence excludes the <em>central</em> regions of material bodies over this whole range. Specifically, the gravitational <em>interior</em> solution has not been tested. It is thus argued that here too modern physics suffers an evidence crisis. The lack of evidence in this case pertains to what may be called the most ponderous half of the gravitational Universe, inside matter. This large gap in our empirical knowledge of gravity could be easily filled by conducting Galileo’s experiment. As conscientious scientists, it is argued, this is what we ought to do.
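The "first year physics" prediction mentioned above can be made concrete: assuming a uniform-density source mass, Newtonian gravity inside the body is linear in the radius, so the dropped test mass executes simple harmonic motion. A minimal sketch (Earth values are used purely for illustration; the proposed laboratory experiment would use a modified Cavendish balance with a much smaller source mass):

```python
import math

# Newtonian prediction for Galileo's experiment, assuming a uniform-density
# sphere: inside the body, gravity is g(r) = G*M*r / R**3, so a test mass
# dropped through a central hole undergoes simple harmonic motion.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg (illustrative source mass)
R = 6.371e6     # Earth's mean radius, m

omega = math.sqrt(G * M / R**3)   # angular frequency of the oscillation
T = 2 * math.pi / omega           # full oscillation period, s

print(f"Period through a uniform-density Earth: {T / 60:.1f} minutes")
```

The familiar result is a period of roughly 84 minutes, the same as that of a circular orbit skimming the surface; it is this interior prediction that, the author argues, has never been tested.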
[4534] vixra:1407.0035 [pdf]
Collection of Arguments Vs. Niels Bohr's Destruction of Reality
Because of mistakes that have been found, and which we are too lazy to correct, we need to check and recheck the foundations of physics. As an example of such mistakes: "all" scientists used the solution of dust collapse for almost a century, but it was wrong [Journal of Cosmology, 6, 1473-84, 2010]. Honest work on the errors, as I understand it, has not begun. Do you postpone everything until the Second Coming? But God speaks: Matthew 25:26.
[4535] vixra:1407.0032 [pdf]
A Brief Note on the Magnecule Order Parameter Upgrade Hypothesis
In this short remark, we report on recent hypothetical work that aims to equip Santilli's magnecule model with topological deformation order parameters (OP) of fractional statistics to define a preliminary set of wave-packet wave-functions for the electron toroidal polarizations. The primary objective is to increase the representational precision and predictive accuracy of the magnecule model by exemplifying the fluidic characteristics for direct industrial application. In particular, the OPs are deployed to encode the spontaneous superfluidic gauge symmetry breaking (which may be restored at the iso-topic level) and correlated with Leggett's superfluid B phases to establish a long range constraint for the wave-functions. These new, developing, theoretical results may be significant because the OP configuration arms us with an extra degree of freedom for encoding a magnecule's states and transitions, which may reveal further insight into the underlying physical mechanisms and features associated with these state-of-the-art magnecular bonds.
[4536] vixra:1407.0010 [pdf]
A Lower Bound of 2^n Conditional Jumps for Boolean Satisfiability on A Random Access Machine
We establish a lower bound of 2^n conditional jumps for deciding the satisfiability of the conjunction of any two Boolean formulas from a set called a full representation of Boolean functions of n variables - a set containing a Boolean formula to represent each Boolean function of n variables. The contradiction proof first assumes that there exists a RAM program that correctly decides the satisfiability of the conjunction of any two Boolean formulas from such a set by following an execution path that includes fewer than 2^n conditional jumps. By using multiple runs of this program, with one run for each Boolean function of n variables, the proof derives a contradiction by showing that this program is unable to correctly decide the satisfiability of the conjunction of at least one pair of Boolean formulas from a full representation of n-variable Boolean functions if the program executes fewer than 2^n conditional jumps. This lower bound of 2^n conditional jumps holds for any full representation of Boolean functions of n variables, even if a full representation consists solely of minimized Boolean formulas derived by a Boolean minimization method. We discuss why the lower bound fails to hold for satisfiability of certain restricted formulas, such as 2CNF satisfiability, XOR-SAT, and HORN-SAT. We also relate the lower bound to 3CNF satisfiability.
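As a rough intuition for where the $2^n$ count comes from, consider that a formula of $n$ variables has $2^n$ possible assignments. The following is only an illustrative brute-force sketch in Python (the predicates `f` and `g` are hypothetical examples), not the RAM-program model or the counting argument of the paper:

```python
from itertools import product

# Brute-force satisfiability sketch: a "formula" here is any predicate on a
# tuple of n Boolean values. Deciding the conjunction of two formulas this
# way visits up to 2**n assignments -- the same count as the lower bound.
def conjunction_satisfiable(f, g, n):
    return any(f(a) and g(a) for a in product((False, True), repeat=n))

# Example with n = 3: f is the XOR of all variables, g requires a[0] = True.
f = lambda a: a[0] ^ a[1] ^ a[2]
g = lambda a: a[0]
print(conjunction_satisfiable(f, g, 3))  # True, e.g. via (True, False, False)
```

The paper's claim is much stronger than this sketch suggests: it argues that no RAM program, however clever, can beat the $2^n$ conditional-jump count on a full representation, although restricted classes such as 2CNF, XOR-SAT and HORN-SAT escape the bound.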
[4537] vixra:1406.0190 [pdf]
Initiating a Hypothetical Upgrade to Magnecules with Topological Deformation Order Parameters for Spontaneous Superfluidic Gauge Symmetry Breaking
In this preliminary work, we propose a hypothesis and launch a procedural upgrade to magnecules by equipping these new iso-chemical creatures with topological deformation order parameters (OP) of fractional statistics to encode the spontaneous superfluidic gauge symmetry breaking (which we expect to be restored at the iso-topic level), correlated helices with long range order, and wave-packet wave-functions for the electron toroidal polarizations. For this initial "base case", we consider a single magnecular bond between dual inter-locked protium atoms in a magnecule. The results of this equipment support our hypothesis and are significant because the OP configuration arms us with an extra degree of freedom for encoding a magnecule's states and transitions; this may enable us to further decode and comprehend the underlying physical mechanisms and features associated with these state-of-the-art magnecular bonds for direct industrial application. Hence, these outcomes should be subjected to additional stringent examination and improvement.
[4538] vixra:1406.0188 [pdf]
On Lanczos' Conformal Trick
The Weyl conformal tensor describes the distorting but volume-preserving tidal effects of gravitation on a material body. A rather complicated combination of the Riemann-Christoffel tensor, the Ricci tensor and the Ricci scalar, the Weyl tensor is used in the construction of a unique conformally-invariant Lagrangian. In 1938 Cornelius Lanczos discovered a clever simplification of the mathematics that eliminated the RC term, thus considerably reducing the complexity of the overall Lagrangian. Here we present an equivalent but simpler approach to the one Lanczos used.
[4539] vixra:1406.0186 [pdf]
A Nonabelian Gauge Theory of Gravitation
The aim of the paper is to develop a gauge theory, which shall be on the one hand as similar as possible to the original ansatz of Einstein’s theory of general relativity, and on the other hand in agreement with other gauge theories as, for instance, those of the electroweak or of the strong interaction. The result is a nonabelian gauge theory with the general linear group GL(4,R) as its gauge group.
[4540] vixra:1406.0185 [pdf]
Kritische Analyse Der Quantenfeldtheorie Und Rekonstruktion im Rahmen Der Klassischen Feldtheorie (Critical Analysis of Quantum Field Theory and Its Reconstruction within the Framework of Classical Field Theory)
A critical inspection of quantum field theory will reveal that quantum field theory can be reconstructed only by means of classical field theory. In detail the following six assertions are claimed and proved: (1) Perturbation theory can be achieved by means of classical field theory. (2) Particles that are independent of one another are not correlated. This is especially true for the ingoing particles of scattering processes. (3) Outgoing particles in scattering processes are correlated. But the usual justification of quantum statistics is faulty. (4) In quantum field theory there is an amazing multitude of particle concepts. But a concise description of real existing elementary particles is lacking. (5) The path integral representation is not clearly defined in the particle picture of quantum mechanics. In the wave picture it is only another description of the expansion of a quantum state. (6) Functional representation is nothing else than a comprehensive version of perturbation theory.
[4541] vixra:1406.0184 [pdf]
Bell's Theorem Refuted, and 't Hooft's Superdeterminism Rejected, as We Factor Quantum Entanglements in Full Accord with Commonsense Local Realism
Commonsense local realism (CLR) is the fusion of local-causality (no causal influence propagates superluminally) and physical-realism (some physical properties change interactively). Advancing our case for a wholly CLR-based quantum mechanics, we use undergraduate maths and logic to factor the quantum entanglements in EPRB and Aspect (2002). Such factors (one factor relating to beables in Alice's domain, the other to beables in Bob's), refute Bell's theorem and eliminate the need for ‘t Hooft's superdeterminism. An obvious unifying algorithm (based on spin-s particles in a single thought-experiment) is foreshadowed and left as an exercise. That is, to emphasise the physical significance of our results, we here factor EPRB and Aspect (2002) separately and in detail.
[4542] vixra:1406.0177 [pdf]
A Wave Function and Quantum State Vector in Indefinite Metric Minkowski Space
Indefinite metric vectors are absolutely required as the physical states in Minkowski space, because Minkowski space is an indefinite metric space and the physical space-time. For example, the Maxwell equations are wave equations in Minkowski space. However, traditional quantum theory has ordinarily been studied only in definite metric space, i.e., Hilbert space, and there is no clear expression for indefinite metric vectors. Here we show a wave function example using Dirac's delta function for indefinite metric vectors in Minkowski space. In addition, we show that such vectors can interfere with themselves. This example also suggests that an indefinite metric will be absolutely required.
[4543] vixra:1406.0174 [pdf]
The Four-Frequency of Light
A basic overview of the photon four-frequency is given, demonstrating its use in the explanation of the aberration of starlight and the Doppler shift, at a mathematical level suitable for undergraduate students in the physical sciences.
[4544] vixra:1406.0172 [pdf]
Kaluza-Klein for Kids
A very elementary overview of the original Kaluza-Klein theory is presented, suitable for undergraduates who want to learn the basic mathematical formalism behind a revolutionary idea that was proposed one hundred years ago, yet today serves as the template for modern higher-dimensional particle and gravity theories.
[4545] vixra:1406.0152 [pdf]
Why Exponential Disk?
Galaxies demonstrate spectacular structure, and ordinary spiral galaxies are simply exponential disks. A rational structure has at least one net of orthogonal Darwin curves, and the exponential disk has infinitely many such nets. This paper proves that the nets of Darwin curves of an exponential disk define an intrinsic vector field in the disk plane. Finally, a proposition is given that this vector field should connect to the phenomenon of constant rotation curves.
[4546] vixra:1406.0139 [pdf]
Planetary Cores
This article questions the current model of the structure of the planets, arguing that their inner core is probably composed of hydrogen (like the stars) and at the Hartree temperature. It is also proposed that the age of the planets is the same as that of the stars, and that while stars generate electromagnetic radiation, planets would generate matter.
[4547] vixra:1406.0129 [pdf]
Theory of Colorless and Electrically Neutral Quarks and Colored and Electrically Charged Gauge Bosons
We propose a model of interactions in which two states of a quark, a colored and electrically charged state and a colorless and electrically neutral state, can transform into each other through the emission or absorption of a colored and electrically charged gauge boson. A novel feature of the model is that the colorless and electrically neutral quarks carry away the missing energy in decay processes as do neutrinos.
[4548] vixra:1406.0128 [pdf]
Gravitational Interaction in the Medium of Non-Zero Density
The paper presents novel results obtained by a more comprehensive analysis of well-known and clearly visible physical processes associated with gravitational interaction in a system of material bodies in a medium of non-zero density. The work is based on the statement that the "buoyancy" (Archimedes) force acting on a material body located in a medium of non-zero density is of gravitational nature. Due to this approach we managed, while staying within the framework of classical physics and mechanics, to introduce the concept of the body's gravitating mass as the mass determining the intensity of gravitational interaction, and to establish an analytical relation between the gravitating and inertial masses of a material body. Combining the direct and indirect (Archimedes force) gravitational effects on a material body in the medium, we succeeded in extracting from the total gravitational field of this system a structure with a dipole-like field-line distribution. This allows us to assert that, along with gravitational attraction, there also exists gravitational repulsion of material bodies, and that this fact does not contradict the existing basic definitions and concepts of classical physics and mechanics.
[4549] vixra:1406.0127 [pdf]
The Higgs Boson and Quark Compositeness
Considering that each quark is composed of two prequarks, it is shown that the recently found Higgs boson belongs to a triplet of neutral bosons, and that there are two quadruplets of charged Higgs-like bosons. The quantum numbers of these bosons are calculated and shown to be associated with a new kind of hypercharge directly linked to quark compositeness. In particular, the quantum number of the recently found Higgs boson is identified. A chart of quark decays via virtual Higgs-like bosons is proposed. Justifications for quark compositeness are presented.
[4550] vixra:1406.0117 [pdf]
Question of Planckian "Action" in Gravitational Wave Detection Experiments
The validity of Planck's constant in gravitational wave detection experiments is brought into question in the context of the framework of quantum mechanics. It is shown that in the absence of a purely gravitational measurement of Planck's constant one cannot at present rule out the possibility that gravitational quanta may be scaled by a more diminutive "action." An experiment that could unequivocally test this possibility is suggested.
[4551] vixra:1406.0103 [pdf]
On the Status and Trends of Research Involving Higher Vocational Colleges Mathematics Micro Curriculum Based on Computer Network Technology
A micro course (micro lecture) uses constructivist methods to deliver practical teaching content through online or mobile learning. As an important carrier of pre-class preview in the flipped classroom, the micro course has attracted the attention of many teachers and school administrators, and it is gradually becoming a new topic in the reform of education and the development of educational informatization. Taking the teaching of higher mathematics as an example, this paper outlines the current situation and development trends of implementing micro courses in higher vocational mathematics based on computer network technology. Key words: higher mathematics; micro lecture; computer network technology.
[4552] vixra:1406.0092 [pdf]
The α-Theory. From Tachyonic to Bradyonic Universe. New Physical Scenarios
The present book is devoted to the construction of a new theory of elementary particles. This theory was born in order to resolve some open issues of the Standard Model, among which is the search for a general equation describing all quantum particles in a unique way. Starting from the Pauli equation, two relativistic partial differential equations are obtained, one of first and one of second order, which are able to describe particles with arbitrary spin. The analysis of the energy spectrum of these equations shows that the particles they describe have an imaginary mass. The probability density study of these equations then shows that the principle of causality is not satisfied. These equations therefore characterize the tachyonic universe. We can see that such a universe admits spontaneous symmetry breaking. In general terms, we can establish that the broken symmetry might be a global characteristic of the tachyonic universe, due to its negative square energy, which makes it unstable. Hence, it is possible to assume that after the hot Big-Bang there was another catastrophic cosmological event, called the Big-Break, which broke the tachyonic universe down into one of positive square energy (the bradyonic universe), and from which the four fundamental interactions were probably generated. The transition from the tachyonic to the bradyonic universe transforms the tachyonic equations into bradyonic ones, which are then able to describe elementary particles with arbitrary spin. The first-order partial differential equation gives asymmetric quantum states, while the second-order one gives symmetric quantum states, and so they describe different theories. Both theories will be studied in the course of this work. Either way, only one of the two can be the right theory of the elementary particles. 
Since this theory follows from the Big-Bang, from the resulting tachyonic universe and from the Big-Break, it has been called the α-Theory, i.e. the "beginning theory." The α-Theory may theoretically predict the experimental properties of neutrinos and anti-neutrinos and, thanks to appendix A, allows the concepts of the Dirac sea, the Pauli exclusion principle and the spin-statistics theorem to be generalized, thus giving rise to "s-matter" and "multi-statistics," which offer an interesting approach to the explanation of Dark Matter. Furthermore, on large scales it predicts the "double inflation" mechanism, which, without Dark Energy, could explain the acceleration of our universe (the bradyonic universe). From the α-Theory two string actions can also be deduced, from which new ideas can be developed. In short, the α-Theory aims to be a GUT, able to describe the quantum particles and our universe in a simple and elegant way.
[4553] vixra:1406.0090 [pdf]
Speed of Light and Rates of Clocks in the Space Generation Model of Gravitation, Part 1
General Relativity’s Schwarzschild solution describes a spherically symmetric gravitational field as an utterly static thing. The Space Generation Model (SGM) describes it as an absolutely moving thing. The SGM nevertheless agrees equally well with observations made in the fields of the Earth and Sun, because it predicts almost exactly the same spacetime curvature. This success of the SGM motivates deepening the context—especially with regard to the fundamental concepts of motion. The roots of Einstein’s relativity theories thus receive critical examination. A particularly illuminating and widely applicable example is that of uniform rotation, which was used to build General Relativity (GR). Comparing Einstein’s logic to that of the SGM, the most significant difference concerns the interpretation of the readings of accelerometers and the rates of clocks. Where Einstein infers relativity of motion and spacetime symmetry, it is argued to be more logical to infer absoluteness of motion and spacetime asymmetry. This approach leads to reassessments of the essential nature of matter, time, and the dimensionality of space, which lead in turn to some novel cosmological consequences. Special emphasis is given to the model’s deviations from standard predictions inside matter, which have never been tested, but could be tested by conducting a simple experiment.
[4554] vixra:1406.0085 [pdf]
Advanced Numerical Approaches in the Dynamics of Relativistic Flows
Strong gravity and relativistic plasma flows are among the fundamental ingredients powering high-energy astrophysical phenomena such as short and long gamma ray bursts, core-collapse supernovae and relativistic outflows from black-hole accreting systems. General-relativistic hydrodynamics is also essential in modelling the merger of neutron-star binaries and black-hole neutron-star binaries, which are among the best sources for future gravitational-wave detectors such as LIGO, Virgo or KAGRA. Over the past decade, the understanding of these phenomena has benefited significantly from the results obtained through non-linear numerical calculations. Key factors in this progress have been the switch to more advanced numerical schemes that are able to properly treat relativistic shock waves, and the progressive inclusion of more "physics", such as magnetic fields or realistic equations of state. Following this trend, even better numerical tools and more accurate physical descriptions will be essential to understand these phenomena. This thesis aims at contributing to both of these aspects.
[4555] vixra:1406.0033 [pdf]
The Cubic Equation's Relation to the Fine Structure Constant, the Mixing Angles, and Weinberg Angle
A special case of the cubic equation is shown to possess three unusually economical solutions. A minimal case associated with these solutions is then shown to yield a congruous set of numbers that fit the fine structure constant, the sines squared of the quark and lepton mixing angles, as well as the Weinberg angle. Had Renaissance mathematicians probed the cubic equation's solutions more deeply these numbers might have formed a well-known part of algebra from the 16th century.
[4556] vixra:1406.0031 [pdf]
Plug-Flow Does not Exist in Electro-Osmosis
I eliminate the hundred-year-old notion of "plug flow" in electro-osmosis, which was predicted by the incomplete "electric double layer" (EDL) theory. A recently developed "electric triple layer" (ETL) theory removes some serious shortcomings of EDL theory regarding conservation of electric charge and, when applied to electro-osmosis, shows that the velocity profile is not "plug-like" at all, but more like a parabola; this agrees with experiments and molecular dynamics simulation (MDS) results. I also redefine the "Helmholtz-Smoluchowski velocity scale", which clears up certain misunderstandings regarding the representation of flow direction and accommodates solution and geometrical properties within it. I describe some novel electro-osmotic flow control mechanisms. The entire electrokinetic theory must be modified using these concepts.
[4557] vixra:1406.0027 [pdf]
Bell’s Theorem Refuted: Bell’s 1964:(15) is False
Generalizing Bell 1964:(15) to realizable experiments, CHSH (1969) coined the term "Bell's theorem". Despite loopholes, and as expected, the results of such experiments contradict Bell's theorem to our total satisfaction. Thus, for us, at least one step in Bell's supposedly commonsense analysis must be false. Using undergraduate maths and logic, we find a mathematical error in Bell (1964): a false equality which, uncorrected and thus continuing, undermines all Bell-style EPRB-based analyses, rendering them false. We therefore again predict with certainty that all loophole-free EPRB-style experiments will also give the lie to Bell's theorem.
[4558] vixra:1406.0015 [pdf]
A New Fundamental Factor in the Interpretation of Young's Double-Slit Experiment
In this paper, we reproduce the interference pattern using only space-time geodesics. We prove that fringes and bands can be reproduced by using fluctuating geodesics, which suggests that the interference pattern shown to occur with electrons, atoms, molecules and other elementary particles might be a natural manifestation of the space-time geodesics for the small scale world.
[4559] vixra:1406.0013 [pdf]
Two Statements that Are Equivalent to a Conjecture Related to the Distribution of Prime Numbers
Let $n\in\mathbb{Z}^+$. In [8] we ask the question whether any sequence of $n$ consecutive integers greater than $n^2$ and smaller than $(n+1)^2$ contains at least one prime number, and we show that this is actually the case for every $n\leq 1,193,806,023$. In addition, we prove that a positive answer to the previous question for all $n$ would imply Legendre's, Brocard's, Andrica's, and Oppermann's conjectures, as well as the assumption that for every $n$ there is always a prime number in the interval $[n,n+2\lfloor\sqrt{n}\rfloor-1]$. Let $\pi[n+g(n),n+f(n)+g(n)]$ denote the number of prime numbers in the interval $[n+g(n),n+f(n)+g(n)]$. Here we show that the conjecture described in [8] is equivalent to the statement that $$\pi[n+g(n),n+f(n)+g(n)]\ge 1\text{, }\forall n\in\mathbb{Z}^+\text{,}$$ where $$f(n)=\left(\frac{n-\lfloor\sqrt{n}\rfloor^2-\lfloor\sqrt{n}\rfloor-\beta}{|n-\lfloor\sqrt{n}\rfloor^2-\lfloor\sqrt{n}\rfloor-\beta|}\right)(1-\lfloor\sqrt{n}\rfloor)\text{, }g(n)=\left\lfloor1-\sqrt{n}+\lfloor\sqrt{n}\rfloor\right\rfloor\text{,}$$ and $\beta$ is any real number such that $1<\beta<2$. We also prove that the conjecture in question is equivalent to the statement that $$\pi[S_n,S_n+\lfloor\sqrt{S_n}\rfloor-1]\ge 1\text{, }\forall n\in\mathbb{Z}^+\text{,}$$ where $$S_n=n+\frac{1}{2}\left\lfloor\frac{\sqrt{8n+1}-1}{2}\right\rfloor^2-\frac{1}{2}\left\lfloor\frac{\sqrt{8n+1}-1}{2}\right\rfloor+1\text{.}$$ We use this last result in order to create plots of $h(n)=\pi[S_n,S_n+\lfloor\sqrt{S_n}\rfloor-1]$ for many values of $n$.
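The quantities $S_n$ and $h(n)$ defined in the abstract are directly computable. A small illustrative sketch (naive trial division via `math.isqrt`; the function names are ours, not the paper's):

```python
from math import isqrt

def is_prime(p):
    # Trial division up to floor(sqrt(p)); sufficient for these small values.
    if p < 2:
        return False
    return all(p % d for d in range(2, isqrt(p) + 1))

def S(n):
    # S_n = n + m(m-1)/2 + 1 with m = floor((sqrt(8n+1) - 1) / 2)
    m = (isqrt(8 * n + 1) - 1) // 2
    return n + m * (m - 1) // 2 + 1

def h(n):
    # h(n) = number of primes in [S_n, S_n + floor(sqrt(S_n)) - 1]
    s = S(n)
    return sum(is_prime(p) for p in range(s, s + isqrt(s)))

# Check the equivalent conjecture h(n) >= 1 over a small range.
print(all(h(n) >= 1 for n in range(1, 2001)))  # True
```

For example, $S_1 = 2$ with interval $[2,2]$, and $S_{10} = 17$ with interval $[17,20]$, each of which indeed contains a prime.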
[4560] vixra:1406.0011 [pdf]
A Microscopic Approach to Quark and Lepton Masses and Mixings
In recent papers a microscopic model for the SM Higgs mechanism has been proposed, and an idea how to determine the 24 quark and lepton masses of all 3 generations has emerged in that framework. This idea is worked out in detail here by accommodating the fermion masses and mixings to microscopic parameters. The top quark mass turns out to be $m_t\approx 170$GeV and can be given in terms of the weak boson masses and of certain exchange couplings of isospin vectors obeying a tetrahedral symmetry. The observed hierarchy in the family spectrum is attributed to a natural hierarchy in the microscopic couplings. The neutrinos will be shown to vibrate within the potential valleys of the system, thus retaining very tiny masses. This is related to a Goldstone effect inside the internal dynamics. A discussion of the quark and lepton mixing matrices is also included. The mixing angles of the PMNS matrix are calculated for an example set of parameters, and a value for the CP violating phase is given.
[4561] vixra:1406.0003 [pdf]
Greenberg Parafermions and a Microscopic Model of the Fractional Quantum Hall Effect
So far all theoretical models claiming to explain the Fractional Quantum Hall Effect are macroscopic in nature. In this paper we suggest a truly microscopic structure of this phenomenon. At the base is how electron charge is defined in the group SU(N) for arbitrary values of integer N. It is shown how all discovered charges in the Fractional Quantum Hall Effect are accounted for in this model. We show how Greenberg Parafermions, obeying parastatistics, are fundamentally required within this picture to explain the Fractional Quantum Hall Effect. We also show how both the Fractional Quantum Hall Effect and the Integral Quantum Hall Effect are explained in a common unified description in this microscopic model.
[4562] vixra:1405.0354 [pdf]
Electric Triple Layer Theory
I correct the hundred-year-old theory of charge distribution within an electrolytic solution surrounded by charged walls. The existing theory always implies an excess of counter-ions (having polarity unlike the walls) everywhere in the solution domain, so it cannot handle a solution that possesses an excess of ions of the other type (co-ions) or is electrically neutral as a whole. In the corrected distribution presented here, counter-ions dominate near the walls, while the rest of the domain is allowed to be dominated by co-ions; the algebraic sum gives the net charge present, which can be of any sign and magnitude, making the theory quite general. This clarifies and raises many important concepts: a novel "Electric Triple Layer" (ETL) theory replaces "Electric Double Layer" (EDL) theory; widths of the electric layers can be calculated accurately instead of being estimated by the Debye length scale; etc.
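For reference, the conventional Debye-length estimate that the abstract proposes to replace follows from the standard textbook formula. This sketch shows only the classical EDL baseline, not the paper's ETL calculation; the constants and the 1 mM example are standard illustrative values:

```python
import math

# Textbook Debye screening length for a symmetric z:z electrolyte --
# the conventional estimate of the electric layer width:
#   lambda_D = sqrt(eps * kB * T / (2 * z^2 * e^2 * n0))
def debye_length(n0, z=1, eps_r=78.5, T=298.15):
    e = 1.602e-19      # elementary charge, C
    kB = 1.381e-23     # Boltzmann constant, J/K
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    eps = eps_r * eps0
    return math.sqrt(eps * kB * T / (2 * z**2 * e**2 * n0))

# 1 mM monovalent salt: convert mol/L to ions per m^3.
n0 = 1e-3 * 6.022e23 * 1e3
print(f"lambda_D ≈ {debye_length(n0) * 1e9:.1f} nm")  # ≈ 9.6 nm
```

The claim of the abstract is that this single length scale is replaced, in the ETL picture, by accurately calculated widths of the separate layers.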
[4563] vixra:1405.0338 [pdf]
A Clifford-Gravity Based Cosmology, Dark Matter and Dark Energy
A Clifford-Gravity based model is exploited to build a generalized action (beyond the current ones used in the literature) and arrive at relevant numerical results which are consistent with the presently-observed de Sitter accelerating expansion of the universe driven by a very small vacuum energy density $ \rho_{obs} \sim 10^{ -120} (M_P)^4 $ ($ M_P $ is the Planck mass) and provide promising dark energy/matter candidates in terms of the $16$ scalars corresponding to the degrees of freedom associated with a $ Cl(3,1)$-algebra valued scalar field $ {\bf \Phi} $ in four dimensions.
[4564] vixra:1405.0329 [pdf]
Kerr-Newman, Jung, and the Modified Cosmological Model
Where physical theory normally seeks to describe an objective natural world, the modified cosmological model (MCM) seeks to describe an observer's interaction with that world. Qualitative similarities between the psychological observer, the MCM, and the Kerr-Newman black hole are presented. We describe some minimal modifications to previously proposed processes in the MCM. Inflation, large-scale CMB fluctuations and the free energy device are discussed.
[4565] vixra:1405.0305 [pdf]
Virtual Crossword of Grand Unification
Grand unification can be considered as a virtual crossword of ideas, hypotheses and theories. Composing and solving this crossword, we develop new ideas and hypotheses which intersect with each other. The idea of space consisting of unit cells allows us to relate gravitation to the ability of unit cells to change volume, and electromagnetism to the ability of unit cells to undergo internal movement. The ability of unit cells to form short-range order suggests that photons, leptons, mesons and baryons have the structure of regular and semiregular polyhedra. Applied to dark matter, the new model of elementary particles leads to the conclusion that the elements of DM are non-relativistic nuclei consisting of neutrinos or antineutrinos. It is alleged that neutrinos are capable of forming short-lived Cooper pairs, most of which immediately decay, giving rise to oscillation, while others fuse into nuclei of deutrinium, the lightest element of DM.
[4566] vixra:1405.0297 [pdf]
Initiating the Effective Unification of Black Hole Horizon Area and Entropy Quantization with Quasi-Normal Modes
Black hole (BH) quantization may be the key to unlocking a unifying theory of quantum gravity (QG). Surmounting evidence in the field of BH research continues to support a horizon (surface) area with a discrete and uniformly spaced spectrum, but there is still no general agreement on the level spacing. In this specialized and important BH case study, our objective is to report and examine the pertinent groundbreaking work of the strictly thermal and non-strictly thermal spectrum level spacing of the BH horizon area quantization with included entropy calculations, which aims to tackle this gigantic problem. In particular, this work exemplifies a series of imperative corrections that eventually permits a BH's horizon area spectrum to be generalized from strictly thermal to non-strictly thermal with entropy results, thereby capturing multiple preceding developments by launching an effective unification between them. Moreover, the identified results are significant because quasi-normal modes (QNM) and "effective states" characterize the transitions between the established levels of the non-strictly thermal spectrum.
[4567] vixra:1405.0294 [pdf]
Hyperkomplexe Algebren Und Ihre Anwendung in Der Mathematischen Formulierung Der Quantentheorie (Hypercomplex Algebras and Their Application to the Mathematical Formulation of Quantum Theory)
Quantum theory (QT), one of the basic theories of physics, in terms of Schrödinger's 1926 wave functions generally requires the field <b>C</b> of the complex numbers to be formulated. However, even the complex-valued description soon turned out to be insufficient. Incorporating Einstein's theory of Special Relativity (Schrödinger, Klein, Gordon, 1926; Dirac, 1928) leads to an equation which requires some coefficients which are hypercomplex. Conventionally the Dirac equation is written using pairwise anti-commuting matrices. However, a unitary ring of square matrices <i>is</i> an - associative - hypercomplex algebra by definition, and only the algebraic properties of the elements and their relations to one another are important. We hence replace the matrix formulation by a more symbolic one. In the case of the Dirac equation, these elements are called biquaternions. As an algebra over <b>R</b>, the biquaternions are eight-dimensional; as subalgebras, this algebra contains the division ring <b>H</b> of the quaternions on the one hand and the algebra <b>C</b>⊗<b>C</b> of the bicomplex numbers on the other, the latter being commutative. As will turn out later, <b>C</b>⊗<b>C</b> contains <i>purely non-real</i> subalgebras isomorphic to <b>C</b>. Within this paper, we first briefly consider the basics of non-relativistic and relativistic quantum theory. Then we introduce general hypercomplex algebras and also show how a relativistic quantum equation like Dirac's can be formulated using hypercomplex coefficients. Subsequently, some algebraic preconditions for operations within hypercomplex algebras and their subalgebras are examined. For our purpose, equations akin to Schrödinger's should be able to be set up and solved. Functions of complementary variables like <b>x</b> and <b>p</b> should be Fourier transforms of each other. This should hold within a purely non-real subspace, which must hence be a subalgebra. Furthermore, it is an ideal, denoted by <i>J</i>. It must be isomorphic to <b>C</b>, hence containing an internal identity element. The bicomplex numbers will turn out to fulfil these preconditions, and therefore the formalism of QT can be developed within their subalgebras. We also show that the bicomplex numbers admit the definition of several different kinds of conjugates. One of these treats the elements of <i>J</i> precisely as the usual conjugate treats complex numbers. This defines a quantity which we call a modulus and which, in contrast to the complex absolute square, remains non-real (but may be called `pseudo-real'). However, we do not attempt an explicit physical interpretation here but leave this to future examinations.
[4568] vixra:1405.0290 [pdf]
Extended Ricci and Holographic Dark Energy Models in Fractal Cosmology
We consider the fractal Friedmann-Robertson-Walker universe filled with dark fluid. By making use of this assumption, we discuss two types of dark energy models: generalized Ricci and generalized holographic dark energies. We calculate the equation of state parameters, investigate some special limits of the results and discuss the physical implications via graphs. Also, we reconstruct the potential and the dynamics of the quintessence and k-essence (kinetic quintessence) according to the results obtained for the fractal dark energy.
[4569] vixra:1405.0281 [pdf]
Hypercomplex Algebras and Their Application to the Mathematical Formulation of Quantum Theory
Quantum theory (QT), one of the basic theories of physics, in terms of Schrödinger's 1926 wave functions generally requires the field <b>C</b> of the complex numbers to be formulated. However, even the complex-valued description soon turned out to be insufficient. Incorporating Einstein's theory of Special Relativity (Schrödinger, Klein, Gordon, 1926; Dirac, 1928) leads to an equation which requires some coefficients which are hypercomplex. Conventionally the Dirac equation is written using pairwise anti-commuting matrices. However, a unitary ring of square matrices <i>is</i> an - associative - hypercomplex algebra by definition, and only the algebraic properties of the elements and their relations to one another are important. We hence replace the matrix formulation by a more symbolic one. In the case of the Dirac equation, these elements are called biquaternions. As an algebra over <b>R</b>, the biquaternions are eight-dimensional; as subalgebras, this algebra contains the division ring <b>H</b> of the quaternions on the one hand and the algebra <b>C</b>⊗<b>C</b> of the bicomplex numbers on the other, the latter being commutative. As will turn out later, <b>C</b>⊗<b>C</b> contains <i>purely non-real</i> subalgebras isomorphic to <b>C</b>. Within this paper, we first briefly consider the basics of non-relativistic and relativistic quantum theory. Then we introduce general hypercomplex algebras and also show how a relativistic quantum equation like Dirac's can be formulated using hypercomplex coefficients. Subsequently, some algebraic preconditions for operations within hypercomplex algebras and their subalgebras are examined. For our purpose, equations akin to Schrödinger's should be able to be set up and solved. Functions of complementary variables like <b>x</b> and <b>p</b> should be Fourier transforms of each other. This should hold within a purely non-real subspace, which must hence be a subalgebra. Furthermore, it is an ideal, denoted by <i>J</i>. It must be isomorphic to <b>C</b>, hence containing an internal identity element. The bicomplex numbers will turn out to fulfil these preconditions, and therefore the formalism of QT can be developed within their subalgebras. We also show that the bicomplex numbers admit the definition of several different kinds of conjugates. One of these treats the elements of <i>J</i> precisely as the usual conjugate treats complex numbers. This defines a quantity which we call a modulus and which, in contrast to the complex absolute square, remains non-real (but may be called `pseudo-real'). However, we do not attempt an explicit physical interpretation here but leave this to future examinations.
[4570] vixra:1405.0280 [pdf]
An Adaptive Population Importance Sampler: Learning from the Uncertainty
Monte Carlo (MC) methods are well-known computational techniques, widely used in different fields such as signal processing, communications and machine learning. An important class of MC methods is composed of importance sampling (IS) and its adaptive extensions, such as population Monte Carlo (PMC) and adaptive multiple IS (AMIS). In this work, we introduce a novel adaptive and iterated importance sampler using a population of proposal densities. The proposed algorithm, named adaptive population importance sampling (APIS), provides a global estimation of the variables of interest iteratively, making use of all the samples previously generated. APIS combines a sophisticated scheme to build the IS estimators (based on the deterministic mixture approach) with a simple temporal adaptation (based on epochs). In this way, APIS is able to keep all the advantages of both AMIS and PMC, while minimizing their drawbacks. Furthermore, APIS is easily parallelizable. The cloud of proposals is adapted in such a way that local features of the target density can be better taken into account compared to single global adaptation procedures. The result is a fast, simple, robust and high-performance algorithm applicable to a wide range of problems. Numerical results show the advantages of the proposed sampling scheme in four synthetic examples and a localization problem in a wireless sensor network.
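The deterministic mixture approach mentioned above can be sketched in a few lines: every sample drawn from proposal q_i is weighted against the full mixture of all proposals rather than against q_i alone, which is what stabilizes the estimators APIS builds on. The target density, proposal locations and sample sizes below are invented purely for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized target: Gaussian N(3, 1).
def target(x):
    return np.exp(-0.5 * (x - 3.0) ** 2)

# Population of Gaussian proposals (means chosen arbitrarily).
means = np.array([-2.0, 0.0, 2.0, 4.0])
sigma = 2.0
M = 5000  # samples drawn per proposal

samples = rng.normal(means[:, None], sigma, size=(len(means), M)).ravel()

# Deterministic-mixture weighting: the denominator is the full
# equal-weight mixture density, not the individual proposal density.
def mixture_pdf(x):
    z = (x[:, None] - means[None, :]) / sigma
    return np.mean(np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi)), axis=1)

w = target(samples) / mixture_pdf(samples)
est_mean = np.sum(w * samples) / np.sum(w)  # self-normalized IS estimate
print(round(est_mean, 2))  # close to the true mean 3.0
```

An adaptive scheme such as APIS would additionally move the proposal means between epochs using the weighted samples; the static snippet above shows only the estimator itself.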
[4571] vixra:1405.0263 [pdf]
A Fast Universal Self-Tuned Sampler Within Gibbs Sampling
Bayesian inference often requires efficient numerical approximation algorithms, such as sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods. The Gibbs sampler is a well-known MCMC technique, widely applied in many signal processing problems. Drawing samples from univariate full-conditional distributions efficiently is essential for the practical application of the Gibbs sampler. In this work, we present a simple, self-tuned and extremely efficient MCMC algorithm which produces virtually independent samples from these univariate target densities. The proposal density used is self-tuned and tailored to the specific target, but it is not adaptive. Instead, the proposal is adjusted during an initial optimization stage, following a simple and extremely effective procedure. Hence, we have named the newly proposed approach as FUSS (Fast Universal Self-tuned Sampler), as it can be used to sample from any bounded univariate distribution and also from any bounded multi-variate distribution, either directly or by embedding it within a Gibbs sampler. Numerical experiments, on several synthetic data sets (including a challenging parameter estimation problem in a chaotic system) and a high-dimensional financial signal processing problem, show its good performance in terms of speed and estimation accuracy.
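For context, a Gibbs sampler only ever draws from univariate full conditionals; FUSS addresses the hard case where those conditionals are nonstandard. A minimal sketch for the easy case of a standard bivariate Gaussian target, where the full conditionals are themselves Gaussian and need no proposal tuning, looks as follows; the correlation and iteration count are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.8          # correlation of the toy bivariate Gaussian target
n_iter = 20000
x, y = 0.0, 0.0
xs = np.empty(n_iter)

for i in range(n_iter):
    # Full conditionals of a standard bivariate Gaussian with correlation rho:
    # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    xs[i] = x

print(round(xs.mean(), 2), round(xs.std(), 2))  # near 0.0 and 1.0
```

When the conditionals are not of a standard family, the `rng.normal` draws above are exactly the steps a method like FUSS would replace with its self-tuned proposal.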
[4572] vixra:1405.0235 [pdf]
Knot Physics: Deriving the Fine Structure Constant
Knot physics describes the geometry of particles and fields. In a previous paper we described the topology and geometry of an electron. From the geometry of an electron we can construct a mathematical model relating its charge to its spin angular momentum. From experimental data, the spin angular momentum is hbar/2. Therefore the mathematical model provides a comparison of electron charge to Planck’s constant, which gives the fine structure constant alpha. We find that using only electromagnetic momentum to derive the fine structure constant predicts a value for 1/alpha that is about two orders of magnitude too small. However, the equations of knot physics imply that the electromagnetic field cusp must be compensated by a geometric field cusp. The geometric cusp is the source of a geometric field. The geometric field has momentum that is significantly larger than the momentum from the electromagnetic field. The angular momentum of the two fields together predicts a fine structure constant of 1/alpha = 136.85. Compared to the actual value of 1/alpha = 137.04, the error is 0.13%. Including the effects of virtual particles may reduce the error further.
[4573] vixra:1405.0234 [pdf]
Reconciling Mach's Principle and General Relativity Into a Simple Alternative Theory of Gravity
A theory of gravity reconciling Mach's Principle and General Relativity (GR) is proposed. At any location, the total gravitational potential from the Universe's matter distribution is c<sup>2</sup>. This Universal background potential constitutes the unit rest energy of matter and provides its unit mass, which is the essence behind E=mc<sup>2</sup>. The background gravity creates a local sidereal inertial frame at every location. A velocity increases gravitational potential through net blue-shift of the Universe's background gravity, causing kinematic time dilation, which is a form of gravitational time dilation. Matter and energy follow different rules of motion, and the speed of matter may exceed the speed of light. The theory is consistent with existing relativity experiments, and is falsifiable based on experiments whose predictions differ from GR. The theory also explains why all the ICARUS and corrected OPERA experiments still show mean neutrino velocities slightly above the speed of light (early arrival of neutrinos by 0.1-6.5 ns), even after correcting the issues that had led the original OPERA experiments to erroneously report faster-than-light neutrinos (early arrival by ~60 ns).
[4574] vixra:1405.0223 [pdf]
Knot Physics: Spacetime in co-Dimension 2
Spacetime is assumed to be a branched 4-dimensional manifold embedded in a 6-dimensional Minkowski space. The branches allow quantum interference; each individual branch is a history in the sum-over-histories. An n-manifold embedded in an (n+2)-space can be knotted. The metric on the spacetime manifold is inherited from the Minkowski space and only allows a particular variety of knots. We show how those knots correspond to the observed particles with corresponding properties. We derive a Lagrangian. The Lagrangian combined with the geometry of the manifold produces gravity, electromagnetism, the weak force, and the strong force.
[4575] vixra:1405.0222 [pdf]
Learning Markov Network Structures Constrained by Context-Specific Independences
This work focuses on learning the structure of Markov networks. Markov networks are parametric models for compactly representing complex probability distributions. These models are composed of a structure and a set of numerical weights. The structure describes independences that hold in the distribution. Depending on the goal of learning intended by the user, structure learning algorithms can be divided into: density estimation algorithms, focusing on learning structures for answering inference queries; and knowledge discovery algorithms, focusing on learning structures for describing independences qualitatively. The latter algorithms present an important limitation for describing independences, as they use a single graph, a coarse-grained representation of the structure. However, many practical distributions present a flexible type of independences called context-specific independences, which cannot be described by a single graph. This work presents an approach for overcoming this limitation by proposing an alternative representation of the structure that we name the canonical model, and a novel knowledge discovery algorithm called CSPC for learning canonical models by using as constraints the context-specific independences present in the data. In an extensive empirical evaluation, CSPC learns more accurate structures than state-of-the-art density estimation and knowledge discovery algorithms. Moreover, for answering inference queries, our approach obtains competitive results against density estimation algorithms, significantly outperforming knowledge discovery algorithms.
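A context-specific independence of the kind discussed above can be seen in a three-variable toy distribution: X and Y are independent when Z = 1 but dependent when Z = 0, a pattern that a single undirected graph cannot express. The distribution below is invented purely for illustration and has no connection to the paper's data.

```python
# P(x, y | z) for binary variables: factorizes when z == 1, not when z == 0.
def p_xy_given_z(x, y, z):
    if z == 1:
        px = 0.7 if x == 1 else 0.3      # X independent of Y given Z = 1
        py = 0.6 if y == 1 else 0.4
        return px * py
    # z == 0: X tends to copy Y, so they are dependent in this context.
    return 0.4 if x == y else 0.1

def p_x_given_yz(x, y, z):
    num = p_xy_given_z(x, y, z)
    den = sum(p_xy_given_z(xx, y, z) for xx in (0, 1))
    return num / den

# In context Z = 1, P(X=1 | Y, Z=1) does not depend on Y ...
print(round(p_x_given_yz(1, 0, 1), 2), round(p_x_given_yz(1, 1, 1), 2))  # 0.7 0.7
# ... but in context Z = 0 it does.
print(round(p_x_given_yz(1, 0, 0), 2), round(p_x_given_yz(1, 1, 0), 2))  # 0.2 0.8
```

A single graph must either draw the X-Y edge (losing the Z = 1 independence) or omit it (losing the Z = 0 dependence), which is the representational gap the canonical model is meant to close.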
[4576] vixra:1405.0213 [pdf]
Modified Gravity and Cosmology
We propose a modification of Einstein-Cartan gravity equations. The modified cosmology departs from the standard model of cosmology for small Hubble parameter. A characteristic Hubble scale h0, which is intrinsically related to cosmological constant, marks the boundary between the validity domains of the standard model of cosmology and modified cosmology. Such a role for h0 is similar to Planck’s constant in the quantum/classical context, or to the speed of light c in the relativity/classical context. For large Hubble parameter, the standard model of cosmology is restored. In the opposite limit of small Hubble parameter, which is the case for present epoch, Lorentz-violating effects would manifest themselves. One of the implications is that there may be no need to invoke dark matter to account for cosmological mass discrepancies.
[4577] vixra:1405.0199 [pdf]
Two-Parameter Model of Barred Galaxies and its Testification
Natural structure is unique, and galaxies are natural structures. In our previous work we showed that rational structure is unique. In this paper, the unique two-parameter rational structure is used to model barred galaxies. The model fits galaxy images satisfactorily, and its prediction of galaxy arms is consistent with observation. Today the accepted theory applied to galaxies is Newton's universal gravity, which, however, generally fails to fit galactic observations. Accordingly, dark matter was introduced but has never been observed. A simple glimpse at images of edge-on spiral galaxies suggests that Newton's concept of action at a distance should be rejected: it is hard to imagine that stars far away from the galaxy center suffer an instant force from the center. In our previous work we suggested a new universal gravity which generalizes Newton's and is uniquely determined by rational structure. The new gravity results from the local curvature of Darwin surfaces and simply explains the kinematic phenomenon of constant rotation curves. In this paper, a preliminary study of the new gravity with Poisson's equation is presented which verifies the phenomenon.
[4578] vixra:1405.0184 [pdf]
A New Class of LRS Bianchi Type VI0 Universes with Free Gravitational Field and Decaying Vacuum Energy Density
A new class of LRS Bianchi type VI0 cosmological models with free gravitational fields and a variable cosmological term is investigated in the presence of perfect fluid as well as bulk viscous fluid. To get the deterministic solution we have imposed two different conditions on the free gravitational fields. In the first case we consider the free gravitational field as magnetic type, whereas in the second case a `gravitational wrench' of unit `pitch' is supposed to be present in the free gravitational field. The viscosity coefficient of the bulk viscous fluid is assumed to be a power function of mass density. The effect of bulk viscous fluid distribution in the universe is compared with the perfect fluid model. The cosmological constant is found to be a positive decreasing function of time, which is corroborated by results from recent observations. The physical and geometric aspects of the models are discussed.
[4579] vixra:1405.0154 [pdf]
A Study of Finite Length Thermoelastic Problem of Hollow Cylinder with Radiation
In this paper it is shown that all possible boundary-condition problems for hollow cylinders can be solved by particularizing the method described here. A new finite integral transform, an extension of those given by Sneddon [11], whose kernel is given by cylindrical functions, is used to solve the problem of finding the temperature at any point of a hollow cylinder of any height, with boundary conditions of radiation type on the outside and inside surfaces with independent radiation constants.
[4580] vixra:1405.0151 [pdf]
Second Order Parallel Tensors on Generalized Sasakian Spaceforms
The object of the present paper is to study the symmetric and skew-symmetric properties of a second order parallel tensor in a generalized Sasakian space-form.
[4581] vixra:1405.0144 [pdf]
A Hybrid Iterative Scheme for a General System of Variational Inequalities Based on Mixed Nonlinear Mappings
The purpose of this paper is to study the strong convergence of a hybrid iterative scheme for finding a common element of the set of solutions of a general system of variational inequalities for an α-inverse-strongly monotone mapping and a relaxed (c,d)-cocoercive mapping, the set of solutions of a mixed equilibrium problem and the set of common fixed points of a finite family of nonexpansive mappings in a real Hilbert space. Using the demi-closedness principle for nonexpansive mappings, we prove that the iterative sequence converges strongly to a common element of these three sets under some control conditions. Our results extend recent results announced by many others.
[4582] vixra:1405.0135 [pdf]
Coprime Factorization of Singular Linear Systems: A Stein Matricial Equation Approach
In this work, immersed in the field of control theory, we are given a singular linear time-invariant dynamical system represented by Eẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t). We want to classify singular systems such that, by means of a feedback and an output injection, the transfer matrix of the system is polynomial. For that we analyze conditions for obtaining a coprime factorization of transfer matrices of singular linear systems defined over commutative rings R with unit element. The problem presented is related to the existence of solutions of a Stein matricial equation XE − NXA = Z.
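For a concrete sense of the Stein-type equation XE − NXA = Z, note that over the reals it can be solved numerically by vectorization, using the identity vec(PXQ) = (Qᵀ ⊗ P) vec(X) to turn it into an ordinary linear system in vec(X). The sketch below uses random matrices and is only a numerical illustration, not the ring-theoretic treatment of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
E, N, A, Z = (rng.standard_normal((n, n)) for _ in range(4))

I = np.eye(n)
# vec(X E) = (E^T ⊗ I) vec(X)  and  vec(N X A) = (A^T ⊗ N) vec(X),
# so XE − NXA = Z becomes (E^T ⊗ I − A^T ⊗ N) vec(X) = vec(Z).
K = np.kron(E.T, I) - np.kron(A.T, N)
x = np.linalg.solve(K, Z.ravel(order="F"))  # column-major vec
X = x.reshape((n, n), order="F")

residual = np.linalg.norm(X @ E - N @ X @ A - Z)
print(f"residual = {residual:.2e}")
```

For generic random data the n² × n² matrix K is invertible, so a unique solution exists; the existence questions of the paper concern exactly when this fails over more general rings.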
[4583] vixra:1405.0133 [pdf]
Numerical Solution of Fuzzy Differential Equations Under Generalized Differentiability by Modified Euler Method
In this paper, we interpret a fuzzy differential equation by using the strongly generalized differentiability concept. Utilizing the Generalized Characterization Theorem, we investigate the problem of finding a numerical approximation of solutions. The modified Euler approximation method is implemented and its error analysis, which guarantees pointwise convergence, is given. The method's applicability is illustrated by solving a linear first-order fuzzy differential equation.
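The crisp backbone of the scheme is the classical modified Euler (Heun) predictor-corrector; the fuzzy version applies it endpoint-wise under generalized differentiability. A minimal crisp sketch on y′ = −y, y(0) = 1, whose exact solution is e^(−t), and whose step size is an arbitrary choice:

```python
import math

def modified_euler(f, t0, y0, t_end, h):
    """Heun's method: Euler predictor followed by a trapezoidal corrector."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        pred = y + h * f(t, y)                        # Euler predictor
        y = y + 0.5 * h * (f(t, y) + f(t + h, pred))  # trapezoidal corrector
        t += h
    return y

f = lambda t, y: -y
approx = modified_euler(f, 0.0, 1.0, 1.0, 0.01)
print(abs(approx - math.exp(-1.0)) < 1e-4)  # second-order accurate: prints True
```

A fuzzy variant would run this same recursion on the lower and upper endpoints of each α-cut of the fuzzy initial value, with the case split dictated by generalized differentiability.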
[4584] vixra:1405.0023 [pdf]
On a Simpler, Much More General and Truly Marvellous Proof of Fermat's Last Theorem (II)
English mathematics Professor Sir Andrew John Wiles of the University of Cambridge finally and conclusively proved in 1995 Fermat's Last Theorem, which had for 358 years notoriously resisted all efforts to prove it. Sir Andrew Wiles's proof employs very advanced mathematical tools and methods that were not at all available in the known world during Fermat's days. Given that Fermat claimed to have had the `truly marvellous' proof, the fact that a proof only came after 358 years of repeated failures by many notable mathematicians, and that it came from mathematical tools and methods far ahead of Fermat's time, has led many to doubt that Fermat actually possessed the `truly marvellous' proof which he claimed to have had. In this short reading, via elementary arithmetic methods which make use of Pythagoras' theorem, we demonstrate conclusively that Fermat's Last Theorem actually yields to our efforts to prove it.
[4585] vixra:1405.0022 [pdf]
Crisis in Quantum Theory and Its Possible Resolution
It is argued that the main reason for the crisis in quantum theory is that nature, which is fundamentally discrete, is described by continuous mathematics. Moreover, no ultimate physical theory can be based on continuous mathematics because, as follows from Gödel's incompleteness theorems, any mathematics involving the set of all natural numbers has its own foundational problems which cannot be resolved. In the first part of the paper inconsistencies in the standard approach to quantum theory are discussed and the theory is reformulated such that it can be naturally generalized to a formulation based on discrete and finite mathematics. Then the cosmological acceleration and gravity can be treated simply as <i>kinematical</i> manifestations of de Sitter symmetry on the quantum level (<i>i.e., for describing those phenomena the notions of dark energy, space-time background and gravitational interaction are not needed</i>). In the second part of the paper the motivation, ideas and main results of a quantum theory over a Galois field (GFQT) are described. In contrast to standard quantum theory, GFQT is based on a solid mathematics and therefore can be treated as a candidate for the ultimate quantum theory. The presentation is non-technical and should be understandable by a wide audience of physicists and mathematicians.
[4586] vixra:1405.0020 [pdf]
Steer by Logic: Einstein's Challenge to Academic Physicists
FQXi 2014 asks, ‘How should humanity steer the future?' Recalling false obstacles to medical progress in humanity's recent past — eg, impeding Semmelweis (b.1818), McClintock (1902), Marshall (1951) — we reply, ‘Steer by Logic.' Then — with Logic in view and other scientific disciplines in mind — we amplify our answer via an online coaching-clinic/challenge based on Einstein's work. With the future mostly physical, this physics-based challenge shows how we best steer clear of false obstacles — unnecessary barriers that slow humanity's progress. Hoping to motivate others to participate, here's our position: we locate current peer-reviewed claims of ‘impossible' — like those from days of old — and we challenge them via refutations and experimental verifications. The case-study identifies an academic tradition replete with ‘impossibility-proofs' — with this bonus: many such ‘proofs' are challengeable via undergraduate maths and logic. So — at the core of this clinic/challenge; taking maths to be the best logic — we model each situation in agreed mathematical terms, then refute each obstacle in like terms. Of course, upon finding ‘impossibilities' that are contradicted by experiments, our next stride is easy: at least one step in such analyses must be false. So — applying old-fashioned commonsense; ie, experimentally verifiable Logic — we find that false step and correct it. With reputable experiments agreeing with our corrections, we thus negate the false obstacles. Graduates of the clinic can therefore more confidently engage in steering our common future: secure in the knowledge that old-fashioned commonsense — genuine Logic — steers well.
[4587] vixra:1405.0006 [pdf]
Unobservable Potentials to Explain a Quantum Eraser and a Delayed-Choice Experiment
We present a new explanation for a quantum eraser. The mathematical description of the traditional explanation needs quantum-superposition states. However, the phenomenon can be explained without quantum-superposition states by introducing unobservable potentials, which can be identified as an indefinite metric vector. In addition, a delayed-choice experiment can also be explained by the interference between the photons and unobservable potentials, which seems like an unreal long-range correlation beyond causality.
[4588] vixra:1405.0004 [pdf]
Cosmic Gravity
(Abstract translated from the Spanish original.) It is argued here that gravity is not a Riemannian variety of space, following patterns similar to those of the special theory of relativity as regards the distortion of time. It is shown that the Euclidean continuum is a mathematical entelechy, and therefore a curved surface has one and only one point in common with a certain plane tangent to that surface. The infinitesimal character disappears to give way to quanta of spacetime, which depend on the distance to the center of the attracting mass and on its magnitude. Regarding the gemini velocity, we perceive astronomical periods as distorted. It is explained how gravity depends on the set of all the stars that populate the Universe. It is argued that solar radiation is due to the expansion of the Universe, and the peculiarities of the Sun are also mentioned. It is explained that both local gravity and the trajectory of a photon are form-invariant and obey quanta of space and time, as a function of all the stars. It is stated that the time-dilation factor of special relativity is a consequence of the sets of stars, with both inertia and electric potential. Finally, the possibility is added of a Universe paired with ours but composed of antimatter, the ultimate cause of the passage of time being the flux density of the electric field. Appendix C gives the details of a time-dilation experiment carried out with a charged sphere.
[4589] vixra:1404.0476 [pdf]
Implementation of Quantum Computing Techniques on NMR Systems
This document is submitted as a partial requirement for the course Quantum Information and Computing, BITS Pilani. The phenomenon of NMR can be used to generate spin states of nuclei, and these can be used as qubits for computational purposes, the speciality being that an ensemble of molecules must be utilized. This allows for an exponential increase in the processing power of the computer. The document explains the setup, measurement and initialization of an NMR quantum computer.
[4590] vixra:1404.0463 [pdf]
A New Model of Gravitation
This article describes a new model of gravitation based on the idea of an interconnection between the gravitational interaction and the curvature of space (but only 3-dimensional space, without the "curvature" of time). It is proposed to consider space as consisting of unit cells with dimensions comparable to the size of elementary particles. Curvature of space is interpreted through the change in the relative volume of unit cells. In the gravitational field the curvature of space manifests as a decrease in the distension of space with increasing distance from the center of the attracting body. As an important complement to the kinematic relativistic effects of moving bodies, a new kinematic effect of longitudinal distension of comoving space is introduced. It is alleged that the gravitational interaction is manifested as a result of the kinematic effects of moving bodies following changes in the local distension of curved space. It is shown that extrapolating the fall of matter onto the center of the attracting body leads to the conclusion that a density limit exists, for which the matter density of a neutron star can be accepted.
[4591] vixra:1404.0445 [pdf]
MIBC and the Dirac Spin Effect in Torsion Gravity
The spin precession of a Dirac particle in monotonically increasingly boosted coordinates is calculated using torsion gravity (teleparallel theory of gravity). Also, we find the vector and the axial-vector parts of the torsion tensor.
[4592] vixra:1404.0443 [pdf]
On Clifford Space Relativity, Black Hole Entropy, Rainbow Metrics, Generalized Dispersion and Uncertainty Relations
An analysis of some of the applications of Clifford Space Relativity to the physics behind the modified black hole entropy-area relations, rainbow metrics, generalized dispersion and minimal length stringy uncertainty relations is presented.
[4593] vixra:1404.0441 [pdf]
The Action Function of Adiabatic Systems
It is shown that the action function of a macroscopic adiabatic system of particles, described as continuously differentiable functions of energy-momentum in space-time, exists, that it is a plane wave, and that this function can in turn be integrated to a 4-vector field which satisfies the Maxwell equations in the Lorentz gauge. It is also shown how to formulate these results in terms of functional analysis on Hilbert spaces. With this we show, among other things, that PCT = -CPT = ±1 holds, which is a strong form of the PCT theorem; we show that - in order to capture the concept of mass - the standard model gauge group has to be augmented by a factor group U(2), such that the complete gauge group becomes U(4). It is shown that the sourceless action field in itself suffices to describe the long-ranged interaction of matter, both electromagnetic and gravitational. This turns Einstein's conception of photons as real particles, and subsequently the concept of gravitons, into physically unproven assumptions which complicate, rather than simplify, the theory. The results appear to imply that the fields themselves do not interact with their sources. However, this has never been checked by an experiment. As shown, a simple experiment could be carried out to answer this question.
[4594] vixra:1404.0439 [pdf]
The Casimir Warp Drive: Is the Casimir Effect a Valid Candidate to Generate and Sustain a Natario Warp Drive Spacetime Bubble?
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. The major drawback concerning warp drives is the huge amount of negative energy needed to sustain the warp bubble. In order to perform an interstellar space travel to a "nearby" star at $20$ light-years away in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is $1.000.000.000.000.000.000.000.000$ times bigger in magnitude than the mass of the planet Earth. Although the energy conditions of General Relativity forbid the existence of negative energy, the Casimir Effect, first predicted by Casimir in $1948$ and verified experimentally by Lamoreaux in $1997$, allows sub-microscopical amounts of it. Lamoreaux obtained experimentally negative energy densities of $10^{-4} \frac{Joules}{Meter^{3}}$. This is an extremely small value, $10^{20}$ times lighter than that of a body of $1$ kilogram in a cubic meter of space, or better: $100.000.000.000.000.000.000$ times lighter than that of a body of $1$ kilogram in a cubic meter of space. We introduce in this work a shape function that lowers the negative energy density requirements in the Natario warp drive from $10^{48} \frac{Joules}{Meter^{3}}$ to $10^{-7} \frac{Joules}{Meter^{3}}$, a result $1000$ times lighter than the one obtained by Lamoreaux, proving that the Casimir Effect can generate and sustain a Natario warp drive spacetime. We also discuss other warp drive drawbacks: collisions with hazardous interstellar matter (asteroids or comets) that may happen in a real interstellar travel, and horizons (causally disconnected portions of spacetime).
[4595] vixra:1404.0400 [pdf]
Numerical Solution of Time-Dependent Gravitational Schrödinger Equation
In recent years, there have been attempts to describe quantization of planetary distance based on the time-independent gravitational Schrödinger equation, including Rubcic & Rubcic's method and also Nottale's Scale Relativity method. Nonetheless, there is no solution yet for the time-dependent gravitational Schrödinger equation (TDGSE). In the present paper, a numerical solution of the time-dependent gravitational Schrödinger equation is presented, apparently for the first time. This numerical solution leads to the gravitational Bohr radius, as expected. In the subsequent section, we also discuss a plausible extension of this gravitational Schrödinger equation to include the effect of phion condensate via the Gross-Pitaevskii equation, as described recently by Moffat. Alternatively, one can consider this condensate from the viewpoint of Bogoliubov-de Gennes theory, which can be approximated with a coupled time-independent gravitational Schrödinger equation. Further observation is of course recommended in order to refute or verify this proposition.
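The abstract does not reproduce the authors' scheme; as an illustration of how a time-dependent Schrödinger equation can be integrated numerically, here is a minimal Crank-Nicolson sketch in plain Python. The soft attractive potential, grid, and step sizes in the usage below are illustrative assumptions, not taken from the paper.

```python
# Minimal Crank-Nicolson sketch for a 1D time-dependent Schrodinger
# equation  i dpsi/dt = -(1/2) psi'' + V(x) psi   (hbar = m = 1),
# with Dirichlet walls.  Illustrative only, not the paper's scheme.

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, super-diagonals a, b, c."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def evolve(psi, V, dx, dt, steps):
    """Advance psi by `steps` Crank-Nicolson time steps."""
    n = len(psi)
    r = 1j * dt / (4 * dx * dx)            # off-diagonal weight
    for _ in range(steps):
        # Right-hand side: (1 - i dt H / 2) psi
        d = []
        for k in range(n):
            left = psi[k - 1] if k > 0 else 0.0
            right = psi[k + 1] if k < n - 1 else 0.0
            diag = 1 - 2 * r - 1j * dt * V[k] / 2
            d.append(diag * psi[k] + r * (left + right))
        # Left-hand side: (1 + i dt H / 2) psi_new
        a = [-r] * n
        b = [1 + 2 * r + 1j * dt * V[k] / 2 for k in range(n)]
        c = [-r] * n
        psi = thomas(a, b, c, d)
    return psi
```

Crank-Nicolson is unconditionally stable and unitary for a Hermitian Hamiltonian, so the discrete norm of psi is conserved to round-off, which is a useful sanity check on any such implementation.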
[4596] vixra:1404.0355 [pdf]
Schrödinger Equation and the Quantization of Celestial Systems
In the present article, we argue that it is possible to generalize the Schrödinger equation to describe quantization of celestial systems. While this hypothesis has been described by some authors, including Nottale, here we argue that such a macroquantization was formed by topological superfluid vortices. We also provide a derivation of the Schrödinger equation from the Gross-Pitaevskii-Ginzburg equation, which supports this superfluid dynamics interpretation.
[4597] vixra:1404.0354 [pdf]
Schrödinger-Langevin Equation with PT-Symmetric Periodic Potential and its Application to Deuteron Cluster
In this article, we find some analytical and numerical solutions to the problem of barrier tunneling for the deuterium cluster, in particular using the Langevin method to solve the time-independent Schrödinger equation.
[4598] vixra:1404.0295 [pdf]
A New Form of Matter—Unmatter, Composed of Particles and Anti-Particles
This article is an improved version of an old manuscript. It is a theoretical assumption about the possible existence of a new form of matter. To date, unmatter has not been observed in the laboratory.
[4599] vixra:1404.0256 [pdf]
Generalizations of the Distance and Dependent Function in Extenics to 2D, 3D, and N−D
Extension Theory (or Extenics) was developed by Professor Cai Wen in 1983 with the publication of a paper called Extension Set and Non-Compatible Problems. Its goal is to solve contradictory problems and to treat nonconventional, nontraditional ideas in many fields.
[4600] vixra:1404.0158 [pdf]
Declaration of Academic Freedom
The beginning of the 21st century reflects, more than any other time in history, the depth and importance of science and technology in human affairs.
[4601] vixra:1404.0140 [pdf]
An Exact Mapping from Navier-Stokes Equation to Schrödinger Equation via Riccati Equation
In the present article we argue that it is possible to write down a Schrödinger representation of the Navier-Stokes equation via the Riccati equation. The proposed approach, while it differs appreciably from other methods such as the one proposed by R. M. Kiehn, has an advantage, i.e. it enables us to extend further to quaternionic and biquaternionic versions of the Navier-Stokes equation, for instance via Kravchenko's and Gibbon's route. Further observation is of course recommended in order to refute or verify this proposition.
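For readers unfamiliar with the linearization such a mapping relies on, the standard substitution that turns a Riccati equation into a second-order linear (Schrödinger-type) equation can be sketched as follows; this is the textbook transformation, not necessarily the exact route taken in the paper:

```latex
\[
  y' = q_0(x) + q_1(x)\,y + q_2(x)\,y^2,
  \qquad y = -\frac{u'}{q_2\,u}
  \;\Longrightarrow\;
  u'' - \left( q_1 + \frac{q_2'}{q_2} \right) u' + q_0\,q_2\,u = 0 .
\]
```

For constant $q_2$ and $q_1 = 0$ this reduces to $u'' + q_0 q_2 u = 0$, a stationary Schrödinger-like form, which is why Riccati equations provide a bridge between nonlinear first-order flows and linear wave equations.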
[4602] vixra:1404.0129 [pdf]
Five Paradoxes and a General Question on Time Traveling
Traveling to the past: Joe40, who is 40 years old, travels 10 years back to the past, when he was 30 years old. He meets himself when he was 30 years old; let's call this Joe30. Joe40 kills Joe30. But if Joe died at age 30 (because Joe30 was killed), how could he live up to age 40?
[4603] vixra:1404.0098 [pdf]
A Simple and General Proof of Beal's Conjecture
Using the same method that we used in the paper http://vixra.org/abs/1309.0154 to prove Fermat's Last Theorem in a simpler and truly marvellous way, we demonstrate that Beal's Conjecture yields, in the simplest imaginable manner, to our effort to prove it.
[4604] vixra:1404.0095 [pdf]
Quantum Mechanics Plan B
Quantum mechanics works but is not yet well understood. The difficulty in understanding lies not so much in the already very sophisticated mathematical formulations, but is much more rooted in the question of how the objects do that. Starting with the double-slit experiment, we first provide the viewpoint of quantum mechanics. This leads us to the wave-particle duality. Photons have both wave and particle properties. Then we see the exact opposite: photons, electrons... are neither wave nor particle, and we address the question: Can we find truth from falsity? Can quantum mechanics be wrong in its premises and yet lead to such excellent results? We then address the question of how good the mathematical prerequisites are in classical physics and how well they are fulfilled in quantum physics. Finally, we look at some very basic experiments from the point of view of the objects.
[4605] vixra:1404.0076 [pdf]
Novel Consequences of a New Derivation of Maximum Force in Agreement with General Relativity's F_max = C^4/4G
Schiller has shown not only that a maximum force follows from General Relativity, he has also argued that General Relativity can be derived from the principle of maximum force. In the present paper an alternative derivation of maximum force is given. Inspired by the equivalence principle, the approach is based on a modification of the well known special relativity equation for the velocity acquired from uniform proper acceleration. Though in Schiller's derivation the existence of gravitational horizons plays a key role, in the present derivation this is not the case. In fact, though the kinematic equation that we start with does exhibit a horizon, it is not carried over to its gravitational counterpart. A few of the geometrical consequences and physical implications of this result are discussed.
[4606] vixra:1404.0072 [pdf]
On The Leibniz Rule And Fractional Derivative For Differentiable And Non-Differentiable Functions
In the recent paper {\it Communications in Nonlinear Science and Numerical Simulation. Vol.18. No.11. (2013) 2945-2948}, it was demonstrated that a violation of the Leibniz rule is a characteristic property of derivatives of non-integer orders. It was proved that all fractional derivatives ${\cal D}^{\alpha}$ which satisfy the Leibniz rule ${\cal D}^{\alpha}(fg)=({\cal D}^{\alpha}f) \, g + f \, ({\cal D}^{\alpha}g)$ should have the integer order $\alpha=1$, i.e. fractional derivatives of non-integer orders cannot satisfy the Leibniz rule. However, it should be noted that this result holds only for differentiable functions. We argue that the very reason for introducing the fractional derivative is to study non-differentiable functions. In this note, we try to clarify and summarize the Leibniz rule for both differentiable and non-differentiable functions. The Leibniz rule holds for differentiable functions with the classical integer-order derivative. Similarly, the Leibniz rule still holds for non-differentiable functions with a concise and essentially local definition of the fractional derivative. This could give a more unified picture and understanding of the Leibniz rule and the geometrical interpretation for both integer-order and fractional derivatives.
[4607] vixra:1404.0067 [pdf]
The Galactic Black Hole
Many galaxies have a concentration of mass at their center. In what follows, the mass is attributed to a neutral gas of electrons and positrons. It is found that electron degeneracy pressure supports the smaller masses against gravity. The larger masses are supported by ideal gas and radiation pressure. Physical properties are calculated for the range 450 to 45 billion solar masses. Keywords: model; supermassive black hole; active galaxy; quasar
[4608] vixra:1404.0032 [pdf]
Initiating a Hypothetical Molecular Upgrade to Iso-Electronium with Topological Deformation Order Parameters for Spontaneous Superfluidic Gauge Symmetry Breaking
In this preliminary work, we propose a hypothesis and initiate a step-by-step, systematic upgrade to the cutting-edge iso-electronium model by further equipping it with order parameters of fractional statistics to encode the topological deformations, spontaneous superfluidic gauge symmetry breaking, correlated helices with long range order, and wavepacket wavefunctions for the toroidal polarizations. For this initial case, we consider the singlet planar coupling of two hydrogen atoms that are interlocked with a Santilli-Shillady strong valence bond to form a molecule with iso-electronium. The enhancement results support our hypothesis and are significant because the order parameters arm the iso-electronium model with an extra degree of freedom to work with, which may allow us to further decode and comprehend the underlying physical mechanisms and features associated with the configuration of the toroidal polarizations. Thus, these outcomes should be subjected to additional rigorous scrutiny and improvement via the scientific method.
[4609] vixra:1404.0030 [pdf]
Studies on Vortex
We check that the relation between the angle and the radius of the movement of an object following a logarithmic spiral in SO(2) is constant.
[4610] vixra:1404.0021 [pdf]
General Relativistic Predictions are Incompatible with Solar Planetary Recessions
It is generally assumed that Einstein's General Theory of Relativity (GTR) is silent on the issue of planetary recession such as has been measured recently. In this short note, we demonstrate that the GTR is not silent on this matter; it does make clear predictions, albeit predictions that are contrary to experience, and for this task we use the same solution that has been used triumphantly to explain the perihelion precession of the planet Mercury. From a pure standpoint of binary logic, we expect this solution to stand up to all its predictions, both for the perihelion precession and for the expansion of orbits. At any rate imaginable, this apparent contradiction presents an interesting state of affairs for the GTR.
[4611] vixra:1403.0977 [pdf]
Solid Angle of the Off-Axis Circle Sector
The solid angle of a circular sector specified by circle radius, angle of the sector, and distance of the circle plane to the observer is calculated in terms of various trigonometric and cyclometric functions. This generalizes previous results for the full circle that have appeared in the literature.
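As a numerical companion to closed-form results of this kind, one can evaluate the same solid angle by direct midpoint-rule quadrature of $\cos\theta / r^2$ over the sector and check it against the classical on-axis full-circle value $\Omega = 2\pi\,(1 - d/\sqrt{d^2+R^2})$. The function below is an independent quadrature sketch with illustrative parameters, not the paper's trigonometric formulas.

```python
import math

def sector_solid_angle(R, alpha, d, x0=0.0, n=400):
    """Solid angle of a circle sector (radius R, opening angle alpha) lying
    in the plane z = d, with its apex offset by x0 from the observer's axis,
    as seen from the origin.  Midpoint-rule quadrature of cos(theta)/r^2 dA."""
    omega = 0.0
    dr = R / n
    dphi = alpha / n
    for i in range(n):
        rho = (i + 0.5) * dr
        for j in range(n):
            phi = (j + 0.5) * dphi
            x = x0 + rho * math.cos(phi)
            y = rho * math.sin(phi)
            r2 = x * x + y * y + d * d
            omega += d / r2 ** 1.5 * rho * dr * dphi
    return omega
```

For the on-axis full circle (x0 = 0, alpha = 2*pi) the quadrature reproduces the classical closed form to high accuracy, which makes it a convenient cross-check for any off-axis sector expression.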
[4612] vixra:1403.0950 [pdf]
Simplified Path Integral Approach to the Aharonov-Bohm Effect
In classical electrodynamics the vacuum is defined as a region where there are no electric or magnetic fields. In such a region, a charged particle (such as an electron) will feel no effect — the Lorentz force is zero. The space external to a perfect (i.e., infinite) solenoid can be considered an electromagnetic vacuum, since E and B vanish there. While a non-zero vector potential A does exist outside the solenoid, it can exert no influence on the particle, and thus cannot be directly detected or quantified classically. However, in 1959 Aharonov and Bohm predicted that a vector field would exert a purely quantum-mechanical effect on the phase of the particle’s wave function, which in principle should be detectable. The predicted phase shift was not observed experimentally until 1986, when Tonomura brilliantly verified the effect using a microscopic solenoid. This paper provides a simplified explanation of the Aharonov-Bohm effect using a path-integral approach that is suitable for the advanced undergraduate.
[4613] vixra:1403.0942 [pdf]
On Legendre’s Conjecture
Legendre’s conjecture, stated by Adrien-Marie Legendre (1752-1833), says there is always a prime between $n^2$ and $(n+1)^2$. This conjecture is part of Landau’s problems. In this paper a proof of this conjecture is presented, using a method of generating prime numbers between consecutive squares, and proving that for every pair of consecutive squares with n >= 3 at least one prime number can be generated that belongs to the interval $[n^2,(n+1)^2]$.
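The conjecture is easy to probe computationally. The following small sketch (illustrative, not from the paper) lists the primes strictly between consecutive squares, so the claim can be confirmed for small n:

```python
def is_prime(m):
    """Trial-division primality test; adequate for the small ranges used here."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    f = 3
    while f * f <= m:
        if m % f == 0:
            return False
        f += 2
    return True

def primes_between_squares(n):
    """Primes p with n^2 < p < (n+1)^2, the interval of Legendre's conjecture."""
    return [p for p in range(n * n + 1, (n + 1) ** 2) if is_prime(p)]

# primes_between_squares(3) -> [11, 13]
```

Legendre's conjecture asserts that the returned list is nonempty for every n >= 1; this has been verified numerically far beyond any range such a script will reach, but it remains unproven in general.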
[4614] vixra:1403.0927 [pdf]
The Dichotomous Cosmology with a Static Material World and Expanding Luminous World
The dichotomous cosmology is an alternative to the expanding Universe theory, and consists of a static matter Universe, where cosmological redshifts are explained by a tired-light model with an expanding luminous world. In this model the Hubble constant is also the photon energy decay rate, and the luminous world is expanding at a constant rate as in de Sitter cosmology for an empty Universe. The present model explains both the luminosity distance versus redshift relationship of supernovae Ia, and ageing of spectra observed with the stretching of supernovae light curves. Furthermore, it is consistent with a radiation energy density factor (1 + z)^4 inferred from the Cosmic Microwave Background Radiation.
[4615] vixra:1403.0628 [pdf]
On the Hybrid Mean Value of the Smarandache kn Digital Sequence with SL(n) Function and Divisor Function d(n)
The main purpose of this paper is using the elementary method to study the hybrid mean value properties of the Smarandache kn digital sequence with SL(n) function and divisor function d(n), then give two interesting asymptotic formulae for it.
[4616] vixra:1403.0580 [pdf]
Direct Processing of Run-Length Compressed Document Image for Segmentation and Characterization of a Specified Block
Extracting a block of interest, referred to as segmenting a specified block in an image, and studying its characteristics is of general research interest, and could be challenging if such a segmentation task has to be carried out directly in a compressed image. This is the objective of the present research work. The proposal is to evolve a method which would segment and extract a specified block, and carry out its characterization without decompressing a compressed image, for two major reasons: most image archives contain images in compressed format, and decompressing an image entails additional computing time and space. Specifically, in this research work the proposal is to work on run-length compressed document images.
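As a toy illustration of operating directly on run-length data (a generic sketch; the encoding convention and function names are assumptions, not the authors' method), the following extracts a rectangular block from run-length encoded binary rows without decompressing the whole image:

```python
def encode(bits):
    """Run-length encode one binary row; runs alternate 0/1, starting with 0."""
    runs, color, count = [], 0, 0
    for b in bits:
        if b == color:
            count += 1
        else:
            runs.append(count)
            color ^= 1
            count = 1
    runs.append(count)
    return runs

def decode(runs):
    """Expand a run-length row back to a list of bits."""
    bits, color = [], 0
    for run in runs:
        bits.extend([color] * run)
        color ^= 1
    return bits

def crop_rle_rows(rle_rows, r0, r1, c0, c1):
    """Extract rows r0..r1-1, columns c0..c1-1 directly from run-length rows.
    Runs are clipped to the column window; zero-length runs are kept so the
    0/1 alternation of the output stays consistent."""
    out = []
    for runs in rle_rows[r0:r1]:
        cropped, pos = [], 0
        for run in runs:
            lo, hi = max(pos, c0), min(pos + run, c1)
            cropped.append(max(0, hi - lo))
            pos += run
        out.append(cropped)
    return out
```

Only the runs overlapping the column window contribute, so the cost is proportional to the number of runs per row rather than to the pixel count, which is the basic advantage of compressed-domain processing.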
[4617] vixra:1403.0482 [pdf]
Signed Total Domatic Number of a Graph
In this paper, some properties related to the signed total domatic number and the signed total domination number of a graph are studied, and the signed total domatic number of certain classes of graphs, such as fans, wheels and generalized Petersen graphs, is found.
[4618] vixra:1403.0476 [pdf]
Smarandache V−Connected Spaces
In this paper, Smarandache V−connectedness and Smarandache local connectedness in topological spaces are introduced, some of their basic properties are obtained, and their interrelations with other types of connectedness are verified.
[4619] vixra:1403.0395 [pdf]
Smarandache U-Liberal Semigroup Structure
In this paper, the Smarandache U-liberal semigroup structure is given. It is shown that a semigroup S is a Smarandache U-liberal semigroup if and only if it is a strong semilattice of some rectangular monoids.
[4620] vixra:1403.0388 [pdf]
Gravitational Wave Experiments with Zener Diode Quantum Detectors: Fractal Dynamical Space
The discovery that the electron current fluctuations through Zener diode pn junctions in reverse bias mode, which arise via quantum barrier tunnelling, are completely driven by space fluctuations, has revolutionised the detection and characterisation of gravitational waves, which are space fluctuations, and also has revolutionised the interpretation of probabilities in the quantum theory. Here we report very simple and cheap table-top gravitational wave experiments using Zener diode detectors, and reveal the implications for the nature of space and time, for the quantum theory of "matter", and for the emergence of the "classical world" as space-induced wave function localisation. The dynamical space possesses an intrinsic inflation epoch.
[4621] vixra:1403.0387 [pdf]
Gravitational Wave Experiments with Zener Diode Quantum Detectors: Fractal Dynamical Space
The discovery that the electron current fluctuations through Zener diode $pn$ junctions in reverse bias mode, which arise via quantum barrier tunnelling, are completely driven by space fluctuations, has revolutionised the detection and characterisation of gravitational waves, which are space fluctuations, and also has revolutionised the interpretation of probabilities in the quantum theory. Here we report new data from the very simple and cheap table-top gravitational wave experiment using Zener diode detectors, and reveal the implications for the nature of space and time, for the quantum theory of "matter", and for the emergence of the "classical world" as space-induced wave function localisation. The dynamical space possesses an intrinsic inflation epoch with associated fractal turbulence: gravitational waves, perhaps as observed by the BICEP2 experiment in Antarctica.
[4622] vixra:1403.0367 [pdf]
A Note on Path Signed Digraphs
For standard terminology and notation in digraph theory, we refer the reader to the classic textbooks of Bondy and Murty [2] and Harary et al. [4]; the non-standard will be given in this paper as and when required.
[4623] vixra:1403.0313 [pdf]
"Emission & Regeneration" Unified Field Theory
The methodology of today's theoretical physics consists in introducing first all known forces by separate definitions independent of their origin, arriving then at quantum mechanics after postulating the particle's wave, followed by attempts to infer interactions of particles and fields by postulating the invariance of the wave equation under gauge transformations, allowing the addition of minimal substitutions. The origin of the limitations of our standard theoretical model is the assumption that the energy of a particle is concentrated in a small volume of space. The limitations are bridged by introducing artificial objects and constructions like gluons, W and Z bosons, gravitons, dark matter, dark energy, etc. The proposed approach models subatomic particles such as electrons and positrons as focal points in space where fundamental particles are continuously emitted and absorbed, fundamental particles where the energy of the electron or positron is stored as rotations defining longitudinal and transversal angular momenta (fields). Interaction laws between angular momenta of fundamental particles are postulated in such a way that the basic laws of physics (Coulomb, Ampere, Lorentz, Maxwell, gravitation, etc.) can be derived from the postulates. This methodology makes sure that the approach is in accordance with the basic laws of physics, in other words, with well-proven experimental data. Due to the dynamical description of the particles, the proposed approach does not have the limitations of the standard model and is not forced to introduce artificial objects or constructions. All forces are the product of electromagnetic interactions described by QED.
[4624] vixra:1403.0310 [pdf]
Operator Exponentials for the Clifford Fourier Transform on Multivector Fields
This paper briefly reviews the notion of Clifford's geometric algebras and vector to multivector functions; as well as the field of Clifford analysis (function theory of the Dirac operator). In Clifford Fourier transformations (CFT) on multivector signals the complex unit $i\in \mathbb{C}$ is replaced by a multivector square root of $-1$, which may be a pseudoscalar in the simplest case. For these transforms we derive, via a multivector function representation in terms of monogenic polynomials, the operator representation of the CFTs by exponentiating the Hamilton operator of a harmonic oscillator.
[4625] vixra:1403.0308 [pdf]
Reconstruction of Ghost Scalar Fields
In the literature, a large number of attempts have been made to reconstruct the potential and dynamics of scalar fields by establishing a connection between holographic/Ricci/new agegraphic/ghost energy density and a scalar field model of dark energy. In most of these attempts, the analytical form of the potentials in terms of the scalar field has not been reconstructed due to the complexity of the equations involved. In the present work, we establish a correspondence between ghost dark energy and the quintessence, tachyon and dilaton scalar field models in an anisotropic Bianchi type-I universe to reconstruct the dynamics of these scalar fields.
[4626] vixra:1403.0307 [pdf]
Dynamics of Light in Teleparallel Bianchi Type-I Universe
In the present study, using the Fourier analysis method and considering the Bianchi type-I spacetime, we investigate the dynamics of the photon in torsion gravity, and show that the free-space Maxwell equations give the same results. Furthermore, we also discuss the harmonic oscillator behavior of the solutions.
[4627] vixra:1403.0306 [pdf]
Thermodynamics of Chaplygin Gas Interacting with Cold Dark Matter
The main goal of the present work is to investigate the validity of the second law of gravitational thermodynamics in an expanding Gödel-type universe filled with generalized Chaplygin gas interacting with cold dark matter. By assuming the Universe to be a thermodynamical system bounded by the apparent horizon, and calculating separately the entropy variation for the generalized Chaplygin gas, the cold dark matter and the horizon itself, we obtain an expression for the time derivative of the total entropy. We conclude that the second law of gravitational thermodynamics is conditionally valid in the cosmological scenario where the generalized Chaplygin gas interacts with cold dark matter.
[4628] vixra:1403.0295 [pdf]
The Minimal Length Stringy Uncertainty Relations Follow from Clifford Space Relativity
We improve our earlier work in [14] and derive the minimal length string/membrane uncertainty relations by imposing momentum slices in flat Clifford spaces. The Jacobi identities associated with the modified Weyl-Heisenberg algebra require noncommuting spacetime coordinates, but commuting momenta, and which is compatible with the notion of curved momentum space. Clifford Phase Space Relativity requires the introduction of a maximal scale which can be identified with the Hubble scale and is a consequence of Born's Reciprocal Relativity Principle.
[4629] vixra:1403.0287 [pdf]
Anisotropy in Stellar Plasma Doppler Profile Disproves Cosmic Inflation
I am going to prove in this brief paper that there is actually no so-called "cosmic inflation", because that so-called "cosmic inflation" is just an artifact of an incorrect theoretical model of the Doppler effect for electromagnetic waves from remote sources.
[4630] vixra:1403.0278 [pdf]
Noncommutative Ricci Curvature and Dirac Operator on Bq[SU2] at the Fourth Root of Unity
We calculate the torsion free spin connection on the quantum group Bq[SU2] at the fourth root of unity. From this we deduce the covariant derivative and the Riemann curvature. Next we compute the Dirac operator of this quantum group and we give numerical approximations of its eigenvalues.
[4631] vixra:1403.0277 [pdf]
Noncommutative Geometry on D6
We study the noncommutative geometry of the dihedral group D6 using the tools of quantum group theory. We make explicit the torsion free regular spin connection and the corresponding 'Levi-Civita' connection. Next, we find the Riemann curvature and its Ricci tensor. The main result is the Dirac operator of a representation of the group, for which we find the eigenvalues and the eigenmodes.
[4632] vixra:1403.0276 [pdf]
Noncommutative Geometry on U(sb2)
We study the Borel algebra defined by $[x_a, x_b] = 2\delta_{a,1}x_b$ as a noncommutative manifold $R^3$. We calculate its noncommutative differential form relations. We deduce its partial derivative relations and the derivative of a plane wave. After calculating its de Rham cohomology, we deduce the wave operator and its corresponding magnetic solution.
[4633] vixra:1403.0263 [pdf]
Efficient Computation of Clebsch-Gordan Coefficients
The problem of angular momentum addition requires the calculation of Clebsch-Gordan coefficients. While systems involving small values of momenta and spin present no special problem, larger systems require extensive computational effort. This paper describes a straightforward method for computing the coefficients for any two-particle problem exactly by means of a simplified form of the recursion formula in a notation that is particularly accessible to the third- or fourth-year student. The method is summarized in a brief BASIC program.
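The recursion-based construction mentioned in the abstract can be illustrated (in Python rather than BASIC, and restricted for brevity to the stretched multiplet J = j1 + j2; this is a generic sketch, not the paper's program) by repeatedly applying the total lowering operator to the top state:

```python
import math

def lowering_coeff(j, m):
    """<j, m-1| J- |j, m> = sqrt(j(j+1) - m(m-1))."""
    return math.sqrt(j * (j + 1) - m * (m - 1))

def stretched_cg(j1, j2):
    """Clebsch-Gordan coefficients <j1 m1; j2 m2 | J M> for the stretched
    multiplet J = j1 + j2 only, built by applying the total lowering
    operator J- = J1- + J2- to the top state |J, J> = |j1 j1>|j2 j2>.
    Returns {M: {(m1, m2): coefficient}}."""
    J = j1 + j2
    states = {J: {(j1, j2): 1.0}}
    M = J
    while M > -J:
        new = {}
        for (m1, m2), c in states[M].items():
            if m1 > -j1:      # J1- lowers the first particle
                new[(m1 - 1, m2)] = new.get((m1 - 1, m2), 0.0) \
                    + c * lowering_coeff(j1, m1)
            if m2 > -j2:      # J2- lowers the second particle
                new[(m1, m2 - 1)] = new.get((m1, m2 - 1), 0.0) \
                    + c * lowering_coeff(j2, m2)
        norm = lowering_coeff(J, M)          # J- acting on |J, M>
        states[M - 1] = {k: v / norm for k, v in new.items()}
        M -= 1
    return states
```

For two spin-1/2 particles this reproduces the familiar triplet coefficients, e.g. <1/2 1/2; 1/2 -1/2 | 1 0> = 1/sqrt(2). The remaining multiplets J < j1 + j2 would be obtained by orthogonalization at each M, which the recursion formula discussed in the paper handles in general.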
[4634] vixra:1403.0252 [pdf]
Euler-Savary's Formula for the Planar Curves in Two Dimensional Lightlike Cone
In this paper, we study the Euler-Savary formula for planar curves in the lightlike cone. We first define the associated curve of a curve in the two-dimensional lightlike cone $Q^2$. Then we give the relation between the curvatures of a base curve, a rolling curve and a roulette which lie on the two-dimensional lightlike cone $Q^2$.
[4635] vixra:1403.0234 [pdf]
A New Additive Function and the Smarandache Divisor Product Sequences
For any positive integer n, we define the arithmetical function G(n) with G(1) = 0. The main purpose of this paper is using the elementary method and the prime distribution theory to study the mean value properties of G(n) in the Smarandache divisor product sequences {pd(n)} and {qd(n)}, and give two sharper asymptotic formulae for them.
[4636] vixra:1403.0209 [pdf]
The Fulfilled Euclidean Plane
The fulfilled Euclidean plane is the real projective plane completed with the infinite point of its infinite line, denoted c. This new incidence structure is a structure with neighbouring elements, in which the unicity of the line through two distinct points is not assured. This new geometry is a Smarandacheian structure introduced in [10] and [11], which generalizes and unites at the same time the Euclid, Bolyai-Lobachevsky-Gauss and Riemann geometries.
[4637] vixra:1403.0186 [pdf]
On the Smarandache Function and the Divisor Product Sequences
Let n be any positive integer, Pd(n) denotes the product of all positive divisors of n. The main purpose of this paper is using the elementary and analytic methods to study the mean value properties of a new arithmetical function S (Pd(n)), and give an interesting asymptotic formula for it.
[4638] vixra:1403.0182 [pdf]
Anthropic Principle, Cosmomicrophysics and Biosphere
It is shown that the Earth biosphere's main parameters, such as the total mass of living matter and the DNA and peptide lengths, are connected with the observed Universe parameters. In particular, the length of the integral genome DNA of the Earth biosphere is equal to the Hubble radius. The consideration of humanity as a special part of the biosphere matter allows us to estimate the limiting mass and extremal number of humans; the obtained results coincide with those calculated by demography specialists. The obtained relations are explained by the author's conception of the synergetically evolving Universe, where the nonequilibrium structures, including the living ones, are created under the action of the observed energy forms as well as of the dark energy. The connection between the biosphere and Universe parameters allows us to extend the Anthropic principle notion as well as the observer conception, because the biosphere is an adaptive tracing system which adjusts itself during the Universe evolution to be one unit with it. Keywords: biosphere, cosmological parameters, Hubble radius, DNA length, humanity mass, Earth biosphere mass, Universe mass, P. Dirac Big Numbers, fine structure constant, synergetical Universe, number of biospheres in the Universe.
[4639] vixra:1403.0165 [pdf]
A Note on Smarandache BL-Algebras
Using some new characterizations of ideals in BL-algebras, we revisit the paper of A. Borumand et al. [1], recently published in this Journal. Using the concept of the MV-center of a BL-algebra, we give a very simple characterization of Smarandache BL-algebras. We also restate some of the results and provide much simpler proofs. Among other things, we notice that Theorem 3.17 and Theorem 3.18 of [1] are not true, and they affect a good portion of the paper. Since Definition 3.19, Examples 3.20, 3.21, Theorem 3.22, Remark 3.23 and Remark 3.24 are based on a wrong theorem, they are completely irrelevant.
[4640] vixra:1403.0127 [pdf]
A Note on the q-Analogue of Sándor's Functions
The additive analogues of the Pseudo-Smarandache and Smarandache-simple functions and their duals have been recently studied by J. Sándor. In this note, we obtain q-analogues of Sándor's theorems.
[4641] vixra:1403.0126 [pdf]
Pseudo-Manifold Geometries with Applications
A Smarandache geometry is a geometry which has at least one Smarandachely denied axiom (1969), i.e., an axiom that behaves in at least two different ways within the same space, i.e., validated and invalidated, or only invalidated but in multiple distinct ways, and a Smarandache n-manifold is an n-manifold that supports a Smarandache geometry. Iseri provided a construction for Smarandache 2-manifolds by equilateral triangular disks on a plane, and a more general way for Smarandache 2-manifolds on surfaces, called map geometries, was presented by the author in [9]−[10] and [12]. However, few observations for cases of n ≥ 3 are found in the journals. As a kind of Smarandache geometries, a general way for constructing n-dimensional pseudo-manifolds is presented for any integer n ≥ 2 in this paper. Connections and principal fiber bundles are also defined on these manifolds. Following these constructions, nearly all existent geometries, such as those of Euclid geometry, Lobachevsky-Bolyai geometry, Riemann geometry, Weyl geometry, Kähler geometry and Finsler geometry, etc., are their sub-geometries.
[4642] vixra:1403.0125 [pdf]
On the Universality of Some Smarandache Loops of Bol-Moufang Type
A Smarandache quasigroup (loop) is shown to be universal if all its f, g-principal isotopes are Smarandache f, g-principal isotopes. Also, weak Smarandache loops of Bol-Moufang type, such as Smarandache left (right) Bol, Moufang and extra loops, are shown to be universal if all their f, g-principal isotopes are Smarandache f, g-principal isotopes. Conversely, it is shown that if these weak Smarandache loops of Bol-Moufang type are universal, then some autotopisms are true in the weak Smarandache sub-loops of the weak Smarandache loops of Bol-Moufang type relative to some Smarandache elements. Furthermore, a Smarandache left (right) inverse property loop in which all its f, g-principal isotopes are Smarandache f, g-principal isotopes is shown to be universal if and only if it is a Smarandache left (right) Bol loop in which all its f, g-principal isotopes are Smarandache f, g-principal isotopes. Also, it is established that a Smarandache inverse property loop in which all its f, g-principal isotopes are Smarandache f, g-principal isotopes is universal if and only if it is a Smarandache Moufang loop in which all its f, g-principal isotopes are Smarandache f, g-principal isotopes. Hence, some of the autotopisms mentioned earlier are found to be true in the Smarandache sub-loops of universal Smarandache left (right) inverse property loops and inverse property loops.
[4643] vixra:1403.0123 [pdf]
A Multi-Space Model for Chinese Bids Evaluation with Analyzing
A tendering is a negotiating process for a contract in which a tenderer issues an invitation, bidders submit bidding documents, and the tenderer accepts a bid by sending out a notification of award. As a useful way of purchasing, there are many norms and rules for it in the purchasing guides of the World Bank, the Asian Development Bank, etc., as well as in the contract conditions of various consultant associations. In China, there is a law and regulation system for tendering and bidding. However, few works on the mathematical modeling of a tendering and its evaluation can be found in publication. The main purpose of this paper is to construct a Smarandache multi-space model for a tendering, establish an evaluation system for bidding based on the ideas in references [7] and [8], and analyze its solution by applying the decision approach for multiple objectives and value engineering. Open problems for pseudo-multi-spaces are also presented in the final section.
[4644] vixra:1403.0118 [pdf]
Confinement of Charge Creation and Annihilation Centers by Nakanishi-Lautrup Field
The electromagnetic field model including the Nakanishi-Lautrup (NL) field of quantum electrodynamics (QED) can easily treat the creation and annihilation of positive and negative charge pairs, although it is difficult for Maxwell's equations to treat them. However, the model does not directly satisfy the charge conservation equation and permits single-charge creation and annihilation. It is shown that the potential energy of the NL field for a pair of charge creation and annihilation centers is proportional to their distance. This causes the confinement of charge creation and annihilation centers, which implies charge conservation for this model. Quark confinement might also be explained by the energy of the NL field.
[4645] vixra:1403.0107 [pdf]
Palindromic Permutations and Generalized Smarandache Palindromic Permutations
The ideas of left (right) palindromic permutations (LPPs, RPPs) and left (right) generalized Smarandache palindromic permutations (LGSPPs, RGSPPs) are introduced in the symmetric groups Sn of degree n. It is shown that in Sn there exist an LPP and an RPP, and they are unique (this fact is demonstrated using S2 and S3). The dihedral group Dn is shown to be generated by an RGSPP and an LGSPP (this is observed to be true in S3), but the geometric interpretations of an RGSPP and an LGSPP are found not to be rotation and reflection respectively. In S3, each permutation is at least an RGSPP or an LGSPP. There are 4 RGSPPs and 4 LGSPPs in S3, while 2 permutations are both RGSPPs and LGSPPs. A permutation in Sn is shown to be an LPP or RPP (LGSPP or RGSPP) if and only if its inverse is an LPP or RPP (LGSPP or RGSPP) respectively. Problems for future studies are raised.
[4646] vixra:1403.0089 [pdf]
Commonsense Local Realism Refutes Bell's Theorem
With Bell (1964) and his EPR-based mathematics contradicted by experiments, at least one step in his supposedly commonsense theorem must be false. Defining commonsense local realism as the fusion of local-causality (no causal influence propagates superluminally) and physical-realism (some physical properties change interactively), we eliminate all such contradictions and make EPR correlations intelligible by completing the quantum mechanical account in a classical way. Thus refuting the famous inequality at the heart of Bell's mathematics, we show that Bell's theorem is limited by Bell's use of naive realism. Validating the classical mantra that correlated tests on correlated things produce correlated results without mystery, we conclude that Bell's theorem and related experiments negate naive realism, not commonsense local realism.
[4647] vixra:1403.0079 [pdf]
A Monte Carlo Simulation Framework for Testing Cosmological Models
We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.
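The bucketing step this abstract describes can be sketched in a few lines. This is a toy illustration using a uniform mock redshift sample and equal-width redshift buckets; the function names and parameters are our own, and the paper's actual pipeline additionally converts redshifts to light-travel times and corrects for foreground obstruction.

```python
import random

# Toy sketch: bin galaxy redshifts into equal-width buckets and report
# a density curve (galaxies per unit redshift). Illustrative only; not
# the zCosmos sampling method or the paper's cosmological conversion.
def density_curve(redshifts, n_buckets=10, z_max=1.0):
    counts = [0] * n_buckets
    for z in redshifts:
        if 0.0 <= z < z_max:
            counts[int(z / z_max * n_buckets)] += 1
    width = z_max / n_buckets
    return [c / width for c in counts]

random.seed(0)
mock_survey = [random.random() for _ in range(10000)]  # uniform mock redshifts
curve = density_curve(mock_survey)
```

Comparing such a curve for the survey against curves simulated under each candidate cosmology is the essence of the assessment described above.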
[4648] vixra:1403.0075 [pdf]
A Test of Financial Time-Series Data to Discriminate Among Lognormal, Gaussian and Square-Root Random Walks
This paper aims to offer a testing framework for the structural properties of the Brownian motion of the underlying stochastic process of a time series. In particular, the test can be applied to financial time-series data and discriminate among the lognormal random walk used in the Black-Scholes-Merton model, the Gaussian random walk used in the Ornstein-Uhlenbeck stochastic process, and the square-root random walk used in the Cox, Ingersoll and Ross process. Alpha-level hypothesis testing is provided. This testing framework is helpful for selecting the best stochastic processes for pricing contingent claims and risk management.
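The three candidate processes differ in how the size of an increment scales with the level of the series: dX ∝ X·ε for the lognormal walk, dX ∝ ε for the Gaussian walk, and dX ∝ √X·ε for the square-root walk. A crude way to see the distinction in simulated data (our own simplification, not the paper's test statistic or its alpha-level procedure) is a log-log regression of |ΔX| on the level X:

```python
import math, random

# Simulate dX = sigma * X**gamma * eps with gamma = 1 (lognormal-type),
# 0 (Gaussian-type) or 0.5 (square-root/CIR-type) increments. All
# parameters are illustrative.
def simulate(gamma, n=20000, x0=100.0, sigma=0.01, seed=1):
    random.seed(seed)
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + sigma * xs[-1] ** gamma * random.gauss(0.0, 1.0))
    return xs

# Ordinary least squares of log|dX| on log X recovers gamma as the slope.
def estimate_gamma(xs):
    pts = [(math.log(xs[i]), math.log(abs(xs[i + 1] - xs[i])))
           for i in range(len(xs) - 1) if xs[i + 1] != xs[i]]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))
```

On a simulated lognormal-type path the estimated exponent comes out near 1, clearly above the estimate for a square-root path, which is the kind of structural separation the paper's hypothesis test formalizes.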
[4649] vixra:1403.0074 [pdf]
A Finite-Difference Model for the Thermal History of the Earth
The present study is an investigation of the thermal history of the earth using heat transfer modeling. Assuming that the earth was a hot ball at a homogeneous temperature upon its formation, the model makes the following two predictions about conditions 4.5 Ga later (the earth's approximate present age): (i) there will be a geothermal gradient within a range of 1.5-5.0 °C per 100 meters in the first km of the earth's crust; and (ii) the earth's crust will be about 45 km thick, which is in agreement with average continental crust thickness. The fact that oceanic crust is much thinner (around 5-10 km thick) is explained by convective heat transfer and plate tectonics. The strong agreement between the predicted thickness of the earth's crust and the average actual continental crust thickness helps confirm the accuracy of the current inner-core model of the earth, which indicates a solid inner core made of iron based on seismological studies.
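As a minimal sketch of what a finite-difference thermal model of this kind looks like, here is an explicit 1D conduction scheme for a slab that starts hot and cools through one face. The geometry, units and parameters are invented for the illustration and are not the paper's model of the earth:

```python
# Explicit finite-difference scheme for the 1D heat equation u_t = alpha*u_xx.
# The inner boundary (index 0) is held at the initial temperature as a crude
# hot-core condition; the outer boundary (index -1) is held cold. Toy values.
def cool(n=50, steps=2000, alpha=1.0, dx=1.0, t_hot=1000.0, t_cold=0.0):
    dt = 0.4 * dx * dx / alpha  # below the stability limit dt <= dx^2/(2*alpha)
    u = [t_hot] * n
    u[-1] = t_cold
    for _ in range(steps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = u[i] + alpha * dt / dx ** 2 * (u[i - 1] - 2 * u[i] + u[i + 1])
        u = new
    return u

profile = cool()  # temperature profile after the given number of time steps
```

A crust thickness could then be read off as the depth at which such a profile crosses a solidus temperature; the paper's actual model is of course far more detailed.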
[4650] vixra:1403.0052 [pdf]
The zeros of Riemann's Function And Its Fundamental Role In Quantum Mechanics
This paper presents a proof of the fundamental connection between the zeros of the Riemann function and quantum mechanics. Two results unify gravity and electromagnetism by exactly calculating both the elementary electric charge and the mass of the electron. These two results depend directly on the sum of the imaginary parts of the zeros of the Riemann function, exactly following the Hilbert-Polya conjecture. This summation is the exponential sum of all the negative values of the imaginary parts of all zeros of the Riemann function. The main consequences are: at scales of the Planck length, gravity becomes repulsive through the interaction of the gravitinos. Special relativity is a special case of a generalization in which the geometry of a wormhole (hyperbolic geometry) implies that the energy of the tachyon states is zero only if the velocity at the outer surface of the wormhole is infinite, or, what is the same, an observer at rest cannot distinguish an infinite speed from zero velocity; the two are equivalent. This is not mere speculation, since only under this assumption can the mass of the electron be calculated as a function of the non-trivial zeros of the Riemann zeta function. Time ceases to exist and takes the value zero. These wormholes would explain quantum entanglement, as well as resolve the paradox of information loss in black holes. The fundamental constants used in this calculation are: the elementary charge, Newton's gravitational constant, the Planck mass, the mass of the electron, and the fine structure constant at zero momentum.
[4651] vixra:1403.0047 [pdf]
Implications of Pseudo-Hermiticity on Quantum Information Through Dekker Formalism
Hamiltonian mechanics works for conserved systems, and quantum mechanics is given in Hamiltonian language. In papers by Dekker and, recently, by A. Sergi, this problem was circumvented by complexifying the energy and interpreting the dissipative part as its imaginary part. Based on the Dekker model, the following interpretation is presented in density-operator language for pseudo-Hermiticity. Proper forms of quantum measures are provided, as needed in this new formalism, correcting Pati, and Zielinski and Wang.
[4652] vixra:1403.0043 [pdf]
Why a Minimal Length Follows from the Extended Relativity Principle in Clifford Spaces
Recently, novel physical consequences of the Extended Relativity Theory in $C$-spaces (Clifford spaces) were explored and which provided a very different physical explanation of the phenomenon of ``relativity of locality" than the one described by the Doubly Special Relativity (DSR) framework. An elegant $nonlinear$ momentum-addition law was derived that tackled the ``soccer-ball'' problem in DSR. Generalized photon dispersion relations allowed also for energy-dependent speeds of propagation while still $retaining$ the Lorentz symmetry in ordinary spacetimes, but breaking the $extended$ Lorentz symmetry in $C$-spaces. This does $not$ occur in DSR nor in other approaches, like the presence of quantum spacetime foam. In this work we show why a $minimal$ length (say the Planck scale) follows naturally from the Extended Relativity principle in Clifford Spaces. Our argument relies entirely on the Physics behind the extended notion of Lorentz transformations in $ C$-space, and $does ~not$ invoke quantum gravity arguments, nor quantum group deformations of Lorentz/Poincare algebras, nor other prior arguments displayed in the Physics literature. The Extended Relativity Theory in Clifford $Phase$ Spaces requires also the introduction of a $maximal$ scale which can be identified with the Hubble scale. It is found also that $ C$-space physics favors a choice of signature $ ( -, +, +, .... , + ) $.
[4653] vixra:1403.0016 [pdf]
Introduction to the Expanded Rishon Model
We introduce an expansion of the Rishon Model to cover quark generations (including a previously unnoticed one), provide an explanation for T and V as a topologically convenient moniker representing aspects of phase and polarity within knots (of, for example, String Theory), and explain particle decay in terms of simple ``phase transform'' rules. We identify all current particles (with the exception of ``Top''), including the gluon, the Bosons and the Higgs, purely in terms of the underlying mechanism, which topologically can be considered to be Rishons. All this is predicated on the simple assumption that all particles are in effect photons phase-locked in a repeating pattern inherently obeying Maxwell's equations, in symbiotic support of their own outwardly-propagating electromagnetic synchrotronic radiation, and that Rishons represent a phase ``measurement'' (real or imaginary) at key strategic points on the photon's path.
[4654] vixra:1403.0015 [pdf]
Proof of Massless Particles Only Having Two Helicity One-particle States
Why massless particles, for example photons, can only have two helicity one-particle states is the main subject of this work. As we know, the little group which describes massive-particle one-particle states' transformations under the Lorentz transformation is SO(3), while the little group describing massless states is ISO(2). In this paper, a new method is proposed to contract the SO(3) group to the ISO(2) group. We use this contraction method to prove that the particle can only have two helicity one-particle states from the perspective of \emph{kinematics}, when the particle mass tends to zero. Our proof is different from the dynamic explanation in the existing theories.
[4655] vixra:1402.0164 [pdf]
Divergence-Free Versus Cutoff Quantum Field Theory
We review the fundamental rules for constructing the regular and the gauge-invariant quantum field action both in the divergence-free approach and in the cutoff approach. Loop computations in quantum electrodynamics of fermionic spinor matter, and also in quantum gravity of fermionic spinor matter, are presented in both approaches. We explain how the results of the divergence-free method correspond to those of the cutoff method. We argue that in a fundamental theory that contains quantum gravity, the cutoff framework might be necessary, whereby the cutoff parameter and the gravitational coupling could be related to each other quite consistently.
[4656] vixra:1402.0151 [pdf]
Some Observations on Schrödinger's Affine Connection
In a series of papers written over the period 1944-1948, the great Austrian physicist Erwin Schrödinger presented his ideas on symmetric and non-symmetric affine connections and their possible application to general relativity. Several of these ideas were subsequently presented in his notable 1950 book "Space-Time Structure," in which Schrödinger outlined the case for both metric and general connections, symmetric and otherwise. In the following discussion we focus on one particular connection presented by Schrödinger in that book and its relationship with the non-metricity tensor. We also discuss how this connection overcomes a problem that Hermann Weyl experienced with the connection he proposed in his failed 1918 theory of the combined gravitational-electromagnetic field. A simple physical argument is then presented demonstrating that Schrödinger's formalism accommodates electromagnetism in a more natural way than Weyl's theory.
[4657] vixra:1402.0144 [pdf]
Fundamental Unification Theory with the Electron, the Neutrino, and Their Antiparticles
An SU3 unification theory with the electron, the positron, and the neutrino is reviewed. A 10-spacetime gravidynamic unification of the internal charges and the spin is formulated, with a 16-component Majorana-Weyl fermion that consolidates the foregoing three Weyl particles with the antineutrino, and their Dirac conjugates. Vector bosons and scalar (Higgs) particles are consolidated in an antisymmetric tensor of 3rd rank, being the only tensor, apart from the graviton of 10-spacetime, that can couple to the unifying fermion. We write the Lorentz algebra of 10-spacetime in terms of the 4-spacetime Lorentz algebra and the internal O6 factor, the latter expressed via its U3 subalgebra, and construct the pertinent operator representations. We exhibit the complete structure of the unified gauge-Higgs couplings, indicating the source terms of particle masses. On the basis of this simple unification model, we propose the radical idea that all observed bosonic and fermionic particles, whether leptonic or hadronic, may be composed of just the underlying four fundamental fermions.
[4658] vixra:1402.0135 [pdf]
Estimation of Drainable Storage: A Geomorphological Approach
The storage of water within a drainage basin is often estimated indirectly by analyzing recession flow curves, as it cannot be estimated directly with the aid of available technologies. However, two major problems with recession analysis are: (i) late recession flows, particularly for large basins, are usually not observed; and (ii) early recession flows indicate that initial storage is infinite, which is not realistic. We address this issue by using the recently proposed geomorphological recession flow model (GRFM), which suggests that the storage-discharge relationship for a recession event is exponential for the early recession phase and power-law for the late recession phase, the two being distinguished from one another by a sharp transition. We then obtain a simple expression for the 'drainable' storage within a basin in terms of early recession curve characteristics and basin geomorphology. The predicted storage matches well with the observed storage (R^2 = 0.96), indicating the possibility of reliably estimating storage in river basins for various practical purposes.
[4659] vixra:1402.0116 [pdf]
Finite Mathematics, Finite Quantum Theory and Applications to Gravity and Particle Theory
We argue that the main reason of crisis in quantum theory is that nature, which is fundamentally discrete and even finite, is described by classical mathematics involving the notions of infinitely small, continuity etc. Moreover, since classical mathematics has its own foundational problems which cannot be resolved (as follows, in particular, from G\"{o}del's incompleteness theorems), the ultimate physical theory cannot be based on that mathematics. In the first part of the work we discuss inconsistencies in standard quantum theory and reformulate the theory such that it can be naturally generalized to a formulation based on finite mathematics. It is shown that: a) as a consequence of inconsistent definition of standard position operator, predictions of the theory contradict the data on observations of stars; b) the cosmological acceleration and gravity can be treated simply as {\it kinematical} manifestations of quantum de Sitter symmetry, {\it i.e. the cosmological constant problem does not exist, and for describing those phenomena the notions of dark energy, space-time background and gravitational interaction are not needed}. In the second part we first prove that classical mathematics is a special degenerate case of finite mathematics in the formal limit when the characteristic $p$ of the field or ring in the latter goes to infinity. {\bf This implies that mathematics describing nature at the most fundamental level involves only a finite number of numbers while the notions of limit and infinitely small/large and the notions constructed from them (e.g. continuity, derivative and integral) are needed only in calculations describing nature approximately}. In a quantum theory based on finite mathematics, the de Sitter gravitational constant depends on $p$ and disappears in the formal limit $p\to\infty$, i.e. gravity is a consequence of finiteness of nature. 
The application to particle theory gives that the notion of a particle and its antiparticle is only approximate and, as a consequence: a) the electric charge and the baryon and lepton quantum numbers can be only approximately conserved; b) particles which in standard theory are treated as neutral (i.e. coinciding with their antiparticles) cannot be elementary. We argue that only Dirac singletons can be true elementary particles and discuss a conjecture that classical time $t$ manifests itself as a consequence of the fact that $p$ changes, i.e. $p$ and not $t$ is the true evolution parameter.
[4660] vixra:1402.0110 [pdf]
Subconstituents of the Standard Model Particles and the Nambu--Jona-Lasinio Model
I propose a simple model for quarks and leptons in order to analyze what could be the building blocks of the Standard Model of particles. I start with the least number of elementary fields and generate using the Nambu--Jona-Lasinio model light masses for the subconstituent fermions. The NJL coupling constant turns out to be of the order of the gravitational coupling constant.
[4661] vixra:1402.0106 [pdf]
Relativity Without Time Dilation and Length Contraction
Special relativity as derived by Einstein presents time and space distortions and paradoxes. This paper presents an approach where the Lorentz transformations are built on equations with speed variables instead of the space and time variables used by Einstein. The results are transformation rules between inertial frames that are free of time dilation and length contraction for all relativistic speeds. Particles move according to Galilei relativity, and the transformed speeds (virtual speeds) describe the non-linearity of the physical magnitudes relative to the Galilei speeds. All the transformation equations already existing for the electric and magnetic fields, deduced on the basis of the invariance of the Maxwell wave equations, are still valid. The present work shows the importance of including the characteristics of the measuring equipment in the chain of physical interactions to avoid unnatural conclusions like time dilation and length contraction.
[4662] vixra:1402.0085 [pdf]
Graphs and Expressions for Higher-Loop Effective Quantum Action
We present the Feynman graphs and the corresponding expressions, up to 4th loop order, for a generic effective quantum field theory action. Whereas there are 2 graphs in the 2-loop order, and 8 graphs in the 3-loop order, we obtain 43 irreducible graphs in the 4-loop order. These results are obtained using Mathematica programming, where the underlying code is capable of generating graphs and expressions to any desired loop order. We explain the associated programming strategy.
[4663] vixra:1402.0081 [pdf]
Are Galaxies Structured by Riccati Equation? The First Graph of Rational Bar
A mother, a father, and their daughter were taking a picture. They were 5, 7, and 2 feet tall respectively. The parents stood in a row, and their daughter stood in front of her mother. My son saw this and ran quickly in front of the father before the picture was taken. I asked my son why. He answered that he was exactly 4 feet tall. I figured out his reasoning, and afterwards I became an astrophysicist. A pattern is a distribution of differences. In the array pattern of the above four people, the height differences between adults and between kids are equal, and the height differences between females and between males are equal too. This simple pattern can be generalized to any array of numbers. Assume the differences of numbers in a row are equal to the corresponding differences in any other row. That is, there exist common differences in all rows. Similarly, assume common differences in all columns. Then the pattern is called a rational structure. Assume the number at the bottom left corner is zero, C(0,0) = 0, and denote the series of numbers in the bottom row by U(i) and the series of numbers in the first column by V(j). I found the formula for the rational array: C = U(i) + V(j). This is called the Skew Law. I generalized the rows and columns to be curved, and required that the curves cross each other at a right angle. This was exactly my idea of galaxy patterns. In this paper I show that the patterns are governed by the Riccati equation with constant coefficients, and the curves are governed by a type of algebraic equation. The cubic equation of this type gives a pattern which resembles the sharp bar of galaxy NGC 1073. Are all barred galaxies governed by cubic and higher-degree algebraic equations? The question will be resolved in the near future.
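The Skew Law stated in the abstract, C(i,j) = U(i) + V(j) with C(0,0) = 0, is easy to check numerically: an array built this way automatically has common differences in all rows and in all columns. The numbers below are our own small example, not data from the paper:

```python
# Build C(i, j) = U(i) + V(j) from a bottom row U and a first column V
# (U[0] = V[0] = 0 gives C(0, 0) = 0), then verify the common differences.
def build(U, V):
    return [[U[i] + V[j] for j in range(len(V))] for i in range(len(U))]

U = [0, 2, 5, 9]   # bottom row
V = [0, 1, 4]      # first column
C = build(U, V)

# Every row shares one difference pattern, and so does every column,
# which is exactly the "rational structure" property described above.
row_diffs = {tuple(C[i][j + 1] - C[i][j] for j in range(len(V) - 1))
             for i in range(len(U))}
col_diffs = {tuple(C[i + 1][j] - C[i][j] for i in range(len(U) - 1))
             for j in range(len(V))}
```

Both sets collapse to a single tuple, confirming that the additive form and the common-difference property are equivalent on this example.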
[4664] vixra:1402.0079 [pdf]
Role of the Universe's Background Gravitational Potential in Relativity Concepts
This paper reconciles General Relativity (GR) and Mach's Principle into a consistent, simple and intuitive alternative theory of gravitation. The background gravitational potential from the Universe's matter distribution plays an important role in relativity. This potential far from massive bodies is c<sup>2</sup>, and determines <i>unit</i> rest mass/energy, which is the essence behind E=mc<sup>2</sup>. The matter distribution creates a local inertial rest frame at every location, in which the Universe gravitational potential is a minimum. A velocity in this frame increases gravitational potential through net blue shift of Universal gravity, causing velocity time dilation, which is a gravitational effect identical to gravitational time dilation. Time dilation increases with velocity, but does not become boundless in general rectilinear motion. The Lorentz factor is the appropriate metric for time dilation only in certain constrained motions. The low velocity approximation of the Lorentz factor scales for all velocities in general rectilinear motion, and speed of light is not the maximum possible speed in such situations. Gravitational time dilation is derived first, and velocity time dilation is derived from it. The mathematics becomes simpler and more intuitive than GR, while remaining consistent with existing experiments. Some experiments are suggested that will show this theory to be more accurate than GR.
[4665] vixra:1402.0075 [pdf]
General Relativity from Planck-Satellite-Data
With the Planck 'constants' of length, time, mass and acceleration, it will be shown that a Quantum Gravity of the cosmos exists. This paper shows how Planck satellite data solve Einstein's Field Equations in the Friedmann-Robertson-Walker metric.
[4666] vixra:1402.0067 [pdf]
Measuring Complexity by Using Reduction to Solve P vs NP and NC & PH
This article proves that NC and PH are proper (in particular, P is not NP) by using reduction differences. We can prove that NC is proper by using the fact that AL0 is not NC. This means L is not P. We can prove P is not NP by using the reduction difference between L and P. And we can also prove that PH is proper by using the fact that P is not NP.
[4667] vixra:1402.0066 [pdf]
The SU5 Structure of 14-Dimensional Unification
In a 14-dimensional gravidynamic unification model, the spacetime as well as the internal symmetries of 2 lepton-quark generations would be consolidated in a 64-component Weyl fermion. Alternatively, the latter fermionic multiplet can describe 8 charged leptons, with 8 associated neutrinos, and the corresponding antiparticles. In such a framework, the dynamics of vector bosons, as well as of Higgs scalars, would be generated at the quantum level via unified couplings to a vector, an antisymmetric tensor of 3rd rank, and an antisymmetric tensor of 5th rank. We exhibit the complete SU5 structure of the latter couplings. The underlying SU5 would contain a color SU3 symmetry, in the case of the leptons and quarks, or a family SU3 symmetry, in the alternative model of purely leptonic unification. This work begins by writing the Lorentz algebra of 14 dimensional spacetime in terms of its 4-dimensional Lorentz subalgebra, and an internal O10 factor. The latter is expressed via its U5 subalgebra. The fermionic 64-plet is expressed in terms of 32 Weyl fermions in 4 dimensions. Likewise, the pertinent vector and the tensors are expressed in terms of vectors and scalars in 4 dimensions. The emerging picture regarding the fundamental fermions, and their interactions, would lead to aspects that are describable by the O10 and SU5 unification models, whether the grand unified model of leptons and quarks, or the purely leptonic unification model.
[4668] vixra:1402.0064 [pdf]
The Acausal Role of Quantum Phase in the Chiral Anomaly
The chiral potential is inverse square. The family of inverse square potentials includes the vector Lorentz potential of the quantum Hall and Aharonov-Bohm effects, and the centrifugal, Coriolis, and three-body potentials. The associated impedances are scale invariant, quantum Hall being the most familiar. Modes associated with scale-invariant impedances communicate only quantum phase, not an observable in a single quantum measurement. Modes associated with scale-dependent impedances, including among others those of the 1/r monopole and 1/r^3 dipole potentials, communicate both phase and energy. Making this clarifying distinction between phase (relative time) and energy explicit presents a new perspective on the anomaly. This approach is introduced via the Rosetta Stone of modern physics, the hydrogen atom. Precise impedance-based pi0, eta, and eta' branching ratio calculations are presented as ratios of polynomials in powers of the fine structure constant, followed by discussion. Mass generation via chiral symmetry breaking is not addressed in the present paper.
[4669] vixra:1402.0063 [pdf]
Hawking Radiation Quasi-Normal Modes Correspondence and Effective States for Nonextremal Reissner-Nordström Black Holes
It is known that the nonstrictly thermal character of the Hawking radiation spectrum harmonizes Hawking radiation with black hole (BH) quasi-normal modes (QNM). This paramount issue has been recently analyzed in the framework of both Schwarzschild BHs (SBH) and Kerr BHs (KBH). In this assignment, we generalize the analysis to the framework of nonextremal Reissner-Nordström BHs (RNBH). Such a generalization is important because in both SBHs and KBHs an absorbed (or emitted) particle has only mass. Instead, in RNBHs the particle has charge as well as mass. In doing so, we expose that for the RNBH, QNMs can be naturally interpreted in terms of quantum levels for both particle emission and absorption. Conjointly, we generalize some concepts concerning the RNBH's "effective states".
[4670] vixra:1402.0061 [pdf]
Characterizing Dynamics of a Physical System
In this paper, we shall study an electromechanical system, which is capable of showing chaos. Our aim would be to identify deterministic chaos, which effectively means finding out the conditions in which the system would show aperiodicity. We shall study a coupled system, consisting of a Bullard Dynamo driving a Faraday Disk. First, we shall give a brief description of the system (by stating the equations describing the system), then we shall identify the fixed points for the system, and identify different dynamical regimes. Our objective here is to elaborate the methods by which a dynamical system is characterized.
[4671] vixra:1402.0050 [pdf]
The Cubic Equation and 137.036
A special case of the cubic equation, distinguished by having an unusually economical solution, is shown to relate to both the fine structure constant inverse (approximately 137.036) and the sines squared of the quark and lepton mixing angles.
[4672] vixra:1402.0040 [pdf]
On the Perihelion Precession of Solar Planetary Orbits
The present letter presents an improved version of the Azimuthally Symmetric Theory of Gravitation (ASTG-model), which was presented for the first time four years ago (in Nyambuya 2010). Herein, we propose a solution to the standing problem of the lambda-parameters, in which effort we put the ASTG-model on a clear pedestal for falsification. The perihelion precessional data of Solar planetary orbits is used to set the theory into motion. As a way of demonstrating the latent power of the new theory, we show in separate letters that -- one of the most important and outstanding problems in astrophysics today -- the Radiation Problem, which is thought to bedevil massive stars during their formation, may find a plausible solution in the ASTG-model. Further, from within the confines of this new theory, we also demonstrate (in a separate letter) that the emergence of bipolar molecular outflows may very well be an azimuthal gravitational phenomenon. Furthermore, we also show (in a separate letter as well) that the ASTG-model does, to a reasonable extent, explain the tilt of Solar planetary orbits.
[4673] vixra:1402.0028 [pdf]
Leptonic SU5 and O10 Unification Incorporating Family SU3
We study the representations and basic multiplets of the O10 algebra in terms of the SU5 tensorial elements, and construct the coupling of the 45 gauge bosons to the 16 Weyl fermions. After exhibiting the coupling terms corresponding to a single generation of quarks and leptons, pertaining to the usual grand unified theory with electroweak SU2xU1 and color SU3 components, we propose a different approach to the underlying grand symmetry as corresponding to a variety of leptonic particles (electron-like and neutrino-like), and where the decomposition of SU5 proceeds via a family SU3 symmetry. We discuss the implications of such an SU3 family symmetry for the structure of the vector boson spectrum in high-energy collider phenomenology. On the other hand, our scheme promotes the idea that the hadronic constituents, rather than being fractionally charged confined quarks, may turn out to be nothing other than leptonic varieties with integral electric charges. The existence of hadrons as extended objects may find explanation in solitonic solutions of the underlying nonlinear gauge theory. We propose the further incorporation of the theory in a 14-dimensional gravidynamic framework.
[4674] vixra:1402.0015 [pdf]
Photon Superluminal
We determine the velocity of the photon outflow from the blackbody in the de Laval nozzle. The derivation is based on the Saint-Venant-Wantzel equation for the thermodynamics of the blackbody photon gas and on the Einstein relation between energy and mass. The application of the derived results to photon rockets is not excluded.
[4675] vixra:1402.0004 [pdf]
General Relativity as Curvature of Space
Using the Planck 'constants' of length, time, mass and acceleration, it will be shown that a Quantum Gravity of the cosmos exists. This paper shows how Einstein's Field Equations in the Friedmann Robertson Walker Metric resolve the Planck Era context.
[4676] vixra:1402.0002 [pdf]
Realistic Decelerating Cosmology and the Return to Contraction
For cosmological theory without the bizarre vacuum-driven acceleration, and in the spirit of "realistic non-singular cosmology", we examine the effect of adjusting the value of the Hubble fraction in order to obtain a reasonable fit with the large data set of supernovae magnitudes and redshifts. Adopting a value of the Hubble fraction equal to 0.53, we obtain a pleasing fit for a theory with a negative graviton density, with a matter fraction of 1.12, a deceleration parameter of 0.56, and a remaining time before the return to contraction of about 770 Gyr. For a theory with a negative vacuum density, we obtain a pleasing fit with a matter fraction of 1.02, a deceleration parameter of 0.53, and a remaining time of about 125 Gyr.
[4677] vixra:1401.0238 [pdf]
Quantum Effects of Radiation Displacement to Longer and Shorter Wavelengths
Based on quantum concepts of electromagnetic oscillations, we show the relationship between the part of the quantum energy absorbed by the interstellar medium and a shift of the spectra proportional to the relative distance between the source and the receiver of the quanta, as well as the effect on the spectral shift of the motion of the objects relative to each other.
[4678] vixra:1401.0237 [pdf]
Novel Physical Consequences of the Extended Relativity in Clifford Spaces
Novel physical consequences of the Extended Relativity Theory in $C$-spaces (Clifford spaces) are explored. The latter theory provides a very different physical explanation of the phenomenon of ``relativity of locality" than the one described by the Doubly Special Relativity (DSR) framework. Furthermore, an elegant $nonlinear$ momentum-addition law is derived in order to tackle the ``soccer-ball'' problem in DSR. Neither derivation in $C$-spaces requires a $curved$ momentum space nor a deformation of the Lorentz algebra. While the constant (energy-independent) speed of photon propagation is always compatible with the generalized photon dispersion relations in $C$-spaces, another important consequence is that these generalized photon dispersion relations also allow for energy-dependent speeds of propagation while still $retaining$ the Lorentz symmetry in ordinary spacetimes, while breaking the $extended$ Lorentz symmetry in $C$-spaces. This does $not$ occur in DSR nor in other approaches, like the presence of quantum spacetime foam. We conclude with some comments on the quantization program and the key role that quantum Clifford-Hopf algebras might play in future developments, since the latter $q$-Clifford algebras naturally contain the $\kappa$-deformed Poincare algebras which are essential ingredients in the formulation of DSR.
[4679] vixra:1401.0220 [pdf]
Consciousness and Its Effect on the Modification of Space-Time
This paper will show through several experiments that time dilation, or modification of space-time, occurs only when there is a consciousness field or link (CFL) involved, along with velocity and/or acceleration. It also shows that consciousness can be applied remotely at great distance, can be connected directly or indirectly with an event, and that it is only when such a link exists that the relativity theories are applicable. It also reformulates one of the postulates of the special theory of relativity that the frame of reference is not regardless of position and velocity, but it is from the position where the consciousness link is made between all the elements regardless of the velocity. Furthermore, it explains why the universe is finite rather than infinite and that the CFL is responsible for the expansion of the universe.
[4680] vixra:1401.0214 [pdf]
Is the Natario Warp Drive a Valid Candidate for an Interstellar Voyage to the Star System Gliese 667C (GJ 667C)?
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at present two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. The major drawback concerning warp drives is the huge amount of negative energy needed to sustain the warp bubble. In order to perform an interstellar voyage to a "nearby" star $22$ light-years away with $3$ potentially habitable exo-planets (Gliese $667C$) at superluminal speed in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is about $10^{24}$ times bigger in magnitude than the mass of the planet Earth! We introduce here a shape function that makes the Natario warp drive an excellent candidate to lower the negative energy density requirements from $10^{48}$ to affordable levels. We also discuss other warp drive drawbacks: collisions with hazardous interstellar matter (asteroids or comets) that may occur in a real interstellar voyage from Earth to Gliese $667C$, and horizons (causally disconnected portions of spacetime). We conclude this work with a description of the star system Gliese $667C$.
[4681] vixra:1401.0207 [pdf]
Approach to solve P vs PSPACE with Collapse of Logarithmic and Polynomial Space
This article argues that P is not PSPACE. If P were PSPACE, we could derive P = L from the relation between logarithmic and polynomial space reductions. But this result contradicts the Space Hierarchy Theorem. Therefore P is not PSPACE.
[4682] vixra:1401.0205 [pdf]
The SU7 Structure of 18-Dimensional Unification
In an 18-dimensional gravidynamic unification model the spacetime and the internal symmetries of 4 generations of leptons and quarks are consolidated in a 256-component Majorana-Weyl-Dirac fermion. In such a framework, the dynamics of vector bosons as well as Higgs scalars would be generated at the quantum level through a unified coupling of an antisymmetric tensor of rank 3 to the fermions. We exhibit the complete U7 structure of the latter coupling. This extensive work begins by writing the Lorentz algebra of 18 dimensional spacetime in terms of its 4-dimensional Lorentz subalgebra and an internal O14 factor. The latter is expressed via its U7 subalgebra. The 256-component fermion is expressed in terms of 64 Weyl fermions, and their Dirac conjugates, in 4 dimensions. Likewise the 3rd rank antisymmetric tensor is expressed in terms of vectors and scalars in 4 dimensions. The emerging picture regarding the fundamental fermions, and their interactions, would lead to aspects that are well described by a complementing O14 and SU7 grand unification schemes.
[4683] vixra:1401.0179 [pdf]
Symmetry as Turing Machine - Approach to solve P vs NP
This article argues that P is not NP by using differences of symmetry. A Turing machine (TM) changes its configuration by using transition functions, and these changes preserve the halting configuration. That is, a TM classifies configurations into equivalence classes. From the viewpoint of equivalence classes, there is a difference between P and coNP. Some coNP problems have totally ordered inputs of super-polynomial size. These problems cannot be reduced to P because the total order must be preserved. Therefore we cannot reduce some coNP problems to P problems. This means P is not NP.
[4684] vixra:1401.0168 [pdf]
On the Failure of Weyl's 1918 Theory
In 1918 the German mathematician Hermann Weyl developed a non-Riemannian geometry in which electromagnetism appeared to emerge naturally as a consequence of the non-invariance of vector magnitude. Although an initial admirer of the theory, Einstein declared the theory unphysical on the basis of the non-invariance of the line element ds, which is arbitrarily rescaled from point to point in the geometry. We examine the Weyl theory and trace its failure to its inability to accommodate certain vectors that are inherently scale invariant. A revision of the theory is suggested that appears to refute Einstein’s objection.
[4685] vixra:1401.0155 [pdf]
The SU7 Structure of O14 and Boson-Fermion Couplings
We construct the O14 algebra in terms of the tensorial representations of its SU7 subalgebra. Subsequently, we construct the gauge-invariant boson-fermion coupling, and decompose it completely into terms exhibiting color SU3 and family SU3 symmetries. A picture of the particle spectrum emerges where quarks and leptons, as well as vector bosons, would appear as singlets or triplets with respect to family SU3. Accordingly, we conjecture the possible existence of a 4th generation of fermions, as well as the imminent existence of other $W$-like vector bosons, in high-energy collider experiments.
[4686] vixra:1401.0122 [pdf]
A Wave-centric View of Special Relativity
An approach to special relativity is outlined which emphasizes the wave and field mechanisms which physically produce the relativistic effects, with the goal of making them seem more natural to students by connecting more explicitly with prior studies of waves and oscillators.
[4687] vixra:1401.0121 [pdf]
Framework for the Effective Action of Quantum Gauge and Gravitational Fields
We consider a simplified framework for constructing the manifestly gauge-invariant effective action of non-Abelian quantum gauge and gravitational fields. The new framework modifies the bilinear terms that are associated with virtual gauge fields. This is done in a manner that rectifies the singular kernel, simplifies loop computations, and maintains manifest effective gauge invariance. Starting with the invariant Lagrangian for a general non-Abelian gauge theory, we present analysis pertaining to the derivation of the effective propagator and the effective vertices. Similar analysis is extended to the Einstein invariant gravitational Lagrangian. We discuss the possibility of seeding the elements of symmetry breaking, and structuring the underlying gauge algebra, through a mechanism of giving masses to the components of the virtual fields. This mechanism could be a substitute for the Higgs scenario in non-Abelian gauge unification models, and an alternative to compactification in extra-dimensional gravity.
[4688] vixra:1401.0099 [pdf]
The SU9 Structure of E8 and Boson-Fermion Couplings
We construct the E8 algebra in terms of the tensorial representations of its SU9 maximal subalgebra. We then construct the supersymmetric gauge-invariant boson-fermion coupling, and decompose it completely into terms exhibiting color SU3 and family SU5 symmetries. This work promotes a scheme of E8 super grand unification that is based on a perfect symmetry between particles and antiparticles, rather than a symmetry between quark-lepton generations and their enigmatic mirror conjugates. Whereas the emergence of a definite chirality for low-energy weak interactions still depends on the yet unresolved problem of symmetry breaking, a picture of weak decays emerges, in which multiple W vector bosons, rather than a single one, are the fundamental weak mediators between multiple charged leptons and associated multiple neutrinos, or between multiple upquarks and their associated multiple downquarks.
[4689] vixra:1401.0098 [pdf]
Further Insight Relative to Cavity Radiation: A Thought Experiment Refuting Kirchhoff's Law
Kirchhoff's law of thermal emission demands that all cavities contain blackbody, or normal, radiation which is dependent solely on the temperature and the frequency of observation, while remaining independent of the nature of the enclosure. For over 150 years, this law has stood as a great pillar for those who believe that gaseous stars could emit a blackbody spectrum. However, it is well-known that, under laboratory conditions, gases emit in bands and cannot produce a thermal spectrum. Furthermore, all laboratory blackbodies are constructed from nearly ideal absorbers. This fact strongly opposes the validity of Kirchhoff's formulation. Clearly, if Kirchhoff had been correct, then laboratory blackbodies could be constructed of any arbitrary material. Through the use of two cavities in temperature equilibrium with one another, a thought experiment is presented herein which soundly refutes Kirchhoff's law of thermal emission.
[4690] vixra:1401.0097 [pdf]
The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere VIII. `Futile' Processes in the Chromosphere
In the liquid metallic hydrogen solar model (LMHSM), the chromosphere is the site of hydrogen condensation (P.M. Robitaille. The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere IV. On the Nature of the Chromosphere. Progr. Phys., 2013, v. 3, L15-L21). Line emission is associated with the dissipation of energy from condensed hydrogen structures, CHS. Previously considered reactions resulted in hydrogen atom or cluster addition to the site of condensation. In this work, an additional mechanism is presented, wherein atomic or molecular species interact with CHS, but do not deposit hydrogen. These reactions channel heat away from CHS, enabling them to cool even more rapidly. As a result, this new class of processes could complement true hydrogen condensation reactions by providing an auxiliary mechanism for the removal of heat. Such `futile' reactions lead to the formation of activated atoms, ions, or molecules and might contribute to line emission from such species. Evidence that complementary `futile' reactions might be important in the chromosphere can be extracted from lineshape analysis.
[4691] vixra:1401.0088 [pdf]
Self-Similar Doppler Shift: an Example of Correct Derivation that Einstein Relativity Was Preventing us to Break Through
In this short paper I present a simple but correct derivation of the complete Doppler shift effect. I will prove that the Doppler effect of electromagnetic waves is a self-similar process, and therefore Special Relativity, which claims to be complete for every inertial system, is excluded from that self-similarity of the Doppler effect.
[4692] vixra:1401.0073 [pdf]
The Speed of Light Postulate - Awareness of the Physical Reality
In this article a "Thesis about the behavior of the electromagnetic radiation in gravitational field" and a "Thesis about the global physical reality of the Universe" are formulated. They give a real explanation of all unexpected and "inexplicable results" of the notable experiments related to the measurement of the speed of light, such as the "Michelson-Morley experiment", the "Sagnac experiment", the "Michelson-Gale-Pearson experiment", the "Miller's experiments", the "One way speed of light measurements", as well as the "Shapiro time delay effect" and the anomaly in the acceleration of the space probes "Pioneer 10", "Pioneer 11", "Galileo", "Ulysses". Actually, this different vision is a new model of uncertainty of the Universe, which can give an answer to the question about "the origin of the energy" and can explain a lot of problems in physics today (such as "the accelerated expansion of the Universe", "the dark matter and the dark energy in the Universe", etc.), which have been under research for a long time.
[4693] vixra:1401.0060 [pdf]
Realistic Non-Singular Cosmology with Negative Vacuum Density
We present a version of "realistic non-singular cosmology" in which the upper turning point of expansion is provided by negative mass density of vacuum rather than by gravitational radiation. The lower turning point is still provided by the negative pressure of electromagnetic energy. Again, assuming that the temperature of microwave radiation is a true measure of the electromagnetic energy density of the universe, and that the supernovae data of magnitudes and redshifts are reliable, we can determine (tentatively) the parameters of this version of our model, with an appropriate Hubble fraction of 0.475, and a deceleration parameter of 0.65, and estimate the time that passed, about 13.3 Gyr, since the initiation of the expansion phase, and the time that remains, about 55 Gyr, before the return to contraction. The maximum radiational temperature was only about 27,344 K, and the maximum mass density was low enough to give each typical star an ample space of about 5% of a lightyear to proceed with its own activity without disruption.
[4694] vixra:1401.0037 [pdf]
Couplings in the Deep Infrared Limit from M-Theory Does One Numerical Formula Deserve the Benefit of the Doubt?
In this note, we preliminarily discuss the possibility that the expression $\alpha_{em}^{-1}=4\pi^{3}+\pi^{2}+\pi$ has a physical interpretation and can even be helpful in model building. If one interprets this expression in terms of the volumes of $l_{p}$-sized three-cycles on $G_{2}$ holonomy manifolds and requires that it also comprises effects of the running of the coupling, one can obtain the desired value, but only in a setup which is clearly different from the standard model of particle physics (SM). An understanding of the nature of the link between such a putative model and the SM is needed. Studying this issue could possibly shed some light on existing problems in model building within string theory (ST), particularly the hierarchy problem. Numerological "success", which can be achieved if one interprets the formula in terms of volumes of three-cycles on the compactification manifold, as we intend to do here, cannot change the fact that the discussion in this note represents merely a heuristic estimate of the feasibility of further research in a certain direction.
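As a quick numerical sanity check of the expression quoted in the abstract (this snippet and the CODATA comparison value are an editorial illustration, not part of the entry):

```python
import math

# Evaluate the abstract's expression alpha_em^{-1} = 4*pi^3 + pi^2 + pi.
alpha_inv = 4 * math.pi**3 + math.pi**2 + math.pi

# Compare against the CODATA 2018 recommended inverse fine-structure constant.
measured = 137.035999084

print(alpha_inv)                  # ≈ 137.03630
print(abs(alpha_inv - measured))  # ≈ 3.0e-4, agreement to a few parts per million
```

The expression is thus numerically close to, but not identical with, the measured value, which is why the note frames the match as heuristic.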
[4695] vixra:1401.0034 [pdf]
Space is Discrete for Mass and Continuous for Light
Space is discrete for a moving mass and continuous for an electromagnetic wave. We introduce velocity addition rules for such motion, and from these we derive the second postulate of special relativity — namely, that each observer measures the same value of the speed of light. Thus widely accepted derivations, showing that the two postulates of special relativity necessarily lead to the Lorentz transformations, cannot be correct. We contrast the distance-time implications of our velocity addition rules with the Lorentz transformations. Our theory leads to different time measurements by observers and to special relativity's momentum-energy formulas. However, in our theory the length of an object remains invariant, and we do not have a time dilation formula that applies between inertial frames. Study of timescales of quasar variability has yielded observational data showing special relativity's time dilation to be inconsistent with the model of an expanding universe; gamma-ray bursts are giving similar results. These quasar and gamma-ray-burst results are consistent with our time formulas. We suggest other experiments where special relativity and our theory give different predictions, and these can further show special relativity to be wrong and our theory to be correct.
[4696] vixra:1401.0023 [pdf]
First Law of Motion in Periodic Time
The first law of motion operative in a time-periodic universe with $S^1$ time is formulated. The inertial paths of the particles are defined as circles, with radius $R= T \left|\textbf{v} \right| /(2 \pi)$, where $T$ is the time period of the universe, and $\textbf{v}$ is the velocity of the particle. This law reduces to Newton's first law of motion in the limit $T \rightarrow \infty$, when the radii $R \rightarrow \infty$, and so the circles open out and become indistinguishable from the straight line trajectories of the Newtonian universe.
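The stated radius formula and its Newtonian limit can be checked numerically; the sketch below uses the abstract's R = T|v|/(2π) with illustrative values that are not taken from the source:

```python
import math

def inertial_radius(period, speed):
    """Radius of the circular inertial path in a time-periodic universe,
    per the abstract's formula R = T * |v| / (2 * pi)."""
    return period * speed / (2 * math.pi)

# Illustrative values: R grows linearly with the period T, so the circle
# opens out toward a straight line as T -> infinity (the Newtonian limit).
v = 10.0  # arbitrary particle speed
for T in (1e2, 1e6, 1e12):
    print(T, inertial_radius(T, v))
```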
[4697] vixra:1401.0006 [pdf]
Deep Inelastic Gedanken Scattering off Black Holes
We propose a model of quark and lepton subconstituents which extends the Standard Model of particles to the Planck scale and beyond. We perform a Gedanken experiment by scattering a probe deep inside a mini black hole. We assume the result is that at the core of the hole there is a spin 1/2 (or 0) constituent field, grayon, in Minkowski space. The grayon is proposed to replace the singularity of the hole. The grayon interactions are assumed to provide bound states of three grayons which form the quarks and leptons.
[4698] vixra:1312.0247 [pdf]
Approach to Solve P vs NP by Using Bijection Reduction
This article argues that P is not NP by using a bijective reduction between problems. If injective reductions in each direction between CNFSAT and HornSAT exist, then a bijection between CNFSAT and HornSAT also exists. If P is NP, this bijection is computable in polynomial time. But the HornSAT description is of polynomial complexity while the CNFSAT description is of exponential complexity, which means there is no such bijection in polynomial time. Therefore P is not NP.
[4699] vixra:1312.0242 [pdf]
Equally Aged Twins Under Different Accelerations
In a very recent book on the philosophy of space and time, Maudlin discusses Langevin's famous twin paradox, emphasizing the incorrectness of attributing the different agings of the two twins to the different accelerations they suffered. Here we extend the argument a little by considering to and fro rectilinear motions under finite accelerations (and continuous velocities).
[4700] vixra:1312.0240 [pdf]
Metrical Model of a Star with Thermal Profile
We consider the extension of the Schwarzschild metric to a counterpart that can describe an extended spherically symmetric stellar body with an acceptable density distribution and a thermal profile. With a length parameter, apart from the Schwarzschild radius, the proposed metric can fit the values of density, pressure, and temperature at the stellar surface, and give a complete profile down to the core. Such a metric extension seems to describe an "energy-producing, explosive core", as well as an inflationary "coronal windy layer", two regions where the mean pressure seems to acquire negative magnitudes. Our illustrative computations and graphical illustrations refer to the sun as a reference example. We discuss the Schwarzschild limit of such a metrical model. We also discuss the interior gravitational potential and its repulsive central core.
[4701] vixra:1312.0204 [pdf]
Realistic Non-Singular Cosmology
The radiational contributions, electromagnetic and gravitational, to energy density in the cosmological equation must be negative. This creates natural turning points for a cyclic cosmological model. The negative pressure of the electromagnetic radiation would prevent the collapse of the universe in a prior contracting phase, while the positive pressure of the gravitational radiation would prevent it from expanding forever. Such a cosmological model avoids the problems of a singular past, and evades an ever-accelerating future. The picture is that of an oscillating universe full of stars, that eternally build and destroy the various forms of matter and life, all in a framework of energy conservation. Assuming that the temperature of microwave radiation is a true measure of the electromagnetic energy density of the universe, and that the supernovae data and redshifts are reliable, we can determine (tentatively) the parameters of our model, with an appropriate Hubble fraction of 0.50, and a deceleration parameter of 0.55, and estimate the time that passed, about 12.8 Gyr, since the initiation of the expansion phase, and the time that remains, about 1066 Gyr, before the return to contraction.
[4702] vixra:1312.0194 [pdf]
Gravitation as the Result of the Reintegration of Migrated Electrons and Positrons to Their Atomic Nuclei
This paper presents a mechanism of gravitation based on an approach where the energies of electrons and positrons are stored in fundamental particles (FPs) that move radially and continuously through a focal point in space, the point where classically the energies of subatomic particles are thought to be concentrated. FPs store the energy in longitudinal and transversal rotations which define corresponding angular momenta. Forces between subatomic particles are the product of the interactions of their FPs. The laws of interactions between fundamental particles are postulated in such a way that the linear momenta for all the basic laws of physics can subsequently be derived from them, linear momenta that are generated out of opposed pairs of angular momenta of fundamental particles. The flattening of galaxies' rotation curves is derived without the need for Dark Matter, and the repulsion between galaxies is shown without the need for Dark Energy. The mechanism of the dragging between neutral moving masses is explained (Thirring-Lense effect), and how gravitation affects the precision of atomic clocks is presented (Hafele-Keating experiment). Finally, the compatibility of the presented approach to gravitation with quantum mechanics is shown, which is not the case with general relativity.
[4703] vixra:1312.0191 [pdf]
Device Search and Selection
Cyber-physical systems (CPS) represent the expansion in computerized interconnectivity. This phenomenon is also moving towards the Internet of Things (IoT) paradigm. Searching functionality plays a vital role in this domain. Many different types of search capabilities are required to build a comprehensive CPS architecture. In CPS, users may want to search for smart devices and services. In this chapter, we discuss concepts and techniques related to device search and selection. We briefly discuss different types of device searching approaches, each of which has its own objectives and applications. One such device searching technique is context-aware searching. In this chapter, we present a context-aware sensor search, selection and ranking model called CASSARAM in detail. This model addresses the challenge of efficiently selecting a subset of relevant sensors out of a large set of sensors with similar functionality and capabilities. CASSARAM takes into account user preferences and considers a broad range of sensor characteristics, such as reliability, accuracy, location, battery life, and many more. Later in the chapter, we discuss three different techniques that can be used to improve the efficiency of CASSARAM. We implemented the proof-of-concept software using Java. Testing and performance evaluation results are also discussed. We also highlight open research challenges and opportunities in order to support future research directions.
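The selection step described in the abstract can be sketched as a weighted ranking over normalized sensor characteristics. This is a toy Python illustration with invented sensor names, values and weights; CASSARAM's actual proof of concept is implemented in Java and is considerably more elaborate:

```python
# Toy weighted ranking over sensor characteristics, in the spirit of
# CASSARAM's user-preference-driven sensor selection. All sensor data
# and preference weights below are invented for illustration.
sensors = {
    "sensor_a": {"reliability": 0.90, "accuracy": 0.70, "battery": 0.40},
    "sensor_b": {"reliability": 0.60, "accuracy": 0.95, "battery": 0.80},
    "sensor_c": {"reliability": 0.80, "accuracy": 0.80, "battery": 0.90},
}

# User preferences: weights over characteristics, normalized to sum to 1.
weights = {"reliability": 0.5, "accuracy": 0.3, "battery": 0.2}

def score(props, weights):
    """Weighted sum of characteristic values (all assumed scaled to [0, 1])."""
    return sum(weights[k] * props[k] for k in weights)

# Rank sensors by descending score; a real system would keep the top subset.
ranked = sorted(sensors, key=lambda s: score(sensors[s], weights), reverse=True)
print(ranked)  # → ['sensor_c', 'sensor_b', 'sensor_a']
```

With these weights the battery-strong, well-rounded sensor_c wins; changing the user's weights reorders the ranking, which is the point of preference-driven selection.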
[4704] vixra:1312.0173 [pdf]
On Bell's Inequality
We show that when spin eigenfunctions are not fully orthonormal, Bell's inequality does allow local hidden variables. In the limit where spin eigenfunctions are Dirac orthonormal, we recover a significant extremal case. The new calculation gives a possible accounting for $\alpha_{\mathrm{MCM}}-\alpha_{\mathrm{QED}}$.
[4705] vixra:1312.0171 [pdf]
Work Function Measurements of Vanadium Doped Diamond-Like Carbon Films by Ultraviolet Photoelectron Spectroscopy
Vanadium doped diamond-like carbon films prepared by unbalanced magnetron sputtering have been investigated by X-ray and ultraviolet photoelectron spectroscopy measurements for the purpose of revealing electronic structures including values of work function on the surfaces. In addition to these photoelectron measurements, X-ray diffraction measurements have been performed to characterize the crystal structures.
[4706] vixra:1312.0168 [pdf]
Ontological Physics
Ambiguity in physics makes many useful calculations impossible. Here we reexamine physics' foundation in mathematics and discover a new mode of calculation. The double slit experiment is correctly described by the new mode. We show that spacetime emerges from a set of hidden boundary terms. We propose solutions to problems including the limited spectrum of CMB fluctuations and the anomalous flux of ultra-high energy cosmic rays. A fascinating connection between biology and the new structure should have far reaching implications for the understanding and meaning of life.
[4707] vixra:1312.0145 [pdf]
A Continuous Counterpart to Schwarzschild's Liquid Sphere Model
We present a continuous counterpart to Schwarzschild's metrical model of a constant-density sphere. The new model interpolates between a central higher-density spherical concentration of mass and lower-density layers at large distance. Whereas the radial part of pressure shows a positive distribution for all values of radial distance, the angular part of pressure shows negative magnitudes in the upper layers of low density; both pressures vanishing for infinite distance. We speculate that the negative pressure effect might be connected with stellar winds. Studying the motions of photons and massive particles in the gravity field of the continuous model shows similar, however continuous, behaviors to those described before for Schwarzschild's constant-density model.
[4708] vixra:1312.0134 [pdf]
Schwarzschild's Metrical Model of a Liquid Sphere
We study Schwarzschild's metrical model of an incompressible (liquid) sphere of constant density and note the tremendous internal pressures described by the model when applied to a stellar body like the sun. We also study the relativistic radial motion of a photon and a massive particle in the associated gravitational field, with due regard to energy conservation. We note the similarities and the differences between this case and the case of a Schwarzschild singular source with special regard to repulsive effects and penetrability.
[4709] vixra:1312.0118 [pdf]
Hamilton's Principle and the Schwarzschild Metric
The Schwarzschild metric is rearranged to manifest inherent limitations based on the conservation of energy. These limitations indicate that a collapsing surface will not compact below a critical radius to form a black hole.
[4710] vixra:1312.0116 [pdf]
Mobile Sensing Devices and Platforms
A cyber-physical system (CPS) is a system of collaborating computational elements controlling physical entities. CPS represents the next stage on the road to the creation of smart cities through the creation of an Internet of Things, data and services. Mobility is one of the major characteristics of both CPS and IoT. In this chapter, we discuss mobile sensing platforms and their applications towards different but interrelated paradigms such as IoT, sensing as a service, and smart cities. We highlight and briefly discuss different types of mobile sensing platforms and the functionalities they offer. Mobile sensing platforms are most often integrated with smart phones and tablet devices. The resource-constrained nature of mobile devices requires different types of designs and architectural implementations. We propose a software-based mobile sensing platform called Mobile Sensor Data Engine (MOSDEN). It is a plugin-based, scalable and extendible IoT middleware for mobile devices that provides an easy way to collect sensor data from both internal and external sensors. MOSDEN acts as an intermediary device that collects data from external sensors and uploads it to the cloud in real time or on demand. We evaluate MOSDEN in both stand-alone and collaborative environments. The proof of concept is developed on the Android platform.
[4711] vixra:1312.0097 [pdf]
Unobservable Potentials to Explain Single Photon and Electron Interference
We show that single-photon and electron interference can be calculated without quantum-superposition states by using the tensor form (covariant quantization). The analysis shows that the scalar potential, which corresponds to an indefinite-metric vector, forms an oscillatory field and causes the interference. The results clarify that the concept of quantum-superposition states is not required for the description of interference, which leads to an improved understanding of the uncertainty principle, a resolution of the paradox of the reduction of the wave packet, the elimination of infinite zero-point energy, and a derivation of spontaneous symmetry breaking. The results conclude that quantum theory is a kind of deterministic physics without a ''probabilistic interpretation''.
[4712] vixra:1312.0089 [pdf]
Relativistic Motion and Schwarzschild Sources
We give an elementary analysis of the classical motion of a particle in the spherically symmetric gravitational field of a Schwarzschild source, with due regard to energy conservation. We observe that whereas a massive particle at large distances could be attracted towards the central source, it would however encounter repulsion as it comes close to the Schwarzschild surface. We also note that there is a limited energy range for which the radial motion is ruled by attraction. An attracted incoming particle reaches a maximum speed at a specific distance greater than the Schwarzschild radius, before decelerating to zero, then bouncing back. Like the radial motion, the orbital motion around a Schwarzschild source would stop at the Schwarzschild radius. A massless photon would always be repelled, with its speed decreasing as it approaches the source, ultimately getting reflected at the Schwarzschild surface. The timing problem associated with surface singularity is resolved by regarding particles as Schwarzschild sources themselves. We depict a picture of ideal Schwarzschild sources as mutually repulsive bubbles endowed with reflecting surfaces.
[4713] vixra:1312.0068 [pdf]
How Arithmetic Generates the Logic of Quantum Experiments
As opposed to the classical logic of true and false, ordinary arithmetic, viewed as an axiomatised theory, conveys three logical values: provable, negatable and logically independent. This research proposes the hypothesis that the axioms of arithmetic are the fundamental foundation running arithmetical processes in Nature, upon which physical processes rest, and it goes on to show, in detail, that under these axioms quantum mathematics derives and initiates logical independence, agreeing with indeterminacy in quantum experiments. Supporting arguments begin by explaining logical independence in arithmetic, in particular the independence of the square root of minus one. The method traces all sources of information entering arithmetic that are needed to write the mathematics of the free particle. Wave packets, prior to measurement, are found to be the only part of the theory logically independent of the axioms; the rest of the theory is logically dependent. Ingress of logical independence is via uncaused, unprevented self-reference, sustaining the wave packet but implying unitarity. Quantum mathematics based on axiomatised arithmetic is established as a foundation for the 3-valued logic of Hans Reichenbach, which reconciles quantum theory with experimental anomalies such as the Einstein, Podolsky & Rosen paradox.
[4714] vixra:1312.0058 [pdf]
A Conceptual Model of the Structure of Elementary Particles, Including a Description of the Dark Matter Particle
In the hyperverse model, particles of matter are collapsed and coalesced quanta of space, created by a condensation process to conserve angular momentum and centripetal force. We have proposed that the component quanta have spin, and hypothesize here that upon particle formation, the collapsed and coalesced component quanta are fixed in orientation so that either their north or south poles face the particle center. Six coalesced vortices produce structures that can account for all charge variations, including fractional charges and anti-particles. Fractional charges are net charges, where charge is a consequence of the spin orientations. The model suggests that protons carry a hidden negative one charge, speculated to be what stops the electron from falling into the proton. We hypothesize that "condensation neutrinos" exist, neutrinos made by the natural condensation route of particle creation, and these lack the high kinetic energy of "emission neutrinos", created as a result of atomic decay and collision. Condensation neutrinos would be the most numerous particle, but difficult to detect, and may be the dark matter particle.
[4715] vixra:1312.0056 [pdf]
Effective Action Framework for Divergence-Free Quantum Field Theory
We present the basic effective action framework for divergence-free quantum field theory. We describe the loopwise perturbative development of the effective action for a generic field theory, and indicate the manner by which this development is defined in order to evade the conventional divergences of the associated Feynman integrals.
[4716] vixra:1312.0051 [pdf]
On the Origin of Matter and Gravity: What They Are and Why They Exist
We show a group of equations that appear to represent target values for the mass, radius, and number of elementary particles in the universe: the values of an 'ideal particle'. Quanta and particles are not static; they change with time. The angular momentum of the universe is continually increasing, and this requires a dynamical response to conserve angular momentum. The creation, collapse, and coalescence of quanta conserves angular momentum, resulting in the creation of particles of matter. Matter is condensed space. The increase in the gravitational potential energy of particles matches the accretion rate of energy predicted by this model. This gives a simple, universe-wide mechanism for the creation of matter, and is the reason all elementary particles, of a kind, are identical. The centripetal force of a particle of matter matches the gravitational force; they are the same entity. Gravity is the ongoing accretion of the quanta of space by particles of matter.
[4717] vixra:1312.0050 [pdf]
Gravity is the Accretion of Energy by Matter to Conserve the Continually Increasing Angular Momentum of the Universe
Part two of the Origins of Matter and Gravity paper. We show a group of equations that appear to represent target values for the mass, radius, and number of elementary particles in the universe: the values of an 'ideal particle'. Quanta and particles are not static; they change with time. The angular momentum of the universe is continually increasing, and this requires a dynamical response to conserve angular momentum. The creation, collapse, and coalescence of quanta conserves angular momentum, resulting in the creation of particles of matter. Matter is condensed space. The increase in the gravitational potential energy of particles matches the accretion rate of energy predicted by this model. This gives a simple, universe-wide mechanism for the creation of matter, and is the reason all elementary particles, of a kind, are identical. The centripetal force of a particle of matter matches the gravitational force; they are the same entity. Gravity is the ongoing accretion of the quanta of space by particles of matter.
[4718] vixra:1312.0048 [pdf]
A Universe from Itself: The Geometric Mean Expansion of Space and the Creation of Quanta
We explore indications that the universe is undergoing a geometric mean expansion. Developing this concept requires the creation of two quantum levels, one being the quantum of our quantum mechanics, and another that is much smaller. The generation of quanta is what allows space to expand. We find that quanta are not static entities, but change with time; for example, the energy of quanta decreases, and the number of quanta increases, with time. The observable universe grows while the quantum levels shrink, giving a simple mechanism to explain the expansion of the universe. The universe does not come from nothing; it comes from itself.
[4719] vixra:1312.0045 [pdf]
A Model of Time based on the Expansion of Space
We present a model relating the expansion of space to time. We previously modeled the universe as a hyperverse, expanding into the fourth dimension at twice the speed of light. We claim here that the 2c radial expansion gives us the one-way arrow of time. We further hypothesize that the surface of the hyperverse consists of a matrix of vortices, self-similar to the observable hyperverse, and that these vortices, which have the same energy, tangential velocity, and frequency, are the building blocks of both space and matter. Their 2c radial expansion allows a quantization of time. We show that there is an energy connected to time, derived from the centripetal velocity of the vortices. Relative motion decreases centripetal velocity and, consequently, perceived frequency. The time dilation function of special relativity is derived from the ratio of the centripetal velocities of the observer and the observed. Time is created by hyperverse radial expansion and the energy and spin characteristics of the quanta of space.
[4720] vixra:1312.0044 [pdf]
The Hubble Constant is a Measure of the Increase in the Energy of the Universe
We postulate the universe to be the three-dimensional surface volume of an expanding, hollow, four-dimensional hypersphere, called the hyperverse. Using current measurements, we find that a hyperverse whose surface volume matches the volume of the observable universe has a radius of 27.7 billion light years, giving a radial expansion rate of twice the speed of light and a circumferential expansion rate that matches the Hubble constant. We show that the Hubble constant is a measure of the increase in the energy of the universe, implying that the hyperverse surface, our universe, is composed of energy. The 2c radial expansion both sets the speed limit in the universe and is the basis of time. The hypersphere model provides a positively curved and closed universe, and its 2c radial expansion rate and circumferential expansion rate matching the Hubble expansion give strong support to the idea that the universe is the 3D surface volume of an expanding 4D hypersphere.
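As a quick arithmetic check of the abstract's claim that a 27.7-billion-light-year radius expanding at 2c reproduces the Hubble constant (a sketch only; the unit conversion and the comparison value are mine, not the paper's):

```python
# Sanity check: H = (2c) / R for a hyperverse of radius 27.7 Gly.
# The radius and 2c rate come from the abstract; everything else is
# standard unit conversion, included here for illustration.
c_km_s = 299_792.458        # speed of light, km/s
ly_per_mpc = 3.2616e6       # light-years per megaparsec
R_ly = 27.7e9               # hyperverse radius in light-years (from the abstract)

R_mpc = R_ly / ly_per_mpc   # radius in megaparsecs
H = 2 * c_km_s / R_mpc      # circumferential expansion rate, km/s per Mpc
print(round(H, 1))          # prints 70.6, close to measured Hubble-constant values
```

The number that falls out is indeed in the range of measured Hubble-constant values, which is the consistency the abstract points to.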
[4721] vixra:1312.0031 [pdf]
Cosmic Quantization with Respect to the Conservation of Upper-Limit Energy
The conditions of the early universe are not known with any measure of certainty; they are only theories. Therefore, using the assumption that the estimated total energy of the observable universe is conserved, we propose a different lower limit for the gravitational energy and attempt to unify the subatomic and the large-scale universe into one coherent whole, thus showing that the cosmos behaves like a quantum object. The model uses a form of Bohr's quantization to strengthen the unification with quantum gravity. Our model is simple, yet comprehensive.
[4722] vixra:1312.0023 [pdf]
Space Catalysis from Transition Metals? An Astrochemical Journey
Transition metals (TM) are proposed to play a role in astrophysical environments in both gas-phase and solid-state astrochemistry by co-determining the homogeneous/heterogeneous catalysis represented by gas/gas and gas/dust-grain interactions. Their chemistry is a function of temperature, radiation field and chemical composition and is consequently dependent on the astrophysical object in which the TM are localized, i.e. the interstellar medium (ISM), molecular clouds, hot cores and corinos. Five main categories of TM compounds are proposed, classified as: a) pure bulk and clusters; b) naked TM ions; c) TM oxides/minerals or inorganic compounds; d) TM-L (L = ligand) with L a sigma- and/or pi-donor/acceptor species like H/H2, N/N2, CO, H2O; and e) TM-organoligands such as Cp, PAH, R1=°=°=R2. Such a variety of TM compounds opens the door to an enormous potential contribution to fine astrochemical synthesis. Particular attention and interest have been applied to the chemistry of simple TM compounds with the general formula [TMm-Xy]+n, with +n the total charge and X a non-TM element. Constraining the TM and X elements on the basis of their mutual reactivity and cosmic abundances, the chemistry of TM = Fe coupled with N, O and S opens the pathway to the correlated organic chemistry. In particular, the chemistry of the iron molecular oxide [FeO]+1 and nitride [FeN]+1 will be analyzed, owing to their ability to perform C-C and C-H bond activations, opening the pathway to the oxidation/hydroxylation and nitrogenation/amination of organic substrates and contributing, for example, to explaining the detected presence of NH, NH2 and CH3OH in diffuse gas, where current gas-phase and grain-surface chemical models cannot adequately explain the data. In summary, TM fine chemistry is expected to contribute to the known synthesis of organic compounds, leading towards a new path in the astrochemistry field whose qualitative (type of compounds) and quantitative contribution must be unraveled.
[4723] vixra:1312.0019 [pdf]
The Cyclic Variation in the Density of Primes in the Intervals Defined by the Fibonacci Sequence
The Riemann R-function can be used to estimate the number of primes in an interval, where its accuracy is affected by the interval to which it is applied. Here, the successive intervals defined by the Fibonacci sequence will be shown to cause more cycles of R-function over- and under-estimation of primes than any of a large landscape of related sequences (calculations were continued up to one billion). The size of this landscape suggests that a special relationship exists between the Fibonacci sequence and the distribution of primes.
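A minimal numerical sketch of the kind of comparison the abstract describes, with two simplifications of mine: the logarithmic integral stands in for the Riemann R-function, and the cutoff is far below the paper's one billion.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes; is_p[k] == 1 iff k is prime."""
    is_p = bytearray([1]) * (n + 1)
    is_p[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return is_p

def li_between(a, b, steps=10_000):
    """Trapezoid-rule integral of 1/ln t over [a, b] (a >= 2), used here
    as a crude stand-in for the R-function estimate R(b) - R(a)."""
    h = (b - a) / steps
    s = 0.5 * (1 / math.log(a) + 1 / math.log(b))
    s += sum(1 / math.log(a + k * h) for k in range(1, steps))
    return s * h

N = 100_000                      # small cutoff; the paper continues to one billion
is_p = primes_up_to(N)
fibs = [2, 3]
while fibs[-1] + fibs[-2] <= N:
    fibs.append(fibs[-1] + fibs[-2])

for a, b in zip(fibs, fibs[1:]):
    actual = sum(is_p[a:b])      # primes in the Fibonacci interval [a, b)
    est = li_between(a, b)       # estimated count for the same interval
    print(f"[{a:>6}, {b:>6}): {actual:>5} primes, estimate {est:9.1f}")
```

The sign of `actual - est` in each interval is the over/under-estimation whose cycling the abstract studies; reproducing the paper's result would require the actual R-function and the full range.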
[4724] vixra:1312.0016 [pdf]
Some Mathematics Inspired by 137.036
The experimental value of the fine structure constant inverse from physics (approximately 137.036) is shown to also have an interesting role in pure mathematics. Specifically, 137.036 is shown to occur in the minimal solution to one of several slightly asymmetric equations (that is, equations whose left- and right-hand sides are very similar).
[4725] vixra:1312.0010 [pdf]
Lorentz Violation and Modified Cosmology
We propose a modification of the Einstein-Cartan gravity equations and study the related applications to cosmology, in an attempt to account for cosmological mass discrepancies without resorting to dark matter. The deviation from the standard model of cosmology becomes noticeable when the Hubble parameter is comparable to or less than a characteristic scale.
[4726] vixra:1311.0203 [pdf]
Function Estimating Number of Pairs of Primes (p,q) for All z of Form z = p+q
This paper derives a function that estimates the number of unique ways z can be written as z = p + q, where p and q are prime numbers, for every natural number z that can be written in that form.
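The quantity being estimated can be computed directly by brute force. Below is a sketch; the z/(2 ln² z) curve is a generic Hardy-Littlewood-style heuristic included only as a point of comparison, not the function the paper derives:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes; is_p[k] == 1 iff k is prime."""
    is_p = bytearray([1]) * (n + 1)
    is_p[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return is_p

def goldbach_pairs(z, is_p):
    """Number of unique ways to write z = p + q with p <= q both prime."""
    return sum(1 for p in range(2, z // 2 + 1) if is_p[p] and is_p[z - p])

is_p = primes_up_to(10_000)
for z in (10, 100, 1000, 10000):
    actual = goldbach_pairs(z, is_p)
    crude = z / (2 * math.log(z) ** 2)   # generic heuristic, NOT the paper's function
    print(f"z = {z:>5}: {actual:>3} pairs (crude heuristic ~ {crude:.1f})")
```

For example, z = 100 has exactly 6 such pairs (3+97, 11+89, 17+83, 29+71, 41+59, 47+53), which any candidate estimating function can be checked against.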
[4727] vixra:1311.0192 [pdf]
Effective Dynamic Iso-Sphere Inopin Holographic Rings: Inquiry and Hypothesis
In this preliminary work, we focus on a particular iso-geometrical, iso-topological facet of iso-mathematics by suggesting a developing, generalized approach for encoding the states and transitions of spherically-symmetric structures that vary in size. In particular, we introduce the notion of "effective iso-radius" to facilitate a heightened characterization of dynamic iso-sphere Inopin holographic rings (IHR) as they undergo "iso-transitions" between "iso-states". In essence, we propose the existence of "effective dynamic iso-sphere IHRs". In turn, this emergence drives the construction of a new "effective iso-state" platform to encode the generalized dynamics of such iso-complex, non-linear systems in a relatively straightforward approach of spherical-based iso-topic liftings. The initial results of this analysis are significant because they lead to alternative modes of research and application, and thereby pose the question: do these effective dynamic iso-sphere IHRs have application in physics and chemistry? Our hypothesis is: yes. To answer this inquiry and assess this conjecture, this developing work should be subjected to further scrutiny, collaboration, improvement, and hard work via the scientific method in order to advance it as such.
[4728] vixra:1311.0181 [pdf]
Disposing Classical Field Theory, Part V
This paper serves two aims: First, it wraps up the previous parts. Second, it shows that vast areas of particle physics are yet waiting to be explored through relatively inexpensive and small experiments. I divided these into three sections. It is my hope that a future generation of physicists will rediscover the wealth of experimental simplicity for its own, and not for the pleasure and confirmation of any given physical theory. Experiments document the current state of affairs. They live through their repetition and continuous revision by experts and amateurs. This demands experiments to be cheap, common, and ubiquitously carried out.
[4729] vixra:1311.0171 [pdf]
A Microscopic Interpretation of the SM Higgs Mechanism
A model is presented where the Higgs mechanism of the Standard Model is deduced from the alignment of a strongly correlated fermion system in an internal space with $A_4$ symmetry. The ground state is constructed and its energy calculated. Finally, it is claimed that the model may be derived from a field theory in 6+1 dimensions.
[4730] vixra:1311.0164 [pdf]
On Global Solution of Incompressible Navier-Stokes Equations
The fluid equations, named after Claude-Louis Navier and George Gabriel Stokes, describe the motion of fluid substances. These equations arise from applying Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term, hence describing viscous flow. Owing to the specific structure of the NS equations, they can be transformed into inhomogeneous parabolic differential equations: partial differential equations with respect to the space variables and a full differential equation with respect to the time variable, with a time-dependent inhomogeneous part. Finally, orthogonal polynomials, as partial solutions of the resulting Helmholtz equations, were used to derive an analytical solution of the incompressible fluid equations in 1D, 2D and 3D space for a rectangular boundary. The solution in 2D and 3D space for an arbitrarily shaped boundary is expressed in terms of the 2D and 3D global solutions of the Helmholtz equation accordingly.
[4731] vixra:1311.0145 [pdf]
Consistent Extra Time Dimensions: Cosmological Inflation with Inflaton Potential Identically Equal to Zero
Inflation supported by a real massless scalar inflaton field $\varphi$ whose potential is identically equal to zero is described. Assuming that inflation takes place after the Planck scale (after quantum gravity effects are important), zero potential is concomitant with an initial condition for $\varphi$ that is exponentially more probable than an initial condition that assumes an initial inflaton potential of order of the Planck mass. The Einstein gravitational field equations are formulated on an eight-dimensional spacetime manifold of four space dimensions and four time dimensions. The field equations are sourced by a cosmological constant $\Lambda$ and the real massless scalar inflaton field $\varphi$. Two solution classes for the coupled Einstein field equations are obtained that exhibit temporal exponential \textbf{deflation of three of the four time dimensions} and temporal exponential inflation of three of the four space dimensions. For brevity this phenomenon is sometimes simply called ``inflation." We show that \textbf{the extra time dimensions do not generally induce the exponentially rapid growth of fluctuations of quantum fields.} Comoving coordinates for the two \textbf{unscaled} dimensions are chosen to be $(x^4 , x^8 )$ (unscaled means a constant scale factor equal to one). The $x^4$ coordinate corresponds to our universe's observed physical time dimension, while the $x^8$ coordinate corresponds to a new spatial dimension that may be compact. $\partial_{x^8}$ terms of $\varphi$ and the metric are seen to play the role of an effective inflaton potential in the dynamical field equations. In this model, after ``inflation" the observable physical macroscopic world appears to a classical observer to be a homogeneous, isotropic universe with three space dimensions and one time dimension.
[4732] vixra:1311.0143 [pdf]
Quantum Interpretation of the Impedance Model
Quantum interpretations try to explain the emergence of the world we observe from formal quantum theory. Impedances govern the flow of energy and are helpful in such attempts. We include quantum impedances in comparisons of selected interpretations.
[4733] vixra:1311.0115 [pdf]
The Iso-Dual Tesseract
In this work, we deploy Santilli's iso-dual iso-topic lifting and Inopin's holographic ring (IHR) topology as a platform to introduce and assemble a tesseract from two inter-locking, iso-morphic, iso-dual cubes in Euclidean triplex space. For this, we prove that such an "iso-dual tesseract" can be constructed by following a procedure of simple, flexible, topologically-preserving instructions. Moreover, these novel results are significant because the tesseract's state and structure are directly inferred from the one initial cube (rather than two distinct cubes), which identifies a new iso-geometrical inter-connection between Santilli's exterior and interior dynamical systems.
[4734] vixra:1311.0101 [pdf]
A Clifford Algebra Based Grand Unification Program of Gravity and the Standard Model : A Review Study
A Clifford $ Cl ( 5, C ) $ Unified Gauge Field Theory formulation of Conformal Gravity and $ U (4 ) \times U ( 4 ) \times U(4) $ Yang-Mills in $ 4D$, is reviewed, along with its implications for the Pati-Salam group $ SU (4) \times SU(2)_L \times SU(2)_R$, and $Trinification$ GUT models of $3$ fermion generations based on the group $ SU (3)_C \times SU (3)_L \times SU(3)_R$. We proceed with a brief review of a unification program of $4D$ Gravity and $SU(3) \times SU (2) \times U (1)$ Yang-Mills emerging from $8D$ pure Quaternionic Gravity. A realization of $E_8$ in terms of the $Cl(16) = Cl (8) \otimes Cl(8)$ generators follows, as a preamble to Tony Smith's $E_8$ and $ Cl(16) = Cl(8) \otimes Cl(8)$ unification model in $8D$. The study of Chiral Fermions and Instanton Backgrounds in $ {\bf CP}^2, {\bf CP}^3$ related to the problem of obtaining $3$ fermion generations is thoroughly studied. We continue with the evaluation of the coupling constants and particle masses based on the geometry of bounded complex homogeneous domains and geometric probability theory. An analysis of neutrino masses, Cabbibo-Kobayashi-Maskawa quark-mixing matrix parameters and neutrino-mixing matrix parameters follows. We finalize with some concluding remarks about other proposals for the unification of Gravity and the Standard Model, like string, $M, F$ theory and Noncommutative and Nonassociative Geometry.
[4735] vixra:1311.0073 [pdf]
A Novel Method for Calculating Free Energy Difference Between Systems
Calculating free energy differences is a topic of substantial interest with many applications, including chemical reactions used in organic chemistry, biochemistry and medicine. In the equilibrium free energy methods used in molecular simulations, one molecule is transformed into another to calculate the energy difference. However, when the compared molecules have different numbers of atoms, these methods cannot be applied directly, since the corresponding transformation involves breaking covalent bonds, which causes a phase transition and impractical sampling. Thus, quantum mechanical simulations, which are significantly more demanding computationally, are usually brought in to calculate the free energies of chemical reactions. Here we show that the free energies can be calculated by simple classical molecular simulations followed by analytic or numerical calculations. In this method each molecule is transformed into a replica of itself in which the VDW and Coulomb terms of the differing atoms are relaxed, in order to eliminate the partition-function difference arising from these terms. Then, since each transformed system can be treated as a set of non-interacting systems, the remaining difference in the (originally highly complex) partition function can be calculated directly. Since molecular force fields can often be generated automatically and the calculations suggested here are rather simple, the method can form a basis for automated free energy computation of chemical reactions.
[4736] vixra:1311.0048 [pdf]
Simplest Explanation of Dark Matter and Dark Energy
We give simple, concise answers to the open astronomical questions. Einstein's first equations did not describe all of the observations; thus, Einstein added to space-time the non-material cosmological constant, thereby describing the non-material Dark Energy. The historical Einstein equations G=T relate space (the non-material side on the left) to the matter tensor on the right. To empirically describe early states near the Singularity, one introduces f(R) Gravity. The latter is non-material Dark Matter, as proved in this paper. These non-material things I simply call "ethers".
[4737] vixra:1311.0047 [pdf]
An Analytic Mathematical Model to Explain the Spiral Structure and Rotation Curve of NGC 3198
PACS: 98.62.-g. An analytical model of galactic morphology is presented. The model resolves two inter-related features of spiral galaxies: the flat velocity rotation profile and the spiral morphology of such galaxies. It is a mathematical transformation dictated by the general theory of relativity applied to rotating polar coordinate systems that conserve the metric. The model shows that the flat velocity rotation profile and the spiral shape of certain galaxies are both products of the general theory. Validation of the model is presented by application to 878 rotation curves provided by Salucci, and by comparing the results of a derived distance modulus to those obtained using Cepheid variables, water masers and Tully-Fisher calculations. The model suggests means of determining galactic linear density, mass and angular momentum. We also show that the morphology of NGC 3198 is congruent to the geodesic of a rotating reference frame and is therefore gravitationally viscous and self-bound.
[4738] vixra:1311.0043 [pdf]
A Case for Local Realism
The "Schrödinger cat" states supposed by quantum mechanics need not be considered intrinsically probabilistic or otherwise inconsistent with the existence of the particle in the physically real state assumed by classical physics. The further states contemplated by the formalism of standard quantum mechanics could be states, not of the particle itself, but of the apparatus - oscillatory disturbances induced by reaction as the particle is measured and mimicking the wave characteristics of a particle. If quantum states are understood in this way, much of what has seemed mysterious in quantum behaviour becomes consistent with local realism.
[4739] vixra:1311.0031 [pdf]
Exterior and Interior Dynamic Iso-Sphere Holographic Rings with an Inverse Iso-Duality
In this preliminary work, we use a dynamic iso-unit function to iso-topically lift the "static" Inopin holographic ring (IHR) of the unit sphere to an interconnected pair of "dynamic iso-sphere IHRs" (iso-DIHR), where the IHR is simultaneously iso-dual to both a magnified "exterior iso-DIHR" and a de-magnified "interior iso-DIHR". For both the continuously-varying and discretely-varying cases, we define the dynamic iso-amplitude-radius of one iso-DIHR as being equivalent to the dynamic iso-amplitude-curvature of its counterpart, and conversely. These initial results support the hypothesis that a new IHR-based mode of iso-geometry and iso-topology may be in order, which is significant because the interior and exterior zones delineated by the IHR are fundamentally "iso-dual inverses" and may be inferred from one another.
[4740] vixra:1311.0030 [pdf]
Mandelbrot Iso-Sets: Iso-Unit Impact Assessment
In this introductory work, we use Santilli's iso-topic lifting as a cutting-edge platform to explore Mandelbrot's set. The objective is to upgrade Mandelbrot's complex quadratic polynomial with iso-multiplication and then computationally probe the effects on this revolutionary fractal. For this, we define the "iso-complex quadratic polynomial" and engage it to generate a locally iso-morphic array of "Mandelbrot iso-sets" by varying the iso-unit, where the connectedness property is topologically preserved in each case. The iso-unit broadens and strengthens the chaotic analysis, and authorizes an enhanced classification and demystification of such complex systems because it equips us with an additional degree of freedom: the new Mandelbrot iso-set array is an improvement over the traditional Mandelbrot set because it is significantly more general. In total, the experimental results exemplify dynamic iso-spaces and indicate two modes of topological effects: scale-deformation and boundary-deformation. Ultimately, these new and preliminary developments spark further insight into the emerging realm of iso-fractals.
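A minimal escape-time sketch of this construction, assuming Santilli-style iso-multiplication in which the product picks up a factor T (the inverse of the iso-unit), so the quadratic map becomes z -> T z^2 + c; the function name and parameter choices are illustrative, not taken from the paper:

```python
def in_iso_mandelbrot(c, T=1.0, max_iter=200, bound=4.0):
    """Escape-time membership test for the iso-quadratic map z -> T*z*z + c.

    T is taken as the inverse of the iso-unit (an assumption for this
    sketch); T == 1 recovers the standard Mandelbrot iteration z -> z*z + c.
    """
    z = 0j
    for _ in range(max_iter):
        z = T * z * z + c
        if abs(z) > bound:
            return False
    return True

# T = 1: ordinary Mandelbrot behavior
print(in_iso_mandelbrot(0j), in_iso_mandelbrot(-1 + 0j), in_iso_mandelbrot(1 + 0j))
# Scale-deformation: c = -4 escapes for T = 1 but stays bounded for T = 0.5
print(in_iso_mandelbrot(-4 + 0j), in_iso_mandelbrot(-4 + 0j, T=0.5))
```

Substituting w = Tz shows that c lies in the T-iso-set exactly when Tc lies in the ordinary set, i.e. this particular lifting rescales the standard set by 1/T, consistent with the scale-deformation mode the abstract reports; probing boundary-deformation would presumably require a position-dependent iso-unit, which is not attempted here.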
[4741] vixra:1311.0019 [pdf]
The Analysis of Barcelo, Finazzi and Liberati Applied to Both Alcubierre and Natario Warp Drive Spacetimes: Horizons, Infinite Doppler Blueshifts and Quantum Instabilities (Natario <> Alcubierre)
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at present two known solutions: the Alcubierre warp drive, discovered in $1994$, and the Natario warp drive, discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress-energy-momentum tensor is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the huge amount of negative energy needed to sustain the warp bubble. In order to perform an interstellar space voyage to a "nearby" star $20$ light-years away with $3$ potentially habitable exo-planets (Gliese $667c$) at superluminal speeds in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is $1.000.000.000.000.000.000.000.000$ times bigger in magnitude than the mass of the planet Earth!!!
Some years ago Barcelo, Finazzi and Liberati published a work in which the composed mixed tensor $\langle {T_\mu}^\nu\rangle$, obtained from the negative energy density tensor $T_{\mu\nu}$ ($\mu=0,\nu=0$) of the $1+1$ dimensional Alcubierre warp drive metric, diverges when the velocity of the ship $vs$ exceeds the speed of light (see pg $2$ in \cite{ref19}). We demonstrate in this work that in fact this does not happen, and that their results must be re-examined. We introduce here a shape function that defines the Natario warp drive spacetime as an excellent candidate to lower the negative energy density requirements from $10^{48}$ to affordable levels. We also discuss horizons and Doppler blueshifts, which affect the Alcubierre spacetime but not its Natario counterpart.
[4742] vixra:1311.0016 [pdf]
Toward a Topological Iso-String Theory in 4D Iso-Dual Space-Time: Hypothesis and Preliminary Construction
We propose a preliminary framework that engages iso-triplex numbers and deformation order parameters to encode the spatial states of Iso Open Topological Strings (Iso-OTS) for fermions and the temporal states of Iso Closed Topological Strings (Iso-CTS) for bosons, where space and time are iso-dual. The objective is to introduce an elementary Topological Iso-String Theory (TIST) that complies with the holographic principle and fundamentally represents the twisting, winding, and deforming of helical, spiral, and vortical information structures---by default---for attacking superfluidic motion patterns and energy states with iso-topic lifting. In general, these preliminary results indicate a cutting-edge, flexible, consistent, and powerful iso-mathematical framework with considerable representational capability that warrants further examination, collaboration, construction, and discipline.
[4743] vixra:1311.0014 [pdf]
Where to Search for Quantum Gravity
The well-known expectations of unifying general relativity with quantum mechanics have not given any essential results up to now. If quantum gravity has such a poor set of tiny effects as in existing models, it may not be of great interest for physics. But there is another approach to quantum gravity, in which it seems to be a common ground of general relativity and quantum mechanics rather than their consequence. Important features of the author's model of low-energy quantum gravity, such as the quantum mechanism of classical gravity, the quantum mechanism of redshift and the specific relaxation of any photonic flux, are described here. In the model, the Newton and Hubble constants may be computed, but the latter is not connected with any expansion. The model fits SN1a and GRB observations very well without dark energy. Some possibilities to verify the model in ground-based experiments are discussed.
[4744] vixra:1311.0010 [pdf]
Towards a 4-D Extension of the Quantum Helicity Rotator with a Hyperbolic Rotation Angle of Gravitational Nature.
In this paper we present an anachronistic pre-YM and pre-GR attempt to formulate an alternative mathematical physics language in order to treat the problem of the electron in twentieth century physics. We start the construction of our alternative to the Minkowski-Laue consensus by putting spin in the metric. This allows us to simplify Lorentz transformations as metric transformations with invariant coordinates. Using the developed formalism on the Pauli-Dirac level, we expand the quantum helicity operators into helicity rotators and then extend them from the usual 3-D expressions to 4-D variants. We connect the resulting 4-D Dirac-Weyl hyperbolic rotators to mathematical expressions that are very similar to their analogues in the pre-General-Relativity attempts towards a relativistic theory of gravity. This relative match motivates us to interpret the 4-D hyperbolic rotation angle as possibly gravitational in nature. At the end we apply the 4-D hyperbolic rotator to the Dirac equation and investigate how it might change this equation and the related Lagrangian. We are curious to what extent the result enters the realm of quantum gravity and thus might be beyond the pre-GR relativistic theories of gravity of Abraham, Nordstr{\"o}m, Mie and Einstein.
[4745] vixra:1311.0005 [pdf]
Sedeonic Theory of Massive Fields
In the present paper we develop the description of massive fields on the basis of space-time algebra of sixteen-component sedeons. The generalized sedeonic second-order equation for the potential of massive field is proposed. It is shown that this equation can be reformulated in the form of a system of Maxwell-like equations for the field strengths. We also discuss the generalized sedeonic first-order equation for massive field.
[4746] vixra:1310.0262 [pdf]
The Gravitational Origin of Velocity Time Dilation: A Generalization of the Lorentz Factor for Comparable Masses
Does velocity time dilation (clock drift) depend on a body's velocity in the Center of Gravity Frame of the local gravitational system, rather than on relative velocities between bodies? Experiments that have measured differential clock rates have conclusively proven this to be true, and hinted at a gravitational origin of velocity time dilation. Extending this understanding, a generalized form of the velocity time dilation metric (Lorentz factor) including masses is derived. This allows prediction of velocity time dilation between bodies of any mass ratio, including comparable masses such as the Earth-Moon system. This is not possible using the Lorentz factor in its current form. The generalized form of the Lorentz factor remains consistent with results of all experiments conducted.
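For reference, the "current form" of the Lorentz factor that this abstract proposes to generalize is the standard velocity time dilation factor, which contains no mass terms (the paper's mass-dependent generalization is not reproduced here):

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad \Delta t = \gamma\, \Delta t_{0}.
```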
[4747] vixra:1310.0255 [pdf]
Demystification of the Geometric Fourier Transforms
As it will turn out in this paper, the recent hype about most of the Clifford Fourier transforms is not worth the pain. Almost every one that has a real application is separable, and these transforms can be decomposed into a sum of real-valued transforms with constant multivector factors. This fact makes their interpretation, their analysis and their implementation almost trivial. Keywords: geometric algebra, Clifford algebra, Fourier transform, trigonometric transform, convolution theorem.
[4748] vixra:1310.0252 [pdf]
Thermodynamic Response Functions and Maxwell Relations for a Kerr Black Hole
Assuming the existence of a fundamental thermodynamic relation, the classical thermodynamics of a black hole with mass and angular momentum is given. New definitions of the response functions and $TdS$ equations are introduced, and mathematical analogues of the Euler equation and Gibbs-Duhem relation are found. Thermodynamic stability is studied from concavity conditions, resulting in an unstable equilibrium over the whole domain except for a region of local stable equilibrium. The Maxwell relations are written, allowing one to build the thermodynamic squares. Our results show an interesting analogy between the thermodynamics of gravitational and magnetic systems.
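As a generic orientation (standard notation, not the paper's own new definitions), a Maxwell relation for a Kerr black hole with fundamental relation $M = M(S, J)$ follows from the equality of mixed second derivatives of $M$:

```latex
T = \left(\frac{\partial M}{\partial S}\right)_{J}, \qquad
\Omega = \left(\frac{\partial M}{\partial J}\right)_{S}
\qquad\Longrightarrow\qquad
\left(\frac{\partial T}{\partial J}\right)_{S}
= \left(\frac{\partial \Omega}{\partial S}\right)_{J},
```

since both sides equal $\partial^{2} M / \partial S\, \partial J$.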
[4749] vixra:1310.0249 [pdf]
Extending Fourier Transformations to Hamilton’s Quaternions and Clifford’s Geometric Algebras
We show how Fourier transformations can be extended to Hamilton’s algebra of quaternions. This was initially motivated by applications in nuclear magnetic resonance and electrical engineering, followed by an ever wider range of applications in color image and signal processing. Hamilton’s algebra of quaternions is only one example of the larger class of Clifford’s geometric algebras, complete algebras encoding a vector space and all its subspace elements. We introduce how Fourier transformations are extended to Clifford algebras and applied in electromagnetism, and in the processing of images, color images, vector field and climate data. Keywords: Clifford geometric algebra, quaternion Fourier transform, Clifford Fourier transform, Clifford Fourier-Mellin transform, multivector wavepackets, spacetime Fourier transform. AMS Subj. Class. 15A66, 42A38
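The extension described above can be sketched in code. This is a toy discrete quaternion Fourier transform in one common left-sided formulation; the choice of the pure unit quaternion axis `MU` and all function names are illustrative assumptions, not taken from the paper:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions represented as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# Fixed pure unit quaternion axis (MU * MU = -1), playing the role of i.
MU = (0.0, 1/math.sqrt(3), 1/math.sqrt(3), 1/math.sqrt(3))

def qexp(theta):
    """exp(MU * theta) = cos(theta) + MU * sin(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c, s*MU[1], s*MU[2], s*MU[3])

def qft(signal, inverse=False):
    """Left-sided discrete quaternion Fourier transform of a quaternion signal."""
    N = len(signal)
    sign = 1.0 if inverse else -1.0
    out = []
    for u in range(N):
        acc = (0.0, 0.0, 0.0, 0.0)
        for n in range(N):
            term = qmul(qexp(sign * 2 * math.pi * u * n / N), signal[n])
            acc = tuple(p + q for p, q in zip(acc, term))
        if inverse:
            acc = tuple(p / N for p in acc)
        out.append(acc)
    return out
```

Because exponentials sharing the same axis commute, the inverse transform recovers the original signal exactly, just as in the complex case.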
[4750] vixra:1310.0248 [pdf]
The Quest for Conformal Geometric Algebra Fourier Transformations
Conformal geometric algebra is preferred in many applications. Clifford Fourier transforms (CFT) allow holistic signal processing of (multi) vector fields, different from marginal (channel wise) processing: flow fields, color fields, electromagnetic fields, ... The Clifford algebra sets (manifolds) of $\sqrt{-1}$ lead to continuous manifolds of CFTs. A frequently asked question is: What does a Clifford Fourier transform of conformal geometric algebra look like? We try to give a first answer. Keywords: Clifford geometric algebra, Clifford Fourier transform, conformal geometric algebra, horosphere. AMS Subj. Class. 15A66, 42A38
[4751] vixra:1310.0236 [pdf]
Model-Independent Cosmological Tests
Several non-fiducial cosmological tests are discussed allowing various models to be conclusively differentiated from one another. The four models tested include ΛCDM, the Friedmann-Lemaitre metric and a recently proposed steady state with and without local expansion. The cosmological tests range from redshift versus distance modulus to the angular size and time dependence of distant objects. Recent observations allow strict constraints on both the size and evolution of faint blue galaxies up to 23B. The solution to the faint blue galaxy problem is further discussed relative to size versus absolute magnitude, number densities and observations of minimal evolution. Models with expanding metrics are ruled out due to incorrect predictions of angular diameter distance and time-dependence. Observations instead depict a steady state universe with asymptotically flat gravitational potential and embedded bulk flows.
[4752] vixra:1310.0235 [pdf]
Scale-Invariant Embeddings in a Riemannian Spacetime
A framework for calculations in a semi-Riemannian space with the typical metric connection and curvature expressions is developed, with an emphasis on deriving them from an embedding function as a more fundamental object than the metric tensor. The scale-invariant and 'linearizing' logarithmic nature of an 'infinitesimal embedding' of a tangent space into its neighbourhood is observed, and a composition scheme of spacetime scenarios from 'outer' non-invariant and 'inner' scale-invariant embeddings is briefly outlined.
[4753] vixra:1310.0231 [pdf]
Saint-Venant's Principle: Experimental and Analytical
Mathematical provability, then classification, of Saint-Venant's Principle are discussed. Beginning with the simplest case of Saint-Venant's Principle, four problems of elasticity are discussed mathematically. It is concluded that there exist two categories of elastic problems concerning Saint-Venant's Principle: Experimental Problems, whose Saint-Venant's Principle is established in virtue of supporting experiment, and Analytical Problems, whose Saint-Venant's decay is proved or disproved mathematically, based on fundamental equations of linear elasticity. The boundary-value problems whose stress boundary condition consists of a Dirac measure, a "singular distribution", cannot be dealt with by the mathematics of elasticity for "proof" or "disproof" of their Saint-Venant's decay, in terms of mathematical coverage.
[4754] vixra:1310.0228 [pdf]
A Curious Identity Involving G, Electric Permittivity, and the Boltzmann Constant
There seems to be an identity that relates certain universal constants: k = sqrt(4πeG) = 8.617(22023) × 10^{-11}, where k is the Boltzmann constant, e the electric permittivity, and G the gravitational constant. There is no real physics that establishes it; it was only discovered accidentally when an attempt was made to combine the two inverse square laws of electrostatics and gravity into one complex inverse square law with complex-valued fundamental particles. The identity may be a clue that relates the forces of gravity, electromagnetism and the nuclear forces.
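The quoted number can be checked directly. A minimal sketch, assuming "e" means the vacuum electric permittivity in SI units (the abstract does not fix units), using CODATA constant values:

```python
import math

# Numerical check of the claimed coincidence k = sqrt(4*pi*e*G), taking
# "e" to be the vacuum permittivity (an assumption of this sketch).
epsilon_0 = 8.8541878128e-12   # vacuum electric permittivity, F/m (CODATA)
G = 6.67430e-11                # gravitational constant, m^3 kg^-1 s^-2 (CODATA)

value = math.sqrt(4 * math.pi * epsilon_0 * G)
print(f"{value:.4e}")  # close to the quoted 8.617(22023) x 10^-11
```

The result agrees with the quoted value to about four significant figures; note the coincidence is purely numerical, since the units on the two sides do not match.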
[4755] vixra:1310.0211 [pdf]
Enumeration of All Primitive Pythagorean Triples with Hypotenuse Less Than or Equal to N
All primitive Pythagorean triples with hypotenuse less than or equal to $N$ can be counted with the general formulas for generating sequences of Pythagorean triples ordered by $c-b$. The algorithm calculates the interval $(1,m)$ such that $c=N$, then $\nu$ is calculated from each $m$ to get the interval $(n_1,n_\nu)$, and the condition $(m,n_\nu)=1$ is used for counting. The triples can be enumerated manually if $N$ is small, but for large $N$ the algorithm must be implemented in a computer programming language.
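For comparison, the same count can be obtained with the standard Euclid parametrization rather than the paper's $(m,\nu)$ bookkeeping. A minimal sketch, with illustrative function names:

```python
from math import gcd

def count_primitive_triples(N):
    """Count primitive Pythagorean triples (a, b, c) with hypotenuse c <= N.

    Uses Euclid's parametrization a = m^2 - n^2, b = 2mn, c = m^2 + n^2
    with m > n >= 1, gcd(m, n) = 1 and m - n odd, which yields each
    primitive triple exactly once.
    """
    count = 0
    m = 2
    while m * m + 1 <= N:  # smallest hypotenuse reachable for this m
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1 and m * m + n * n <= N:
                count += 1
        m += 1
    return count

print(count_primitive_triples(100))  # -> 16
```

For example, the 16 primitive triples with $c \le 100$ begin (3, 4, 5), (5, 12, 13), (8, 15, 17), ...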
[4756] vixra:1310.0198 [pdf]
Dynamic Iso-Topic Lifting with Application to Fibonacci's Sequence and Mandelbrot's Set
In this exploration, we introduce and define "dynamic iso-spaces", which are cutting-edge iso-mathematical constructions that are built with "dynamic iso-topic liftings" for "dynamic iso-unit functions". For this, we consider both the continuous and discrete cases. Subsequently, we engineer two simple examples that engage Fibonacci's sequence and Mandelbrot's set to define a "Fibonacci dynamic iso-space" and a "Mandelbrot dynamic iso-space", respectively. In total, this array of resulting iso-structures indicates that a new branch of iso-mathematics may be in order.
[4757] vixra:1310.0191 [pdf]
Programming a Planck Black-Hole Simulation Hypothesis Universe and the Cosmological Constant
The Simulation Hypothesis proposes that all of reality is an artificial simulation, analogous to a computer simulation. Outlined here is a low computational cost method for programming cosmic microwave background parameters in a Planck-time Simulation Hypothesis Universe. The model initializes `micro Planck-size black-holes' as entities that embed the Planck units. For each incremental unit of Planck time, the universe expands by adding 1 micro black-hole, so dark energy is not required. The mass-space parameters increment linearly and the electric parameters in a sqrt-progression; thus for electric parameters the early black-hole transforms most rapidly. The velocity of expansion is constant and is the origin of the speed of light, the Hubble constant becomes a measure of the black-hole radius, and the CMB radiation energy density correlates to the Casimir force. A peak frequency of 160.2 GHz correlates to a 14.624 billion year old black-hole. The cosmological constant, being the age when the simulation reaches its limit, approximates $t$ = 10$^{123} t_p$.
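The incremental scheme described above can be sketched as a toy loop. All names and normalizations here are illustrative assumptions (dimensionless unit quantities), not the paper's actual definitions:

```python
import math

def simulate(steps):
    """Advance the toy universe by `steps` units of Planck time.

    Each step adds one 'micro black-hole': the mass-space parameter
    grows linearly with the step count, while the electric parameter
    grows as sqrt(step), so it changes fastest at early times.
    """
    history = []
    for t in range(1, steps + 1):
        mass_param = t                  # linear increment
        electric_param = math.sqrt(t)   # sqrt-progression
        history.append((t, mass_param, electric_param))
    return history

final = simulate(100)[-1]
print(final)  # (100, 100, 10.0)
```

The relative growth rates make the abstract's point concrete: between steps 1 and 100 the electric parameter grows tenfold while the mass-space parameter grows a hundredfold.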
[4758] vixra:1310.0186 [pdf]
Fractal Structure of the Universe
A hierarchic fractal structure of the Universe, enabling one to redefine its observed characteristics, is considered. According to the hypothesis the Universe consists of an infinite number of spatial and hierarchic fractal levels of matter that are nested within each other (fractals mean self-similar events, processes and spatial forms) and represented as moving spaces, presumably of a spherical shape. Distinguished in ascending order are the following basic fractals of the Universe, conventionally connected with the kinds of matter interaction: nuclear, atomic, electromagnetic and gravitational. It can also be assumed that there exist fractals which are older than the gravitational ones. Each fractal is characterized by finite geometrical dimensions and a finite value of its own energy, consequently, by a finite value of the spatial density of energy. Whenever the lower hierarchic level fractals (nuclear) transit to the higher levels (gravitational), the space and spatial density of energy inflate, and their information variety increases (as a sum of the informational variety of spatial and energy forms and their levels). In the transition from lower-level to higher-level fractals the energy density inflates, which implies that (i) each fractal type is characterized by a finite maximum transfer velocity of internal physical interaction, and (ii) this velocity increases. It means that the maximum physical interaction velocity within the gravitational fractal exceeds the velocity of light, which is the maximum for the electromagnetic fractal, while the maximum physical interaction velocity of the atomic fractal is lower than the velocity of light (it is reduced by the fine structure constant).
The fractal structure of the Universe hypothesis makes it possible for the authors to put forward several other assumptions: dark energy does not exist and the apparent effect of its presence in the electromagnetic fractal as well as appearance of asymmetry between the matter and antimatter are explained by its finite geometrical dimensions, spherical shape and rotational motion of this sphere around at least one of the axes.
[4759] vixra:1310.0160 [pdf]
Commentary Relative to the Seismic Structure of the Sun: Internal Rotation, Oblateness, and Solar Shape
Helioseismological studies have the ability to yield tremendous insight with respect to the internal structure and shape of the solar body. Such observations indicate that while the convection zone displays differential rotation, the core rotates as a rigid body. The latter is located below the tachocline layer, where powerful shear stresses are believed to occur. Beyond simple oblateness, seismological studies indicate that the Sun displays significant higher order shape terms (quadrupole, hexadecapole) which may, or may not, vary with the solar cycle. In this work, such seismological findings are briefly discussed with the intent of highlighting that 1) the differential rotation of the convection zone, 2) the rigid body rotation of the core, 3) the presence of the tachocline layer and 4) the appearance of higher order shape terms, all lend support to the idea that the solar body is composed of material in the condensed state. In this regard, the existence of the tachocline layer in the solar interior and the solid body rotation of the core constitute the nineteenth and twentieth lines of evidence that the Sun is condensed matter.
[4760] vixra:1310.0159 [pdf]
Commentary on the Radius of the Sun: Optical Illusion or Manifestation of a Real Surface?
In modern solar theory, the photospheric surface merely acts as an optical illusion. Gases cannot support the existence of such a boundary. Conversely, the liquid metallic hydrogen model supports the idea that the Sun has a distinct surface. Observational astronomy continues to report increasingly precise measures of solar radius and diameter. Even the smallest temporal variations in these parameters would have profound implications relative to modeling the Sun and understanding climate fluctuations on Earth. A review of the literature convincingly demonstrates that the solar body does indeed possess a measurable radius which provides, along with previous discussions (Robitaille P.M. On the Presence of a Distinct Solar Surface: A Reply to Hervé Faye. Progr. Phys., 2011, v. 3, 75–78), the twenty-first line of evidence that the Sun is comprised of condensed matter.
[4761] vixra:1310.0158 [pdf]
Commentary on the Liquid Metallic Hydrogen Model of the Sun: Insight Relative to Coronal Holes, Sunspots, and Solar Activity
While mankind will always remain unable to sample the interior of the Sun, the presence of sunspots and coronal holes can provide clues as to its subsurface structure. Insight relative to the solar body can also be gained by recognizing that the Sun must exist in the condensed state and support a discrete lattice structure, as required for the production of its continuous spectrum. In this regard, the layered liquid metallic hydrogen lattice advanced as a condensed model of the Sun (Robitaille P.M. Liquid Metallic Hydrogen: A Building Block for the Liquid Sun. Progr. Phys., 2011, v. 3, 60–74; Robitaille P.M. Liquid Metallic Hydrogen II: A Critical Assessment of Current and Primordial Helium Levels in Sun. Progr. Phys., 2013, v. 2, 35–47; Robitaille J.C. and Robitaille P.M. Liquid Metallic Hydrogen III. Intercalation and Lattice Exclusion Versus Gravitational Settling and Their Consequences Relative to Internal Structure, Surface Activity, and Solar Winds in the Sun. Progr. Phys., 2013, v. 2, in press) provides the ability to add structure to the solar interior. This constitutes a significant advantage over the gaseous solar models. In fact, a layered liquid metallic hydrogen lattice and the associated intercalation of non-hydrogen elements can help to account for the position of sunspots and coronal holes. At the same time, this model provides a greater understanding of the mechanisms which drive solar winds and activity.
[4762] vixra:1310.0157 [pdf]
Commentary on the Liquid Metallic Hydrogen Model of the Sun II. Insight Relative to Coronal Rain and Splashdown Events
Coronal rain represents blobs of solar material with a width of ~300 km and a length of ~700 km which are falling from the active region of the corona towards the solar surface along loop-like paths. Conversely, coronal showers are comprised of much larger bulks of matter, or clumps of solar rain. Beyond coronal rain and showers, the expulsion of solar matter from the surface, whether through flares, prominences, or coronal mass ejections, can result in massive disruptions which have been observed to rise far into the corona, return towards the Sun, and splashdown onto the photosphere. The existence of coronal rain and the splashdown of mass ejections onto the solar surface constitute the twenty-third and twenty-fourth lines of evidence that the Sun is condensed matter.
[4763] vixra:1310.0156 [pdf]
Commentary on the Liquid Metallic Hydrogen Model of the Sun III. Insight into Solar Lithium Abundances
The apparent depletion of lithium represents one of the greatest challenges to modern gaseous solar models. As a result, lithium has been hypothesized to undergo nuclear burning deep within the Sun. Conversely, extremely low lithium abundances can be easily accounted for within the liquid metallic hydrogen model, as lithium has been hypothesized to greatly stabilize the formation of metallic hydrogen (E. Zurek et al. A little bit of lithium does a lot for hydrogen. Proc. Nat. Acad. Sci. USA, 2009, v. 106, no. 42, 17640–17643). Hence, the abundances of lithium on the solar surface can be explained, not by requiring the nuclear burning of this element, but rather, by suggesting that the Sun is retaining lithium within the solar body in order to help stabilize its liquid metallic hydrogen lattice. Unlike lithium, many of the other elements synthesized within the Sun should experience powerful lattice exclusionary forces as they are driven out of the intercalate regions between the layered liquid metallic hydrogen hexagonal planes (Robitaille J.C. and Robitaille P.M. Liquid Metallic Hydrogen III. Intercalation and Lattice Exclusion Versus Gravitational Settling and Their Consequences Relative to Internal Structure, Surface Activity, and Solar Winds in the Sun. Progr. Phys., 2013, v. 2, in press). As for lithium, its stabilizing role within the solar interior helps to account for the lack of this element on the surface of the Sun.
[4764] vixra:1310.0155 [pdf]
Commentary Relative to the Emission Spectrum of the Solar Atmosphere: Further Evidence for a Distinct Solar Surface
The chromosphere and corona of the Sun represent tenuous regions which are characterized by numerous optically thin emission lines in the ultraviolet and X-ray bands. When observed from the center of the solar disk outward, these emission lines experience modest brightening as the limb is approached. The intensity of many ultraviolet and X-ray emission lines nearly doubles when observation is extended just beyond the edge of the disk. These findings indicate that the solar body is opaque in this frequency range and that an approximately twofold greater region of the solar atmosphere is being sampled outside the limb. These observations provide strong support for the presence of a distinct solar surface. Therefore, the behavior of the emission lines in this frequency range constitutes the twenty-fifth line of evidence that the Sun is comprised of condensed matter.
[4765] vixra:1310.0154 [pdf]
The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere I. Continuous Emission and Condensed Matter Within the Chromosphere
The continuous spectrum of the solar photosphere stands as the paramount observation with regard to the condensed nature of the solar body. Studies relative to Kirchhoff’s law of thermal emission (e.g. Robitaille P.-M. Kirchhoff’s law of thermal emission: 150 years. Progr. Phys., 2009, v. 4, 3–13.) and a detailed analysis of the stellar opacity problem (Robitaille P.M. Stellar opacity: The Achilles’ heel of the gaseous Sun. Progr. Phys., 2011, v. 3, 93–99) have revealed that gaseous models remain unable to properly account for the generation of this spectrum. Therefore, it can be stated with certainty that the photosphere is comprised of condensed matter. Beyond the solar surface, the chromospheric layer of the Sun also generates a weak continuous spectrum in the visible region. This emission exposes the presence of material in the condensed state. As a result, above the level of the photosphere, matter exists in both gaseous and condensed forms, much like within the atmosphere of the Earth. The continuous visible spectrum associated with the chromosphere provides the twenty-sixth line of evidence that the Sun is condensed matter.
[4766] vixra:1310.0153 [pdf]
The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere II. Continuous Emission and Condensed Matter Within the Corona
The K-corona, a significant portion of the solar atmosphere, displays a continuous spectrum which closely parallels photospheric emission, though without the presence of overlying Fraunhofer lines. The E-corona exists in the same region and is characterized by weak emission lines from highly ionized atoms. For instance, the famous green emission line from coronium (FeXIV) is part of the E-corona. The F-corona exists beyond the K/E-corona and, like the photospheric spectrum, is characterized by Fraunhofer lines. The F-corona represents photospheric light scattered by dust particles in the interplanetary medium. Within the gaseous models of the Sun, the K-corona is viewed as photospheric radiation which has been scattered by relativistic electrons. This scattering is thought to broaden the Fraunhofer lines of the solar spectrum such that they can no longer be detected in the K-corona. Thus, the gaseous models of the Sun account for the appearance of the K-corona by distorting photospheric light, since they are unable to have recourse to condensed matter to directly produce such radiation. Conversely, it is now advanced that the continuous emission of the K-corona and associated emission lines from the E-corona must be interpreted as manifestations of the same phenomenon: condensed matter exists in the corona. It is well-known that the Sun expels large amounts of material from its surface in the form of flares and coronal mass ejections. Given a liquid metallic hydrogen model of the Sun, it is logical to assume that such matter, which exists in the condensed state on the solar surface, continues to manifest its nature once expelled into the corona. Therefore, the continuous spectrum of the K-corona provides the twenty-seventh line of evidence that the Sun is composed of condensed matter.
[4767] vixra:1310.0152 [pdf]
The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere III. Importance of Continuous Emission Spectra from Flares, Coronal Mass Ejections, Prominences, and Other Coronal Structures
The solar corona and chromosphere are often marked by eruptive features, such as flares, prominences, loops, and coronal mass ejections, which rise above the photospheric surface. Coronal streamers and plumes can also characterize the outer atmosphere of the Sun. All of these structures, fascinating in their extent and formation, frequently emit continuous spectra and can usually be observed using white-light coronagraphs. This implies, at least in part, that they are comprised of condensed matter. The continuous spectra associated with chromospheric and coronal structures can be viewed as representing the twenty-eighth line of evidence, and the eighth Planckian proof, that the Sun is condensed matter. The existence of such objects also suggests that the density of the solar atmosphere rises to levels well in excess of current estimates put forth by the gaseous models of the Sun. In this work, the densities of planetary atmospheres are examined in order to gain insight relative to the likely densities of the solar chromosphere. Elevated densities in the solar atmosphere are also supported by coronal seismology studies, which can be viewed as constituting the twenty-ninth line of evidence that the Sun is composed of condensed matter.
[4768] vixra:1310.0151 [pdf]
The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere IV. On the Nature of the Chromosphere
The chromosphere is the site of weak emission lines characterizing the flash spectrum observed for a few seconds during a total eclipse. This layer of the solar atmosphere is known to possess an opaque Hα emission and a great number of spicules, which can extend well above the photosphere. A stunning variety of hydrogen emission lines have been observed in this region. The production of these lines has provided the seventeenth line of evidence that the Sun is comprised of condensed matter (Robitaille P.M. Liquid Metallic Hydrogen II: A critical assessment of current and primordial helium levels in Sun. Progr. Phys., 2013, v. 2, 35–47). Contrary to the gaseous solar models, the simplest mechanism for the production of emission lines is the evaporation of excited atoms from condensed surfaces existing within the chromosphere, as found in spicules. This is reminiscent of the chemiluminescence which occurs during the condensation of silver clusters (Konig L., Rabin I., Schultze W., and Ertl G. Chemiluminescence in the Agglomeration of Metal Clusters. Science, v. 274, no. 5291, 1353–1355). The process associated with spicule formation is an exothermic one, requiring the transport of energy away from the site of condensation. As atoms leave localized surfaces, their electrons can occupy any energy level and, hence, a wide variety of emission lines are produced. In this regard, it is hypothesized that the presence of hydrides on the Sun can also facilitate hydrogen condensation in the chromosphere. The associated line emission from main group and transition elements constitutes the thirtieth line of evidence that the Sun is condensed matter. Condensation processes also help to explain why spicules manifest an apparently constant temperature over their entire length. Since the corona supports magnetic field lines, the random orientations associated with spicule formation suggest that the hydrogen condensates in the chromosphere are not metallic in nature. 
Spicules provide a means, not to heat the corona, but rather, for condensed hydrogen to rejoin the photospheric layer of the Sun. Spicular velocities of formation are known to be essentially independent of gravitational effects and highly supportive of the hypothesis that true condensation processes are being observed. The presence of spicules brings into question established chromospheric densities and provides additional support for condensation processes in the chromosphere, the seventh line of evidence that the Sun is comprised of condensed matter.
[4769] vixra:1310.0150 [pdf]
The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere V. On the Nature of the Corona
The E-corona is the site of numerous emission lines associated with high ionization states (i.e. FeXIV-FeXXV). Modern gaseous models of the Sun require that these states are produced by atomic irradiation, with the sequential removal of electrons to infinity, without an associated electron acceptor. This can lead to computed temperatures in the corona which are unrealistic (i.e. ~30–100 MK, contrasted with solar core values of ~16 MK). In order to understand the emission lines of the E-corona, it is vital to recognize that they are superimposed upon the K-corona, which produces a continuous spectrum, devoid of Fraunhofer lines, arising from this same region of the Sun. It has been advanced that the K-corona harbors self-luminous condensed matter (Robitaille P.M. The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere II. Continuous Emission and Condensed Matter Within the Corona. Progr. Phys., 2013, v. 3, L8–L10; Robitaille P.M. The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere III. Importance of Continuous Emission Spectra from Flares, Coronal Mass Ejections, Prominences, and Other Coronal Structures. Progr. Phys., 2013, v. 3, L11–L14). Condensed matter can possess elevated electron affinities which may strip nearby atoms of their electrons. Such a scenario accounts for the high ionization states observed in the corona: condensed matter acts to harness electrons, ensuring the electrical neutrality of the Sun, despite the flow of electrons and ions in the solar winds. Elevated ionization states reflect the presence of materials with high electron affinities in the corona, which is likely to be a form of metallic hydrogen, and do not translate into elevated temperatures in this region of the solar atmosphere. As a result, the many mechanisms advanced to account for coronal heating in the gaseous models of the Sun are superfluous, given that electron affinity, not temperature, governs the resulting spectra. 
In this regard, the presence of highly ionized species in the corona constitutes the thirty-first line of evidence that the Sun is composed of condensed matter.
[4770] vixra:1310.0149 [pdf]
The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere VI. Helium in the Chromosphere
Molecular hydrogen and hydrides have recently been advanced as vital agents in the generation of emission spectra in the chromosphere. This is a result of the role they play in the formation of condensed hydrogen structures (CHS) within the chromosphere (P.M. Robitaille. The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere IV. On the Nature of the Chromosphere. Progr. Phys., 2013, v. 3, 15–21). Next to hydrogen, helium is perhaps the most intriguing component in this region of the Sun. Much like other elements, which combine with hydrogen to produce hydrides, helium can form the well-known helium hydride molecular ion, HeH+, and the excited neutral helium hydride molecule, HeH*. While HeH+ is hypothesized to be a key cosmological molecule, its possible presence in the Sun, and that of its excited neutral counterpart, has not been considered. Still, these hydrides are likely to play a role in the synthesis of CHS, as the He I and He II emission lines strongly suggest. In this regard, the study of helium emission spectra can provide insight into the condensed nature of the Sun, especially when considering the 10830 Å line associated with the 2<sup>3</sup>P-2<sup>3</sup>S triplet state transition. This line is strong in solar prominences and can be seen clearly on the disk. The excessive population of helium triplet states cannot be adequately explained using the gaseous models, since these states should be depopulated by collisional processes. Conversely, when He-based molecules are used to build CHS in a liquid metallic hydrogen model, an ever-increasing population of the 2<sup>3</sup>S and 2<sup>3</sup>P states might be expected. The overpopulation of these triplet states leads to the conclusion that these emission lines are unlikely to be produced through random collisional or photon excitation, as required by the gaseous models. This provides a significant hurdle for these models.
Thus, the strong 2<sup>3</sup>P-2<sup>3</sup>S lines and the overpopulation of the helium triplet states provide the thirty-second line of evidence that the Sun is comprised of condensed matter.
[4771] vixra:1310.0148 [pdf]
The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere VII. Further Insights into the Chromosphere and Corona
In the liquid metallic hydrogen model of the Sun, the chromosphere is responsible for the capture of atomic hydrogen in the solar atmosphere and its eventual re-entry onto the photospheric surface (P.M. Robitaille. The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere IV. On the Nature of the Chromosphere. Prog. Phys., 2013, v. 3, L15–L21). As for the corona, it represents a diffuse region containing both gaseous plasma and condensed matter with elevated electron affinity (P.M. Robitaille. The Liquid Metallic Hydrogen Model of the Sun and the Solar Atmosphere V. On the Nature of the Corona. Prog. Phys., 2013, v. 3, L22–L25). Metallic hydrogen in the corona is thought to enable the continual harvest of electrons from the outer reaches of the Sun, thereby preserving the neutrality of the solar body. The rigid rotation of the corona is offered as the thirty-third line of evidence that the Sun is comprised of condensed matter. Within the context of the gaseous models of the Sun, a 100 km thick transition zone has been hypothesized to exist wherein temperatures increase dramatically from 10<sup>4</sup> to 10<sup>6</sup> K. Such extreme transitional temperatures are not reasonable given the trivial physical scale of the proposed transition zone, a region adopted to account for the ultra-violet emission lines of ions such as C IV, O IV, and Si IV. In this work, it will be argued that the transition zone does not exist. Rather, the intermediate ionization states observed in the solar atmosphere should be viewed as the result of the simultaneous transfer of protons and electrons onto condensed hydrogen structures, CHS. Line emissions from ions such as C IV, O IV, and Si IV are likely to be the result of condensation reactions, manifesting the involvement of species such as CH<sub>4</sub>, SiH<sub>4</sub>, and H<sub>3</sub>O<sup>+</sup> in the synthesis of CHS in the chromosphere.
In addition, given the presence of a true solar surface at the level of the photosphere in the liquid metallic hydrogen model, it follows that the great physical extent of the chromosphere is supported by gas pressure, much like the atmosphere of the Earth. This constitutes the thirty-fourth line of evidence that the Sun is comprised of condensed matter.
[4772] vixra:1310.0145 [pdf]
A Thermodynamic History of the Solar Constitution - I: The Journey to a Gaseous Sun
History has the power to expose the origin and evolution of scientific ideas. How did humanity come to visualize the Sun as a gaseous plasma? Why is its interior thought to contain blackbody radiation? Who were the first people to postulate that the density of the solar body varied greatly with depth? When did mankind first conceive that the solar surface was merely an illusion? What were the foundations of such thoughts? In this regard, a detailed review of the Sun’s thermodynamic history provides both a necessary exposition of the circumstances which accompanied the acceptance of the gaseous models and a sound basis for discussing modern solar theories. It also becomes an invitation to reconsider the phase of the photosphere. As such, in this work, the contributions of Pierre Simon Laplace, Alexander Wilson, William Herschel, Hermann von Helmholtz, Herbert Spencer, Richard Christopher Carrington, John Frederick William Herschel, Father Pietro Angelo Secchi, Hervé August Etienne Albans Faye, Edward Frankland, Joseph Norman Lockyer, Warren de la Rue, Balfour Stewart, Benjamin Loewy, and Gustav Robert Kirchhoff, relative to the evolution of modern stellar models, will be discussed. Six great pillars created a gaseous Sun: 1) Laplace’s Nebular Hypothesis, 2) Helmholtz’ contraction theory of energy production, 3) Andrews’ elucidation of critical temperatures, 4) Kirchhoff’s formulation of his law of thermal emission, 5) Plücker and Hittorf’s discovery of pressure broadening in gases, and 6) the evolution of the stellar equations of state. As these are reviewed, this work will venture to highlight not only the genesis of these revolutionary ideas, but also the forces which drove great men to advance a gaseous Sun.
[4773] vixra:1310.0144 [pdf]
A Thermodynamic History of the Solar Constitution - II: The Theory of a Gaseous Sun and Jeans’ Failed Liquid Alternative
In this work, the development of solar theory is followed from the concept that the Sun was an ethereal nuclear body with a partially condensed photosphere to the creation of a fully gaseous object. An overview will be presented of the liquid Sun. A powerful lineage has brought us the gaseous Sun and two of its main authors were the direct scientific descendants of Gustav Robert Kirchhoff: Franz Arthur Friedrich Schuster and Arthur Stanley Eddington. It will be discovered that the seminal ideas of Father Secchi and Hervé Faye were not abandoned by astronomy until the beginning of the 20th century. The central role of carbon in early solar physics will also be highlighted by revisiting George Johnstone Stoney. The evolution of the gaseous models will be outlined, along with the contributions of Johann Karl Friedrich Zöllner, James Clerk Maxwell, Jonathan Homer Lane, August Ritter, William Thomson, William Huggins, William Edward Wilson, George Francis FitzGerald, Jacob Robert Emden, Frank Washington Very, Karl Schwarzschild, and Edward Arthur Milne. Finally, with the aid of Edward Arthur Milne, the work of James Hopwood Jeans, the last modern advocate of a liquid Sun, will be rediscovered. Jeans was a staunch advocate of the condensed phase, but deprived of a proper building block, he would eventually abandon his non-gaseous stars. For his part, Subrahmanyan Chandrasekhar would spend nine years of his life studying homogeneous liquid masses. These were precisely the kind of objects which Jeans had considered for his liquid stars.
[4774] vixra:1310.0143 [pdf]
Liquid Metallic Hydrogen: A Building Block for the Liquid Sun
Liquid metallic hydrogen provides a compelling material for constructing a condensed matter model of the Sun and the photosphere. Like diamond, metallic hydrogen might have the potential to be a metastable substance requiring high pressures for formation. Once created, it would remain stable even at lower pressures. The metallic form of hydrogen was initially conceived in 1935 by Eugene Wigner and Hillard B. Huntington who indirectly anticipated its elevated critical temperature for liquefaction (Wigner E. and Huntington H. B. On the possibility of a metallic modification of hydrogen. J. Chem. Phys., 1935, v.3, 764–770). At that time, solid metallic hydrogen was hypothesized to exist as a body centered cubic, although a more energetically accessible layered graphite-like lattice was also envisioned. Relative to solar emission, this structural resemblance between graphite and layered metallic hydrogen should not be easily dismissed. In the laboratory, metallic hydrogen remains an elusive material. However, given the extensive observational evidence for a condensed Sun composed primarily of hydrogen, it is appropriate to consider metallic hydrogen as a solar building block. It is anticipated that solar liquid metallic hydrogen should possess at least some layered order. Since layered liquid metallic hydrogen would be essentially incompressible, its invocation as a solar constituent brings into question much of current stellar physics. The central proof of a liquid state remains the thermal spectrum of the Sun itself. Its proper understanding brings together all the great forces which shaped modern physics. Although other proofs exist for a liquid photosphere, our focus remains solidly on the generation of this light.
[4775] vixra:1310.0142 [pdf]
On the Presence of a Distinct Solar Surface: A Reply to Hervé Faye
In this exposition, the existence of the solar surface will be briefly explored. Within the context of modern solar theory, the Sun cannot have a distinct surface. Gases are incapable of supporting such structures. The loss of a defined solar surface occurred in 1865 and can be directly attributed to Hervé Faye (Faye H. Sur la constitution physique du soleil. Les Mondes, 1865, v.7, 293–306). Modern theory has echoed Faye, affirming the absence of this vital structural element. Conversely, experimental evidence firmly supports that the Sun does indeed possess a surface. For nearly 150 years, astronomy has chosen to disregard direct observational evidence in favor of theoretical models.
[4776] vixra:1310.0141 [pdf]
On Solar Granulations, Limb Darkening, and Sunspots: Brief Insights in Remembrance of Father Angelo Secchi
Father Angelo Secchi used the existence of solar granulation as a central line of reasoning when he advanced that the Sun was a gaseous body with a photosphere containing incandescent particulate matter (Secchi A. Sulla Struttura della Fotosfera Solare. Bullettino Meteorologico dell’Osservatorio del Collegio Romano, 30 November 1864, v.3(11), 1–3). Secchi saw the granules as condensed matter emitting the photospheric spectrum, while the darkened intergranular lanes conveyed the presence of a gaseous solar interior. Secchi also considered the nature of sunspots and limb darkening. In the context of modern solar models, opacity arguments currently account for the emissive properties of the photosphere. Optical depth is thought to explain limb darkening. Both temperature variations and magnetic fields are invoked to justify the weakened emissivities of sunspots, even though the presence of static magnetic fields in materials is not usually associated with modified emissivity. Conversely, within the context of a liquid metallic hydrogen solar model, the appearance of granules, limb darkening, and sunspots can be elegantly understood through the varying directional emissivity of condensed matter. A single explanation is applicable to all three phenomena. Granular contrast can be directly associated with the generation of limb darkening. Depending on size, granules can be analyzed by considering Kolmogoroff’s formulations and Bénard convection, respectively, both of which were observed using incompressible liquids, not gases. Granules follow the 2-dimensional space-filling laws of Aboav–Weaire and Lewis. Their adherence to these structural laws provides supportive evidence that the granular surface of the Sun represents elements which can only be constructed from condensed matter. A gaseous Sun cannot be confined to a 2-dimensional framework. Mesogranules, supergranules, and giant cells constitute additional entities which further support the idea of a condensed Sun.
With respect to sunspots, the decrease in emissivity with increasing magnetic field strength lends powerful observational support to the idea that these structures are comprised of liquid metallic hydrogen. In this model, the inter-atomic lattice dimensions within sunspots are reduced. This increases the density and metallic character relative to photospheric material, while at the same time decreasing emissivity. Metals are well known to have lowered directional emissivities with respect to non-metals. Greater metallicity produces lower emissivity. The idea that density is increased within sunspots is supported by helioseismology. Thus, a liquid metallic hydrogen model brings with it many advantages in understanding both the emissivity of the solar surface and its vast array of structures. These realities reveal that Father Secchi, like Herbert Spencer and Gustav Kirchhoff, was correct in his insistence that condensed matter is present on the photosphere. Secchi and his contemporaries were well aware that gases are unable to impart the observed structure.
[4777] vixra:1310.0140 [pdf]
On the Temperature of the Photosphere: Energy Partition in the Sun
In this note, energy partition within the Sun is briefly addressed. It is argued that the laws of thermal emission cannot be directly applied to the Sun, as the continuous solar spectrum (T<sub>app</sub> ~6,000 K) reveals but a small fraction of the true solar energy profile. Without considering the energy linked to fusion itself, it is hypothesized that most of the photospheric energy remains trapped in the Sun’s translational degrees of freedom and associated convection currents. The Sun is known to support both convective granules and differential rotation on its surface. The emission of X-rays in association with eruptive flares and the elevated temperatures of the corona might provide some measure of these energies. At the same time, it is expected that a fraction of the solar energy remains tied to the filling of conduction bands by electrons, especially within sunspots. This constitutes a degree of freedom whose importance cannot be easily assessed. The discussion highlights how little is truly understood about energy partition in the Sun.
[4778] vixra:1310.0139 [pdf]
Stellar Opacity: The Achilles’ Heel of the Gaseous Sun
The standard gaseous model of the Sun is grounded on the concept of local thermal equilibrium. Given this condition, Arthur Milne postulated that Kirchhoff’s law could be applied within the deep solar interior and that a blackbody spectrum could be generated in this region, based solely on equilibrium arguments. Varying internal solar opacity then ensured that a blackbody spectrum could be emitted at the photosphere. In this work, it is demonstrated that local thermal equilibrium and solar opacity arguments provide a weak framework to account for the production of the thermal spectrum. The problems are numerous, including: 1) the validity of Kirchhoff’s formulation, 2) the soundness of local thermal equilibrium arguments, 3) the requirements for understanding the elemental composition of the Sun, and 4) the computation of solar opacities. The OPAL calculations and the Opacity Project will be briefly introduced. These represent modern approaches to the thermal emission of stars. As a whole, this treatment emphasizes the dramatic steps undertaken to explain the origins of the continuous solar spectrum in the context of a gaseous Sun.
[4779] vixra:1310.0138 [pdf]
Lessons from the Sun
In this brief note, the implications of a condensed Sun will be examined. A celestial body composed of liquid metallic hydrogen brings great promise to astronomy, relative to understanding thermal emission and solar structure. At the same time, as an incompressible liquid, a condensed Sun calls into question virtually everything which is currently believed with respect to the evolution and nature of the stars. Should the Sun be condensed, then neutron stars and white dwarfs will fail to reach the enormous densities they are currently believed to possess. Much of cosmology also falls into question, as the incompressibility of matter curtails any thought that a primordial atom once existed. Aging stars can no longer collapse and black holes will know no formative mechanism. A condensed Sun also hints that great strides must still be made in understanding the nature of liquids. The Sun has revealed that liquids possess a much greater potential for lattice order than previously believed. In addition, lessons may be gained with regards to the synthesis of liquid metallic hydrogen and the use of condensed matter as the basis for initiating fusion on Earth.
[4780] vixra:1310.0137 [pdf]
Magnetic Fields and Directional Spectral Emissivity in Sunspots and Faculae: Complimentary Evidence of Metallic Behavior on the Surface of the Sun
Sunspots and faculae are related phenomena and constitute regions of elevated magnetic field intensity on the surface of the Sun. These structures have been extensively studied in the visible range. In this regard, it has been recognized that the intensity contrast of faculae, relative to the photosphere, increases considerably as the line of observation moves from the center to the limb of the Sun. Such center to limb variation (CLV) suggests that the directional spectral emissivity of the faculae increases at the same time that photospheric directional emissivity decreases. Since the directional spectral emissivity of faculae increases towards the limb, these structures, along with sunspots, provide strong evidence for metallic behavior at the level of the solar surface. This further strengthens claims that the body of the Sun is not gaseous, but rather, comprised of condensed matter.
[4781] vixra:1310.0136 [pdf]
Liquid Metallic Hydrogen II. A Critical Assessment of Current and Primordial Helium Levels in the Sun
Before a solar model becomes viable in astrophysics, one must consider how the elemental constitution of the Sun was ascertained, especially relative to its principal components: hydrogen and helium. Liquid metallic hydrogen has been proposed as a solar structural material for models based on condensed matter (e.g. Robitaille P.-M. Liquid Metallic Hydrogen: A Building Block for the Liquid Sun. Progr. Phys., 2011, v. 3, 60–74). There can be little doubt that hydrogen plays a dominant role in the universe and in the stars; the massive abundance of hydrogen in the Sun was established long ago. Today, it can be demonstrated that the near isointense nature of the Sun’s Balmer lines provides strong confirmatory evidence for a distinct solar surface. The situation relative to helium remains less conclusive. Still, helium occupies a prominent role in astronomy, both as an element associated with cosmology and as a byproduct of nuclear energy generation, though its abundances within the Sun cannot be reliably estimated using theoretical approaches. With respect to the determination of helium levels, the element remains spectroscopically silent at the level of the photosphere. While helium can be monitored with ease in the chromosphere and the prominences of the corona using spectroscopic methods, these measures are highly variable and responsive to elevated solar activity and nuclear fragmentation. Direct assays of the solar winds are currently viewed as incapable of providing definitive information regarding solar helium abundances. As a result, insight relative to helium remains strictly based on theoretical estimates which couple helioseismological approaches to metrics derived from solar models. Despite their “state of the art” nature, helium estimates based on solar models and helioseismology are suspect on several fronts, including their reliance on solar opacities.
The best knowledge can only come from the solar winds which, though highly variable, provide a wealth of data. Evaluations of primordial helium levels based on 1) the spectroscopic study of H-II regions and 2) microwave anisotropy data, remain highly questionable. Current helium levels, both within the stars (Robitaille J.C. and Robitaille P.-M. Liquid Metallic Hydrogen III. Intercalation and Lattice Exclusion versus Gravitational Settling, and Their Consequences Relative to Internal Structure, Surface Activity, and Solar Winds in the Sun. Progr. Phys., 2013, v. 2, in press) and the universe at large, appear to be overstated. A careful consideration of available observational data suggests that helium abundances are considerably lower than currently believed.
[4782] vixra:1310.0135 [pdf]
Liquid Metallic Hydrogen III. Intercalation and Lattice Exclusion Versus Gravitational Settling and Their Consequences Relative to Internal Structure, Surface Activity, and Solar Winds in the Sun
Invocation of a liquid metallic hydrogen model (Robitaille P.M. Liquid Metallic Hydrogen: A Building Block for the Liquid Sun. Progr. Phys., 2011, v. 3, 60–74; Robitaille P.M. Liquid Metallic Hydrogen II: A Critical Assessment of Current and Primordial Helium Levels in the Sun. Progr. Phys., 2013, v. 2, 35–47) brings with it a set of advantages for understanding solar physics which will always remain unavailable to the gaseous models. Liquids characteristically act as solvents and incorporate solutes within their often fleeting structural matrix. They possess widely varying solubility products and often reject the solute altogether. In that case, the solute becomes immiscible. “Lattice exclusion” can be invoked for atoms which attempt to incorporate themselves into liquid metallic hydrogen. In order to conserve the integrity of its conduction bands, it is anticipated that a graphite-like metallic hydrogen lattice should not permit incorporation of other elements into its in-plane hexagonal hydrogen framework. Based on the physics observed in the intercalation compounds of graphite, non-hydrogen atoms within liquid metallic hydrogen could reside between adjacent hexagonal proton planes. Consequently, the forces associated with solubility products and associated lattice exclusion envisioned in liquid metallic hydrogen for solutes would restrict gravitational settling. The hexagonal metallic hydrogen layered lattice could provide a powerful driving force for excluding heavier elements from the solar body. Herein lies a new exfoliative force to drive both surface activity (flares, coronal mass ejections, prominences) and solar winds with serious consequences relative to the p–p reaction and CNO cycle in the Sun. At the same time, the idea that non-hydrogen atomic nuclei can exist between layers of metallic hydrogen leads to a fascinating array of possibilities with respect to nucleosynthesis.
Powerful parallels can be drawn to the intercalation compounds of graphite and their exfoliative forces. In this context, solar winds and activity provide evidence that the lattice of the Sun is not only excluding, but expelling, helium and higher elements from the solar body. Finally, exfoliative forces could provide new mechanisms to help understand the creation of planets, satellites, red giants, and even supernovae.
[4783] vixra:1310.0134 [pdf]
Commentary Relative to the Distribution of Gamma-Ray Flares on the Sun: Further Evidence for a Distinct Solar Surface
High-energy gamma-ray flares are almost always observed near the limb of the Sun and are seldom, if ever, visualized in the central region of the solar disc. As such, they exhibit a powerful anisotropy best explained by invoking a true photospheric surface. In this regard, the anisotropic nature of the gamma-ray emissions from high-energy flares constitutes the eighteenth line of evidence that the Sun is condensed matter.
[4784] vixra:1310.0132 [pdf]
On the Validity of the Riemann Hypothesis.
In this paper, we have established a connection between the Dirichlet series with the Möbius function $M (s) = \sum_{n=1}^{\infty} \mu (n) /n^s$ and a functional representation of the zeta function $\zeta (s)$ in terms of its partial Euler product. For this purpose, the Dirichlet series $M (s)$ has been modified and represented in terms of the partial Euler product by progressively eliminating the numbers that first have a prime factor 2, then 3, then 5, and so on up to the prime number $p_r$ to obtain the series $M(s,p_r)$. It is shown that the series $M(s)$ and the new series $M(s,p_r)$ have the same region of convergence for every $p_r$. Unlike the partial sum of $M(s)$, which has irregular behavior, the partial sum of the new series exhibits regular behavior as $p_r$ approaches infinity. This has allowed the use of integration methods to compute the partial sum of the new series and to examine the validity of the Riemann Hypothesis.
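The sieving construction described in this abstract can be sketched numerically: drop from the Möbius series every integer having a prime factor among the first few primes, then compare partial sums. This is an illustrative reconstruction under stated assumptions, not the author's code; for $s > 1$ the sifted series $M(s, p_r)$ should approach $\zeta(s)^{-1}\prod_{p \le p_r}(1 - p^{-s})^{-1}$.

```python
def mobius(n):
    """Mobius function mu(n) by trial factorization (fine for small n)."""
    if n == 1:
        return 1
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:   # squared prime factor => mu(n) = 0
                return 0
            mu = -mu
        d += 1
    return -mu if n > 1 else mu

def partial_M(s, N, sieve_primes=()):
    """Partial sum of sum_{n<=N} mu(n)/n^s, keeping only those n
    (including n = 1) with no prime factor in sieve_primes."""
    return sum(mobius(n) / n**s
               for n in range(1, N + 1)
               if all(n % p for p in sieve_primes))

# For s = 2 the full series tends to 1/zeta(2) = 6/pi^2 ~ 0.6079;
# sifting out multiples of 2 and 3 rescales that limit by
# 1/((1 - 2**-2) * (1 - 3**-2)) = 1.5, giving ~ 0.9119.
print(partial_M(2.0, 20000))          # ~ 0.6079
print(partial_M(2.0, 20000, (2, 3)))  # ~ 0.9119
```

The sifted partial sums settle smoothly toward their limit, illustrating the "regular behavior" the abstract contrasts with the erratic partial sums of the unsifted series.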
[4785] vixra:1310.0129 [pdf]
Water, Hydrogen Bonding, and the Microwave Background
In this work, the properties of water are briefly revisited. Though liquid water has a fleeting structure, it displays an astonishingly stable network of hydrogen bonds. Thus, even as a liquid, water possesses a local lattice with short range order. The presence of hydroxyl (O-H) and hydrogen (H-OH<sub>2</sub>) bonds within water indicates that it can simultaneously maintain two separate energy systems. These can be viewed as two very different temperatures. The analysis presented uses results from vibrational spectroscopy, extracting the force constant for the hydrogen bonded dimer. By idealizing this species as a simple diatomic structure, it is shown that hydrogen bonds within water should be able to produce thermal spectra in the far infrared and microwave regions of the electromagnetic spectrum. This simple analysis reveals that the oceans have a physical mechanism at their disposal, which is capable of generating the microwave background.
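The diatomic idealization invoked in this abstract can be illustrated with a back-of-the-envelope harmonic oscillator calculation. The force constants below are assumed literature-range values chosen for illustration, not figures taken from the paper: a hydrogen bond is soft (roughly 10-20 N/m), while a covalent O-H bond is near 780 N/m.

```python
from math import pi, sqrt

AMU = 1.66053906660e-27   # atomic mass unit, kg
C_CM = 2.99792458e10      # speed of light, cm/s

def harmonic_wavenumber(k, mu_amu):
    """Fundamental vibrational wavenumber (cm^-1) of a harmonic
    diatomic with force constant k (N/m) and reduced mass (amu)."""
    mu = mu_amu * AMU
    return sqrt(k / mu) / (2 * pi * C_CM)

# Treat the hydrogen-bonded dimer as two rigid H2O units, so the
# reduced mass is 18*18/(18+18) = 9 amu; k = 17 N/m is an assumed
# hydrogen-bond force constant.  Compare with a covalent O-H
# oscillator (reduced mass 16*1/17 ~ 0.94 amu, k ~ 780 N/m).
print(harmonic_wavenumber(17.0, 9.0))     # hydrogen bond: far IR, ~180 cm^-1
print(harmonic_wavenumber(780.0, 0.9412)) # covalent O-H: ~3750 cm^-1
```

At ~180 cm<sup>-1</sup> the idealized hydrogen-bond stretch falls in the far infrared, consistent with the abstract's claim that these soft bonds radiate at far longer wavelengths than the covalent framework.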
[4786] vixra:1310.0128 [pdf]
Global Warming and the Microwave Background
In this work, the importance of assigning the microwave background to the Earth is addressed while emphasizing the consequences for global climate change. Climate models can only produce meaningful forecasts when they consider the real magnitude of all radiative processes. The oceans and continents both contribute to terrestrial emissions. However, the extent of oceanic radiation, particularly in the microwave region, raises concerns. This is not only because the globe is covered with water, but also because the oceans themselves are likely to be weaker emitters than currently believed. Should the microwave background truly be generated by the oceans of the Earth, our planet would be a much less efficient emitter of radiation in this region of the electromagnetic spectrum. Furthermore, the oceans would appear unable to increase their emissions in the microwave in response to temperature elevation, as predicted by Stefan’s law. The results are significant relative to the modeling of global warming.
[4787] vixra:1310.0127 [pdf]
Kirchhoff’s Law of Thermal Emission: 150 Years
In this work, Kirchhoff’s law (Kirchhoff G. Monatsberichte der Akademie der Wissenschaften zu Berlin, sessions of Dec. 1859, 1860, 783–787) is being revisited not only to mark its 150th anniversary but, most importantly, to highlight serious overreaching in its formulation. At the outset, Kirchhoff’s law correctly outlines the equivalence between emission and absorption for an opaque object under thermal equilibrium. This same conclusion had been established earlier by Balfour Stewart (Stewart B. Trans. Royal Soc. Edinburgh, 1858, v. 22(1), 1–20). However, Kirchhoff extends the treatment beyond his counterpart, stating that cavity radiation must always be black, or normal: depending only on the temperature and the frequency of observation. This universal aspect of Kirchhoff’s law is without proper basis and constitutes a grave distortion of experimental reality. It is readily apparent that cavities made from arbitrary materials (ε < 1) are never black. Their approach to such behavior is being driven either by the blackness of the detector, or by black materials placed near the cavity. Ample evidence exists that radiation in arbitrary cavities is sensitive to the relative position of the detectors. In order to fully address these issues, cavity radiation and the generalization of Kirchhoff’s law are discussed. An example is then taken from electromagnetics, at microwave frequencies, to link results in the resonant cavity with those inferred from the consequences of generalization.
[4788] vixra:1310.0126 [pdf]
Blackbody Radiation and the Loss of Universality: Implications for Planck’s Formulation and Boltzmann’s Constant
Through the reevaluation of Kirchhoff’s law (Robitaille P. M. L. IEEE Trans. Plasma Sci., 2003, v. 31(6), 1263–1267), Planck’s blackbody equation (Planck M. Ann. der Physik, 1901, v. 4, 553–563) loses its universal significance and becomes restricted to perfect absorbers. Consequently, the proper application of Planck’s radiation law involves the study of solid opaque objects, typically made from graphite, soot, and carbon black. The extension of this equation to other materials may yield apparent temperatures, which do not have any physical meaning relative to the usual temperature scales. Real temperatures are exclusively obtained from objects which are known solids, or which are enclosed within, or in equilibrium with, a perfect absorber. For this reason, the currently accepted temperature of the microwave background must be viewed as an apparent temperature. Rectifying this situation, while respecting real temperatures, involves a reexamination of Boltzmann’s constant. In so doing, the latter is deprived of its universal nature and, in fact, acts as a temperature dependent variable. In its revised form, Planck’s equation becomes temperature insensitive near 300 K, when applied to the microwave background.
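For context, the standard (unrevised) Planck law discussed in this abstract can be evaluated directly; at the reported background temperature of 2.725 K its peak in frequency form lands near 160 GHz, deep in the microwave. A minimal sketch, using only the conventional form of the law:

```python
from math import exp

H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck_nu(nu, T):
    """Conventional Planck spectral radiance B_nu(T), W m^-2 Hz^-1 sr^-1."""
    x = H * nu / (KB * T)
    return 2 * H * nu**3 / C**2 / (exp(x) - 1)

# Locate the peak of B_nu at the reported background temperature
# by a simple 1 GHz grid scan.
T = 2.725
freqs = [i * 1e9 for i in range(1, 1001)]          # 1-1000 GHz
peak = max(freqs, key=lambda nu: planck_nu(nu, T))
print(peak / 1e9)  # ~160 GHz (Wien displacement, h*nu/kT ~ 2.82)
```

Whether 2.725 K is a real or merely an apparent temperature is precisely what the abstract disputes; the calculation above only shows where the conventional formula places the emission.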
[4789] vixra:1310.0125 [pdf]
COBE: A Radiological Analysis
The COBE Far Infrared Absolute Spectrophotometer (FIRAS) operated from ~30 to ~3,000 GHz (1–95 cm<sup>-1</sup>) and monitored, from polar orbit (~900 km), the ~3 K microwave background. Data released from FIRAS have been met with nearly universal admiration. However, a thorough review of the literature reveals significant problems with this instrument. FIRAS was designed to function as a differential radiometer, wherein the sky signal could be nulled by the reference horn, Ical. The null point occurred at an Ical temperature of 2.759 K. This was 34 mK above the reported sky temperature, 2.725 ± 0.001 K, a value where the null should ideally have formed. In addition, an 18 mK error existed between the thermometers in Ical, along with a drift in temperature of ~3 mK. A 5 mK error could be attributed to Xcal, while a 4 mK error was found in the frequency scale. A direct treatment of all these systematic errors would lead to a ~64 mK error bar in the microwave background temperature. The FIRAS team reported ~1 mK, despite the presence of such systematic errors. But a 1 mK error does not properly reflect the experimental state of this spectrophotometer. In the end, all errors were essentially transferred into the calibration files, giving the appearance of better performance than actually obtained. The use of calibration procedures resulted in calculated Ical emissivities exceeding 1.3 at the higher frequencies, whereas an emissivity of 1 constitutes the theoretical limit. While data from 30–60 GHz were once presented, these critical points were later dropped, without appropriate discussion, presumably because they reflect too much microwave power. Data obtained while the Earth was directly illuminating the sky antenna were also discarded. From 300–660 GHz, initial FIRAS data had systematically growing residuals as frequencies increased. This suggested that the signal was falling too quickly in the Wien region of the spectrum.
In later data releases, the residual errors no longer displayed such trends, as the systematic variations had now been absorbed in the calibration files. The FIRAS team also cited insufficient bolometer sensitivity, primarily attributed to detector noise, from 600–3,000 GHz. The FIRAS optical transfer function demonstrates that the instrument was not optimally functional beyond 1,200 GHz. The FIRAS team did not adequately characterize the FIRAS horn. Established practical antenna techniques strongly suggest that such a device cannot operate correctly over the frequency range proposed. Insufficient measurements were conducted on the ground to document antenna gain and field patterns as a full function of frequency and thereby determine performance. The effects of signal diffraction into FIRAS, while considering the Sun/Earth/RF shield, were neither measured nor appropriately computed. Attempts to establish antenna side lobe performance in space, at 1,500 GHz, are well outside the frequency range of interest for the microwave background (<600 GHz). Neglecting to fully evaluate FIRAS prior to the mission, the FIRAS team attempted to do so, on the ground, in a highly limited fashion, with a duplicate Xcal, nearly 10 years after launch. All of these findings indicate that the satellite was not sufficiently tested and could be detecting signals from our planet. Diffraction of earthly signals into the FIRAS horn could explain the spectral frequency dependence first observed by the FIRAS team: namely, too much signal in the Rayleigh–Jeans region and not enough in the Wien region. Despite popular belief to the contrary, COBE has not proven that the microwave background originates from the universe and represents the remnants of creation.
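For orientation on the numbers above: a 2.725 K blackbody peaks near 160 GHz, comfortably inside the FIRAS band, and this follows directly from the Planck law. A minimal sketch with standard CODATA constants (the grid scan is an editorial illustration, not part of the paper's analysis):

```python
import math

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def planck_nu(nu, T):
    """Spectral radiance B_nu(T) in W m^-2 sr^-1 Hz^-1."""
    x = h * nu / (k * T)
    return 2 * h * nu**3 / c**2 / math.expm1(x)

T = 2.725
# scan 1-3000 GHz (roughly the FIRAS band) for the peak of B_nu
grid = [g * 1e9 for g in range(1, 3001)]
peak = max(grid, key=lambda nu: planck_nu(nu, T))
print(peak / 1e9)  # peak frequency in GHz, near 160
```

The 600–3,000 GHz range criticized in the abstract thus lies well into the Wien tail of this spectrum.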
[4790] vixra:1310.0124 [pdf]
Calibration of Microwave Reference Blackbodies and Targets for Use in Satellite Observations: An Analysis of Errors in Theoretical Outlooks and Testing Procedures
Microwave reference blackbodies and targets play a key role in astrophysical and geophysical studies. The emissivity of these devices is usually inferred from return-loss experiments which may introduce at least 10 separate types of calibration errors. The origin of these inaccuracies depends on test conditions and on the nature of each target. The most overlooked errors are related to the geometry adopted in constructing reference loads and to the effects of conduction or convection. Target shape and design can create an imbalance in the probabilities of absorption and emission. This leads to loss of radiative equilibrium, despite the presence of a thermodynamic steady state. Heat losses or gains, through conduction and convection, compensate for this unexpected physical condition. The improper calibration of blackbodies and targets has implications, not only in global climate monitoring, but also relative to evaluating the microwave background.
[4791] vixra:1310.0123 [pdf]
The Planck Satellite LFI and the Microwave Background: Importance of the 4 K Reference Targets
Armed with ~4K reference targets, the Planck satellite low frequency instrument (LFI) is intended to map the microwave anisotropies of the sky from the second Lagrange point, L2. Recently, the complete design and pre-flight testing of these ~4K targets has been published (Valenziano L. et al., JINST 4, 2009, T12006). The receiver chain of the LFI is based on a pseudo-correlation architecture. Consequently, the presence of a ~3K microwave background signal at L2 can be established, if the ~4K reference targets function as intended. Conversely, demonstration that the targets are unable to provide the desired emission implies that the ~3K signal cannot exist, at this location. Careful study reveals that only the second scenario can be valid. This analysis thereby provides firm evidence that the monopole of the microwave background, as initially detected by Penzias and Wilson, is being produced by the Earth itself.
[4792] vixra:1310.0121 [pdf]
WMAP: A Radiological Analysis
In this work, results obtained by the WMAP satellite are analyzed by invoking established practices for signal acquisition and processing in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). Dynamic range, image reconstruction, signal to noise, resolution, contrast, and reproducibility are specifically discussed. WMAP images do not meet accepted standards in medical imaging research. WMAP images are obtained by attempting to remove a galactic foreground contamination which is 1,000 times more intense than the desired signal. Unlike water suppression in biological NMR, this is accomplished without the ability to affect the signal at the source and without a priori knowledge. Resulting WMAP images have an exceedingly low signal to noise (maximum 1–2) and are heavily governed by data processing. Final WMAP internal linear combination (ILC) images are made from 12 section images. Each of these, in turn, is processed using a separate linear combination of data. The WMAP team extracts cosmological implications from their data, while ignoring that the ILC coefficients do not remain constant from year to year. In contrast to standard practices in medicine, difference images utilized to test reproducibility are presented at substantially reduced resolution. ILC images are not presented for year two and three. Rather, year-1 data is signal-averaged into a combined 3-year data set. Proper tests of reproducibility require viewing separate yearly ILC images. Fluctuations in the WMAP images arise from the inability to remove the galactic foreground, and from the significant yearly variations in the foreground itself. Variations in the map outside the galactic plane are significant, preventing any cosmological analysis due to yearly changes. This occurs despite the masking of more than 300 image locations. 
It will be advanced that any “signal” observed by WMAP is the result of foreground effects, not only from our galaxy, but indeed from yearly variations in every galaxy in the Universe. Contrary to published analyses, the argument suggests that the anisotropy images contain only questionable findings, apart from those related to image processing, yearly galactic variability, and point sources. Concerns are also raised relative to the validity of assigning brightness temperatures in this setting.
[4793] vixra:1310.0120 [pdf]
On the Origins of the CMB: Insight from the COBE, WMAP, and Relikt-1 Satellites
The powerful “Cosmic Microwave Background (CMB)” signal currently associated with the origins of the Universe is examined from a historical perspective and relative to the experimental context in which it was measured. Results from the COBE satellite are reviewed, with particular emphasis on the systematic error observed in determining the CMB temperature. The nature of the microwave signal emanating from the oceans is also discussed. From this analysis, it is demonstrated that it is improper for the COBE team to model the Earth as a 285K blackbody source. The assignment of temperatures to objects that fail to meet the requirements set forth in Kirchhoff’s law constitutes a serious overextension of the laws of thermal emission. Using this evidence, and the general rule that powerful signals are associated with proximal sources, the CMB monopole signal is reassigned to the oceans. In turn, through the analysis of COBE, WMAP, and Relikt-1 data, the dipole signal is attributed to motion through a much weaker microwave field present both at the position of the Earth and at the second Lagrange point.
[4794] vixra:1310.0119 [pdf]
A High Temperature Liquid Plasma Model of the Sun
In this work, a liquid model of the Sun is presented wherein the entire solar mass is viewed as a high density/high energy plasma. This model challenges our current understanding of the densities associated with the internal layers of the Sun, advocating a relatively constant density, almost independent of radial position. The incompressible nature of liquids is advanced to prevent solar collapse from gravitational forces. The liquid plasma model of the Sun is a non-equilibrium approach, where nuclear reactions occur throughout the solar mass. The primary means of addressing internal heat transfer are convection and conduction. As a result of the convective processes on the solar surface, the liquid model brings into question the established temperature of the solar photosphere by highlighting a violation of Kirchhoff’s law of thermal emission. Along these lines, the model also emphasizes that radiative emission is a surface phenomenon. Evidence that the Sun is a high density/high energy plasma is based on our knowledge of Planckian thermal emission and condensed matter, including the existence of pressure ionization and liquid metallic hydrogen at high temperatures and pressures. Prior to introducing the liquid plasma model, the historic and scientific justifications for the gaseous model of the Sun are reviewed and the gaseous equations of state are also discussed.
[4795] vixra:1310.0118 [pdf]
On the Earth Microwave Background: Absorption and Scattering by the Atmosphere
The absorption and scattering of microwave radiation by the atmosphere of the Earth is considered under a steady state scenario. Using this approach, it is demonstrated that the microwave background could not have a cosmological origin. Scientific observations in the microwave region are explained by considering an oceanic source, combined with both Rayleigh and Mie scattering in the atmosphere in the absence of net absorption. Importantly, at high frequencies, Mie scattering occurs primarily with forward propagation. This helps to explain the lack of high frequency microwave background signals when radio antennae are positioned on the Earth’s surface.
[4796] vixra:1310.0117 [pdf]
The Little Heat Engine: Heat Transfer in Solids, Liquids and Gases
In this work, an introductory exposition of the laws of thermodynamics and radiative heat transfer is presented while exploring the concepts of the ideal solid, the lattice, and the vibrational, translational, and rotational degrees of freedom. Analysis of heat transfer in this manner helps scientists to recognize that the laws of thermal radiation are strictly applicable only to the ideal solid. On the Earth, such a solid is best represented by either graphite or soot. Indeed, certain forms of graphite can approach perfect absorption over a relatively large frequency range. Nonetheless, in dealing with heat, solids will eventually sublime or melt. Similarly, liquids will give way to the gas phase. That thermal conductivity eventually decreases in the solid signals an inability to further dissipate heat and the coming breakdown of Planck’s law. Ultimately, this breakdown is reflected in the thermal emission of gases. Interestingly, total gaseous emissivity can decrease with increasing temperature. Consequently, neither solids, liquids, nor gases can maintain the behavior predicted by the laws of thermal emission. Since the laws of thermal emission are, in fact, not universal, the extension of these principles to non-solids constitutes a serious overextension of the work of Kirchhoff, Wien, Stefan and Planck.
[4797] vixra:1310.0116 [pdf]
On the Nature of the Microwave Background at the Lagrange 2 Point. Part I
In this work, the nature of the microwave background is discussed. It is advanced that the 2.725 K monopole signal, first detected by Penzias and Wilson, originates from the Earth and therefore cannot be detected at the Lagrange 2 point (L2). Results obtained by the COBE, Relikt-1, and WMAP satellites are briefly reviewed. Attention is also placed on the upcoming PLANCK mission, with particular emphasis on the low frequency instrument (LFI). Since the LFI on PLANCK can operate both in absolute mode and in difference mode, this instrument should be able to unequivocally resolve any question relative to the origin of the 2.725 K monopole signal. The monopole will be discovered to originate from the Earth and not from the Cosmos. This will have implications relative to the overall performance of the PLANCK satellite, in particular, and for the future of astrophysics, in general.
[4798] vixra:1310.0115 [pdf]
Max Karl Ernst Ludwig Planck (1858-1947)
October 4th, 2007 marks the 60th anniversary of Planck’s death. Planck was not only the father of Quantum Theory. He was also a man of profound moral and ethical values, with far reaching philosophical views. Though he lived a life of public acclaim for his discovery of the Blackbody radiation formula which bears his name, his personal life was beset with tragedy. Yet, Planck never lost his deep faith and belief in a personal God. He was admired by Einstein, not so much for his contributions to physics, but rather, for the ideals which he embodied as a person. In this work, a brief synopsis is provided on Planck, his life, and his philosophical writings. It is hoped that this will serve as an invitation to revisit the philosophical works of the man who, more than any other, helped set the course of early 20th century physics.
[4799] vixra:1310.0114 [pdf]
The Earth Microwave Background (EMB), Atmospheric Scattering and the Generation of Isotropy
In this work, the presence of substantial microwave power in the atmosphere of the Earth is discussed. It is advanced that this atmospheric microwave power constitutes pools of scattered photons initially produced, at least in substantial part, by the ~3K microwave background. The existence of these microwave pools of photons can serve to explain how the Earth, as an anisotropic source, is able to produce an Earth Microwave Background (EMB) at ~3K which is isotropic.
[4800] vixra:1310.0113 [pdf]
A Critical Analysis of Universality and Kirchhoff's Law: A Return to Stewart's Law of Thermal Emission
It has been advanced, on experimental (P.-M. Robitaille, IEEE Trans. Plasma Sci., 2003, v. 31(6), 1263–1267) and theoretical (P.-M. Robitaille, Progr. Phys., 2006, v. 2, 22–23) grounds, that blackbody radiation is not universal and remains closely linked to the emission of graphite and soot. In order to strengthen such claims, a conceptual analysis of the proofs for universality is presented. This treatment reveals that Gustav Robert Kirchhoff did not properly consider the combined effects of absorption, reflection, and the directional nature of emission in real materials. In one instance, this leads to an unintended movement away from thermal equilibrium within cavities. Using equilibrium arguments, it is demonstrated that the radiation within perfectly reflecting or arbitrary cavities does not necessarily correspond to that emitted by a blackbody.
[4801] vixra:1310.0112 [pdf]
Blackbody Radiation and the Carbon Particle
Since the days of Kirchhoff, blackbody radiation has been considered to be a universal process, independent of the nature and shape of the emitter. Nonetheless, in promoting this concept, Kirchhoff did require, at the minimum, thermal equilibrium with an enclosure. Recently, the author stated (P.-M. Robitaille, IEEE Trans. Plasma Sci., 2003, v. 31(6), 1263–1267; P.-M. Robitaille, Progr. in Phys., 2006, v. 2, 22–23), that blackbody radiation is not universal and has called for a return to Stewart’s law (P.-M. Robitaille, Progr. in Phys., 2008, v. 3, 30–35). In this work, a historical analysis of thermal radiation is presented. It is demonstrated that soot, or lampblack, was the standard for blackbody experiments throughout the 1800s. Furthermore, graphite and carbon black continue to play a central role in the construction of blackbody cavities. The advent of universality is reviewed through the writings of Pierre Prévost, Pierre Louis Dulong, Alexis Thérèse Petit, Jean Baptiste Joseph Fourier, Siméon Denis Poisson, Frédéric Hervé de la Provostaye, Paul Quentin Desains, Balfour Stewart, Gustav Robert Kirchhoff, and Max Karl Ernst Ludwig Planck. These writings illustrate that blackbody radiation, as experimentally produced in cavities and as discussed theoretically, has remained dependent on thermal equilibrium with at least the smallest carbon particle. Finally, Planck’s treatment of Kirchhoff’s law is examined in detail and the shortcomings of his derivation are outlined. It is shown once again, that universality does not exist. Only Stewart’s law of thermal emission, not Kirchhoff’s, is fully valid.
[4802] vixra:1310.0110 [pdf]
Forty Lines of Evidence for Condensed Matter - The Sun on Trial: Liquid Metallic Hydrogen as a Solar Building Block
Our Sun has confronted humanity with overwhelming evidence that it is comprised of condensed matter. Dismissing this reality, the standard solar models continue to be anchored on the gaseous plasma. In large measure, the endurance of these theories can be attributed to 1) the mathematical elegance of the equations for the gaseous state, 2) the apparent success of the mass-luminosity relationship, and 3) the long-lasting influence of leading proponents of these models. Unfortunately, no direct physical finding supports the notion that the solar body is gaseous. Without exception, all observations are most easily explained by recognizing that the Sun is primarily comprised of condensed matter. However, when a physical characteristic points to condensed matter, a posteriori arguments are invoked to account for the behavior using the gaseous state. In isolation, many of these treatments appear plausible. As a result, the gaseous models continue to be accepted. There seems to be an overarching belief in solar science that the problems with the gaseous models are few and inconsequential. In reality, they are numerous and, while often subtle, they are sometimes daunting. The gaseous equations of state have introduced far more dilemmas than they have solved. Many of the conclusions derived from these approaches are likely to have led solar physics down unproductive avenues, as deductions have been accepted which bear little or no relationship to the actual nature of the Sun. It could be argued that, for more than 100 years, the gaseous models have prevented mankind from making real progress relative to understanding the Sun and the universe. Hence, the Sun is now placed on trial. Forty lines of evidence will be presented that the solar body is comprised of, and surrounded by, condensed matter. These `proofs' can be divided into seven broad categories: 1) Planckian, 2) spectroscopic, 3) structural, 4) dynamic, 5) helioseismic, 6) elemental, and 7) earthly. 
Collectively, these lines of evidence provide a systematic challenge to the gaseous models of the Sun and expose the many hurdles faced by modern approaches. Observational astronomy and laboratory physics have remained unable to properly justify claims that the solar body must be gaseous. At the same time, clear signs of condensed matter interspersed with gaseous plasma in the chromosphere and corona have been regrettably dismissed. As such, it is hoped that this exposition will serve as an invitation to consider condensed matter, especially metallic hydrogen, when pondering the phase of the Sun.
[4803] vixra:1310.0109 [pdf]
An Analysis of Universality in Blackbody Radiation
Through the formulation of his law of thermal emission, Kirchhoff conferred upon blackbody radiation the quality of universality [G. Kirchhoff, Annalen der Physik, 1860, v. 109, 275]. Consequently, modern physics holds that such radiation is independent of the nature and shape of the emitting object. Recently, Kirchhoff’s experimental work and theoretical conclusions have been reconsidered [P. M. L. Robitaille. IEEE Transactions on Plasma Science, 2003, v. 31(6), 1263]. In this work, Einstein’s derivation of the Planckian relation is reexamined. It is demonstrated that claims of universality in blackbody radiation are invalid.
[4804] vixra:1310.0108 [pdf]
The Solar Photosphere: Evidence for Condensed Matter
The stellar equations of state treat the Sun much like an ideal gas, wherein the photosphere is viewed as a sparse gaseous plasma. The temperatures inferred in the solar interior give some credence to these models, especially since it is counterintuitive that an object with internal temperatures in excess of 1 MK could exist in the liquid state. Nonetheless, extreme temperatures, by themselves, are insufficient evidence for the states of matter. The presence of magnetic fields and gravity also impacts the expected phase. In the end, it is the physical expression of a state that is required in establishing the proper phase of an object. The photosphere does not lend itself easily to treatment as a gaseous plasma. The physical evidence can be more simply reconciled with a solar body and a photosphere in the condensed state. A discussion of each physical feature follows: (1) the thermal spectrum, (2) limb darkening, (3) solar collapse, (4) the solar density, (5) seismic activity, (6) mass displacement, (7) the chromosphere and critical opalescence, (8) shape, (9) surface activity, (10) photospheric/coronal flows, (11) photospheric imaging, (12) the solar dynamo, and (13) the presence of sunspots. The explanation of these findings by the gaseous models often requires an improbable combination of events, such as found in the stellar opacity problem. In sharp contrast, each can be explained with simplicity by the condensed state. This work is an invitation to reconsider the phase of the Sun.
[4805] vixra:1310.0106 [pdf]
The Uniqueness of Rational Structure and its Graph
Galaxies are the basic components of the universe. A massive Hubble Space Telescope photo survey reveals that the diversity of galaxies in the early universe was as varied as the many galaxy types seen today. Understanding galaxies is therefore a great challenge. This paper deals with disk-type galaxies, which are called spirals. In longer-wavelength images, galaxy arms mostly disappear, and spiral galaxies fall into two types: ordinary and barred. The ordinary ones are basically axisymmetric disks whose stellar density decreases exponentially outwards; such a disk is called an exponential disk. It is straightforward to show that any exponential disk has infinitely many nets of orthogonal curves such that the stellar density on one side of each curve is in constant ratio to the density on the other side of the curve. These curves are called proportion curves or Darwin curves. It happens that the Darwin curves of an exponential disk are all golden spirals. Remarkably, astronomers have found that the arms of ordinary spiral galaxies are all golden spirals. Therefore, in 2004 I proposed that a two-dimensional structure be called a rational structure if there exists at least one orthogonal net of Darwin curves in the structure plane. In this paper, the mathematical solution to rational structure is obtained in full. We prove that rational structure is unique.
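For reference, a golden spiral is a logarithmic (equiangular) spiral, i.e., a curve crossing every radial ray at the same pitch angle; in standard notation (an editorial gloss, not the paper's own definitions):

```latex
r(\theta) = r_0\, e^{b\theta}, \qquad b = \tan\alpha,
\qquad \text{golden spiral:}\quad
b = \frac{\ln\varphi}{\pi/2}, \quad \varphi = \frac{1+\sqrt{5}}{2},
```

so that the radius grows by the golden ratio $\varphi$ over every quarter turn.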
[4806] vixra:1310.0099 [pdf]
The Geodesic Precession as a 3-D Schouten Precession Plus a Gravitational Thomas Precession.
The Gravity Probe B (GP-B) experiment measured the geodetic precession due to parallel transport in a curved space-time metric, as predicted by de Sitter, Fokker and Schiff. Schiff included the Thomas precession in his treatment and argued that it should be zero in a free fall orbit. We review the existing interpretations regarding the relation between the Thomas precession and the geodetic precession for a gyroscope in a free fall orbit. Schiff and Parker had contradictory views on the status of the Thomas precession in a free fall orbit, a contradiction that continues to exist in the literature. In the second part of this paper we derive the geodetic precession as a global Thomas precession by use of the Equivalence Principle and some elements of hyperbolic geometry, a derivation that allows GP-B physics to be treated in courses between SR and GR.
[4807] vixra:1310.0083 [pdf]
Fuzzy L-Open Sets and Fuzzy L-Continuous Functions
In 1997, Sarkar [8] introduced the concepts of a fuzzy ideal and the fuzzy local function between fuzzy topological spaces. In the present paper, we introduce some new fuzzy notions via fuzzy ideals. We also generalize the notion of L-open sets due to Jankovic and Hamlett [6]. In addition, we generalize the concepts of L-closed sets and L-continuity due to Abd El-Monsef et al. [2]. Relationships between the above new fuzzy notions and other relevant classes are investigated.
[4808] vixra:1310.0077 [pdf]
Relativity and the Universe Gravitational Potential
This paper reconciles General Relativity (GR) and Mach's Principle into a consistent, simple and intuitive alternative theory of gravitation. The Universe's ubiquitous background gravitational potential plays an important role in relativity concepts. This gravitational potential (energy per unit mass) far from all massive bodies is c^2, which determines unit rest mass/energy, and is the essence behind E = mc^2. The Universal matter distribution creates a local inertial rest frame at every location, in which the Universe gravitational potential is a minimum. A velocity in this frame increases this gravitational potential through net blue shift of Universal gravity, causing velocity time dilation, which is a gravitational effect identical to gravitational time dilation. Velocity time dilation from this increase of gravitational potential is found to be same as computed from the low velocity approximation of Lorentz factor. The current Lorentz Factor is applicable only in situations where a local potential is dominant compared to the Universe potential. Time dilation increases with velocity, but does not become boundless for general rectilinear motion in the Universe. Speed of light is not the maximum possible speed in such situations, but only in circumstances where the Lorentz Factor is the appropriate metric. Gravitational time dilation is derived first, and velocity time dilation is derived from it. The mathematics becomes much simpler and more intuitive than GR, while remaining consistent with existing experiments. Some experiments are suggested that will show this theory to be more accurate than GR.
[4809] vixra:1310.0070 [pdf]
The Mass of the Photon
The mass of a photon is derived. Frequencies of light are shown to represent infinitesimal differences in speed just below c. Formulas of Newton, Einstein, Planck, Lorentz, Doppler and de Broglie for relativity, frequency, energy, velocity-addition and waveforms of matter are all linked using simple mathematical terms into a single set of formulas that all describe the same phenomena: matter, movement and energy. The physical laws governing the astronomically large are the same laws governing the microscopically small.
[4810] vixra:1310.0044 [pdf]
An Approximation for Primes
An approximation heuristic for the prime counting function Pi(x) is presented. It is numerically shown that the heuristic is on average as good as Li(x)-0.5Li(sqrt(x)) for x up to 100,000.
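The benchmark quoted here, Li(x) - 0.5 Li(sqrt(x)), is easy to reproduce numerically; a minimal sketch, taking Li as the offset integral from 2 to x of dt/ln t and using a sieve for the exact count (the sieve and Simpson integration are editorial choices, not the paper's method):

```python
import math

def sieve_pi(n):
    """Exact prime-counting function pi(n) via the sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytes(len(range(p * p, n + 1, p)))
    return sum(is_prime)

def Li(x, steps=100_000):
    """Offset logarithmic integral Li(x) = integral_2^x dt/ln(t), Simpson's rule."""
    if x <= 2:
        return 0.0
    if steps % 2:
        steps += 1  # Simpson's rule needs an even number of intervals
    h = (x - 2) / steps
    f = lambda t: 1.0 / math.log(t)
    total = f(2) + f(x)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(2 + i * h)
    return total * h / 3

x = 100_000
exact = sieve_pi(x)                       # pi(100000)
approx = Li(x) - 0.5 * Li(math.sqrt(x))   # the benchmark from the abstract
print(exact, round(approx, 1))            # the two agree to within a few units
```

Whether "Li" is read as the offset integral or the principal-value li(x) shifts the benchmark by only about one unit at this scale.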
[4811] vixra:1310.0043 [pdf]
Delayed Choice and Weak Measurement in the Nested Mach-Zehnder Interferometer
This note discusses recent weak measurements of the nested Mach-Zehnder interferometer, measurements that can be interpreted in terms of state vectors traveling both forwards and backwards in time. A complementary interpretation is presented, from the perspective of the scale invariant quantum Hall and far field photon impedances. A delayed choice variant on the recent experiment is proposed, distinguishing between these two and other interpretations.
[4812] vixra:1309.0179 [pdf]
A Preliminary Study on Rational Structure and Barred Galaxies
Newton's understanding of the solar system opened a new era of human civilization. Accordingly, the understanding of galaxies may stimulate a new period of human social harmony. Galaxies are the most independent components of the universe. A massive Hubble Space Telescope photo survey reveals that the diversity of galaxies in the early universe was as varied as the many galaxy types seen today. In 2001, I proposed the rational model of galaxy structure. In this paper I present a preliminary rational structure theory. The mathematical background needed is no more than the basic results of traditional college courses on complex functions and partial differential equations. Its application to barred galaxies is discussed. If the application is successful, then three simple and important tests of it are: the structural simulation of barred-galaxy light distributions with rational structure, the simulation of spiral arms with the Darwin curves of rational structure, and the simulation of observed galaxy kinematics (e.g., rotation curves) with the New Universal Gravity dictated by the rational structure and the principle of force-line conservation.
[4813] vixra:1309.0177 [pdf]
Derivation of the Complete Doppler Effect Formula by Means of a (Reflecting) Newtonian Telescope and Some Additional Consequences
In this short paper, a derivation of the complete Doppler effect formula is given in the framework of Galilean Relativity, and this formalism is compared with that of Special Relativity (SR). It is then shown how useful this enhanced Galilean Relativity can be. An example of a proton-antiproton computation is provided as an exercise, and finally it is argued that time dilation is not necessary for explaining some phenomena, such as the time of flight of cosmic-ray muons.
[4814] vixra:1309.0160 [pdf]
Two Numerical Counterexamples Contrary to the Phase Matching Condition for Quantum Search
In this short comment, by giving two numerical counterexamples directly contrary to the phase matching condition presented by Long et al., I show that Grover's conclusion that his algorithm can be extended to the case when the two phase inversions are replaced by arbitrary phases (Phys. Rev. Lett. 80, 4329–4332, 1998) is, in a sense, correct.
[4815] vixra:1309.0154 [pdf]
On a Simpler, Much More General and Truly Marvellous Proof of Fermat's Last Theorem (I)
English mathematics professor Sir Andrew John Wiles of the University of Cambridge finally and conclusively proved Fermat's Last Theorem in 1995, after it had for 358 years notoriously resisted all gallant and spirited efforts to prove it, even by three of the greatest mathematicians of all time, such as Euler, Laplace and Gauss. Sir Andrew Wiles's proof employs very advanced mathematical tools and methods that were not at all available in the known World during Fermat's days. Given that Fermat claimed to have had a `truly marvellous' proof, the fact that a proof came only after 358 years of repeated failures by many notable mathematicians, and then only with mathematical tools and methods far ahead of Fermat's time, has led many to doubt that Fermat actually possessed the `truly marvellous' proof which he claimed to have had. In this short reading, via elementary arithmetic methods, we demonstrate conclusively that Fermat's Last Theorem actually yields to our efforts to prove it. This proof is so elementary that anyone with a modicum of mathematical prowess, in Fermat's days or in the intervening 358 years, could have discovered it. This brings us to the tentative conclusion that Fermat might very well have had the `truly marvellous' proof which he claimed to have had, and that his `truly marvellous' proof may very well have made use of elementary arithmetic methods.
[4816] vixra:1309.0149 [pdf]
A Complexity of Bridge Double Dummy Problem
This paper presents an analysis of the complexity of the bridge double dummy problem. Values of both the state-space (search-space) complexity and the game-tree complexity have been estimated.
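For context, the raw count of distinct bridge deals (before any card play) already sets the scale of such estimates; the familiar combinatorial number below is a standard baseline, not necessarily the state-space figure the paper derives:

```python
import math

# Distinct bridge deals: 52 cards partitioned into four labelled 13-card hands,
# i.e. 52! / (13!)^4.
deals = math.factorial(52) // math.factorial(13) ** 4
print(f"{deals:.3e}")  # on the order of 10^28
```

The double dummy search space then multiplies each deal by the possible play states, which is what the paper's state-space estimate addresses.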
[4817] vixra:1309.0147 [pdf]
The Proper Kinematic Properties of a Non-Inertial Rigid Reference Frame as 4-Invariants
The article proposes and verifies invariant equations for the proper kinematic properties of a rigid frame. From these conditions follow the equation of motion of its proper tetrad and the equations of the inverse kinematics problem, i.e., the differential equations that recover the motion parameters of a rigid reference frame from its known proper acceleration and angular velocity. In particular, we show that if a moving reference frame has an original Thomas precession, then relative to a new lab frame it will exhibit a combination of two rotations, a new proper Thomas precession and a Wigner rotation, which together give the original Thomas precession frequency.
[4818] vixra:1309.0130 [pdf]
Storkey Learning Rules for Hopfield Networks
We summarize the Storkey learning rules for the Hopfield model and evaluate their performance relative to other learning rules. Hopfield models are normally used for auto-association, and the Storkey learning rules have been found to strike a good balance between local learning and capacity. In this paper we outline different learning rules and summarize capacity results. Hopfield networks are related to Boltzmann machines: they are the same as fully visible Boltzmann machines in the zero-temperature limit. Perhaps renewed interest in Boltzmann machines will produce renewed interest in Hopfield learning rules?
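For concreteness, the incremental weight update of the Storkey (1997) rule can be sketched as follows (a minimal NumPy sketch; the helper names `storkey_train` and `recall` are ours, not from the paper):

```python
import numpy as np

def storkey_train(patterns, n):
    """Incrementally train Hopfield weights with the Storkey (1997) rule."""
    W = np.zeros((n, n))
    for xi in patterns:
        xi = np.asarray(xi, dtype=float)
        h = W @ xi                                   # h_i = sum_k w_ik xi_k
        # H[i, j] = sum_{k != i, k != j} w_ik xi_k  (local field minus self terms)
        H = h[:, None] - np.diag(W)[:, None] * xi[:, None] - W * xi[None, :]
        # dw_ij = (xi_i xi_j - xi_i H_ji - H_ij xi_j) / n
        W = W + (np.outer(xi, xi) - xi[:, None] * H.T - H * xi[None, :]) / n
        np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous recall: iterate the sign rule from a probe state."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0    # break ties deterministically
    return s
```

Stored ±1 patterns then become (approximate) fixed points of `recall`; the extra local-field terms relative to the plain Hebbian update are what buy the rule its higher capacity while keeping learning local.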
[4819] vixra:1309.0129 [pdf]
An Effective Neutrosophic Set-Based Preprocessing Method for Face Recognition
Face recognition (FR) is a challenging task in biometrics due to varying illumination, pose, and possible noise. In this paper, we propose to apply a novel neutrosophic set (NS)-based preprocessing method to simultaneously remove noise and enhance facial features in original face images.
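The abstract does not give the mapping itself, but a commonly used neutrosophic-domain image transform (following the Guo-Cheng formulation; this sketch, including the helper name `to_neutrosophic` and the window size `w`, is our illustration and may differ from the authors' variant) sends each pixel to a truth/indeterminacy/falsity triple:

```python
import numpy as np

def to_neutrosophic(img, w=3):
    """Map a grayscale image into the neutrosophic domain (T, I, F)."""
    img = np.asarray(img, dtype=float)
    p = w // 2
    padded = np.pad(img, p, mode="edge")
    # Local mean over a w x w window (box filter built from shifted sums).
    local = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                for dy in range(w) for dx in range(w)) / (w * w)
    # T: normalized local mean (membership of the "object" pixel).
    T = (local - local.min()) / (local.max() - local.min() + 1e-12)
    # I: normalized local inhomogeneity, high where the image is noisy.
    delta = np.abs(img - local)
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
    F = 1.0 - T
    return T, I, F
```

Denoising then typically operates only on pixels whose indeterminacy `I` exceeds a threshold, which is what lets noise removal and feature enhancement happen simultaneously.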
[4820] vixra:1309.0119 [pdf]
Simultaneous Maximisation in Economic Theory
This document provides translations of two important but neglected articles of Bruno de Finetti, Problemi di "optimum" and Problemi di "optimum" vincolato. The articles deal with the theory of simultaneously maximising a number of functions. A major field of application is economic theory, which through the assumption of rational, maximising behaviour of all participants defines a set of interdependent maximum problems as its model of the world.
[4821] vixra:1309.0118 [pdf]
Reconsidering Nash: the Nash Equilibrium is Inconsistent
This paper tries to rekindle the revolution in economic theory that Von Neumann and Morgenstern intended to unleash, but that was smothered by Nash. The assumption of rational, maximising behaviour by the participants in a social exchange economy leads to a set of interdependent maximum problems. The appropriate treatment of this kind of problem is provided by the theory of simultaneous maximisation, systematically studied for the first time by Bruno de Finetti. Cooperative game theory is a branch of simultaneous maximisation. Non-cooperative game theory errs in deriving the first-order conditions and yields conflicting outcomes for equivalent problems; a prime example is the Cournot equilibrium's differing from the Bertrand equilibrium. The analysis implies that large parts of positive economics must be revised, that normative economics is impossible, and that a stronger assumption is needed for economics to make determinate predictions of how money makes the world go round.
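The Cournot-Bertrand discrepancy the author points to can be reproduced in a standard linear duopoly (illustrative textbook numbers, not taken from the paper):

```python
# Linear inverse demand P = a - b*(q1 + q2), constant marginal cost c.
a, b, c = 10.0, 1.0, 2.0   # illustrative parameters

# Cournot (quantity setting): solving the symmetric first-order conditions
q_cournot = (a - c) / (3 * b)          # each firm's equilibrium quantity
p_cournot = a - b * (2 * q_cournot)    # equals (a + 2*c) / 3

# Bertrand (price setting): undercutting drives price down to marginal cost
p_bertrand = c

print(round(p_cournot, 4), p_bertrand)   # 4.6667 2.0
```

Two models of the same rational duopolists thus predict different prices, which is exactly the kind of conflicting outcome for equivalent problems that the paper takes as evidence against the non-cooperative approach.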
[4822] vixra:1309.0117 [pdf]
Introducing the Metric Laplace Equation: A Disturbing Proposed Solution to the Cosmological Constant Problems
In spite of the widespread fanfare of the 1998 discovery of a positive accelerating expansion and the subsequent need for a “Dark Energy” placeholder in physics, the one geometric component that seems to share a relationship with it, the Cosmological Constant, has become shrouded in even more questions. After a century of concentrated efforts, the mounting lack of forthcoming answers has “driven” the NSF/NASA/DOE Dark Energy Task Force to consider whether general relativity is “incorrect”. In keeping with this reluctant but forced skepticism we subject an early competitor to general relativity, Gunnar Nordström's version of the Poisson equation, to a more stringent definition utilizing an asymmetric property of the Fundamental Theorem of Calculus. We derive from this property a new metric Laplacian definition for flatness that in perturbed spherical symmetry form greatly resembles the Schwarzschild solution. However, this metric version would seem capable of uniting gravity with QFT by utilizing the widely considered equivalence of the Cosmological Constant with a proposed large-value vacuum energy density, but at the expense of differential topology and our understanding of tensors in general. A much larger penalty though seems to be that it results in a geometrical counterpoint to the physical explanations for general relativity, QFT and energy density.
[4823] vixra:1309.0084 [pdf]
Revisited Pound-Rebka Experiment Shows Einstein's Relativity is Just an Inaccurate Approximation to Reality
The Pound-Rebka experiment is a famous test of the theory of general relativity and is taken as a paradigm for proving that Einstein's relativity is true. In this experiment there are two Doppler effects involved, namely the gravitational Doppler effect predicted by GR (General Relativity) and the inertial Doppler effect predicted by SR (Special Relativity). Each kind of effect is modelled by its own equations. The aim of the experiment was to balance both effects in order to attain a null effective Doppler effect, so that the measured electromagnetic frequency equalled the originally emitted frequency. This implied that if photons were emitted from the top of the tower towards the ground detector, the gravitational Doppler effect would be a blue shift, that is, a frequency increase; but if those photons were emitted from the ground towards the top of the tower, the detector at the top would measure a lower frequency for the same gravitational Doppler effect. In order to achieve the relative inertial movement required by SR, the emission source of photons was placed in the center of a loudspeaker cone, so that by vibrating the speaker cone the source moved with varying speed, thus creating varying Doppler shifts.
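The balancing described above can be put in numbers (an illustrative first-order sketch using the commonly quoted 22.5 m tower height; the figures are not taken from this paper):

```python
g = 9.81             # m/s^2, surface gravity
h = 22.5             # m, commonly quoted height of the Jefferson tower at Harvard
c = 299_792_458.0    # m/s, speed of light

# First-order gravitational fractional frequency shift for a falling photon.
frac_shift = g * h / c ** 2
# Source speed whose first-order inertial Doppler shift cancels it.
v_balance = g * h / c

print(f"gravitational shift ~ {frac_shift:.2e}")   # ~2.46e-15
print(f"balancing speed ~ {v_balance:.2e} m/s")    # ~7.36e-07 m/s
```

The tiny balancing speed, well below a micron per second, is why the slow motion of a loudspeaker cone sufficed to sweep through and null the gravitational shift.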
[4824] vixra:1309.0080 [pdf]
The Truly Paradoxical Case of the Symmetrically Accelerated Twins (Paper II)
This is the second installment in a four part series, the aim of the work being to introduce absolute motion into Einstein's Special Theory of Relativity (STR). Herein, we depart from the traditional case where one twin stays put while the other rockets into space, and consider the case of identically accelerated twins. Both twins depart at uniform relativistic speeds in opposite directions for a round trip from the Earth on their 21st birthday, destined for some distant constellation that is a distance L_0 away in the rest frame of the Earth. A proper application of Einstein's STR tells us that the Earth bound observers will conclude that on the day of reunion both twins must have aged the same, albeit their clocks (which were initially synchronized with those of the Earth bound observers) will have registered a duration less than that registered by the Earth bound observers. In the traditional twin paradox, it is argued that the stay at home twin will have aged more than the traveling twin, and the asymmetry is attributed to the fact that the traveling twin's frame of reference is not an inertial reference frame during the periods of acceleration and deceleration, making it ``illegal" for the traveling twin to use the STR in their frame, thus ``resolving" the paradox. This same argument does not hold in the case considered here, as both twins undergo identical experiences in which each twin sees the other as the one that is in motion. This means each twin must conclude that the other twin is the one that is younger. They will conclude that their ages must be numerically different, thus disagreeing with the Earth bound observers that their ages are the same. This leads us to a true paradox that throws Einstein's Philosophy of Relativity into complete disarray.
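The Earth-frame bookkeeping in the symmetric scenario is easy to check numerically (the values of v and L_0 below are illustrative, not the paper's):

```python
from math import sqrt

v = 0.8     # twin speed as a fraction of c (illustrative)
L0 = 4.0    # one-way distance in light-years, Earth rest frame (illustrative)

T_earth = 2 * L0 / v                  # Earth-frame round-trip time, in years
gamma = 1.0 / sqrt(1.0 - v ** 2)      # Lorentz factor, identical for both twins
tau_twin = T_earth / gamma            # proper time recorded on each twin's clock

print(round(T_earth, 3), round(tau_twin, 3))   # 10.0 6.0
```

Because the speeds are identical, the Earth observers assign both twins the same proper time; the paradox the paper presses is that each twin, applying STR naively in their own frame, nevertheless judges the other to be younger.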
[4825] vixra:1309.0069 [pdf]
The Truth About Climate Change
Climatology occupies the intersection of science policy and public understanding of science. In such a prominent position, the wide spectrum of climate opinions is remarkable. Society has achieved a paradigm in which global warming subscribers and non-subscribers are largely segregated by political affiliation. Since science is non-political, only a misunderstanding of the science can facilitate such a segregation. In the first section we analyze a recent study by Cook \emph{et al.} finding overwhelming scientific endorsement for the greenhouse theory of anthropogenic global warming (AGW). We find the popular reporting on Cook's result is not accurate. The aim of the following section is to clarify the science behind the most popular climate arguments and introduce the reader to some evidence that is not widely publicized. Even the astute non-climatologist should come away from this report with an enhanced understanding of relevant issues in modern climate science.
[4826] vixra:1309.0068 [pdf]
On a New and Novel Solution to Einstein's Famous Twin Paradox Without Invoking Accelerations of the Travelling Twin (Paper I)
This is the first instalment in a four part series, the aim of the work being to introduce absolute motion into Einstein's Special Theory of Relativity (STR). In the traditional treatment of Einstein's famous twin paradox, it is argued that the stay at home twin will age more than the ``travelling" twin, and the asymmetry is attributed to the fact that the travelling twin's reference system is not an inertial reference system during the periods of acceleration and deceleration, thus making it ``illegal" for the ``travelling" twin to use the STR in their reference system, hence ``resolving" the paradox altogether. From within the domains, confines and provinces of Einstein's STR, we argue without considering the accelerations and decelerations, and we show that, indeed, it is the ``travelling" twin that is younger at the point of reunion. This brings us to a point of admission that there is indeed a twin who really does the travelling and another that does the staying at home. Hidden within the labyrinth of its seemingly coherent and consistent structure and fabric, does Einstein's STR imply absolute motion -- we ask? This is the question that we leave hanging in the mind of the reader. In the next reading, we propose a new version of the twin paradox, where the scenario is truly symmetric from either of the twins' reference systems -- we have coined this the ``Symmetric Twin Paradox (STP)". This version (STP) unearths an irretrievable contradiction hidden at the deepest and subtlest level of Einstein's STR. It is shown that Einstein's STR is unable to resolve this irretrievable contradiction, even if the accelerations and decelerations are taken into account. Not even Einstein's General Theory of Relativity can be brought to the rescue in the case of the STP. In our third instalment, we shall set forth a new version of the STR where absolute motion is permitted. This version solves the symmetric twin paradox and every known paradox of relativity.
Lastly, we apply this new STR, where absolute motion is permitted, to experimental efforts that have been made to measure absolute motion. Most well trained physicists tend to completely ignore readings purporting to go against Einstein's STR. We would like to persuade our reader to make a brief stop and consider for a minute what we have to say in our four part series of readings.
[4827] vixra:1309.0056 [pdf]
Still About Non-Planar Twistor Diagrams
A question about how non-planar Feynman diagrams could be represented in the twistor Grassmannian approach inspired a re-reading of the <A HREF="http://arxiv.org/pdf/1212.5605v1.pdf">recent article</A> by Nima Arkani-Hamed et al. This inspired the conjecture that non-planar twistor diagrams correspond to non-planar Feynman diagrams, together with a concrete proposal for realizing the earlier idea that the contribution of non-planar diagrams could be calculated by transforming them to planar ones, using the procedure applied in knot theories to eliminate crossings: a knot diagram with a crossing is reduced to a combination of two diagrams in which the crossing is replaced with reconnection.
[4828] vixra:1309.0055 [pdf]
What Are the Counterparts of Einstein's Equations in TGD?
The original motivation of this work was related to Platonic solids. The playing with Einstein's equations and the attempts to interpret them physically forced the return to an old interpretational problem of TGD. TGD allows enormous vacuum degeneracy for Kähler action but the vacuum extremals are not gravitational vacua. Could this mean that TGD forces to modify Einstein's equations? Could space-time surfaces carrying energy and momentum in GRT framework be vacua in TGD context? Of course, also in GRT context the cosmological constant means just this, and it is an experimental fact that the cosmological constant is non-vanishing albeit extremely small. </p><p> Trying to understand what is involved led to the realization that the hypothesis that preferred extremals correspond to the solutions of Einstein-Maxwell equations with cosmological constant is too restricted in the case of vacuum extremals and also in the case of standard cosmologies imbedded as vacuum extremals. What one must achieve is the vanishing of the divergence of the energy momentum tensor of Kähler action, expressing the local conservation of energy momentum currents. The most general analog of Einstein's equations and Equivalence Principle would be just this condition, giving in GRT framework rise to the Einstein-Maxwell equations with cosmological constant. The vanishing or light-likeness of the Kähler current guarantees the vanishing of the divergence for the known extremals. </p><p> One can however wonder whether it could be possible to find some general ansätze allowing to satisfy this condition. This kind of ansätze can be indeed found and can be written as kG+∑ Λ<sub>i</sub>P<sub>i</sub>=T, where Λ<sub>i</sub> are cosmological "constants" and P<sub>i</sub> are mutually orthogonal projectors such that each projector contribution has a vanishing divergence. 
One can interpret the projector contribution in terms of topologically condensed matter, whose energy momentum tensor the projectors code in the representation kG=-∑Λ<sub>i</sub>P<sub>i</sub>+T. Therefore Einstein's equations with cosmological constant are generalized: the analogs of the cosmological constant are however not genuine constants anymore. This generalization is not possible in General Relativity, where Einstein's equations follow from a variational principle. </p><p> The suggested quaternionic preferred extremals and preferred extremals involving Hamilton-Jacobi structure could be identified as different families characterized by the little group of particles involved and assignable to time-like/light-like local direction. One should prove that this ansatz works also for all vacuum extremals. This progress - if it really is progress - provides a more refined view about how TGD Universe differs from the Universe according to General Relativity and leads also to a model for how the cosmic honeycomb structure with basic unit cells having size scale 10<sup>8</sup> ly could be modelled in TGD framework.
[4829] vixra:1309.0054 [pdf]
Could One Find a Geometric Realization for Genetic and Memetic Codes?
The idea that icosahedral structures assignable to water clusters could define a geometric representation of some kind of code is very intriguing. Genetic code is of course the code that comes first to mind. The observation that the number of faces of a tetrahedron (icosahedron) is 4 (20) raises the question whether genetic code might have a geometric representation. In TGD framework also a second code emerges: I have christened it memetic code. Also memetic code could have a geometric realization. Another purely TGD-based notion is that of dark DNA, allowing to assign the states of dark protons with DNA, RNA, tRNA and amino-acids and to predict correctly the numbers of DNA codons coding for a given amino-acid in the vertebrate genetic code. A further element is the possibility of strong gravitation in TGD Universe, meaning that space-time geometry and topology can be highly non-trivial even in condensed matter length scales. These ingredients allow to imagine geometric representations of genetic and memetic code.
[4830] vixra:1309.0053 [pdf]
Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness
This work has been inspired by two books. The first book "On intelligence" is by Jeff Hawkins. The second book "Consciousness: the science of subjectivity" is by Antti Revonsuo. </p><p> Jeff Hawkins has developed a highly interesting and inspiring vision about neo-cortex, one of the few serious attempts to build a unified view about what brain does and how it does it. Since key ideas of Hawkins have quantum analogs in TGD framework, there is high motivation for developing a quantum variant of this vision. The vision of Hawkins is very general in the sense that all parts of neo-cortex would run the same fundamental algorithm, which is essentially checking whether the sensory input can be interpreted in terms of standard mental images stored as memories. This process occurs at several abstraction levels and involve massive feedback. If it succeeds at all these levels the sensory input is fully understood. </p><p> TGD suggests a generalization of this process. Quantum jump defining moment of consciousness would be the fundamental algorithm realized in all scales defining an abstraction hierarchy. Negentropy Maximization Principle (NMP) would be the variational principle driving this process and in optimal case lead to an experience of understanding at all levels of the scale hierarchy realized in terms of generation of negentropic entanglement. The analogy of NMP with second law suggests strongly thermodynamical analogy and p-adic thermodynamics used in particle mass calculations might be also seen as effective thermodynamics assignable to NMP. </p><p> In the following I will first discuss the ideas of Hawkins and then summarize some relevant aspects of quantum TGD and TGD inspired theory of consciousness briefly in the hope that this could make representation comprehensible for the reader having no background in TGD (I hope I have achieved this). 
The representation involves some new elements: reduction of the old idea about motor action as time reversal of sensory perception to the anatomy of quantum jump in zero energy ontology (ZEO); interaction free measurement for photons and phonons as a non-destructive reading mechanism of memories and future plans (time reversed memories) represented 4-dimensionally as negentropically entangled states approximately invariant under quantum jumps (this resolves a basic objection against identifying quantum jump as moment of consciousness), leading to the identification of analogs of imagination and internal speech as fundamental elements of cognition; and a more detailed quantum model for association and abstraction processes. </p><p> I will also compare various theories and philosophies of consciousness with TGD approach, following the beautifully organized representation of Revonsuo. Also anomalies of consciousness are briefly discussed. My hope is that this comparison would make explicit that TGD based ontology of consciousness indeed circumvents the difficulties against monistic and dualistic approaches and also survives the basic objections that I have been able to invent hitherto.
[4831] vixra:1309.0052 [pdf]
Are Dark Photons Behind Biophotons?
TGD approach leads to the prediction that biophotons result when dark photons with a large value of effective Planck constant and large wavelength transform to ordinary photons with the same energy. The collaboration with Lian Sidorov stimulated a more detailed look at what biophotons are. Also the recent progress in understanding the implications of the basic vision behind TGD inspired theory of consciousness served as an additional motivation for a complementary treatment. <OL> <LI>The anatomy of quantum jump in zero energy ontology (ZEO) allows to understand basic aspects of sensory and cognitive processing in brain without ever mentioning brain. Sensory perception - motor action cycle with motor action allowing interpretation as time reversed sensory perception reflects directly the fact that state function reductions occur alternately to the two opposite boundaries of causal diamond (which itself - or rather, the quantum superposition of CDs - changes in the process). <LI>Also the abstraction and de-abstraction processes in various scales which are essential for the neural processing emerge already at the level of quantum jump. The formation of associations is one aspect of abstraction since it combines different manners to experience the same object. Negentropic entanglement of two or more mental images (CDs) gives rise to rules in which superposed n-particle states correspond to instances of the rule. Tensor product formation generating negentropic entanglement between new mental images and earlier ones generates longer sequences of memory mental images and gives rise to a negentropy gain generating an experience of understanding, recognition, something which has positive emotional coloring. Quantum superposition of perceptively equivalent zero energy states in a given resolution gives rise to averaging. Increasing the abstraction level means poorer resolution so that insignificant details are not perceived. 
<LI> Various memory representations should be approximately invariant under the sequence of quantum jumps. Negentropic entanglement gives rise to this kind of stabilization. The assumption that the self model is a negentropically entangled system which does not change in state function reduction leads to a problem. If the conscious information about this kind of subself corresponds to a change of negentropy in quantum jump, it seems impossible to get this information. Quite generally, if moment of consciousness corresponds to quantum jump and thus change, how is it possible to carry conscious information about quantum state? Interaction free measurement however allows to circumvent the problem: non-destructive reading of memories and future plans becomes possible in arbitrarily good approximation. </p><p> This memory reading mechanism can be formulated for both photons and phonons, and these two reading mechanisms could correspond to visual memories as imagination and auditory memories as internal speech. Therefore dark photons decaying to biophotons could be a crucial element of imagination, and the notion of bio-phonon could also make sense and even follow as a prediction. This would also suggest a correlation of biophoton emission with EEG, for which there is considerable evidence. The observation that biophotons seem to be associated only with the right hemisphere suggests that at least some parts of the right hemisphere prefer dark photons and are thus specialized to visual imagination: spatial relationships are the speciality of the right hemisphere. Some parts of the left hemisphere at least might prefer dark phonons: the left hemisphere is indeed the verbal hemisphere specialized to linear linguistic cognition. 
</OL> In the sequel I shall discuss biophotons in TGD Universe as decay products of dark photons and propose among other things an explanation for the hyperbolic decay law in terms of quantum coherence and an echo like mechanism guaranteeing replication of memory representations. Applications to biology, neuroscience, and consciousness are discussed, and also the possible role of biophotons in remote mental interactions is considered. Also evidence for biophonons (the Taos hum) is discussed.
[4832] vixra:1309.0051 [pdf]
"Applied Biophysics of Activated Water" and the Basic Mechanism of Water Memory in TGD Framework
Considerable progress has occurred in the understanding of TGD inspired theory of consciousness and quantum biology during the first half of 2013. The new picture also allows a much better understanding of quantum biology. The work during the first half of 2013 has allowed to develop in detail several ideas about the role of the magnetic body. Magnetic flux tubes serve as correlates for the formation of quantum coherence and directed attention. The phase transitions changing the value of h<sub>eff</sub>, leading to a change of flux tube length, and the reconnections of the flux tubes play a key role in bio-catalysis. The dark photons propagating along MEs parallel to the flux tubes make possible resonant interactions between the entities at their ends, and the proposed view about sensory, memory, and cognitive representations relies on the hypothesis that the braiding of flux tubes defines negentropically entangled systems representing information which is read consciously and non-destructively in good approximation by using interaction free quantum measurements. Dark photons transform to ordinary photons in an energy conserving manner, and biophotons are identified as the outcome of this process. </p><p> With this background one returns to the old question "What is the exact mechanism of homeopathic healing?". I have considered answers to this question already earlier but they have not been completely convincing. It turns out that one manages to add the missing piece to the puzzle by asking the simple question "What is the molecular cause of illness and how does the homeopathic remedy eliminate it?". Amusingly, I could have identified this piece years ago but for some reason did not pose the correct question. </p><p> The resulting model of homeopathic healing is amazingly simple and at the same time a universal model of bio-catalysis. 
The entity mimicking the invader molecules "steals" its cyclotron frequencies by varying the thickness of the magnetic flux tube, and thus the magnetic field strength and cyclotron frequency, until reconnection with the molecule's magnetic body becomes possible and fusion to a single quantum coherent system occurs. In TGD inspired theory of consciousness this process corresponds to directed attention and conscious recognition of the presence of the invader molecule. After this the mimicking entity freezes the thickness of the flux tubes in question, becoming thus capable of mimicking the invader molecule, attaching to the receptors of the invader molecule, stealing the attention of the host organism, and inducing healing. </p><p> The results summarized in the book <A HREF="http://www.amazon.com/Applied-Biophysics-Activated-Water-Applications/dp/9814271187">"Applied Biophysics of Activated Water"</A> of Vysotskii et al provide a test bench for the proposal and allow to formulate it in a more detailed manner. The basic message of the book is that the activation process yields water with anomalous physical properties, including water memory, and having highly non-trivial - in general positive - effects on living matter. The identification of activated water as the ordered water appearing in the cell interior is proposed. </p><p> This and the general features of the activation process inspire the question whether the analog of the activation process might have taken place during pre-biotic evolution and generated the ordered water making DNA stable. Molecular mimicry, making possible the immune system, would have emerged at the same step and meant also the emergence of symbolic representations. Also the pairs formed by receptor molecules and molecules attaching to them would have emerged at this crucial step when dark matter enters the game. One of the key questions is whether it is dark water molecule clusters or dark DNA that performs the mimicry of various molecules. 
The results of the book lend support for the model based on dark DNA.
[4833] vixra:1309.0050 [pdf]
Hypnosis as Remote Mental Interaction
In TGD framework one can argue that hypnosis represents an example of the fact that the brain is not "private property": the hypnotist uses the biological body and brain of the subject as an instrument. Therefore a remote mental interaction is in question. This idea generalizes: if one accepts the self hierarchy, one can assign to any kind of higher level structure - family, organization, species, .... - a higher level self and a magnetic body carrying dark matter, and these magnetic bodies can use lower level magnetic bodies as their instruments to realize their intentions. Biological bodies would be an important level in the hierarchy, which would continue down to cellular, molecular, and perhaps even lower levels. </p><p> This view challenges the prevailing views about the brain as the sole seat of consciousness and the assumption that conscious entities assigned with brains are completely isolated. A given magnetic body can use several biological bodies, although one can assign to it the one providing the sensory input - at least during the wake-up state. Note however that it is easy to produce the illusion that some foreign object is part of the biological body. </p><p> More than a decade ago I proposed a model for so called bicamerality based on the notion of semitrance. In semitrance the brain of the subject becomes partially entangled with a higher level self - in this case the self of the family or of a more general social group uses the biological body of a member for its purposes. The higher level self gives its commands and advice, interpreted by the bicameral subject as "God's voice". The consciousness of the schizophrenic might be basically bicameral. Also hypnotic state and dream consciousness are candidates for bicameral consciousness. 
</p><p> In this article I develop essentially this idea, but using as input the recent understanding of TGD inspired theory of consciousness and quantum biology, and end up with a proposal for a detailed mechanism for how the magnetic body hijacks some parts of the brain of the subject: prefrontal cortex and anterior cingulate cortex are argued to be the most plausible targets of hijacking. Also proposed is a mechanism explaining how sensory hallucinations and motor actions are induced by the hypnotist by inhibiting a halting mechanism that normally prevents imagined motor actions from becoming real and sensory imagination from becoming "qualiafied".
[4834] vixra:1309.0049 [pdf]
About Concrete Realization of Remote Metabolism
The idea of "remote metabolism" (or quantum credit card, as I have also called it) emerged more than a decade ago - and zero energy ontology (ZEO) provides the justification for it. The idea is that the system needing energy sends negative energy to a system able to receive the negative energy and make a transition to a lower energy state. This kind of mechanism would be ideal for biology, where rapid reactions to a changing environment are essential for survival. Originally this article was intended to summarize a more detailed model of remote metabolism but the article expanded to a considerably more detailed view about TGD inspired biology than the earlier vision. </p><p> It is shown that the basic notions of the theory of Ling about cell metabolism inspired by various anomalies have natural counterparts in TGD based model relying on the notion of magnetic body. Remote metabolism can be considered as a universal metabolic mechanism with magnetic body of ATP, or system containing it, carrying the metabolic energy required by the biological user. In particular, the role of ATP is discussed in Ling's theory and from the point of view of TGD-inspired theory of consciousness. </p><p> It is easy to imagine new technologies relying on negative energy signals propagating to the geometric past and ZEO justifies these speculations. Remote metabolism could make possible a new kind of energy technology. The discoveries of Tesla made more than a century ago plus various free energy anomalies provide excellent material for developing these ideas, and one ends up with a concrete proposal for how dark photons and dark matter could be produced in capacitor-like systems analogous to cell membranes and acting as Josephson junctions and how energy could be extracted from "large" magnetic bodies. 
</p><p> The model identifies the Josephson frequency with a subharmonic of the frequency of a periodic voltage perturbation, assumed to correspond to the cyclotron frequency in biological applications. Together with quantization conditions for charge and effective Planck constant, it leads to precise quantitative predictions for capacitor-like systems acting as dark capacitors. A relationship between the magnetic field at the magnetic body of the system and the voltage of the capacitor-like Josephson junction also emerges. </p><p> The predictions allow new quantitative insights into biological evolution as the emergence of Josephson junctions realized as capacitor-like systems at the level of the cell, DNA, proteins, and the brain. h<sub>eff</sub> can be related to the Josephson frequency and the cyclotron frequency, and thus to measurable parameters. h<sub>eff</sub> serves as a kind of intelligence quotient, and its maximization requires maximizing both the voltage and the area of the membrane-like capacitor system involved. This is what has happened during evolution. Indeed, the internal cell membranes, cortical layers, and the DNA double strand in chromosomes are strongly folded, and the membrane electric field is roughly twice the field at which dielectric breakdown occurs in air. Even the 40 Hz thalamocortical resonance frequency can be understood in the framework of the model. </p><p> The claimed properties of Tesla's "cold electricity" strongly suggest an interpretation in terms of dark matter in the TGD sense. This leads to the proposal that a transition to the dark phase occurs when the voltage equals the rest mass of the charged particle involved. This criterion generalizes to the case of the cell membrane and relates the values of h<sub>eff</sub>, the p-adic prime p, and the threshold potential for various charged particles to each other.
The idea that the nerve pulse corresponds to a breakdown of super-conductivity as a transition from the dark to the ordinary phase receives additional support. The resulting picture conforms surprisingly well with earlier speculations involving dark matter and p-adically scaled variants of weak and color interactions in biologically relevant length scales. An extremely simple mechanism producing ATP is proposed, involving as its basic step only the kicking of two protonic Cooper pairs through the cell membrane by a Josephson photon. Also the proposal that neutrino Cooper pairs could be highly relevant not only for cognition but also for metabolism finds support.
[4835] vixra:1309.0048 [pdf]
Could Photosensitive Emulsions Make Dark Matter Visible?
The article <A HREF="http://restframe.com/downloads/tachyon_monopoles.pdf">"Possible detection of tachyon monopoles in photographic emulsions"</A> by Keith Fredericks describes in detail very interesting observations by him, and also by many other researchers, of strange tracks in photographic emulsions induced by various (probably) non-biological mechanisms and also by exposure to human hands (touching by fingertips), as in the experiments of Fredericks. That the photographic emulsion itself consists of organic matter (say gelatin) might be of significance, as might the fact that practically all experimental arrangements involve dielectric breakdown (nerve pulses in the experiments of Fredericks). For a particle physicist it is very difficult to accept the proposed interpretation as particle tracks, and even more difficult to agree with the identification of the particles as tachyonic magnetic monopoles. A more natural interpretation seems to be as "photographs" of pre-existing structures - either completely standard but not well-understood, or reflecting new physics associated with living matter. In the TGD framework the identification as images of magnetic flux tubes carrying dark matter and associated with the emulsion - less probably with the source - is natural, and the images would be very much analogous to those obtained by Peter Gariaev's group by illuminating a DNA sample with visible light.
[4836] vixra:1309.0047 [pdf]
Some Fresh Ideas About Twistorialization of TGD
The article by Tim Adamo titled <A HREF="http://arxiv.org/pdf/1308.2820.pdf">"Twistor actions for gauge theory and gravity"</A> considers the formulation of N=4 SUSY gauge theory directly in twistor space instead of Minkowski space. The author is able to deduce the MHV formalism, tree level amplitudes, and planar loop amplitudes from an action in twistor space. Also local operators and null polygonal Wilson loops can be expressed twistorially. This approach is applied also to general relativity: one of the challenges is to deduce the MHV amplitudes for Einstein gravity. Reading the article inspired a fresh look at twistors and possible answers to several questions (I have written two chapters about twistors and TGD giving a view of the development of the ideas). </p><p> Both M<sup>4</sup> and CP<sub>2</sub> are highly unique in that they allow twistor structure, and in TGD one can overcome the fundamental "googly" problem of the standard twistor program, which prevents twistorialization for a general space-time metric, by lifting twistorialization to the level of the imbedding space containing M<sup>4</sup> as a Cartesian factor. Also CP<sub>2</sub> allows a twistor space identifiable as the flag manifold SU(3)/U(1)× U(1), as the self-duality of the Weyl tensor indeed suggests. This provides an additional "must" in favor of sub-manifold gravity in M<sup>4</sup>× CP<sub>2</sub>. Both the octonionic interpretation of M<sup>8</sup> and the triality possible in dimension 8 play a crucial role in the proposed twistorialization of H=M<sup>4</sup>× CP<sub>2</sub>. It also turns out that M<sup>4</sup>× CP<sub>2</sub> allows a natural twistorialization respecting the Cartesian product structure: this is far from obvious, since it means that one considers space-like geodesics of H with light-like M<sup>4</sup> projection as the basic objects. p-Adic mass calculations however require tachyonic ground states, and in generalized Feynman diagrams fermions propagate as massless particles in the M<sup>4</sup> sense.
Furthermore, light-like H-geodesics lead to non-compact candidates for the twistor space of H. Hence the twistor space would be the 12-dimensional manifold CP<sub>3</sub>× SU(3)/U(1)× U(1). </p><p> The generalisation of 2-D conformal invariance extending to an infinite-D variant of Yangian symmetry; light-like 3-surfaces as the basic objects of the TGD Universe and as generalised light-like geodesics; the light-likeness condition for momentum generalized to the infinite-dimensional context via super-conformal algebras: these are the facts inspiring the question whether also the "world of classical worlds" (WCW) could allow twistorialization. It turns out that the center of mass degrees of freedom (imbedding space) allow natural twistorialization: the twistor space for M<sup>4</sup>× CP<sub>2</sub> serves as a moduli space for the choice of quantization axes in the Super Virasoro conditions. Contrary to the original optimistic expectations, it turns out that although the analog of the incidence relations holds true for the Kac-Moody algebra, twistorialization in the vibrational degrees of freedom does not look like a good idea, since the incidence relations force an effective reduction of the vibrational degrees of freedom to four. The Grassmannian formalism for scattering amplitudes generalizes practically as such to generalized Feynman diagrams: the basic modification is due to the presence of the CP<sub>2</sub> twistorialization required by color invariance and the fact that the 4-fermion vertex (rather than the 3-boson vertex) and its super counterparts now define the fundamental vertices.
[4837] vixra:1309.0036 [pdf]
Quantum Theory Depending on Maxwell Equations
This article attempts to unify the four basic forces using Maxwell's equations, the only experimentally established theory. A self-consistent Maxwell equation, with the current arising from the electromagnetic field itself, is proposed and solved, yielding four kinds of electrons and the structures of particles. The static properties, decays, and scattering are derived, all meeting the experimental data. The momentum-energy tensor of the electromagnetic field, entering the equation of general relativity, is discussed. Finally, the elementary conformity between this theory and QED and the weak theory is discussed and found compatible, except for some bias in parts of the analysis.
[4838] vixra:1309.0035 [pdf]
Is the State of Low Energy Stable? Negative Energy, Dark Energy and Dark Matter
The principle which says "the state of low energy is stable" is one of the fundamental principles of physics, and its influence extends across all fields of physics. In this article, we will show that this principle is incomplete. A system with positive mass (energy) is stable at a low energy state, whereas a system with negative mass (energy) is stable at a high energy state. Because of this, "the problem of transition to a minus infinite energy level" does not occur, and therefore negative energy and positive energy can coexist. Moreover, we will show that negative energy provides an explanation for dark matter and dark energy, the biggest issues in cosmology at present. We demonstrate the ratio between matter, dark matter, and dark energy through this model, and computer simulation shows that this assumption is appropriate. The ΛCDM model predicts that the ratio of matter to dark matter will be constant, but this model suggests that as the universe expands, the gravitational effects of matter and dark matter differ. Therefore, it is necessary to investigate the change of the ratio (Ω_d/Ω_m).
[4839] vixra:1309.0030 [pdf]
Neutrosophic Soft Set
In this paper we study the concept of the neutrosophic set of Smarandache. We introduce this concept into soft sets and define the neutrosophic soft set. Some definitions and operations on neutrosophic soft sets are introduced, and some properties of this concept are established.
[4840] vixra:1309.0029 [pdf]
A Neutrosophic Soft Set Approach to a Decision Making Problem
Decision making problems in an imprecise environment have assumed paramount importance in recent years. Here we consider an object recognition problem in an imprecise environment. The recognition strategy is based on a multiobserver input parameter data set.
[4841] vixra:1309.0019 [pdf]
Vector Field Computations in Clifford's Geometric Algebra
Exactly 125 years ago G. Peano introduced the modern concept of vectors in his 1888 book "Geometric Calculus - According to the Ausdehnungslehre (Theory of Extension) of H. Grassmann". Unknown to Peano, the young British mathematician W. K. Clifford (1845-1879) in his 1878 work "Applications of Grassmann's Extensive Algebra" had already, 10 years earlier, perfected Grassmann's algebra into the modern concept of geometric algebras, including the measurement of lengths (areas and volumes) and angles (between arbitrary subspaces). This currently leads to new, ideal methods for vector field computations in geometric algebra, of which several recent exemplary results will be introduced.
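As a hedged illustration of the measurement of lengths and angles mentioned above (a generic sketch, not code from the paper): in the geometric algebra of 3D Euclidean space, the geometric product of two vectors splits into a scalar inner part and a bivector outer part, ab = a·b + a∧b, and lengths and angles follow directly from these two parts.

```python
import numpy as np

def inner(a, b):
    """Scalar part of the geometric product ab (the usual dot product)."""
    return float(np.dot(a, b))

def outer(a, b):
    """Bivector part of ab, with components on the e12, e13, e23 planes."""
    return np.array([a[0]*b[1] - a[1]*b[0],
                     a[0]*b[2] - a[2]*b[0],
                     a[1]*b[2] - a[2]*b[1]])

def angle(a, b):
    """Angle between vectors, from tan(theta) = |a^b| / (a.b)."""
    return np.arctan2(np.linalg.norm(outer(a, b)), inner(a, b))

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])
print(angle(a, b))           # pi/4 for these two vectors
print(np.sqrt(inner(a, a)))  # length of a is 1.0
```

The same decomposition extends to blades of higher grade, which is how angles between arbitrary subspaces are measured.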
[4842] vixra:1309.0008 [pdf]
Yang-Mills Gauge Invariant Theory for Space Curved Electromagnetic Field
A new gauge invariant Lagrangian is proposed, in which the gauge field interacts with the charged electromagnetic fields. Gauge invariance is achieved by replacing the particle mass with a new invariant of the field, $F_{\mu\nu}F^{\mu\nu}$ multiplied by a calibration constant $\alpha_g$. It is shown that the proposed Lagrangian generates Dirac-like and electromagnetic field equations. The solution of the Dirac equations for a free, non-massless particle answers the 'question of the age' of why a free particle behaves in experiments like a de Broglie wave. The resulting wave functions of the proposed Lagrangian describe a quantized list of bispinor particles of different masses. Finally, it is shown that the renormalization of the proposed Lagrangian is similar to that of QED, owing to the similarity of the proposed Lagrangian to classical QED.
[4843] vixra:1308.0154 [pdf]
Comment on "Representation and Prediction of Molecular Diffusivity of Nonelectrolyte Organic Compounds in Water at Infinite Dilution Using the Artificial Neural Network-Group Contribution Method [Gharagheizi et al., J. Chem. Eng. Data 2011, 56, 1741-1750]
In their article, Gharagheizi et al. [J. Chem. Eng. Data 2011, 56, 1741-1750] claim to develop a model to predict the molecular diffusivity of "nonelectrolyte organic compounds" in water using an artificial neural network-group contribution method. Many of these compounds have ionizable functionalities with pKa values that result in either substantial or effectively complete ionization in water (making them electrolytes, in contrast to the claims of Gharagheizi et al.). Consequently, the model developed and applied by Gharagheizi et al. does not computationally model the actual speciation(s) of each compound expected to be present under the experimental conditions for which the underlying data has been obtained and erroneously classifies many organic compounds as non-electrolytes.
[4844] vixra:1308.0153 [pdf]
Comment on "Comparison of New and Existing Threshold Methods for Evaluating Sulfur Compounds in Different Base Wines"
In their article (Cliff et al., J. Sens. Stud., 2011, 26, 184-196), the authors plot the log mean aroma threshold concentrations for three sulfur aroma compound analytes on the y-axis against the qualitative descriptors of the wine (i.e., "model," "neutral," and "fruity") on the x-axis, and proceed to fit three log-linear regression models through the sets of data. The statistical validity of this exercise seems problematic (particularly with only three datapoints, and four significant figures in the resulting quoted regression constants), and the reason behind this choice of data analysis is unclear. Some type of statistical test (e.g., ANOVA) designed to investigate relative trend differences between categorical variables would perhaps be more appropriate, especially when such nebulous categorical descriptors as "neutral" and "fruity" are being employed along the x-axis and ordered in an arbitrary manner.
[4845] vixra:1308.0146 [pdf]
Unification of Mass and Gravitation in a Common 4D Higgs Compatible Theory
Since the 1920s, the formulas of mass and gravitation have worked perfectly with high accuracy. However, the basic principle of these two phenomena remains unknown. What is the origin of mass? How can spacetime be curved by mass? What is the mechanism by which spacetime creates gravitation?... The solution to these enigmas lies in general relativity. A thorough examination of the Einstein Field Equations highlights some minor inconsistencies concerning the sign and the meaning of tensors. Solving these inconsistencies fully explains the curvature of spacetime, mass, and gravitation, without modifying the mathematics of general relativity. Moreover, this explanation shows that mass and gravitation are two similar phenomena that can be unified in a common 4D Higgs compatible theory. Applications to astrophysics are also very important: black holes, dark matter, dark energy...
[4846] vixra:1308.0142 [pdf]
Vacuum Energy
English: (translation) This article offers an alternative way to calculate the vacuum pressure of spacetime, which agrees with the classical value up to a factor of 3 times the proton-electron mass ratio, eliminating the factor-of-$10^{120}$ error that arises with quantum field theory (QFT). Spanish: (original) Se ofrece en este artículo una forma alternativa de calcular la presión de vacío del espacio--tiempo y que coincide, en un factor de 3 por la relación de masas del protón y del electron, con el valor clásico. Eliminando el error de un factor de $10^{120}$ que surge con la teoría cuántica de campos (QFT).
[4847] vixra:1308.0141 [pdf]
Disposing Classical Field Theory, Part IV
It is shown, among other things, that a gauge invariant scalar (classical and quantum theoretical) electrodynamical field is a trivial field theory; in fact, it is shown that a non-zero scalar gauge field will not be charge/mass conserving unless it is zero. The classical action of a flux of charged and neutral particles is calculated. It is shown that this action is a spinor field $\Psi$ which satisfies $\Box \Psi=0$, i.e.: the action of neutral and charged currents of particles spreads at the speed of light, a result which was already shown in Part 1 of this paper by other means. The fact that the two solutions differ only by a constant factor $\gamma^0$ suggests that the electromagnetic and gravitational fields are of the same nature. Now, why is the gravitational force so much weaker than the electromagnetic one? A hint can perhaps be given with Part 2.
[4848] vixra:1308.0137 [pdf]
Calculation of the Hubble Parameter from Geometry
It is shown that the field equations of Einstein gravity sourced by a real massless scalar inflaton field $\varphi$, with inflaton potential identically equal to zero, cast on an eight-dimensional pseudo-Riemannian manifold $\mathbb{X}_{4,4}$ (a spacetime of four space dimensions and four time dimensions) admit a solution that exhibits temporal exponential \textbf{deflation of three of the four time dimensions} and temporal exponential inflation of three of the four space dimensions. [The signature and dimension of $\mathbb{X}_{4,4}$ are chosen because its tangent spaces satisfy a triality principle \cite{Nash2010} (Minkowski vectors and spinors are equivalent).] Comoving coordinates for the two \textbf{unscaled} dimensions are chosen to be $(x^4 \leftrightarrow \textrm{ time}, x^8 \leftrightarrow \textrm{ space})$. The $x^4$ coordinate corresponds to our universe's observed physical time dimension. The $x^8$ coordinate corresponds to a compact spatial dimension with circumference $C_8$. $C_8$ determines the initial value of the Hubble parameter $H$. Most importantly, this model describes an initially inflating/deflating Universe created with inflaton potential identically equal to zero, which is an initial condition that is exponentially more probable than an initial condition that assumes an initial inflaton potential of order of the Planck mass. This model predicts that the Hubble parameter $H$ during inflation is $ H = \frac{\pi}{3 \, C_8} $.
[4849] vixra:1308.0134 [pdf]
A Complete Relativity Theory Predicts with Precision the Neutrino Velocities Reported by OPERA, MINOS, and ICARUS
The present paper utilizes the recently proposed Complete Relativity Theory (CR) for the prediction of neutrino velocity in a prototypical neutrino velocity experiment. The derived expression for the relative difference of the neutrino velocity with respect to the velocity of light is a function of the anticipation time t, the traveled distance D, and the light velocity c, measured on Earth. It depends neither on the traveling particle's type nor on its energy level. With regard to fast neutrinos it is shown that the derived equation predicts with precision the results reported by OPERA, MINOS, and ICARUS. Since CR postulates that all physical entities, including the velocity of light, are relativistic entities, it follows that even though the results of the aforementioned experiments fail to support the neutrino superluminality claim, their precise prediction by a theory that diametrically opposes SR provides strong evidence for the inadequacy of SR in accounting for the dynamics of quasi-luminal particles. The aforementioned notwithstanding, a direct calculation of SR's predictions for the above mentioned studies yields grossly incorrect results.
[4850] vixra:1308.0133 [pdf]
The Dark Side Revealed: A Complete Relativity Theory Predicts the Content of the Universe
Dark energy and dark matter constitute about 95% of the Universe. Nonetheless, not much is known about them. Existing theories, including General Relativity, fail to provide plausible definitions of the two entities, or to predict their amounts in the Universe. The present paper proposes a new special relativity theory, called Complete Relativity theory (CR), that is anchored in Galileo's relativity, but without the notion of a preferred frame. The theory's results are consistent with Newtonian and quantum mechanics. More importantly, the theory yields natural definitions of dark energy and dark matter and predicts the content of the Universe with high accuracy.
[4851] vixra:1308.0129 [pdf]
Quantum Gravity Galactic Mass Spectrum II Revised Version
A formula, derived from general relativity, is given that can be used to generate a cosmological mass spectrum. The spectrum values come out in kilograms and are determined by two input parameters: the galactic epoch birth time, t_b, and a scale factor, theta_0.
[4852] vixra:1308.0118 [pdf]
Programming Planck Units from a Virtual Electron; a Simulation Hypothesis (Summary)
The Simulation Hypothesis proposes that all of reality, including the earth and the universe, is in fact an artificial simulation, analogous to a computer simulation, and as such our reality is an illusion. In this essay I describe a method for programming mass, length, time and charge (MLTA) as geometrical objects derived from the formula for a virtual electron; $f_e = 4\pi^2r^3$ ($r = 2^6 3 \pi^2 \alpha \Omega^5$), where the inverse fine structure constant $\alpha$ = 137.03599... and $\Omega$ = 2.00713494... are mathematical constants and the MLTA geometries are; M = (1), T = ($2\pi$), L = ($2\pi^2\Omega^2$), A = ($4\pi \Omega)^3/\alpha$. As objects they are independent of any set of units and also of any numbering system, terrestrial or alien. As the geometries are interrelated according to $f_e$, we can replace designations such as ($kg, m, s, A$) with a rule set: mass = $u^{15}$, length = $u^{-13}$, time = $u^{-30}$, ampere = $u^{3}$. The formula $f_e$ is unit-less ($u^0$) and combines these geometries in the ratios M$^9$T$^{11}$/L$^{15}$ and (AL)$^3$/T; as such these ratios are unit-less. To translate MLTA to their respective SI Planck units requires two additional unit-dependent scalars. We may thereby derive the CODATA 2014 physical constants via the 2 (fixed) mathematical constants ($\alpha, \Omega$), the 2 dimensioned scalars, and the rule set $u$. As all constants can be defined geometrically, the least precise constants ($G, h, e, m_e, k_B$...) can also be solved via the most precise ($c, \mu_0, R_\infty, \alpha$), with numerical precision then limited by the precision of the fine structure constant $\alpha$.
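The abstract's rule set can be checked numerically. The sketch below (an illustration using only the values quoted in the abstract, not the author's own program) evaluates $r$ and $f_e$ and verifies that the two quoted combinations of geometries are unit-less under the rule set $u$:

```python
import math

# Constants as quoted in the abstract (alpha here is the inverse fine
# structure constant 137.03599..., as the author uses it).
alpha = 137.03599
Omega = 2.00713494

r   = (2**6) * 3 * math.pi**2 * alpha * Omega**5  # r = 2^6 * 3 * pi^2 * alpha * Omega^5
f_e = 4 * math.pi**2 * r**3                       # f_e = 4 pi^2 r^3

# Rule-set exponents: mass = u^15, length = u^-13, time = u^-30, ampere = u^3.
u = {'M': 15, 'L': -13, 'T': -30, 'A': 3}

# Net u-exponents of the two quoted combinations; both should vanish (u^0).
ratio1 = 9*u['M'] + 11*u['T'] - 15*u['L']  # M^9 T^11 / L^15 -> 135 - 330 + 195
ratio2 = 3*(u['A'] + u['L']) - u['T']      # (AL)^3 / T      -> -30 + 30
print(ratio1, ratio2)  # 0 0
```

Both exponents cancel exactly, consistent with the abstract's claim that $f_e$ combines the geometries in unit-less ratios.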
[4853] vixra:1308.0117 [pdf]
Notoph-Graviton-Photon Coupling
In the sixties Ogievetskii and Polubarinov proposed the concept of the notoph, whose helicity properties are complementary to those of the photon. Later, Kalb and Ramond (and others) developed this theoretical concept, and at present it is widely accepted. We analyze the quantum theory of antisymmetric tensor fields, taking into account the mass dimensions of the notoph and the photon. It appears possible to describe both photon and notoph degrees of freedom on the basis of the modified Bargmann-Wigner formalism for the symmetric second-rank spinor. Next, we proceed to derive equations for the symmetric tensor of the second rank on the basis of the Bargmann-Wigner formalism in a straightforward way. The symmetric multispinor of the fourth rank, constructed out of Dirac 4-spinors, is used. Due to serious problems with the interpretation of the results obtained using the standard procedure, we generalize it and obtain the spin-2 relativistic equations, which are consistent with general relativity. The importance of the 4-vector field (and its gauge part) is pointed out. Thus, we present the full theory which contains the photon, the notoph (the Kalb-Ramond field) and the graviton. The relations of this theory with the higher spin theories are established. In fact, we deduced the gravitational field equations from relativistic quantum mechanics. The relations of this theory with scalar-tensor theories of gravitation and f(R) are discussed. We estimate possible interactions (fermion-notoph, graviton-notoph, photon-notoph) and conclude that they will probably be seen in experiments in the next few years. PACS numbers: 03.65.Pm, 04.50.-h, 11.30.Cp
[4854] vixra:1308.0112 [pdf]
There is not Dark Energy
English (translation): This article explains why measurements yield the result that around $99\%$ of the gravitational energy in the Universe is missing, contrary to the calculations. Spanish (original): Se trata en este artículo de explicar el porqué las medidas ofrecen el resultado de que falta en el Universo alrededor del $99\%$ de energía gravitatoria, en contra de los cálculos.
[4855] vixra:1308.0104 [pdf]
On the Divergence of the Negative Energy Density Equation in both Alcubierre and Natario Warp Drive Spacetimes: No Divergence At All
Warp Drives are solutions of the Einstein Field Equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress energy momentum tensor is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the huge amount of negative energy needed to sustain the warp bubble. In order to perform interstellar space travel to "nearby" stars $20$ light-years away with potentially habitable exo-planets (e.g., Gliese $581$) at superluminal speeds in a reasonable amount of time, a ship must attain a speed of about $200$ times faster than light. However, the negative energy density at such a speed is directly proportional to the factor $10^{48}$, which is $10^{24}$ times bigger in magnitude than the mass of the planet Earth!
Some years ago Hiscock published a work in which the composed mixed tensor $\langle {T_\mu}^\nu\rangle$, obtained from the negative energy density tensor $T_{\mu\nu}$ ($\mu=0,\nu=0$) of the two-dimensional Alcubierre warp drive metric, diverges when the velocity of the ship $vs$ exceeds the speed of light (see pg $2$ in \cite{ref4}). We demonstrate in this work that in fact this does not happen, and the Hiscock result must be re-examined. We introduce here a shape function that defines the Natario warp drive spacetime as an excellent candidate to lower the negative energy density requirements from $10^{48}$ to affordable levels. We also discuss Horizons and Doppler Blueshifts, which affect the Alcubierre spacetime but not the Natario counterpart.
[4856] vixra:1308.0073 [pdf]
Flaws in Black Hole Theory and General Relativity
All alleged black hole models pertain to a universe that is spatially infinite, is eternal, contains only one mass, is not expanding, and is asymptotically flat or asymptotically not flat. But the alleged big bang cosmology pertains to a universe that is spatially finite (one case) or spatially infinite (two different cases), is of finite age, contains radiation and many masses including multiple black holes (some of which are primordial), is expanding, and is not asymptotically anything. Thus the black hole and the big bang contradict one another - they are mutually exclusive. It is surprisingly easy to prove that neither General Relativity nor Newton's theory predicts the black hole. Despite numerous claims for discovery of black holes in their millions, nobody has ever actually found one. It is also not difficult to prove that General Relativity violates the usual conservation of energy and momentum. Fundamentally there are contradictions contained in black hole theory, big bang cosmology, and General Relativity. Numerical methods are therefore to no avail.
[4857] vixra:1308.0071 [pdf]
Zeros Distribution of the Riemann Zeta-Function
Horizontal and vertical distributions of the complex zeros of the Riemann zeta-function in the critical region are found in general form in this paper, on the basis of standard methods of the theory of functions of a complex variable.
[4858] vixra:1308.0058 [pdf]
Super-Clifford Gravity, Higher Spins, Generalized Supergeometry and much more
An $extended$ Orthogonal-Symplectic Clifford Algebraic formalism is developed which allows the novel construction of a graded Clifford gauge field theory of gravity. It has a direct relationship to higher spin gauge fields, bimetric gravity, antisymmetric metrics and biconnections. In one particular case it allows a plausible mechanism to cancel the cosmological constant contribution to the action. The possibility of embedding these Orthogonal-Symplectic Clifford algebras into an infinite dimensional algebra, coined $Super$-Clifford Algebra is described. Finally, some physical applications of the geometry of $Super$-Clifford spaces to Generalized Supergeometries, Double Field Theories, $ U$-duality, $11D$ supergravity, $M$-theory, and $ E_7, E_8, E_{11}$ algebras are outlined.
[4859] vixra:1308.0052 [pdf]
Protium and Antiprotium in Riemannian Dual 4D Space-Time
In this preliminary paper, we apply the Riemannian dual (fractional quantum Hall superfluidic) space-time topology and the six-coloring Gribov vacuum to protium and antiprotium. The results suggest that it may be possible to generalize this framework to all atomic elements. Therefore, this subject warrants further scrutiny, collaboration, refinement, and investigation.
[4860] vixra:1308.0051 [pdf]
Initiating Santilli's Iso-Mathematics to Triplex Numbers, Fractals, and Inopin's Holographic Ring: Preliminary Assessment and New Lemmas
In a preliminary assessment, we begin to apply Santilli's iso-mathematics to triplex numbers, Euclidean triplex space, triplex fractals, and Inopin's 2-sphere holographic ring (HR) topology. In doing so, we successfully identify and define iso-triplex numbers for iso-fractal geometry in a Euclidean iso-triplex space that is iso-metrically equipped with an iso-2-sphere HR topology. As a result, we state a series of lemmas that aim to characterize these emerging iso-mathematical structures. These initial outcomes indicate that it may be feasible to engage this encoding framework to systematically attack a broad range of problems in the disciplines of science and mathematics, but a thorough, rigorous, and collaborative investigation should be in order to challenge, refine, upgrade, and implement these ideas.
[4861] vixra:1308.0050 [pdf]
Effective State, Hawking Radiation and Quasi-Normal Modes for Kerr Black Holes
The non-strictly continuous character of the Hawking radiation spectrum generates a natural correspondence between Hawking radiation and black hole (BH) quasi-normal modes (QNM). In this work, we generalize recent results on this important issue to the framework of Kerr BHs (KBH). We show that for the KBH, QNMs can be naturally interpreted in terms of quantum levels. Thus, the emission or absorption of a particle is in turn interpreted in terms of a transition between two different levels. At the end of the paper, we also generalize some concepts concerning the "effective state" of a KBH.
[4862] vixra:1308.0036 [pdf]
Disposing Classical Field Theory, Part III
It is shown that neutral currents map 1-1 to charged currents and that charge conservation implies mass conservation. This has consequences for quantum theory, quantum field theory, and cosmology, which are explored.
[4863] vixra:1308.0033 [pdf]
Numerical Integration of the Negative Energy Density in the Natario Warp Drive Spacetime Using $3$ Different Natario Shape Functions
Warp Drives are solutions of the Einstein Field Equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress energy momentum tensor is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the huge amount of negative energy needed to sustain the warp bubble. Ford and Pfenning computed the amount of negative energy needed to maintain an Alcubierre warp drive and arrived at the result of $10$ times the mass of the entire Universe for a stable warp drive configuration, rendering the warp drive impossible. We introduce here $3$ new shape functions that define the Natario warp drive spacetime, and we choose one of these functions as an excellent candidate to lower the negative energy density requirements to affordable levels. We demonstrate in this work that both the Alcubierre and Natario warp drives have two warped regions, not only one: one of these regions is associated with geometry and the other with the negative energy requirements. We also discuss Horizons and Doppler Blueshifts, which affect the Alcubierre spacetime but not the Natario counterpart.
[4864] vixra:1308.0026 [pdf]
The Structurization of a Set of Positive Integers and Its Application to the Solution of the Twin Primes Problem
One of the reasons why the Twin Primes problem remained unsolved for so long is that pairs of Twin Primes (PTP) are considered separately from other pairs of Twin Numbers (PTN). The purpose of this work is to study the connections between the different types of PTN. To this end the author developed the "Arithmetic of Pairs of Twin Numbers" (APTN), in which three types of PTN are defined. As shown in the APTN, all types of PTN are connected with each other by relations which represent the distribution of prime and composite positive integers less than $n$ among them. On the basis of these relations (the axioms of the APTN), formulas are deduced for computing the number of PTN (NPTN) of each type. The APTN also defines and computes the average number of pairs formed from odd prime and composite positive integers $< n$: AVNPP for primes and AVNPC for composites. We also deduce formulas for computing the deviation of the NPTN from the AVNPP and AVNPC. It is shown that as $n$ goes to infinity the NPTN tends to the AVNPC or AVNPP respectively, which permits applying the formulas for the AVNPP and AVNPC to the computation of the NPTN. Finally, a proof of the Twin Primes problem is produced with the help of the APTN: it is shown that as $n$ goes to infinity, the NPTP goes to infinity as well.
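The counts discussed above can be checked numerically for small $n$; a minimal sketch using a plain sieve (not the author's APTN formalism; the function name is ours):

```python
def twin_prime_pairs(n):
    """Count pairs (p, p+2) with both entries prime and below n, via a sieve."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    # p runs over odd candidates; p + 2 stays below n
    return sum(1 for p in range(3, n - 2) if sieve[p] and sieve[p + 2])
```

For example, `twin_prime_pairs(100)` returns 8, corresponding to the pairs (3,5) through (71,73).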
[4865] vixra:1307.0166 [pdf]
Cosmological Observations as a Hidden Key to Quantum Gravity
Some important consequences of the author's model of low-energy quantum gravity are described, which make it possible to re-interpret such cosmological observations as the redshifts of remote objects and the dimming of Supernovae 1a without any expansion of the Universe and without dark energy, but as manifestations of quantum gravity.
[4866] vixra:1307.0150 [pdf]
Geometric Analysis of Grover's Search Algorithm in the Presence of Perturbation
For an initial uniform superposition over all possible computational basis states, we explore the performance of Grover's search algorithm geometrically when imposing a perturbation on the Walsh-Hadamard transformation contained in the Grover iteration. We give the geometric picture to visualize the quantum search process in the three-dimensional space and show that Grover's search algorithm can work well with an appropriately chosen perturbation. Thereby we corroborate Grover's conclusion that if such perturbation is small, then this will not create much of an impact on the implementation of this algorithm. We also prove that Grover's path cannot achieve a geodesic in the presence of a perturbation of the Fubini-Study metric.
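In the unperturbed case, the geometric picture described above reduces to a rotation in the plane spanned by the target and non-target components: after $k$ iterations the success probability is $\sin^2((2k+1)\theta)$ with $\sin\theta=\sqrt{M/N}$. A minimal sketch of this standard result (not the paper's perturbation analysis; function names are ours):

```python
import math

def grover_success_probability(N, M, k):
    """Probability of measuring a target after k Grover iterations
    on N items with M targets (ideal, unperturbed Walsh-Hadamard)."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * k + 1) * theta) ** 2

def optimal_iterations(N, M):
    """Integer nearest the probability maximum, roughly (pi/4)*sqrt(N/M)."""
    theta = math.asin(math.sqrt(M / N))
    return round(math.pi / (4 * theta) - 0.5)

# e.g. N = 1024, M = 1: about 25 iterations bring the success
# probability very close to 1.
```

A small perturbation of the diffusion operator shifts $\theta$ only slightly, which is the geometric reason the algorithm remains robust, as the abstract argues.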
[4867] vixra:1307.0132 [pdf]
A New Picture In The Particle under The Frame of Three Dimensional Einstein Theory
By analogy with Special Relativity, it is shown that there is a fundamental length in three-dimensional space, in which the time dimension becomes a constant and the remaining coordinates satisfy the same mathematical structure as the Lorentz transformation. According to the hypothesis, the energy $\omega_i$ corresponds to the coordinate $x_i$, and the force $\emph{F}$ is analogous to the coordinate $\emph{t}$. Proceeding in this way, there is a natural way to explain quark confinement. Finally, this theory is extended to the curved situation, where the origin of the cosmological constant $\Lambda$ is connected with the three-dimensional curvature. In addition, a relationship can be constructed between thermodynamics and statistical mechanics (TSM) and three-dimensional Einstein theory. Under the correspondence of three-dimensional cosmology, a new physical meaning of the revolutionary factor R can be found.
[4868] vixra:1307.0126 [pdf]
Quantum Search in a Four-Complex-Dimensional Subspace
For $M > 1$ target items to be searched in an unsorted database of size $N$, with $M/N\ll 1$ for sufficiently large $N$, we explore the performance of Grover's search algorithm when considering some possible situations that may arise in a four-complex-dimensional subspace. For the case of identical rotation angles $\phi=\theta$, we give the maximum success probabilities of finding a desired state and their corresponding numbers of Grover iterations in an approximate fashion. Our analysis reveals that the case of identical rotation angles $\phi=\theta$ is energetically favorable compared to the case $\left| {\theta - \phi } \right|\gg 0$ for boosting the probability of detecting a desired state.
[4869] vixra:1307.0086 [pdf]
A Crazy It From A Misleading Bit: How A Zero-Referenced Fundamental Theorem of Calculus Loses Information And May Be Misleading Mathematical Physics
Imagine for a moment an endless diamond, completely solid and pristine. No flaws or gaps in the structure. Suppose at some point in the density of the material something strange occurs, in that a deformation appears out of nowhere and splits into two waves. Should these two waves interact again with each other, the deformation disappears. However, should they separate enough, then one of the waves, which we shall call the baryon wave, is stable by itself. This wave has the strange property that it is a traveling decrease in density of the diamond. There is no "other" material, only the decrease in something we shall have to think of as the vacuum (energy) density. The wave apparently has the ability to pass through or combine into structures with other baryon waves, but has no ability to disappear back into a pristine diamond if there is no second deformation wave present, which we do not cover in this essay. Suppose that a certain combination of these first waves were to become sentient. Would they be able to detect that they are a moving wave, or would their perceptions lead them to a misunderstanding of how these waves affect the very substance that they are traveling within, and so not realize there exists another class of solutions for tensors (scalars, vectors and so on)? Is there a way to determine that the actual density is a fairly good model for their universe? If so, what mathematics would be required in order to describe it much better than other models which the sentient waves have based on their physical perceptions? It is due to this question that we present our proposed answer of a modification of calculus. In order to fully describe the baryon wave, the dimensions they create with their presence, the limited-radius distortions they cause, and even the substance itself, we must re-evaluate our understanding of calculus in order to model them as the derivatives of finite Action area integrals.
We propose that in order to understand how the universe stores information, we must have a foundational basis for these areas, and in order to understand how it processes information we must ensure that we have within the literature all classes of its derivatives (directional derivatives, divergence, etc.). We do not go into details in this essay, but our proposed future path is to accomplish this via a modification of Gunnar Nordstroem's gravitational theory, an early competing model to General Relativity worked on by Nordstroem, Einstein and Fokker (see [1] for a recent review). This model was discarded by Einstein and others since it did not predict gravitational lensing, a problem which our modification would seem to have the possibility of remedying (see final assertions). Therefore in this essay we introduce our different viewpoint of calculus, named "Area" Calculus, in order to distinguish the concept from the mainstream variety, which we will refer to as "Single Function" Calculus.
[4870] vixra:1307.0075 [pdf]
The Truth About Geometric Unity
In May of 2013 a pair of articles appeared on the Guardian newspaper website featuring a new candidate "theory of everything" called Geometric Unity. A cursory reading of each article gives the impression that Geometric Unity was developed by Eric Weinstein, but a closer reading reveals that Weinstein is not credited as such. The truth about Geometric Unity is that it was authored by this writer in several papers beginning in 2009. This article will describe the development and prominent features of the new theory.
[4871] vixra:1307.0070 [pdf]
The Analysis of Harold White Applied to the Natario Warp Drive Spacetime. From $10$ Times the Mass of the Universe to Arbitrary Low Levels of Negative Energy Density Using a Continuous Natario Shape Function with Power Factors.
Warp Drives are solutions of the Einstein Field Equations that allow superluminal travel within the framework of General Relativity. There are at present two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress energy momentum tensor is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the huge amount of negative energy needed to sustain the warp bubble. Ford and Pfenning computed the amount of negative energy needed to maintain an Alcubierre warp drive and arrived at the result of $10$ times the mass of the entire Universe for a stable warp drive configuration, rendering the warp drive impossible. However, Harold White, by manipulating the parameter $@$ in the original shape function that defines the Alcubierre spacetime, demonstrated that it is possible to lower these energy density requirements. We repeat here the Harold White analysis for the Natario spacetime and arrive at similar conclusions. Starting from $10$ times the mass of the Universe, we likewise manipulated the parameter $@$ in the original shape function that defines the Natario spacetime and arrived at arbitrarily low results. We demonstrate in this work that both the Alcubierre and Natario warp drives have two warped regions, not only one. We also discuss Horizons and Doppler Blueshifts. The main purpose of this work is to demonstrate that Harold White's point of view is entirely correct.
[4872] vixra:1307.0057 [pdf]
Conversion Spacetime in Energy
English (translation): We try to show here that what we usually call Energy is a change in the passage of time, or, equivalently, a Uniformly Accelerated Motion. Both diminish space-time, which is converted into energy. This phenomenon, besides being reversible, can be realized very efficiently by an electronic device similar to a vacuum tube. Some possible dimensions of such a device are analyzed here, though without electron optics, together with the influence of electromagnetic radiation losses and the maximum deviation for an eccentric electron. One of the appendices also shows how the phenomenon of time dilation (the redshift measured in the Sun [1]) can be extended to electrically charged bodies, whose values agree with the calculations based on the difference in the passage of time. Spanish (original): Se trata de mostrar aquí que lo que denominamos habitualmente como Energía es un cambio de transcurso de tiempo, o, también, un Movimiento Uniformemente Acelerado. Ambos merman el espacio-tiempo para convertirse en energía. Este fenómeno, aparte de ser reversible, es posible hacerlo muy eficientemente mediante un dispositivo electrónico similar a un tubo de vacío. Se analizan aquí algunas posibles dimensiones de tal dispositivo, aunque sin óptica electrónica, y la influencia de pérdidas por radiación electromagnética así como la desviación máxima para un electrón excéntrico. En uno de los Apéndices se muestra además como es posible extender el fenómeno de dilatación temporal (corrimiento al rojo medido en el Sol [1]) para cuerpos con carga eléctrica. Y cuyos valores coinciden con los cálculos basados en la diferencia del transcurso del tiempo.
[4873] vixra:1307.0044 [pdf]
Implicaciones Fisicas de la Eleccion de Fases y Base de Helicidad en las Ecuaciones de la Mecanica Cuantica Relativista
In this work it is shown that the bispinor eigenstates imply different physical phenomena in relativistic quantum mechanics if they are constructed with a relative phase factor introduced between the 2-spinors. This fact is in contrast with the non-relativistic theory; thus, the phase factor matters. The consequences of the choice of basis are also studied. The basis is understood as the linearly independent system of field functions which generate the spinor space. I choose this system in such a way that the functions are eigenstates not of S_z but of the helicity operator; I call this the helicity basis. The representations (1/2,0)+(0,1/2) and (1,0)+(0,1) are considered. As a result, the corresponding state functions have different behaviour with respect to parity and charge conjugation compared with the usual basis.
[4874] vixra:1307.0035 [pdf]
A Modification of the Cosmological Friedmann-Robertson-Walker Metric
In this paper we first present an explicit dynamical equation which satisfies the general principle of relativity under the framework of classical mechanics. In light of this fact, the necessity of Einstein's equivalence principle for geometrizing gravity should be reexamined. In particular, Einstein's (strong) equivalence principle claims that the inertial force is equivalent to the gravitational force in its physical effect. But in fact the new dynamical equation shows that the essence of the inertial force is the real force exerted on the reference object, which can actually be any kind of force, such as the gravitational force, the electromagnetic force and so on. Therefore, in this context we only retain the numerical equality between the inertial mass and the gravitational mass and abandon Einstein's (strong) equivalence principle. Consequently, the candidate for the standard clock should be corrected to the mathematical clock which duplicates the real clock carried by the observer himself. An adjusted physical picture of how to convert the gravitational force into a geometric description of space-time is then presented. On the other hand, we point out that all cosmological observations are made by the observer at the present time on the Earth, instead of by any other observers, including the comoving observers in the earlier universe. On this basis, we introduce an extra factor $b(t)$ into the $FRW$ cosmological metric to depict the gravitational time dilation effect, since the local proper clock may run at a faster and faster rate as the universe expands. In this way, we may obtain a positive value of $\rho+3p$ and avoid the introduction of dark energy in the current universe.
[4875] vixra:1307.0034 [pdf]
Energy Gap of Superconductor Close to Tc Without CC+
We derive the energy gap of a superconductor close to Tc without using the usual method of creation-annihilation operators CC+. Our approximations are in good agreement with numerical estimates and theoretical results.
[4876] vixra:1307.0033 [pdf]
Riemann's R-Function and the Distribution of Primes
Riemann's R-function is shown to alternately under- and over-estimate the number of primes in the intervals defined by the Fibonacci numbers, specifically from the interval [55,89] to the interval [317811,514229].
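R(x) can be evaluated through the Gram series $R(x)=1+\sum_{n\ge 1} (\ln x)^n/(n\, n!\, \zeta(n+1))$, which makes comparisons with the exact prime count easy to reproduce; a minimal sketch in standard-library Python (function names are ours):

```python
import math

def zeta(s, K=1000):
    """Riemann zeta for real s > 1: direct sum plus Euler-Maclaurin tail."""
    return sum(k ** -s for k in range(1, K)) + K ** (1 - s) / (s - 1) + 0.5 * K ** -s

def riemann_R(x, nmax=80):
    """Gram series: R(x) = 1 + sum_{n>=1} (ln x)^n / (n * n! * zeta(n+1))."""
    lx, total, fact = math.log(x), 1.0, 1.0
    for n in range(1, nmax):
        fact *= n  # fact == n!
        total += lx ** n / (n * fact * zeta(n + 1))
    return total

def primepi(x):
    """Exact prime-counting function pi(x) via a sieve."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)
```

Comparing `riemann_R(x) - primepi(x)` over the Fibonacci intervals named in the abstract reproduces the sign pattern the paper studies.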
[4877] vixra:1307.0024 [pdf]
Modified Saint-Venant’s Principle With Example
The statement of the Modified Saint-Venant’s Principle is suggested. The axisymmetric deformation of an infinite circular cylinder loaded by an equilibrium system of forces on its near end is discussed, and its formulation in terms of the Modified Saint-Venant’s Principle is established. It is evident that finding solutions of boundary-value problems is a precise and pertinent approach to establishing Saint-Venant-type decay for elastic problems.
[4878] vixra:1307.0004 [pdf]
Galactic Classification
There are two types of fundamental quantum gravitational mass amplitude states, denoted by the subscripts D and P. The D amplitudes lead to Einstein's usual general relativity mass density functions. The P amplitudes lead to Einstein's additional pressure mass densities, 3P/c<sup>2</sup>. Both of these densities appear in the stress energy momentum tensor of general relativity. Here they appear as solutions to a non-linear Schrödinger equation and carry three quantising parameters, (l<sub>D</sub>,m) and (l<sub>P</sub>,m). The l<sub>D</sub>,l<sub>P</sub> values are subsets of the usual electronic quantum variable l, which is here denoted by l' to avoid confusion. The m parameter is exactly the same as the electronic quantum theory m, there the z-component of angular momentum. In this paper, these parametric relations are briefly displayed, followed by an account of the connection to the spherical harmonic functions symmetry system that is necessarily involved. Taken together, the two types of mass density can be integrated over configuration space to give quantised general relativity galactic masses in the form of cosmological mass spectra, as was shown in previous papers. Here this aspect has been extended to ensure that every galaxy component of the spectra has a quantised black hole core with a consequent quantised surface area. This is achieved by replacing the original free core radius parameter r<sub>ε</sub> with the appropriate Schwarzschild radius associated with the core mass. Explanations are given for the choices of two further, originally free, parameters, t<sub>b</sub> and θ<sub>0</sub>. The main result of this paper is a quantum classification scheme for galaxies determined by the form of their dark matter spherical geometry.
[4879] vixra:1306.0236 [pdf]
On the Real Representations of the Poincare Group
The formulation of quantum mechanics with a complex Hilbert space is equivalent to a formulation with a real Hilbert space and particular density matrix and observables. We study the real representations of the Poincare group, motivated by the fact that the localization of complex unitary representations of the Poincare group is incompatible with causality, Poincare covariance and energy positivity. We review the map from the complex to the real irreducible representations, finite-dimensional or unitary, of a Lie group on a Hilbert space. Then we show that all the finite-dimensional real representations of the identity component of the Lorentz group are also representations of the parity, in contrast with many complex representations. We show that any localizable unitary representation of the Poincare group, compatible with Poincare covariance, verifies: 1) it is self-conjugate (regardless of whether it is real or complex); 2) it is a direct sum of irreducible representations which are massive or massless with discrete helicity; 3) it respects causality; 4) it is an irreducible representation of the Poincare group (including parity) if and only if it is a) real and b) massive with spin 1/2 or massless with helicity 1/2. Finally, the energy positivity problem is discussed in a many-particles context.
[4880] vixra:1306.0233 [pdf]
The Projective Line as a Meridian
We describe 1-dimensional projective space in terms of the cross ratio: in one-dimensional geometry as a projective line, in two-dimensional geometry as a circle, and in three-dimensional geometry as a regulus. A characterization of projective 3-space is given in terms of polarity. This paper differs from the original version by the addition of a section showing that the circle is distinguished from other meridians by its compactness and the existence of exponential functions.
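The cross ratio mentioned above is the basic invariant of four collinear points under projective transformations of the line; a minimal numeric sketch (function names are ours):

```python
from fractions import Fraction

def cross_ratio(a, b, c, d):
    """Cross ratio (a, b; c, d) = ((a-c)(b-d)) / ((a-d)(b-c))."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def mobius(z, m):
    """Projective (Moebius) map z -> (p*z + q) / (r*z + s) for m = (p, q, r, s)."""
    p, q, r, s = m
    return (p * z + q) / (r * z + s)

# The cross ratio is unchanged by any projective transformation of the line,
# which exact rational arithmetic makes easy to verify.
```

For instance, the cross ratio of the points 0, 1, 2, 5 is 8/5, and it remains 8/5 after any invertible Moebius map is applied to all four points.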
[4881] vixra:1306.0232 [pdf]
Chisholm-Caianiello-Fubini Identities for S=1 Barut-Muzinich-Williams Matrices
The formulae for the relativistic products of the S=1 Barut-Muzinich-Williams matrices are found. They are analogs of the well-known Chisholm-Caianiello-Fubini identities. The obtained results can be useful in higher-order calculations of high-energy processes with S=1 particles in the framework of the 2(2S+1) Weinberg formalism, which has recently attracted attention again. PACS numbers: 02.90.+p, 11.90.+t, 12.20.Ds
[4882] vixra:1306.0231 [pdf]
Self/anti-Self Charge Conjugate States in the Helicity Basis
We construct self/anti-self charge conjugate (Majorana-like) states for the (1/2,0)+(0,1/2) representation of the Lorentz group, and their analogs for higher spins within quantum field theory. The problem of the basis rotations and that of the selection of phases in the Dirac-like and Majorana-like field operators are considered. The discrete-symmetry properties (P, C, T) are studied. Particular attention has been paid to the question of the (anti)commutation of the charge conjugation operator and the parity operator in the helicity basis. Dynamical equations have also been presented. In the (1/2,0)+(0,1/2) representation they obey the Dirac-like equation with eight components, which was first introduced by Markov. Thus, the Fock space for the corresponding quantum fields is doubled (as shown by Ziino). The chirality and the helicity (two concepts which are frequently confused in the literature) for Dirac and Majorana states have been discussed.
[4883] vixra:1306.0229 [pdf]
It, Bit, Object, Background
In recent years, the notion that information may be the basis for reality, rather than the other way around, has become more popular. Here we consider the issue within the context of a general relation between the role of physical objects against the background in shaping the pattern of distinctions that can then be translated into information. It is found that from this perspective, in classical physics substance is more fundamental than information, while in general relativity they are on an equal footing. Quantum superposition and collapse, on the other hand, introduce new considerations. A foundational principle is introduced to give an explanation for quantum superposition, and from this principle it becomes evident that to the extent that one frames the nature of quantum objects in terms of this dichotomy, in quantum theory information is more fundamental. This implies that the description of quantum objects in a superposition is dependent on features of the background, as these features set boundary conditions on such manifestations. Thus, if this principle really does underlie quantum mechanics, it means that the term "background independent quantum theory" has to be considered a contradiction, which has implications for the search for a quantum theory of gravity.
[4884] vixra:1306.0206 [pdf]
On the Gravitational Mass
The paper presents a natural definition of gravitational mass without invoking new entities. The approach suggested expands the application field of the law of gravitational interaction between material objects located in a gravitating medium. The paper demonstrates the existence of a functional relationship between the inertial and gravitational masses, which has brought us to a conclusion that the condition under which the inertial and gravitational masses would be equal is rather speculative and not practically realizable. We give here real examples of existence of the negative gravitational mass, as well as natural cases of the gravitational repulsion. Some cases of gravitational dipole as a physical object existing in our natural environment are also presented.
[4885] vixra:1306.0185 [pdf]
Unveiling the Conflict of the Speed of Light Postulate: Outlined Mathematical Refutation of the Special Relativity
This paper reveals the mathematical contradictory aspects of Einstein’s speed of light postulate and the Lorentz transformation (LT) equations. Essential analyses of the equations, leading to the intelligible refutation of the mathematical foundation of the Special Relativity Theory (SRT), are emphasized in an outlined structure.
[4886] vixra:1306.0179 [pdf]
Speakable and Unspeakable in Special Relativity. I. Synchronization and Clock Rhythms.
The traditional presentation of special relativity is made from a rupture with previous ideas, such as the notion of absolute motion, emphasizing the antagonism between the Lorentz-Poincaré views and Einstein's ideas. However, a weaker formulation of the postulates allows one to recover all the mathematical results of Einstein's special relativity and reveals that both viewpoints are merely different perspectives of one and the same theory. The apparent contradiction simply stems from different procedures for clock "synchronization," associated with different choices of the coordinates used to describe the physical world. Even very fundamental claims, such as the constancy of the speed of light, relativity of simultaneity and relativity of time dilation, are seen to be no more than a consequence of a misleading language adopted in the description of physical reality, which confuses clock rhythms with clock time readings. Indeed, the latter depend on the "synchronization" adopted, whereas the former do not. As such, these supposedly fundamental claims are not essential aspects of the theory, as reality is not altered by a mere change of coordinates. The relation between the rhythms of clocks in relative motion is derived in full generality. This relation, which is not the standard textbook expression, markedly exposes the indeterminacy of special relativity, connected with the lack of knowledge of the value of the one-way speed of light. Moreover, the theory does not collapse and remains valid if some day the one-way speed of light is truly measured and the indeterminacy is removed. It is further shown that the slow transport method of "synchronization" cannot be seen as distinct from Einstein's procedure.
[4887] vixra:1306.0172 [pdf]
Crystal Cell and Space Lattice Symmetries in Clifford Geometric Algebra
The structure of crystal cells in two and three dimensions is fundamental for many material properties. In two dimensions atoms (or molecules) often group together in triangles, squares and hexagons (regular polygons). Crystal cells in three dimensions have triclinic, monoclinic, orthorhombic, hexagonal, rhombohedral, tetragonal and cubic shapes. The geometric symmetry of a crystal manifests itself in its physical properties, reducing the number of independent components of a physical property tensor, or forcing some components to zero values. There is therefore an important need to efficiently analyze the crystal cell symmetries. Mathematics based on geometry itself offers the best descriptions. Especially if elementary concepts like the relative directions of vectors are fully encoded in the geometric multiplication of vectors.
[4888] vixra:1306.0170 [pdf]
It And Bit
It is broadly believed that everything in the universe is made from a few basic building blocks called fundamental particles, governed by four fundamental forces. However, physicists such as John Archibald Wheeler suggested that information is fundamental to the physics of the universe. According to this it from bit doctrine, all things physical are information-theoretic in origin. This doctrine is based on the old Copenhagen interpretation of quantum mechanics, an interpretation which is internally inconsistent and not applicable to the cosmos as a whole. Modern consistent interpretations of quantum mechanics eliminate the old myths about measurement processes and observers' consciousness and reintroduce the idea of a wholly physical reality, invalidating the it from bit doctrine. Utilizing a new phase space formulation of quantum mechanics developed recently by the author, the concepts of bit and it are reconsidered. We introduce the new states D as quantum bits and the new Hamiltonians H as quantum its. The new concepts of it and bit introduced in this work have a well-defined and rigorous definition, unlike Wheeler's concepts. Moreover, the new concepts apply in situations where the traditional wavefunction theory does not work. The it H is not derivable from the bit D and, as a consequence, the old it from bit doctrine gets substituted by the new it and bit. After showing why the physical entropy used in the science of thermodynamics is not a measure of the ignorance of human observers, the final part of this Essay is devoted to emphasizing the importance that the bit acquires in modern science when confronted with the delicious multiplicity of the far-from-equilibrium regimes, where the certainty of Newtonian and Schrödinger motion begins to fade in favour of a complex non-geometrical, 'living', conception of Nature: an it and bit conception.
[4889] vixra:1306.0165 [pdf]
Experimental Evidence for a Non-Globally Trace-Preserving POVM
A well-known experiment from 1986 involving entangled pairs is examined. The data, which until now have not been modeled quantitatively, are shown not to be in agreement with the quantum measurement postulate using von Neumann projectors. On the other hand, the data agree with the postulate using a more general positive operator valued measure (POVM). The peculiarity of the POVM proposed here is that it is only conditionally a POVM; i.e. it is not complete (trace-preserving) on the entire Hilbert space but only on a subset, although the POVM elements are positive semidefinite observables on the entire space. The state vector of the aforementioned experiment is in the subset where completeness holds. An extension of the conditional POVM is then applied to a proposed experiment involving three-particle Greenberger-Horne-Zeilinger (GHZ) entangled states. As with the Aspect experiment, completeness holds for the conditional POVM upon application to the GHZ state. Violation of the Bell inequality in the GHZ experiment does not occur upon application of von Neumann projectors; however, the conditional POVM allows for Bell inequality violation.
[4890] vixra:1306.0158 [pdf]
Interactive 3D Space Group Visualization with CLUCalc and the Clifford Geometric Algebra Description of Space Groups
A new interactive software tool is described that visualizes 3D space group symmetries. The software computes with Clifford (geometric) algebra. The space group visualizer (SGV) originated as a script for the open-source visualization software CLUCalc, which fully supports geometric algebra computation. Selected generators (Hestenes and Holt, JMP, 2007) form a multivector generator basis of each space group. The approach corresponds to an algebraic implementation of groups generated by reflections (Coxeter and Moser, 4th ed., 1980). The basic operation is the reflection. Two reflections at non-parallel planes yield a rotation, two reflections at parallel planes a translation, etc. Combination of reflections corresponds to the geometric product of vectors describing the individual reflection planes. We first give some insights into the Clifford geometric algebra description of space groups. We relate the choice of symmetry vectors and the origin of cells in the geometric algebra description and its implementation in the SGV to the conventional crystal cell choices in the International Tables of Crystallography (T. Hahn, Springer, 2005). Finally we briefly explain how to use the SGV, beginning with space group selection. The interactive computer graphics can be used to fully understand how reflections combine to generate all 230 three-dimensional space groups. <b>Mathematics Subject Classification (2000).</b> Primary 20H15; Secondary 15A66, 74N05, 76M27, 20F55 . <b>Keywords.</b> Clifford geometric algebra, interactive software, space groups, crystallography, visualization.
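The statement that two reflections in non-parallel planes compose to a rotation by twice the angle between them can be checked with the plain vector reflection formula $v' = v - 2(v\cdot n)n$, a simplified stand-in for the Clifford versor product used by the SGV (function names are ours):

```python
import math

def reflect(v, n):
    """Reflect vector v in the plane through the origin with unit normal n."""
    d = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2 * d * ni for vi, ni in zip(v, n))

def normal(angle):
    """Unit normal (in the xy-plane) of a mirror plane containing the z-axis."""
    return (math.cos(angle), math.sin(angle), 0.0)

# Reflecting first in the plane with normal(0), then in the plane with
# normal(phi), rotates xy-plane vectors by 2*phi about the z-axis.
```

Two mirrors at 30 degrees therefore generate a 60-degree rotation, which is how a small set of reflection generators builds up a full space group.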
[4891] vixra:1306.0156 [pdf]
Interactive 3D Space Group Visualization with CLUCalc and Crystallographic Subperiodic Groups in Geometric Algebra
The Space Group Visualizer (SGV) for all 230 3D space groups is a standalone PC application based on the visualization software CLUCalc. We first explain the unique geometric algebra structure behind the SGV. In the second part we review the main features of the SGV: the GUI, group and symmetry selection, mouse pointer interactivity, and visualization options. We further introduce its joint use with the International Tables of Crystallography, Vol. A [7]. In the third part we explain how to represent the 162 so-called subperiodic groups of crystallography in geometric algebra. We construct a new compact geometric algebra group representation symbol, which allows one to read off the complete set of geometric algebra generators. For clarity we moreover state explicitly which generators are chosen. The group symbols are based on the representation of point groups in geometric algebra by versors.
[4892] vixra:1306.0155 [pdf]
Visualization of Fundamental Symmetries in Nature
Most matter in nature and technology is composed of crystals and crystal grains. A full understanding of the inherent symmetry is vital. A new interactive software tool that visualizes 3D space group symmetries is demonstrated. The software computes with Clifford (geometric) algebra. The space group visualizer (SGV) is a script for the open-source visualization software CLUCalc, which fully supports geometric algebra computation. In our presentation we will first give some insights into the geometric algebra description of space groups. The symmetry generation data are stored in an XML file, which is read by a special CLUScript in order to generate the visualization. Then we will use the Space Group Visualizer to demonstrate space group selection and give a short interactive computer graphics presentation on how reflections combine to generate all 230 three-dimensional space groups.
[4893] vixra:1306.0145 [pdf]
Algorithm for Conversion Between Geometric Algebra Versor Notation and Conventional Crystallographic Symmetry-Operation Symbols
This paper establishes an algorithm for the conversion of conformal geometric algebra (GA) [3, 4] versor symbols of space group symmetry-operations [6–8, 10] to standard symmetry-operation symbols of crystallography [5]. The algorithm is written in the mathematical language of geometric algebra [2–4], but it takes up basic algorithmic ideas from [1]. The geometric algebra treatment simplifies the algorithm, due to the seamless use of the geometric product for operations like intersection, projection, rejection; and the compact conformal versor notation for all symmetry operations and for geometric elements like lines and planes. The transformations between the set of three geometric symmetry vectors <i>a,b,c</i>, used for generating multivector versors, and the set of three conventional crystal cell vectors <b>a,b,c</b> of [5] have already been fully specified in [8], complete with origin shift vectors. In order to apply the algorithm described in the present work, all locations, axis vectors and trace vectors must be computed and oriented with respect to the conventional crystal cell, i.e. its origin and its three cell vectors.
[4894] vixra:1306.0144 [pdf]
Physical-Layer Encryption on the Public Internet: a Stochastic Approach to the Kish-Sethuraman Cipher
While information-theoretic security is often associated with the one-time pad and quantum key distribution, noisy transport media leave room for classical techniques and even covert operation. Transit times across the public internet exhibit a degree of randomness, and cannot be determined noiselessly by an eavesdropper. We demonstrate the use of these measurements for information-theoretically secure communication over the public internet.
[4895] vixra:1306.0142 [pdf]
Arguments and Model for Quantum Consciousness, Modification of Quantum Collapse, and Panpsychism
First, a mechanism by which quantum coherence in the brain can last long enough is shown. This mechanism is based on very light elementary particles. Arguments then follow as to why consciousness should be a quantum phenomenon and how such an introduction of quantum consciousness modifies the formalism of quantum mechanics. This can also be tested by an experiment. Without the use of quantum mechanics it is shown how to atomize consciousness and how to explain the Libet experiment, and why the location of the feeling of consciousness is an important paradox. It is also shown that panpsychism is an answer to many questions about consciousness. The author claims that consciousness is physically so fundamental that it is not a result of some complex phenomena, but is as fundamental as quantum physics and space-time.
[4896] vixra:1306.0139 [pdf]
Fitzgerald-Lorentz Contraction: Real or Apparent
After a summary introduction to Fitzgerald-Lorentz contraction, and a short revision of some classic and modern opinions on its real or apparent nature, this paper introduces two arguments proving that Fitzgerald-Lorentz contraction can only be apparent. The first of them also proves that the deformed appearance disagrees with certain physical laws, pointing to a breaking of Lorentz symmetry that questions the Principle of Relativity. Being also consequences of the Lorentz transformation, time dilation and phase difference in synchronization could only be apparent deformations, which opens the debate on the physical meaning of the Lorentz transformation.
[4897] vixra:1306.0135 [pdf]
Interactive Visualization of Plane Space Groups with the Space Group Visualizer
This set of instructions shows how to successfully display the 17 two-dimensional (2D) space groups in the interactive crystal symmetry software Space Group Visualizer (SGV) [6]. The SGV is described in [4]. It is based on a new type of powerful geometric algebra visualization platform [5]. The principle is to select in the SGV a three-dimensional super space group and by orthogonal projection produce a view of the desired plane 2D space group. The choice of 3D super space group is summarized in the lookup table Table 1. The direction of view for the orthographic projection needs to be adapted only for displaying the plane 2D space groups Nos. 3, 4 and 5. In all other cases space group selection followed by orthographic projection immediately displays one cell of the desired plane 2D space group. The full symmetry selection, interactivity and animation features for 3D space groups offered by the SGV software thus also become available for plane 2D space groups. A special advantage of this visualization method is that, by canceling the orthographic projection (remove the tick mark of Orthographic View in the drop-down menu Visualization), every plane 2D space group is seen to be a subgroup of a corresponding 3D super space group.
[4898] vixra:1306.0133 [pdf]
Tutorial on Fourier Transformations and Wavelet Transformations in Clifford Geometric Algebra
First, the basic concepts of multivector functions and their vector derivative in geometric algebra (GA) are introduced. Second, beginning with the Fourier transform on a scalar function we generalize to a real Fourier transform on GA multivector-valued functions (f : R^3 -> Cl(3,0)). Third, we show a set of important properties of the Clifford Fourier transform (CFT) on Cl(3,0) such as differentiation properties, and the Plancherel theorem. We round off the treatment of the CFT (at the end of this tutorial) by applying the Clifford Fourier transform properties for proving an uncertainty principle for Cl(3,0) multivector functions. For wavelets in GA it is shown how continuous Clifford Cl(3,0)-valued admissible wavelets can be constructed using the similitude group SIM(3), a subgroup of the affine group of R^3. We express the admissibility condition in terms of the CFT and then derive a set of important properties such as dilation, translation and rotation covariance, a reproducing kernel, and show how to invert the Clifford wavelet transform of multivector functions. We explain (at the end of this tutorial) a generalized Clifford wavelet uncertainty principle. For a scalar admissibility constant it sets bounds of accuracy in multivector wavelet signal and image processing. As a concrete example we introduce multivector Clifford Gabor wavelets, and describe important properties such as the Clifford Gabor transform isometry, a reconstruction formula, and (at the end of this tutorial) an uncertainty principle for Clifford Gabor wavelets. Keywords: vector derivative, multivector-valued function, Clifford (geometric) algebra, Clifford Fourier transform, uncertainty principle, similitude group, geometric algebra wavelet transform, geometric algebra Gabor wavelets.
[4899] vixra:1306.0130 [pdf]
The Clifford Fourier Transform in Real Clifford Algebras
We use the recent comprehensive research [17, 19] on the manifolds of square roots of -1 in real Clifford’s geometric algebras Cl(p,q) in order to construct the Clifford Fourier transform. Basically in the kernel of the complex Fourier transform the imaginary unit j in C (complex numbers) is replaced by a square root of -1 in Cl(p,q). The Clifford Fourier transform (CFT) thus obtained generalizes previously known and applied CFTs [9, 13, 14], which replaced j in C only by blades (usually pseudoscalars) squaring to -1. A major advantage of real Clifford algebra CFTs is their completely real geometric interpretation. We study (left and right) linearity of the CFT for constant multivector coefficients in Cl(p,q), translation (x-shift) and modulation (w-shift) properties, and signal dilations. We show an inversion theorem. We establish the CFT of vector differentials, partial derivatives, vector derivatives and spatial moments of the signal. We also derive Plancherel and Parseval identities as well as a general convolution theorem. Keywords: Clifford Fourier transform, Clifford algebra, signal processing, square roots of -1.
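For the lowest-dimensional case the kernel substitution described in this abstract can be sketched concretely: in Cl(2,0) the bivector e12 squares to -1 and the even subalgebra is isomorphic to C, so a scalar-plus-bivector signal can be packed into a complex array and the CFT with kernel exp(-e12 w x) reduces to the ordinary complex DFT. This is an illustrative reduction under that assumption, not the paper's general Cl(p,q) implementation.

```python
import numpy as np

# In Cl(2,0) a field f = a + b*e12 (scalar + bivector parts) can be
# packed as a + 1j*b, since e12 squares to -1.  The Clifford Fourier
# transform then reduces to the complex DFT for this special case.
a = np.array([1.0, 0.0, -1.0, 0.0])   # scalar-part samples
b = np.array([0.0, 1.0, 0.0, -1.0])   # bivector-part samples
f = a + 1j * b

F = np.fft.fft(f)        # plays the role of the CFT in this reduced setting
f_back = np.fft.ifft(F)  # inversion theorem: the signal is recovered
print(np.allclose(f, f_back))  # -> True
```

For general Cl(p,q) and arbitrary square roots of -1 the transform acts on full multivector components and no such complex packing is available; that is precisely what the real-algebra construction in the paper handles.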
[4900] vixra:1306.0128 [pdf]
Analysis of Point Clouds Using Conformal Geometric Algebra
This paper presents some basics for the analysis of point clouds using the geometrically intuitive mathematical framework of conformal geometric algebra. In this framework it is easy to compute with osculating circles for the description of local curvature. Also methods for the fitting of spheres as well as bounding spheres are presented. In a nutshell, this paper provides a starting point for shape analysis based on this new, geometrically intuitive and promising technology. Keywords: geometric algebra, geometric computing, point clouds, osculating circle, fitting of spheres, bounding spheres.
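The sphere-fitting step mentioned in this abstract can be illustrated with a standard algebraic least-squares fit, which plays the same role as the conformal-GA fit described above. The helper below is a hypothetical stand-in, not the paper's code: it solves |x|^2 = 2 c.x - (|c|^2 - r^2), which is linear in the unknowns.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit (hypothetical stand-in for the
    conformal-GA sphere fit).  Solves |x|^2 = 2 c.x + d as a linear
    system, then recovers radius from r^2 = d + |c|^2."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# samples of the unit sphere centered at (1, 2, 3)
rng = np.random.default_rng(0)
d = rng.normal(size=(200, 3))
pts = np.array([1.0, 2.0, 3.0]) + d / np.linalg.norm(d, axis=1, keepdims=True)
c, r = fit_sphere(pts)
print(np.round(c, 3), round(float(r), 3))
```

In the conformal model the same fit falls out of representing spheres as vectors and minimizing a quadratic form, which is what makes the GA framework attractive for point-cloud work.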
[4901] vixra:1306.0127 [pdf]
Clifford Fourier Transform on Multivector Fields and Uncertainty Principles for Dimensions N = 2 (Mod 4) and N = 3 (Mod 4)
First, the basic concepts of multivector functions, the vector differential and the vector derivative in geometric algebra are introduced. Second, we define a generalized real Fourier transform on Clifford multivector-valued functions (f : R^n -> Cl(n,0), n = 2,3 (mod 4)). Third, we show a set of important properties of the Clifford Fourier transform on Cl(n,0), n = 2,3 (mod 4) such as differentiation properties, and the Plancherel theorem, independent of special commutation properties. Fourth, we develop and utilize commutation properties for giving explicit formulas for f x^m, f Nabla^m and for the Clifford convolution. Finally, we apply Clifford Fourier transform properties for proving an uncertainty principle for Cl(n,0), n = 2,3 (mod 4) multivector functions. Keywords: Vector derivative, multivector-valued function, Clifford (geometric) algebra, Clifford Fourier transform, uncertainty principle.
[4902] vixra:1306.0126 [pdf]
Uncertainty Principle for Clifford Geometric Algebras Cl(n,0), N = 3 (Mod 4) Based on Clifford Fourier Transform
First, the basic concepts of multivector functions, the vector differential and the vector derivative in geometric algebra are introduced. Second, we define a generalized real Fourier transform on Clifford multivector-valued functions (f : R^n -> Cl(n,0), n = 3 (mod 4)). Third, we introduce a set of important properties of the Clifford Fourier transform on Cl(n,0), n = 3 (mod 4) such as differentiation properties, and the Plancherel theorem. Finally, we apply the Clifford Fourier transform properties for proving a directional uncertainty principle for Cl(n,0), n = 3 (mod 4) multivector functions. Keywords. Vector derivative, multivector-valued function, Clifford (geometric) algebra, Clifford Fourier transform, uncertainty principle. Mathematics Subject Classification (2000). Primary 15A66; Secondary 43A32.
[4903] vixra:1306.0105 [pdf]
Investigation of the Formalism of Particle Dynamics Under the Framework of Classical Mechanics
In this paper we reconstruct the formalism of particle dynamics under the framework of classical mechanics according to the causal consistency principle, and obtain a new particle dynamical equation. In this derivation there is a most natural and simple assumption: that an absolute background of space exists. This is because, in essence, the absolute background of space must be distinguished from the relative scales of space. The existence of an absolute background is not only largely compatible with the physical logic of the special theory of relativity, but also retains the most fundamental elements of our intuitive experience. Certainly, the absolute background of space is also the underlying part of Newton's absolute view of space-time. In the application of the new dynamical equation, inertial reference frames are no longer required and inertial forces are no longer introduced by hand. The new dynamical equation can be directly applied in any reference frame which is irrotational with respect to the absolute background of space; namely, a moderate general principle of relativity is realized for particle dynamics. The nature of the inertial force is nothing but the real forces exerted on the reference object. Further analysis illustrates that the new particle dynamical equation is more in line with the empirical laws of classical mechanics experiments than the traditional theoretical formula of Newton's second law.
[4904] vixra:1306.0098 [pdf]
On Lorentz Transformation and Special Relativity: Critical Mathematical Analyses and Findings
In this paper, the Lorentz transformation equations are closely examined in connection with the constancy of the speed of light postulate of the special relativity. This study demonstrates that the speed of light postulate is implicitly manifested in the transformation under the form of space-to-time ratio invariance, which has the implication of collapsing the light sphere to a straight line, and rendering the frames of reference origin-coordinates undetermined with respect to each other. Yet, Lorentz transformation is shown to be readily constructible based on this conflicting finding. Consequently, the formulated Lorentz transformation is deemed to generate mathematical contradictions, thus defying its tenability. A rationalization of the isolated contradictions is then established. An actual interpretation of the Lorentz transformation is presented, demonstrating the unreality of the space-time conversion property attributed to the transformation.
[4905] vixra:1306.0097 [pdf]
Our Current Concept of Locality May be Incomplete (Talk Slides)
The predictions of Bell's inequalities, and their subsequent experimental verification in the form of correlations between spacelike separated events, have led to the prevailing current view that 'nature is non-local'. Here we examine the possibility that our current concept of locality may at present not be sufficiently differentiated, and that by using 'nature' synonymously with 'spacetime' we may have missed an implication of special relativity which, by rendering a more complete conception of locality, permits such quantum correlations without either hidden variables or violations of locality.
[4906] vixra:1306.0096 [pdf]
Windowed Fourier Transform of Two-Dimensional Quaternionic Signals
In this paper, we generalize the classical windowed Fourier transform (WFT) to quaternion-valued signals, called the quaternionic windowed Fourier transform (QWFT). Using the spectral representation of the quaternionic Fourier transform (QFT), we derive several important properties such as reconstruction formula, reproducing kernel, isometry, and orthogonality relation. Taking the Gaussian function as window function we obtain quaternionic Gabor filters which play the role of coefficient functions when decomposing the signal in the quaternionic Gabor basis. We apply the QWFT properties and the (right-sided) QFT to establish a Heisenberg type uncertainty principle for the QWFT. Finally, we briefly introduce an application of the QWFT to a linear time-varying system. Keywords: quaternionic Fourier transform, quaternionic windowed Fourier transform, signal processing, Heisenberg type uncertainty principle
[4907] vixra:1306.0095 [pdf]
Clifford Algebra Cl(3,0)-valued Wavelets and Uncertainty Inequality for Clifford Gabor Wavelet Transformation
The purpose of this paper is to construct Clifford algebra Cl(3,0)-valued wavelets using the similitude group SIM(3) and then give a detailed explanation of their properties using the Clifford Fourier transform. Our approach can generalize complex Gabor wavelets to multivectors called Clifford Gabor wavelets. Finally, we describe some of their important properties which we use to establish a new uncertainty principle for the Clifford Gabor wavelet transform.
[4908] vixra:1306.0094 [pdf]
Clifford Algebra Cl(3,0)-valued Wavelet Transformation, Clifford Wavelet Uncertainty Inequality and Clifford Gabor Wavelets
In this paper, it is shown how continuous Clifford Cl(3,0)-valued admissible wavelets can be constructed using the similitude group SIM(3), a subgroup of the affine group of R^3. We express the admissibility condition in terms of a Cl(3,0) Clifford Fourier transform and then derive a set of important properties such as dilation, translation and rotation covariance, a reproducing kernel, and show how to invert the Clifford wavelet transform of multivector functions. We invent a generalized Clifford wavelet uncertainty principle. For scalar admissibility constant it sets bounds of accuracy in multivector wavelet signal and image processing. As concrete example we introduce multivector Clifford Gabor wavelets, and describe important properties such as the Clifford Gabor transform isometry, a reconstruction formula, and an uncertainty principle for Clifford Gabor wavelets. Keywords: Similitude group, Clifford Fourier transform, Clifford wavelet transform, Clifford Gabor wavelets, uncertainty principle.
[4909] vixra:1306.0092 [pdf]
Two-Dimensional Clifford Windowed Fourier Transform
Recently several generalizations to higher dimension of the classical Fourier transform (FT) using Clifford geometric algebra have been introduced, including the two-dimensional (2D) Clifford Fourier transform (CFT). Based on the 2D CFT, we establish the two-dimensional Clifford windowed Fourier transform (CWFT). Using the spectral representation of the CFT, we derive several important properties such as shift, modulation, a reproducing kernel, isometry and an orthogonality relation. Finally, we discuss examples of the CWFT and compare the CFT and the CWFT.
[4910] vixra:1306.0091 [pdf]
An Uncertainty Principle for Quaternion Fourier Transform
We review the quaternionic Fourier transform (QFT). Using the properties of the QFT we establish an uncertainty principle for the right-sided QFT. This uncertainty principle prescribes a lower bound on the product of the effective widths of quaternion-valued signals in the spatial and frequency domains. It is shown that only a Gaussian quaternion signal minimizes the uncertainty. Key words: Quaternion algebra, Quaternionic Fourier transform, Uncertainty principle, Gaussian quaternion signal, Hypercomplex functions Math. Subj. Class.: 30G35, 42B10, 94A12, 11R52
[4911] vixra:1306.0089 [pdf]
Clifford Fourier Transformation and Uncertainty Principle for the Clifford Geometric Algebra Cl(3,0)
First, the basic concept of the vector derivative in geometric algebra is introduced. Second, beginning with the Fourier transform on a scalar function we generalize to a real Fourier transform on Clifford multivector-valued functions (f: R^3 -> Cl(3,0)). Third, we show a set of important properties of the Clifford Fourier transform on Cl(3,0) such as differentiation properties, and the Plancherel theorem. Finally, we apply the Clifford Fourier transform properties for proving an uncertainty principle for Cl(3,0) multivector functions. Keywords: vector derivative, multivector-valued function, Clifford (geometric) algebra, Clifford Fourier transform, uncertainty principle.
[4912] vixra:1306.0076 [pdf]
Do Photons exist in Spacetime?
Under our current worldview, the assumption that objects characterized by $v=c$ exist in spacetime is taken to be so obvious that it is usually not even mentioned explicitly. This paper will present 4 simple arguments based on the special theory of relativity which at least suggest that this obvious assumption should be put into question. These arguments cannot be considered conclusive, but when considered together they support the case that this question should be seriously investigated.
[4913] vixra:1306.0075 [pdf]
Asymmetry Due to Quantum Collapse (Paper)
This paper points out an internal tension between quantum collapse and expressions which set eigenstates equal to superposition states in a different basis and thereby imply that pre-measurement and immediate post-measurement states are of the same kind. Its resolution appears to be either to discard the collapse postulate or to consider such states to be of distinct kinds with respect to their association with a superposition of properties.
[4914] vixra:1306.0066 [pdf]
Asymmetry Due to Quantum Collapse (Poster)
This poster points out an internal tension between quantum collapse and expressions which set eigenstates equal to superposition states in a different basis and thereby imply that pre-measurement and immediate post-measurement states are of the same kind. Its resolution appears to be either to discard the collapse postulate or to consider such states to be of distinct kinds with respect to their association with a superposition of properties.
[4915] vixra:1306.0063 [pdf]
Connection Between Gravity and Electromagnetism
A new interpretation of electrodynamics and gravity is presented, based on the idea that the electromagnetic and gravitational properties of the vacuum are connected. Space and time are treated as imaginary concepts. With this, electrodynamic and gravitational phenomena can be explained with a Galilean-invariant vacuum. A new way to explain gravitational attraction also results.
[4916] vixra:1306.0062 [pdf]
Counterfeit/Obsolete Equipment and Nuclear Safety Issues of VVER-1000 Reactors at Kudankulam, India
Counterfeit equipment is becoming a major threat to nuclear safety globally. The Kudankulam Nuclear Power Plant (KKNPP) in India, housing two VVER-1000 reactors imported from Russia, is being delayed because of counterfeit, substandard and obsolete equipment. The polar crane, the limb of the reactor, is defective, as its hoisting capacity is less than 80% of its nameplate capacity. The crane is used for installing the equipment inside the reactor building and also for removing spent fuel. The contract said that there would be no weld in the beltline of the reactor pressure vessel (RPV). The received vessels have two circumferential welds on the beltline. The RPV, the heart of the reactor, is irreplaceable and hence determines the life of the reactor. The RPV and polar crane are safety grade equipment. The core-damage frequency (CDF) of the reactor in the contract was 10^-7 per reactor-year, while the supplied reactor has a CDF of 10^-5 per reactor-year. Two units of generator transformers were received damaged, and these were dismantled and reassembled at the site. This paper finds evidence of the unethical practices of sale and use of obsolete and counterfeit reactor equipment and discusses the global catastrophic risks with reference to the international nuclear safety standards.
[4917] vixra:1306.0051 [pdf]
How Dirac and Majorana Equations Are Related
Majorana and Dirac equations are usually considered as two different and mutually exclusive equations. In this paper we demonstrate that both of them can be considered as special cases of a more general equation.
[4918] vixra:1306.0014 [pdf]
Femtotechnologies. Step I Atom Hydrogen
It is considered unpromising today to study the huge interval between the nucleus and the external shell of the atom, the so-called femtoregion, spanning from nanometers down to femtometers. But without knowledge of the spatial structure of atoms and their fields it is impossible to construct molecules correctly, and to build nanoobjects further. Femtotechnologies have to form the theoretical basis of nanotechnologies, without which the development of applied research is impossible. In this work the femtoregion of the simplest element, the hydrogen atom, is considered. It is shown that the electron in the hydrogen atom has a complex spatial structure; taking it into account allows one to refine fundamental constants such as the fine-structure constant, the speed of light, and the Bohr radius of the electron. It is shown that on the basis of these constants it is possible to construct fundamental scales scaling both the internal and external fields of atoms. This allows macroquantum laws that govern the Universe to be formulated. It means that without research of the femtoregion of atoms it is impossible to eliminate the abyss which has arisen between gravitation and electromagnetism. It is shown that our model removes a number of theoretical contradictions and is perfectly confirmed by recent astrophysical experiments.
[4919] vixra:1306.0012 [pdf]
Moller Formula Fails the Experimental Test
The Møller formula is believed to be a good energy formula. But we have forgotten the old-fashioned experiments by which mankind measured calories in calorimeters to build the structure of science and philosophy. I propose that a future advanced humankind (after the Second Coming) put a cosmic structure into a calorimeter! This rough, "rude" way will certainly give answers to the energy problem in General Relativity (GR). One mind-torturing difficulty is the hypothesized non-locality of energy [R.J. Epp, Phys. Rev. D 62, 124018, 2000]. But GR is a local theory. Thus, the formulas are outside GR.
[4920] vixra:1305.0202 [pdf]
Initiating the Newtonian Gravitational N-Body Spherical Simplification Algorithm on the Inopin Holographic Ring Topology
We propose a preliminary algorithm which is designed to reduce aspects of the n-body problem to a 2-body problem for holographic principle compliance. The objective is to share an alternative viewpoint on the n-body problem to try and generate a simpler solution in the future. The algorithm operates on 2D and 3D data structures to initiate the encoding of the chaotic dynamical system equipped with modified superfluid order parameter fields in both 3D and 4D versions of the Inopin holographic ring (IHR) topology. For the algorithm, we arbitrarily select one point-mass to be the origin and, from that reference frame, we subsequently engage a series of instructions to consolidate the residual (n-1)-bodies to the IHR. Through a step-by-step example, we demonstrate that the algorithm yields "IHR effective" (IHRE) net quantities that enable us to hypothetically define an IHRE potential, kinetic, and Lagrangian.
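The consolidation idea in this abstract can be sketched in its simplest Newtonian form: pick one body as the reference, replace the residual (n-1) bodies by a single effective body at their barycenter, and compare the resulting 2-body force with the exact net force. The sketch below is a far-field approximation with illustrative names; it is not the IHR-encoded algorithm itself.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI

def net_force(masses, pos, i):
    """Exact Newtonian net force on body i from all other bodies."""
    F = np.zeros(3)
    for j in range(len(masses)):
        if j == i:
            continue
        r = pos[j] - pos[i]
        F += G * masses[i] * masses[j] * r / np.linalg.norm(r) ** 3
    return F

def effective_two_body_force(masses, pos, i):
    """Consolidate the residual (n-1) bodies into one effective body at
    their barycenter; exact only in the far-field limit."""
    m = np.delete(masses, i)
    p = np.delete(pos, i, axis=0)
    M = m.sum()
    bary = (m[:, None] * p).sum(axis=0) / M
    r = bary - pos[i]
    return G * masses[i] * M * r / np.linalg.norm(r) ** 3

# a heavy body far from a compact pair: the 2-body reduction is accurate
masses = np.array([1e3, 1.0, 1.0])
pos = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [101.0, 0.0, 0.0]])
F_exact = net_force(masses, pos, 0)
F_eff = effective_two_body_force(masses, pos, 0)
```

When the residual bodies are widely spread around the reference body this barycentric consolidation breaks down, which is one motivation for a more structured encoding such as the IHR topology proposed in the paper.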
[4921] vixra:1305.0201 [pdf]
The Majorana Spinor Representation of the Poincare Group
There are Poincare group representations on complex Hilbert spaces, like the Dirac spinor field, or real Hilbert spaces, like the electromagnetic field tensor. The Majorana spinor is an element of a 4-dimensional real vector space. The Majorana spinor field is a space-time dependent Majorana spinor, solution of the free Dirac equation. The Majorana-Fourier and Majorana-Hankel transforms of Majorana spinor fields are defined and related to the linear and angular momenta of a spin one-half representation of the Poincare group. We show that the Majorana spinor field with finite mass is a unitary irreducible projective representation of the Poincare group on a real Hilbert space. Since the Bargmann-Wigner equations are valid for all spins and are based on the free Dirac equation, these results open the possibility to study Poincare group representations with arbitrary spins on real Hilbert spaces.
[4922] vixra:1305.0198 [pdf]
Noise-Enhanced Transmission Efficacy of Aperiodic Signals in Nonlinear Systems
We study the aperiodic signal transmission in a static nonlinearity in the context of aperiodic stochastic resonance. The performance of a nonlinearity over that of the linear system is defined as the transmission efficacy. The theoretical and numerical results demonstrate that the noise-enhanced transmission efficacy effects occur for different signal strengths in various noise scenarios.
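The noise-enhanced effect described in this abstract is easy to reproduce in a minimal simulation: a subthreshold aperiodic signal passes a static threshold nonlinearity only when noise assists it, so the input-output correlation (used here as a simple stand-in for the paper's transmission efficacy measure) is better at a moderate noise level than with no noise at all. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
s = 0.5 * np.sign(rng.standard_normal(n))   # aperiodic +/-0.5 signal, subthreshold
threshold = 1.0

def efficacy(noise_rms):
    """Input-output correlation through a static threshold nonlinearity."""
    noise = noise_rms * rng.standard_normal(n)
    y = (s + noise > threshold).astype(float)
    if y.std() == 0:
        return 0.0  # nothing crossed the threshold
    return float(np.corrcoef(s, y)[0, 1])

quiet = efficacy(0.0)   # no noise: the signal never crosses the threshold
helped = efficacy(0.8)  # moderate noise carries the signal across
print(quiet < helped)   # -> True
```

Sweeping `noise_rms` over a grid reproduces the characteristic stochastic-resonance curve: efficacy rises from zero, peaks at an intermediate noise level, and decays as noise swamps the signal.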
[4923] vixra:1305.0174 [pdf]
Rational Structure, General Solution and Naked Barred Galaxies
Rational structure in two dimensions means that not only does there exist an orthogonal net of curves in the plane but also, for each curve, the stellar density on one side of the curve is in constant ratio to the density on the other side. Such a curve is called a proportion curve or a Darwin curve. Such a distribution of matter is called a rational structure. Spiral galaxies are blended with dust and gas. Their longer-wavelength (e.g. infrared) images present mainly the stellar distribution, which is called the naked galaxy. Jin He found much evidence that galaxies are rational stellar distributions. We list a few examples. Firstly, galaxy components (disks and bars) can be fitted with rational structure. Secondly, spiral arms can be fitted with Darwin curves. Thirdly, rational structure dictates a New Universal Gravity which explains constant rotation curves simply and elegantly. This article presents the systematic theory of rational structure, its general solution and geometric meaning. A preliminary application to spiral galaxies is also discussed.
[4924] vixra:1305.0172 [pdf]
Theoretical Study of Quantization of Magnetic Flux in a Superconducting Ring
We refine the concepts of electric current and fluxoid, and London's equation, which specify quantum phenomena of moving electrons and magnetic flux in a closed circuit similar to a superconducting ring, so as not to violate the uncertainty principle. On this basis the relation between electron motion and magnetic flux in a superconductor has been theoretically investigated by means of Faraday's law and/or the canonical momentum relation. The fact that the minimum unit of the quantized magnetic flux is hc/2e does not mean the concurrent motion of the two electrons in a Cooper pair, as has been thought so far. Rather, it is shown to be related to the independent motion of each electron in a superconducting state.
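The quantum hc/2e discussed above is stated in Gaussian units; in SI units the corresponding flux quantum is Phi_0 = h/(2e). A one-line check with the exact CODATA constants:

```python
# Flux quantum Phi_0 = h / (2e), the minimum unit of magnetic flux
# quantization in a superconducting ring (SI, exact CODATA values).
h = 6.62607015e-34      # Planck constant, J s
e = 1.602176634e-19     # elementary charge, C
phi0 = h / (2 * e)
print(f"{phi0:.4e} Wb")  # -> 2.0678e-15 Wb
```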
[4925] vixra:1305.0170 [pdf]
Hyper Fast Interstellar Travel Within General Relativity: The Alcubierre and Natario Warp Drive Spacetimes: From Science Fiction to Science Fact
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. The warp drive seems very attractive because it allows interstellar space travel at arbitrarily large speeds, avoiding the light speed limit, time dilation and mass increase paradoxes of Einstein's Special Relativity. This is an introductory article without mathematics, written for the general public, outlining a brief resume of the "status quo" of warp drive science from a historical perspective. We cover the $3$ major obstacles raised by modern science against the physical integrity of the warp drive as a dynamical spacetime that can carry a ship at faster than light speeds. We show, using clear and accessible language, that the Natario warp drive emerges as a "winner" against all $3$ obstacles and can be regarded as a valid candidate for faster than light space travel. Our goal in writing a non-technical paper on this subject is to captivate the interest of potential common readers who might appreciate this subject while avoiding the details of complex mathematical explanations.
[4926] vixra:1305.0169 [pdf]
Octonionic Ternary Gauge Field Theories Revisited
An octonionic ternary gauge field theory is explicitly constructed based on a ternary bracket defined earlier by Yamazaki. The ternary infinitesimal gauge transformations do obey the key $closure$ relations $[\delta_1, \delta_2] = \delta_3$. An invariant action for the octonionic-valued gauge fields is displayed after solving the previous problems in formulating a non-associative octonionic ternary gauge field theory. The octonionic ternary gauge field theories constructed here deserve further investigation; in particular, to study their relation to Yang-Mills theories based on the $G_2$ group, which is the automorphism group of the octonions, and their relevance to Noncommutative and Nonassociative Geometry.
[4927] vixra:1305.0147 [pdf]
Creator's Standard Equation, General Solution and Naked Barred Galaxies
We have not found the general solution to the Creator's equation system. However, we have outlined a strategy for determining the solution. Firstly, we should study the stretch equation, which is a first-order linear and homogeneous partial differential equation, and find all its stretches corresponding to the given vector field (i.e., the gradient of the logarithmic stellar density). Our solution G(x,y), however, must simultaneously be the modulus of some analytic complex function; it is called the modulus stretch. Secondly, among all possible modulus stretches, we find the right solution (i.e., the orthogonal net of curves) which satisfies the Creator's standard equation.
[4928] vixra:1305.0141 [pdf]
Electrical Forces Are not Conservative
Abstract: This article shows how energy can be obtained from the electrical force by means of asymmetric systems, as with the gravitational force. It also attempts to explain briefly where that energy comes from.
[4929] vixra:1305.0139 [pdf]
Comment on "Using COSMOtherm to Predict Physicochemical Properties of Poly- and Perfluorinated Alkyl Substances (PFASs)"
In their study, Wang et al. [Environ. Chem. 2011, 8, 389] use the COSMOtherm software program in an attempt to shed some insights into the physicochemical properties of various poly- and perfluorinated compounds. During the conformation dependent pKa investigations on n-perfluorooctanoic acid (PFOA), the authors appear to make a critical error in their analyses. Wang et al. appear to have allowed a transition from an acid form geometry to a very different anionic form geometry, which is not a correct way of calculating free energy changes during acid dissociation. This error explains why these authors obtained widely ranging conformation dependent pKa values (0.9 to 2.9) for PFOA dissociation. As has been previously shown [Rayne and Forest, J. Mol. Struc. THEOCHEM 2010, 949, 60], there are negligible conformation dependent effects on the pKa value of PFOA.
[4930] vixra:1305.0137 [pdf]
Zanaboni Theory and Saint-Venant's Principle: Updated
Zanaboni Theory is mathematically analyzed in this paper. The conclusion is that the Zanaboni Theorem is invalid and not a proof of Saint-Venant's Principle; the Discrete Zanaboni Theorem and Zanaboni's energy decay are inconsistent with Saint-Venant's decay; and the inconsistency, discussed here, between Zanaboni Theory and Saint-Venant's Principle provides further proof that Saint-Venant's Principle is not generally true.
[4931] vixra:1305.0136 [pdf]
Saint-Venant's Principle: Rationalized and Rational
This paper concerns the statement of Saint-Venant's Principle. The statement of Boussinesq or Love is ambiguous, so its interpretations contradict each other. A Rationalized Statement of Saint-Venant's Principle of elasticity is suggested to rule out the ambiguity of the statements of Boussinesq and Love. A Rational Saint-Venant's Principle is suggested to fit and guide applications of the principle to fields of continuum physics, and to cover the analogical case as well as the non-analogical case discovered and discussed in this paper. "Constraint-free" problems are suggested, and a "Constraint-free" Rational Saint-Venant's Principle, or Rational Saint-Venant's Principle with Relaxed Boundary Condition, is developed to generalize the principle and promote its applications to fields of continuum physics. Applications of the Analogical Rational Saint-Venant's Principle and the "Constraint-free" Rational Saint-Venant's Principle are exemplified, emphasizing "properness" of the boundary-value problems. Three kinds of properly posed boundary-value problems, i.e., the boundary-value problem with an undetermined boundary function, the boundary-value problem with an implicit boundary condition, and the boundary-value problem with an explicit boundary condition, are suggested for both "constrained" and "constraint-free" problems.
[4932] vixra:1305.0129 [pdf]
The Irrelevance of Bell Inequalities in Physics: Comments on the DRHM Paper
It was shown in [1], cited in the sequel as DRHM, that upon a correct use of the respective statistical data, the celebrated Bell inequalities cannot be violated by quantum systems. This paper presents in more detail the surprisingly elementary, even if rather subtle, basic argument in DRHM, together with a few comments which, hopefully, may further facilitate its wider understanding.
[4933] vixra:1305.0100 [pdf]
Orbital Averages and the Secular Variation of the Orbits
Orbital averages are employed to compute the secular variation of the elliptical planetary elements in the orbital plane in presence of perturbing forces of various kinds. They are also useful as an aid in the computation of certain complex integrals. An extensive list of computed integrals is given.
[4934] vixra:1305.0094 [pdf]
The Creator's Equation System Without Composite Functions
Is the sum of rational structures also a rational structure? It is called the Creator's big question for humans. Numerical calculation suggests that it is approximately rational for the fitted parameter values of barred spiral galaxies. However, we need mathematical justification. The authors present the Creator's equation system without composite functions, the equation system being the necessary and sufficient condition for rational structure. However, we have not found its general solution. Please help us find the general solution.
[4935] vixra:1305.0085 [pdf]
A Complex and Triplex Framework for Encoding the Riemannian Dual Space-Time Topology Equipped with Order Parameter Fields
In this work, we forge a powerful, easy-to-visualize, flexible, consistent, and disciplined abstract vector framework for particle physics and astrophysics that is compliant with the holographic principle. We demonstrate that the structural properties of the complex number and the sphere enable us to introduce and define the triplex number---an influential information structure that is similar to the 3D hyper-complex number by D. White and P. Nylander---which identifies a 3D analogue of (2D) complex space. Consequently, we engage the complex and triplex numbers as abstract vectors to systematically encode the state space of the Riemannian dual 3D and 4D space-time topologies, where space and time are dual and interconnected; we use the triplex numbers (with triplex multiplication) to extend 1D and 2D algebraic systems to 3D and 4D configurations. In doing so, we equip space-time with order parameter fields for topological deformations. Finally, to exemplify our motivation, we provide three example applications for this framework.
[4936] vixra:1305.0084 [pdf]
Spontaneous Symmetry Breaking in Nonlinear Dynamic System
Spontaneous Symmetry Breaking (SSB), in contrast to explicit symmetry breaking, is a spontaneous process by which a system governed by a symmetrical dynamic ends up in an asymmetrical state. The symmetry of the equations is thus not reflected by the individual solutions, but by the symmetric coexistence of asymmetrical solutions. SSB provides a way of understanding the complexity of nature without renouncing the fundamental symmetries that make us believe in, or prefer, symmetric over asymmetric fundamental laws. Many illustrations of SSB are discussed, from QFT and everyday life to nonlinear dynamic systems.
[4937] vixra:1305.0082 [pdf]
Numerical Solution of Nonlinear Sine-Gordon Equation with Local RBF-Based Finite Difference Collocation Method
This paper presents the local radial basis function based on finite difference (LRBF-FD) for the sine-Gordon equation. Advantages of the proposed method are that this method is mesh free unlike finite difference (FD) and finite element (FE) methods, and its coefficient matrix is sparse and well-conditioned as compared with the global RBF collocation method (GRBF). Numerical results show that the LRBF-FD method has good accuracy as compared with GRBF.
[4938] vixra:1305.0052 [pdf]
The Creator's Equation
Is the sum of rational structures also a rational structure? It is called the Creator's big question for humans. Numerical calculation suggests that it is approximately rational for the fitted parameter values of barred spiral galaxies. However, we need mathematical justification. The authors are very old and are not experts in mathematics. Please help us humans to resolve the question.
[4939] vixra:1305.0045 [pdf]
Smarandache Seminormal Subgroupoids
In this paper, we defined the Smarandache seminormal subgroupoids. We have proved some results for finding the Smarandache seminormal subgroupoids in Z(n) when n is even and n is odd.
[4940] vixra:1305.0038 [pdf]
Enumeration of Self-Avoiding Walks in a Lattice
A self-avoiding walk (SAW) is a path on a lattice that does not pass through the same point more than once. We develop a method for enumerating self-avoiding walks in a lattice by decomposing them into smaller pieces called tiles, solving particular cases on the square, triangular and cubic lattices. We also show that enumeration of SAWs in a lattice is related to enumeration of edge-connected shapes, for example polyominoes.
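The tile-decomposition method of the paper is not reproduced here; as a baseline illustration of what is being counted, a brute-force depth-first search (my own minimal sketch, not the paper's algorithm) enumerates the n-step self-avoiding walks on the square lattice:

```python
def count_saws(n):
    """Count n-step self-avoiding walks on Z^2 starting at the origin,
    by exhaustive depth-first search (exponential time; small n only)."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def dfs(pos, visited, steps):
        if steps == 0:
            return 1
        total = 0
        for dx, dy in moves:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:       # self-avoidance constraint
                visited.add(nxt)
                total += dfs(nxt, visited, steps - 1)
                visited.remove(nxt)
        return total

    return dfs((0, 0), {(0, 0)}, n)
```

The first few counts (4, 12, 36, 100 for n = 1..4) agree with the known square-lattice sequence; faster methods such as the tile decomposition above are needed because this count grows exponentially.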
[4941] vixra:1305.0029 [pdf]
Quantum Information and Cosmology: the Connections
The information or knowledge we can get from a quantum state depends on the interaction of this state with the measurement process itself. As we know, this involves an uncertainty in the amount of information and the accuracy of this knowledge. Now we ask: is there an information content intrinsic to quantum reality and independent of the observer? Our answer is a resounding yes. More precisely, we show that this information is encoded on surfaces, and specifically in circular compactifications; that is, there is a holograph on surfaces. We call this encoding of quantum information the strong holographic principle. Applying this principle, we show that the Higgs vacuum value implies a subtle correction by entropic uncertainty. The connections of quantum information to cosmology are clearly shown by the values of the dark energy density $\approx \ln 2$ and the matter density $\Omega_{m} = \Omega_{c} + \Omega_{b} \approx 1 - \ln 2$. It is shown that the energy-momentum equation has five solutions, by factoring into two components which appear in terms of mass and imaginary momentum. Far from being a mere mathematical artifice, we argue that these states should exist, and our interpretation is that every particle appears to be a mixture of two states, one of them unobservable, having an imaginary component. In other words, these involve imaginary states faster than the speed of light, without contradicting Special Relativity, as we shall see. The failure to observe Cherenkov radiation forces us to conclude that these are virtual states. These five states also appear to be related to the minimum number of microstates which generate the group E8. Finally, all these results lead us to postulate a candidate particle for dark matter, of approximately 9.2 GeV.
[4942] vixra:1305.0011 [pdf]
Lorentz Transformation And The Relative Velocity
In a previous publication, we imagined a thought experiment showing that the relative velocities of two observers in uniform linear motion are different, unlike the predictions of the Lorentz transformation. Thanks to this experiment, we show that assuming the equality of the relative velocities leads inevitably to contradictions. Based on the axioms of affine space and their implications, an explanation is provided to understand the source of these contradictions.
[4943] vixra:1305.0007 [pdf]
Free Fermions on Causal Sets
We construct a Dirac theory on causal sets; a key element in the construction being that the causet must be regarded as emergent in an appropriate sense too. We further notice that mixed norm spaces are a key element in the construction allowing for negative norm particles and ``ghosts''.
[4944] vixra:1304.0164 [pdf]
The Extended Born's Reciprocal Relativity Theory: Division, Jordan, N-Ary Algebras, and Higher Order Finsler Spaces
We extend the construction of Born's Reciprocal Relativity theory in ordinary phase spaces to an extended phase space based on Quaternions. The invariance symmetry group is the (pseudo) unitary quaternionic group $ U ( N_+, N_-, {\bf H} ) $ which is isomorphic to the unitary symplectic group $ USp( 2N_+, 2 N_-, {\bf C} )$. It is explicitly shown that the quaternionic group $ U ( N_+, N_-, {\bf H} ) $ leaves invariant both the quadratic norm (corresponding to the generalized Born-Green interval in the extended phase space) and the tri-symplectic $ 2 $-form. The study of Octonionic, Jordan and ternary algebraic structures associated with generalized spacetimes (and their phase spaces) described by Gunaydin and collaborators is reviewed. A brief discussion on $n$-plectic manifolds whose Lie $ n$-algebra involves multi-brackets and $n$-ary algebraic structures follows. We conclude with an analysis on the role of higher-order Finsler geometry in the construction of extended relativity theories with an upper and lower bound to the higher order accelerations (associated with the higher order tangent and cotangent spaces).
[4945] vixra:1304.0162 [pdf]
Theoretical Basis of in Vivo Tomographic Tracer Kinetics
In vivo tracer kinetics, as probed by current tomographic techniques, is revisited from the point of view of fluid kinematics. Proofs of the standard intravascular advective perfusion model from first premises reveal underlying assumptions and demonstrate that all single input models apply at best to undefined tube-like systems, not to the ones defined by tomography, \textit{i.e.} the voxels. In particular, they do not and cannot account for the circulation across them. More generally, it is simply not possible to define a single non-zero steady volumetric flow rate per voxel. Restarting from the fact that kinematics requires the definition of six volumetric flow rates per voxel, one for each face, minimalist, 4D spatiotemporal analytic models of the advective transport of intravascular tracers in the whole organ of interest are obtained. Their many parameters, plasmatic volumetric flow rates and volumes, can be readily estimated at least in some specific cases. Estimates should be quasi-absolute in homogeneous tissue regions, regardless of the tomographic technique. Potential applications such as dynamic angio-tractography are presented. By contrast, the transport of mixed intra/extravascular tracers cannot be described by conservation of the mass alone and requires further investigation. Should this theory eventually supersede the current one(s), it shall have a deep impact on our understanding of the circulatory system, hemodynamics, perfusion, permeation and metabolic processes and on the clinical applications of tracer tracking tomography to numerous pathologies.
[4946] vixra:1304.0158 [pdf]
Products of Generalised Functions
An elementary algebra of products of generalised functions is constructed. A way of multiplying the defined generalised functions with polynomials is also given. The theory is given for single-variable functions but it can be easily generalised to the multi-variable case.
[4947] vixra:1304.0144 [pdf]
The Creator's Quest for Humans
According to Western religion, galaxies are designed by the Creator. Dr. Jin He has found much evidence that galaxies are rational distributions of stars. Rational structure in two dimensions means not only that there exists an orthogonal net of curves in the plane but also that, for each curve, the stellar density on one side of the curve is in constant ratio to the density on the other side. Such a curve is called a proportion curve or a Darwin curve, and such a distribution of matter is called a rational structure. There is plenty of evidence for rational galaxy structure; we list a few examples. Firstly, galaxy stellar distributions can be fitted to rational structures. Secondly, spiral arms can be fitted to Darwin curves. Thirdly, rational structure dictates New Universal Gravity, which explains constant rotation curves elegantly. However, there has been no systematic study of rational structure. This letter presents a general partial differential equation whose solution must be a rational structure, together with the geometric meaning of the equation. The general solution to the equation is called the Creator's open quest for humans, which has not been answered yet.
[4948] vixra:1304.0142 [pdf]
Quantum Theory of Galactic Dynamics
Much of the introductory section of this paper is devoted to displaying some previously obtained formulae, incorporating a change of notation and variables, and giving some explanation of the relation of the work to Newtonian gravitation theory. This section refers to a quantisation of gravity concentrated on and limited to galaxies with totally spherically symmetric cores and halos; only the radial variable r is involved, and the emphasis is on the dark matter concept. All the following sections are devoted to generalising the theory to additionally incorporate a dependence of galactic structure on the spherical angular coordinates. The theory is derived using Schrödinger quantum theory in much the same way as it was used in developing the theory of atomic structure. The theoretical structure to be developed in this paper is a hybrid formulation involving three fundamental theoretical facets: general relativity, Schrödinger quantum mechanics, and a new theoretical version of isothermal gravity self-equilibrium. The combined structure has only become possible because of the discovery of an infinite discrete set of equilibrium states associated with this latter theory, the l parameter states. The configuration-space structure of these states has been found to be available in Schrödinger theory from a special inverse-square-law potential which appears to supply an inverse-cube self-attraction to the origin that maintains galaxies in an isolated steady-state self-gravity quantum condition.
[4949] vixra:1304.0123 [pdf]
Comment on "QSAR Modeling is not 'Push a Button and Find a Correlation': A Case Study of Toxicity of (Benzo-)triazoles on Algae"
In their manuscript, Gramatica et al. [Mol. Inf. 2012, 31, 817-835] claim to conduct quantitative structure-activity relationship (QSAR) modeling on a suite of triazoles, benzotriazoles, and additional azo-aromatic compounds. However, a number of the compounds examined by these authors do not appear to be triazoles, benzotriazoles, or other azo-aromatic compounds. In some cases, the authors also appear to publish incorrect molecular structures which may affect the structural descriptors employed for QSAR development.
[4950] vixra:1304.0109 [pdf]
The Twilight of the Scientific Age
This brief article presents the introduction and draft of the fundamental ideas developed at length in the book of the same title, which gives a challenging point of view about science and its history/philosophy/sociology. Science is in decline. After centuries of great achievements, the exhaustion of new forms and fatigue have reached our culture in all of its manifestations including the pure sciences. Our society is saturated with knowledge which does not offer people any sense in their lives. There is a loss of ideals in the search for great truths and a shift towards an anodyne specialized industry.
[4951] vixra:1304.0104 [pdf]
Primes in the Intervals [kn,(k+1)n]
In this paper, we prove: (a) for every integer n > 1 and every fixed integer k less than or equal to n, there exists a prime number p between kn and (k+1)n; and (b) the conjectures of Legendre, Oppermann, Andrica, and Brocard, and an improved version of Legendre's conjecture, as particular cases of (a).
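Claim (a) is easy to spot-check numerically for small parameters. A minimal sketch (trial-division primality and an exhaustive scan, my own naming, not the paper's proof) is:

```python
def is_prime(m):
    """Trial-division primality test; fine for the small values checked here."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True


def has_prime_between(k, n):
    """Is there a prime p with k*n <= p <= (k+1)*n?"""
    return any(is_prime(m) for m in range(k * n, (k + 1) * n + 1))
```

Scanning all n up to a few dozen and all k <= n finds no counterexample, consistent with the claimed result (of which k = 1 is Bertrand's postulate).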
[4952] vixra:1304.0086 [pdf]
Flux Divergence Method of Solving Einstein Equations
This paper tries to solve the Einstein equations with a new integral of motion (conserved quantity) for the system. Integrals of motion are important: the discovery of a new one, let us say, merits a Nobel Prize, and using them, "unsolvable" problems can be solved. Several dust-collapse solutions satisfy the new formulas. I also solved the collapse of a perfect liquid ball, with an unexpected result, and after the perfect liquid I obtained results for a real liquid. Known conserved quantities are critically discussed, the nature of Dark Energy is revealed, and the cyclic Universe hypothesis is rejected (the World will never shrink).
[4953] vixra:1304.0056 [pdf]
Discovering Taxon Specific Oligomer Repeats in Microbial Genomes
Using a computational approach, we studied oligonucleotide repeats in the currently available bacterial whole genomes. Though repeats account for only a small portion of bacterial genomes, they still prevail. Our study shows that some of these oligonucleotides have a large copy number in genomes while maintaining their taxon specificity. Generally, a length larger than 12 is enough to make an oligonucleotide repeat genus-specific; longer oligonucleotides become more specific and serve as species or strain marker sequences. We show here some examples in archaea and bacteria at different taxon levels. As we have a large volume of computational results, we make them available online through our TSOR server, which handles user queries, and in this thesis we give examples of how to use the server. Moreover, as these TSOR sequences are both specific and highly repeated, they would be good candidates for biased microbial community genome amplification.
[4954] vixra:1304.0055 [pdf]
Efficient Statistical Significance Approximation for Local Association Analysis of High-Throughput Time Series Data
Local association analysis, such as local similarity analysis and local shape analysis, of biological time series data helps elucidate the varying dynamics of biological systems. However, their applications to large-scale high-throughput data are limited by slow permutation procedures for statistical significance evaluation. We developed a theoretical approach to approximate the statistical significance of local similarity and local shape analysis based on the approximate tail distribution of the maximum partial sum of independent identically distributed (i.i.d.) and Markovian random variables. Simulations show that the derived formula approximates the tail distribution reasonably well (starting at time points > 10 with no delay and > 20 with delay) and provides p-values comparable to those from permutations. The new approach enables efficient calculation of statistical significance for pairwise local association analysis, making possible all-to-all association studies otherwise prohibitive. As a demonstration, local association analysis of human microbiome time series shows that core OTUs are highly synergetic and some of the associations are body-site specific across samples. The new approach is implemented in our eLSA package, which now provides pipelines for faster local similarity and shape analysis of time series data. The tool is freely available from eLSA's website: http://meta.usc.edu/softs/lsa.
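The "maximum partial sum" statistic at the heart of the approximation can be illustrated in a few lines. This is only the core no-delay idea in my own minimal form (function name and the use of raw products are my assumptions); the eLSA package itself handles delays, normalization, and the significance formula:

```python
def local_similarity(x, y):
    """No-delay local similarity score of two equal-length series:
    the maximum over all windows of the partial sum of x_t * y_t,
    computed Kadane-style with the running sum clipped below at 0."""
    best = run = 0.0
    for a, b in zip(x, y):
        run = max(0.0, run + a * b)   # restart the window when the sum goes negative
        best = max(best, run)
    return best
```

On standardized series this score measures the strongest locally co-varying stretch; its tail distribution under the null is what the paper approximates analytically instead of estimating by permutation.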
[4955] vixra:1304.0054 [pdf]
Developing Statistical and Algorithmic Methods for Shotgun Metagenomics and Time Series Analysis
Recent developments in experimental molecular techniques, such as microarray and next-generation sequencing technologies, have led molecular biology into a high-throughput era with emergent omics research areas, including metagenomics and transcriptomics. The massive omics datasets generated, and being generated, by experimental laboratories pose new challenges for computational biologists to develop fast and accurate quantitative analysis tools. We have developed two statistical and algorithmic methods, GRAMMy and eLSA, for metagenomics and microbial community time series analysis. GRAMMy provides a unified probabilistic framework for shotgun metagenomics, in which the maximum likelihood method is employed to accurately compute the Genome Relative Abundance of microbial communities using Mixture Model theory (GRAMMy). We extended the Local Similarity Analysis technique (eLSA) to time series data with replicates, capturing statistically significant local and potentially time-delayed associations. Both methods are validated through simulation studies, and their capability to reveal new biology is also demonstrated through applications to real datasets. We implemented GRAMMy and eLSA as C++ extensions to Python, with both superior computational efficiency and easy-to-integrate programming interfaces. GRAMMy and eLSA will be increasingly useful tools as new omics research accelerates its pace. The tools are available at http://meta.usc.edu/softs/lsa.
[4956] vixra:1304.0053 [pdf]
Charmonium with an Effective Morse Molecular Potential
The Morse molecular potential is used for the first time as an effective potential for the overall interaction in charmonium. This procedure allows the calculation of the rotational contributions of P states, the radii of five S states, and an absolute threshold for bound states. The calculation of the latter provides important information on the character of the recently found levels X(3915), X(3940), Psi(4040), X(4050), X(4140), Psi(4160), X(4160), X(4250), X(4260), X(4350), Psi(4415), X(4430), and X(4660).
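The appeal of the Morse potential as an effective model is that its bound-state spectrum is analytic, which is what gives a finite number of levels and an absolute threshold. A sketch of the standard textbook formula, with purely hypothetical parameters (not the paper's charmonium fit), is:

```python
import math


def morse_levels(D, a, mu, hbar=1.0):
    """Bound-state energies, measured from the well bottom, of the Morse
    potential V(r) = D * (1 - exp(-a*(r - re)))**2, via the standard
    analytic formula E_n = hbar*w*(n+1/2) - [hbar*w*(n+1/2)]^2 / (4*D)."""
    w = a * math.sqrt(2.0 * D / mu)            # harmonic frequency at the minimum
    lam = math.sqrt(2.0 * mu * D) / (a * hbar)
    nmax = int(lam - 0.5)                      # bound states exist for n <= lam - 1/2
    return [hbar * w * (n + 0.5) - (hbar * w * (n + 0.5)) ** 2 / (4.0 * D)
            for n in range(nmax + 1)]
```

With D = 10, a = 1, mu = 0.5 (arbitrary units) this yields three bound levels, all below the dissociation energy D, with level spacings that shrink toward the threshold; it is this finite, anharmonic ladder that makes the model useful for locating a bound-state cutoff.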
[4957] vixra:1304.0052 [pdf]
Analytical Results on Systems Arising in Enzymatic Reactions with Application to Phosphofructokinase Model
A reaction-diffusion system based on some biological systems, arising in enzymatic reactions, has been considered. The iterative method by means of a fixed point theorem has been applied in order to solve this system of coupled nonlinear partial differential equations. The existence, uniqueness and positiveness of the solution to system with Robin-type boundary condition have been obtained. A biochemical system has been extended and solved analytically. Quasi-steady states and linear stability analysis have been proved.
[4958] vixra:1304.0033 [pdf]
Problems with the "End of Growth" Hypothesis and Its Generalization
The hypothesis that the global economy has entered, or is nearing, the "end of growth" has been proposed in the literature. Overall, we find no significant evidence to support this hypothesis, or that such economic limits are about to be reached in the near-term. Our conclusions in no way diminish concerns over current and proposed rates of non-renewable resource extraction and the negative impacts of continuing human population growth and industrial expansion on the biosphere. However, any natural system limits that have been reached or exceeded (or are about to be) do not appear to be causing sufficiently large negative feedbacks on global economic growth within the scope of the most commonly employed socio-economic indicators in order to warrant claims that future economic growth will halt or regress.
[4959] vixra:1304.0026 [pdf]
Avoiding an Imaginary Connection in the Dirac Equation
In a Majorana basis, the Dirac equation for a free spin one-half particle is a 4x4 real matrix differential equation. When including the effects of the electromagnetic interaction, the Dirac equation is a complex equation due to the presence of an imaginary connection in the covariant derivative, related with the phase of the spinor. In this paper we study the solutions of the Dirac equation with the null and Coulomb potentials and notice that there is a real matrix that squares to -1, relating the imaginary and real components of these solutions. We show that these solutions can be obtained from the solutions of two non-linear 4x4 real matrix differential equations with a real matrix as the connection of the covariant derivative.
[4960] vixra:1304.0024 [pdf]
Lorentz Violation and Modified Geodesics
We propose a modification of proper time, which is dependent on vierbein and spin connection. It explicitly breaks local Lorentz gauge symmetry, while preserving diffeomorphism invariance. In the non-relativistic limit, the geodesics are consistent with galactic rotation curves without invoking dark matter.
[4961] vixra:1304.0023 [pdf]
Special Relativity With Apparent Simultaneity
We consider a model of special relativity in which standard simultaneity is replaced by an alternative defined per observer by the direct appearance of simultaneity. The postulates of special relativity are interpreted to permit it, using a corresponding measure of distance chosen so that the measurement of light’s speed remains invariant with a value of c. The relativistic Doppler effect and Lorentz transformation of time are derived from direct observations without consideration of a delay of light. Correspondence of the model with SR is further shown by finding a displaced observer whose measure of apparent simultaneity is identical to a given observer’s measure of standard simultaneity. The advantages of apparent simultaneity include unifying apparent delay of light with relative simultaneity, and unifying changes to relative simultaneity with change in observer position. With speculative interpretation the model implies an equivalence of time and distance.
[4962] vixra:1304.0019 [pdf]
Time Dependent Schrödinger Equation for Black Hole Evaporation: no Information Loss
In 1976 S. Hawking claimed that "Because part of the information about the state of the system is lost down the hole, the final situation is represented by a density matrix rather than a pure quantum state" (verbatim from ref. 2). This was the starting point of the popular "black hole (BH) information paradox". In a series of papers with collaborators, we naturally interpreted BH quasi-normal modes (QNMs) in terms of quantum levels, discussing a model of an excited BH somewhat similar to the historical semi-classical Bohr model of the structure of the hydrogen atom. Here we explicitly write down, for the same model, a time-dependent Schrödinger equation for the system composed of Hawking radiation and BH QNMs. The physical state and the corresponding wave function are written in terms of a unitary evolution matrix instead of a density matrix, so the final state turns out to be a pure quantum state instead of a mixed one. Hence, Hawking's claim is falsified, because BHs turn out to be well defined quantum mechanical systems, having ordered, discrete quantum spectra, which respect 't Hooft's assumption that Schrödinger equations can be used universally for all dynamics in the universe. As a consequence, information comes out in BH evaporation in terms of pure states in a unitary time-dependent evolution. In Section 4 of this paper we show that the present approach also permits solving the entanglement problem connected with the information paradox.
[4963] vixra:1304.0016 [pdf]
Localization Formulas About Two Killing Vector Fields
In this article, we discuss the smooth $(X_{M}+\sqrt{-1}Y_{M})$-invariant forms on M and establish a localization formula. As an application, we obtain a localization formula for characteristic numbers.
[4964] vixra:1304.0015 [pdf]
Comment on "Sorption of Organic Chemicals to Soil Organic Matter: Influence of Soil Variability and pH Dependence"
In their article, Bronner and Goss [Environ. Sci. Technol., 2011, 45, 1307-1312] investigate the pH dependence of organic chemical sorption to soil organic matter. The authors report a log Koc value for benzoyl chloride in aqueous solution, despite this compound having a known hydrolysis half-life of only 16 seconds in water. This timeframe is far too short to allow the measurement of any equilibrium based partitioning coefficients. Consequently, one suspects that the log Koc value reported for benzoyl chloride is likely that of its hydrolysis product: benzoic acid. The authors also may have chosen two experimental conditions (pH 4.5 and 7.2) between which the ionization state of the carboxylic acids in the organic matter may have changed very little, and could instead have remained in effectively the same net ionization state between the two experimental pH endpoints. Thus, there does not appear to be sufficient evidence in this work to support the general claim therein that "protonation/deprotonation of carboxylic groups in humic matter has no significant influence on sorption ... even for polar organic molecules."
[4965] vixra:1304.0014 [pdf]
Comment on "QSPR Study on the Bioconcentration Factors of Nonionic Organic Compounds in Fish by Characteristic Root Index and Semiempirical Molecular Descriptors"
In their article, Sacan et al. [J. Chem. Inf. Comput. Sci. 2004, 44, 985-992] construct a quantitative structure-property relationship model to predict the bioconcentration factors of purportedly nonionic organic compounds. A number of the compounds examined by these authors are not nonionic as claimed, but instead have associated pKa values that would render the molecules significantly, and - in some cases - effectively entirely, ionized under conditions relevant for bioconcentration in freshwater and/or marine aquatic systems.
[4966] vixra:1304.0011 [pdf]
Lie Algebrized Gaussians for Image Representation
We present an image representation method which is derived from analyzing Gaussian probability density function (\emph{pdf}) space using Lie group theory. In our proposed method, images are modeled by Gaussian mixture models (GMMs) which are adapted from a globally trained GMM called universal background model (UBM). Then we vectorize the GMMs based on two facts: (1) components of image-specific GMMs are closely grouped together around their corresponding component of the UBM due to the characteristic of the UBM adaption procedure; (2) Gaussian \emph{pdf}s form a Lie group, which is a differentiable manifold rather than a vector space. We map each Gaussian component to the tangent vector space (named Lie algebra) of Lie group at the manifold position of UBM. The final feature vector, named Lie algebrized Gaussians (LAG) is then constructed by combining the Lie algebrized Gaussian components with mixture weights. We apply LAG features to scene category recognition problem and observe state-of-the-art performance on 15Scenes benchmark.
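The vectorization step can be illustrated with a toy 1-D sketch. The tangent-space coordinates below are a hypothetical simplification for a single scalar Gaussian, not the authors' exact LAG construction: each adapted component is mapped to coordinates at its UBM anchor and the results are weight-combined.

```python
import numpy as np

# Hypothetical log-map for a 1-D Gaussian N(mu, sigma) at the UBM
# component N(mu0, sigma0): normalized mean shift and log variance ratio.
def lie_algebrize(mu, sigma, mu0, sigma0):
    return np.array([(mu - mu0) / sigma0, np.log(sigma / sigma0)])

# UBM with two components; an "image-specific" GMM adapted slightly away.
ubm = [(0.0, 1.0), (5.0, 2.0)]
gmm = [(0.3, 1.1), (4.6, 2.2)]
weights = [0.6, 0.4]

# LAG-style vector: weighted concatenation of per-component tangents.
lag = np.concatenate([w * lie_algebrize(m, s, m0, s0)
                      for w, (m, s), (m0, s0) in zip(weights, gmm, ubm)])
print(lag.shape)  # (4,)
```

Because adapted components stay close to their UBM anchors, these tangent coordinates remain in the nearly-linear regime of the manifold, which is the property the LAG representation exploits.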
[4967] vixra:1304.0007 [pdf]
Comment on "Dependence of Persistence and Long-Range Transport Potential on Gas-Particle Partitioning in Multimedia Models"
In their article, Gotz et al. [Environ. Sci. Technol., 2008, 42, 3690-3696] use three different multimedia contaminant fate models to analyze the impact of implementing a two-particle-size polyparameter linear free energy relationship approach on metrics of persistence and long-range transport, and on calculated concentrations of semivolatile organic chemicals in the Arctic. One of the twelve compounds investigated is 2,4-D (2,4-dichlorophenoxyacetic acid), which is effectively entirely dissociated in aqueous systems. The authors do not appear to have considered the ionization of 2,4-D during their multimedia modeling exercises, particularly the effects of ionization on octanol-water and air-water partitioning behavior. Consequently, all modeling results presented for 2,4-D appear to be in significant error and should not be employed for risk assessment purposes.
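The ionization argument in these comments follows from the Henderson-Hasselbalch relation. A minimal sketch, assuming a pKa of roughly 2.7 for 2,4-D (the commonly cited value), shows why a neutral-species assumption fails at environmental pH:

```python
# Ionized fraction of a monoprotic acid from the Henderson-Hasselbalch
# relation: f = 1 / (1 + 10^(pKa - pH)). Partitioning properties of the
# neutral and ionized forms differ greatly, hence the strong pH dependence.
def ionized_fraction(pKa, pH):
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Assumed pKa ~ 2.7 for 2,4-D at an environmental pH of 7.
print(ionized_fraction(2.7, 7.0))  # effectively entirely ionized (> 0.999)
```

At pH = pKa the compound is exactly half ionized; several pH units above pKa, essentially none of the neutral form remains to partition as the models assume.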
[4968] vixra:1304.0006 [pdf]
A Perdurable Defence to Weyl's Unified Theory
Einstein dealt a lethal blow to Weyl's unified theory by arguing that Weyl's theory, though beautiful, was un-physical, because its concept of variation of the length of a vector from one point of space to another meant that certain absolute quantities, such as the ``fixed" spacing of atomic spectral lines and the Compton wavelength of an electron for example, would change arbitrarily, as they would have to depend on their prehistories. This venomous criticism by Einstein of Weyl's theory remains as alive today as it was on the first day Einstein pronounced it. We demonstrate herein that one can overcome Einstein's criticism by recasting Weyl's theory into a new Weyl-kind of theory in which the lengths of vectors are preserved, as is the case in Riemann geometry. In this new Weyl theory, the Weyl gauge transformation of the Riemann metric and the electromagnetic field is preserved.
[4969] vixra:1303.0226 [pdf]
Relations Between Distorted and Original Angles in STR
Using the Oblique-Length Contraction Factor, which is a generalization of the Lorentz Contraction Factor, one shows several trigonometric relations between the distorted and original angles of a moving object's lengths in the Special Theory of Relativity.
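A short sketch of the generalization, assuming the oblique-length contraction factor takes the form OC(v, θ) = sqrt(C(v)² cos²θ + sin²θ) with C(v) = sqrt(1 - v²/c²) the usual Lorentz factor (this form is an assumption for illustration):

```python
import math

def lorentz_factor_C(v, c=1.0):
    # Standard Lorentz length-contraction factor.
    return math.sqrt(1.0 - (v / c) ** 2)

def oblique_contraction(v, theta, c=1.0):
    # Assumed oblique generalization: interpolates between full
    # contraction along the motion and none perpendicular to it.
    C = lorentz_factor_C(v, c)
    return math.sqrt(C**2 * math.cos(theta)**2 + math.sin(theta)**2)

v = 0.8  # speed in units of c
print(oblique_contraction(v, 0.0))          # parallel: full contraction, 0.6
print(oblique_contraction(v, math.pi / 2))  # perpendicular: no contraction, 1.0
```

The two limiting cases recover ordinary special relativity: lengths along the motion contract by C(v), lengths perpendicular to it are unchanged.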
[4970] vixra:1303.0223 [pdf]
Problems with the "Oil Curse" Hypothesis and Its Generalization
The concept of an "oil curse" has been widely debated in the literature. Two clear camps have emerged: (1) those that favor the basic hypothesis or a modified version thereof, and (2) those that find little generalizable empirical evidence for the negative impacts of oil and gas development on the political, socio-economic, and/or environmental trajectories of oil and gas producing nations relative to their non-oil and gas producing counterparts. Overall, we find no significant evidence to support a generalizable concept of an oil curse. In general, our results do not seek to dismiss concerns regarding the potential impacts of oil and gas development on some regions, but rather to illustrate that any such impacts do not appear to be universal in either their direction or magnitude. Similar to what other groups have found, we see evidence that - in some cases - increased oil and gas development appears to correlate with improved socio-economic indicators. In other cases, the evidence is ambiguous at best in light of the large numbers of confounding variables that are effectively impossible to rigorously account for in order to obtain clear and unequivocal negative causal mechanisms between oil and gas development and the status of a society.
[4971] vixra:1303.0202 [pdf]
Mobile Robot Navigation Using Artificial Landmarks and GPS
For mobile robot navigation, the robot must adequately recognize its current position and the surrounding environment. To this end, by equipping the robot with sensors such as laser range scanners, ultrasonic sensors, cameras, odometry, and GPS (Global Positioning System), the robot can determine its current position and orientation, the state of its surroundings, the distance traveled, and the distances to nearby objects. However, sensor information contains errors, and as errors arising from the traversed environment and the mounted sensors accumulate, the robot can lose track of its current position, deviate from its planned path, and fail to reach its destination. To recognize the correct position, the errors must be periodically eliminated and the position calibrated. To improve position calibration, control techniques such as the SLAM (Simultaneous Localization and Mapping) [1] algorithm and the Kalman Filter [2] are introduced into the robot.
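The Kalman-filter correction step mentioned above can be sketched in one dimension (an illustrative toy, not the paper's system): a drifting odometry estimate is fused with a GPS fix, each weighted by its variance.

```python
# Minimal 1-D Kalman update: fuse a predicted position (from odometry)
# with a measurement (from GPS), weighting by their variances.
def kalman_update(x_pred, p_pred, z, r):
    """x_pred/p_pred: prediction and its variance; z/r: measurement and its variance."""
    k = p_pred / (p_pred + r)       # Kalman gain
    x = x_pred + k * (z - x_pred)   # corrected position
    p = (1.0 - k) * p_pred          # reduced uncertainty
    return x, p

# Odometry drifted to 10.0 m (variance 4.0); GPS reads 12.0 m (variance 1.0).
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)
print(x, p)  # estimate pulled toward the more certain GPS fix: 11.6, 0.8
```

Repeating this predict/update cycle is what keeps the accumulated odometry error bounded between calibrations.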
[4972] vixra:1303.0200 [pdf]
Comment on "Policies for Chemical Hazard and Risk Priority Setting: Can Persistence, Bioaccumulation, Toxicity, and Quantity Information be Combined?"
In their article, Arnot and Mackay [Environ. Sci. Technol., 2008, 42, 4648-4654] use 200 chemicals from the Canadian Domestic Substances List (DSL) to illustrate a model that integrates persistence, bioaccumulation, toxicity, and quantity information for a specific substance to assess chemical exposure, hazard, and risk. The authors claim that the DSL chemicals used in their study are not expected to appreciably ionize at environmental pH. In contrast, a number of the compounds in this study have ionizable functional groups with environmentally relevant pKa values, meaning the corresponding partitioning properties are highly pH dependent, thereby rendering the modeling approach applied by these authors subject to a fatal conceptual and practical flaw. In addition, several compounds in the authors' dataset are expected to hydrolyze rapidly in aquatic systems, resulting in negligible environmental persistence.
[4973] vixra:1303.0196 [pdf]
Scientific Errors and Ambiguities in Prominent Submissions to Canadian Environmental Assessments: A Case Study of the Jackpine Mine Expansion Project
In Canada, as in many other developed nations, natural resource development projects meeting certain criteria are required to undergo an environmental assessment (EA) process to determine potential human and ecological health impacts. As part of the Canadian EA process, the Canadian Environmental Assessment Agency generally considers submissions by members of the public and experts. While the allowance of external submissions during EA hearings forms an important component of a functional participatory democracy, little attention appears to have been given regarding the quality of such EA submissions. In particular, submissions to EA hearings by prominent individuals and/or groups may be weighted more heavily in the overall decision making framework than those from non-experts. Important questions arise through the allowance and consideration of external submissions to EAs, such as whether inaccuracies in any such submissions may misdirect the EA decision makers to reach erroneous conclusions, and if such inaccuracies do result in sub-optimal EA processes, how the issues should be addressed. In the current work, a representative recent external submission from a prominent public individual and group to the Shell Canada Jackpine Mine Expansion (JPME) Project EA hearings was examined. The case study submission to the JPME EA hearings appears to contain a number of significant scientific errors and/or ambiguities, demonstrating that the EA process in Canada appears to allow potentially flawed submissions from prominent individuals and/or groups, and these problematic submissions may result in unnecessary delays, expenses, or even erroneous decisions. From a public policy perspective, it is desirable that the Canadian EA process be reformed to minimize contributions that may not result in an accurate assessment of the underlying science for the project(s) under consideration.
[4974] vixra:1303.0186 [pdf]
The Non-Generalizability of The First Law of Petropolitics
In 2006, Friedman wrote an influential and widely cited article in Foreign Policy magazine [May/June (2006) 28-36] stating The First Law of Petropolitics. This law held that the quality of governance in oil-rich petrolist states is inversely correlated with, and causally driven by, increasing oil prices. In contrast, we find no generally consistent governance patterns among oil-rich petrolist states related to the price of oil that support any claims for a First Law of Petropolitics.
[4975] vixra:1303.0175 [pdf]
Comment on "QSPR Model for Bioconcentration Factors of Nonpolar Organic Compounds Using Molecular Electronegativity Distance Vector Descriptors"
In their article, Qin et al. [Mol Divers (2010) 14:67-80] construct a quantitative structure-property relationship model to predict the bioconcentration factors of purportedly nonpolar organic compounds. A number of the compounds examined by these authors are not nonpolar as claimed, but instead have associated pKa values that would render the molecules significantly, and - in some cases - effectively entirely, ionized under conditions relevant for bioconcentration in freshwater and/or marine aquatic systems.
[4976] vixra:1303.0172 [pdf]
On Retracts, Absolute Retracts, and Folds in Cographs
Let G and H be two cographs. We show that the problem to determine whether H is a retract of G is NP-complete. We show that this problem is fixed-parameter tractable when parameterized by the size of H. When restricted to the class of threshold graphs or to the class of trivially perfect graphs, the problem becomes tractable in polynomial time. The problem is also solvable in linear time when one cograph is given as an induced subgraph of the other. We characterize absolute retracts for the class of cographs. Foldings generalize retractions. We show that the problem to fold a trivially perfect graph onto a largest possible clique is NP-complete. For a threshold graph this folding number equals its chromatic number and achromatic number.
[4977] vixra:1303.0162 [pdf]
Comment on "Serum Albumin Binding of Structurally Diverse Neutral Organic Compounds: Data and Models"
In their article, Endo and Goss [Chem. Res. Toxicol. 2011, 24, 2293-2301] claim to measure the bovine serum albumin water partition coefficients for 83 structurally diverse neutral organic chemicals and correlate the resulting values against corresponding octanol-water partition coefficients and polyparameter linear free energy relationship models based on descriptors for the neutral forms of each compound. However, several compounds in the authors' dataset would be significantly ionized under the experimental conditions being modeled against, and such ionization must be accounted for in any serum albumin binding modeling efforts.
[4978] vixra:1303.0150 [pdf]
Comment on "Correlation of Aqueous pKa Values of Carbon Acids with Theoretical Descriptors: A DFT Study"
In their article, Charif et al. [J. Mol. Struct. THEOCHEM 818 (2007) 1] used the B3LYP/6-311++G(d,p) density functional level of theory to estimate gas phase standard state (298.15 K, 1 atm) free energies of acid dissociation (ΔacidG°(g)) for 21 carbon acids. These authors then examined correlations between their B3LYP/6-311++G(d,p) ΔacidG°(g) values and corresponding experimental aqueous pKa measurements. Large errors are evident between experimental values and the B3LYP/6-311++G(d,p) calculated ΔacidG°(g) for propanedioic acid, diethyl ester, dimedone, isopropylidene malonate, barbituric acid, and toluene from this study. The findings call into question the generality of the correlation between ΔacidG°(g) and aqueous experimental pKa values for carbon acids proposed by Charif et al., and also highlight the need for additional studies to investigate what other carbon acid moieties may be outliers. In the present case, either the experimental aqueous pKa of toluene in the literature is incorrect, or the quantitative structure-property relationship proposed in Charif et al. is subject to large outliers that greatly diminish its broad applicability.
[4979] vixra:1303.0148 [pdf]
Comment on “Thermodynamic Stability of Neutral and Anionic Pfos: a Gas-Phase, N-Octanol, and Water Theoretical Study”
In their article, Montero-Campillo et al. [J. Phys. Chem. A, 114 (2010) 10148-10155] use the B3LYP density functional with the 6-311+G(d,p) basis set to calculate the relative thermodynamic stabilities of the 89 linear and branched perfluorooctane sulfonic acid (PFOS) isomers in their molecular acid and dissociated anionic forms for the gas phase and aqueous and n-octanol solvent phases. A substantial body of work over the past decade has clearly demonstrated the inability of the B3LYP functional (and the majority of other widely employed density functionals) to accurately represent the relative thermodynamic stabilities of linear and branched alkanes (including perhydro, poly- and perhalogenated, and other functionalized derivatives). It has been specifically demonstrated using a range of theoretical methods (semiempirical, Hartree-Fock [HF], various density functionals, and second order Moller-Plesset perturbation theory) that the B3LYP branching error for perhydroalkane isomerizations also applies to perfluoroalkanes, and particularly to classes of compounds such as the 89 PFOS isomers, as well as the perfluoroalkanoic acids and perfluoralkyl sulfonyl/acyl fluorides in their acid and (where applicable) anionic forms. Consequently, the relative thermodynamic stabilities of the molecular acid and anionic PFOS isomers at the B3LYP/6-311+G(d,p) level of theory put forward by Montero-Campillo et al. are in substantial error, and the authors and readers are referred elsewhere to more accurate calculations.
[4980] vixra:1303.0147 [pdf]
Syntactic - Semantic Axiomatic Theories in Mathematics
A more careful consideration of the recently introduced "Grossone Theory" of Yaroslav Sergeev, [1], leads to a considerable enlargement of what can constitute possible legitimate mathematical theories by the introduction here of what we may call the {\it Syntactic - Semantic Axiomatic Theories in Mathematics}. The usual theories of mathematics, ever since the ancient times of Euclid, are in fact axiomatic, [1,2], which means that they are {\it syntactic} logical consequences of certain assumed axioms. In these usual mathematical theories {\it semantics} can only play an {\it indirect} role which is restricted to the inspiration and motivation that may lead to the formulation of axioms, definitions, and of the proofs of theorems. In a significant contradistinction to that, and as manifestly inspired and motivated by the mentioned Grossone Theory, here a {\it direct} involvement of {\it semantics} in the construction of axiomatic mathematical theories is presented, an involvement which gives semantics the possibility to act explicitly, effectively, and altogether directly upon the usual syntactic process of constructing the logical consequences of axioms. Two immediate objections to what appears to be an unprecedented and massive expansion of what may now become legitimate mathematical theories given by the {\it syntactic - semantic axiomatic theories} introduced here can be the following: the mentioned direct role of semantics may, willingly or not, introduce in mathematical theories one, or both, of the "eternal taboos" of {\it inconsistency} and {\it self-reference}. Fortunately, however, such concerns can be alleviated due to recent developments in both inconsistent and self-referential mathematics, [1,2]. Grateful recognition is acknowledged here for long and most useful ongoing related discussions with Yaroslav Sergeev.
[4981] vixra:1303.0142 [pdf]
Comment on "Oxidation of Antibiotics During Water Treatment with Potassium Permanganate: Reaction Pathways and Deactivation [Hu et al., Environ. Sci. Technol., 2011, 45, 3635-3642]"
In their work, Hu et al. [Environ. Sci. Technol., 2011, 45, 3635-3642] investigate the oxidation of three antibiotics (ciprofloxacin, lincomycin, and trimethoprim) by potassium permanganate in buffered solutions at pH 7. The authors propose detailed mechanistic pathways for the oxidation of these substrates, but apparently do not consider the acid/base behavior of the compounds under consideration, resulting in erroneous mechanistic interpretations throughout the manuscript.
[4982] vixra:1303.0141 [pdf]
Comment on "Determination of Diffusion Coefficient of Organic Compounds in Water Using a Simple Molecular-Based Method [Gharagheizi, Ind. Eng. Chem. Res. 2012, 51, 2797-2803]"
In his article, Gharagheizi [Ind. Eng. Chem. Res. 2012, 51, 2797-2803] claims to develop a novel three-parameter equation for the calculation/prediction of the diffusion coefficient of nonelectrolyte organic compounds in water at infinite dilution. In contrast, many of the compounds investigated in this work are electrolytes in pure water at infinite dilution. Consequently, the molecular modeling efforts on the non-ionized molecular speciation of each compound were - in many cases - conducted on species that would not be dominantly present under the experimental conditions the modeling efforts are being developed against.
[4983] vixra:1303.0136 [pdf]
Five Departures in Logic, Mathematics, and Thus Either We Like It, or not in Physics as Well ...
Physics depends on ”physical intuition”, much of which is formulated in terms of Mathematics. Mathematics itself depends on Logic. The paper presents three latest novelties in Logic which have major consequences in Mathematics. Further, it presents two possible significant departures in Mathematics itself. These five departures can have major implications in Physics. Some of them are indicated, among them in Quantum Mechanics and Relativity.
[4984] vixra:1303.0131 [pdf]
Novel Remarks on Point Mass Sources, Firewalls, Null Singularities and Gravitational Entropy
A continuous family of static spherically symmetric solutions of Einstein's vacuum field equations with a $spatial$ singularity at the origin $ r = 0 $ is found. These solutions are parametrized by a real valued parameter $ \lambda$ (ranging from $ 0 $ to $ \infty$) and are such that the radial horizon's location is $displaced$ continuously towards the singularity ($ r = 0 $) as $ \lambda $ increases. In the limit $ \lambda \rightarrow \infty$, the locations of the singularity and horizon merge, leading to a $null$ singularity. In this extreme case, any infalling observer hits the null singularity at the very moment he/she crosses the horizon. This fact may have important consequences for the resolution of the firewall problem and the complementarity controversy in black holes. Another salient feature of these solutions is that they lead to a modification of the Newtonian potential consistent with the effects of the generalized uncertainty principle (GUP) associated with a minimal length. The field equations due to a delta-function point-mass source at $ r = 0 $ are solved and the Euclidean gravitational action corresponding to those solutions is evaluated explicitly. It is found that the Euclidean action is precisely equal to the black hole entropy (in Planck area units). This result holds in any dimension $ D \ge 3 $. The study of the Nonperturbative Renormalization Group flow of the metric $g_{\mu\nu}[k]$ in terms of the momentum scale $ k $ and its relationship to this family of metrics parametrized by $ \lambda$ deserves further investigation.
[4985] vixra:1303.0122 [pdf]
Parameterized Special Theory of Relativity (PSTR)
We have parameterized Einstein's thought experiment with atomic clocks, supposing that we knew neither whether space and time are relative or absolute, nor whether the speed of light is the ultimate speed. We thus obtained a Parameterized Special Theory of Relativity (PSTR), first introduced in 1982. Our PSTR generalizes not only Einstein's Special Theory of Relativity, but also our Absolute Theory of Relativity, and introduces three more possible Relativities to be studied in the future. After the 2011 CERN superluminal neutrino experiments, we recall our ideas and invite researchers to deepen the study of the PSTR and ATR, and to check the three new mathematically emerged Relativities 4.3, 4.4, and 4.5.
[4986] vixra:1303.0114 [pdf]
New Universal Gravity and Rational Galaxy Structure
New Universal Gravity: To any point on a rational structure there correspond three proportion surfaces which pass through the point and are orthogonal to each other. To any proportion surface there corresponds a component of the gravitational force at the point, whose direction is normal to the surface (pointing toward the larger matter density) and whose magnitude is proportional to the Gaussian curvature of the surface at the point and to the total mass contained in the closed surface. The new gravity generalizes Newtonian theory and gives a unified explanation of both discrete and smooth natural structures (e.g., the Solar system and galaxies). It is the inevitable truth of nature if gravity must satisfy the divergence theorem and galaxies must be rational structures.
[4987] vixra:1303.0103 [pdf]
Comment on "Visualising the Equilibrium Distribution and Mobility of Organic Contaminants in Soil Using the Chemical Partitioning Space [Wong and Wania, J. Environ. Monit., 2011, 13, 1569-1578]"
In their article, Wong and Wania [J. Environ. Monit., 2011, 13, 1569-1578] claim to estimate the partitioning properties (air-water partition coefficients and soil organic carbon-water partition coefficients) of twenty neutral organic chemicals using poly-parameter linear free energy relationships. Five of the 20 compounds in this study have ionizable functional groups with environmentally relevant pKa values, meaning the corresponding partitioning properties are highly pH dependent, thereby rendering the modeling approach applied by these authors subject to a fatal conceptual and practical flaw.
[4988] vixra:1303.0093 [pdf]
Comment on "The Sorptive Capacity of Animal Protein [DeBruyn and Gobas. 2007. Environ Toxicol Chem 26:1803-1808]"
In their article, DeBruyn and Gobas [2007. Environ Toxicol Chem 26:1803-1808] claim to "present a compilation and meta-analysis of published data to estimate the relative sorptive capacities of animal proteins and lipids for neutral organic chemicals." However, the dataset of these authors contains compounds that would be effectively entirely ionized at physiological pH values, rendering the assumption of neutrality and any subsequent analyses based thereupon incorrect.
[4989] vixra:1303.0088 [pdf]
Foundations of Santilli Isonumber Theory
1. Foundations of Santilli isonumber theory, I: isonumber theory of the first kind; 2. Santilli isonumber theory, II: isonumber theory of the second kind; 3. Fermat's last theorem and its applications; 4. the proofs of the binary Goldbach theorem using only partial primes; 5. Santilli isocryptographic theory. Disproofs of the Riemann hypothesis.
[4990] vixra:1303.0073 [pdf]
Comment on "Modelling Physico-Chemical Properties of (Benzo)triazoles, and Screening for Environmental Partitioning [bhhatarai and Gramatica, Water Res. 45, 2011, 1463-1471]"
In their article, Bhhatarai and Gramatica [Water Res. 45, 2011, 1463-1471] employ a quantitative structure-property relationship approach to model the physico-chemical properties of compounds they refer to as benzotriazoles, and to subsequently screen these compounds for environmental partitioning behavior. A substantial number of these compounds are not benzotriazoles and do not have similar properties as benzotriazoles. Consequently, it appears that the approach, assumptions, and results in this work must be viewed as potentially fundamentally flawed.
[4991] vixra:1303.0071 [pdf]
Comment on "Are Mechanistic and Statistical QSAR Approaches Really Different? MLR Studies on 158 Cycloalkyl-Pyranones [Bhhatarai et al., Mol. Inf. 2010, 29, 511-522]"
In their study, Bhhatarai et al. [Mol. Inf. 2010, 29, 511-522] develop quantitative structure-activity relationships (QSARs) for the inhibition of HIV protease by 158 so-called 4-OH cycloalkyl-pyranones. A number of compounds termed 4-OH cycloalkyl-pyranones in this work do not appear to be cycloalkyl-pyranones.
[4992] vixra:1303.0070 [pdf]
Comment on "Acid-Catalyzed Conversion of Xylose, Xylan and Straw into Furfural by Microwave-Assisted Reaction [Yemis and Mazza, 2011, Bioresour. Technol. 102, 7371-7378]"
In their article, Yemis and Mazza [Yemis and Mazza, 2011, Bioresour. Technol. 102, 7371-7378] study the effects of different Bronsted acids, temperatures, times, substrate concentrations, and pH on the acid-catalyzed conversion of xylose, xylan and straw into furfural by microwave-assisted reaction. The authors appear to incorrectly classify phosphoric acid as a mineral acid, and claim to achieve pH values in solutions of acetic and formic acid below the apparent theoretical limits.
[4993] vixra:1303.0068 [pdf]
A Neutrosophic Multicriteria Decision Making Method
This work presents a method of multicriteria decision making using neutrosophic sets. Besides studying some interesting mathematical properties of the method, an algorithm, viz. neut-MCDM, is presented. The work also furnishes the fundamentals of neutrosophic set theory succinctly, to provide a first introduction to neutrosophic sets for the MCDM community. To illustrate the computational details, neut-MCDM has been applied to the problem of university faculty selection against a given set of criteria.
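A toy sketch of neutrosophic MCDM scoring (the score function below is an assumption for illustration, not the paper's neut-MCDM algorithm): each alternative/criterion cell holds a neutrosophic triple (T, I, F) of truth, indeterminacy, and falsity degrees, and alternatives are ranked by an aggregate score.

```python
import numpy as np

# Assumed score convention (not from the paper): reward truth,
# penalize indeterminacy and falsity equally.
def score(t, i, f):
    return (t + (1.0 - i) + (1.0 - f)) / 3.0

# Two hypothetical candidates rated on two criteria, equal weights.
candidates = {
    "A": [(0.8, 0.1, 0.1), (0.6, 0.2, 0.3)],
    "B": [(0.5, 0.4, 0.2), (0.7, 0.1, 0.2)],
}
totals = {name: np.mean([score(*cell) for cell in cells])
          for name, cells in candidates.items()}
best = max(totals, key=totals.get)
print(best, totals[best])
```

The triple-valued ratings let an evaluator express hesitation (I) separately from outright rejection (F), which classical fuzzy MCDM conflates into a single membership degree.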
[4994] vixra:1303.0064 [pdf]
We are Looking for Modern Newton
Newton discovered the dynamic law of universal gravity based on his principles of kinetic physics and Kepler's three laws of planetary motion in the Solar system. However, astronomers have observed larger material systems in the universe: galaxies. If Newton's theory were applicable to galaxies, then stars would rotate around the galaxy center at a speed decreasing with the distance from the center. However, astronomical observation shows that the speed is constant regardless of the distance. This is called the problem of constant rotational curves. It is the dark cloud hanging over twentieth-century physics. Fortunately, Dr. Jin He found that the observed galaxy structure is rational. This suggests Jin He might be a modern Kepler. In this article we present a Cylindrical Conjecture on the galaxy force field based on Jin He's observational result. The conjecture simply proves the constancy of rotational curves. We are looking for a modern Newton who will develop the conjecture into a systematic theory of galaxy dynamics, should the conjecture prove to be a cosmic truth.
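The discrepancy described above is quantitative: for a central mass, Newtonian circular velocity falls as v(r) = sqrt(GM/r), while observed galactic rotation curves stay roughly flat. A minimal sketch:

```python
import math

# Newtonian (Keplerian) circular orbital speed around a central mass,
# in units where G*M = 1.
def newtonian_speed(r, GM=1.0):
    return math.sqrt(GM / r)

# Quadrupling the radius halves the Keplerian speed ...
print(newtonian_speed(4.0) / newtonian_speed(1.0))  # 0.5
# ... whereas observed galaxy rotation speeds remain approximately
# constant with radius, which is the problem of constant rotational curves.
```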
[4995] vixra:1303.0051 [pdf]
Climate Change and Biofuel Wheat Production in Southern Saskatchewan: Long-Term Climate Trends Versus Climate Modeling Predictions
Climate modeling work has suggested biofuel wheat production in southern Saskatchewan, Canada, during the mid-21st century will be influenced by increasing annual precipitation, including precipitation increases in every month except July and August, increasing daily mean, minimum, and maximum air temperatures throughout the year, and substantial increases in the risk of wheat heat shock (temperatures > 32.0 °C). In the current study, we compare prior modeling predictions to historical trends in the number of days with maximum temperatures > 32.0 °C during July and August, the number of hours with maximum temperatures > 32.0 °C during July, as well as monthly and annual total precipitation, mean daily temperatures, and mean maximum daily temperatures for climate stations throughout southern Saskatchewan. We find no evidence of increasing trends for wheat heat shock days or hours during the mid-summer period in this region. In contrast, the majority of stations exhibit significantly declining temporal trends in wheat heat shock days and hours. Historical precipitation and temperature trends for the climate stations under consideration in southern Saskatchewan display significant inter- and intra-station heterogeneity throughout the year in terms of whether or not trends are evident, as well as their magnitude and direction. Consequently, caution must be exercised when extrapolating any case study analyses at a particular location to larger geographic areas of the province. Based on our analyses of historical climate data for southern Saskatchewan, it is unclear whether climate models are accurately predicting future climate change impacts on biofuel wheat production for this region in the mid-21st century.
[4996] vixra:1303.0045 [pdf]
On the P-Untestability of Combinational Faults
We describe the p-untestability of faults in combinational circuits. P-untestable faults are similar to redundant faults but are defined probabilistically: a p-untestable fault is one that is not detected after N random-pattern simulations, or one that FAN either proves redundant or aborts on after K backtracks. We chose N to be about 1,000,000 and K to be about 1,000. We provide a p-untestability detection algorithm that works in about 85% of cases, with an average of about 14% false negatives. The algorithm is a simple modification of FAN, uses structural information, and can be easily implemented. It does not prove redundancy completely but establishes a fault as probabilistically redundant, meaning a fault with a low or zero probability of detection.
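The random-pattern side of this definition can be sketched with a hypothetical toy circuit: the detection probability of a stuck-at fault is the fraction of input patterns on which the good and faulty circuits disagree (exhaustive enumeration stands in for random-pattern simulation here).

```python
import itertools

# Hypothetical 3-input circuit y = (a AND b) OR c, with the AND-gate
# output stuck-at-0 as the injected fault.
def good(a, b, c):
    return (a & b) | c

def faulty(a, b, c):
    return 0 | c  # AND output stuck-at-0

# A pattern detects the fault when the two outputs differ.
detecting = [p for p in itertools.product((0, 1), repeat=3)
             if good(*p) != faulty(*p)]
print(len(detecting) / 8)  # random-pattern detection probability
```

A fault whose detection probability is this low only at toy scale, but near zero in a large circuit, is exactly what the p-untestability criterion is meant to flag without a full redundancy proof.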
[4997] vixra:1303.0043 [pdf]
PROR Compaction Scheme for Larger Circuits and Longer Vectors with Deterministic ATPG
Reverse order restoration (ROR) techniques have found great use in sequential automatic test pattern generation (ATPG), especially spectral and perturbation-based ATPG. This paper deals with improving ROR for that purpose. We introduce parallel-fault multipass 2-level polynomial reverse order restoration (PROR) algorithms with constant complexity of the form H(n)G(n) + c, where H(n) is the number of vectors to be released in this iteration and G(n) is the attenuation factor. In PROR, H(n) = nk and G(n) is 1.
[4998] vixra:1303.0039 [pdf]
Quantum Impedances, Entanglement, and State Reduction
The measurement problem, the mechanism of quantum state reduction, has remained an open question for nearly a century. The 'quantum weirdness' of the problem was highlighted by the introduction of the Einstein-Podolsky-Rosen paradox in 1935. Motivated by Bell's Theorem, nonlocality was first experimentally observed in 1972 by Clauser and Freedman in the entangled states of an EPR experiment, and is now an accepted fact. Special relativity requires that no energy is transferred in the nonlocal collapse of these entangled two-body wavefunctions, that no work is done, and that no information is communicated. In the family of quantum impedances, those which are scale invariant, the Lorentz and centrifugal impedances, satisfy this requirement. This letter explores their role in the collapse of the wave function.
[4999] vixra:1303.0038 [pdf]
Gaussian Quadrature of the Integrals $\int_{-\infty}^{\infty} f(x)\,dx/\cosh(x)$
The manuscript delivers nodes and their weights for Gaussian quadratures with a "non-classical" weight in the integrand defined by a reciprocal hyperbolic cosine. The associated monic orthogonal polynomials are constructed; their coefficients are simple multiples of the coefficients of Hahn polynomials. A final table shows the abscissae-weight pairs for up to 128 nodes.
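The manuscript derives its nodes and weights analytically from Hahn-polynomial coefficients; a generic numerical route can serve as a cross-check. The sketch below is my own assumption of method, not the paper's construction: it discretizes the weight $1/\cosh(x)$ on a finite grid, computes the monic three-term recurrence by the Stieltjes procedure, and extracts nodes and weights from the Jacobi matrix (Golub-Welsch).

```python
import numpy as np

def gauss_rule(weight, n, L=30.0, M=20001):
    """Gaussian nodes/weights for a rapidly decaying weight on (-inf, inf).

    Discretize the weight on [-L, L] by the trapezoid rule, run the
    Stieltjes procedure for the recurrence coefficients, then diagonalize
    the symmetric Jacobi matrix (Golub-Welsch).
    """
    x = np.linspace(-L, L, M)
    h = x[1] - x[0]
    w = weight(x) * h           # trapezoid-rule weights for the measure
    w[0] *= 0.5
    w[-1] *= 0.5
    mu0 = w.sum()               # zeroth moment

    # Monic recurrence: p_{k+1} = (x - a_k) p_k - b_k p_{k-1}
    a = np.zeros(n)
    b = np.zeros(n)
    p_prev = np.zeros_like(x)
    p = np.ones_like(x)
    norm_prev = mu0
    for k in range(n):
        norm = (w * p * p).sum()
        a[k] = (w * x * p * p).sum() / norm
        if k > 0:
            b[k] = norm / norm_prev
        p_prev, p = p, (x - a[k]) * p - b[k] * p_prev
        norm_prev = norm

    J = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0, :] ** 2   # first eigenvector components squared
    return nodes, weights

nodes, wts = gauss_rule(lambda t: 1.0 / np.cosh(t), n=8)
```

Sanity checks follow from the known moments $\int_{-\infty}^{\infty} dx/\cosh x = \pi$ and $\int_{-\infty}^{\infty} x^2\,dx/\cosh x = \pi^3/4$.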
[5000] vixra:1303.0037 [pdf]
FQuantum: A Quantum Computing Fault Simulator
Abstract—We would like to introduce fQuantum, a quantum computing fault simulator, and a new quantum computing fault model based on Hadamard, Pauli-X, Pauli-Y and Pauli-Z gates and the traditional stuck-at-1 (SA1) and stuck-at-0 (SA0) faults. We achieved close to 100% fault coverage on most circuits. The problem with lower coverage comes from function gates, which we will deal with in future versions of this paper.
[5001] vixra:1303.0013 [pdf]
Gauss-Laguerre and Gauss-Hermite Quadrature on 64, 96 and 128 Nodes
The manuscript provides tables of abscissae and weights for Gauss-Laguerre integration on 64, 96 and 128 nodes, and abscissae and weights for Gauss-Hermite integration on 96 and 128 nodes.
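Tables of this classical kind can be regenerated programmatically. The sketch below uses NumPy's built-in Gauss-Hermite generator (not the manuscript's own tables) to build a 64-node rule and checks it against the closed form $\int_{-\infty}^{\infty} e^{-x^2}\cos x\,dx = \sqrt{\pi}\,e^{-1/4}$.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# 64-node Gauss-Hermite rule: integrates f against the weight exp(-x^2)
nodes, weights = hermgauss(64)

# The sum of the weights equals the zeroth moment, sqrt(pi)
total = weights.sum()

# Approximate the integral of exp(-x^2) * cos(x)
approx = (weights * np.cos(nodes)).sum()
exact = np.sqrt(np.pi) * np.exp(-0.25)
```

Since the integrand factor $\cos x$ is entire, the rule converges to machine precision well before 64 nodes.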
[5002] vixra:1303.0010 [pdf]
Cosmology in Context: Current Studies of the Early Universe Through Astronomy and Particle Physics, Experiments, Observations, and Theories
This is a comprehensive review of the published research in cosmology focusing on the time period from the big bang to the last scattering of cosmic microwave background radiation, a period of approximately 380,000 years. Theoretical, observational, and experimental research with a bearing on cosmology will be covered. First, a time line of events from the big bang to the last scattering of CMB photons will be provided. Then, theoretical research related to the big bang, cosmic inflation, and baryogenesis will be reviewed. Next, observational as well as experimental work on the cosmic microwave background, big bang nucleosynthesis, and efforts to directly detect gravitational waves will be reviewed. After that comes a look at research on the edge of accepted cosmology, such as loop quantum cosmology and the possible time variation of fundamental constants. Last but not least, this author will present a tiny but novel theoretical idea: a Lagrangian which captures all of the physics of the standard model of cosmology.
[5003] vixra:1303.0004 [pdf]
Newtonian Gravitation with Radially Varying Velocity-Dependent Mass
A new extension to Newtonian celestial mechanics is examined. We focus on the scenario of a point-like body with negligible mass orbiting a spherically symmetric massive body. We take the implicitly time-dependent mass of electrodynamics one step further: we let the mass of the orbiting body vary not only with the velocity, but also with the position within the gravitational field. We find a family of expressions for the gravitational acceleration that explains the anomalous precession of the perihelion of the planets and, in the strong field limit, results in orbits in close agreement with the predictions of the Schwarzschild solution. Regarding the orbital velocity of a body in circular orbit and the acceleration of a body at rest, the new theory gives the same results as classical mechanics. This is not the case with the post-Newtonian expansion, even if terms at the third post-Newtonian (3PN) level are included. Arguably, the major benefit of the new theory is that it presents a method that is much less intricate and more practical to work with than general relativity, while reproducing most of its results, at least in the spherically symmetric case. While the differences between the final expression and the corresponding expression from the post-Newtonian expansion are small and subtle, the new theory gives results that are in several ways closer both to the classical results and to what the Schwarzschild solution predicts.
[5004] vixra:1302.0168 [pdf]
Massive Long Range Photons and the Variation of the Fine Structure Constant
Prevailing and conventional wisdom holds that the intermediate gauge bosons for long range interactions, such as the gravitational and electromagnetic interactions, must be massless, as is assumed to be the case for the photon which mediates the electromagnetic interaction. We have argued in Nyambuya (2013) that it should in principle be possible to have massive photons. The problem of whether these photons will lead to short or long range interactions has not been answered. Naturally, because these photons are massive, one would, without much pondering and excogitation on the matter, assume that they can only take part in short range interactions. Contrary to this and to conventional wisdom, we show via a subtlety, within the confines of Proca electrodynamics, that massive photons should be able to take part in long range interactions without any problem. While leaving the speed of light as an invariant fundamental constant, the resulting theory leads to a time variation in the fundamental constants of Nature, namely the permittivity and permeability of free space. In turn, this leads to a plausible explanation as to why the fine structure constant strongly appears to be varying over cosmological times.
[5005] vixra:1302.0155 [pdf]
How Many Degrees of Temperature Does a Warp Drive Achieve at Superluminal Speeds? The Analysis of Gonzalez-Diaz Applied to the Natario Warp Drive Spacetime
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive, discovered in $1994$, and the Natario warp drive, discovered in $2001$. The warp drive seems very attractive because it allows interstellar space travel at arbitrarily large speeds, avoiding the time dilation and mass increase paradoxes of Special Relativity. However, it suffers from a very serious drawback: \newline In order to travel significant interstellar distances in reasonable amounts of time, a ship would need to attain at least $200$ times the speed of light in order to visit a star like Gliese 581, at $20$ light-years, with potentially habitable exoplanets. \newline Some years ago Gonzalez-Diaz discovered that a warp drive at luminal speed develops a horizon, similar to the Schwarzschild event horizon, that appears at infinity, and at superluminal speeds this horizon approaches the position of the spaceship. That is, the faster the ship travels, the nearer to the ship the horizon forms. Moreover, the faster the ship travels, the hotter the horizon becomes, placing any ship and its astronauts in a dangerous thermal bath. \newline Gonzalez-Diaz applied his conclusions to the Alcubierre warp drive. \newline In this work we apply the analysis of Gonzalez-Diaz to the Natario warp drive and arrive at an interesting result: at luminal speed the horizon is formed not at infinity but at the end of the Natario warped region, and however fast the ship travels superluminally, this horizon does not approach the ship, remaining inside the Natario warped region and keeping a constant temperature. This makes the Natario warp drive a better candidate for interstellar space travel than its Alcubierre counterpart.
[5006] vixra:1302.0138 [pdf]
Integral Mean Estimates for the Polar Derivative of a Polynomial
Let $ P(z) $ be a polynomial of degree $ n $ having all zeros in $|z|\leq k$ where $k\leq 1,$ then it was proved by Dewan \textit{et al} \cite{d} that for every real or complex number $\alpha$ with $|\alpha|\geq k$ and each $r\geq 0$ $$ n(|\alpha|-k)\left\{\int\limits_{0}^{2\pi}\left|P\left(e^{i\theta}\right)\right|^r d\theta\right\}^{\frac{1}{r}}\leq\left\{ \int\limits_{0}^{2\pi}\left|1+ke^{i\theta}\right|^r d\theta\right\}^{\frac{1}{r}}\underset{|z|=1}{Max}|D_\alpha P(z)|. $$ \indent In this paper, we shall present a refinement and generalization of above result and also extend it to the class of polynomials $P(z)=a_nz^n+\sum_{\nu=\mu}^{n}a_{n-\nu}z^{n-\nu},$ $1\leq\mu\leq n,$ having all its zeros in $|z|\leq k$ where $k\leq 1$ and thereby obtain certain generalizations of above and many other known results.
[5007] vixra:1302.0132 [pdf]
On the Orbits of the Magnetized Kepler Problems in Dimension $2k+1$
It is demonstrated that, for the recently introduced classical magnetized Kepler problems in dimension $2k+1$, the non-colliding orbits in the ``external configuration space'' $\mathbb R^{2k+1}\setminus\{\mathbf 0\}$ are all conics; moreover, a conic orbit is an ellipse, a parabola, or a branch of a hyperbola according as the total energy is negative, zero, or positive. It is also demonstrated that the Lie group $\mathrm{SO}^+(1,2k+1)\times \mathbb{R}_+$ acts transitively on both the set of oriented elliptic orbits and the set of oriented parabolic orbits.
[5008] vixra:1302.0129 [pdf]
A New Basis for Cosmology Without Dark Energy
It is shown that small quantum effects of the author's model of low-energy quantum gravity open the possibility of another interpretation of cosmological observations, without an expansion of the Universe and without dark energy.
[5009] vixra:1302.0113 [pdf]
Comments on the Recent Experiments by the Group of Michael Persinger
Michael Persinger's group reports three very interesting experimental findings related to EEG, magnetic fields, photon emissions from the brain, and macroscopic quantum coherence. The findings also provide support for the proposal of Hu and Wu that nerve pulse activity could induce spin flips of spin networks assignable to the cell membrane. In this article I analyze the experiments from the TGD point of view. It turns out that the experiments provide support for several TGD inspired ideas about living matter: magnetic flux quanta as generators of macroscopic quantum entanglement; dark matter as a hierarchy of macroscopic quantum phases with large effective Planck constant; the DNA-cell membrane system as a topological quantum computer, with nucleotides and lipids connected by magnetic flux tubes whose ends are assignable to phosphate containing molecules; and the proposal that "dark" nuclei consisting of dark proton strings could provide a representation of the genetic code. The proposal of Hu and Wu translates to the assumption that the lipids of the two layers of the cell membrane are accompanied by dark protons which arrange themselves into dark protonic strings defining a dark analog of the DNA double strand.
[5010] vixra:1302.0112 [pdf]
Robert Kiehn's Ideas About Falaco Solitons and Generation of Turbulent Wake from TGD Perspective
I have been reading two highly interesting articles by Robert Kiehn. There are very many points of contact with the TGD inspired vision and its open interpretational problems. The notion of the Falaco soliton has a surprisingly close resemblance to the K\"ahler magnetic flux tubes defining fundamental structures in the TGD Universe. Fermionic strings are also fundamental structures of TGD accompanying magnetic flux tubes, and this supports the vision that these string like objects could allow the reduction of various condensed matter phenomena, such as sound waves, usually regarded as emergent phenomena allowing only a highly phenomenological description, to the fundamental microscopic level in the TGD framework. This can be seen as the basic outcome of this article. Kiehn proposed a new description for the generation of various instability patterns of hydrodynamic flows (Kelvin-Helmholtz and Rayleigh-Taylor instabilities) in terms of hyperbolic dynamics, so that a connection with wave phenomena like interference and diffraction would emerge. The role of characteristic surfaces as surfaces of tangential and also normal discontinuities is central to the approach. In the TGD framework the analogs of characteristic surfaces are light-like wormhole throats at which the signature of the induced 4-metric changes, and these surfaces indeed define boundaries of two phases and of material objects in general. This inspires a more detailed comparison of Kiehn's approach with TGD.
[5011] vixra:1302.0111 [pdf]
The Most Recent Indications for Anomalies from TGD Perspective
Some of the most recent experimental indications for anomalies in astrophysics, cosmology, and particle physics are briefly discussed, with an interpretation based on basic predictions of TGD.
[5012] vixra:1302.0110 [pdf]
What Could the P-Adic Icosahedron Mean? and What About the P-Adic Manifold?
The original focus of this article was the p-adic icosahedron. The discussion of an attempt to define this notion however leads to the challenge of defining the concept of the p-adic sphere, and more generally, that of the p-adic manifold, and this problem soon became the main target of attention since it is one of the key challenges also of TGD. There exist two basic philosophies concerning the construction of both real and p-adic manifolds: the algebraic and the topological approach. Also in TGD these approaches have been competing: the algebraic approach relates real and p-adic space-time points by identifying the common rationals. A finite pinary cutoff is however required to achieve continuity and has an interpretation in terms of finite measurement resolution. Canonical identification maps p-adics to reals and vice versa in a continuous manner but is not consistent with p-adic analyticity nor with the field equations unless one poses a pinary cutoff. It seems that a pinary cutoff reflecting the notion of finite measurement resolution is necessary in both approaches. This represents a new notion from the point of view of mathematics. a) One can try to generalize the theory of real manifolds to the p-adic context. The basic problem is that p-adic balls are either disjoint or nested, so that the usual construction by gluing partially overlapping spheres fails. This leads to the notion of the Berkovich disk, obtained as a completion of the p-adic disk, having a path connected (non-ultrametric) topology and containing the p-adic disk as a dense subset. This, plus the complexity of the construction, is a heavy price to be paid for path-connectedness. A related notion is the Bruhat-Tits tree, defining a kind of skeleton making the p-adic manifold path connected. The notion makes sense for the p-adic counterparts of projective spaces, which suggests that p-adic projective spaces (S<sup>2</sup> and CP<sub>2</sub> in the TGD framework) are physically very special. 
b) The second approach is algebraic and restricts the consideration to algebraic varieties for which also topological invariants have algebraic counterparts. This approach looks very natural in the TGD framework, at least for the imbedding space. Preferred extremals of Kähler action can be characterized purely algebraically, even in a manner independent of the action principle, so that they might make sense also p-adically. Number theoretical universality is a central element of TGD. Physical considerations force one to generalize the number concept by gluing reals and various p-adic number fields along rationals and possible common algebraic numbers. This idea makes sense also at the level of space-time and of the "world of classical worlds" (WCW). Algebraic continuation between different number fields is the key notion. Algebraic continuation between the real and p-adic sectors takes place along their intersection, which at the level of WCW corresponds to surfaces allowing interpretation both as real and as p-adic surfaces for some value(s) of the prime p. The algebraic continuation from the intersection of the real and p-adic WCWs is not possible for all p-adic number fields. For instance, real integrals as functions of parameters need not make sense for all p-adic number fields. This apparent mathematical weakness can however be turned into physical strength: real space-time surfaces assignable to elementary particles can correspond to only some particular p-adic primes. This would explain why elementary particles are characterized by preferred p-adic primes. The p-adic prime determining the mass scale of the elementary particle could be fixed number theoretically rather than by some dynamical principle formulated in the real context (the number theoretic anatomy of a rational number does not depend smoothly on its real magnitude!). 
Although the Berkovich construction of the p-adic disk does not look promising in the TGD framework, it suggests that the difficulty posed by the total disconnectedness of p-adic topology is real. TGD in turn suggests that the difficulty could be overcome without the completion to a non-ultrametric topology. Two approaches emerge, which ought to be equivalent. a) The TGD inspired solution to the construction of a path connected effective p-adic topology is based on the notion of canonical identification mapping reals to p-adics and vice versa in a continuous manner. The trivial but striking observation was that canonical identification satisfies the triangle inequality and thus defines an Archimedean norm allowing one to induce the real topology in the p-adic context. Canonical identification with finite measurement resolution defines chart maps from p-adics to reals and vice versa, and the preferred extremal property allows one to complete the discrete image to a space-time surface, hopefully unique within the finite measurement resolution, so that the topological and algebraic approaches are combined. Finite resolution would become part of the manifold theory. p-Adic manifold theory would also have an interpretation in terms of cognitive representations as maps between realities and p-adicities. b) One can ask whether the physical content of path connectedness could also be formulated as a quantum physical rather than primarily topological notion, and could boil down to the non-triviality of correlation functions for second quantized induced spinor fields, essential for the formulation of the WCW spinor structure. Fermion fields and their n-point functions could become part of a number theoretically universal definition of a manifold, in accordance with the TGD inspired vision that WCW geometry, and perhaps even space-time geometry, allows a formulation in terms of fermions. This option is a mere conjecture whereas the first one is on a rigorous basis.
[5013] vixra:1302.0109 [pdf]
Matter-Antimatter Asymmetry, Baryo-Genesis, Lepto-Genesis, and TGD
The generation of matter-antimatter asymmetry remains poorly understood. The same is true of the generation of matter. In the TGD framework the generation of matter can be explained in terms of cosmic strings carrying dark energy identified as Kähler magnetic energy. Their decay to ordinary and dark matter would be the analog of the decay of the inflaton field to matter, and matter-antimatter asymmetry would be generated in this process. The details of the process have not been considered hitherto. The attempt to see whether counterparts of the visions about lepto-genesis from right-handed inert neutrinos, baryo-genesis from leptons, and generation of matter-antimatter asymmetry, claimed to be possible in the standard model framework, could make sense in TGD led to a much more detailed vision about how primordial cosmic strings carrying only right handed neutrinos could decay to ordinary matter. It also turned out that the "official" version of TGD, in which quarks and leptons correspond to different chiralities of imbedding space spinors, is enough to achieve matter-antimatter asymmetry.
[5014] vixra:1302.0108 [pdf]
Exploring the Possibilities of Discovering Properties of the Higgs Boson Via Its Interactions in the Solar Environment
Experiments at the LHC (Large Hadron Collider) at CERN have recently announced the discovery of the Higgs boson. They are shying away from calling it the Higgs boson until its properties have been measured. Due to the difficulties of measuring the Higgs boson's properties at CERN, which are exacerbated by the LHC shutdown, we consider the possibilities of measuring Higgs properties elsewhere. Our analyses focus on the prospects of Higgs measurements from the Sun, but our conclusions are probably applicable to other Sun-like objects, such as stars.
[5015] vixra:1302.0105 [pdf]
Is the Field of Numbers a Real Physical Field? On the Frequency Distribution and Masses of the Elementary Particles
Frequency distributions of databases of numerical values obtained by running algorithms which describe physical and other processes give a possibility for bounding the probability of the results the algorithms produce. In the frequency distribution of fractions of integers (rational numbers), local maxima which meet the ratios of masses of the elementary particles have been found.
[5016] vixra:1302.0103 [pdf]
The Extended Relativity Theory in Clifford Phase Spaces and Modifications of Gravity at the Planck/hubble Scales
We extend the construction of Born's Reciprocal Phase Space Relativity to the case of Clifford spaces which involve the use of $polyvectors$ and a $lower/upper$ length scale. We present the generalized polyvector-valued velocity and acceleration/force boosts in Clifford phase spaces and find an $explicit$ Clifford algebraic realization of the velocity and acceleration/force boosts. Finally, we provide a Clifford phase-space gravitational theory based on gauging the generalization of the Quaplectic group and invoking Born's reciprocity principle between coordinates and momenta (maximal velocity, the speed of light, and maximal force). The generalized gravitational vacuum field equations are explicitly displayed. We conclude with a brief discussion on the role of higher-order Finsler geometry in the construction of extended relativity theories with an upper and lower bound on the higher order accelerations (associated with the higher order tangent and cotangent spaces). We explain how to find the procedure that will allow us to find the $n$-ary analog of the Quaplectic group transformations, which will now mix the $X, P, Q, \ldots$ coordinates of the higher order tangent (cotangent) spaces in this extended relativity theory based on Born's reciprocal gravity and $n$-ary algebraic structures.
[5017] vixra:1302.0092 [pdf]
A Theoretical Solution for Ventricular Septal Defects And Pulmonary Vein Stenosis
Ventricular Septal Defects (VSD) and Pulmonary Vein Stenosis (PVS) are both normally non-life-threatening problems for survivors of early childhood. However, they can be a large hindrance to many patients who want a normal life. With this proposed solution, patients should be able to achieve a life mostly free of problems. Hopefully, only regular check-ups will be required after the initial treatment.
[5018] vixra:1302.0091 [pdf]
On Dialectics of Physical Systems, Schrodinger Equation and Collapse of the Wave Function: Critical Behaviour for Quantum Computing
This paper is intended to show that the Schrodinger equation, within its structure, allows the manifestation of wave function collapse in a very natural way of reasoning. In fact, as we will see, nothing new must be added to classical quantum mechanics; only the dialectics of the physical world must be interpreted in a correct manner. The nature of a physical system turns out to be quantum or classical, and, given the validity of the Schrodinger equation to provide the evolution of this physical system, the dialectics, quantum or classical, mutually exclusive, must also be in context through the Schrodinger equation; these issues are within the main scope of this paper. We will show that a classical measurement, the obtaining of a classical result, emerges from the structure of the Schrodinger equation once one demands the possibility that, over a chronological domain, the system begins to provide a classical dialectic, showing that the collapse may be understood both from the structure of the Schrodinger equation and from the general solution to this equation. The general solution, even with a dialectical change of description, leads to the conservation of probability, obeying the Schrodinger equation. These issues turn out to be a consequence of a general potential energy operator, obtained in this paper, that includes the possibility of a classical description of the physical system, including the possibility of interpreting the collapse of the quantum mechanical state vector within the scope of the Schrodinger equation.
[5019] vixra:1302.0088 [pdf]
Indefinite Summation
This paper about indefinite summation describes a natural approach to discrete calculus. Two natural operators for discrete difference and summation are defined. They preserve symmetry and show a duality in contrast to the classical operators. Several summation and differentiation algorithms will be presented.
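For orientation, the classical (non-symmetric) operators the paper contrasts itself with can be sketched as follows; the paper's own symmetric, duality-preserving operators differ, so treat this only as the baseline being improved upon. The forward difference $\Delta$ and the indefinite sum are mutually inverse in the sense $\Delta(\Sigma f) = f$.

```python
def delta(f):
    # classical forward difference: (delta f)(n) = f(n+1) - f(n)
    return lambda n: f(n + 1) - f(n)

def indef_sum(f):
    # indefinite sum F with F(0) = 0 and delta(F) = f
    return lambda n: sum(f(k) for k in range(n))

# delta inverts indef_sum: summing f(k) = k and then differencing
# recovers f at every point
F = indef_sum(lambda k: k)
```

The classical pair above is asymmetric (forward-shifted); the duality the paper emphasizes is precisely what this baseline lacks.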
[5020] vixra:1302.0080 [pdf]
Theory of Colorless and Electrically Neutral Quarks: Neutrino-like Quarks
The theory of colored and electrically charged gauge bosons introduced by the author postulates the existence of colorless and electrically neutral quarks, which play the same role in decay processes as neutrinos. Here we discuss these colorless and electrically neutral quarks.
[5021] vixra:1302.0070 [pdf]
Octonion in Superstring Theory
In this paper, we have made an attempt to discuss the role of octonions in superstring theory (i.e. a theory of everything describing the unification of all four types of forces, namely gravitational, electromagnetic, weak and strong), where we have described the octonion representation of superstring (SS) theory as the combination of four complex (C) spaces, namely those associated with the gravitational (G-space), electromagnetic (EM-space), weak (W-space) and strong (S-space) interactions. We have discussed the octonionic differential operator, the octonionic valued potential wave equation, the octonionic field equation and various other quantum equations of superstring theory in a simpler, compact and consistent manner. Consequently, the generalized Dirac-Maxwell’s equations are studied within the purview of superstring theory by means of octonions.
[5022] vixra:1302.0060 [pdf]
The Natario Warp Drive Using Lorentz Boosts According to the Harold White Spacetime Metric Potential $\theta$.
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive, discovered in $1994$, and the Natario warp drive, discovered in $2001$. The warp drive seems very attractive because it allows interstellar space travel at arbitrarily large speeds, avoiding the time dilation and mass increase paradoxes of Special Relativity. However, it suffers from a very serious drawback: interstellar space is not empty. It is filled with photons and particle dust, and a ship at superluminal speeds would impact these obstacles in highly energetic collisions, disrupting the warp field and placing the astronauts in danger. This was pointed out by a great number of authors, such as Clark, Hiscock, Larson, McMonigal, Lewis, O'Byrne, Barcelo, Finazzi and Liberati. \newline In order to travel significant interstellar distances in reasonable amounts of time, a ship would need to attain $200$ times the speed of light, but according to Clark, Hiscock and Larson the impact between the ship and a single photon of the Cosmic Background Radiation (COBE) would release an amount of energy equal to the photosphere of a star like the Sun. And how many such photons are there per cubic centimeter of space between Earth and Gliese 581, a star at $20$ light-years with potentially habitable exoplanets? This serious problem seems to have no solution at first sight. \newline However, some years ago Harold White from NASA appeared with an idea that may well solve this problem: according to him, the ship never surpasses the speed of light, but the warp field generates a Lorentz boost resulting in an apparent superluminal speed as seen by the astronauts on board the ship and by observers on Earth, while the warp bubble always remains below the light speed, with the ability to manoeuvre against these obstacles, avoiding the lethal collisions. 
\newline Harold White applied his conclusions to the Alcubierre warp drive. \newline In this work we examine the feasibility of White's idea for the Natario warp drive using clear mathematical arguments, and we arrive at the conclusion that Harold White's line of reasoning is correct and can be applied to the Natario geometry.
[5023] vixra:1302.0042 [pdf]
Colored and Electrically Charged Gauge Bosons and Their Related Quarks
We propose a model of baryon and lepton number conserving interactions in which the two states of a quark, a colored and electrically charged state and a colorless and electrically neutral state, can transform into each other through the emission or absorption of a colored and electrically charged gauge boson. A novel feature of the model is that the colorless and electrically neutral quarks carry away the missing energy in decay processes as do neutrinos.
[5024] vixra:1302.0037 [pdf]
Quantum Structure
The logical structure of the standard model is isomorphic to the geometric structure of the modified cosmological model (MCM). We introduce a new particle representation scheme and show that it is invariant under CPT. In this representation spin arises as an ordinary physical process. The final character of the Higgs boson is predicted. Wavefunction collapse, the symmetry (anti-symmetry) of the wavefunction and some recent experimental results are discussed.
[5025] vixra:1302.0032 [pdf]
Declining Relative Humidity in the Thompson River Valley near Kamloops, British Columbia: 1990-2012
Potential time trends in relative humidity (RH) were investigated for the Kamloops climate station in south-central British Columbia, Canada, between 1990 and 2012. Mean monthly 6 am and 3 pm RH at Kamloops achieve annual minima during the March to September period with substantially higher early morning RH compared to the mid-afternoon period. Significant temporal declines in RH throughout the year are evident ranging from 1.5 to 5.7\%/decade. No significantly increasing temporal trends in RH were found. The findings indicate that a continuation of declining trends in RH for the study area may increase the quantity of dust and other atmospheric particulate generation from both natural and anthropogenic sources, possibly resulting in additional threats to local and regional air quality, thereby necessitating inclusion in air quality management planning and modeling efforts.
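The decadal trend figures quoted above come from fitting a linear time trend to the RH series. The sketch below is my own minimal ordinary-least-squares illustration, not the study's actual statistical procedure (which also involves significance testing); the synthetic data are invented for demonstration.

```python
def trend_per_decade(years, values):
    """Ordinary-least-squares slope of values on years, in units per decade."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    slope = (sum((y - my) * (v - mv) for y, v in zip(years, values))
             / sum((y - my) ** 2 for y in years))
    return 10.0 * slope

# Synthetic RH series declining 0.3 %/year over the 1990-2012 study window
years = list(range(1990, 2013))
values = [60.0 - 0.3 * (y - 1990) for y in years]
```

On this noiseless series the estimator recovers exactly -3.0 %/decade, in the middle of the study's reported 1.5 to 5.7 %/decade range of declines.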
[5026] vixra:1302.0027 [pdf]
On the K-Mer Frequency Spectra of Organism Genome and Proteome Sequences with a Preliminary Machine Learning Assessment of Prime Predictability
A regular expression and region-specific filtering system for biological records at the National Center for Biotechnology database is integrated into an object-oriented sequence counting application, and a statistical software suite is designed and deployed to interpret the resulting k-mer frequencies---with a priority focus on nullomers. The proteome k-mer frequency spectra of ten model organisms and the genome k-mer frequency spectra of two bacteria and virus strains for the coding and non-coding regions are comparatively scrutinized. We observe that the naturally-evolved (NCBI/organism) and the artificially-biased (randomly-generated) sequences exhibit a clear deviation from the artificially-unbiased (randomly-generated) histogram distributions. Furthermore, a preliminary assessment of prime predictability is conducted on chronologically ordered NCBI genome snapshots over an 18-month period using an artificial neural network; three distinct supervised machine learning algorithms are used to train and test the system on customized NCBI data sets to forecast future prime states---revealing that, to a modest degree, it is feasible to make such predictions.
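The core counting step behind such spectra is compact. The sketch below is a generic k-mer counter with nullomer detection, my own minimal illustration rather than the paper's NCBI-filtering pipeline; the toy sequence is invented.

```python
from collections import Counter
from itertools import product

def kmer_spectrum(seq, k, alphabet="ACGT"):
    """Count overlapping k-mers in seq and list nullomers (absent k-mers)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    nullomers = [kmer for kmer in map("".join, product(alphabet, repeat=k))
                 if kmer not in counts]
    return counts, nullomers

# "ACGTACGT" has 7 overlapping 2-mers; 4 distinct ones occur,
# leaving 12 of the 16 possible 2-mers as nullomers
counts, nullomers = kmer_spectrum("ACGTACGT", k=2)
```

A frequency spectrum in the paper's sense is then the histogram of these counts across all observed k-mers.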
[5027] vixra:1302.0013 [pdf]
Octonion Dark Matter
In this paper, we have made an attempt to discuss the role of octonions in gravity and dark matter, where we have described the octonion space as the combination of two quaternionic spaces, namely the gravitational G-space and the electromagnetic EM-space. It is shown that octonionic hot dark matter contains the photon and graviton (i.e. massless particles) while octonionic cold dark matter is associated with the W and Z (massive) bosons.
[5028] vixra:1302.0008 [pdf]
Octonion Electrodynamics in Isotropic and Chiral Medium
Starting with the Dirac-Maxwell equations in the presence of electric and magnetic sources in an isotropic medium of dyons, we have derived the generalized octonion Maxwell equations in an isotropic medium. The octonion formulation of generalized electromagnetic fields in a chiral medium has also been developed in a compact, simple and consistent manner.
[5029] vixra:1302.0006 [pdf]
Folding a Pattern
We propose a reorganisation of the standard model particles and their mesons in order to build supersymmetric multiplets. The presentation is open to improvements in choosing the adequate candidates in each recombination.
[5030] vixra:1302.0001 [pdf]
Split Octonion Electrodynamics and Energy-Momentum Conservation Laws for Dyons
Starting with the usual definitions of octonions and split octonions in terms of the Zorn vector matrix realization, we have made an attempt to write a consistent form of the generalized Maxwell equations in the presence of electric and magnetic charges. We have thus written the generalized split octonion potential wave equations and the generalized field equations of dyons in split octonions. Accordingly, the split octonion forms of the generalized Dirac-Maxwell equations are obtained in a compact and consistent manner. We have also made an attempt to investigate the work-energy theorem or “Poynting theorem”, the Maxwell stress tensor, and the Lorentz invariant for generalized fields of dyons in split octonion electrodynamics. Our theory of dyons in the split octonion formulation is discussed in simple and compact notation. This theory reproduces the dynamics of electric (magnetic) charges in the absence of magnetic (electric) charges.
[5031] vixra:1301.0175 [pdf]
Some New Wedge Products
Quarks are described mathematically by (3 x 3) matrices. To include these quarkonian mathematical structures in Geometric Algebra, it is helpful to restate Geometric Algebra in the mathematical language of (3 x 3) matrices. It will be shown in this paper how (3 x 3) permutation matrices can be interpreted as unit vectors. Special emphasis will be given to the definition of some wedge products which fit this algebra of (3 x 3) matrices better than the usual Geometric Algebra wedge product. And as S3 permutation symmetry is flavour symmetry, a unified flavour picture of Geometric Algebra will emerge.
[5032] vixra:1301.0160 [pdf]
Horizon Problem Resolution
We present a model that offers a resolution to the Horizon Problem of cosmology and eliminates the need for Inflation. It also suggests a possible new origin for the Cosmic Microwave Background Radiation. In addition, this model eliminates the need to invoke Dark Energy and Dark Matter to explain the accelerated expansion of the universe. In essence, it implies that there is no accelerated expansion by fitting the model to Type Ia Supernovae and Gamma Ray Burst data with a reduced chi-square (goodness-of-fit) of 0.99, using only the Hubble constant as a parameter.
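The goodness-of-fit figure quoted above is the standard reduced chi-square statistic. As a reminder of the statistic only (this is not the paper's fitting code, and the toy numbers below are placeholders):

```python
import numpy as np

def reduced_chi_square(observed, predicted, sigma, n_params):
    """Reduced chi-square: chi^2 per degree of freedom, where
    dof = number of data points minus number of fitted parameters.
    A value near 1 means the model fits the data within its errors."""
    observed, predicted, sigma = map(np.asarray, (observed, predicted, sigma))
    chi2 = np.sum(((observed - predicted) / sigma) ** 2)
    return chi2 / (len(observed) - n_params)
```

With a single fitted parameter (here, the Hubble constant), `n_params = 1` and the degrees of freedom are one less than the number of supernova and GRB data points.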
[5033] vixra:1301.0151 [pdf]
Temperature and Precipitation Trends in Southwestern Saskatchewan Tell a Complex Long-Term Climate Change Story
Historical climate trends in southwestern Saskatchewan, Canada were analyzed using parametric linear regression and non-parametric Mann-Kendall trend detection approaches over various timeframes between 1886 and 2010. We find substantial variability for this region in the significance and magnitude of any temporal trends for daily maximum, minimum, and mean temperatures on an annual basis - as well as during the winter, spring, summer, and autumn periods - that is dependent on the time period chosen. Similar results are obtained for precipitation data in the study area. The results demonstrate that temperature and precipitation trends in southwestern Saskatchewan tell a complex long-term climate change story, containing substantial temporal trend heterogeneity, thereby necessitating caution when interpreting long-term climatic data - particularly in the context of larger-scale regional or global observations and predictions.
[5034] vixra:1301.0122 [pdf]
A String Model for Preons and the Standard Model Particles
A preon model for standard model particles is proposed based on spin 1/2 fermion and spin 0 boson constituents. The preons are quantum mechanical strings. Implications to the number of generations, heavy boson states, and dark matter are briefly considered.
[5035] vixra:1301.0120 [pdf]
Preliminary Study in Healthy Subjects of Arm Movement Speed
Many clinical studies have shown that the arm movement of patients with neurological injury is often slow. In this paper, the speed analysis of arm movement is presented, with the aim of evaluating arm movement automatically using a Kinect camera. The consideration of arm movement appears trivial at first glance, but in reality it is a very complex neural and biomechanical process that can potentially be used for detecting a neurological disorder. This is a preliminary study, on healthy subjects, which investigates three different arm-movement speeds: fast, medium and slow. With a sample size of 27 subjects, our developed algorithm is able to classify the three different speed classes (slow, normal, and fast) with an overall error of 5.43% for interclass speed classification and 0.49% for intraclass classification. This is the first step towards enabling future studies that investigate abnormality in arm movement, via use of a Kinect camera.
[5036] vixra:1301.0116 [pdf]
Matrix Transformation and Solutions of Wave Equation of Free Electromagnetic Field
In this paper, the generalized differential wave equation for the free electromagnetic field is transformed and formulated by means of matrices. Then the Maxwell wave equation and the second form of the wave equation are deduced by matrix transformation. The solutions of the wave equations are discussed. Finally, two differential equations of vibration are established and their solutions are discussed.
[5037] vixra:1301.0101 [pdf]
Octonion Electrodynamics and Physics Beyond Standard Model
Historical developments of the standard model and physics beyond the standard model are summarized in this thesis to understand the behavior of monopoles and dyons in current grand unified theories and the quark confinement problems relevant for their production and detection. On the other hand, the various roles of the four division algebras (namely the algebras of real numbers, complex numbers, quaternions and octonions) in different branches of physics and mathematics are also summarized, followed by a summary of the work done in the different chapters of the present thesis.
[5038] vixra:1301.0094 [pdf]
British Columbia's Carbon Tax: Greenhouse Gas Emission and Economic Trends Since Introduction
In 2008, the Canadian province of British Columbia introduced a carbon tax starting at CAD$10 per tonne of carbon dioxide equivalent (CO2e) and rising by CAD$5/tonne CO2e/year to a 2012-2013 value of CAD$30/tonne CO2e. In the current work, we find no clear evidence over the short post-tax period of record that unequivocally links British Columbia's carbon tax to significant reductions in provincial greenhouse gas emissions. There are indications the implementation of this tax may have negatively impacted British Columbia's economic performance relative to the rest of Canada. A longer post-tax period of record is likely necessary in order to reliably determine what, if any, economic and environmental effects have been generated from British Columbia's carbon tax.
[5039] vixra:1301.0078 [pdf]
Riemann Zeros Quantum Chaos Functional Determinants and Trace Formulae
We study the relation between the Gutzwiller trace formula for a dynamical system and the Riemann-Weil trace formula for the Riemann zeros. Using the Bohr-Sommerfeld quantization condition and the fractional calculus, we obtain a method to define a potential implicitly; we apply this method to define a Hamiltonian whose energies are the squares of the imaginary parts of the Riemann zeros. We also show that for large x the potential is very close to an exponential function. In this paper, for simplicity, we work in normalized units. Keywords: Riemann Hypothesis, WKB semiclassical approximation, Gutzwiller trace formula, Bohr-Sommerfeld quantization, exponential potential.
[5040] vixra:1301.0073 [pdf]
The Decline of Global Per Capita Renewable Internal Freshwater Resources
Supplies of per capita renewable internal freshwater resources are declining at alarming rates around the globe, necessitating efforts to better manage population growth and the use and distribution of freshwater. All major geographic regions saw substantial reductions in per capita renewable internal freshwater supplies between 1962 and 2011. Over this period, the global per capita freshwater stock declined by 54%, with decreases of 75% in Sub-Saharan Africa, 71% in the Middle East and North Africa, 64% in South Asia, 61% in Latin America and the Caribbean, 52% in East Asia and the Pacific, and 41% in North America. At current rates of depletion, global per capita renewable internal freshwater resources are projected to decline by 65% compared to 1962 values before stabilizing, with regional variation ranging from 60% in East Asia and the Pacific to 86% in the Middle East and North Africa. Sub-Saharan Africa is predicted to reach a negative per capita renewable internal freshwater balance by the year 2120. Per capita renewable internal freshwater resources are declining more rapidly in low income countries than in their middle and high income counterparts. All countries except Hungary and Bulgaria experienced declines in their per capita renewable internal freshwater supply between 1962 and 2011. Most countries (55%) experienced a decline of between 60% and 80% in per capita renewable internal freshwater resources over this period. The majority of nations are projected to maintain positive per capita renewable internal freshwater balances under steady-state conditions, although overall declines of between 80% and almost 100% from 1962 levels are dominant (~52% of all countries). A group of 28 nations is projected to reach zero per capita internal freshwater resources within the near future. African countries dominate the list of nations projected to reach zero per capita internal freshwater resources, comprising 16 of the 28 countries - of which six are landlocked.
A further group of 25 nations have data records that are too short, and recent population dynamics that are generally too complex, for reliable trend extrapolation. Close attention will need to be paid to the per capita renewable internal freshwater resource trends for these countries over the coming decades in order to obtain a better understanding of their resource depletion rates.
[5041] vixra:1301.0069 [pdf]
The Partitioning of Disparlure Between Hydrophobic Organic Solvents and Water
The partitioning behavior of disparlure ((7R,8S)-7,8-epoxy-2-methyloctadecane) - a sex pheromone of the gypsy moth, Lymantria dispar - between aqueous solutions and the organic solvents chloroform and n-heptane has been re-evaluated. Prior estimates from the literature of the aqueous-organic solvent partitioning coefficients (log P) for disparlure in these two solvent systems appear to have been underestimated by about 5-6 orders of magnitude. In the current work, we provide corrected log P(chloroform/water) and log P(heptane/water) values for disparlure of 9.87 and 9.15, respectively.
[5042] vixra:1301.0068 [pdf]
Low Post-Secondary Tuitions in Canada Are not a Wealth Transfer from the Poor to the Rich
Between 2007/2008 and 2012/2013, inflation adjusted undergraduate tuition fees for full-time Canadian students increased significantly in all disciplines. All disciplines except dentistry also exhibited substantial increases in inflation adjusted graduate tuition fees for full-time Canadian students over this period. In contrast to prior claims in the literature, we show that low tuition rates in the Canadian post-secondary system do not redistribute wealth from the poor to the rich. For each dollar of taxpayer derived financial support going into the Canadian college and university system, the wealthiest families paid almost the entire amount. Consequently, it appears that regardless of current or proposed tuition rates, the Canadian post-secondary system is a wealth transfer from the rich to the poor.
[5043] vixra:1301.0061 [pdf]
Are Photons Massless or Massive?
Prevailing and conventional wisdom, as drawn from both Professor Einstein's Special Theory of Relativity and our palpable experience, holds that photons are massless particles and that every particle that travels at the speed of light must, accordingly, be massless. Amongst other important but now resolved problems in physics, this assumption led to the Neutrino Mass Problem -- namely, ``Do neutrinos have mass?'' Neutrinos appear very strongly to travel at the speed of light and, according to the afore-stated, they must be massless. Massless neutrinos have a problem in that one is unable to explain the phenomenon of neutrino oscillations, because this requires massive neutrinos. Experiments appear to strongly suggest that neutrinos most certainly are massive particles. While this solves the problem of neutrino oscillation, it directly leads to another problem, namely that of ``How can a massive particle travel at the speed of light? Is not this speed a preserve and prerogative of only massless particles?'' We argue herein that, in principle, it is possible for massive particles to travel at the speed of light. In presenting the present letter, our hope is that this may aid or contribute significantly in solving the said problem of ``How can massive particles travel at the speed of light?''
[5044] vixra:1301.0058 [pdf]
Revisiting QRS Detection Methodologies for Portable, Wearable, Battery-Operated, and Wireless ECG Systems
Cardiovascular diseases are the number one cause of death worldwide. Currently, portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used in continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help unveil and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited for an implementation on a mobile device. We investigate current QRS detection algorithms based on three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, in order to target a universal fast-robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or amid particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by future recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices.
[5045] vixra:1301.0057 [pdf]
Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases
The current state-of-the-art in automatic QRS detection methods show high robustness and almost negligible error rates. In return, the methods are usually based on machine-learning approaches that require sufficient computational resources. However, simple, fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. A typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design.
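The two-moving-average scheme described above can be sketched as follows. This is an illustrative reconstruction, not the paper's calibrated implementation: the window lengths `w_qrs`, `w_beat` and the offset factor `beta` are assumed placeholder values, not the knowledge-base parameters.

```python
import numpy as np

def detect_qrs(ecg, fs, w_qrs=0.097, w_beat=0.611, beta=0.08):
    """Sketch of a two-moving-average QRS detector: square the signal to
    emphasize large excursions, compare a short (QRS-width) moving average
    against a longer (beat-width) one plus an offset, and take the maximum
    inside each block of interest as the R-peak location."""
    squared = ecg ** 2

    def moving_avg(x, w):
        n = max(1, int(w * fs))
        return np.convolve(x, np.ones(n) / n, mode="same")

    ma_qrs = moving_avg(squared, w_qrs)    # window ~ one QRS complex
    ma_beat = moving_avg(squared, w_beat)  # window ~ one heartbeat
    threshold = ma_beat + beta * squared.mean()
    blocks = ma_qrs > threshold            # candidate QRS regions

    peaks, i = [], 0
    while i < len(blocks):
        if blocks[i]:
            j = i
            while j < len(blocks) and blocks[j]:
                j += 1
            peaks.append(i + int(np.argmax(squared[i:j])))  # R peak in block
            i = j
        else:
            i += 1
    return peaks
```

On a clean signal with isolated spikes this returns the spike locations; real ECG would additionally need band-pass filtering before squaring.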
[5046] vixra:1301.0056 [pdf]
Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves
Background: There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Methods: Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). Results: The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). Conclusions: We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design.
[5047] vixra:1301.0055 [pdf]
Can Heart Rate Variability (HRV) be Determined Using Short-Term Photoplethysmograms?
To date, there have been no studies that investigate the independent use of the photoplethysmogram (PPG) signal to determine heart rate variability (HRV). However, researchers have demonstrated that PPG signals offer an alternative way of measuring HRV when electrocardiogram (ECG) and PPG signals are collected simultaneously. Based on these findings, we take the use of PPGs to the next step and investigate a different approach to show the potential independent use of short 20-second PPG signals collected from healthy subjects after exercise in a hot environment to measure HRV. Our hypothesis is that if the PPG–HRV indices are negatively correlated with age, then short PPG signals are appropriate measurements for extracting HRV parameters. The PPGs of 27 healthy male volunteers at rest and after exercise were used to determine the HRV indices: the standard deviation of heartbeat intervals (SDNN) and the root-mean square of the difference of successive heartbeats (RMSSD). The results indicate that the use of the aa interval, derived from the acceleration of PPG signals, is promising in determining the HRV statistical indices SDNN and RMSSD over 20-second PPG recordings. Moreover, the post-exercise SDNN index shows a negative correlation with age. There tends to be a decrease of the PPG–SDNN index with increasing age, whether at rest or after exercise. This new outcome validates the negative relationship between HRV in general and age, and consequently provides further evidence that short PPG signals have the potential to be used in heart rate analysis without the need to measure lengthy sequences of either ECG or PPG signals.
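The two HRV indices named above have standard definitions and are easy to state in code. A minimal sketch (not the paper's pipeline), assuming beat occurrence times are given in seconds and indices are reported in milliseconds:

```python
import numpy as np

def hrv_indices(beat_times):
    """Compute SDNN and RMSSD from a list of beat occurrence times (s).
    Beat-to-beat intervals are successive differences; SDNN is their
    (sample) standard deviation, RMSSD is the root mean square of
    successive interval differences. Both are returned in milliseconds."""
    intervals = np.diff(np.asarray(beat_times)) * 1000.0  # ms
    sdnn = np.std(intervals, ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(intervals) ** 2))
    return sdnn, rmssd
```

For a 20-second PPG recording, `beat_times` would be the detected aa-interval fiducial points; a perfectly regular rhythm yields SDNN = RMSSD = 0.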
[5048] vixra:1301.0054 [pdf]
Detection of c, d, and e Waves in the Acceleration Photoplethysmogram
Analyzing the acceleration photoplethysmogram (APG) is becoming increasingly important for diagnosis. However, processing an APG signal is challenging, especially if the goal is to detect its small components (c, d, and e waves). Accurate detection of c, d, and e waves is an important first step for any clinical analysis of APG signals. In this paper, a novel algorithm that can detect c, d, and e waves simultaneously in APG signals of healthy subjects that have low amplitude waves, contain fast rhythm heart beats, and suffer from non-stationary effects was developed. The performance of the proposed method was tested on 27 records collected during rest, resulting in 97.39% sensitivity and 99.82% positive predictivity.
[5049] vixra:1301.0053 [pdf]
Detection of a and B Waves in the Acceleration Photoplethysmogram
Background: Analyzing acceleration photoplethysmogram (APG) signals measured after exercise is challenging. In this paper, a novel algorithm that can detect a waves and consequently b waves under these conditions is proposed. Accurate a and b wave detection is an important first step for the assessment of arterial stiffness and other cardiovascular parameters. Methods: Nine algorithms based on fixed thresholding are compared, and a new algorithm is introduced to improve the detection rate using a testing set of heat-stressed APG signals containing a total of 1,540 heart beats. Results: The new a detection algorithm demonstrates the highest overall detection accuracy (99.78% sensitivity, 100% positive predictivity) over signals that suffer from 1) non-stationary effects, 2) irregular heartbeats, and 3) low amplitude waves. In addition, the proposed b detection algorithm achieved an overall sensitivity of 99.78% and a positive predictivity of 99.95%. Conclusions: The proposed algorithm presents an advantage for real-time applications by avoiding human intervention in threshold determination.
[5050] vixra:1301.0046 [pdf]
Laurent-Nottale Arms: NGC 5921
Friends: Whenever feeling happy, sad, or in danger, remember two things. First, scientific experiments continue to show that human experience is derived from the creation of natural structure. For example, love, disease, and mental worries are the origin of the creation of microscopic structure. Second, the products of human creation are based on the natural one. That is, the products are the second creation upon the natural one. For example, the toys which children are fond of are a creation based on Earth's resources and deposits, while the meaning the toys transfer originates from human feelings for the motion of natural structure. However, scientists do not know what the origin of the natural creation is. Laurent Nottale is the first person in human history who gave a fundamental explanation of the natural creation of the Solar system, based on his theory of Scale Relativity. This paper is a study on the natural creation of galaxies. For a planar distribution of matter, Jin He and Bo He defined Darwin curves on the plane such that the ratio of the matter densities at both sides of the curve is constant along the curve. Therefore, the arms of ordinary spiral galaxies are Darwin curves. Now an important question facing humans is: Are the arms of barred spiral galaxies Darwin curves too? Fortunately, Dr. Jin He made a piece of Galaxy Anatomy Graphic Software (www.galaxyanatomy.com). With the software, not only can people simulate the stellar density distribution of barred spiral galaxies but they can also draw the Darwin curves of the simulated galaxy structure. Therefore, if Dr. Jin He's idea is true then people all over the world will witness the evidence that the arms of barred spiral galaxies are identical to the corresponding Darwin curves. This paper shows partial evidence that the arms of galaxy NGC 5921 follow Darwin curves.
[5051] vixra:1301.0044 [pdf]
Formulas for Various Summations of Quark Masses in Relation to the Mass of the Electron and Quantum Dimensionless Length Derived from the Fine Structure Constant, Seven Dimensions and Twenty-Six.
In this paper, we present several equations that generate the ratio of the sum of the roots of the masses of the quarks to the mass of the electron, and likewise the ratio of the sum of the masses of the quarks to the mass of the electron. Both equations depend exclusively, and in a very simple and logical way, on the quantum dimensionless length derived from the fine structure constant at zero momentum; on lengths in seven dimensions, the number of quarks, and the three colour charges of the group SU(3).
[5052] vixra:1301.0032 [pdf]
Geometric Cosmology
The modified cosmological model (MCM) is explored in the context of general relativity. A flaw in the ADM positive-definiteness theorem is identified. We present an exposition of the relationship between Einstein's equations and the precessing classical oscillator. Kaluza theory is applied to the MCM and we find a logical motivation for the cylinder condition which leads to a simple mechanism for AdS/CFT.
[5053] vixra:1301.0029 [pdf]
Shifting the Balance of Global Economic Power: the Sinosphere in Ascension Towards Dominance
The Sinosphere is in ascension towards global economic dominance on both total and per capita bases, marking a fundamental shift in the balance of global economic and military power that is taking place absent any robust structural democratic and human rights reforms in this region. In contrast to comparisons during the 1980s of Japan potentially overtaking the United States as the world's largest economy, both purchasing power parity (PPP) and current United States dollar GDP metrics consistently project that China's gross domestic product (GDP) will exceed that of the United States sometime between 2015 and 2020. The Sinosphere's GDP-PPP passed that of The Commonwealth (including India) in 2011, The Commonwealth (excluding India) in 2005, the Francosphere member states in 2003, the Francosphere member and observer states in 2009 - subsequently widening the gap in all cases - and is predicted to surpass that of the Anglosphere by the early 2020s. China's military spending now exceeds that of all other nations bordering the East and South China Seas combined and the gap is widening rapidly. At current rates of increase, China's military expenditures may surpass those of the United States within the next decade. On a per capita basis, China's GDP-PPP is expected to overtake that of the United States and Canada by the early to mid-2030s, whereas Russia and the EU are projected to be surpassed by China in per capita GDP-PPP by the late 2020s.
[5054] vixra:1301.0024 [pdf]
CloudSVM : Training an SVM Classifier in Cloud Computing Systems
In conventional distributed machine learning methods, distributed support vector machine (SVM) algorithms are trained over pre-configured intranet/internet environments to find an optimal classifier. These methods are very complicated and costly for large datasets. Hence, we propose a method, referred to as the Cloud SVM training mechanism (CloudSVM), in a cloud computing environment with the MapReduce technique for distributed machine learning applications. Accordingly, (i) the SVM algorithm is trained in distributed cloud storage servers that work concurrently; (ii) all support vectors are merged in every trained cloud node; and (iii) these two steps are iterated until the SVM converges to the optimal classifier function. A single computer is incapable of training the SVM algorithm with large scale data sets. The results of this study are important for the training of large scale data sets for machine learning applications. We show that iterative training of split data sets in a cloud computing environment using SVM will converge to a global optimal classifier in a finite number of iterations.
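Steps (i)-(iii) can be sketched on a single machine with an off-the-shelf SVM standing in for the MapReduce machinery. This is an illustrative reconstruction only: the splitting scheme, node count, degenerate-split guard, and convergence test below are assumptions, not the paper's implementation (scikit-learn's `SVC` is assumed available).

```python
import numpy as np
from sklearn.svm import SVC

def cloud_svm(X, y, n_nodes=4, max_iter=10, seed=0):
    """Sketch of iterative distributed SVM training: each 'node' trains on
    its own split, the support vectors from all nodes are merged, and the
    process repeats on the merged set until the support vector count stops
    changing (or max_iter is reached)."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), n_nodes)
    prev_n_sv = -1
    for _ in range(max_iter):
        sv_X, sv_y = [], []
        for p in parts:                               # "map": one SVM per node
            if len(np.unique(y[p])) < 2:              # degenerate split: pass through
                sv_X.append(X[p]); sv_y.append(y[p])
                continue
            clf = SVC(kernel="linear").fit(X[p], y[p])
            sv_X.append(X[p][clf.support_])           # keep only support vectors
            sv_y.append(y[p][clf.support_])
        X, y = np.vstack(sv_X), np.concatenate(sv_y)  # "reduce": merge SVs
        if len(X) == prev_n_sv:                       # converged: SV set stable
            break
        prev_n_sv = len(X)
        parts = np.array_split(rng.permutation(len(X)), n_nodes)
    return SVC(kernel="linear").fit(X, y)             # final global classifier
```

The key property motivating the method is that the merged support vector set shrinks or stabilizes each round, so the loop terminates in a finite number of iterations.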
[5055] vixra:1301.0021 [pdf]
Number Systems Based On Logical Calculus
The reference \cite{Ref1} denotes number systems with a logical calculus, but the forms of natural numbers are not consistent in these number systems. So we rewrite the number systems to correct the defect.
[5056] vixra:1301.0015 [pdf]
Simple Formulas that Generate the Quark Masses
In this paper we present very simple formulas that generate the quark masses as direct functions of the sine and cosine of the Cabibbo angle. The accuracy of the results is very high in relation to the latest experimental values.
[5057] vixra:1301.0012 [pdf]
Lorentz Transformation and The Euclidian Space
Based on the Euclidian concepts of distance and velocity, we propose a thought experiment which shows that if the clocks carried by two observers in uniform linear motion don't indicate the same time, their relative velocities will necessarily be different.
[5058] vixra:1301.0009 [pdf]
The Gravitational Energy of the Universe
The gravitational energy, momentum, and stress are calculated for the Robertson-Walker metric. The principle of energy conservation is applied, in conjunction with the Friedmann equations. Together, they show that the cosmological constant is non-zero, the curvature index k = 0, and the acceleration is positive. It is shown that the gravitational field accounts for two-thirds of the energy in the Universe. Keywords: dark energy = gravitational energy
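For reference, the Friedmann equations invoked above take the standard textbook form for the Robertson-Walker metric (standard notation; the paper's own conventions for the cosmological constant and curvature index may differ):

```latex
% Friedmann equations for scale factor a(t), density \rho, pressure p,
% curvature index k, and cosmological constant \Lambda:
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot a}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
```

The abstract's claims (nonzero \Lambda, k = 0, positive acceleration) are statements about the terms on the right-hand sides of these two equations.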
[5059] vixra:1301.0008 [pdf]
Unifying the Galilei and the Special Relativity
We present two models combining some aspects of the Galilei and the Special Relativity that lead to a unification of both relativities. This unification is founded on a reinterpretation of the absolute time of the Galilei relativity, which is considered as a quantity in its own right and not as a mere reinterpretation of the time of the Special relativity in the limit of low velocity. In the first model, the Galilei relativity plays a prominent role in the sense that the basic kinematical laws of Special relativity, e.g. the Lorentz transformation and the velocity law, follow from the corresponding Galilei transformations for the position and velocity. This first model also provides a new way of conceiving the nature of relativistic spacetime, where the Lorentz transformation is induced by the Galilei transformation through an embedding of 3-dimensional Euclidean space into hyperplanes of 4-dimensional Euclidean space. This idea provides the starting point for the development of a second model that leads to a generalization of the Lorentz transformation, which includes, as particular cases, the standard Lorentz transformation and transformations that apply to the case of superluminal frames.
[5060] vixra:1301.0006 [pdf]
Evolution of Stellar Objects According to J.Wheeler’s Geometrodynamic Concept
The proposed model is based on J.Wheeler’s geometrodynamic concept, in which space continuum is considered as a topologically non-unitary coherent surface admitting the existence of transitions of the input-output kind between distant regions of the space in an additional dimension. The existence of closed structures (macrocontours) formed at the expense of interbalance of gravitational, electric, magnetic and inertial forces has been substantiated. It is such macrocontours that have been demonstrated to form — independently of their material basis — the essential structure of stellar objects (SO) and to determine the position of these objects on the Hertzsprung-Russell diagram. Models of the characteristic types of stellar objects: stars and compact bodies emerging in the end of stellar evolution — have been presented, and their standard parameters at different stages of evolution have been calculated. The existence of the Hertzsprung-Russell diagram has been substantiated, and its computational analogue has been given. Parallels between stellar and microcosmic objects are drawn.
[5061] vixra:1301.0005 [pdf]
Philip-Gibbs Arms: NGC 4548
It may be true that mankind's hope is the identification of the living meaning of natural structures. However, scientists including physicists, chemists, and biologists have not found any evidence of the meaning. In the natural world, there exists one kind of structure which is beyond the scope of human laboratory experiment. It is the structure of galaxies. Spiral galaxies are flat and disk-shaped. There are two types of spiral galaxies. The spiral galaxies with some bar-shaped pattern are called barred spirals, and the ones without the pattern are called ordinary spirals. Longer-wavelength galaxy images (infrared, for example) show that ordinary spiral galaxies are basically an axi-symmetric disk that is called an exponential disk. For a planar distribution of matter, Jin He and Bo He defined Darwin curves on the plane such that the ratio of the matter densities at both sides of the curve is constant along the curve. Therefore, the arms of ordinary spiral galaxies are Darwin curves. Now an important question facing humans is: Are the arms of barred spiral galaxies Darwin curves too? Fortunately, Dr. Jin He made a piece of Galaxy Anatomy Graphic Software (www.galaxyanatomy.com). With the software, not only can people simulate the stellar density distribution of barred spiral galaxies but they can also draw the Darwin curves of the simulated galaxy structure. Therefore, if Dr. Jin He's idea is true then people all over the world will witness the evidence that the arms of barred spiral galaxies are identical to the corresponding Darwin curves. This paper shows partial evidence that the arms of galaxy NGC 4548 follow Darwin curves. Note: Dr. Philip Gibbs is the founder of viXra.org. Jin He has been jobless and denied any possibility of a postdoc position since 2005. Jin He has been rejected by arXiv.org and even by PhysicsForums.com since 2006 and 2007 respectively. Philip Gibbs's eprint archive is the only channel through which Jin He can connect to the human world.
[5062] vixra:1301.0004 [pdf]
On the Gravitational Bending of Light -- Was Sir Professor Dr. Arthur Stanley Eddington Right?
The paramount British-led May 29, 1919 solar eclipse result of Eddington et al. has had a tremendous, if not arcane, effect in persuading scientists, philosophers and the general public to accept Einstein's esoteric General Theory of Relativity (GTR), thereby ``deserting'' Newtonian gravitation altogether, especially in physical domains of extreme gravitation where Einstein's GTR is thought or believed to reign supreme. The all-crucial factor ``2'' predicted by Einstein's GTR has been ``verified'' by subsequent measurements, most impressively by the precision modern technology of VLBA measurements using cosmological radio waves, to within 99.998% accuracy. Working from within the most well accepted provinces, confines and domains of Newtonian gravitational theory, we demonstrate herein that in a Newtonian gravitational theory in which the inertial and gravitational masses of photons retain their separate identities, the resulting theory is very much compatible with all measurements made of the gravitational bending of light. Actually, this approach posits that these measurements of the gravitational bending of light not only confirm the gravitational bending of electromagnetic waves, but that, on a much subtler level and rather clandestinely, they are in actual fact a measurement of the gravitational to inertial mass ratio of photons. According to the present thesis, the significant 20% scatter seen in the measurements where white starlight is used implies that the gravitational to inertial mass ratio of photons may very well be a variable quantity, such that for radio waves this quantity must, to within 99.998% accuracy, be unity. We strongly believe that the findings of the present reading demonstrate, or hint at, a much deeper reality: that the gravitational and inertial mass may, after all, not be equal as we have come to strongly believe.
With great prudence, it is safe to say that this rather disturbing (perhaps exciting) conclusion, if correct, may direct us to closely re-examine the validity of Einstein's central tenet, the embellished Equivalence Principle (EP), which stands as the strongest and most complete embodiment of the foundational basis of Einstein's beautiful and celebrated GTR.
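The factor ``2'' at issue above compares two standard formulas: the Newtonian corpuscular deflection 2GM/(c²b) and the GTR prediction 4GM/(c²b), which at the solar limb give about 0.87 and 1.75 arcseconds respectively. A quick numerical check of these textbook values (an editor's sketch, not part of the paper):

```python
import math

G_M_SUN = 1.32712440018e20  # standard gravitational parameter GM of the Sun, m^3/s^2
R_SUN   = 6.957e8           # nominal solar radius, m
C       = 2.99792458e8      # speed of light, m/s
ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi

def newtonian_deflection_arcsec(b):
    """Newtonian deflection 2GM/(c^2 b) of a light corpuscle, impact parameter b."""
    return 2.0 * G_M_SUN / (C**2 * b) * ARCSEC_PER_RAD

def gtr_deflection_arcsec(b):
    """General-relativistic deflection 4GM/(c^2 b), twice the Newtonian value."""
    return 4.0 * G_M_SUN / (C**2 * b) * ARCSEC_PER_RAD

newt = newtonian_deflection_arcsec(R_SUN)  # ~0.88 arcsec
gtr  = gtr_deflection_arcsec(R_SUN)        # ~1.75 arcsec
```

The ratio of the two is exactly 2 by construction, which is the quantity the eclipse and VLBA measurements probe.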
[5063] vixra:1301.0003 [pdf]
Do Geometric Invariants of Preferred Extremals Define Topological Invariants of Space-Time Surface and Code for Quantum Physics?
The recent progress in the understanding of preferred extremals of Kaehler action leads to the conclusion that they satisfy Einstein-Maxwell equations with a cosmological term, with Newton's constant and the cosmological constant predicted to have a spectrum. One particular implication is that preferred extremals have a constant value of the Ricci scalar. The implications of this are expected to be very powerful, since it is known that D>2-dimensional manifolds allow a constant curvature metric, with volume and other geometric invariants serving as topological invariants. Also the possibly discrete generalization of the Ricci flow, which plays a key role in manifold topology, to a Maxwell flow is very natural, and the connections with the geometric description of dissipation, self-organization, transition to chaos, and also with coupling constant evolution are highly suggestive. A further fascinating possibility inspired by quantum classical correspondence is quantum ergodicity (QE): the statistical geometric properties of preferred extremals code for various correlation functions of zero energy states defined as their superpositions, so that any preferred extremal in the superposition would serve as a representative of the zero energy state. QE would make it possible to deduce correlation functions and the S-matrix from the properties of a single preferred extremal.
[5064] vixra:1301.0002 [pdf]
The Recent TGD Inspired View about Higgs
The existence of Higgs and its identification have been a continual source of headache in the TGD framework. The vision which looks most plausible at this moment is rather conservative in the sense that it assumes that the standard description of massivation using Higgs in the QFT framework is the only possible one: if TGD has a QFT limit, then Higgs provides a phenomenological parametrization of particle masses, providing a mimicry of the microscopic description relying on p-adic thermodynamics. The anomalies related to Higgs are however still there. A new explanatory piece in the puzzle is M_{89} hadron physics. The gamma ray background from the decays of M_{89} pions could explain the anomalous decay rate to gamma pairs and the problems related to the determination of the Higgs mass. It could also explain the production of highly correlated charged particle pairs, observed first at RHIC for colliding heavy ions and two years ago at LHC for proton heavy-ion collisions, as decay products of string-like objects of M_{89} hadron physics; the observations of the Fermi satellite; and maybe even the latest Christmas rumour suggesting the existence of charge 2 states decaying to lepton pairs, by identifying them as leptomesons formed from two color octet muons and produced via intermediate parallel gluon pairs in the decay of M_{89} mesonic strings to ordinary hadrons and leptons.
[5065] vixra:1301.0001 [pdf]
Could N=2 or N=4 SYM be a Part of TGD?
Whether right-handed neutrinos generate a supersymmetry in TGD has been a long-standing open question. N=1 SUSY is certainly excluded by fermion number conservation, but already N=2, defining a "complexification" of N=1 SUSY, is possible and could be generated by the right-handed neutrino and its antiparticle. These states should however possess a non-vanishing light-like momentum, since the fully covariantly constant right-handed neutrino generates zero norm states. So-called massless extremals (MEs) allow massless solutions of the modified Dirac equation for the right-handed neutrino in the interior of the space-time surface, and this seems to be the case quite generally in Minkowskian signature for preferred extremals. This suggests that a particle represented as a magnetic flux tube structure with two wormhole contacts sliced between two MEs could serve as a starting point in attempts to understand the role of right-handed neutrinos and how N=2 or N=4 SYM emerges at the level of space-time geometry. The following arguments, inspired by the article of Nima Arkani-Hamed et al. about twistorial scattering amplitudes, suggest a more detailed physical interpretation of the possible SUSY associated with the right-handed neutrinos. The fact that right-handed neutrinos have only gravitational interaction suggests a radical re-interpretation of SUSY: no SUSY breaking is needed, since it is very difficult to distinguish between mass-degenerate spartners of ordinary particles. In order to distinguish between different spartners one must be able to compare the gravitomagnetic energies of spartners in a slowly varying external gravitomagnetic field: this effect is extremely small.
[5066] vixra:1212.0173 [pdf]
Some Previous and Elementary Considerations on the Schrodinger Equation and on the Collapse of the Wave Function
This paper is intended to show that the Schrödinger equation, within its structure, allows the manifestation of wave function collapse in a very natural way of reasoning. In fact, as we will see, nothing new must be inserted into classical quantum mechanics; only the dialectics of the physical world must be interpreted in a correct manner. We know the nature of a physical system turns out to be quantum or classical, and, once the Schrödinger equation is valid for the evolution of this physical system, the dialectics, quantum or classical, mutually exclusive, must also be within context through the Schrödinger equation; these issues are within the main scope of this paper. We will show that a classical measurement, the obtaining of a classical result, emerges from the structure of the Schrödinger equation once one demands the possibility that, over a chronological domain, the system begins to provide a classical dialectic, showing that the collapse may be understood both from the structure of the Schrödinger equation and from the general solution to this equation. The general solution, even with a dialectical change of description, leads to the conservation of probability, obeying the Schrödinger equation. These issues turn out to be a consequence of a general potential energy operator, obtained in this paper, which includes the possibility of a classical description of the physical system and of an interpretation of the collapse of the quantum mechanical state vector within the scope of the Schrödinger equation.
[5067] vixra:1212.0171 [pdf]
Time Trends for Water Levels in Lake Athabasca, Canada
Potential time trends for water levels in Lake Athabasca, Canada, were investigated, with particular emphasis on a critical examination of the available hydrometric record and other confounding factors militating against reliable trend detection on this system. Four hydrometric stations are available on Lake Athabasca, but only the Lake Athabasca near Crackingstone Point (07MC003) site has suitable, albeit temporally limited (1960-2010), records for a rigorous time series analysis of annual water levels. The examination presented herein provides evidence that the 2010 lake level dataset at 07MC003 is flawed and should not be included in any trend analyses. With the conclusion that 2010 lake levels on Lake Athabasca at station 07MC003 are erroneous, lake level time series regressions over various timeframes between 1960 and 2009 yield widely varying degrees of non-significance and slope magnitude/direction. As a further confounding factor against mechanistic time trend analyses of water levels on Lake Athabasca, a dam and rockfill weirs were constructed on the lake outlets during the 1970s in order to maintain elevated lake levels. Thus, the entire time series of lake levels on Lake Athabasca since the filling of the reservoir behind the W.A.C. Bennett Dam (Lake Williston) began in 1968 can be described as experiencing substantial anthropogenic modification. Collectively, these influences, including problems in the hydrometric record, appear to impact the annual lake level record sufficiently as to prevent reliable trend analyses that unequivocally isolate natural factors such as climate change or any other anthropogenic factors that may be operative in the source watersheds.
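The sensitivity of trend estimates to a single suspect year, of the kind described above for the 2010 datum at 07MC003, is easy to demonstrate with ordinary least squares. The sketch below uses synthetic lake-level data (illustrative values only, not the actual 07MC003 record) and a stdlib-only OLS fit:

```python
import math, random

def ols_trend(years, levels):
    """Least-squares slope (m/yr), its standard error, and the t statistic
    for H0: slope = 0, computed by ordinary least squares."""
    n = len(years)
    my, ml = sum(years) / n, sum(levels) / n
    sxx = sum((y - my) ** 2 for y in years)
    sxy = sum((y - my) * (l - ml) for y, l in zip(years, levels))
    slope = sxy / sxx
    intercept = ml - slope * my
    resid = [l - (intercept + slope * y) for y, l in zip(years, levels)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se = math.sqrt(s2 / sxx)
    return slope, se, slope / se

# Illustrative series: an essentially flat record plus one anomalously
# low final value, showing how a single flawed year can manufacture a
# downward "trend" at the end of the series.
random.seed(0)
years = list(range(1960, 2010))
levels = [209.0 + random.gauss(0, 0.15) for _ in years]
slope_clean, *_ = ols_trend(years, levels)
slope_flawed, *_ = ols_trend(years + [2010], levels + [206.0])
```

Adding the low end-point necessarily pulls the fitted slope downward, which is why the paper's exclusion of the 2010 datum changes the regression results so markedly.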
[5068] vixra:1212.0169 [pdf]
Integrability of Maxwell's Equations Part II
In this article I pick up with [4] and [3] and show that the mathematical relations of quantum mechanics derive from classical electrodynamics, albeit without the use of the principle of indeterminacy.
[5069] vixra:1212.0164 [pdf]
Discrete Structure of Spacetime
In this paper, I introduce a particular discrete spacetime that should be seriously considered as part of physics, because it allows one to explain the characteristics of motion properly, contrary to what happens with the continuous spacetime of the common conception.
[5070] vixra:1212.0163 [pdf]
A Realist Interpretation of Quantum Mechanics that rules out Determinism
We indicate how pursuit of a realist interpretation of Quantum Mechanics, starting from a simple and plausible physical principle together with established Quantum Mechanics, leads to a physical picture that is almost as counter-intuitive, but which, if true, would among other things confirm that the quest for a deterministic model of Quantum Mechanics is doomed to failure.
[5071] vixra:1212.0160 [pdf]
Sakaji-Licata Arms: NGC 3275
It may be true that mankind's hope is the identification of the living meaning of natural structures. However, scientists, including physicists, chemists, and biologists, have not found any evidence of such meaning. In the natural world there exists one kind of structure which is beyond the scope of human laboratory experiment: the structure of galaxies. Spiral galaxies are flat and disk-shaped. There are two types of spiral galaxies: those with a bar-shaped pattern are called barred spirals, and those without it are called ordinary spirals. Longer-wavelength galaxy images (infrared, for example) show that an ordinary spiral galaxy is basically an axisymmetric disk called an exponential disk. For a planar distribution of matter, Jin He and Bo He defined Darwin curves as curves along which the ratio of the matter densities on the two sides of the curve is constant. In this sense the arms of ordinary spiral galaxies are Darwin curves. An important question facing humans is therefore: are the arms of barred spiral galaxies Darwin curves too? Fortunately, Dr. Jin He has developed the Galaxy Anatomy graphic software (www.galaxyanatomy.com). With the software, people can not only simulate the stellar density distribution of barred spiral galaxies but also draw the Darwin curves of the simulated galaxy structure. Therefore, if Dr. Jin He's idea is true, people all over the world will witness evidence that the arms of barred spiral galaxies coincide with the corresponding Darwin curves. This paper shows partial evidence that the arms of galaxy NGC 3275 follow Darwin curves. Note: Ammar Sakaji and Ignazio Licata are the founder and the editor-in-chief of the Electronic Journal of Theoretical Physics. Over fifty journals of astronomy and physics had rejected Dr. Jin He's core article on galaxy structure before 2010. It is Sakaji and Licata's journal that accepted the article.
[5072] vixra:1212.0154 [pdf]
Evidence for Increasingly Extreme and Variable Drought Conditions in the Contiguous United States Between 1895 and 2012
Potential annual (January-December) and summertime (June-August) regional time trends and increasingly extreme and/or variable values of Palmer-based drought indices were investigated over the contiguous United States (US) between 1895 and the present. Although there has been no significant change in the annual or summertime Palmer Drought Severity Index (PDSI), Palmer Hydrological Drought Index (PHDI), or Palmer Modified Drought Index (PMDI) for the contiguous US over this time frame, there is clear evidence of decreasing drought conditions in the eastern US (northeast, east north central, central, and southeast climate zones) and increasing drought conditions in the west climate region (California and Nevada). No significant time trends were found in the annual or summertime PDSI, PHDI, and PMDI for the spring and winter wheat belts and the cotton belt. The corn and soybean belts have significant increasing trends in both the annual and summertime PDSI, PHDI, and PMDI, indicating a tendency towards reduced drought conditions over time. Clear trends exist toward increasingly extreme (dry or wet) annual PDSI, PHDI, and PMDI values in the northeast, east north central, central, northwest, and west climate regions. The northeast, northwest, and west climate zones display significant temporal trends for increasingly extreme PDSI, PHDI, and PMDI values during the summertime. Trends toward increasingly variable annual and summertime drought index values are also apparent in the northeast, southwest, northwest, and west climate zones.
[5073] vixra:1212.0147 [pdf]
Mathematical Theory of Magnetic Field
The study of magnetic fields produced by steady currents is a full-valued physical theory which, like any other physical theory, employs a certain mathematics. This theory has two limiting cases, in which the source of the field is confined to a surface or to a curve. It turns out that the mathematical methods to be used in these cases are completely different, and differ from that of the main part of this theory; so magnetostatics actually consists of three distinct theories. In this work, these three theories are discussed with special attention to the case of current carried by a curve. In this case the source serves as a model of a thin wire carrying direct current, and therefore this theory can be termed magnetostatics of thin wires. The only mathematical method used in this theory until now is the method of Green's functions. A critical analysis of this method, completed in this work, shows that application of this method to the equation for the vector potential of a given current density has no foundation, and its application yields erroneous results.
[5074] vixra:1212.0145 [pdf]
A New Look at the Position Operator in Quantum Theory
The postulate that coordinate and momentum representations are related to each other by the Fourier transform has been accepted from the beginning of quantum theory by analogy with classical electrodynamics. As a consequence, an inevitable effect in standard theory is the wave packet spreading (WPS) of the photon coordinate wave function in directions perpendicular to the photon momentum. This leads to several paradoxes. The most striking of them is that coordinate wave functions of photons emitted by stars have cosmic sizes and strong arguments indicate that this contradicts observational data. We argue that the above postulate is based neither on strong theoretical arguments nor on experimental data and propose a new consistent definition of the position operator. Then WPS in directions perpendicular to the particle momentum is absent and the paradoxes are resolved. Different components of the new position operator do not commute with each other and, as a consequence, there is no wave function in coordinate representation. Implications of the results for entanglement, quantum locality and the problem of time in quantum theory are discussed.
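The WPS effect discussed above has a textbook quantitative form for a free nonrelativistic massive particle (the paper's photon case differs): a Gaussian packet of initial width σ0 spreads as σ(t) = σ0·sqrt(1 + (ħt/2mσ0²)²). As an editor's sketch of that standard result, one can compute how quickly an electron packet doubles its width:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg

def sigma(t, sigma0, m):
    """Width of a free nonrelativistic Gaussian wave packet after time t:
    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2)."""
    return sigma0 * math.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0**2)) ** 2)

def doubling_time(sigma0, m):
    """Time for the width to double, solved from sigma(t) = 2*sigma0,
    i.e. hbar*t / (2*m*sigma0**2) = sqrt(3)."""
    return 2.0 * m * sigma0**2 * math.sqrt(3.0) / HBAR

s0 = 1e-9                      # 1 nm initial width
t2 = doubling_time(s0, M_E)    # a few tens of femtoseconds
```

The quadratic growth of σ(t) at large t is the spreading whose transverse analogue for photons the paper argues leads to paradoxes.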
[5075] vixra:1212.0143 [pdf]
Do Sovereign Wealth Funds Effectively Dampen Exchange Rate Variability?
Sovereign wealth funds (SWFs) are receiving significant attention from nations with substantial and sustained foreign reserves derived via natural resource development and/or manufacturing based export-led economies as a means of achieving intergenerational equity, government savings, and stable currency exchange rates. Based on an analysis of currency variability for representative export-led nations with and without SWFs between 1999 and 2012, the case for SWF-based currency sterilization requires further investigation. Furthermore, several nations undergoing active policy debates regarding the possible implementation of SWFs may not have current account balances suitable for accruing all perceived SWF benefits.
[5076] vixra:1212.0139 [pdf]
The Alcubierre Warp Drive Using Lorentz Boosts According to the Harold White Spacetime Metric Potential $\theta$
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. The first of these solutions was discovered by the Mexican mathematician Miguel Alcubierre in 1994. The Alcubierre warp drive seems very attractive because it allows interstellar space travel at arbitrarily large speeds, avoiding the time dilation and mass increase paradoxes of Special Relativity. However, it suffers from a very serious drawback: interstellar space is not empty. It is filled with photons and particle dust, and a ship at superluminal speed would impact these obstacles in highly energetic collisions, disrupting the warp field and placing the astronauts in danger. This was pointed out by a great number of authors, such as Clark, Hiscock, Larson, McMonigal, Lewis, O'Byrne, Barcelo, Finazzi and Liberati. In order to travel significant interstellar distances in reasonable amounts of time, a ship would need to attain 200 times the speed of light, but according to Clark, Hiscock and Larson the impact between the ship and a single photon of the cosmic microwave background (CMB) radiation would release an amount of energy equal to the photosphere of a star like the Sun. And how many such photons are there per cubic centimeter of space? This serious problem seems to have no solution at first sight. However, some years ago Harold White from NASA proposed an idea that may well solve this problem: according to him, the ship never surpasses the speed of light, but the warp field generates a Lorentz boost resulting in an apparent superluminal speed as seen by the astronauts on board the ship and on Earth, while the warp bubble always remains below the light speed, with the ability to manoeuvre against these obstacles and avoid the lethal collisions. In this work we examine the feasibility of White's idea using clear mathematical arguments, and we arrive at the conclusion that Harold White is correct.
[5077] vixra:1212.0134 [pdf]
Waves in a Dispersive Exponential Half-Space
Maxwell's equations for electromagnetic waves propagating in dispersive media are studied as they are, without the commonplace substitution of a scalar function for the electromagnetic field. A method of separation of variables for the original system of equations is proposed. It is shown that in the case of planar symmetry the variables separate in Cartesian and cylindrical coordinate systems, and the Maxwell equations reduce to a one-dimensional Schrödinger equation. Complete solutions are obtained for waves in a medium with electric permittivity and magnetic permeability given as $\epsilon = e^{-\kappa z}$, $\mu = c^{-2} e^{-\lambda z}$. Keywords: Maxwell equations, dispersive media, complete solutions. PACS numbers: 41.20.Jb, 42.25.Bs
[5078] vixra:1212.0126 [pdf]
Hylomorphic Functions
Philosophers have long pondered the Problem of Universals. One response is Metaphysical Realism, such as Plato's Doctrine of the Forms and Aristotle's Hylomorphism. We postulate that Measurement in Quantum Mechanics forms the basis of Metaphysical Realism. It is the process that gives rise to the instantiation of Universals as Properties, a process we refer to as Hylomorphic Functions. This combines substance metaphysics and process metaphysics by identifying the instantiation of Universals as causally active processes along with physical substance, forming a dualism of both substance and information. Measurements of fundamental properties of matter are the Atomic Universals of metaphysics, which combine to form the whole taxonomy of Universals. We examine this hypothesis in relation to various interpretations of Quantum Mechanics grouped under two exemplars: the Copenhagen Interpretation, a version of Platonic Realism based on wave function collapse, and the Pilot Wave Theory of Bohm and de Broglie, where particle--particle interactions lead to an Aristotelian metaphysics. This view of Universals explains the distinction between pure information and the medium that transmits it and establishes the arrow of time. It also distinguishes between universally true Atomic Facts and the more conditional Inferences based on them. Hylomorphic Functions also provide a distinction between Universals and Tropes based on whether a given Property is a physical process or is based on the qualia of an individual organism. Since the Hylomorphic Functions are causally active, it is possible to suggest experimental tests that can verify this viewpoint of metaphysics.
[5079] vixra:1212.0113 [pdf]
Cosmological Implications of the Casimir Energy Density
In this article, we analyze certain details which are significant in experiments related to the Casimir effect. At the "point of closest approach", where the Casimir force equals the Coulomb force, we can calculate the static energy density. An identical phenomenon occurs in the cosmological H and HeI Rydberg atoms. In spite of the marked contrast between the two scales, by extrapolation using a dynamical expression for these microscopic magnitudes we can obtain the Cosmological Constant. These findings are fascinating since, starting from a specific microscopic empty cavity, we can equate its expansive energy density with the cosmological energy density.
[5080] vixra:1212.0109 [pdf]
Polynomial 3-SAT-Solver
Five <u>different</u> polynomial 3-SAT algorithms, named "Algorithm A/B/C/D[M]/E", are provided:<br> <table border='0'> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v1: "Algorithm A": </td> <td>Obsolete, incorrect. My first attempt. In retrospect, this Algorithm A is just a bounded-width logical resolution and thus no polynomial 3-SAT solver.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v2: "Algorithm B": </td> <td>Obsolete. Never failed in millions of test runs, but I am not sure whether this Algorithm B is really correct. Some time after publishing, I found out that the algorithm keeps too many tuples enabled for some SAT CNFs. Mr. M. Prunescu's paper 'About a surprizing computer program of Matthias Mueller' is about this Algorithm B.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v3: "Algorithm C": </td> <td>Obsolete, incorrect. An attempt to replace the tuples of Algorithm B by single clauses. Fails for some SAT CNF types (e.g. for large pigeon hole problem CNFs).</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v4: "Algorithm D‑1.0": </td> <td>Obsolete. Never failed, solved absolutely everything I ever inputted, and even detected the pigeon hole problem PHP<sub>6</sub> as UNSAT, which would not have been possible if this Algorithm D were just a resolution solver. The problem is that I did not understand Algorithm D completely (I found it through trial and error and noticed that it never failed). The proof of correctness might not be completely satisfying.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v5: "Algorithm D‑1.1": </td> <td>Obsolete. The very same algorithm as v4, but better explained and with a re-written, completely new part of the proof of correctness.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v6: "Algorithm D‑1.1": </td> <td>Obsolete. Some helpful improvements (compared to v5).</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v7: "Algorithm D‑1.2": </td> <td>Obsolete. Paper from May 22nd, 2016.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v8: "Algorithm D‑1.3": </td> <td>Obsolete. Parts of the proof of correctness have been replaced by a completely re-written, more detailed variant.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>v9: "Algorithm D‑1.3": </td> <td>Obsolete. Another part of the proof of correctness has been made more detailed.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>vA: "Algorithm DM‑1.0": </td> <td>Obsolete. Completely re-written document. Still describes Algorithm D, but as short as possible and in mathematical notation.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>vB: "Algorithm DM‑2.0": </td> <td>Obsolete. Heavily revised and extended vA document. A much more precise notation is used (compared to vA) and most formulas are now comprehensively commented and explained. It might be easier to understand for learned readers, while others may prefer v9 (D-1.3).</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>vC: "Algorithm DM‑2.1": </td> <td>Obsolete. Compared to DM-2.0, three substantial extensions have been added: why the algorithm does NOT have the restrictions of a logical resolution, that the polynomial solver correctly solved the pigeon hole problem for n=6 ("PHP<sub>6</sub>"), and what the ideas behind the three rules of the polynomial algorithm are.</td> </tr> <tr> <td style='width:1px;white-space:nowrap;vertical-align:top;'>vD: "Algorithm E‑1.0": </td> <td><u>Please read this paper!</u> This is the newest polynomial solving algorithm, called Algorithm E. Its source code is extremely simple: the main part consists of merely 4 nested loops, and it no longer uses tuples of 3-SAT clauses to save its internal state, but merely single 3-SAT clauses. For the first time it is explained in detail how the polynomial algorithm comes about, so that this time the polynomial algorithm is most widely <i>understood</i>. The algorithm has been extensively tested; the related code and binaries can be downloaded (see below). The algorithm and related paper should be much easier to understand than any of my previous works.</td> </tr> </table> You can also visit www.louis-coder.com/Polynomial_3-SAT_Solver/Polynomial_3-SAT_Solver.htm for the latest updates and news, and the Zip file containing the Windows/Linux demo implementation.
[5081] vixra:1212.0107 [pdf]
The Earth Must be Expanding Globally
Exactly 100 years ago, the German scientist Alfred Lothar Wegener sailed against the prevailing wisdom of his day when he posited that not only have the Earth's continental plates receded from each other over the course of the Earth's history, but that they are currently in a state of motion relative to one another. To explain this, Wegener set forth the hypothesis that the Earth must be expanding as a whole. Wegener's inability to provide an adequate explanation of the forces and energy source responsible for continental drift, and the prevailing belief that the Earth was a rigid solid body, resulted in the acrimonious dismissal of his theories. Today, that the continents are receding from each other is no longer a point of debate but a sacrosanct pillar of modern geology and geophysics. What is debatable is the energy source driving this phenomenon. The expanding Earth hypothesis is currently an idea that is not accepted on a general consensus level; its opponents mercilessly dismiss it as a pseudo or fringe science. Be that as it may, we show herein that from the well-accepted law of conservation of spin angular momentum, together with Stephenson and Morrison's (1995) result that over the last 2700 years or so the length of the Earth's day has undergone a change of about +17.00 microseconds/yr, it invariably follows that the Earth must be expanding radially at a paltry rate of about +0.60 mm/yr. This simple fact automatically moves the expanding Earth hypothesis from the realm of pseudo or fringe science to that of real and ponderable science.
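The quoted +0.60 mm/yr follows from conservation of spin angular momentum alone: with L = Iω, I ∝ MR² and ω = 2π/P, constancy of L gives 2 dR/R = dP/P, i.e. dR/dt = R·(dP/dt)/(2P). A quick check of the arithmetic (an editor's sketch using standard values for the Earth's radius and day length):

```python
R_EARTH = 6.371e6   # mean Earth radius, m
DAY     = 86400.0   # length of day, s
DP_DT   = 17.0e-6   # change in length of day, s/yr (Stephenson & Morrison value)

# L = I*omega with I ~ M*R^2 and omega = 2*pi/P held constant implies
# 2*dR/R = dP/P, hence dR/dt = R * (dP/dt) / (2*P).
dr_dt_m_per_yr  = R_EARTH * DP_DT / (2.0 * DAY)
dr_dt_mm_per_yr = dr_dt_m_per_yr * 1000.0   # ~0.63 mm/yr
```

The result lands at roughly 0.6 mm/yr, matching the rate stated in the abstract.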
[5082] vixra:1212.0100 [pdf]
The Discovery of What? Ten Questions About the Higgs to the Particle Physics Community
2012 seems set to become a year to be celebrated in the high energy physics community. ``As a layman, I would say we have it!'' said CERN Director General Rolf-Dieter Heuer at the press conference on July 4, 2012, announcing the discovery of a footprint of `something' in the LHC proton collision data. Evidently, such a short statement was necessary because the expert's account of the discovery is a long story to tell. As physicists, we are seeking something in between. We would be curious to know whether there are discussions in the community along the lines of our questions; in any case, they do not seem to have got outside so far. Therefore, we would like to invite a broader communication between the particle physics community and the rest of physics.
[5083] vixra:1212.0097 [pdf]
Reassessing Atmospheric Deposition Rates of Polycyclic Aromatic Compounds to the Athabasca River (Alberta, Canada) Watershed from Oil Sands Related Activities
In an earlier study (Kelly et al., PNAS, 2009, 106, 22346-22351), spatial patterns for the concentrations of particulate matter, particulate polycyclic aromatic compounds (PAC), and dissolved PAC in the snowpack around the Syncrude and Suncor upgrader facilities near the oil sands development at Fort McMurray, Alberta, Canada were determined. A reassessment of the datasets employed in this work yields significantly different deposition rates (by up to an order of magnitude) than those reported, and reveals that deposition rate estimates are substantially sensitive to the choice among a range of equally valid regression types. A high degree of uncertainty remains with regard to the quantities of particulate matter and PAC being deposited in the Athabasca River watershed from oil sands related activities.
[5084] vixra:1212.0088 [pdf]
Difficulties of the Set of Natural Numbers
In this article some difficulties are deduced from the set of natural numbers. The demonstrated difficulties suggest that if the set of natural numbers exists it would conflict with the axiom of regularity. As a result, we have the conclusion that the class of natural numbers is not a set but a proper class.
[5085] vixra:1212.0087 [pdf]
The Double-Padlock Problem: is Secure Classical Information Transmission Possible Without Key Exchange?
The idealized Kish-Sethuraman (KS) cipher is theoretically known to offer perfect security through a classical information channel. However, realization of the protocol is hitherto an open problem, as the required mathematical operators have not been identified in the previous literature. A mechanical analogy of this protocol can be seen as sending a message in a box using two padlocks: one locked by the Sender and the other locked by the Receiver, so that theoretically the message remains secure at all times. We seek a mathematical representation of this process, considering that it would be very unusual if there were a physical process with no mathematical description, and indeed we find a solution within a four-dimensional Clifford algebra. The significance of finding a mathematical description of the protocol is that it is a possible step toward a physical realization, with the benefit of increased security at reduced complexity.
[5086] vixra:1212.0076 [pdf]
Macro-Analogies and Gravitation in the Micro-World: Further Elaboration of Wheeler’s Model of Geometrodynamics
The proposed model is based on Wheeler’s geometrodynamics of fluctuating topology and its further elaboration based on new macro-analogies. Micro-particles are considered here as particular oscillating deformations or turbulent structures in non-unitary coherent two-dimensional surfaces. The model uses analogies of the macro-world, includes gravitational forces into consideration, and surmises the existence of closed structures based on the equilibrium of magnetic and gravitational forces, thereby supplementing the Standard Model. This model has perfect inner logic. The following phenomena and notions are thus explained or interpreted: the existence of three generations of elementary particles, quark confinement, “Zitterbewegung”, and supersymmetry. Masses of leptons and quarks are expressed through fundamental constants and calculated in the first approximation. The other parameters, such as the ratio among the masses of the proton, neutron and electron, the size of the proton, its magnetic moment, the gravitational constant, the half-life of the neutron, and the boundary energy of beta-decay, are determined with adequate precision.
[5087] vixra:1212.0066 [pdf]
On the Expanding Earth and Contracting Moon
Exactly 100 years ago, German scientist Alfred Lothar Wegener (1880-1930) sailed against the prevailing wisdom of his day when he posited that not only have the Earth's continental plates receded from each other over the course of the Earth's history, but that they are currently in a state of motion relative to one another. To explain this, Wegener set forth the hypothesis that the Earth must be expanding as a whole. Wegener's inability to provide an adequate explanation of the forces and energy source responsible for continental drift, together with the prevailing belief that the Earth was a rigid solid body, resulted in the acrimonious dismissal of his theories. Today, that the continents are receding from each other is no longer a point of debate but a sacrosanct pillar of modern geophysics. What is debatable is the energy source driving this phenomenon. Herein, we hold that continental drift is a result of the Earth undergoing a secular radial expansion. An expanding Earth hypothesis is currently an idea that is not accepted on a general consensus level. Be that as it may, we show herein that the laws of conservation of angular momentum and energy entail that the Earth must not only expand as a consequence of the secular recession of the Earth-Moon system from the Sun, but invariably that the Moon must contract as well. As a result, the much sought-after energy source driving plate tectonics can (hypothetically) be identified with the energy transfers occurring between the orbital and rotational kinetic energy of the Earth. If our calculations are to be believed -- and we believe they are -- then the Earth must be expanding radially at a paltry rate of about +1.50 mm/yr while the Moon is contracting radially at a relatively high rate of about -410 mm/yr.
[5088] vixra:1212.0049 [pdf]
Wolfgang Pauli and the Fine-Structure Constant
Wolfgang Pauli was influenced by Carl Jung and the Platonism of Arnold Sommerfeld, who introduced the fine-structure constant. Pauli's vision of a World Clock is related to the symbolic form of the Emerald Tablet of Hermes and Plato's geometric allegory otherwise known as the Cosmological Circle attributed to ancient tradition. With this vision Pauli revealed geometric clues to the mystery of the fine-structure constant that determines the strength of the electromagnetic interaction. A Platonic interpretation of the World Clock and the Cosmological Circle provides an explanation that includes the geometric structure of the pineal gland described by the golden ratio. In his experience of archetypal images Pauli encounters the synchronicity of events that contribute to his quest for physical symmetry relevant to the development of quantum electrodynamics.
[5089] vixra:1212.0026 [pdf]
Charge of the Electron, and the Constants of Radiation According to J.A.Wheeler’s Geometrodynamic Model
This study suggests a mechanical interpretation of Wheeler’s model of the charge. According to the suggested interpretation, the oppositely charged particles are connected through the vortical lines of the current, thus creating a closed contour “input-output” whose parameters determine the properties of the charge and spin. Depending on the energetic state of the system, the contour can be structured into units of the second and third order (photons). It is found that, in the framework of this interpretation, the charge is equivalent to the momentum. The numerical value of the unit charge has also been calculated proceeding from this basis. A system of relations connecting the charge to the constants of radiation (the Boltzmann, Wien, and Stefan-Boltzmann constants, and the fine structure constant) has been obtained; this gives a possibility for calculating all these constants through the unit charge.
[5090] vixra:1212.0020 [pdf]
Reduced Total Energy Requirements for the Natario Warp Drive Spacetime using Heaviside Step Functions as Analytical Shape Functions
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive discovered in $1994$ and the Natario warp drive discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress energy momentum tensor (the right side of the Einstein field equations) for the Einstein tensor $G_{00}$ is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the huge negative energy density required to sustain a stable warp bubble configuration. Ford and Pfenning computed this negative energy and concluded that at least $10$ times the mass of the Universe is required to sustain a warp bubble configuration. However, both the Alcubierre and Natario warp drives, as members of the same family of solutions of the Einstein field equations, require the so-called shape functions in order to be mathematically defined. We present in this work two new shape functions for the Natario warp drive spacetime based on the Heaviside step function, one of which allows arbitrary superluminal speeds while keeping the negative energy density at "low" and "affordable" levels. We do not violate any known law of quantum physics and we maintain the original geometry of the Natario warp drive spacetime. We also discuss briefly horizons and infinite Doppler blueshifts.
[5091] vixra:1212.0013 [pdf]
Pedagogical Use of Relativistic Mass at Better Visualization of Special Relativity
Relativistic mass is not incorrect. The main argument against it is that it does not tell us anything more than the relativistic energy does. In this paper it is shown that this is not true, because new aspects of special relativity (SR) can be presented. One reason for this definition is to show a relation between time dilation and relativistic mass. This relation can be further used to present a connection between space-time and matter more clearly, and to show that space-time does not exist without matter. This is an even simpler presentation than the one given by Einstein's general covariance. It therefore opposes the view that SR is only a theory of space-time geometry; rest mass is also needed. The phenomenon of relativistic mass increasing with speed can be used for a gradual transition from Newtonian mechanics to SR. It also shows how relativistic energy can have properties of matter. The postulates used for the definition of SR thus become still clearer, as does the whole derivation of the Lorentz transformation. Such a derivation also gives a more realistic example for the confirmation of Duff's claims.
[5092] vixra:1212.0010 [pdf]
Interview - Proof That the Black Hole Has No Basis in General Relativity or Newtonian Gravitation and Therefore Does Not Exist
This document is the transcript of an interview of me conducted by American scientists who requested me to explain in as simple terms as possible why the black hole does not exist. I provide five proofs, four of which each prove that General Relativity does not predict the black hole, and one which proves that the theoretical Michell-Laplace Dark Body of Newton’s theory of gravitation is not a black hole. The interview is located at this URL: http://www.youtube.com/watch?v=fsWKlNfQwJU
[5093] vixra:1212.0008 [pdf]
Matrix Transformation and Transform the Generalized Wave Equation into the Maxwell Wave Equation
For the free electromagnetic field there are two kinds of wave equation: one is the Maxwell wave equation, the other is the generalized wave equation. In this paper the author uses a matrix transformation to bring the general quadratic form into a diagonal matrix, from which both forms of the wave equation can be obtained: one is the Maxwell wave equation, the other is the second form of the wave equation. In the latter half of the paper the author establishes two further oscillator differential equations.
[5094] vixra:1212.0002 [pdf]
Finding the Fine Structure of the Solutions of Complicate Logical Probabilistic Problems by the Frequent Distributions
The author suggests that frequency distributions can be applied to modelling the influence of stochastically perturbing factors on physical processes and situations, in order to look for the most probable numerical values of the parameters of complicated systems. In this way, very visual spectra of partially undetermined complex problems have been obtained. These spectra allow one to predict the probabilistic behaviour of the system.
[5095] vixra:1211.0164 [pdf]
Color and Isospin Waves from Tetrahedral Shubnikov Groups
This note supplements a recent article \cite{lamm} in which it was pointed out that the observed spectrum of quarks and leptons can arise as quasi-particle excitations in a discrete internal space. The paper concentrated on internal vibrational modes and it was only noted in the end that internal spin waves ('mignons') might do the same job. Here it will be shown how the mignon-mechanism works in detail. In particular the Shubnikov group $A_4 + S ( S_4 - A_4)$ will be used to describe the spectrum, and the mignetic ground state is explicitly given.
[5096] vixra:1211.0162 [pdf]
On the Independent Determination of the Ultimate Density of Physical Vacuum
In this paper, we attempt to present physical vacuum as a topologically non-unitary coherent surface. This representation follows from J. A. Wheeler's idea about fluctuating topology, and provides a possibility to express some parameters of the unit space element through the fundamental constants. As a result, we determined the ultimate density of physical vacuum without the use of Hubble's constant.
[5097] vixra:1211.0141 [pdf]
On the Mass-Energy and Charge-Energy Equivalences
As particles originating from point-like entities are associated with infinite self-energies, a postulate that the scalar potentials associated with particles are bounded by the Planck-scale potential is presented. By defining the self-energy of a particle in terms of its scalar potential, equivalences between charge-energy and mass-energy are obtained. The electromagnetic energy-momentum equation and de Broglie's electromagnetic wavelength and frequency associated with a charged particle in motion are derived. Relativistically covariant electromagnetic energy and momentum expressions are obtained, resolving the 4/3 discrepancy. The non-covariant nature of present classical electrodynamics is discussed, and it is shown how the proposed postulate makes it fully covariant, consistent with the rest of classical electrodynamics. How the electromagnetic energy-momentum equation could potentially resolve the stability problem of a charged particle is discussed, and thereby a theoretical explanation of the electron's spin is presented.
[5098] vixra:1211.0140 [pdf]
The Poisson Realization of $\mathfrak{so}(2, 2k+2)$ on Magnetic Leaves
Let ${\mathbb R}^{2k+1}_*={\mathbb R}^{2k+1}\setminus\{\vec 0\}$ ($k\ge 1$) and $\pi$: ${\mathbb R}^{2k+1}_*\to \mathrm{S}^{2k}$ be the map sending $\vec r\in {\mathbb R}^{2k+1}_*$ to ${\vec r\over |\vec r|}\in \mathrm{S}^{2k}$. Denote by $P\to {\mathbb R}^{2k+1}_*$ the pullback by $\pi$ of the canonical principal $\mathrm{SO}(2k)$-bundle $\mathrm{SO}(2k+1)\to \mathrm{S}^{2k} $. Let $E_\sharp\to {\mathbb R}^{2k+1}_*$ be the associated co-adjoint bundle and $E^\sharp\to T^*{\mathbb R}^{2k+1}_*$ be the pullback bundle under the projection map $T^*{\mathbb R}^{2k+1}_*\to {\mathbb R}^{2k+1}_*$. The canonical connection on $\mathrm{SO}(2k+1)\to \mathrm{S}^{2k} $ turns $E^\sharp$ into a Poisson manifold. The main result here is that the real Lie algebra $\mathfrak{so}(2, 2k+2)$ can be realized as a Lie subalgebra of the Poisson algebra $(C^\infty(\mathcal O^\sharp), \{, \})$, where $\mathcal O^\sharp$ is a symplectic leaf of $E^\sharp$ of a special kind. Consequently, in view of the earlier result of the author, an extension of the classical MICZ Kepler problems to dimension $2k+1$ is obtained. The Hamiltonian, the angular momentum, the Lenz vector and the equation of motion for this extension are all explicitly worked out.
[5099] vixra:1211.0134 [pdf]
Law of Sums of the Squares of Areas, Volumes and Hyper Volumes of Regular Polytopes from Clifford Polyvectors
Inspired by the recent sums of the squares law obtained by Kovacs-Fang-Sadler-Irwin we derive the law of the sums of the squares of the areas, volumes and hyper-volumes associated with the faces, cells and hyper-cells of regular polytopes in diverse dimensions after using Clifford algebraic methods.
[5100] vixra:1211.0129 [pdf]
Duality in Robust Dynamic Programming
Many decision-making problems that arise in finance, economics, inventory, etc. can be formulated as Markov Decision Problems (MDPs) and solved using dynamic programming techniques. Further, the need to mitigate the statistical errors in estimating the underlying transition matrix, or to exercise optimal control in an adversarial setup, led to the study of robust formulations of the same problems in Ghaoui and Nilim~\cite{ghaoui} and Iyengar~\cite{garud}. In this work, we study computational methodologies to develop and validate feasible control policies for the Robust Dynamic Programming Problem. In terms of developing control policies, the current work can be seen as generalizing the existing literature on Approximate Dynamic Programming (ADP) to its robust counterpart. The work also generalizes the Information Relaxation and Dual approach of Brown, Smith and Sun~\cite{bss} to robust multi-period problems. While discussing this framework we approach it both from a discrete control perspective and as a set of conditional continuous measures, as in Ghaoui and Nilim~\cite{ghaoui} and Iyengar~\cite{garud}. We show numerical experiments on applications like ... In a nutshell, we expand the gamut of problems that the dual approach can handle in terms of developing tight bounds on the value function.
[5101] vixra:1211.0127 [pdf]
A Convex Optimization Approach to Multiple Stopping
In this current work, we generalize the recent Pathwise Optimization approach of Desai et al.~\cite{desai2010pathwise} to multiple stopping problems. The approach also minimizes the dual bound, as in Desai et al.~\cite{desai2010pathwise}, to find the best approximation architecture for the multiple stopping problem. Though we establish the convexity of the dual operator in this setting as well, we cannot directly take advantage of this property because of the computational issues that arise from the combinatorial nature of the problem. Hence, we deviate from the pure martingale dual approach to the \emph{marginal} dual approach of Meinshausen and Hambly~\cite{meinshausenhambly2004} and solve each such optimal stopping problem in the framework of Desai et al.~\cite{desai2010pathwise}. Though this Pathwise Optimization approach, as generalized to the multiple stopping problem, is computationally intensive, we highlight that it can produce superior dual and primal bounds in certain settings.
[5102] vixra:1211.0122 [pdf]
On General Formulas for Generating Sequences of Pythagorean Triples Ordered by C-B
General formulas for generating sequences of Pythagorean triples ordered by c-b are studied in this paper. As computational proof, tables were made with a C++ script showing Pythagorean triples ordered by c-b and included as text files and screenshots. Furthermore, to enable readers to check and verify them, the C++ script which will interactively generate tables of Pythagorean triples from the computer console command line is attached. It can be run in Cling and ROOT CINT C/C++ interpreters or compiled.
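The ordering studied in the paper can be illustrated with a brute-force sketch (this is not the paper's closed-form generating formulas, which the abstract does not reproduce): enumerate triples $a^2+b^2=c^2$ with $a<b<c$ and sort them by the difference c-b.

```python
# Brute-force illustration of ordering Pythagorean triples by c - b
# (not the paper's general generating formulas).
import math

def triples_up_to(cmax):
    """Return all Pythagorean triples (a, b, c) with a < b < c <= cmax,
    sorted by c - b, then by c."""
    out = []
    for c in range(5, cmax + 1):
        for b in range(4, c):
            a2 = c * c - b * b
            a = math.isqrt(a2)
            if a * a == a2 and 0 < a < b:
                out.append((a, b, c))
    return sorted(out, key=lambda t: (t[2] - t[1], t[2]))

for a, b, c in triples_up_to(30):
    print(a, b, c, "  c-b =", c - b)
```

For c up to 30 the listing begins with the c-b = 1 family (3, 4, 5), (5, 12, 13), (7, 24, 25), matching the kind of tables the paper generates with its C++ script.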
[5103] vixra:1211.0116 [pdf]
On the Interval $[n,9(n+3)/8]$
In this paper we prove that the interval $[n,9(n+3)/8]$ contains at least one prime number for every positive integer $n$. In order to achieve our goal, we use a result by Pierre Dusart and we also do manual calculations.
[5104] vixra:1211.0099 [pdf]
Product of Distributions Applied to Discrete Differential Geometry
A method for dealing with the product of step discontinuous and delta functions is proposed. A standard method for applying the above defined product of distributions to polyhedron vertices is analysed and the method is applied to a special case where the well known angle defect formula, for the discrete curvature of polyhedra, is derived using the tools of tensor calculus. The angle defect formula is the discrete version of the curvature for vertices of polyhedra. Among other things, this paper is basically the formal proof of the above statement.
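A worked instance of the angle defect formula discussed above, assuming the usual definition (the discrete curvature at a vertex is 2*pi minus the sum of the face angles meeting there): for the cube, three right angles meet at each vertex, so the defect is pi/2, and the eight vertices sum to 4*pi, as Descartes' theorem requires for a convex polyhedron.

```python
# Angle defect (discrete curvature) at a polyhedron vertex:
# 2*pi minus the sum of the incident face angles, shown for the cube.
import math

def angle_defect(face_angles):
    return 2.0 * math.pi - sum(face_angles)

cube_vertex_defect = angle_defect([math.pi / 2] * 3)  # three right angles
total_defect = 8 * cube_vertex_defect                 # eight identical vertices
print(cube_vertex_defect, total_defect)               # pi/2 and 4*pi
```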
[5105] vixra:1211.0094 [pdf]
Exponential Hawkes Processes
The Hawkes process having a kernel in the form of a linear combination of exponential functions $\nu(t)=\sum_{j=1}^{P}\alpha_j e^{-\beta_j t}$ has a nice recursive structure that lends itself to tractable likelihood expressions. When $P=1$ the kernel is $\nu(t)=\alpha e^{-\beta t}$ and the inverse of the compensator can be expressed in closed form as a linear combination of exponential functions and the Lambert W function, having arguments which can be expressed as recursive functions of the jump times.
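The recursive structure alluded to can be sketched for P = 1 (a minimal illustration, not the paper's likelihood or compensator-inversion code): the sum S_i = sum_{j<i} exp(-beta*(t_i - t_j)) satisfies S_i = exp(-beta*(t_i - t_{i-1})) * (1 + S_{i-1}), so the intensity lambda(t_i) = mu + alpha*S_i at every event updates in O(1):

```python
# O(1)-per-event recursion for the exponential-kernel Hawkes intensity
# at the jump times (P = 1 case); a sketch, not the paper's code.
import math

def event_intensities(times, mu, alpha, beta):
    """lambda(t_i) = mu + alpha * sum_{j<i} exp(-beta*(t_i - t_j)),
    via the recursion S_i = exp(-beta*dt) * (1 + S_{i-1})."""
    lams, s, prev = [], 0.0, None
    for t in times:
        s = 0.0 if prev is None else math.exp(-beta * (t - prev)) * (1.0 + s)
        lams.append(mu + alpha * s)
        prev = t
    return lams

print(event_intensities([0.0, 0.5, 0.7, 2.0], mu=1.0, alpha=0.8, beta=2.0))
```

This recursion is what makes the log-likelihood computable in linear rather than quadratic time in the number of jumps.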
[5106] vixra:1211.0084 [pdf]
Twin Paradox 1938-2012
The phenomenon of the transverse Doppler effect provides an opportunity to validate the twin paradox (or clock paradox). Using this approach, a contradiction can be shown between the theory and the experimental results.
[5107] vixra:1211.0070 [pdf]
Could Hyperbolic 3-Manifolds and Hyperbolic Lattices be Relevant in Zero Energy Ontology?
In zero energy ontology (ZEO), lattices in the 3-D hyperbolic manifold defined by H<sup>3</sup> (t<sup>2</sup>-x<sup>2</sup>-y<sup>2</sup>-z<sup>2</sup>=a<sup>2</sup>), known as hyperbolic space to distinguish it from other hyperbolic manifolds, emerge naturally. The interpretation of H<sup>3</sup> as a cosmic time=constant slice of the space-time of sub-critical Robertson-Walker cosmology (giving the future light-cone of M^4 at the limit of vanishing mass density) is relevant here. ZEO leads to an argument stating that once the position of the "lower" tip of the causal diamond (CD) is fixed and defined as the origin, the position of the "upper" tip located at H<sup>3</sup> is quantized so that it corresponds to a point of a lattice H<sup>3</sup>/G, where G is a discrete subgroup of SL(2,C) (a so-called Kleinian group). There is evidence for the quantization of cosmic redshifts: a possible interpretation is in terms of hyperbolic lattice structures assignable to dark matter and energy. Quantum coherence in cosmological scales would be in question. This inspires several questions. How does crystallography in H<sup>3</sup> relate to the standard crystallography in Euclidean 3-space E<sup>3</sup>? Are there general results about tessellations of H<sup>3</sup>? What about hyperbolic counterparts of quasicrystals? In this article standard facts are summarized and some of these questions are briefly discussed.
[5108] vixra:1211.0069 [pdf]
Do Blackholes and Blackhole Evaporation Have TGD Counterparts?
The blackhole information paradox is often believed to have a solution in terms of holography, stating in the case of blackholes that the blackhole horizon can serve as a holographic screen representing the information about the surrounding space as a hologram. The situation is however far from settled. The newest challenge is the so-called firewall paradox proposed by Polchinski et al. These paradoxes strengthen the overall impression that blackhole physics indeed represents the limit at which GRT fails, and the outcome is a recycling of old arguments leading nowhere. Something very important is lacking. On the other hand, some authors like Susskind claim that the physics of this century more or less reduces to that of blackholes. I however see this endless tinkering with blackholes as a decline of physics. If superstring theory had been a success as a physical theory, we would have got rid of blackholes. If TGD is to replace GRT, it must also provide new insights into blackholes, blackhole evaporation, the information paradox and the firewall paradox. This inspired me to look for what blackholes and blackhole evaporation could mean in the TGD framework and whether TGD can avoid the paradoxes. This kind of exercise also allows one to sharpen the TGD based view about space-time and quantum theory and to build connections to the mainstream views.
[5109] vixra:1211.0068 [pdf]
Some Considerations Relating to the Dynamics of Quasicrystals
The dynamics of quasicrystals looks very interesting to me because it shares several features of the dynamics of Kähler action, which defines the basic variational principle of classical TGD and the dynamics of space-time surfaces as preferred extremals of Kähler action. The magnetic body carrying dark matter is the fundamental intentional agent in TGD inspired quantum biology, and the cautious proposal is that magnetic flux sheets could define the grid of 3-planes (or more general 3-surfaces) defining quasi-periodic background fields favoring 4-D quasicrystals or more general structures in the TGD Universe. Also 3-D quasicrystal-like structures defined by grids of planes can be considered, and a 4-D quasicrystal structure could represent their time evolution. Quite recently it has been reported that grids consisting of 2-D curved orthogonal surfaces characterize the architecture of neural wiring, so this hypothesis might make sense. This structure would be analogous to a 2-D quasicrystal and its time evolution to a 3-D quasicrystal.
[5110] vixra:1211.0054 [pdf]
Some Solutions to the Clifford Space Gravitational Field Equations
We continue with the study of Clifford-space Gravity and find some solutions to the Clifford space ($ C$-space) generalized gravitational field equations which are obtained from a variational principle based on the generalization of the Einstein-Hilbert-Cartan action. The $C$-space connection requires $torsion$ and the field equations in $C$-space are $not$ equivalent to the ordinary gravitational equations with torsion in higher $2^D$-dimensions. We find specific metric solutions in the most simple case and discuss their difference with the metrics found in ordinary gravity.
[5111] vixra:1211.0052 [pdf]
Background Independent Relations Between Gravity and Electromagnetism
As every circuit designer knows, the flow of energy is governed by impedance matching. Classical or quantum impedances, mechanical or electromagnetic, fermionic or bosonic, topological, ... To understand the flow of energy it is essential to understand the relations between the associated impedances. The connection between electromagnetism and gravitation can be made explicit by examining the impedance mismatch between the electrically charged Planck particle and the electron. This mismatch is shown to be the ratio of the gravitational and electromagnetic forces between these particles.
[5112] vixra:1211.0050 [pdf]
Do Ion Channels Spin?
Ionic current flowing through a membrane pore with a helical architecture may impart considerable torque to the pore structure itself. If the channel protein is free to rotate, it will spin at significant speeds. Order of magnitude estimates of possible rotation rates are presented, as well as a few arguments why such motion could improve ion transport.
[5113] vixra:1211.0049 [pdf]
Underlying Symmetry Among the Quark and Lepton Mixing Angles (Five Year Update)
In 2007 a mathematical model encompassing both quark and lepton mixing was introduced. As five years have elapsed since its introduction it is timely to assess the model's accuracy. Despite large conflicts with experiment at the time of its introduction, five of six predicted angles now fit experiment fairly closely. The one angle incorrectly forecast necessitates a small change to the model's original framework (essentially, a sign is toggled). This change retains most of the model's original economy, while being interesting in its own right. The model's predicted mixing angles in degrees are 45, 33.210911, and 8.034394 (new) for leptons; and 12.920966, 2.367442, and 0.190986 for quarks.
[5114] vixra:1211.0048 [pdf]
Zanaboni Theorem and Saint-Venant's Principle
Violating the law of energy conservation, Zanaboni's Theorem is invalid and Zanaboni's proof is wrong. The mistake in Zanaboni's "proof" is analyzed. An Energy Theorem for the Zanaboni Problem is suggested and proved. Equations and conditions are established in this paper for the Zanaboni Problem, which are consistent with, equivalent to, or identical to each other. Zanaboni's Theorem is, for its invalidity, not a mathematical formulation or proof of Saint-Venant's Principle. AMS Subject Classifications: 74-02, 74G50
[5115] vixra:1211.0036 [pdf]
John von Neumann and Self-Reference ...
It is shown that the description of John von Neumann as a "frog" in a recent item by the Princeton celebrity physicist Freeman Dyson misses completely, among other things, the immensely important revolution of the so-called "von Neumann architecture" of our modern electronic digital computers.
[5116] vixra:1211.0026 [pdf]
A Scienceographic Comparison of Physics Papers from the arXiv and viXra Archives
arXiv is an e-print repository of papers in physics, computer science, and biology, amongst others. viXra is a newer repository of e-prints on similar topics. Scienceography is the study of the writing of science. In this work we perform a scienceographic comparison of a selection of papers from the physics section of each archive. We provide the first study of the viXra archive and describe key differences on how science is written by these communities.
[5117] vixra:1211.0025 [pdf]
The Physical Origin of the Feynman Path Integral (Poster)
The Feynman path integral is an essential part of our mathematical description of fundamental nature at small scales. However, what it seems to say about the world is very much at odds with our classical intuitions, and exactly why nature requires us to describe her in this way is currently unknown. We will describe here a possibility according to which the path integral may be the spacetime manifestation of objects existing in a lower-dimensional analog of spacetime until they give rise to the emergence of spacetime objects under a process that is currently labeled a ‘Quantum Measurement’. This idea is based on a mathematical distinction which at present does not appear to be widely appreciated.
[5118] vixra:1211.0022 [pdf]
On the Fibonacci Numbers, the Koide Formula, and the Distribution of Primes
The Koide formula from physics is modified for use with the reciprocals of primes found in the intervals defined by the Fibonacci numbers. This formula's resultant values are found to alternate lower, higher, lower, higher, etc. from the interval (5,8] to the interval (514229,832040]. This pattern, inverted, is also shown to occur when the corresponding results are computed for non-primes.
[5119] vixra:1211.0020 [pdf]
Beauty Index and the Origin of Gender
It may be agreed by most people that the growth of each person or country is driven by the power of gender. However, humans' understanding of the origin of gender is very limited, and is based solely on the phenomenon of life on Earth. If humans raise their heads and look at the universe, a preliminary answer may emerge. In the history of mankind, the first person who was struck by the appearance of cosmic gender was possibly the famous French scientist Henri Poincare. At the conclusion of the preface to his book `Hypothèses Cosmogoniques', he states: ``One fact that strikes everyone is the spiral shape of some nebulae; it is encountered much too often for us to believe that it is due to chance. It is easy to understand how incomplete any theory of cosmogony which ignores this fact must be. None of the theories accounts for it satisfactorily, and the explanation I myself once gave, in a kind of toy theory, is no better than the others. Consequently, we come up against a big question mark.'' Now that humans have entered the twenty-first century, Dr. Jin He, based on sufficient evidence, shows that the spiral pattern of galaxies is the male character of the universe.
[5120] vixra:1211.0018 [pdf]
On the Fundamental Nature of the Quantum Mechanical Probability Function
The probability of occurrence of an event, or that of the existence of a physical state, has no relative existence in the sense in which motion is strongly believed to exist only relatively. If the probability of occurrence of an event or of the existence of a physical state is known by one observer, this probability must be measured to have the same numerical value by any other observer anywhere in the Universe. If we accept this bare fact, then the probability function can only be a scalar. Consequently, from this fact alone, we argue that the quantum mechanical wavefunction cannot be a scalar function, as is assumed for the Schroedinger and Klein-Gordon wavefunctions. This has fundamental implications for the nature of the wavefunction insofar as translations from one reference system to another are concerned.
[5121] vixra:1211.0002 [pdf]
Recent Data Show no Weakening of the Walker Circulation
Various authors have examined the strength of the equatorial Pacific overturning known as the Walker Circulation in both climate models and observations, attributing a generalized weakening to anthropogenic global warming. Here we review the analysis in Power and Smith [2007] using updated Southern Oscillation Index (SOI) and NINO sea surface temperature indices. We find no significant long-term changes in the indices, although the SOI appears to have recovered from an anomalously low period from 1976 to 1998. The increasing sea surface temperature in the NINO4 region is not significant, nor representative of other NINO regions. The findings of a weakening Walker circulation appear to be premature, and the corresponding climate model projections cannot be substantiated at this time. The reports of weakening of horizontal atmospheric circulation in climate models should be regarded as an inconsistency and not as an indicator of anthropogenic climate change.
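The kind of long-term trend assessment described above can be illustrated with an ordinary least-squares slope test on a synthetic index series. The series, the excursion years, and the helper `trend_with_t_stat` below are illustrative assumptions for a sketch, not the authors' data or code.

```python
import numpy as np

def trend_with_t_stat(series):
    """Ordinary least-squares linear trend of a time series and the
    t-statistic of the slope (slope divided by its standard error)."""
    t = np.arange(len(series), dtype=float)
    slope, intercept = np.polyfit(t, series, 1)
    resid = series - (slope * t + intercept)
    dof = len(series) - 2
    s2 = np.sum(resid**2) / dof                    # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean())**2))   # standard error of the slope
    return slope, slope / se

# Synthetic, trend-free "index" with a temporary negative excursion,
# loosely mimicking an SOI-like record (purely illustrative data).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 150)
x[76:98] -= 1.5   # anomalously low period, followed by recovery
slope, t_stat = trend_with_t_stat(x)
```

A small |t_stat| here is the sense in which "no significant long-term change" is meant: the excursion-and-recovery shape need not register as a sustained trend.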
[5122] vixra:1210.0165 [pdf]
F Theory in a Nutshell
The story of F theory came to light because of Cumrun Vafa. Here we have tried to show the journey and some aspects of F theory.
[5123] vixra:1210.0159 [pdf]
A Novel Way to 'Make Sense' Out of the Copenhagen Interpretation
This paper presents a concise exposition of the Dimensional Theory, a novel framework which helps make sense out of the Copenhagen Interpretation, as it explains the peculiarities of quantum mechanics in a way that is most consistent with that interpretation. (A recording of the talk based on this material can be viewed at http://youtu.be/GurBISsM308 .)
[5124] vixra:1210.0145 [pdf]
A New Crucial Experiment for Relativity
Dayton Miller performed an experiment in 1925-1926 that, at face value, contradicted relativity theory. The strongest argument against Miller's experiment is that subsequent Michelson-Morley experiments yielded increasing consistency with relativity, disagreeing with Miller's results. But subsequent experiments were not valid replications of Miller's. Specifically, they failed to replicate the medium in the light path and the scale of Miller's experiment. A valid replication must either be exact or be demonstrably equivalent with regard to its crucial sensing region. The unexplained effects seen by Miller demand exact replication. The proposed experiment is crucial for special relativity but is more than a replication of Miller. This proposed Crucial Experiment should use a Michelson-Morley apparatus with a 4.25 m arm length as Miller used. The novelty of this experiment is that the light path should be in a chamber that can be operated from near zero to one atmosphere. Predictions: (1) At one atmosphere, the result will agree with Miller's and contradict relativity. (2) Near zero atmospheres, the result will agree with Georg Joos' and agree with relativity. (3) Intermediate pressures will yield intermediate results.
[5125] vixra:1210.0116 [pdf]
Octonionic Non-Abelian Gauge Theory
We have made an attempt to describe the octonion formulation of Abelian and non-Abelian gauge theories of dyons in terms of a $2\times 2$ Zorn vector matrix realization. As such, we have discussed the $U(1)_{e}\times U(1)_{m}$ Abelian gauge theory, the $U(1)\times SU(2)$ electroweak gauge theory, and also the $SU(2)_{e}\times SU(2)_{m}$ non-Abelian gauge theory in terms of the $2\times 2$ Zorn vector matrix realization of split octonions. It is shown that $SU(2)_{e}$ characterizes the usual theory of the Yang-Mills field (isospin or weak interactions) due to the presence of electric charge, while the gauge group $SU(2)_{m}$ may be related to the existence of the 't Hooft-Polyakov monopole in non-Abelian gauge theory. Accordingly, we have obtained the manifestly covariant field equations and equations of motion.
[5126] vixra:1210.0081 [pdf]
Probe Graph Classes
Let $\mathcal{G}$ be a class of graphs. A graph G is a probe graph of $\mathcal{G}$ if its vertex set can be partitioned into a set P of `probes' and an independent set N of `nonprobes' such that G can be embedded into a graph of $\mathcal{G}$ by adding edges between certain nonprobes. In this book we investigate probe graphs of various classes of graphs.
[5127] vixra:1210.0071 [pdf]
Calculation of Radar Signal Delays in the Vicinity of the Sun Due to the Contribution of a Yukawa Correction Term in the Gravitational Potential
There has been renewed interest in recent years in the possibility of deviations from the predictions of Newton's “inverse-square law” of universal gravitation. One reason for this renewed interest lies in various theoretical attempts to construct a unified elementary particle theory, in which there is a natural prediction of new forces over macroscopic distances. Such a force would coexist with gravity and, in principle, could only be detected as a deviation from the inverse-square law, or in “universality of free fall” experiments. New experimental techniques, such as that of Sagnac interferometry, can help explore the range of the Yukawa correction λ ≥ 10^14 m where such forces might be present. It may be that future space missions will operate in this range, which has been unexplored for a very long time. To study the effect of the Yukawa correction to the gravitational potential and its corresponding signal delay in the vicinity of the Sun, we use a spherically symmetric modified spacetime metric in which the Yukawa correction is added to the gravitational potential. Next, the Yukawa contribution to the signal delay is evaluated. In the case where the distance of closest approach is much less than the range λ, it results in a signal time delay that satisfies the relation t(b ≪ λ) ≅ 37.7 t(b = λ).
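For context, Yukawa-type modifications of this kind are conventionally written as a correction to the Newtonian potential with a dimensionless strength $\alpha$ and range $\lambda$ (the standard parameterization, assumed here to match the abstract's usage):

```latex
V(r) \;=\; -\frac{GM}{r}\left(1 + \alpha\, e^{-r/\lambda}\right),
```

so the Newtonian limit is recovered for $\alpha \to 0$ or $r \gg \lambda$, and deviations are strongest for $r \lesssim \lambda$.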
[5128] vixra:1210.0065 [pdf]
An Alternative Methodology for Imputing Missing Data in Trials with Genotype-by-Environment Interaction
A common problem in multi-environment trials arises when some genotype-by-environment combinations are missing. The aim of this paper is to propose a new deterministic imputation algorithm using a modification of the Gabriel cross-validation method. The method involves the singular value decomposition (SVD) of a matrix and was tested using three alternative component choices of the SVD in simulations based on two complete sets of real data, with values deleted randomly at different rates. The quality of the imputations was evaluated using the correlations and the mean square deviations between these estimates and the true observed values. The proposed methodology does not make any distributional or structural assumptions and does not have any restrictions regarding the pattern or mechanism of the missing data.
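As a rough illustration of the SVD-based idea, missing genotype-by-environment cells can be imputed by iterating a low-rank SVD reconstruction to a fixed point. The function below is a generic EM-style sketch under that assumption, not the authors' Gabriel cross-validation scheme.

```python
import numpy as np

def svd_impute(X, n_components=1, n_iter=200, tol=1e-9):
    """Iteratively replace the NaN cells of X with a rank-k SVD
    reconstruction. Starts from column means; a generic sketch,
    not the paper's Gabriel cross-validation variant."""
    X = X.astype(float).copy()
    missing = np.isnan(X)
    if not missing.any():
        return X
    X = np.where(missing, np.nanmean(X, axis=0), X)  # initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        approx = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        delta = np.max(np.abs(X[missing] - approx[missing]))
        X[missing] = approx[missing]   # update only the missing cells
        if delta < tol:
            break
    return X

# Rank-1 genotype-by-environment table with two cells deleted.
true = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
obs = true.copy()
obs[0, 1] = np.nan
obs[2, 2] = np.nan
filled = svd_impute(obs, n_components=1)
```

On this toy rank-1 table the iteration recovers the deleted cells while leaving every observed cell untouched, which mirrors the deterministic, assumption-free character the abstract claims.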
[5129] vixra:1210.0061 [pdf]
Updated View about the Hierarchy of Planck Constants
A few years have passed since the latest formulation of the hierarchy of Planck constants. The original hypothesis seven years ago was that the hierarchy is real. In that formulation the imbedding space was replaced with its covering space, assumed to decompose into a Cartesian product of singular finite-sheeted coverings of $M^4$ and $CP_2$.

A few years ago came the realization that the hierarchy could be only effective but have the same practical implications. The basic observation was that the effective hierarchy need not be postulated separately but follows as a prediction from the vacuum degeneracy of Kähler action. In this formulation the Planck constant at the fundamental level has its standard value, and its effective values come as its integer multiples, so that one should write $\hbar_{eff} = n\hbar$ rather than $\hbar = n\hbar_0$ as I have done. For most practical purposes the states in question would behave as if the Planck constant were an integer multiple of the ordinary one. It was no longer necessary to assume that the covering reduces to a Cartesian product of singular coverings of $M^4$ and $CP_2$, but for some reason I kept this assumption.

In the recent formulation this assumption is made, and the emphasis is on the interpretation of the multi-sheetedness (in the sense of Riemann surfaces) resulting as a multi-furcation of a preferred extremal taking place at the partonic 2-surfaces. This gives a connection with complexity theory (say, in living systems), with the transition to chaos, and with general ideas about fractality. Second quantization of the multi-furcation means accepting not only superpositions of branches as single-particle states but also the analogs of many-particle states obtained by allowing several branches up to the maximum number. This revives the ideas of N-atom, N-molecule, etc., already given up as too adventurous.

The question whether the gravitational Planck constant $h_{gr}$, having gigantic values, results as an effective Planck constant has remained open. A simple argument suggests that the gravitational four-momentum could be identified as a projection of the inertial four-momentum to the space-time surface, and that the square of the gravitational four-momentum, obtained using the effective metric defined by the anti-commutators of the modified gamma matrices appearing in the modified Dirac equation, naturally leads to the emergence of $h_{gr}$.
[5130] vixra:1210.0059 [pdf]
Two Attempts to Understand PK
The question of how intentional action is concretely realized is a key question not only in quantum consciousness theory but also in attempts to understand psychokinesis (PK). In the TGD framework the mechanisms of intentional action and PK are basically the same, and the article can be seen also as a proposal for how intentional action might be realized in the TGD Universe. There are experimental results, such as the experiments of Libet, suggesting that intentional action involves a signal propagating to the geometric past, where it initiates the desired action. PK experiments with random bit sequences suggest a model based on state function reduction and the possibility of intentionally affecting the probabilities of the outcomes of a microscopic quantum transition with two final states representing the values of bits. The standard view is that the intentional action interferes directly with the microscopic quantum transition. The "too-good-to-be-true" option is that intentional action is able to produce a quantum superposition of bits represented as magnetized regions. In this case a direct experimental proof of PK, by comparing the data file subject to intentional action with its copy, and thus involving no statistical procedures, is possible. A detailed mechanism allowing the observer (either operator or experimenter) to affect by intentional action the number of 1s or 0s in a series of bits stored in a data file as magnetized regions is discussed.
[5131] vixra:1210.0056 [pdf]
Radar Signal Delay in the Dvali-Gabadadze-Porrati Gravity in the Vicinity of the Sun
In this paper we examine the recently introduced Dvali-Gabadadze-Porrati (DGP) gravity model. We use a space-time metric in which the local gravitational source dominates the metric over the contributions from the cosmological flow. Anticipating possible solar-system effects, we derive expressions for the signal time delays in the vicinity of the Sun for various ranges of the angle θ of signal approach. The time contribution due to the DGP correction to the metric is found to be proportional to b^{3/2}/(c^2 r_0). For r_0 equal to 5 Mpc and θ in the range [−π/3, π/3], t is equal to 0.0001233 ps. This delay is too small to be measured with today's technology, but it could probably be measurable in future experiments.
[5132] vixra:1210.0037 [pdf]
Rethinking Einstein's Rotation Analogy
Einstein built general relativity (GR) on the foundation of special relativity (SR) with the help of an analogy involving uniformly rotating bodies. Among this analogy's most useful implications are those concerning the need for non-Euclidean geometry. Although GR is well-supported by observations, a curious fact is that almost all of them are of phenomena over the surfaces of large gravitating bodies; i.e., they support the <i>exterior</i> solution. Whereas the <i>interior</i> solution remains untested. In particular, the prediction that the rate of a clock at the center of a gravitating body is a local minimum remains untested. The Newtonian counterpart for this prediction of GR is the common oscillation prediction for a test mass dropped into a hole through a larger gravitating body. The main point in what follows is that this prediction needs to be checked by direct observation. Einstein's analogy serves as a launching pad for bringing out the significance of this experiment as well as exposing possible weaknesses in a few other assumptions, which are then also duly questioned. To facilitate looking upon these problems with fresh eyes, we invoke an imaginary civilization whose members know a lot about rotation but nothing about gravity. Their home is a large and remote rotating body whose mass is too small to make gravitational effects easily noticeable. What would these people think of Einstein's rotation analogy?
[5133] vixra:1210.0028 [pdf]
The Lorentz Force Law Must be Modified to Satisfy the Principle of Relativity
Consideration of the relative motion of a bar magnet and a coil played a key role in Einstein's production of Special Relativity. In the frame where the bar magnet is moving and a coil is at rest, an EMF is generated in the coil as a consequence of the curl of E equation. In the frame where the bar magnet is at rest, an EMF is produced in the coil because its electrons are moving in a way such that the magnetic field is producing forces on them. We consider the complementary situation where instead of a bar magnet generating a magnetic field we have a point charge generating an electric field. We will see that in order to satisfy the Principle of Relativity changes must be made to the Lorentz force law.
[5134] vixra:1210.0021 [pdf]
Signal Space and the Schwarzschild Black Hole
The geometry of the Schwarzschild black hole is compared to the geometry of the signal space from Shannon's mathematical theory of communication. One result of these considerations is that the black hole is found to leak in a way that does not introduce an information loss paradox.
[5135] vixra:1210.0015 [pdf]
How to Construct Self/anti-Self Charge Conjugate States for Higher Spins?
We construct self/anti-self charge-conjugate (Majorana-like) states for the $(1/2,0)\oplus(0,1/2)$ representation of the Lorentz group, and their analogs for higher spins, within quantum field theory. The problems of the basis rotations and of the selection of phases in the Dirac-like and Majorana-like field operators are considered. The discrete-symmetry properties (P, C, T) are studied. The corresponding dynamical equations are presented. In the $(1/2,0)\oplus(0,1/2)$ representation they obey a Dirac-like equation with eight components, which was first introduced by Markov. Thus, the Fock space for the corresponding quantum fields is doubled (as shown by Ziino). Particular attention has been paid to the questions of chirality and helicity (two concepts which are frequently confused in the literature) for Dirac and Majorana states. We further discuss experimental consequences which follow from the previous works of M. Kirchbach et al. on neutrinoless double beta decay, and of G. J. Ni et al. on meson lifetimes.
[5136] vixra:1210.0010 [pdf]
AFT Gravitational Model: Unity of All Elementary Particles in Sp(12,C)
A new unifying theory was recently proposed in the publication "Arrangement field theory - beyond strings and loop gravity -". Such a theory describes all fields (gravitational, gauge, and matter fields) as entries of a matrix superfield which transforms in the adjoint representation of Sp(12,C). In this paper we show how this superfield is built, and we introduce a new mechanism of symmetry breaking which does not need Higgs bosons.
[5137] vixra:1209.0113 [pdf]
The Analysis of Harold White Applied to the Natario Warp Drive Spacetime: From $10$ Times the Mass of the Universe to the Mass of Mount Everest
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive, discovered in $1994$, and the Natario warp drive, discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress-energy-momentum tensor is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the huge amount of negative energy needed to sustain the warp bubble. Ford and Pfenning computed the amount of negative energy needed to maintain an Alcubierre warp drive and arrived at the result of $10$ times the mass of the entire Universe for a stable warp drive configuration, rendering the warp drive impossible. However, Harold White, manipulating the parameter $@$ in the original shape function that defines the Alcubierre spacetime, demonstrated that it is possible to lower these energy-density requirements. We repeat here the Harold White analysis for the Natario spacetime and arrive at similar conclusions. From $10$ times the mass of the Universe, we also manipulated the parameter $@$ in the original shape function that defines the Natario spacetime and arrived at a result of $10$ billion tons of negative mass to maintain a warp drive moving at a speed $200$ times faster than light. Our result is still a huge number, about the mass of Mount Everest, but at least it is better than the original Ford-Pfenning result of $10$ times the mass of the Universe. The main purpose of this work is to demonstrate that Harold White's point of view is entirely correct.
[5138] vixra:1209.0104 [pdf]
Peaks in the CMBR Power Spectrum. I. Mathematical Analysis of the Associated Real Space Features
The purpose of our study is to understand the mathematical origin in real space of modulated and damped sinusoidal peaks observed in cosmic microwave background radiation anisotropies. We use the theory of the Fourier transform to connect localized features of the two-point correlation function in real space to oscillations in the power spectrum. We also illustrate analytically and by means of Monte Carlo simulations the angular correlation function for distributions of filled disks with fixed or variable radii capable of generating oscillations in the power spectrum. While the power spectrum shows repeated information in the form of multiple peaks and oscillations, the angular correlation function offers a more compact presentation that condenses all the information of the multiple peaks into a localized real space feature. We have seen that oscillations in the power spectrum arise when there is a discontinuity in a given derivative of the angular correlation function at a given angular distance. These kinds of discontinuities do not need to be abrupt in an infinitesimal range of angular distances but may also be smooth, and can be generated by simply distributing excesses of antenna temperature in filled disks of fixed or variable radii on the sky, provided that there is a non-null minimum radius and/or the maximum radius is constrained.
[5139] vixra:1209.0097 [pdf]
An Algebraic Approach to Systems with Dynamical Constraints
We consider constraints imposed directly on the accelerations of a system, leading to a relation between constants of motion and appropriate local projectors occurring in the derived equations. In this way a generalization of Noether's theorem is obtained, and constraints are also considered in the phase space.
[5140] vixra:1209.0088 [pdf]
Is Temperature or the Temperature Record Rising?
In this paper, we prove that a logical circularity undermines the validity of a commonly used method of homogenizing surface temperature networks. High rates of type I error due to this circularity may explain the exaggerated surface warming found in official temperature networks.
[5141] vixra:1209.0079 [pdf]
Summability Calculus
In this manuscript, we present the foundations of Summability Calculus, which places various established results in number theory, infinitesimal calculus, summability theory, asymptotic analysis, information theory, and the calculus of finite differences under a single simple umbrella. Using Summability Calculus, any given finite sum bounded by a variable n immediately acquires an analytic form. Not only can we differentiate and integrate with respect to the bound n without having to rely on an explicit analytic formula for the finite sum, but we can also deduce asymptotic expansions, accelerate convergence, assign natural values to divergent sums, and evaluate the finite sum for any complex value of n. This follows because the discrete definition of the simple finite sum embodies a unique natural continuation to the entire complex plane. Throughout the paper, many established results are strengthened, such as the Bohr-Mollerup theorem, Stirling's approximation, Glaisher's approximation, and the Shannon-Nyquist sampling theorem. In addition, many celebrated theorems are extended and generalized, such as the Euler-Maclaurin summation formula and Boole's summation formula. Finally, we show that countless identities that have been proved throughout the past 300 years by different mathematicians using different approaches can actually be derived in an elementary, straightforward manner using the rules of Summability Calculus.
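For reference, the classical Euler-Maclaurin summation formula that the manuscript extends reads, with $B_{2j}$ the Bernoulli numbers and $R_m$ a remainder term:

```latex
\sum_{k=a}^{b} f(k) \;=\; \int_{a}^{b} f(x)\,dx \;+\; \frac{f(a)+f(b)}{2}
  \;+\; \sum_{j=1}^{m} \frac{B_{2j}}{(2j)!}\left(f^{(2j-1)}(b) - f^{(2j-1)}(a)\right) \;+\; R_m .
```

It is exactly this bridge between a discrete sum and a continuous integral that makes differentiating and continuing a finite sum in its bound n meaningful.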
[5142] vixra:1209.0073 [pdf]
Ford-Pfenning Quantum Inequalities(QI) in the Natario Warp Drive Spacetime using the Planck Length Scale
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive, discovered in $1994$, and the Natario warp drive, discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress-energy-momentum tensor (the right side of the Einstein field equations) for the Einstein tensor $G_{00}$ is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the so-called Quantum Inequalities (QI), which restrict the time over which we can observe the negative energy density. This time is known as the sampling time. Ford and Pfenning computed the QI for the Alcubierre warp drive using a Planck-length-scale shape function and concluded that the negative energy in the Alcubierre warp drive can only exist for a sampling time of approximately $10^{-33}$ seconds, rendering the warp drive impossible for an interstellar trip; for example, a trip to a star $20$ light-years away at a speed $200$ times faster than light would require months, not $10^{-33}$ seconds. 
We repeated the QI analysis of Ford and Pfenning for the Natario warp drive using the same Planck length scale, but with a shape function that, although different from the function chosen by Ford and Pfenning, obeys the Natario requirements; and because the Natario warp drive has a very different distribution of negative energy compared to its Alcubierre counterpart, this affects the QI analysis. We arrived at a sampling time that can last longer than $10^{-33}$ seconds, enough to sustain a warp bubble for the interstellar travel mentioned above. We also computed the total negative energy requirements for the Natario warp drive and arrived at a comfortable result. This leads us to conclude that the Natario warp drive is a valid solution of the Einstein field equations of General Relativity, physically accessible for interstellar spaceflight. We also discuss horizons and infinite Doppler blueshifts.
[5143] vixra:1209.0072 [pdf]
Heaven Breasts and Heaven Calculus
Since the birth of mankind, human beings have been looking for the origin of life. The fact that human history is a history of warfare and cannibalism proves that humans have not identified their origin. Humanity is still in the dark phase of the lower animals. Humans can see the phenomenon of life only on Earth, and humans' vision does not exceed that of the lower animals. However, it is a fact that human beings have inherited the most advanced gene of life. Humans should be able to answer the following questions: Is the Universe hierarchical? What is Heaven? Is Heaven the origin of life? Is Heaven a higher order of life? For more than a decade, I have done an in-depth study of barred galaxy structure. Today (September 17, 2012) I suddenly discovered that the characteristic structure of barred spiral galaxies essentially resembles the breasts of the human female. If the rational structure conjecture presented in the article is proved, then the Sun must be a mirror of the universe, and mankind is exactly the image on Earth of Heaven.
[5144] vixra:1209.0070 [pdf]
Interpreting Sergeyev's Numerical Methodology Within a Hyperreal Number System
In this paper we show the consistency of the essential part of Sergeyev's numerical methodology (\cite{Yarov 1}, \cite{Yarov 2}) by constructing a model of it within the framework of an ultrapower of the ordinary real number system.
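The ultrapower construction referred to here builds the hyperreal field from sequences of reals modulo a nonprincipal ultrafilter $\mathcal{U}$ on $\mathbb{N}$:

```latex
{}^{*}\mathbb{R} \;=\; \mathbb{R}^{\mathbb{N}}/\mathcal{U},
\qquad (a_n) \sim (b_n) \;\iff\; \{\, n : a_n = b_n \,\} \in \mathcal{U},
```

with operations and order defined componentwise and decided through $\mathcal{U}$; for instance, the class of $(1, 1/2, 1/3, \dots)$ is a positive infinitesimal, since it is eventually below every positive real.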
[5145] vixra:1209.0066 [pdf]
Positive Definite Phase Space Quantum Mechanics
Even today, the discussion about the foundations, physical interpretation, and real scope of quantum mechanics has not ceased. It would be wrong to dismiss these issues as mere philosophical problems, because questions of consistency and interpretation are not devoid of practical utility. We present the foundations and main properties of a positive definite phase space quantum mechanics. A new quantization procedure is proposed as well. This new interpretation/formulation eliminates conceptual and technical difficulties from quantum mechanics: (i) many paradoxes typical of the wave-particle duality, EPR experiments, macroscopic superpositions, and collapse of wavefunctions disappear; (ii) the elimination of the wavefunctions from quantum theory is in line with the procedure inaugurated by Einstein with the elimination of the ether in the theory of electromagnetism; (iii) it is useful in considering the classical limit, can treat mixed states with ease, and brings certain conceptual issues to the fore; (iv) it confirms the ensemble interpretation of the wavefunctions, derives their statistical interpretation, corrects the temporal dependence of the old wavefunctions, and considers pure classical states --localizable states-- beyond the Hilbert space; (v) the quantum equation of motion is of the Liouville kind and star-products are not needed, simplifying the formalism; and (vi) it eliminates the hypothetical external quantum field of the pilot wave interpretation, solving its problems on the status of probability and correcting well-known inconsistencies of the Bohm potential. Finally, we offer some perspectives on future developments and research in progress.
[5146] vixra:1209.0059 [pdf]
Towards a Unified Model of Outdoor and Indoor Spaces
Geographic information systems traditionally dealt with only outdoor spaces. In recent years, indoor spatial information systems have started to attract attention partly due to the increasing use of receptor devices (e.g., RFID readers or wireless sensor networks) in both outdoor and indoor spaces. Applications that employ these devices are expected to span uniformly and supply seamless functionality in both outdoor and indoor spaces. What makes this impossible is the current absence of a unified account of these two types of spaces both in terms of modeling and reasoning about the models. This paper presents a unified model of outdoor and indoor spaces and receptor deployments in these spaces. The model is expressive, flexible, and invariant to the segmentation of a space plan, and the receptor deployment policy. It is focused on partially constrained outdoor and indoor motion, and it aims at underlying the construction of future, powerful reasoning applications.
[5147] vixra:1209.0051 [pdf]
Triangle-Partitioning Edges of Planar Graphs, Toroidal Graphs and K-Planar Graphs
We show that there is a linear-time algorithm to partition the edges of a planar graph into triangles. We show that the problem is also polynomial for toroidal graphs but NP-complete for k-planar graphs, where k is at least 8.
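The notion involved can be sanity-checked with a naive backtracking test for whether an edge set partitions into triangles. This is exponential in general, a reference checker for small graphs only; it is not the linear-time planar algorithm of the paper, which is not reproduced here.

```python
def triangle_partition(edges):
    """Return a list of triangles partitioning the edge set, or None.
    Complete backtracking search: exponential in general, so only a
    reference checker for small graphs, not the paper's algorithm."""
    edges = {frozenset(e) for e in edges}
    if not edges:
        return []
    u, v = tuple(next(iter(edges)))          # pick any remaining edge
    vertices = {x for e in edges for x in e}
    for w in vertices - {u, v}:              # try every triangle through (u, v)
        tri = {frozenset((u, v)), frozenset((v, w)), frozenset((u, w))}
        if tri <= edges:
            rest = triangle_partition(edges - tri)
            if rest is not None:
                return [tuple(sorted((u, v, w)))] + rest
    return None

# Two triangles sharing vertex 0 partition cleanly; K4 does not
# (removing any triangle from K4 leaves a star, not a triangle).
bowtie = [(0, 1), (1, 2), (0, 2), (0, 3), (3, 4), (0, 4)]
parts = triangle_partition(bowtie)
```

The search is complete because the first edge picked must lie in exactly one triangle of any valid partition, and every such triangle is tried.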
[5148] vixra:1209.0049 [pdf]
The Ford-Pfenning Quantum Inequalities (QI) Analysis Applied to the Natario Warp Drive Spacetime
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive, discovered in $1994$, and the Natario warp drive, discovered in $2001$. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions because the stress-energy-momentum tensor (the right side of the Einstein field equations) for the Einstein tensor $G_{00}$ is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, Quantum Field Theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the so-called Quantum Inequalities (QI), which restrict the time over which we can observe the negative energy density. This time is known as the sampling time. Ford and Pfenning computed the QI for the Alcubierre warp drive and concluded that the negative energy in the Alcubierre warp drive can only exist for a sampling time of approximately $10^{-10}$ seconds, rendering the warp drive impossible for an interstellar trip; for example, a trip to a star $20$ light-years away at a speed $200$ times faster than light would require months, not $10^{-10}$ seconds. 
We repeated the QI analysis of Ford and Pfenning for the Natario warp drive; because the Natario warp drive has a very different distribution of negative energy compared to its Alcubierre counterpart, this affects the QI analysis. We arrived at a sampling time that can last longer than $10^{-10}$ seconds, enough to sustain a warp bubble for the interstellar travel mentioned above. We also computed the total negative energy requirements for the Natario warp drive and arrived at a comfortable result. This leads us to conclude that the Natario warp drive is a valid solution of the Einstein field equations of General Relativity, physically accessible for interstellar spaceflight. We also discuss horizons and infinite Doppler blueshifts.
[5149] vixra:1209.0024 [pdf]
On The Electromagnetic Basis For Gravity
The relationships between two alternative theories of gravity, the "physicalist", electromagnetics-based "Polarisable Vacuum" theory of Puthoff and Dicke and Yilmaz's "phenomenological" variation of the General Theory of Relativity, are explored by virtue of a simple physical model based on the application of Newtonian mechanics to propagative systems. A particular virtue of the physical model is that, by introducing distributed source terms, it anticipates nonlocal relationships between observables within the framework of local realism.
[5150] vixra:1209.0023 [pdf]
Relativity and the Luminal Structure of Matter
It is shown that Lorentz Invariance is a wave phenomenon. The relativistic mass, length contraction and time dilation all follow from the assumption that energy-momentum is constrained to propagate at the speed of light, $c$, in all contexts, matter as well as radiation. Lorentz Transformations, and both of the usual postulates, then follow upon adopting Einstein clock synchronisation. The wave interpretation proposed here is paradox free and it is compatible with quantum nonlocality.
[5151] vixra:1209.0022 [pdf]
General Spin Dirac Equation (II)
In the reading Nyambuya (2009), it is shown that one can write down a general spin Dirac equation by modifying the usual Einstein energy-momentum equation via the insertion of the quantity $s$, which is identified with the spin of the particle. That is to say, a Dirac equation that describes a particle of spin $\frac{1}{2}s\hbar\vec{\sigma}$, where $\hbar$ is the normalised Planck constant, $\vec{\sigma}$ are the Pauli $2 \times 2$ matrices and $s=(\pm 1, \pm 2, \pm 3, \dots)$. What is not clear in the reading Nyambuya (2009) is how such a modified energy-momentum relation would arise in Nature. At the end of the day, the insertion by sleight of hand of the quantity $s$ into the usual Einstein energy-momentum equation would then appear to be nothing more than speculation. In the present reading -- by making use of the curved spacetime Dirac equations proposed in the work Nyambuya (2008) -- we move the exercise of Nyambuya (2009) from the realm of speculation to that of plausibility.
[5152] vixra:1209.0019 [pdf]
New Evidence for Anomalies of Radio-Active Decay Rates
A new piece of evidence for periodic variations of nuclear decay rates on astrophysical time scales has been reported by Sturrock et al., now in the case of Ra-222 nuclei. In this article the TGD inspired explanation for the variations is developed in more detail by utilizing the data provided in that report. The explanation relies on the nuclear string model, which predicts the existence of almost degenerate ground states of nuclei (in the natural MeV energy scale) with excitation energies assumed to lie in the keV range. The variations of the decay rates, defined naturally as averages over the decay rates of the excitations, would be induced by keV radiation from the solar corona. This would also explain the anomalously high temperature of the solar corona and relate the observed periodicities to the rotation rate of the corona.
[5153] vixra:1209.0010 [pdf]
Tempus Edax Rerum
A non-unitary quantum theory describing the evolution of quantum state tensors is presented. Einstein’s equations and the fine structure constant are derived. The problem of precession in classical mechanics gives an example.
[5154] vixra:1209.0005 [pdf]
Principles for Quantization of Gravitation
A summary of articles [1, 2] is given, and a new approach to quantum gravity is presented. It is based on the suppositions that masses of black holes can be smaller than Planck's mass, and that dimensionless masses of particles or black holes are a first hint toward the development of quantum gravity. A new way to develop the uncertainty principle is shown, and this principle is more fundamental than the wave function. Virtual gravitons are based on a wave function, and if it does not exist, they also do not exist. Thus the contradiction that "virtual photons exist in background space-time" is excluded. At the same time, everything is based on the principle that "gravity is not a force".
[5155] vixra:1209.0004 [pdf]
Maximum Force Derived from Special Relativity, the Equivalence Principle and the Inverse Square Law
Based on the work of Jacobson [1] and Gibbons [2], Schiller [3] has shown not only that a <i>maximum force</i> follows from general relativity, but that general relativity can be <i>derived</i> from the principle of maximum force. In the present paper an alternative derivation of maximum force is given. Inspired by the equivalence principle, the approach is based on a modification of the well known special relativity equation for the velocity acquired under uniform proper acceleration. Though in Schiller's derivation the existence of gravitational <i>horizons</i> plays a key role, in the present derivation this is not the case. In fact, though the kinematic equation that we start with does exhibit a horizon, it is not carried over to its gravitational counterpart. A few of the geometrical consequences and physical implications of this result are discussed.
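For orientation, the maximum force in Schiller's principle is conventionally $c^4/4G$ (a standard value in that literature, not taken from the present paper's alternative derivation); its numerical size is easy to check:

```python
c = 2.998e8       # speed of light, m/s
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2

F_max = c**4 / (4 * G)   # maximum force, N; about 3.0e43 N
```

The scale, some $10^{43}$ newtons, explains why no laboratory or astrophysical observation has come close to probing it directly.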
[5156] vixra:1209.0001 [pdf]
On Whether or not Non-Gravitational Interaction Can Occur in the Absence of Gravity
The Standard Model of particle physics is built upon the implied assumption that non-gravitational interaction can occur in the absence of gravity. This essay takes this implied assumption at face value and then considers the alternative assumption -- non-gravitational interaction {\it can't} occur in the absence of gravity. The alternative assumption is then discussed in terms of the dark sector of the Universe.
[5157] vixra:1208.0245 [pdf]
The Arithmetic of Binary Representations of an Even Positive Integer 2n and Its Application to the Solution of Goldbach's Binary Problem
One of the reasons why Goldbach's binary problem remained unsolved for so long is that binary representations of an even integer 2n (BR2n) in the form of a sum of two odd primes (VSTOP) are considered separately from the other BR2n. The purpose of this work is to research the connections between the different types of BR2n. To realize this purpose the author developed the "Arithmetic of binary representations of an even positive integer 2n" (ABR2n). In ABR2n four types of BR2n are defined. As shown in ABR2n, all types of BR2n are connected with each other by relations which represent the distribution of prime and composite positive integers less than 2n between them. On the basis of these relations (the axioms of ABR2n), formulas are deduced for computing the number of BR2n (NBR2n) of each type. In ABR2n the average value of the number of binary sums formed from odd prime and composite positive integers $< 2n$ (AVNBS) is also defined and computed, separately for primes and for composites. We also deduce formulas for computing the deviation of NBR2n from AVNBS. It is shown that as $n$ goes to infinity, NBR2n approaches AVNBS, which permits applying the formulas for AVNBS to the computation of NBR2n. At the end, a proof of Goldbach's binary problem is produced with the help of ABR2n. For this we apply the method of proof by contradiction: we assume that for some 2n no BR2n in the VSTOP exists, carry out computations under this condition, and arrive at a contradiction. Hence the assumption is false, and for all $2n > 2$ a BR2n in the VSTOP exists.
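As a small-scale illustration of what is being counted, the number of representations of 2n as a sum of two odd primes (the VSTOP count) can be tabulated by brute force; a minimal sketch, independent of the author's ABR2n formalism:

```python
def is_prime(k):
    """Trial-division primality test, adequate for small k."""
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    i = 3
    while i * i <= k:
        if k % i == 0:
            return False
        i += 2
    return True

def goldbach_count(two_n):
    """Number of representations 2n = p + q with odd primes p <= q."""
    return sum(1 for p in range(3, two_n // 2 + 1)
               if is_prime(p) and is_prime(two_n - p))
```

For example, `goldbach_count(10)` returns 2 (3+7 and 5+5) and `goldbach_count(100)` returns 6; the count stays positive for every even number one cares to test, which is the content of the conjecture.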
[5158] vixra:1208.0242 [pdf]
Constructive Motives and Scattering
This elementary text is for anyone interested in combinatorial methods in modern axiomatic particle physics. It focuses on the role of knots in motivic arithmetic, and the connection to category theoretic structures. Phenomenological aspects of rest mass quantum numbers are also discussed.
[5159] vixra:1208.0237 [pdf]
Can Differentiable Description of Physical Reality be Considered Complete? Toward a Complete Theory of Relativity
How to relate the physically \emph{real} reality to the logically \emph{true} abstract concepts of mathematics is nothing but pure postulate. The most basic postulates of physics concern what kind of mathematics is used to describe the most fundamental concepts of physics. The main point of relativity theories is to remove incorrect assumptions about the nature of space-time and to simplify them. There are plentiful bonuses in doing so; for example, gravity emerges as a natural consequence of the curvature of spacetime. We argue that the Einstein version of general relativity is not complete, since it cannot explain quantum phenomena. If we want to reconcile it with the quantum, we should give up one implicit assumption we tend to forget: differentiability. What would be the benefits of this change? It has many surprising consequences. We show that the weird uncertainty principle and non-commutativity become straightforward in the circumstances of non-differentiable functions: they are just the result of the divergence of the usual definition of \emph{velocity}. All the weirdness of quantum mechanics is due to our trying to make sense of nonsense. Finally, we propose a complete relativity theory in which spacetime is a non-differentiable manifold and physical law takes the same mathematical form in all coordinate systems, under arbitrary differentiable or non-differentiable coordinate transformations. Quantum phenomena emerge as a natural consequence of the non-differentiability of spacetime.
[5160] vixra:1208.0229 [pdf]
A Short Discussion of Relativistic Geometry
The relativists have not understood the geometry of Einstein’s gravitational field. They have failed to realise that the geometrical structure of spacetime manifests in the geometrical relations between the components of the metric tensor. Consequently, they have foisted upon spacetime quantities and geometrical relations which do not belong to it, producing thereby, grotesque objects, not due to Nature, but instead, to faulty thinking. The correct geometry and its consequences are described herein.
[5161] vixra:1208.0220 [pdf]
Clifford Space Gravitational Field Equations and Dark Energy
We continue with the study of Clifford-space Gravity and analyze further the Clifford space ($C$-space) generalized gravitational field equations which are obtained from a variational principle based on the generalization of the Einstein-Hilbert-Cartan action. One of the main features is that the $C$-space connection requires $torsion$ in order to have consistency with the Clifford algebraic structure associated with the curved $C$-space basis generators. Hence no spin matter is required to induce torsion since it already exists in the vacuum. The field equations in $C$-spaces associated to a Clifford algebra in $D$-dimensions are $not$ equivalent to the ordinary gravitational equations with torsion in higher $2^D$-dimensions. The most physically relevant conclusion, besides the presence of torsion in the vacuum, is the contribution of the $higher$ grade metric components $g^{\mu_1 \mu_2 ~\nu_1 \nu_2}, g^{\mu_1 \mu_2 \mu_3~\nu_1 \nu_2 \nu_3}, \dots$ of the $C$-space metric to dark energy/dark matter.
[5162] vixra:1208.0219 [pdf]
Proof of Quark Confinement and Baryon-Antibaryon Duality: I: Gauge Symmetry Breaking in Dual 4D Fractional Quantum Hall Superfluidic Space-Time
We prove quark (and antiquark) confinement for a baryon-antibaryon pair and design a well-defined, easy-to-visualize, and simplified mathematical framework for particle physics and astrophysics based on experimental data. From scratch, we assemble a dual 4D space-time topology and generalized coordinate system for the Schwarzschild metric. Space-time is equipped with "fractional quantum number order parameter fields" and topological defects for the simultaneous and spontaneous breaking of several symmetries, which are used to construct the baryon wavefunction and its corresponding antisymmetric tensor. The confined baryon-antibaryon pair is directly connected to skyrmions with "massive 'Higgs-like' scalar amplitude-excitations" and "massless Nambu-Goldstone pseudo-scalar phase-excitations". Newton's second law and Einstein's relativity are combined to define a Lagrangian with an effective potential and effective kinetic energy. We prove that our theory upgrades the prediction precision and accuracy of QCD/QED and general relativity, implements 4D versions of string theory and Witten's M-theory, and exemplifies M.C. Escher's duality.
[5163] vixra:1208.0217 [pdf]
De Combinatoriek Van De Bruijn
In memoriam N.G. de Bruijn. In this article I present some highlights of De Bruijn's contributions in combinatorics. This article does not survey his work on e.g. Penrose tilings, asymptotics or AUTOMATH; other surveys on these topics are being written by others.
[5164] vixra:1208.0203 [pdf]
Linear and Angular Momentum Spaces for Majorana Spinors
In a Majorana basis, the Dirac equation for a free spin one-half particle is a 4x4 real matrix differential equation. The solution can be a Majorana spinor, a 4x1 real column matrix, whose entries are real functions of the space-time. Can a Majorana spinor, whose entries are real functions of the space-time, describe the energy, linear and angular momenta of a free spin one-half particle? We show that it can. We show that the Majorana spinor is an irreducible representation of the double cover of the proper orthochronous Lorentz group and of the full Lorentz group. The Fourier-Majorana and Hankel-Majorana transforms are defined and related to the linear and angular momenta of a free spin one-half particle.
[5165] vixra:1208.0164 [pdf]
Quantum Model for the Direct Currents of Becker
Robert Becker proposed on the basis of his experimental work that living matter behaves as a semiconductor in a wide range of length scales, ranging from the brain scale to the scale of the entire body. Direct currents flowing only in a preferred direction would be essential for the functioning of living matter in this framework. One of the basic ideas of the TGD inspired theory of living matter is that various currents, even ionic currents, are quantal currents. The first possibility is that they are Josephson currents associated with Josephson junctions, but already this assumption more or less implies quantal versions of direct currents as well. The TGD inspired model for the nerve pulse assumed that ionic currents through the cell membrane are probably Josephson currents. If this is the case, the situation is automatically stationary and dissipation is small, as various anomalies suggest. One can criticize this assumption since the Compton length of ions for the ordinary value of Planck constant is so small that the magnetic flux tubes carrying the current through the membrane look rather long in this length scale. Therefore either the Planck constant should be rather large or one should have a non-ohmic quantum counterpart of a direct current in the case of ions and perhaps also protons in the case of the neuronal membrane: electronic and perhaps also protonic currents could still be Josephson currents. This would conform with the low dissipation rate. In the following the results related to laser induced healing, acupuncture, and DC currents are discussed first. The obvious question is whether these direct currents are actually quantal currents and whether they could be universal in living matter. A TGD inspired model for quantal direct currents is proposed and its possible implications for the model of nerve pulse are discussed.
[5166] vixra:1208.0163 [pdf]
How to Build a Quantum Computer from Magnetic Flux Tubes?
Magnetic flux tubes play a key role in the TGD inspired model of quantum biology. Could the networks of magnetic flux tubes containing dark particles with large $\hbar$ in macroscopic quantum states and carrying beams of dark photons define analogs of electric circuits? This would be rather cheap technology since no metal would be needed for wires. Dark photon beams would propagate along the flux tubes representing the analogs of optical cables and make possible communications with maximal signal velocity. I have actually made a much more radical proposal in TGD inspired quantum biology. According to this proposal, flux tube connections are dynamical and can be changed by reconnection of two magnetic flux tubes. The signal pathways A→ C and B→ D would be transformed to signal pathways A→ D and B→ C by reconnection. Reconnection actually represents a basic stringy vertex. The contraction of magnetic flux tubes by a phase transition changing the Planck constant could be fundamental in bio-catalysis since it would allow distant molecules connected by flux tubes to find each other in the molecular crowd. DNA as a topological quantum computer is an idea that I have been developing for 5 years or so. I have concentrated on the new physics realization of braids and not devoted much thought to how quantum computer programs might run in this framework. I was surprised to realize how little I know about what happens in even an ordinary computation. Instead of going immediately to Wikipedia I take the risk of publicly making a fool of myself and try to use my own brain.
[5167] vixra:1208.0162 [pdf]
Does Thermodynamics Have a Representation at the Level of Space-Time Geometry?
R. Kiehn has proposed what he calls Topological Thermodynamics (TTD) as a new formalism of thermodynamics. The basic vision is that thermodynamical equations could be translated to differential geometric statements using the notions of differential forms and Pfaffian systems. That TTD differs from TGD by a single letter is of course not in itself a reason to ask whether some relationship between them might exist. Quantum TGD can however in a well-defined sense be regarded as a square root of thermodynamics in zero energy ontology (ZEO), and this leads one to ask seriously whether TTD might help to understand TGD at a deeper level. The thermodynamical interpretation of space-time dynamics would obviously generalize black hole thermodynamics to the TGD framework, and already earlier some concrete proposals have been made in this direction. This raises several questions. Could the preferred extremals of Kähler action code for the square root of thermodynamics? Could the induced Kähler gauge potential and Kähler form (essentially a Maxwell field) have a formal thermodynamic interpretation? The vacuum degeneracy of Kähler action implies 4-D spin glass degeneracy and strongly suggests the failure of strict determinism for the dynamics of Kähler action for non-vacuum extremals too. Could thermodynamical irreversibility and a preferred arrow of time allow to characterize the notion of preferred extremal more sharply? It indeed turns out that one can translate Kiehn's notions to the TGD framework rather straightforwardly. Kiehn's work 1-form corresponds to the induced Kähler gauge potential, implying that the vanishing of the instanton density for the Kähler form becomes a criterion of reversibility, and irreversibility is localized on the (4-D) "lines" of generalized Feynman diagrams, which correspond to space-like signature of the induced metric.
The heat produced in a given generalized Feynman diagram is just the integral of the instanton density, and the condition that the arrow of geometric time has a definite sign classically fixes the sign of the produced heat to be positive. In this picture the preferred extremals of Kähler action would allow a trinity of interpretations as non-linear Maxwellian dynamics, thermodynamics, and integrable hydrodynamics.
[5168] vixra:1208.0156 [pdf]
Is it Really Higgs?
The discovery of a new spinless particle at the LHC has dominated the discussions in physics blogs during July 2012. Quite many bloggers identify without hesitation the new particle as the long sought for Higgs, although some aspects of the data do not encourage the interpretation as the standard model Higgs or possibly its SUSY variant. Maybe the reason is that it is rather difficult to imagine any other interpretation. In this article the TGD based interpretation as a pion-like state of a scaled-up variant of hadron physics is discussed, explaining also why the standard model Higgs - by definition the provider of fermion masses - is not needed. Essentially one assumption, the separate conservation of quark and lepton numbers realized in terms of 8-D chiral invariance, excludes Higgs like states in this sense as well as standard N=1 SUSY. One can however consider Higgs like particles giving masses to weak gauge bosons: the motivation comes from the correctly predicted group theoretical W/Z mass ratio. The pion of M<sub>89</sub> hadron physics is the TGD proposal for a state behaving like Higgs, and its decays via instanton coupling mimic the decays of Higgs to gauge boson pairs. For this option charged Higgs like states are also a prediction. The instanton coupling can however generate a vacuum expectation value of the pion, and this indeed happens in the model for the leptopion. This would lead to the counterpart of the Higgs mechanism with weak bosons "eating" three components of Higgs. This is certainly a problem. The solution is that at the microscopic level the instanton density can be non-vanishing only in Euclidian regions representing lines of generalized Feynman diagrams. It is the Euclidian pion - a flux tube connecting opposite throats of a wormhole contact - which develops a vacuum expectation, whereas the ordinary pion is Minkowskian, corresponds to a flux tube connecting throats of separate wormhole contacts, and cannot develop a vacuum expectation.
This identification could explain the failure to find the decays to τ pairs and also the excess of two-gamma decays. The decays to gauge boson pairs would be caused by the coupling of the pion-like state to the instanton density for electro-weak gauge fields. Also a connection with the dark matter searches reporting a signal at 130 GeV and possibly also at 110 GeV suggests itself: maybe these signals also correspond to pion-like states.
[5169] vixra:1208.0149 [pdf]
Access Control for Healthcare Data Using Extended XACML-SRBAC Model
In the modern health service, data are accessed by doctors and nurses using mobile phones, Personal Digital Assistants, and other electronic handheld devices. An individual's health related information is normally stored in a central health repository and it can be accessed only by authorized doctors. However, this data is prone to be exposed to a number of mobile attacks while being accessed. This paper proposes a framework using XACML and XML security to support secure, embedded and fine-grained access control policies to control the privacy and data access of health service data accessed through handheld devices. We also consider one of the models, namely Spatial Role-based access control (SRBAC), and model it using XACML.
[5170] vixra:1208.0082 [pdf]
A Cryptosystem for XML Documents
In this paper, we propose a cryptosystem (encryption/decryption) for XML data using the Vigenère cipher algorithm and the El Gamal cryptosystem. Such a system is designed to achieve some security aspects such as confidentiality, authentication, integrity, and non-repudiation. We used XML data in our experimental work. Since the Vigenère cipher is not monoalphabetic, the number of possible keywords of length m is 26^m, so even for relatively small values of m, an exhaustive key search would require a long time.
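A minimal sketch of the Vigenère layer for uppercase text (the El Gamal stage and the XML handling are omitted; the function name is illustrative, not the paper's implementation):

```python
import string

ALPHA = string.ascii_uppercase

def vigenere(text, key, decrypt=False):
    """Shift each letter of `text` by the matching key letter (A=0..Z=25)."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        shift = ALPHA.index(key[i % len(key)])
        out.append(ALPHA[(ALPHA.index(ch) + sign * shift) % 26])
    return "".join(out)

# The keyspace grows as 26^m in the keyword length m:
# already m = 5 gives 26**5 = 11,881,376 candidate keywords.
```

For instance, `vigenere("ATTACKATDAWN", "LEMON")` gives `"LXFOPVEFRNHR"`, and decrypting with the same key recovers the plaintext.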
[5171] vixra:1208.0080 [pdf]
Storing XML Documents and XML Policies in Relational Databases
In this paper, we explore how to support security models for XML documents by using relational databases. Our model is based on the model in [6], but we use our own algorithm to store the XML documents in relational databases.
[5172] vixra:1208.0077 [pdf]
Dark Energy in M-Theory
Dark Energy is yet to be predicted by any model that stands out in its simplicity as an obvious choice for unified investigative effort. It is widely accepted that a new paradigm is needed to unify the standard cosmological model (SCM) and the minimal standard model (MSM). The purpose of this article is to construct a modified cosmological model (MCM) that predicts dark energy and contains this unity. Following the program of Penrose, geometry rather than differential equations will be the mathematical tool. Analytical methods from loop quantum cosmology (LQC) are examined in the context of the Poincaré conjecture. The longstanding problem of an external time with which to evolve quantum gravity is resolved. The supernovae and WMAP data are reexamined in this framework. No exotic particles or changes to General Relativity are introduced. The MCM predicts dark energy even in its Newtonian limit while preserving all observational results. In its General Relativistic limit, the MCM describes dark energy as an inverse radial spaghettification process. Observable predictions for the MCM are offered. AdS/CFT correspondence is discussed. The MCM is the 10 dimensional union of de Sitter and anti-de Sitter space and has M-theoretical application to the five string theories which lack a unifying conceptual component. This component unifies gravitation and electromagnetism.
[5173] vixra:1208.0074 [pdf]
Will Theory of Everything Save Earth?
The story is a mix of futurology, science fiction, new technical ideas, science, the science ideas of the author, and philosophy. The style is similar to that in the paper of Makela. Using the example of a crisis of the overpopulated human species in the future, it is described how to develop a theory of everything, how to give more sense to amateur science, and how the reactions of professional science to amateur science are much too inexact. Some ideas in the story are predictions of the author or are supported by him, and some are only for the course of the story or for setting up special situations for thought experiments. Names in the story are taken from science, but they are not, or almost not, connected with their bearers' thoughts and descriptions in the story. An exception is James Randi.
[5174] vixra:1208.0066 [pdf]
The Mythos of a Theory of Everything
A fundamental assumption embedded in our current worldview is that there exists an as yet undiscovered `theory of everything', a final unified framework according to which all interactions in nature are but different manifestations of the same underlying thing. This paper argues that this assumption is wrong because our current distinct fundamental theories of nature already have mutually exclusive domains of validity, though under our current worldview this is far from obvious. As a concrete example, it is shown that if the concepts of mass in general relativity and quantum theory are distinct in a specific way, their domains become non-overlapping. The key to recognizing the boundaries of the domains of validity of our fundamental theories is an aspect of the frame of reference of an observer which has not yet been appreciated in mainstream physics. This aspect, called the dimensional frame of reference (DFR), depends on the number of length dimensions that constitute an observer's frame. Edwin Abbott's Flatland is used as a point of departure from which to provide a gentle introduction to the applications of this idea. Finally, a metatheory of nature is proposed to encompass the collection of theories of nature with mutually exclusive domains of validity.
[5175] vixra:1208.0058 [pdf]
Gravitational Waves
The proposed theory of gravitation is summarized, with a focus on dynamics. The linearized field equations are applied to gravitational waves. The theory predicts that longitudinal waves would be detected, which exert a force in the direction of propagation. It also explains the failure at LIGO and elsewhere to find transverse gravitational waves.
[5176] vixra:1208.0057 [pdf]
Derivation of Three Fundamental Masses and Large Numbers Hypothesis by Dimensional Analysis
Three mass-dimension quantities have been derived by dimensional analysis from fundamental constants: the speed of light in vacuum ($c$), the gravitational constant ($G$), the reduced Planck constant ($\hbar$) and the Hubble constant ($H$). The extremely small mass $m_1 \sim \hbar H/c^2 \sim 10^{-33}$ eV has been identified with the Hubble mass $m_H$, which seems close to the graviton mass $m_G$. The enormous mass $m_2 \sim c^3/(GH) \sim 10^{53}$ kg is close to the mass of the Hubble sphere and practically coincides with the Hoyle-Carvalho formula for the mass of the observable universe. The third mass $m_3 \sim (H\hbar^3/G^2)^{1/5} \sim 10^7$ GeV could not be unambiguously identified at the present time. Besides, the remarkable fact has been found that the Planck mass $m_{Pl} \sim \sqrt{\hbar c/G}$ is the geometric mean of the extreme masses $m_1$ and $m_2$. Finally, the substantial large number $N = \sqrt{c^5/(2G\hbar H^2)} \approx 5.73\times 10^{60}$ has been derived, relating cosmological parameters (mass, density, age and size of the observable universe) to fundamental microscopic properties of matter (Planck units and the Hubble mass). Thus a precise formulation and proof of the Large Numbers Hypothesis (LNH) has been found.
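The orders of magnitude quoted above are easy to check numerically; a quick sketch with CODATA constants and an assumed H of 70 km/s/Mpc (the exact value of N depends on the adopted H):

```python
import math

c, G, hbar = 2.998e8, 6.674e-11, 1.055e-34   # SI units
H = 70 * 1e3 / 3.086e22                      # Hubble constant in s^-1
eV = 1.602e-19                               # joules per eV

m1 = hbar * H / c**2                 # Hubble mass, kg
m2 = c**3 / (G * H)                  # mass of the Hubble sphere, kg
m_pl = math.sqrt(hbar * c / G)       # Planck mass, kg

m1_eV = m1 * c**2 / eV               # ~1e-33 eV, as quoted
N = math.sqrt(c**5 / (2 * G * hbar * H**2))

# The geometric-mean relation is an identity, not a coincidence:
# (hbar*H/c^2) * (c^3/(G*H)) = hbar*c/G = m_pl^2 exactly.
assert abs(math.sqrt(m1 * m2) / m_pl - 1) < 1e-9
```

With these inputs m1 comes out near $1.5\times10^{-33}$ eV, m2 near $1.8\times10^{53}$ kg and N near $5.8\times10^{60}$, consistent with the abstract's figures.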
[5177] vixra:1208.0055 [pdf]
Two Kinds of Potential Difference for a Capacitor
It is shown that, contrary to the current belief that the electrostatic potential difference between the two conductors of a capacitor is the same as the potential difference between the two poles of the battery which has charged it, the first is two times greater than the second. We see the influence of this in the experiments performed for the determination of the charge and mass of the electron.
[5178] vixra:1208.0049 [pdf]
A Multi-Feature Information Fusion Recognition Method for Aircraft Image Targets
A multi-feature fusion recognition algorithm for aircraft image targets is proposed, based on probabilistic neural networks (PNN) and DSmT (Dezert-Smarandache theory) reasoning. For the multiple extracted image features, the information provided by each feature of the image target is combined using data-fusion ideas. First, the image is binarized as preprocessing, and five features are extracted: Hu moments, the normalized moment of inertia, affine invariant moments, contour discretization parameters, and singular-value features. Second, to address the difficulty of constructing basic belief assignments in DSmT, a PNN is used to build a recognition-rate matrix, through which belief is assigned to the evidence sources. Then the DSmT combination rules are applied at the decision level to fuse the evidence and complete the recognition of the aircraft target. Finally, under small distortions of the target image, comparative experiments between the proposed multi-feature information fusion method and single-feature methods show that, under equal conditions, the proposed method greatly improves the correct recognition rate, meets real-time requirements, has effective rejection capability, and is insensitive to the target image size. Even under large distortions, the recognition rate reaches 89.3%.
[5179] vixra:1208.0041 [pdf]
On the W and Z Masses
Scalar and vector fields are coupled in a gauge invariant manner, such as to form massive vector fields. In this, there is no condensate or vacuum expectation value. Transverse and longitudinal solutions are found for the W and Z bosons. They satisfy the nonlinear cubic wave equation. Total energy and momentum are calculated, and this determines the mass ratio m_W/m_Z.
[5180] vixra:1208.0034 [pdf]
The Direction of Gravity
How much do we really know about gravity? Though our knowledge is sufficient to send people to the Moon, there is a large and fundamental gap in our empirical data; and there are basic questions about gravity that are rarely even asked, and so remain unanswered. The gap concerns the falling of test objects near the centers of larger gravitating bodies. Newton's theory of gravity and Einstein's theory, General Relativity, though giving essentially the same answers, describe the problem quite differently. A discussion of this difference--which emphasizes the role of <i>clock rates</i> in Einstein's theory--evokes a question concerning the most basic characteristic of any theory of gravity: Is the motion due to gravity primarily downward or upward; i.e., inward or outward? Have our accepted theories of gravity determined this direction correctly? The answer to this question may seem obvious. We will find, however, that we don't really know. And most importantly, it is emphasized that we can get an unequivocal answer by performing a relatively simple laboratory experiment.
[5181] vixra:1208.0024 [pdf]
Eight Assumptions of Modern Physics Which Are Not Fundamental
This essay considers eight basic physical assumptions which are not fundamental: (i) spacetime as the arena for physics, (ii) unitarity of the dynamics, (iii) microscopic time-reversibility, (iv) the need for black hole thermodynamics, (v) state vectors as the general description of quantum states, (vi) general relativity as a field theory, (vii) dark matter as real matter, and (viii) cosmological homogeneity. This selection ranges from micro-physics to cosmology, but is not exhaustive.
[5182] vixra:1208.0022 [pdf]
On Legendre's, Brocard's, Andrica's, and Oppermann's Conjectures
Let $n\in\mathbb{Z}^+$. Is it true that every sequence of $n$ consecutive integers greater than $n^2$ and smaller than $(n+1)^2$ contains at least one prime number? In this paper we show that this is actually the case for every $n \leq 1,193,806,023$. In addition, we prove that a positive answer to the previous question for all $n$ would imply Legendre's, Brocard's, Andrica's, and Oppermann's conjectures, as well as the assumption that for every $n$ there is always a prime number in the interval $[n,n+2\lfloor\sqrt{n}\rfloor-1]$.
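The computational claim (verified by the authors up to n = 1,193,806,023) can be reproduced on a small scale; a minimal brute-force sketch:

```python
def is_prime(k):
    """Trial-division primality test, adequate for small k."""
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    i = 3
    while i * i <= k:
        if k % i == 0:
            return False
        i += 2
    return True

def every_window_has_prime(n):
    """Check that every run of n consecutive integers strictly between
    n^2 and (n+1)^2 contains at least one prime."""
    lo, hi = n * n + 1, (n + 1) ** 2 - 1   # candidate integers: lo..hi
    return all(any(is_prime(k) for k in range(start, start + n))
               for start in range(lo, hi - n + 2))
```

Running `every_window_has_prime(n)` for n up to a hundred or so returns True throughout, consistent with the paper's much larger verification; note that a positive answer for all n would give Legendre's conjecture, since the whole interval $(n^2,(n+1)^2)$ is itself covered by such windows.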
[5183] vixra:1208.0002 [pdf]
The Recent Vision About Preferred Extremals and Solutions of the Modified Dirac Equation
During the years several approaches to what the preferred extremals of Kähler action and the solutions of the modified Dirac equation could be have been proposed, and the challenge is to see whether at least some of these approaches are consistent with each other. It is good to list the various approaches first. <OL> <LI> For preferred extremals the generalization of conformal invariance to the 4-D situation is a very attractive approach and leads to concrete conditions formally similar to those encountered in string models. The approach based on basic heuristics for massless equations, on effective 3-dimensionality, and on the weak form of electric-magnetic duality is also promising. An alternative approach is inspired by number theoretical considerations and identifies space-time surfaces as associative or co-associative sub-manifolds of the octonionic imbedding space. <LI> There are also several approaches for solving the modified Dirac equation. The most promising approach assumes that the solutions are restricted to 2-D string world sheets and/or partonic 2-surfaces. This strange looking view is a rather natural consequence of the number theoretic vision. The condition stating that electric charge is conserved for preferred extremals is an alternative very promising approach. </OL> The question whether these various approaches are mutually consistent is discussed. It indeed turns out that the approach based on the conservation of electric charge leads, under rather general assumptions, to the proposal that the solutions of the modified Dirac equation are localized on 2-dimensional string world sheets and/or partonic 2-surfaces. Einstein's equations are satisfied for the preferred extremals, and this implies that the earlier proposal for the realization of the Equivalence Principle is not needed. This leads to considerable progress in the understanding of super Virasoro representations for the super-symplectic and super-Kac-Moody algebras.
In particular, the proposal is that super-Kac-Moody currents assignable to string world sheets define duals of gauge potentials and their generalization for gravitons: in the approximation that the gauge group is Abelian - motivated by the notion of finite measurement resolution - the exponents for the sum of KM charges would define non-integrable phase factors. One can also identify the Yangian as the algebra generated by these charges. The approach also allows one to understand the special role of the right-handed neutrino in SUSY according to TGD.
[5184] vixra:1207.0116 [pdf]
A Model of Preons with Short Distance Scalar Interaction
A preon model is proposed based on spin 1/2 fermion and spin 0 boson constituents. They interact via a massive scalar field, which is tentatively considered a phenomenological model of quantum gravity. Implications for generations, heavy boson states and dark matter are briefly discussed.
[5185] vixra:1207.0104 [pdf]
Supersymmetry, Extra Dimensions, RG Running of the Higgs Quartic Coupling of MSSM/ NMSSM Models and the Seven Faces of the God Particle
In this paper, we focus on the calculation of the supersymmetric term for obtaining the mass of the lightest Higgs boson; this provides the scale of supersymmetry. Similarly, entering the exact angle β allows us to calculate the masses of the remaining four Higgs bosons, mA, mH0, mH±, and the stop mass of 422.9 GeV. For this, the well-known model of a one-dimensional string, or one-dimensional box, was used. We believe that the results obtained, consistent with observation, carry a mathematically satisfactory string model of an n-dimensional string, well known to all physicists. The main novelty of this model is the introduction of dimensionless ratios between the Planck length and the n-dimensional length, such as the length of the string. An extension of the Heisenberg principle to extra dimensions leads to a principle of equivalence between mass, time and space, so that mass is shown to be actually another dimension. This principle of equivalence, so described, goes further and allows the equivalence between spin, probability, fluctuations and dimensions. Successive breakings of the symmetries and topology-geometry involved appear to be the cause of the distinguishability, to the observer, between spins, number of particles, dimensions, etc.
[5186] vixra:1207.0092 [pdf]
Energy and Spacetime
A modified framework for special relativity is proposed in which mass is incorporated in the temporal components of the Minkowski metric and a particle may convert between massive and massless states. The speed of such a particle therefore changes between subluminal and luminal. The well-known equations for relativistic energy of massive and massless particles do not conflict with the notion that a massive particle can convert all of its mass into energy, thereby removing the problem of requiring infinite energy to reach the speed of light barrier. This enables the production of gravitons, which in collisions transfer their momentum and energy to matter, thereby giving rise to the gravitational force. The conversion of particles between massive and massless states suggests that time, and hence the universe, is eternal.
[5187] vixra:1207.0085 [pdf]
Progress in Clifford Space Gravity
Clifford-space Gravity is revisited and new results are found. The Clifford space ($C$-space) generalized gravitational field equations are obtained from a variational principle based on an extension of the Einstein-Hilbert-Cartan action. One of the main results of this work is that the $C$-space connection requires torsion in order to have consistency between the Clifford algebraic structure and the zero nonmetricity condition $ \nabla_K g^{MN} = 0 $. A discussion of the cosmological constant and bi-metric theories of gravity follows. We continue by pointing out the relations of Clifford space gravity to Lanczos-Lovelock-Cartan (LLC) higher curvature gravity with torsion. We finalize by pointing out that $C$-space gravity involves higher spins beyond spin $2$, and argue why one could view the LLC higher curvature actions, and other extended gravitational theories based on $ f ( R ), f ( R_{\mu \nu} ), ... $ actions, for polynomial-valued functions, as mere $effective$ actions after integrating the $C$-space gravitational action with respect to all the poly-coordinates, except the vectorial ones $ x^\mu$.
[5188] vixra:1207.0071 [pdf]
Multiplication Modulo n Along The Primorials With Its Differences And Variations Applied To The Study Of The Distributions Of Prime Number Gaps. A.K.A. Introduction To The S Model
The sequence of sets Z_n under multiplication, where n is a primorial, gives us a surprisingly simple and elegant tool to investigate many properties of the prime numbers and their distributions through analysis of their gaps. A natural reason to study multiplication at these boundaries is that a construction exists which evolves these sets from one primorial boundary to the next, via the sieve of Eratosthenes, giving us just-in-time prime sieving. To this we add a parallel study of gap sets of various lengths and their evolution, all of which together informs what we call the S model. We show by construction that there exists for each prime number P a local finite probability distribution, and it is surprisingly well behaved. That is, we show that the vacuum, i.e. the gaps, has deep structure. We use this framework to prove distributional properties of the prime numbers conjectured by Legendre, Hardy and Littlewood and others. We also demonstrate a novel proof of the Green-Tao theorem. Furthermore we prove the Riemann hypothesis and show that the results are perhaps surprising. We go on to use the S model to predict novel structure within the prime gaps, which leads to a new Chebyshev-type bias we honorifically name the Chebyshev gap bias. We also probe deeper behavior of the distribution of prime numbers via ultra-long-scale oscillations about the scale of numbers known as Skewes numbers.
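As an illustrative sketch (ours, not the paper's), the reduced residues modulo a primorial and their cyclic gaps - the raw material of the gap analysis described above - can be computed directly; the function names and the choice of the primorial 30 = 2·3·5 are ours:

```python
from math import gcd

def units_mod(n):
    """Reduced residues (elements of the multiplicative group) of Z_n."""
    return [r for r in range(1, n + 1) if gcd(r, n) == 1]

def cyclic_gaps(units, n):
    """Gaps between consecutive units, wrapping around modulo n."""
    return [(units[(i + 1) % len(units)] - units[i]) % n
            for i in range(len(units))]

# Units modulo the primorial 30 = 2*3*5; every prime p with 5 < p < 30
# falls into one of these residue classes (the sieve-of-Eratosthenes view).
u30 = units_mod(30)         # [1, 7, 11, 13, 17, 19, 23, 29]
g30 = cyclic_gaps(u30, 30)  # [6, 4, 2, 4, 2, 4, 6, 2]
```

Evolving from one primorial boundary to the next multiplies the count of units by (p - 1) for the next prime p (Euler's totient), which is one concrete sense in which the gap sets "evolve" along the primorials.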
[5189] vixra:1207.0068 [pdf]
Noncommutative Geometry of AdS Coordinates on a D-Brane
In this short paper the noncommutative geometry and quantization of branes and of AdS spacetime are discussed. The question in part addresses an open problem left by this author in [1] on how branes are generated by stringy physics. The breaking of an open type I string into two strings generates a nascent brane at the new endpoints, with inflationary cosmologies. This was left as a conjecture at the end of that paper, on the role of quantum critical points in the onset of inflationary cosmology. The noncommutative geometry of the clock and lapse functions for the AdS-brane is derived, as is the number of degrees of freedom which appear. The role of the AdS spacetime, or in particular its boundary, in cosmology is discussed in an elementary regularization scheme for the cosmological constant on the boundary. This is compared to schemes of conformal compactification of the AdS spacetime and the Heisenberg group.
[5190] vixra:1207.0059 [pdf]
Fractional Circuit Elements: Memristors, Memcapacitors, Meminductors and Beyond
The memristor was postulated by Chua in 1971, by analyzing mathematical relations between pairs of fundamental circuit variables, and was realized by HP laboratories in 2008. This relation can be generalized to include any class of two-terminal devices whose properties depend on the state and history of the system. These are called memristive systems, including current-voltage for the memristor, charge-voltage for the memcapacitor, and current-flux for the meminductor. This paper further enlarges the family of elementary circuit elements, in order to model many irregular and exotic nondifferentiable phenomena which are common and dominant in the nonlinear dynamics of many biological, molecular and nanodevices.
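A minimal numerical sketch of the memristive idea (not the paper's fractional elements): a charge-controlled memristor whose resistance depends on the history of the current through it. The linear-dopant-drift toy model and all parameter values below are invented for illustration:

```python
import math

# Toy charge-controlled memristor: the memristance interpolates between
# R_ON and R_OFF as charge q flows through the device (values invented).
R_ON, R_OFF, Q_MAX = 100.0, 16000.0, 1e-2

def memristance(q):
    w = min(max(q / Q_MAX, 0.0), 1.0)      # normalized internal state in [0, 1]
    return R_OFF + (R_ON - R_OFF) * w

# Drive with a sinusoidal current and integrate q = \int i dt (Euler step).
dt, q, trace = 1e-4, 0.0, []
for k in range(20000):
    t = k * dt
    i = 1e-3 * math.sin(2 * math.pi * t)   # 1 Hz, 1 mA amplitude
    q += i * dt
    trace.append((i, memristance(q) * i))  # (current, voltage) samples
```

Because v = M(q)·i, the voltage vanishes whenever the current does: the i-v curve is the pinched hysteresis loop that is the fingerprint of memristive systems.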
[5191] vixra:1207.0057 [pdf]
Extended PCR Rules for Dynamic Frames
In most classical fusion problems modelled with belief functions, the frame of discernment is considered static. This means that the set of elements in the frame and the underlying integrity constraints of the frame are fixed forever and do not change with time. In some applications, as in target tracking for example, the use of such an invariant frame is not very appropriate because the frame can actually change with time. So it is necessary to adapt the Proportional Conflict Redistribution fusion rules (PCR5 and PCR6) to work with dynamical frames. In this paper, we propose an extension of the PCR5 and PCR6 rules for working in a frame having some non-existential integrity constraints. Such constraints on the frame can arise in tracking applications, for example through the destruction of targets. We show through very simple examples how these new rules can be used for the belief revision process.
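For reference, a hedged sketch of the standard PCR5 rule on a small static frame (the paper's extension to dynamic frames with non-existential constraints is not reproduced here); the mass values in the example are ours:

```python
from itertools import product

def pcr5(m1, m2):
    """PCR5 combination of two bodies of evidence.

    m1, m2: dicts mapping frozenset focal elements to masses.
    Conjunctive consensus first; then each partial conflict m1(X)*m2(Y)
    with X ∩ Y = ∅ is redistributed back to X and Y proportionally to
    m1(X) and m2(Y)."""
    out = {}
    for (X, a), (Y, b) in product(m1.items(), m2.items()):
        Z = X & Y
        if Z:
            out[Z] = out.get(Z, 0.0) + a * b
        else:  # proportional conflict redistribution
            d = a + b
            out[X] = out.get(X, 0.0) + a * a * b / d
            out[Y] = out.get(Y, 0.0) + b * b * a / d
    return out

A, B = frozenset({'A'}), frozenset({'B'})
m = pcr5({A: 0.6, B: 0.4}, {A: 0.7, B: 0.3})
```

Unlike Dempster's rule, no mass is discarded by normalization: the redistributed output still sums to one, which is what makes the rule attractive for revision when the frame itself changes.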
[5192] vixra:1207.0052 [pdf]
The Koide Formula and Its Analogues
The mathematics of analogues to the Koide formula is explored. In this context, a naturally occurring alternative to the Koide formula is shown to fit not only the tau-electron mass ratio, but also the muon-electron mass ratio.
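The Koide relation itself is easy to check numerically; the sketch below (ours, not from the paper) uses approximate charged-lepton masses in MeV:

```python
from math import sqrt

# Charged-lepton masses in MeV (approximate experimental values).
m_e, m_mu, m_tau = 0.5110, 105.658, 1776.86

# Koide's relation: Q should be very close to 2/3.
Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
```

With these inputs Q agrees with 2/3 to about five decimal places, which is the striking fit that motivates the search for analogous relations.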
[5193] vixra:1207.0050 [pdf]
Graded Tensor Products and the Problem of Tensor Grade Computation and Reduction
We consider a non-negative-integer-valued grading function on tensor products which aims to measure the extent of entanglement. This grading, unlike most other measures of entanglement, is defined exclusively in terms of the tensor product. It gives a possibility to approach the notion of entanglement in a more refined manner, as the non-entangled elements are those of grade zero or one, while the remaining elements, with grade at least two, are entangled; the higher its grade, the more entangled an element of the tensor product is. The problem of computing and reducing the grade is studied in products of arbitrary vector spaces over arbitrary fields.
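In the simplest bipartite case V ⊗ W, such a grade is the tensor rank (Schmidt rank): the minimal number of simple tensors needed to write an element. The paper treats arbitrary fields and more general products; the sketch below (our illustration) computes the rank over ℂ via matrix rank:

```python
import numpy as np

def bipartite_grade(vec, dim_a, dim_b):
    """Tensor rank of a vector in C^dim_a ⊗ C^dim_b: reshape into a
    dim_a x dim_b matrix and take its rank (the Schmidt rank)."""
    return int(np.linalg.matrix_rank(np.asarray(vec).reshape(dim_a, dim_b)))

product_state = [1, 0, 0, 0]                          # |0>⊗|0>: grade 1
bell_state = [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]   # entangled: grade 2
```

Grade 0 or 1 (the zero vector or a simple tensor) means non-entangled; the Bell state needs two simple tensors, so its grade is 2.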
[5194] vixra:1207.0047 [pdf]
Hubble Volume, Cosmic Variable Proton Mass and the CMB Radiation Energy Density
It is noticed that, in the accelerating universe, the proton mass, the proton size and the strong coupling constant are cosmic variable constants. Independent of the cosmic redshift and CMBR observations, cosmic acceleration can be verified by measuring the 'rate of decrease' in the proton mass. Cosmic initial conditions can be addressed with the Planck mass (Mp) and the Coulomb mass (MC). Based on Mach's principle and the characteristic Hubble mass (M0) of the present universe, it is noticed that, in the Hubble volume, the critical density, the observed matter density and the thermal energy density are in geometric series, and the geometric ratio is 1 + ln(M0/MC). In this connection, it can be suggested that, in understanding the basics of grand unification and cosmology, the cosmic Hubble volume can be given a chance.
[5195] vixra:1207.0044 [pdf]
Atom, Fine Structure Ratio and the Universe
In modern cosmology, the shape of the universe is flat. In between closed space and flat space there is one compromise: the 'Hubble volume'. Note that the Hubble volume is only a theoretical, spherical, expanding volume and is virtual. From the Hubble volume one can estimate the Hubble mass. By coupling the Hubble mass with Mach's principle, one can understand the origin of cosmic and atomic physical parameters. Considering Mach's principle and the Hubble mass, in this paper an attempt is made to understand the origin of the strong and electromagnetic interactions.
[5196] vixra:1207.0022 [pdf]
Generalized Quantum Impedances: A Background Independent Model for the Unstable Particles
The discovery of exact impedance quantization in the quantum Hall effect was greatly facilitated by scale invariance. This letter explores the possibility that quantum impedances may be generalized, defined not just for the Lorentz force and the quantum Hall effect, but rather for all forces, resulting in a precisely structured network of scale-dependent and scale-invariant impedances. If the concept of generalized quantum impedances correctly describes the physical world, then such impedances govern how energy is transmitted and reflected, how the hydrogen atom is ionized by a 13.6 eV photon, or why the pi-zero branching ratio is what it is. An impedance model of the electron is presented, and explored as a model for the unstable particles.
[5197] vixra:1207.0019 [pdf]
On Communicating the Value of Basic Physics Research to the Public
The argument that basic scientific research is of value is a strong one, but it may not always be communicated very well to the public. As a result, even though the public overwhelmingly supports science, it may not always support important basic physics research projects. This can have definite consequences for policy decisions, as happened when the Superconducting Super Collider was shut down after 2 billion dollars had been poured into it. This article makes six recommendations, directed primarily at physicists but also more generally applicable to all scientists, to help communicate the value of basic physics research to the public more effectively. Doing this is especially important now, as basic research budgets face ever increasing threats of budget cuts.
[5198] vixra:1207.0016 [pdf]
On a New Position Space Doubly Special Relativity Theory
The general consensus amongst researchers is that it is generally difficult to obtain a position space Lorentz invariant Doubly Special Relativity (DSR) theory. In this reading, we propose such a theory. The Lorentz transformations are modified such that the resultant theory has not one but two invariants -- the speed of light c = 2.99792458 × 10^8 m/s, and a minimum length. Actually, we achieve our desire by infusing Heisenberg's quantum mechanical uncertainty principle into the fabric of Minkowski spacetime. In this new theory, it is seen that under extreme quantum conditions it should be possible to exceed the light-speed barrier without any limit. It should be stated that this theory has been developed more as a mathematical exercise, to obtain as physically reasonable a position space DSR theory as possible that is Lorentz invariant. In the low energy regime, the theory gives the same predictions as Einstein's Special Theory of Relativity (STR).
[5199] vixra:1207.0004 [pdf]
TGD View about Living Matter and Remote Mental Interactions
The book is intended to be a concise summary of the applications of TGD in consciousness theory, living matter, and remote mental interactions. The book begins with a very concise summary of TGD in the hope that it might help the mathematically oriented reader to gain an overall view: the notion of many-sheeted space-time and topological field quantization are of special importance for understanding biology in the TGD Universe.

Quantum jump identified as a moment of consciousness, the notion of self reducing to that of quantum jump, Negentropy Maximization Principle (NMP), zero energy ontology (ZEO), and the new view about the relationship between subjective and experienced time are essential pieces of the vision. The identification of p-adic physics as a correlate for cognition, the view about life as something in the intersection of the real and p-adic worlds - matter and cognition - and the notion of number-theoretic negentropy are central in the biological applications.

The magnetic body carrying dark matter and forming an onion-like structure, with layers characterized by large values of Planck constant, is a key concept of TGD-inspired biology. The magnetic body is identified as an intentional agent using the biological body as sensory receptor and motor instrument. The role of the magnetic body in various biological functions, and the new view about metabolism and its relationship to the generation of negentropic entanglement, are discussed.

The applications to neuroscience are touched upon. The general model for the sensory qualia and for the generation of attention and qualia leads to a view resembling the Orch OR of Penrose and Hameroff. Models for nerve pulse generation and EEG - or rather a fractal hierarchy of XEGs generalizing it, serving as a tool in communications between the biological body and the magnetic body - are proposed.
The mechanisms that make it possible for the magnetic body to control and receive sensory information from the biological body can be regarded as those of remote mental interactions. Therefore remote mental interactions are also briefly considered in the proposed conceptual framework. Tests for the general vision are also considered.
[5200] vixra:1207.0002 [pdf]
Stress-Energy Tensor Beyond the Belinfante and Rosenfeld Formula
The physical importance of the stress-energy tensor is twofold: on the one hand, it is a fundamental quantity appearing in the equations of mechanics; on the other hand, this tensor is the source of the gravitational field. Due to this importance, two different procedures have been developed to find this tensor for a given physical system. The first of the systematic procedures gives the canonical tensor, but this tensor is not usually symmetric and it is repaired, via the Belinfante and Rosenfeld formula, to give the Hilbert tensor associated with the second procedure. After showing the physical deficiencies of the canonical and Hilbert tensors, we introduce a new and generalized tensor $\Theta^{\mu\nu}$ without such deficiencies. This $\Theta^{\mu\nu}$ is (i) symmetric, (ii) conserved, (iii) in agreement with the energy and momentum of a system of charges interacting via NILI potentials $\Lambda^\mu(R(t))$, and (iv) a proper generalization of the Belinfante and Rosenfeld formula, with the Hilbert tensor being a special case of $\Theta^{\mu\nu}$.
[5201] vixra:1207.0001 [pdf]
Chalmers Science School for International and Swedish Students
This paper describes the implementation of competitively based and highly structured scientific programs in the framework of a science school at Chalmers University of Technology. We discuss the implementation, advantages and disadvantages of those programs and the requirements students and supervisors should fulfill, whether from academia or from industry, and we present the selection method for participants. We also reflect on the results of a survey conducted recently among Chalmers academic staff. We believe that the installation of this science school at Chalmers brings many advantages to students, starting with a better understanding of industry practices and ending with an easier path to recruitment. It further helps employers in efficiently administering the process of hiring students and in discovering technological breakthroughs. Moreover, it enables the university to establish better connections with industry and later use its feedback to enhance academic courses and their content. Our method derives from the successful practices of a pioneering science school at the Israeli Weizmann Institute of Science, namely the Kupcinet-Getz Science School for Israeli and International Students. The method further draws on practices from published literature relevant to our discussion. We aspire for Chalmers Science School to be a blueprint for any emerging or evolving science school at any educational institute worldwide.
[5202] vixra:1206.0105 [pdf]
From Maxwell's Displacement Current to Superconducting Current
We investigate the nature of the superconducting current starting from Maxwell's displacement current. We argue that the conduction current density term of Maxwell's equations is physically untrue, and that it should be eliminated from the equations. Essentially, both the superconducting current and the conduction current originate from Maxwell's displacement current, which characterizes the changes of the electric field with time or space. Therefore, no electrons tunnel through the insulating layer of the Josephson junction. It is shown that the conventional static magnetic field is, in fact, the static electric field of the intrinsic electron-ion electric dipoles in the materials. The new paradigm naturally leads to a unification of magnetic and electrical phenomena, while at the same time realizing the perfect symmetry of Maxwell's equations. Moreover, it is well confirmed that Dirac's magnetic monopole is indeed the well-known electron. This research is expected to shed light on high-temperature superconductivity.
[5203] vixra:1206.0103 [pdf]
On Clifford Space and Higher Curvature Gravity
Clifford-space Gravity is revisited and new results are found. A derivation of the proper expressions for the connections (with $torsion$) in Clifford spaces ( $C$-spaces) is presented. The introduction of hyper-determinants of hyper-matrices are instrumental in the derivation of the $ C$-space generalized gravitational field equations from a variational principle and based on the extension of the Einstein-Hilbert-Cartan action. We conclude by pointing out the relations of Clifford space gravity to Lanczos-Lovelock-Cartan higher curvature gravity with torsion and extended gravitational theories based on $ f ( R ), f ( R_{\mu \nu} ), ... $ actions, for polynomial-valued functions. Introducing nonmetricity furnishes higher curvature extensions of metric affine theories of gravity.
[5204] vixra:1206.0096 [pdf]
The Reaction Self-Force of a Classical Point Particle Actually Does not Diverge
For a point charge in a uniform external electric field, the laws of classical electromagnetism are currently thought to imply an infinite self-force acting on the charge, in the opposite direction to the external force. This is a physically bizarre situation. We show here that this is not actually implied by the laws of classical electromagnetism. The problem with the standard approach turns out to be that it tacitly, and incorrectly, assumes that the self-force is generated by the acceleration induced by the external field, when in reality the self-force is generated by the acceleration induced by the sum of the external field and the self-force field itself.
[5205] vixra:1206.0090 [pdf]
The Analysis of McMonigal, Lewis and O'Byrne Applied to the Natario Warp Drive Spacetime
Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at present two known solutions: the Alcubierre warp drive, discovered in 1994, and the Natario warp drive, discovered in 2001. Recently McMonigal, Lewis and O'Byrne presented an important analysis for the Alcubierre warp drive: a warp drive ship at superluminal speeds in interstellar space would trap in the warp bubble all the particles and radiation the ship encounters in its pathway, and these trapped bodies would achieve immense energies due to the bubble's superluminal speed. As the ship goes by in interstellar space, more and more particles and radiation are trapped in the bubble, generating in front of it a blanket with extremely large amounts of positive energy. The physical consequences of having this blanket of positive energy of enormous magnitude in front of the negative energy of the warp bubble are still unknown. When the ship finishes the trip it stops suddenly, releasing in a highly energetic burst all the trapped particles and radiation contained in the blanket, severely damaging the destination point. In this work we reproduce the same analysis for the Natario warp drive, using different mathematical arguments more accessible to beginner or intermediate students, and we arrive at exactly the same conclusions. While in the long term some of the physical problems associated with warp drive science (negative energy, horizons) seem to have solutions (better shape functions for the negative energy problem, and a theory that encompasses both General Relativity and the non-local quantum entanglement of Quantum Mechanics for the horizon problem), and we discuss these solutions in our work, the analysis of McMonigal, Lewis and O'Byrne, although entirely correct, does not have a foreseeable solution and remains the most serious obstacle against the warp drive as a physical reality.
[5206] vixra:1206.0074 [pdf]
Spreading of Ultrarelativistic Wave Packet and Redshift
In explaining such phenomena as the redshift and even the fact that we can see stars and planets, the effect of wave packet spreading (WPS) of the photon wave function is not taken into account. Probably the main reason is a belief that WPS is not important since a considerable WPS would blur the images more than what is seen. However, WPS is an inevitable consequence of quantum theory and moreover this effect is also known in classical electrodynamics. So it is not sufficient to just say that a considerable WPS is excluded by observations. One should try to estimate the importance of WPS and to understand whether our intuition is correct or not. We explicitly demonstrate that a standard relativistic quantum-mechanical calculation shows that spreading in the direction perpendicular to the photon momentum is very important and cannot be neglected. Hence the physics of the above phenomena is not well understood yet. Possible approaches for solving the problem are discussed.
[5207] vixra:1206.0069 [pdf]
A Functional Determinant Expression for the Riemann XI Function
We give an interpretation of the Riemann Xi-function as the quotient of two functional determinants of a Hermitian Hamiltonian. To get the potential of this Hamiltonian we use the WKB method to approximate and evaluate the spectral Theta function over the Riemann zeros on the critical strip. Using the WKB method we manage to get the potential inside the Hamiltonian; we also evaluate the functional determinant by means of zeta regularization, and we discuss the similarity of our method to the one applied to obtain the zeros of the Selberg zeta function. In this paper, for simplicity, we use natural units. Keywords: Riemann Hypothesis, functional determinant, WKB semiclassical approximation, trace formula, Bolte's law, quantum chaos.
[5208] vixra:1206.0067 [pdf]
A Momentum Space for Majorana Spinors
In this work I study the Dirac gamma matrices in the Majorana basis and Majorana spinors. A Fourier-like transform is defined using the gamma matrices, defining a momentum space for Majorana spinors. It is shown that the Wheeler propagator has asymptotic states with well-defined momentum.
[5209] vixra:1206.0063 [pdf]
Quantum Field Theory for Hypothetical Fifth Force
The fifth force is a hypothetical force, introduced as an additional force, e.g. to describe deviations from the gravitational force. Moreover, it is possible to explain the baryon asymmetry of the universe, an unsolved problem of particle physics, with a hypothetical fifth force. This research shows how the concept of a fifth force and its quantization can be used as a model for baryon asymmetry.
[5210] vixra:1206.0059 [pdf]
Lanczos-Lovelock-Cartan Gravity from Clifford Space Geometry
A rigorous construction of Clifford-space Gravity is presented which is compatible with the Clifford algebraic structure and permits the derivation of the expressions for the connections with $torsion$ in Clifford spaces ( $C$-spaces). The $ C$-space generalized gravitational field equations are derived from a variational principle based on the extension of the Einstein-Hilbert-Cartan action. We continue by arguing how Lanczos-Lovelock-Cartan higher curvature gravity with torsion can be embedded into gravity in Clifford spaces and suggest how this might also occur for extended gravitational theories based on $ f ( R ), f ( R_{\mu \nu} ), ... $ actions, for polynomial-valued functions. In essence, the Lanczcos-Lovelock-Cartan curvature tensors appear as Ricci-like traces of certain components of the $ C$-space curvatures. Torsional gravity is related to higher-order corrections of the bosonic string-effective action. In the torsionless case, black-strings and black-brane metric solutions in higher dimensions $ D > 4 $ play an important role in finding specific examples of solutions to Lanczos-Lovelock gravity.
[5211] vixra:1206.0055 [pdf]
How Did Schrodinger Obtain the Schrodinger Equation?
In this paper, we try to construct the famous Schrodinger equation of quantum mechanics in a very simple manner. It is shown that, even though the mathematical procedure of the construction may be correct, the establishment of the Schrodinger equation is evidently unreasonable in physics. We point out that applying the Schrodinger equation will, in fact, lead to the transformation of the studied system into an arbitrary variable pseudo-physical system. This finding may help to uncover the nature of the nonlocality and of Heisenberg's uncertainty principle in quantum mechanics. It is inevitable that the use of the Schrodinger equation will violate the law of conservation of energy. Hence, we argue that the Schrodinger equation is unsuitable to be applied to any physical system.
[5212] vixra:1206.0046 [pdf]
Understanding Lorentz Violation with Rashba Interaction
The Rashba spin-orbit interaction is a well studied effect in condensed matter physics and has important applications in spintronics. The Standard Model Extension (SME) includes a CPT-even term with the coefficient H_{\mu \nu} which leads to the Rashba interaction term. From the limit available on the coefficient H_{\mu \nu} in the SME we derive a limit on the Rashba coupling constant for Lorentz violation. In condensed matter physics the Rashba term is understood as resulting from an asymmetry in the confining potential at the interface of two different types of semiconductors. Based on this interpretation we suggest that a possible way of inducing the H_{\mu \nu} term in the SME is through an asymmetry in the potential that confines us to 3 spatial dimensions.
[5213] vixra:1206.0041 [pdf]
Galaxy Anatomy: 'Darwin Spirals'
Humans have recognized the natural entity of galaxies for over ninety years. Dr. He initiated the concept of rational structure and applied it to the study of galaxy structure in 2001. The main reason is that galaxy arms trace their way on the galaxy disk plane elegantly, so that the ratio of the stellar density on the left side of the route to that on the right is constant along the way. This is comparable to the elegant principle of natural selection. From now on, we call galaxy arms 'Darwin spirals' or 'Darwin curves' instead of calling them 'iso-ratio curves' or 'proportion curves'. If Dr. He's study is proved to be true, then galaxy arms must be the disturbance to the rational structure. The disturbance generates the dust and gas which nurture human life. This might be a clue to the important issue of the origin of life.
[5214] vixra:1206.0031 [pdf]
Belfer in Africa (Jurnal Marocan) [professor in Africa (Moroccan Journal)]
The author's experience as a professor of mathematics, teaching in French, at the Sidi El Hassan Lyoussi College in Sefrou, Morocco, between 1982 and 1984. His travels and relationships with professors of various nationalities, together with his involvement in training and selecting the Moroccan student team for the 1983 International Olympiad of Mathematics held in Paris, France.
[5215] vixra:1206.0026 [pdf]
Metaphysics of the Free Fock Space with Local and Global Information
A new interpretation of the basic vector of the free Fock space (FFS), and of the FFS itself, is proposed. Approximations to various equations with additional parameters, for n-point information (n-pi), are also considered in the case of non-polynomial nonlinearities. Key words: basic, generating and state vectors; local and global; Cuntz relations; perturbation and closure principles; homotopy analysis method; Axiom of Choice; consilience.
[5216] vixra:1206.0006 [pdf]
Baxter's Railroad Company.
In this document we analyze the thought experiment proposed by Baxter. Baxter's conclusion is that his thought experiment exhibits a contradiction and that therefore relativity is wrong. However, we shall show that there is no contradiction and that Baxter has in fact misinterpreted relativity. As Baxter's thought experiment makes use of the relativistic light-clock, we start with an analysis of the relativistic light-clock. The relativistic light-clock implies relations between time intervals, but also between distances, where a distinction is made between parallel distances and orthogonal distances with respect to the velocity of the moving distance. Correct implementation of the relativistic light-clock shows that Baxter's thought experiment does not lead to contradictions. Baxter's contradiction is based on his own misinterpretation of relativity, i.e. he has not shown any contradiction.
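The standard light-clock relations referred to above can be written out explicitly (a textbook derivation, not Baxter's argument or the paper's): for an arm of rest length $L$ orthogonal to the velocity $v$, one tick in the clock frame is $\Delta t' = 2L/c$, and in the frame where the clock moves the light travels a hypotenuse,

```latex
\begin{align*}
  \left(\frac{c\,\Delta t}{2}\right)^{2}
    &= L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}
  \quad\Longrightarrow\quad
  \Delta t = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \gamma\,\Delta t' .
\end{align*}
```

For an arm parallel to $v$ with length $L'$ in the moving frame, the round-trip time works out to $\Delta t = \gamma^{2}\,(2L'/c)$; consistency of the two orientations then forces $L' = L/\gamma$, which is exactly the distinction between orthogonal distances (unchanged) and parallel distances (contracted) mentioned in the abstract.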
[5217] vixra:1206.0005 [pdf]
Fractional Geometric Calculus: Toward A Unified Mathematical Language for Physics and Engineering
This paper discusses longstanding problems of fractional calculus, such as its many competing definitions and their lack of physical or geometrical meaning, and attempts to extend fractional calculus to any dimension. First, several definitions of fractional derivatives, such as the Riemann-Liouville derivative, the Caputo derivative, Kolwankar's local derivative and Jumarie's modified Riemann-Liouville derivative, are discussed, and it is concluded that the central reason for introducing fractional derivatives is the study of nondifferentiable functions. Then a concise and essentially local definition of the fractional derivative of a one-dimensional function is introduced and given a geometrical interpretation. Based on this simple definition, fractional calculus is extended to any dimension and the \emph{Fractional Geometric Calculus} is proposed. Geometric algebra provides a powerful mathematical framework in which the most advanced concepts of modern physics, such as quantum mechanics, relativity and electromagnetism, can be expressed elegantly. On the other hand, recent developments in nonlinear science and complex systems suggest that scaling, fractal structures and nondifferentiable functions occur much more naturally and abundantly in formulations of physical theories. The extended framework proposed here, the Fractional Geometric Calculus, aims to give a unifying language for the mathematics, physics and science of complexity of the 21st century.
[5218] vixra:1205.0117 [pdf]
Four Poisson-Laplace Theory of Gravitation (I)
The Poisson-Laplace equation is a working and acceptable equation of gravitation which is mostly applied in its differential form in Magneto-Hydro-Dynamic (MHD) modelling. From a general relativistic standpoint, it describes gravitational fields in the region of low spacetime curvature, as it emerges in the weak field limit. For non-static gravitational fields, this equation is not generally covariant. On the requirement of general covariance, this equation can be extended to include a time-dependent component, in which case one is led to the Four Poisson-Laplace equation. We solve the Four Poisson-Laplace equation for radial solutions, and apart from the Newtonian gravitational pole, we obtain four new solutions leading to four new gravitational poles capable (in principle) of explaining, e.g., the rotation curves of galaxies, the Pioneer anomaly, the Titius-Bode Law and the formation of planetary rings. In this letter, we focus only on writing down these solutions. The task of showing that these new solutions might explain the aforesaid gravitational anomalies has been left for separate future papers.
[5219] vixra:1205.0113 [pdf]
Using Higher Dimensions to Unify Dark Matter and Dark Energy if Massive Gravitons Are Stable
Discussion of a joint DM and DE model in which massive gravitons are stable. Presented at Dark Side of the Universe 2010 in León, Mexico, and reproduced here because the conference proceedings were never published. A Kaluza-Klein treatment of the graviton leads to DM and, if massive gravitons persist to the present day, their contributions to DE are presented. The caveat is whether a graviton with a slight rest mass can be a stable particle.
[5220] vixra:1205.0104 [pdf]
Data Mining Career Batting Performances in Baseball
In this paper, we use statistical data mining techniques to analyze a multivariate data set of career batting performances in Major League Baseball. Principal components analysis (PCA) is used to transform the high-dimensional data to its lower-dimensional principal components, which retain a high percentage of the sample variation, hence reducing the dimensionality of the data. From PCA, we determine a few important key factors of classical and sabermetric batting statistics, and the most important of these is a new measure, which we call Offensive Player Grade (OPG), that efficiently summarizes a player’s offensive performance on a numerical scale. The determination of these lower-dimensional principal components allows for accessible visualization of the data, and for segmentation of players into groups using clustering, which is done here using the K-means clustering algorithm. We provide illuminating visual displays from our statistical data mining procedures, and we also furnish a player listing of the top 100 OPG scores which should be of interest to those that follow baseball.
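The pipeline this abstract describes (PCA for dimensionality reduction, then K-means clustering on the principal-component scores) can be sketched with NumPy alone. The data below are synthetic stand-ins, not the paper's MLB batting statistics, and the OPG measure itself is not reproduced here; this only illustrates the two techniques named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a players-by-statistics table
# (rows = players, columns = batting stats); NOT the paper's data.
X = rng.normal(size=(200, 8))
X[:100] += 3.0  # two loose groups, so clustering has something to find

# --- PCA: project onto the top-2 principal components via SVD ---
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T               # lower-dimensional representation
explained = S[:2]**2 / np.sum(S**2)  # fraction of variance retained

# --- K-means (Lloyd's algorithm), k = 2, on the PC scores ---
k = 2
centers = scores[[0, -1]].copy()     # one seed from each end of the table
for _ in range(50):
    labels = np.argmin(((scores[:, None, :] - centers[None, :, :])**2).sum(-1), axis=1)
    centers = np.array([scores[labels == j].mean(axis=0) for j in range(k)])

print(f"variance retained by two components: {explained.sum():.2f}")
```

On real data one would inspect the component loadings to interpret the clusters, which is where a summary measure like the paper's OPG would come from.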
[5221] vixra:1205.0102 [pdf]
Saint-Venant's Principle of the "Cavity in Cylinder" Problem
The problem of a cylinder with a small spherical cavity loaded by an equilibrium system of forces is suggested and discussed, and its formulation of Saint-Venant's Principle is established. It is evident that finding solutions of boundary-value problems is a precise and pertinent approach for establishing Saint-Venant-type decay of elastic problems. Keywords: Saint-Venant's Principle, proof, provability, solution, decay, formulation, cavity. AMS Subject Classifications: 74-02, 74G50
[5222] vixra:1205.0100 [pdf]
Tricritical Quantum Point and Inflationary Cosmology
The holographic protection due to inflationary cosmology is a consequence of a quantum tricritical point. In this scenario a closed spacetime solution transitions into an inflationary de Sitter spacetime. Saturation of the holographic entropy bound is prevented by the phase change in the topology of the early universe.
[5223] vixra:1205.0098 [pdf]
Flavors of Clifford Algebra
Extensions of Clifford algebra are presented. Applications in physics, especially with regard to the flavor structure of Standard Model, are discussed. Modified gravity is also suggested.
[5224] vixra:1205.0096 [pdf]
Black Hole Universe and to Verify the Cosmic Acceleration
Based on big bang concepts, in an expanding universe the rate of decrease in CMBR temperature is a measure of the cosmic rate of expansion. Modern standard cosmology rests on two apparently contradictory statements: the present CMBR temperature is isotropic, and the present universe is accelerating. In particle physics, too, laboratory evidence for the existence of 'dark matter' and 'dark energy' remains very poor. Recent observations and arguments support the existence of the 'cosmic axis of evil'. Independently of the cosmic redshift and CMBR observations, cosmic acceleration can be verified by measuring the rate of decrease in the fine structure ratio. In this connection an attempt is made to study the universe with a closed and growing model of cosmology. If the primordial universe is a natural setting for the creation of black holes and other non-perturbative gravitational entities, it is also possible to assume that, throughout its journey, the whole universe is a primordial (growing and rotating) cosmic black hole. Instead of the Planck scale, initial conditions can be represented with the Coulomb scale. The obtained value of the present Hubble constant is close to 70.75 km/sec/Mpc.
[5225] vixra:1205.0089 [pdf]
Saint-Venant's Principle of the Problem of the Cylinder
The statement of the Modified Saint-Venant's Principle is suggested. The axisymmetric deformation of an infinite circular cylinder loaded by an equilibrium system of forces on its near end is discussed, and its formulation of the Modified Saint-Venant's Principle is established. It is evident that finding solutions of boundary-value problems is a precise and pertinent approach for establishing Saint-Venant-type decay of elastic problems. AMS Subject Classifications: 74-02, 74G50
[5226] vixra:1205.0084 [pdf]
RLC Circuit Derived from Particle and Field Electromagnetic Equations
The RLC circuit equation is derived step-by-step from basic equations of classical electrodynamics. The system is shown to be oscillating even if elements of the linear current would not interact with each other. Their mutual electromagnetic interaction due to acceleration of charges results in the phenomenon that looks like the increase of the effective mass of the charged particles. The increase of the mass makes the oscillations more persistent with respect to the damping caused by the friction.
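The end product of the derivation, the standard source-free series-RLC equation L q'' + R q' + q/C = 0, can be integrated directly to exhibit the damped oscillation the abstract refers to. This is a minimal sketch; the component values are illustrative, not taken from the paper.

```python
# Damped series-RLC oscillation: L*q'' + R*q' + q/C = 0
# (the standard circuit equation; the values below are hypothetical)
L, R, C = 1.0, 0.2, 1.0   # henry, ohm, farad
q, i = 1.0, 0.0           # initial charge on the capacitor, initial current
dt = 1e-3
trace = []                # charge sampled every 0.1 s
for step in range(int(50.0 / dt)):
    if step % 100 == 0:
        trace.append(q)
    di = (-R * i - q / C) / L  # current derivative from the circuit equation
    q += i * dt                # explicit Euler step
    i += di * dt

# Underdamped case: R/(2L) < 1/sqrt(LC), so the charge oscillates while
# its envelope decays like exp(-R*t/(2L)).
print(trace[:5])
```

Increasing R past the critical value 2*sqrt(L/C) would suppress the oscillation entirely, which is the damping-versus-persistence trade-off the abstract discusses.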
[5227] vixra:1205.0083 [pdf]
Lanczos-Lovelock and f ( R ) Gravity from Clifford space Geometry
A rigorous construction of Clifford-space Gravity is presented which is compatible with the Clifford algebraic structure and permits the derivation of the generalized connections in Clifford spaces (C-spaces) in terms of derivatives of the C-space metric. We continue by arguing how Lanczos-Lovelock higher-curvature gravity can be embedded into gravity in Clifford spaces, and suggest how this might also occur for extended gravitational theories based on f(R), f(R_{\mu \nu}), ... actions, for polynomial-valued functions. Black-string and black-brane metric solutions in higher dimensions D > 4 play an important role in finding specific examples.
[5228] vixra:1205.0081 [pdf]
A New Microsimplicial Homology Theory
A homology theory based on both near-standard and non-near-standard microsimplices is constructed. Its basic properties, including Eilenberg-Steenrod axioms for homology and continuity with respect to resolutions of spaces, are proved.
[5229] vixra:1205.0077 [pdf]
(This Paper Has Been Withdrawn by the Author)
<em>This paper has been withdrawn by the author due to a flaw in the proof.</em>
[5230] vixra:1205.0076 [pdf]
A Finite Reflection Formula For A Polynomial Approximation To The Riemann Zeta Function
The Riemann zeta function can be written as the Mellin transform of the unit interval map w(x) = ⌊x⁻¹⌋(x⌊x⁻¹⌋ + x − 1) multiplied by s(s+1)/(s−1). A finite-sum approximation to ζ(s), denoted ζ_w(N;s), which has real roots at s = −1 and s = 0, is examined, and an associated function χ(N;s) is found which solves the reflection formula ζ_w(N;1−s) = χ(N;s) ζ_w(N;s). A closed-form expression for the integral of ζ_w(N;s) over the interval −1 ≤ s ≤ 0 is given. The function χ(N;s) is singular at s = 0, and the residue at this point changes sign from negative to positive between N = 176 and N = 177. Some rather elegant graphs of ζ_w(N;s) and the reflection functions χ(N;s) are also provided. The values ζ_w(N;1−n) for integer n are found to be related to the Bernoulli numbers.
[5231] vixra:1205.0072 [pdf]
Theory for Quantization of Gravity
The unification of Einstein's field equations with quantum field theory is an open problem of theoretical physics. Many models have been proposed to solve it; e.g., String Theory and Loop Quantum Gravity were introduced to describe gravity within quantum theory. The main problem with these theories is that they are mathematically very complicated. In this text another description of gravity unified with quantum field theory is given, in which, for weak gravitational fields, the description is equivalent to (semi)classical gravity.
[5232] vixra:1205.0071 [pdf]
Alpha, Fine Structure Constant and Square Root of Planck Momentum
The natural constants $G$, $h$, $e$ and $m_e$ are commonly used but are themselves difficult to measure experimentally with high precision. By defining the Planck Ampere in terms of the square root of the Planck momentum, referred to here as the Quintessence momentum, and by assigning a formula for the electron as a magnetic monopole in terms of $e$ and $c$, a formula for the Rydberg constant can be derived. $G$, $h$, $e$ and $m_e$ can then each be written in terms of more precisely known constants: the speed of light $c$ (fixed value), the Rydberg constant (12-digit precision) and alpha, the fine structure constant (10-digit precision).
[5233] vixra:1205.0050 [pdf]
Dimensionless Physical Constant Mysteries
Feynman proposed searching for −α^(1/2) = −0.08542455, with the ± sign on α^(1/2) corresponding to the positive and negative charge, and suggested it may be related to π, ϕ, 2 and 5. We find α^(1/2) ≈ ±log e/(Φπ) = ±0.0854372, where Φ = 1/ϕ = 2cos(π/5). For the I/FQHE, R_xy = ±Z_0/(2αν_i) and α^(1/2) = ±log e^(±ϕ/Kπ), where Φ, ϕ, e and π appear in Euler's identity and K ~ {3, 37, 61} from 2^(p−1)(p−1)! ∈ 2^n n! are linked to quantum theories. The energy-mass formula E = mc² and the special-relativistic mass m = γm_0 established the particle rest mass m_0, the mass ratio m_i/m_e and the mass defect ∆m. The rest mass of a particle can be quantized by the fine structure constant, with the proton-electron mass ratio β_(p/e) = (α^(−3/2) − 2α^(1/2) + α²/(πϕ²) − ηα³) ln π. The hydrogen atomic rest mass is m_(1_H) = m_(p+) + m_e(1 − α² ln 10) in Quantum Gravity. For the high-energy W^± boson, α_W^(1/2) ≈ ±(1 − α sin²θ_w) log F/(Φπ), where the Fransén-Robinson constant F = ∫_0^∞ dt/Γ(t) = 2.80777⋯ replaces e = ∑_(n=0)^∞ 1/Γ(n) = 2.71828⋯ We obtain the g-factors of particles (leptons and baryons).
[5234] vixra:1204.0102 [pdf]
Euler's Formula is the Key to Unlocking the Secrets of Quantum Physics
In this short note, the key to unlocking the secrets of quantum physics will be elucidated by exploring the fundamentals of Schrodinger's wave-mechanics approach to describing quantum phenomena. We will show that de Broglie's wave-particle duality hypothesis, which lies at the heart of Schrodinger's wave function \psi, produces a complex wave equation whose mathematical structure can be described by Euler's famous equation e^{i\theta}=cos(\theta)+isin(\theta), which describes a helical wave in 3D space. By comparing and contrasting the electromagnetic wave with the helical wave that Euler's equation represents, we may have discovered the geometric basis for spin, helicity, and the antimatter with negative energies that Dirac uncovered in his relativistic reformulation of Schrodinger's equation.
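A quick numerical illustration of the identity and of the helix it traces: the helix parametrization below is the generic one, used here as an assumption for illustration, not the paper's specific wave function.

```python
import cmath
import math

# Euler's identity: e^{i*theta} = cos(theta) + i*sin(theta)
theta = 0.73  # arbitrary test angle
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs))  # ~0 up to floating-point error

# Advancing the phase with position z turns the identity into a helix:
# (x, y, z) = (cos(k*z), sin(k*z), z) -- a generic helical wave in 3D.
k = 2.0
helix = [(math.cos(k * z), math.sin(k * z), z)
         for z in [0.1 * n for n in range(100)]]
# Every point sits on the unit cylinder x^2 + y^2 = 1.
print(max(abs(x * x + y * y - 1.0) for x, y, _ in helix))
```

The two handednesses of such a helix (k positive or negative) are the geometric picture of helicity that the abstract appeals to.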
[5235] vixra:1204.0088 [pdf]
Maxwell's Theory of Gravity and Thermodynamics
We argue that the entropic origin of gravitation correctly reproduces general relativity and quantum mechanics under a particular treatment, in which entropic gravity can be viewed as Maxwell's theory of gravity and thermodynamics. This application will give us more detailed knowledge of the origin of gravity.
[5236] vixra:1204.0071 [pdf]
New Excited Levels of the Bottom and Anti-Bottom Mesons in Integral Charge Quark SUSY
Considering the `molar electron mass', an attempt is made to study the 4 interactions in a unified manner. Charged lepton, nucleon and (integral charge) quark masses were fitted in a unified scheme. Based on the modified SUSY, the charged Higgs fermion, its boson, and the quark-baryon and quark-meson masses were fitted. Finally an attempt is made to fit and predict new excited levels of the bottom and anti-bottom mesons.
[5237] vixra:1204.0038 [pdf]
The God Particle: the Higgs Boson, Extra Dimensions and the Particle in a Box
In this paper we show how the lightest Higgs boson is directly linked to the existence of seven compactified Kaluza-Klein-type dimensions and four extended ones. The model of a particle in a box allows us to calculate the mass of the lightest Higgs boson, using the extra dimensions as inputs to the well-known equations of this model. This estimate coincides exactly with that obtained in our previous work, "God and His Creation: The Universe", using the well-known quantum mechanical model of a particle in a spherically symmetric potential. Both calculations agree and result in a mass for the lightest Higgs boson of 126.17-126.23 GeV.
[5238] vixra:1204.0026 [pdf]
Cold Big Bang Cosmology: A New Solution Within Relativistic Cosmology
As explained here, the field equations of general relativity are solved in the cosmological context with the insertion of a new quantum argument for the cosmological substrate: the Heisenberg uncertainty principle. The solution obtained naturally provides a cutoff for the cosmic background temperature in the early universe, leading to minimal initial entropy. Moreover, the solution obtained here agrees with cosmological observations; e.g., it predicts the correct present cosmic background temperature and the observed critical density, the latter being explained within the obtained solution, even though an open, hyperbolic spatial topology is predicted for the spacelike sector (temporal slices of the 4-dimensional continuum).
[5239] vixra:1203.0106 [pdf]
A Theorem Producing the Fine Structure Constant Inverse and the Quark and Lepton Mixing Angles
The value 137.036, a close approximation of the fine structure constant inverse, is shown to occur naturally in connection with a theorem employing a pair of related functions. It is also shown that the formula producing this approximation contains terms expressible using the sines squared of the experimental quark and lepton mixing angles, implying an underlying relationship between these constants. This formula places the imprecisely measured neutrino mixing angle <i>θ</i><sub>13</sub> at close to 8.09<sup>°</sup>, so that sin<sup>2</sup>2<i>θ</i><sub>13</sub> ≈ 0.0777.
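The abstract's two numerical claims about θ13 can be checked against each other. This is an arithmetic consistency check only, not a validation of the underlying formula:

```python
import math

# The abstract places theta_13 near 8.09 degrees and states
# sin^2(2*theta_13) is approximately 0.0777; verify the two agree.
theta13_deg = 8.09
sin2_2theta = math.sin(math.radians(2 * theta13_deg)) ** 2
print(sin2_2theta)
```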
[5240] vixra:1203.0103 [pdf]
Virtual Particle Interpretation of Quantum Mechanics a Non-Dualistic Model of QM with a Natural Probability Interpretation
An interpretation of non-relativistic quantum mechanics is presented in the spirit of Erwin Madelung's hydrodynamic formulation of QM [1] and Louis de Broglie's and David Bohm's pilot wave models [2, 3]. The aims of the approach are as follows: 1) to have a clear ontology for QM, 2) to describe QM in a causal way, 3) to get rid of the wave-particle dualism in pilot wave theories, 4) to provide a theoretical framework for describing creation and annihilation of particles, and 5) to provide a possible connection between particle QM and virtual particles in QFT. These goals are achieved if the wave function is replaced by a fluid of so-called virtual particles. It is also assumed that in this fluid of virtual particles there exist a few real particles and that only these real particles can be directly observed. This has relevance for the measurement problem in QM, and it is found that quantum probabilities arise in a very natural way from the structure of the theory. The model presented here is very similar to a recent computational model of quantum physics [4] and to recent Bohmian models of QFT [5, 6].
[5241] vixra:1203.0097 [pdf]
Role of the `Molar Electron Mass' in Coupling the Strong, Weak and Gravitational Interactions
Considering the `molar electron mass', an attempt is made to understand the strong, weak and gravitational interactions in a unified approach. Muon and tau rest masses, the nuclear characteristic size, proton size, nucleon rest masses and magnetic moments were fitted. The obtained SEMF energy coefficients are 16.28, 19.36, 0.7681, 23.76 and 11.88 MeV respectively.
[5242] vixra:1203.0095 [pdf]
Heisenberg's Uncertainty: an Ill-Defined Notion?
The often-cited book [11] of Asher Peres presents Quantum Mechanics without the use of the Heisenberg Uncertainty Principle, a principle which it calls an "ill-defined notion". There is, however, no argument in this regard in the mentioned book, nor any comment on the fact that its use in the realm of quanta is not necessary, let alone unavoidable. A possible comment in this respect is presented here. It is related to certain simple, purely logical facts about axiomatic theories, facts which are disregarded when "physical intuition" and "physically meaningful" axioms or principles are used in developing mathematical models of Physics, [16-18].
[5243] vixra:1203.0089 [pdf]
"Physical Intuition": What Is Wrong with It?
It appears not to be known that subjecting the axioms to certain conditions, such as for instance to be physically meaningful, may interfere with the logical essence of axiomatic systems, and do so in unforeseen ways, ways that should be carefully considered and accounted for. Consequently, the use of "physical intuition" in building up axiomatic systems for various theories of Physics may lead to situations which have so far not been carefully considered.
[5244] vixra:1203.0087 [pdf]
A Disconnect: Limitations of the Axiomatic Method in Physics
This paper presents the phenomenon of disconnect in the axiomatic approach to theories of Physics, a phenomenon which appears due to the insistence on axioms which have a physical meaning. This insistence introduces a restriction which is foreign to the abstract nature of axiomatic systems as such, and consequently turns out to introduce the mentioned disconnect. The axiomatic approach in Physics has a long tradition; it is there already in Newton's Principia. Recently, for instance, a number of axiomatic approaches have been proposed in the literature related to Quantum Mechanics. Special Relativity, [2], had from its beginning in 1905 been built upon two axioms, namely, the Galilean Relativity and the Constancy of the Speed of Light in inertial reference frames. Hardly noticed in wider circles, the independence of these two axioms had quite early been subjected to scrutiny, [5,3], and that issue has on occasion been addressed ever since; see [8,4,24] and the literature cited there. Recently, [24], related to these two axioms in Special Relativity, the following phenomenon of wider importance in Physics was noted. As the example of the axiomatization of Special Relativity shows, it is possible to face a disconnect between a system of physically meaningful axioms and, on the other hand, one or another of the mathematical models used in the study of the axiomatized physical theory. The consequence is that, seemingly unknown so far, one faces in Physics the possibility that the axiomatic method has deeper, less obvious, and in fact not considered, or simply overlooked, limitations. As there is no reason to believe that the system of the usual two axioms of Special Relativity is the only one subject to such a disconnect, the various foundational ventures in modern Physics, related for instance to gravitation, quanta, or their bringing together in an overarching theory, may benefit from the study of the possible sources and reasons for such a disconnect.
An attempt of such study is presented in this paper.
[5245] vixra:1203.0078 [pdf]
Apparent Measure and Relative Dimension
In this paper, we introduce a concept of "apparent" measure in R^n and with it define a concept of relative dimension (of real order), which depends on the geometry of the object to be measured and on the distance separating it from an observer. At the end we discuss the relative dimension of the Cantor set. This measure enables us to provide a geometric interpretation of the Riemann-Liouville integral of order alpha between 0 and 1.
[5246] vixra:1203.0074 [pdf]
Fermi Energy of the Proton and the SEMF Energy Coefficients
Considering the Avogadro number as a scaling factor, an attempt is made to understand the origin of the strong, weak and electromagnetic interactions in a unified approach. The nuclear characteristic size, proton size, nucleon rest masses and magnetic moments were fitted. It is noticed that nuclear binding energy can be understood with the `molar electron mass' and `Fermi energy' concepts.
[5247] vixra:1203.0072 [pdf]
The 3 Atomic Forces and the Strong Coupling Constant
The key conceptual link connecting the gravitational and non-gravitational forces is the classical force limit, $F_C \cong \left(\frac{c^{4}}{G}\right)$. The nuclear weak force magnitude is $F_W \cong \frac{F_C}{N^2}$, where $N$ is the Avogadro number. The relation between the nuclear strong force and weak force magnitudes can be expressed as $\sqrt{\frac{F_S}{F_W}} \cong 2 \pi \ln \left(N^2\right).$ It is noticed that simple relations exist between the nuclear strong force, the weak force, and the force on the electron revolving at the Bohr radius of the hydrogen atom. An attempt is made to couple the strong coupling constant with these 3 forces.
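The magnitudes in this abstract follow directly from its own formulas. The sketch below evaluates them; the constant values used are standard CODATA-style figures, an assumption since the abstract does not state which values it uses.

```python
import math

# Standard constant values (assumption: not specified in the abstract)
c   = 2.99792458e8    # speed of light, m/s
G   = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
N_A = 6.02214076e23   # Avogadro number, the abstract's scaling factor N

F_C = c**4 / G                   # classical force limit F_C = c^4/G
F_W = F_C / N_A**2               # weak force magnitude F_W = F_C/N^2
# From sqrt(F_S/F_W) = 2*pi*ln(N^2), solve for the strong force magnitude:
F_S = F_W * (2 * math.pi * math.log(N_A**2))**2

print(f"F_C = {F_C:.3e} N")
print(f"F_W = {F_W:.3e} N")
print(f"F_S = {F_S:.3e} N")
```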
[5248] vixra:1203.0068 [pdf]
Quaternionic-valued Gravitation in 8D, Grand Unification and Finsler Geometry
A unification model of 4D gravity and SU(3) x SU(2) x U(1) Yang-Mills theory is presented. It is obtained from a Kaluza-Klein compactification of 8D quaternionic gravity on an internal CP^2 = SU(3)/U(2) symmetric space. We proceed to explore the nonlinear connection A^a_\mu(x, y) formalism used in Finsler geometry to show how ordinary gravity in D = 4 + 2 dimensions has enough degrees of freedom to encode a 4D gravitational and SU(5) Yang-Mills theory. This occurs when the internal two-dimensional space is a sphere S^2. This is an appealing result because SU(5) is one of the candidate GUT groups. We conclude by discussing how the nonlinear connection formalism of Finsler geometry provides an infinite hierarchical extension of the Standard Model within a six-dimensional gravitational theory due to the embedding of SU(3) x SU(2) x U(1) \subset SU(5) \subset SU(\infty).
[5249] vixra:1203.0067 [pdf]
Kaluza-Cartan Theory And A New Cylinder Condition
Kaluza's 1921 theory of gravity and electromagnetism, using a fifth wrapped-up spatial dimension, is the inspiration for many modern attempts to develop new physical theories. For a number of reasons the theory is incomplete and generally considered untenable. An alternative approach is presented that includes torsion, unifying gravity and electromagnetism in a Kaluza-Cartan theory. Emphasis is placed on admitting important electromagnetic fields not present in Kaluza's original theory, and on the Lorentz force law. This is investigated via a non-Maxwellian kinetic definition of charge related to Maxwellian charge and 5D momentum. Two connections and a new cylinder condition are used. General covariance and global properties are investigated via a reduced non-maximal atlas. Conserved super-energy is used in place of the energy conditions for 5D causality. Explanatory relationships between matter, charge and spin are present.
[5250] vixra:1203.0065 [pdf]
On the Growth of Meromorphic Solutions of a type of Systems of Complex Algebraic Differential Equations
This paper is concerned with the growth of meromorphic solutions of a class of systems of complex algebraic differential equations. A general estimate of the growth order of solutions of such systems is obtained via Zalcman's Lemma. We also provide an example to show that the result is correct.
[5251] vixra:1203.0064 [pdf]
The Goldbach Conjecture
The binary Goldbach conjecture asserts that every even integer greater than 4 is the sum of two primes. In order to prove this statement, we start by defining a kind of double sieve of Eratosthenes as follows. Given an even integer x, we sift out from [1, x] all those elements that are congruent to 0 modulo p, or congruent to x modulo p, where p is a prime less than the square root of x. Thus, any integer in the interval [sqrt(x), x] that remains unsifted is a prime p for which either x - p = 1 or x - p is also a prime. Then, we introduce a new way to formulate this sieve, which we call the sequence of k-tuples of remainders. Using this tool, we obtain a lower bound for the number of elements in [1, x] that survive the sifting process. We prove, for every even number x greater than the square of 149, that there exist at least 3 integers in the interval [1, x] that remain unsifted. This proves the binary Goldbach conjecture for every even number x greater than the square of 149, which is our main result.
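The double sieve described in this abstract is easy to state in code. The sketch below follows the construction as the abstract defines it (the helper `small_primes` is ours, not the paper's) and checks the claimed survivor property for one even x; it does not, of course, bear on the conjecture itself.

```python
def small_primes(limit):
    """Primes p < limit by the ordinary sieve of Eratosthenes."""
    sieve = [True] * limit
    sieve[:2] = [False, False]
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, ok in enumerate(sieve) if ok]

def double_sieve_survivors(x):
    """For even x, sift out n in [1, x] with n congruent to 0 or to x
    (mod p) for every prime p < sqrt(x); return the survivors that lie
    in [sqrt(x), x]."""
    assert x % 2 == 0
    root = int(x**0.5)
    alive = [True] * (x + 1)
    for p in small_primes(root + 1):
        for n in range(1, x + 1):
            if n % p == 0 or n % p == x % p:
                alive[n] = False
    return [n for n in range(root, x + 1) if alive[n]]

# Each survivor n is a prime with x - n prime or 1, i.e. a
# Goldbach-style decomposition of x (e.g. 11 + 89 = 100).
print(double_sieve_survivors(100))
```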
[5252] vixra:1203.0059 [pdf]
Quantum Adeles
A generalization of the number concept is proposed. One can replace an integer n with an n-dimensional Hilbert space, and the sum + and product × with the direct sum ⊕ and tensor product ⊗, and introduce their co-operations, the definition of which is highly non-trivial.
This procedure also yields Hilbert space variants of rationals, algebraic numbers, p-adic number fields, and even complex, quaternionic and octonionic algebraics. Adeles, too, can be replaced with their Hilbert space counterparts. Even more, one can replace the points of Hilbert spaces with Hilbert spaces and repeat this process, which is very similar to the construction of infinite primes having an interpretation in terms of repeated second quantization. This process could be the counterpart of the construction of n-th order logics, and one might speak of Hilbert or quantum mathematics. The construction would also generalize the notion of algebraic holography and provide a self-referential cognitive representation of mathematics.
This vision emerged from connections with generalized Feynman diagrams, braids, and the hierarchy of Planck constants realized in terms of coverings of the imbedding space. The Hilbert space generalization of the number concept seems extremely well suited for the purposes of TGD. For instance, generalized Feynman diagrams could be identified as arithmetic Feynman diagrams describing sequences of arithmetic operations and their co-operations. One could interpret ×_q and +_q and their co-algebra operations as 3-vertices for number-theoretical Feynman diagrams describing algebraic identities X = Y, having a natural interpretation in zero energy ontology. The two vertices have direct counterparts as the two kinds of basic topological vertices in quantum TGD (stringy vertices and vertices of Feynman diagrams). The definition of co-operations would characterize quantum dynamics. Physical states would correspond to the Hilbert space states assignable to numbers.
One prediction is that all loops can be eliminated from generalized Feynman diagrams and that diagrams are, in a projective sense, invariant under permutations of incoming (outgoing) legs.
[5253] vixra:1203.0058 [pdf]
About Absolute Galois Group
The Absolute Galois Group, defined as the Galois group of the algebraic numbers regarded as an extension of the rationals, is a very difficult concept to define. The goal of the classical Langlands program is to understand this group - the Absolute Galois Group (AGG) - through its representations. Invertible adeles - ideles - define Gl_1, which can be shown to be isomorphic with the Galois group of the maximal Abelian extension of the rationals (MAGG), and the Langlands conjecture is that representations of algebraic groups with matrix elements replaced by adeles provide information about AGG and algebraic geometry.
I have asked already earlier whether AGG could act as symmetries of quantum TGD. The basic idea was that AGG could be identified as a permutation group for a braid having an infinite number of strands. The notion of quantum adele leads to the interpretation of the analog of the Galois group for quantum adeles in terms of permutation groups assignable to finite braids. One can also assign braid structures to infinite primes, and Galois groups have a lift to braid groups.
Objects known as dessins d'enfant provide a geometric representation of AGG in terms of its action on algebraic Riemann surfaces, allowing an interpretation also as algebraic surfaces in finite fields. This representation would make sense for algebraic partonic 2-surfaces, could be important in the intersection of real and p-adic worlds assigned with living matter in TGD-inspired quantum biology, and would allow one to regard the quantum states of living matter as representations of AGG. Adeles would make these representations very concrete by bringing in cognition represented in terms of p-adics, and there is also a generalization to Hilbert adeles.
[5254] vixra:1203.0057 [pdf]
A Proposal for Memory Code
In an article in the March 8 issue of the journal PLoS Computational Biology, physicists Travis Craddock and Jack Tuszynski of the University of Alberta and anesthesiologist Stuart Hameroff of the University of Arizona propose a mechanism for encoding synaptic memory in microtubules, major components of the structural cytoskeleton within neurons. The self-explanatory title of the article is "Cytoskeletal Signaling: Is Memory Encoded in Microtubule Lattices by CaMKII Phosphorylation?". The basic ideas of the model are described and criticized, and after that a TGD-inspired model is discussed.
[5255] vixra:1203.0051 [pdf]
Vertex-Only Bifurcation Diagrams Are Deceptively Simple
By plotting the polynomials corresponding to several iterations of the logistic map, it is found that the entropy of a branching path can be larger than what is intuitively expected.
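A minimal sketch (not the paper's code) of the object being discussed: iterating the logistic map $x \mapsto rx(1-x)$ and collecting the long-run branch values that make up one vertical slice of a bifurcation diagram. The function name and parameters are illustrative.

```python
# Sketch: iterate the logistic map x -> r*x*(1-x) and collect the
# attractor values visited after a transient; these are the "branches"
# at parameter r in a bifurcation diagram.

def logistic_branches(r, x0=0.5, transient=1000, samples=64, digits=6):
    """Return the sorted set of attractor values (rounded) for parameter r."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1.0 - x)
    branches = set()
    for _ in range(samples):            # sample the attractor
        x = r * x * (1.0 - x)
        branches.add(round(x, digits))
    return sorted(branches)

# For r = 2.5 the map settles on the single fixed point 1 - 1/r = 0.6;
# for r = 3.2 it settles on a period-2 cycle, i.e. two branches.
```

Counting how many distinct branches appear as `r` grows is the simplest quantitative handle on the branching complexity the abstract refers to.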
[5256] vixra:1203.0048 [pdf]
Two Theories of Special Relativity?
Recently, [3], it was shown that Special Relativity is in fact based on just about one single physical axiom, namely that of Reciprocity. Originally, Einstein, [1], established Special Relativity on two physical axioms, namely, Galilean Relativity and the Constancy of the Speed of Light in inertial reference frames. Soon after, [2,4,5], it was shown that Galilean Relativity alone, together with some implicit mathematical type conditions, is sufficient for Special Relativity. The references in [7,3] can give an idea about the persistence over the years, even if not the popularity, of the issue of a minimal axiomatic foundation for Special Relativity. Here it is important to note that, implicitly, three more assumptions have been used on space-time coordinate transformations, namely, the homogeneity of space-time, the isotropy of space, and certain mathematical conditions of smoothness type on the coordinate transformations. In [3], a weaker boundedness type condition on space-time coordinate transformations is used instead of the usual mathematical smoothness type conditions. In this paper it is shown that the respective boundedness condition is related to the Principle of Local Transformation Increment Ratio Limitation, or in short, PLTIRL, a principle introduced here, and one which has an obvious physical meaning. It is also shown that PLTIRL is not a stronger assumption than the mentioned boundedness in [3], and yet it can also deliver the Lorentz Transformations. Of interest is the fact that, by formulating PLTIRL as a physical axiom, the possibility is opened up for the acceptance, or on the contrary, rejection of this physical axiom PLTIRL, thus leading to two possible theories of Special Relativity.
To add further likelihood to such a possibility, the rejection of PLTIRL easily leads to effects which involve unlimited time and/or space intervals, and thus are not accessible to usual experimentation for the verification of their validity, or otherwise. A conclusion is that a more careful consideration of the assumptions underlying Special Relativity is worth pursuing. In this regard, a corresponding trend has lately been observable in Quantum Mechanics and General Relativity. In the former, the respective analysis is more involved than has so far been the case for Special Relativity. As for the latter, the technical and conceptual difficulties are considerable. Regarding Quantum Field Theory, the situation is, so far, unique in Physics since, to start with, there is not even one single known rigorous and comprehensive enough mathematical model. This paper is a new version of [20].
[5257] vixra:1203.0047 [pdf]
Quantum Theory, String Theory, Strong Gravity and the Avogadro Number
The nucleon behaves as if it constitutes a molar electron mass. In a unified way, the nucleon's mass, size, and other characteristic properties can be studied with this idea. If the strong interaction is really $10^{39}$ times stronger than the strength of gravity, this proposal can be given a chance. The key conceptual link that connects the gravitational force and the non-gravitational forces is the classical force limit, $F_C \cong \left(\frac{c^{4} }{G} \right)$. It can be considered as the upper limit of the cosmic string tension. The weak force magnitude $F_W$ can be considered as the characteristic nuclear weak string tension. Until $F_C$ and $F_W$ are measured, it can be assumed that $\frac{F_C}{F_W}\cong N^2$, where $N$ is an Avogadro-like number.
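The classical force limit quoted above is easy to evaluate numerically; a quick check using standard CODATA constants (not figures taken from the paper):

```python
# Evaluate F_C = c^4 / G, the "classical force limit" taken above as the
# upper limit of cosmic string tension. Constants are CODATA values.
C = 2.997_924_58e8      # speed of light, m/s
G = 6.674_30e-11        # Newtonian gravitational constant, m^3 kg^-1 s^-2

F_C = C ** 4 / G        # roughly 1.2e44 N
```

The value comes out near $1.21 \times 10^{44}$ N, which is the scale usually quoted for $c^4/G$.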
[5258] vixra:1203.0044 [pdf]
Space Doesn't Expand. New Proof of Hubble's Law and Center of the Universe
After the expansion of the universe was observed in the 1920s, physicists and astronomers introduced the concept of ``space expands" into physics, and many observations and research results have been interpreted on this basis. However, we cannot explain why space expands or why it has a specific velocity, and there are no direct observations of space itself expanding. This study proves that the expansion of the universe and Hubble's law do not result from the expansion of space, but are a dynamical result of the movement of galaxies in space. We could confirm that Hubble's law was always valid when the effect of acceleration was smaller than the initial velocity. We can define the center of the universe and find it. There is a possibility that the 2.7K background radiation is not radiation from the early days of the universe. In that case, we cannot conclude from the CMBR that our universe is isotropic and uniform. Also, this shows that the cosmological redshift comes from the Doppler effect of light. The expansion of space has been used to explain redshift and the scale factor, and so influences many areas of astronomy and cosmology. Therefore, if this discovery is true, all matters related to redshift and the scale factor should be reviewed.
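If cosmological redshift is read as a pure Doppler effect of light, as the abstract argues, the standard relativistic longitudinal Doppler formula maps recession speed to redshift. A minimal sketch (illustrative function names, standard textbook formula):

```python
import math

def doppler_redshift(beta):
    """Redshift z of a source receding at speed beta = v/c (0 <= beta < 1)."""
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

def recession_speed(z):
    """Invert the formula: beta = v/c for an observed redshift z."""
    q = (1.0 + z) ** 2
    return (q - 1.0) / (q + 1.0)

# A source receding at half the speed of light shows z = sqrt(3) - 1,
# about 0.73, and the inverse map recovers beta.
```

Under this reading, each galaxy's observed z encodes its velocity through space rather than a stretching of space itself.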
[5259] vixra:1203.0042 [pdf]
General Relativity as Geometrical Approximation to a Field Theory of Gravity
It is broadly believed that general relativity --a geometric theory-- is fully equivalent to the field theory of a massless, self-interacting, spin-2 field. This belief is reinforced by statements in many textbooks. However, increasing criticism of this belief has been published. To settle this old debate about the precise physical nature of gravitation, this author introduces a simple but exact argument --based on the equivalence principle-- that shows that general relativity is not equivalent to a field theory of gravity. Subsequently, both the general relativistic Lagrangian for a particle and the Hilbert & Einstein equations are obtained as an approximation from a field theory of gravity, much as geometric optics can be derived from physical optics. The approximations involved in the geometrization are two: (i) the neglect of $T_{grav}^{\mu\nu}$ and $T_{int}^{\mu\nu}$ in the field-theoretic tensor $\Theta^{\mu\nu}$ and (ii) the approximation of the effective metric by the curved spacetime metric $g_{\mu\nu} = \hat{g}_{\mu\nu} + O(h_{\mu\nu}^2)$. Further discussion of this derivation and of the approximations involved is given. Several misunderstandings about the consistency and observability of flat spacetime theories of gravity are corrected. A detailed analysis of the fundamental differences between geometric and field-theoretic expressions reveals that all the well-known deficiencies of general relativity --including the impossibility of obtaining a consistent quantum general relativity-- are direct consequences of the geometrization of the gravitational interaction. Finally, remarks about the status of dark matter are given, from the perspective of a generalized theory of gravity.
[5260] vixra:1203.0029 [pdf]
Local Fractional Improper Integral in Fractal Space
In this paper we study local fractional improper integrals on fractal space. Using some mean value theorems for local fractional integrals, we prove an analogue of the classical Dirichlet-Abel test for local fractional improper integrals.
[5261] vixra:1203.0025 [pdf]
A Different Model of the Cosmological Constant and Einstein Curvature Tensor in Relation to Dark Energy
The meaning and existence of the cosmological constant has come to the forefront of physics as a dark energy that could be responsible for an accelerating expansion of the universe, while also having an extremely large magnitude as predicted by quantum field theory. This presents one of the most challenging physics problems known today. In this work I ask questions of a simple equivalency substituted into the Einstein field equation and demonstrate that this results in a repulsive Newtonian gravity that can be explained in terms of a large cosmological constant, as well as a proposed path to dark matter.
[5262] vixra:1203.0018 [pdf]
Dirac Singletons in a Quantum Theory over a Galois Field
Dirac singletons are exceptional irreducible representations (IRs) of the so(2,3) algebra found by Dirac. As shown in a seminal work by Flato and Fronsdal, the tensor product of singletons can be decomposed into massless IRs of the so(2,3) algebra and therefore each massless particle (e.g. the photon) can be represented as a composite state of singletons. This poses a fundamental problem of whether only singletons can be treated as true elementary particles. However, in standard quantum theory (based on complex numbers) such a possibility encounters difficulties since one has to answer the following questions: a) why singletons have not been observed and b) why the photon is stable and its decay into singletons has not been observed. We show by direct calculations that in a quantum theory over a Galois field (GFQT), the decomposition of the tensor product of singletons IRs contains not only massless IRs but also special massive IRs which have no analogs in standard theory. In the case of supersymmetry we explicitly construct a complete set of IRs taking part in the decomposition of the tensor product of supersingletons. Then in GFQT one can give natural explanations of a) and b).
[5263] vixra:1203.0011 [pdf]
A Very Brief Introduction To Clifford Algebra
This article distills many of the essential definitions from the very thorough book, Clifford Algebras: An Introduction, by Dr D.J.H. Garling, with some minor additions.
[5264] vixra:1203.0010 [pdf]
A Treatment of the Twin Paradox Based on the Assumption of an Instantaneous Acceleration
We investigate the twin paradox assuming the acceleration acts instantaneously on one of the twins, its effect being just to revert the relative motion of the twins while keeping the same relative speed. The relative motion of the twins is then split into two stages: one where they move away from each other and another where they approach each other. Each stage is described by specific Lorentz transformations that obey certain boundary conditions related to the reversion of motion. We then show how the paradox arises from the particular form of the Lorentz transformation describing the approaching movement of the twins.
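Under the abstract's idealization (instantaneous turnaround, same relative speed on both legs), the differential aging is the standard time-dilation result; a minimal numerical sketch with an illustrative function name:

```python
import math

def traveller_age(T_home, beta):
    """Proper time of the travelling twin for a round trip that takes
    coordinate time T_home in the stay-at-home frame, at speed beta = v/c.
    The turnaround is instantaneous, so each leg lasts T_home/2."""
    return T_home * math.sqrt(1.0 - beta * beta)

# beta = 0.6 gives gamma = 1.25: a trip lasting 10 years for the
# stay-at-home twin ages the traveller only 8 years.
```

The paradox discussed in the paper lives in how the two Lorentz charts (outbound and inbound) are glued at the turnaround; the net asymmetry in elapsed proper time is what the function above evaluates.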
[5265] vixra:1202.0094 [pdf]
On Leveraging the Chaotic and Combinatorial Nature of Deterministic N-Body Dynamics on the Unit M-Sphere in Order to Implement a Pseudo-Random Number Generator
The goal of this paper is to describe how to implement a pseudo-random number generator by using deterministic n-body dynamics on the unit m-sphere. Throughout this paper we identify several types of patterns in dynamics, along with ways to interrupt the formation of these patterns.
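A toy sketch of the idea, not the paper's construction: a few point "bodies" on the unit 2-sphere repel each other deterministically, each body is projected back onto the sphere after every step, and low-order digits of the coordinates are harvested as pseudo-random bytes. All names, constants, and the extraction rule here are illustrative assumptions.

```python
import math

def _normalize(p):
    """Project a point back onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def sphere_prng(n_bodies=5, n_bytes=16, dt=0.3, seed=1):
    """Deterministic n-body dynamics on the unit 2-sphere as a toy PRNG."""
    # Seed-dependent but deterministic initial placement.
    bodies = [_normalize((math.sin(seed + i), math.cos(seed + 2 * i),
                          math.sin(seed + 3 * i) + 0.1))
              for i in range(n_bodies)]
    out = []
    while len(out) < n_bytes:
        new = []
        for i, p in enumerate(bodies):
            f = [0.0, 0.0, 0.0]
            for j, q in enumerate(bodies):
                if i == j:
                    continue
                d = [a - b for a, b in zip(p, q)]
                r2 = sum(c * c for c in d) + 1e-9   # softened repulsion
                f = [fc + dc / r2 for fc, dc in zip(f, d)]
            new.append(_normalize([a + dt * b for a, b in zip(p, f)]))
        bodies = new
        # Harvest one byte from the fractional digits of a coordinate.
        out.append(int(abs(bodies[0][0]) * 1e6) % 256)
    return out
```

Same seed, same stream: the generator is fully deterministic, which is the property the paper leverages; the chaotic mixing of the dynamics is what is supposed to make the output look random.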
[5266] vixra:1202.0093 [pdf]
Symmetries in Wigner 18-J and 21-J Symbols
The symmetry group of the 18-j(H) Wigner symbol is restructured by splitting two symmetry equations (Yutsis et al. 1962) into three generators. The symmetry groups of two 21-j Wigner symbols (Ponzano 1965) are complemented to form groups of order 8. This summarizes systematic evaluation of the automorphisms of the associated simple cubic graphs with McKay’s nauty program.
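The abstract's workflow rests on computing automorphisms of small cubic graphs with nauty; for graphs this small, a brute-force check over all vertex permutations does the same job. A sketch (illustrative names; K4 is the smallest cubic graph, and every permutation of its 4 vertices is an automorphism, so |Aut(K4)| = 24):

```python
from itertools import permutations

def automorphism_count(n, edges):
    """Count vertex permutations of {0..n-1} that map the edge set to itself."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(range(n)):
        image = {frozenset((perm[u], perm[v])) for u, v in edge_set}
        if image == edge_set:
            count += 1
    return count

K4_EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
# The triangular prism K3 x K2, another cubic graph, has |Aut| = 12.
PRISM_EDGES = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3),
               (0, 3), (1, 4), (2, 5)]
```

This factorial-time scan is obviously no replacement for nauty on larger graphs, but it makes the object being counted concrete.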
[5267] vixra:1202.0090 [pdf]
Planck-Scale Physics: Facts and Beliefs
The relevance of the Planck scale to a theory of quantum gravity has become a worryingly little examined assumption that goes unchallenged in the majority of research in this area. However, in all scientific honesty, the significance of Planck's natural units in a future physical theory of spacetime is only a plausible, yet by no means certain, assumption. The purpose of this article is to clearly separate fact from belief in this connection.
[5268] vixra:1202.0089 [pdf]
Is Empty Spacetime a Physical Thing?
This article deals with empty spacetime and the question of its physical reality. By "empty spacetime" we mean a collection of bare spacetime points, the remains of ridding spacetime of all matter and fields. We ask whether these geometric objects -- themselves intrinsic to the concept of field -- might be observable through some physical test. By taking quantum-mechanical notions into account, we challenge the negative conclusion drawn from the diffeomorphism invariance postulate of general relativity, and we propose new foundational ideas regarding the possible observation -- as well as conceptual overthrow -- of this geometric ether.
[5269] vixra:1202.0088 [pdf]
Geometry, Pregeometry and Beyond
This article explores the overall geometric manner in which human beings make sense of the world around them by means of their physical theories; in particular, in what are nowadays called pregeometric pictures of Nature. In these, the pseudo-Riemannian manifold of general relativity is considered a flawed description of spacetime and it is attempted to replace it by theoretical constructs of a different character, ontologically prior to it. However, despite its claims to the contrary, pregeometry is found to surreptitiously and unavoidably fall prey to the very mode of description it endeavours to evade, as evidenced in its all-pervading geometric understanding of the world. The question remains as to the deeper reasons for this human, geometric predilection--present, as a matter of fact, in all of physics--and as to whether it might need to be superseded in order to achieve the goals that frontier theoretical physics sets itself at the dawn of a new century: a sounder comprehension of the physical meaning of empty spacetime.

[5270] vixra:1202.0078 [pdf]
On Building 4-Critical Plane and Projective Plane Multiwheels from Odd Wheels
We build unbounded classes of plane and projective plane multiwheels that are 4-critical, obtained by summing odd wheels as edge sums modulo two. These classes can be considered as ascending from a single common graph that can be obtained as the edge sum modulo two of the octahedron graph O and the minimal wheel W3. All graphs of these classes belong to the (2n - 2)-edge class of graphs, among which are those that quadrangulate the projective plane, i.e., graphs from the Groetzsch class, obtained by applying Mycielski's construction to an odd cycle.
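The "edge sum modulo two" used above is the symmetric difference of edge sets: an edge survives iff it occurs in exactly one of the two graphs. A minimal sketch with illustrative names:

```python
def edge_sum_mod2(g1, g2):
    """Edge sum modulo two of two graphs given as lists of edges.
    Returns the symmetric difference of the edge sets."""
    e1 = {frozenset(e) for e in g1}
    e2 = {frozenset(e) for e in g2}
    return {tuple(sorted(e)) for e in e1 ^ e2}

# Two triangles sharing the edge ab: {ab, bc, ca} + {ab, bd, da} gives
# the 4-cycle {bc, ca, bd, da}; the shared edge cancels.
```

The cancellation of shared edges is exactly what lets the construction in the paper glue odd wheels into larger multiwheels.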
[5271] vixra:1202.0077 [pdf]
On Building 4-critical Plane and Projective Plane Multiwheels from Odd Wheels. Extended Abstract
We build unbounded classes of plane and projective plane multiwheels that are 4-critical, obtained by summing odd wheels as edge sums modulo two. These classes can be considered as ascending from a single common graph that can be obtained as the edge sum modulo two of the octahedron graph O and the minimal wheel W_3.
[5272] vixra:1202.0076 [pdf]
Life as Evolving Software
In this paper we present an information-theoretic analysis of Darwin's theory of evolution, modeled as a hill-climbing algorithm on a fitness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing fitness made by a single mutating organism. In two different models we are able to show that evolution will occur and to characterize the rate of evolutionary progress, i.e., the rate of biological creativity.
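A toy version of the hill-climbing picture described above (not the paper's models, where organisms are full computer programs): a single "organism", here a bit string standing in for a program, accepts a random point mutation only when it increases fitness.

```python
import random

def evolve(n_bits=32, steps=2000, seed=0):
    """Single mutating organism doing a hill climb on a toy fitness landscape.
    Returns the final organism and the fitness history."""
    rng = random.Random(seed)
    organism = [0] * n_bits
    fitness = lambda o: sum(o)              # toy landscape: count of 1-bits
    history = [fitness(organism)]
    for _ in range(steps):
        mutant = list(organism)
        mutant[rng.randrange(n_bits)] ^= 1  # random point mutation
        if fitness(mutant) > fitness(organism):
            organism = mutant               # keep only strict improvements
        history.append(fitness(organism))
    return organism, history
```

By construction the fitness history is non-decreasing; the quantity the paper actually studies is how fast such a random walk of increasing fitness climbs, i.e. the rate of biological creativity.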
[5273] vixra:1202.0075 [pdf]
Is Indeed Information Physical ?
Information being a relatively new concept in science, the likelihood is pointed out that we do not yet have a good enough grasp of its nature and relevance. This likelihood is further enhanced by the ubiquitous use of information which creates the perception of a manifest, yet in fact, rather superficial familiarity. The paper suggests several aspects which may be essential features of information, or on the contrary, may not be so. In this regard, further studies are obviously needed, studies which may have to avoid with care various temptations to reductionism, like for instance the one claiming that ``information is physical".
[5274] vixra:1202.0064 [pdf]
God and His Creation: the Universe
In this paper we argue that the universe has to be caused or created by an intelligent being: God. We also argue that the multiverse theory, mainly based on the random generation of an infinity of universe states, is very poorly posed from the mathematical point of view, and that this failure leads to a demonstration of its impossibility. The so-called anthropic principle is not taken into account; it is a mere philosophical statement without any physical-mathematical foundation, and therefore lacks the minimum necessary scientific validity. The main parameters are obtained: density of baryons, vacuum energy density, mass of the lightest Higgs boson, neutrino mass, mass of the graviton, among others. Finally, the inflation factor of the universe is deduced naturally within the theory. Similarly, the theory necessarily implies the "creation" of matter.
[5275] vixra:1202.0032 [pdf]
Cardinal Functions and Integral Functions
This paper presents the correspondences between the eccentric mathematics of cardinal and integral functions and centric, or ordinary, mathematics. Centric functions are also presented in the introductory section because, although widely used in undulatory physics, they are little known.
[5276] vixra:1202.0031 [pdf]
The Local Doppler Effect Due to Earth's Rotation and the CNGS Neutrino Anomaly Due to Neutrino's Index of Refraction Through the Earth Crust
In this brief paper, we show that the neutrino velocity discrepancy obtained in the OPERA experiment may be due to the local Doppler effect between a local clock attached to a given detector at Gran Sasso, say $\mathcal{C}_{G}$, and the respective instantaneous clock crossing $\mathcal{C}_{G}$, say $\mathcal{C}_{C}$, the latter being at rest in the instantaneous inertial frame having the velocity of rotation of CERN about Earth's axis in relation to the fixed stars. With this effect, the index of refraction of the Earth crust may produce a refractive effect by which the neutrino velocity through the Earth crust turns out to be small in relation to the speed of light in empty space, leading to an encrusted discrepancy that may have contaminated the data obtained from the block of detectors at Gran Sasso, leading to a time interval excess $\epsilon$ that prevented an exact match between the shift of the proton PDF (probability distribution function) by $\text{TOF}_{c}$ and the detection data at Gran Sasso via maximum likelihood matching.
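A back-of-envelope check using the publicly reported OPERA numbers (not figures from this paper): the reported fractional speed excess $(v-c)/c \approx 2.48\times 10^{-5}$ over the roughly 730 km CERN-Gran Sasso baseline corresponds to an early arrival of about 60 ns, which is the size of the anomaly any refractive or clock-rate explanation has to absorb.

```python
# Publicly reported OPERA-era figures, used only for a sanity check.
C = 299_792_458.0          # speed of light, m/s
BASELINE = 730.085e3       # approximate CERN - Gran Sasso distance, m

def early_arrival_ns(delta):
    """Arrival-time advance in ns for a fractional speed excess delta = (v-c)/c."""
    return BASELINE / C * delta * 1e9

# early_arrival_ns(2.48e-5) is about 60 ns.
```

Any proposed systematic, whether Doppler between clocks or an effective index of refraction, must account for a shift of this order.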
[5277] vixra:1202.0023 [pdf]
The Local Doppler Effect Due to Earth's Rotation and the CNGS Neutrino Anomaly?
In this brief paper, we examine whether the neutrino velocity discrepancy obtained in \cite{arxiv} may be due to the local Doppler effect between a local clock attached to a given detector at Gran Sasso, say $\mathcal{C}_{G}$, and the respective instantaneous clock crossing $\mathcal{C}_{G}$, say $\mathcal{C}_{C}$, the latter being at rest in the instantaneous inertial frame having the velocity of rotation of CERN about Earth's axis in relation to the fixed stars.
[5278] vixra:1202.0015 [pdf]
Volume of the Off-center Spherical Pyramidal Trunk
The volume inside intersecting spheres may be computed by a standard method which computes a surface integral over all visible sections of the spheres. If the visible sections are divided in simple zonal sections, the individual contribution by each zone follows from basic analysis. We implement this within a semi-numerical program which marks the zones individually as visible or invisible.
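A cross-check of the geometry described above (a sketch, not the paper's semi-numerical program): the volume common to two overlapping spheres of radii $R$ and $r$ with centers a distance $d$ apart has a standard closed-form "lens" expression, which can be verified against the two-spherical-caps decomposition.

```python
import math

def lens_volume(R, r, d):
    """Volume of the intersection (lens) of two overlapping spheres,
    radii R and r, center distance d, with |R - r| < d < R + r."""
    return (math.pi * (R + r - d) ** 2
            * (d * d + 2 * d * r - 3 * r * r + 2 * d * R + 6 * r * R - 3 * R * R)
            / (12.0 * d))

# Two unit spheres with centers one radius apart intersect in a lens of
# volume 5*pi/12, i.e. twice the spherical cap of height 1/2.
```

Splitting the visible surface into zonal sections, as the paper does, generalizes this kind of cap decomposition to configurations with more than two spheres.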
[5279] vixra:1202.0011 [pdf]
Accelerating Universe and the Expanding Atom-2
In the cosmic Euclidean volume, the inverse of the fine structure ratio is equal to the natural logarithm of the ratio of the number of (electrons or positrons) to the Avogadro number. The Bohr radius of the hydrogen atom, the quanta of angular momentum, and the strong interaction range are connected with the large scale structure of the massive universe. In the accelerating universe, as space expands, the distance between proton and electron in the hydrogen atom increases and is directly proportional to the size of the universe. The obtained value of the present Hubble constant is 70.75 km/sec/Mpc. The `rate of decrease in the fine structure ratio' is a measure of the cosmic rate of expansion. Considering the integral nature of the number of protons (of any nucleus), the integral nature of `hbar' can be understood.
[5280] vixra:1202.0009 [pdf]
On the Cold Big Bang Cosmology and the Flatness Problem
In my papers \cite{Assis1} and \cite{Assis2}, I obtained a Cold Big Bang Cosmology that fits the cosmological data, with an absolute zero primordial temperature and a natural cutoff for the cosmological data to a vanishingly small entropy at a singular microstate of a comoving domain of the cosmological fluid. Now, in this brief paper, we show that the energy density of the $t$-sliced universe must actually be the critical one, following as a consequence of the solution in \cite{Assis1} and \cite{Assis2}. It must be pointed out that the result obtained here on the flatness problem does not contradict the solution in \cite{Assis1}, viz., does not contradict the open universe with $k=-1$ obtained in \cite{Assis1}, since the solution in \cite{Assis1} had negative pressure and negative total cosmological energy density, hence less than the critical positive density. The critical density we obtain here is due to the positive fluctuations I previously discussed regarding the Heisenberg mechanism in \cite{Assis2}. Hence, the energy density due to fluctuation turns out to be positive, the critical one, calculated in \cite{Assis2} from the fluctuations within the $t$-sliced spherical shell at its $t$-sliced hypersurface, while the total energy density that generates the fluctuations is actually negative in \cite{Assis1} and \cite{Assis2}, hence, again, less than the critical one and supporting $k=-1$. These results are complementary and support my previous results.
[5281] vixra:1201.0127 [pdf]
Algebraic Braids, Sub-Manifold Braid Theory, and Generalized Feynman Diagrams
The basic challenge of quantum TGD is to give a precise content to the notion of generalized Feynman diagram, and the reduction to braids of some kind is a very attractive possibility inspired by zero energy ontology. The point is that no n>2-vertices at the level of braid strands are needed if bosonic emergence holds true. <OL> <LI> For this purpose the notion of algebraic knot is introduced and the possibility that it could be applied to generalized Feynman diagrams is discussed. The algebraic structures kei, quandle, rack, and biquandle and their algebraic modifications as such are not enough. The lines of Feynman graphs are replaced by braids, and in vertices braid strands redistribute. This poses several challenges: the crossing associated with braiding and the crossing occurring in non-planar Feynman diagrams should be integrated into a more general notion; braids are replaced with sub-manifold braids; braids of braids ... of braids are possible; the redistribution of braid strands in vertices should be algebraized. In the following I try to abstract the basic operations which should be algebraized in the case of generalized Feynman diagrams. <LI> One should also be able to concretely identify braids and 2-braids (string world sheets) as well as partonic 2-surfaces, and I have discussed several identifications during the last years. Legendrian braids turn out to be very natural candidates for braids, and their duals for the partonic 2-surfaces. String world sheets in turn could correspond to the analogs of Lagrangian sub-manifolds or to minimal surfaces of the space-time surface satisfying the weak form of electric-magnetic duality. The latter option turns out to be more plausible. Finite measurement resolution would be realized as symplectic invariance with respect to the subgroup of the symplectic group leaving the end points of braid strands invariant. In accordance with the general vision, TGD as almost topological QFT would mean symplectic QFT.
The identification of braids, partonic 2-surfaces and string world sheets - if correct - would solve quantum TGD explicitly at the string world sheet level, in other words in finite measurement resolution. <LI> Also a brief summary of generalized Feynman rules in zero energy ontology is proposed. This requires the identification of vertices, propagators, and a prescription for integrating over all 3-surfaces. It turns out that the basic building blocks of generalized Feynman diagrams are well-defined. <LI> The notion of generalized Feynman diagram leads to a beautiful duality between the descriptions of hadronic reactions in terms of hadrons and partons, analogous to gauge-gravity duality and AdS/CFT duality but requiring no additional assumptions. A model of quark gluon plasma as a strongly interacting phase is proposed. Color magnetic flux tubes are responsible for the long range correlations making the plasma phase more like a very large hadron rather than a gas of partons. One also ends up with a simple estimate for the viscosity/entropy ratio using a black-hole analogy. </OL>
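The black-hole estimate for the viscosity/entropy ratio mentioned above is presumably of the same order as the well-known KSS bound from AdS/CFT, $\eta/s \ge \hbar/(4\pi k_B)$ (this identification is my assumption, not a statement from the abstract). A quick evaluation in SI units:

```python
import math

# CODATA values; the bound eta/s >= hbar/(4*pi*k_B) comes out near
# 6.1e-13 K*s, the benchmark that quark-gluon-plasma estimates are
# usually compared against.
HBAR = 1.054_571_817e-34   # J*s
K_B = 1.380_649e-23        # J/K

kss_bound = HBAR / (4 * math.pi * K_B)
```

Measured quark gluon plasma values sit within a few times this bound, which is why "more like a very large hadron than a gas of partons" is the standard reading of the data.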
[5282] vixra:1201.0126 [pdf]
Could the Measurements Trying to Detect Absolute Motion of Earth Allow to Test Sub-Manifold Gravity?
The history of modern measurements of absolute motion is long - more than a century, beginning with Michelson-Morley in 1887. The earliest measurements assumed the aether hypothesis. Cahill identifies the velocity as a velocity with respect to some preferred rest frame and uses relativistic kinematics, although he misleadingly uses the terms absolute velocity and aether. The preferred frame could be the galaxy, or the system defining the rest frame in cosmology. It would be easy to dismiss this kind of experiment as an attempt to return to the days before Einstein, but this is not the case. It might be possible to gain unexpected information from this kind of measurement. Already the analysis of the CMB spectrum demonstrated that Earth is not at rest in the Robertson-Walker coordinate system used to analyze CMB data, and a similar motion with respect to the galaxy is quite possible and might serve as a rich source of information also in a GRT based theory. In the TGD framework the situation is especially interesting since sub-manifold gravity implies that the maximal signal velocity depends on the space-time sheet, and this effect might show itself in Michelson-Morley type experiments. Also the motion of space-time sheets with respect to each other might be detectable.
[5283] vixra:1201.0125 [pdf]
Evolution of TGD
A summary of how various ideas about TGD have developed is given. This is a response to a request by Mark McWilliams. I try to present the development chronologically, but I must confess that I have forgotten precise dates, so the chronology is not exact. Very probably I have also forgotten many important ideas and many side tracks which led nowhere. Unavoidably the emphasis is on the latest ideas, and there is of course the risk that some of them are not here to stay. Even during the writing process some ideas developed into a more concrete form. A good example is the vision about what happens in the quantum jump and what the unitarity of the U-matrix really means, how M-matrices generalize to form a Kac-Moody type algebra, and how the notion of quantum jump in zero energy ontology (ZEO) reproduces the basic aspects of quantum measurement theory. Also a slight generalization of quantum arithmetics suggested itself during the preparation of the article.
[5284] vixra:1201.0124 [pdf]
Inflation and TGD
The comparison of TGD with inflationary cosmology, combined with new results about TGD inspired cosmology, provides fresh insights into the relationship between TGD and the standard approach and shows how TGD cures the lethal diseases of eternal inflation. Very roughly: the replacement of the energy of the scalar field with magnetic energy replaces eternal inflation with a fractal quantum critical cosmology, allowing one to see more sharply the TGD counterpart of inflation and accelerating expansion as special cases of criticality. The rapid expansion periods correspond to phase transitions increasing the value of Planck constant and increasing the radius of magnetic flux tubes. This liberates magnetic energy and gives rise to radiation, in turn giving rise to matter in the recent Universe, just as the energy of the inflaton field would give rise to radiation at the end of the inflation period in cosmic inflation. The multiverse of inflationary scenarios is replaced with the many-sheeted space-time, and one can say that the laws of physics are essentially the same everywhere in the sense that the fundamental symmetries are the symmetries of the standard model everywhere.
[5285] vixra:1201.0123 [pdf]
QCD and TGD
The inspiration for this article came from listening to some very inspiring Harvard lectures relating to QCD, jets, gauge-gravity correspondence, and quark gluon plasma. Matthew Schwartz gave a talk titled <I> The Emergence of Jets at the Large Hadron Collider</I>. Dam Thanh Son's talk had the title <I> Viscosity, Quark Gluon Plasma, and String Theory</I>. Factorization theorems of jet QCD were discussed in a very clear manner by Ian Stewart in his talk titled <I> Mastering Jets: New Windows into Strong Interaction and Beyond</I>. These lectures inspired several blog postings and also the idea of a systematic comparison of QCD and TGD. Such comparisons are always very useful - at least to me - since they make it easier to see why cherished beliefs - now the belief that QCD is <I> the</I> theory of strong interactions - might be wrong.
[5286] vixra:1201.0122 [pdf]
Higgs or M_{89} Hadron Physics?
The newest results of the Higgs search using 4.9/fb of data have been published, and there are many articles in arXiv. The overall view is that there is evidence for something around 125 GeV. Whether this something is the Higgs or some other particle decaying to the Higgs remains, in my opinion, an open question. The evidence comes basically from Higgs to γγ decays. The signal is however too large, so something other than the Higgs might be in question. There are some ZZ and WW events. CMS also presented data for more rare events. There are also indications of something at higher masses. In the TGD framework the Higgs is not needed for massivation, and the simplest option is that the Higgs does not exist. The Higgs is effectively replaced with a scaled-up copy of hadron physics with a mass scale 512 times higher than that of ordinary hadron physics. In this article this option will be discussed.
[5287] vixra:1201.0121 [pdf]
TGD Based View about Classical Fields in Relation to Consciousness Theory and Quantum Biology
In TGD Universe gauge fields are replaced with topological field quanta. Examples are topological light rays, magnetic/electric flux tubes and sheets, and flux quanta carrying both magnetic and electric fields. Flux quanta form a fractal hierarchy in the sense that there are flux quanta inside flux quanta. It is natural to assume quantization of Kähler magnetic flux. Braiding and reconnection are the basic topological operations for flux quanta. The basic question is how the basic notions assigned with the classical gauge and gravitational fields understood in standard sense generalize in TGD framework. <OL> <LI> Superposition and interference of the classical fields is very natural in Maxwell electrodynamics and certainly experimentally verified phenomena. Also the notion of hologram relies crucially on the notion of interference. How can one describe the effects explained in terms of superposition of fields in a situation in which the theory is extremely non-linear and all classical gauge fields are expressible in terms of CP_2 coordinates and their gradients? It is also rather clear that the preferred extremals for Kähler action decompose to space-time regions representing space-time correlates for quanta. The superposition of classical fields in Maxwellian sense is impossible. How can one cope with this situation? The answer is based on simple observation: only the {\it effects} of the classical fields superpose. There is no need for the fields to superpose. Together with the notion of many-sheeted space-time this leads to elegant description of interference effects without any need to assume that linearization is a good approximation. <LI> Topological quantization brings in also braiding and reconnection of magnetic flux tubes as basic operations for classical fields. These operations for flux tubes have also Maxwellian counterparts at the level of field lines. 
Braiding and reconnection play a central role in the TGD Universe, and especially so in the TGD inspired theory of consciousness and quantum biology. The challenge is to build a coherent overall phenomenological view about the role of topologically quantized classical fields in biology and neuroscience. For instance, one can ask what the precise formulation for the notion of conscious hologram is, and whether magnetic flux tubes could serve as correlates of entanglement (or at least of the negentropic entanglement suggested by the number theoretic vision and identified as a basic signature of living matter). <LI> Topological quantization and the notion of magnetic body are especially important in the TGD inspired model of EEG. The attempt to understand the findings of Persinger from the study of what is known as the God helmet leads to considerable progress in the understanding of the possible role of topologically quantized classical fields in biology and neuroscience. </OL>
[5288] vixra:1201.0105 [pdf]
Cosmic Mass and the Electromagnetic and Strong Interactions
It seems that both the quantum of angular momentum and the strong interaction range are connected with the large scale structure of the universe. In the expanding universe these `quanta' increase with the increasing mass of the universe. If the noticed empirical relation is found to be true and valid, the `rate of decrease in the fine structure ratio' is a measure of the cosmic rate of expansion. Considering the integral nature of the number of protons (of any nucleus), the integral nature of `hbar' can be understood.
[5289] vixra:1201.0103 [pdf]
Spectral Energy Distribution of a Body in Hydrostatic Equilibrium
The Spectral Energy Distribution (SED) measurements of Sunlight indicate that the Sun's SED is approximately that of a black body at a temperature of about 5777 K. This fact has been known for quite some time now. What is surprising is that this fact has not been interpreted correctly to mean that the Sun's temperature is constant throughout its profile, i.e., the temperature from the core right up to the surface must be the same: if T_sun(r) is the temperature of the Sun at any radial point r, then T_sun(r) = 5777 K. From the fundamental principles of statistical thermodynamics, a blackbody is a body whose constituents are all at a constant temperature, and such a body will exhibit a Planckian SED. For a body that has a nearly blackbody SED, like the Sun (and the stars), this means the constituents of this body must, to a reasonable degree of approximation, be at the same temperature, i.e., its temperature must be constant throughout. If the Sun is approximately a blackbody as experience indicates, then the Standard Solar Model (SSM) cannot be a correct description of physical and natural reality, for the one simple reason that the Solar core must be at the same temperature as the Solar surface. Simply put, the Sun is not hot enough to ignite thermonuclear reactions at its core. If this is the case, then how do the Sun (and the stars) generate their luminosity? A suggestion to this problem is made in a future reading that is at an advanced stage of preparation; therein, it is proposed that the Sun is in a state of thermodynamic equilibrium -- i.e., in a state of uniform temperature -- and further a proposal (hypothesis or conjecture) is set forth that the Sun may very well be powered by the 104.17 micro-Hz gravitational oscillations first detected by Brookes et al. (1976) and Severny et al. (1976). Herein, we verily prove that the SED of a body in hydrostatic equilibrium cannot, in general, be Planckian in nature, thus ruling out the SSM in its current constitution. Only in the case where the density index is $\alpha_{\varrho}=2$ (which implies a zero temperature index, i.e., $\alpha_{T}=0$) will the SED of such a body be Planckian.
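The abstract's central empirical input, a near-blackbody solar SED at 5777 K, can be sketched numerically using the standard Planck law and Wien's displacement law (textbook formulas and CODATA constants, not taken from this paper):

```python
import math

# Standard SI constants
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(lam, T):
    """Spectral radiance B_lambda(T) of a black body, W / (m^2 sr m)."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

T_sun = 5777.0
# Wien's displacement law: lambda_max = b / T with b ~ 2.898e-3 m K,
# which for 5777 K lands near 500 nm, in the visible band.
lam_max = 2.897771955e-3 / T_sun
print(lam_max, planck_radiance(lam_max, T_sun))
```

The peak wavelength near 500 nm is what makes the 5777 K blackbody fit to Sunlight plausible in the first place.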
[5290] vixra:1201.0093 [pdf]
Accelerating Universe and the Increasing Bohr Radius
It seems that the Bohr radius of the hydrogen atom, the quantum of angular momentum and the strong interaction range are all connected with the large scale structure of the massive universe. In the accelerating universe, as space expands, the distance between proton and electron in the hydrogen atom increases and is directly proportional to the mass of the universe (which is the product of the critical density and the Hubble volume). The `rate of decrease in the fine structure ratio' is a measure of the cosmic rate of expansion. Considering the integral nature of the number of protons (of any nucleus), the integral nature of `hbar' can be understood.
[5291] vixra:1201.0091 [pdf]
Model of Superluminal Oscillating Neutrinos
We present a simple quantum relativistic model of neutrino oscillations and propagation in space. Matrix elements of the neutrino Hamiltonian depend on momentum, and this dependence is responsible for the observed neutrino velocity. It is possible to choose the Hamiltonian in such a way that the neutrino velocity oscillates around c in a pattern synchronized with flavor oscillations. The velocity can exceed c during some time intervals. Due to the low masses of the electron, muon and tau neutrino species, this superluminal effect is too small to be seen in experiments. The consistency of our model with the fundamental principles of relativity and causality is discussed as well.
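The paper's momentum-dependent Hamiltonian is not reproduced here, but the standard two-flavor vacuum oscillation probability it synchronizes with can be sketched (textbook formula; the example baseline and energy are illustrative, not from the paper):

```python
import math

def p_oscillation(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Standard two-flavor vacuum oscillation probability:
    P = sin^2(2 theta) * sin^2(1.267 * dm^2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * L_km / E_gev) ** 2

# Illustrative atmospheric-sector parameters, maximal mixing,
# roughly T2K-like baseline and energy (assumed values).
p = p_oscillation(1.0, 2.4e-3, 295.0, 0.6)
print(p)  # close to 1 near an oscillation maximum
```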
[5292] vixra:1201.0087 [pdf]
On Point Mass Sources, Null Naked Singularities and Euclidean Gravitational Action as Entropy
It is rigorously shown how the static spherically symmetric solutions of Einstein's equations can furnish a $null$ naked singularity associated with a point mass source at $ r = 0$. The construction relies on the possibility of having a metric $discontinuity$ at the location of the point mass. This result should be contrasted with the spacelike singularity described by the textbook black hole solution. It has been argued by some authors that one cannot get any information from the null naked singularity, so it will not have any undesirable physical effect on an outside far-away observer and cannot cause a breakdown of predictability. In this way one may preserve the essence of the cosmic censorship hypothesis. The field equations due to a delta-function point-mass source at $ r = 0 $ are solved and the Euclidean gravitational action (in $ \hbar $ units) corresponding to those solutions is evaluated explicitly. It is found that it is precisely equal to the black hole entropy (in Planck area units). This result holds in any dimension $ D \ge 3 $. We conclude by arguing why the Noncommutative Gravity of the spacetime tangent (co-tangent) bundle is the proper arena in which to study point masses.
[5293] vixra:1201.0068 [pdf]
To Unify the String Theory and the Strong Gravity
The key conceptual link that connects the gravitational force and the non-gravitational forces is the classical force limit, <math>F_C \cong \left(\frac{c^{4} }{G} \right)</math>. It can be considered as the upper limit of the cosmic string tension. The weak force magnitude <math>F_W</math> can be considered as the characteristic nuclear weak string tension. In 3+1 dimensions, if the strong interaction is really <math>10^{39}</math> times stronger than gravity, then until <math>F_C</math> and <math>F_W</math> are measured it can be assumed that <math>\frac{F_C}{F_W}\cong N^2</math>, where <math>N</math> is an Avogadro-like number.
[5294] vixra:1201.0062 [pdf]
Chirality and Symmetry Breaking in a Discrete Internal Space
In previous papers the permutation group S_4 has been suggested as an ordering scheme for elementary particles, and the appearance of this finite symmetry group was taken as an indication of the existence of a discrete inner symmetry space underlying elementary particle interactions. Here it is pointed out that a more suitable choice than the tetrahedral group S_4 is the pyritohedral group A_4 x Z_2, because its vibrational spectrum exhibits exactly the mass multiplet structure of the 3 fermion generations. Furthermore it is noted that the same structure can also be obtained from a primordial symmetry breaking S_4 --> A_4. Since A_4 is a chiral group, while S_4 is achiral, an argument can be given as to why the chirality of the inner pyritohedral symmetry leads to parity violation of the weak interactions.
[5295] vixra:1201.0016 [pdf]
Anisotropic to Isotropic Phase Transitions in the Early Universe
We propose that the early Universe was not Lorentz symmetric and that a gradual transition to the Lorentz symmetric phase occurred. An underlying form of the Dirac equation hints at such a transition for fermions. Fermions were coupled to space-time in a non-trivial manner such that they were massless in the Lorentz violating phase. The partition function is used as a transfer matrix to model this transition on a two-level thermodynamic system that describes how such a transition might have occurred. The system that models this transition evolves, with temperature, from a state of large to negligible entropy, and this is interpreted as describing the transition to a state with Lorentz symmetry. In addition, an analogy is drawn with the properties of this system to describe how the fields were massless and how a baryon asymmetry can be generated in this model.
[5296] vixra:1201.0003 [pdf]
Calculation of the Elementary Particle Mass
In this paper, the mass derived from the g_equation is assumed to be the quark-lepton mass, and is used to calculate the masses of the most common elementary particles. The difference between the calculated results and the observed values is within 3%.
[5297] vixra:1112.0078 [pdf]
Avogadro Number the 11 Dimensions Alternative
If 5 dimensions are required to unify 2 interactions, then 10 dimensions are required to unify 4 interactions. If there exist 4 (observed) interactions in 3+1 dimensions, there may exist 10 (observable) interactions in 10 dimensions; to unify those 10 interactions, 20 dimensions would be required. From this idea it can be suggested that the `unification' problem cannot be resolved with `n' new dimensions. By implementing the gravitational constant in atomic and nuclear physics, independent of the CGS and SI units, the Avogadro number can be obtained very easily, and its order of magnitude is $N \cong 6 \times 10^{23}$ but not $6 \times 10^{26}.$ If $M_P$ is the Planck mass and $m_e$ is the rest mass of the electron, semi-empirically it is observed that $M_g \cong N^{\frac{2}{3}}\cdot \sqrt{M_Pm_e} \cong 1.0044118 \times 10^{-3} \; Kg.$ If $m_{p} $ is the rest mass of the proton it is noticed that $\ln \sqrt{\frac{e^{2} }{4\pi \varepsilon _{0} Gm_{P}^{2} } } \cong \sqrt{\frac{m_{p} }{m_{e} } -\ln \left(N^{2} \right)}.$ The key conceptual link that connects the gravitational force and the non-gravitational forces is the classical force limit $\left(\frac{c^{4} }{G} \right)$. For a mole number of particles, if the strength of gravity is $\left(N.G\right),$ any one particle's weak force magnitude is $F_{W} \cong \frac{1}{N} \cdot \left(\frac{c^{4} }{N.G} \right)\cong \frac{c^{4} }{N^{2} G} $. The ratio of the `classical force limit' and the `weak force magnitude' is $N^{2} $. The assumed relation for the strong force and weak force magnitudes is $\sqrt{\frac{F_{S} }{F_{W} } } \cong 2\pi \ln \left(N^{2} \right)$. From the SUSY point of view, the `integral charge quark fermion' and `integral charge quark boson' mass ratio is $\Psi=2.262218404$ but not unity. With these advanced concepts an ``alternative'' to the `standard model' can be developed.
[5298] vixra:1112.0075 [pdf]
Modified Newtonian Dynamics and Dark Matter from a Generalized Gravitational Theory
<p>Vast amounts of data clearly demonstrate discrepancies between the observed dynamics, in large astronomical systems, and the dynamics predicted by Newtonian gravity and general relativity. The appearance of these discrepancies has two possible explanations: either these systems contain large quantities of a new kind of unseen matter −the Dark Matter (DM)− or the gravitational law has to be modified at this scale −as in MOdified Newtonian Dynamics (MOND)−. This dichotomy is not entirely new in the history of physics, with DM playing now the role of the old non-existent Vulcan planet.</p> <p>We have shown how both (<span style="font-weight: bold;">i</span>) the MONDian form and (<span style="font-weight: bold;">ii</span>) <span style="font-variant: small-caps;">Milgrom</span> acceleration follow from an extended theory of gravity −characterized by a new kind of gravitational potentials <i>h<sub>μν</sub></i>(<i>R</i>(<i>t</i>))−, which (<span style="font-weight: bold;">iii</span>) was initially aimed to solve those deficiencies of general relativity shared with classical electrodynamics −and that were previously solved with new electromagnetic potentials <i>Φ</i>(<i>R</i>(<i>t</i>)) and <span style="font-weight: bold;"><i>A</i></span>(<i>R</i>(<i>t</i>))−. We also show (<span style="font-weight: bold;">iv</span>) how the modified equation of motion can be cast into ordinary form, when a fictitious distribution of DM is added to the real mass. From our definition of DM, we obtain (<span style="font-weight: bold;">v</span>) the main properties traditionally attributed to it, in excellent agreement with the DM literature. Finally, (<span style="font-weight: bold;">vi</span>) we discuss further avenues of research opened by this new paradigm.</p>
[5299] vixra:1112.0066 [pdf]
Analytical Derivation of the Drag Curve $C_{D}=C_{D}\left(\mathcal{R}\right)$
Through a convenient mathematical approach to the Navier-Stokes equation, we obtain the quadratic dependence $v^{2}$ of the drag force $F_{D}$ on a falling sphere, and the drag coefficient, $C_{D}$, as a function of the Reynolds number. Viscosity effects related to the boundary layer under transition, from laminar to turbulent, lead to the tensorial integration related to the flux of linear momentum through a conveniently chosen control surface in the falling reference frame. This approach provides an efficient route for the drag force calculation, since the drag force turns out to be a field of a non-inertial reference frame, allowing an arbitrary and convenient control surface and finally leading to the quadratic term for the drag force.
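For comparison with any analytical $C_{D}(\mathcal{R})$ curve, a widely used empirical fit for a smooth sphere can be sketched; this is the standard Schiller-Naumann correlation, not the paper's derived result:

```python
def c_d_schiller_naumann(re):
    """Schiller-Naumann empirical drag-coefficient fit for a smooth sphere.
    Commonly quoted as valid up to Re ~ 800; reduces to Stokes' 24/Re as Re -> 0."""
    return 24.0 / re * (1.0 + 0.15 * re ** 0.687)

# In the Stokes regime the correction factor is small:
print(c_d_schiller_naumann(0.1))   # close to 24/0.1 = 240
```

Any proposed analytical drag curve should track this fit closely over the laminar-to-transitional range where it is calibrated.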
[5300] vixra:1112.0062 [pdf]
Units Independent Avogadro Number and Its Applications in Unification
By implementing the gravitational constant in atomic and nuclear physics, independent of the CGS and SI units, the Avogadro number can be obtained very easily. It is observed that, whether in the SI or the CGS system of units, the order of magnitude of the Avogadro number is $N \cong 6 \times 10^{23}$ but not $6 \times 10^{26}.$ The key conceptual link that connects the gravitational force and the non-gravitational forces is the classical force limit $\left(\frac{c^{4} }{G} \right)$. For a mole number of particles, if the strength of gravity is $\left(N.G\right),$ any one particle's weak force magnitude is $F_{W} \cong \frac{1}{N} \cdot \left(\frac{c^{4} }{N.G} \right)\cong \frac{c^{4} }{N^{2} G} $. The ratio of the `classical force limit' and the `weak force magnitude' is $N^{2} $. This may be the beginning of the unification of `gravitational and non-gravitational interactions'.
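The quoted force magnitudes and their ratio can be checked arithmetically; a minimal sketch using standard SI values for the constants (these values are assumptions of the sketch, not taken from the paper):

```python
# Standard SI values (assumed for this check)
G = 6.674e-11        # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
N = 6.022e23         # Avogadro number, used here as a pure number

F_C = c**4 / G            # the "classical force limit", ~1.21e44 N
F_W = c**4 / (N**2 * G)   # the abstract's "weak force magnitude", ~3.3e-4 N
print(F_C, F_W, F_C / F_W)  # the ratio equals N^2 by construction
```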
[5301] vixra:1112.0059 [pdf]
Nonlocality and Interaction
In order to understand nonlocal phenomena, the corresponding processes must be treated as scattering experiments and investigated in the frame of quantum field theory. Quantum mechanics is not sufficient for this task, because it is only concerned with the influence of a potential on a physical object. But in processes with nonlocal effects the role of interaction between physical objects must be sufficiently respected. – All this is shown first of all for the special case of the spin-1/2-experiment, and then generalized to arbitrary scattering processes.
[5302] vixra:1112.0053 [pdf]
126 GeV Boson Constitutes Higgs Charged Boson and W Boson
It is suggested that, in supersymmetry, the fermion and boson mass ratio is equal to 2.262218404 but not unity. Based on strong nuclear gravity and supersymmetry it is suggested that there exists a charged Higgs fermion of rest energy 103125 MeV. Its charged susy boson rest energy is 45586 MeV. The charged Higgs fermion and the nuclear charge radius play a crucial role in the emission of the electron in beta decay. The recently discovered neutral boson of rest energy 123 to 127 GeV seems to be composed of a Higgs charged boson and a W boson. Its obtained rest energy is 126 GeV.
[5303] vixra:1112.0049 [pdf]
Gravitational Constant in Nuclear Interactions
To date, no atomic principle has implemented the gravitational constant in nuclear physics. Considering the electromagnetic and gravitational force ratio of the electron and proton, a simple semi-empirical relation is proposed for estimating the strong coupling constant. The obtained value is $\alpha_s \cong 0.117378409.$ It is also noticed that $\alpha_s\cong \ln\left(r_Ur_D\right)$ where $r_U$ and $r_D$ are the geometric ratios of the Up and Down quark series respectively. It is noticed that the proton rest mass is equal to $\left(\frac{1}{\alpha}+\frac{1}{\alpha_s}\right)\sqrt{UD}.$ With reference to the electromagnetic and gravitational force ratio of the electron, 137 can be fitted at $r_U $ and 128 can be fitted at $r_D.$ Finally, the semi-empirical mass formula energy constants are fitted.
[5304] vixra:1112.0048 [pdf]
The Universe Accelerating Expansion: A Counterexample
In this paper we build a counterexample that raises a fundamental distinction between recession movement of matter and space expansion. We prove that observing matter recession at an accelerating rate is not an indication for the acceleration of the universe expansion. More precisely, we show that the observed acceleration in the recession movement of galaxies is naturally due to a universe deceleration. The counterexample provides us with a possible space with independent movement that might produce the observed behavior of galaxies registered for the redshift $z<0.5$ as well as for the redshift $z>0.5$. This counterexample calls into question the recent interpretation of the accelerating recession movement of galaxies as a sign of universe acceleration.
[5305] vixra:1112.0039 [pdf]
Maxwell's Equations Derived from Minimum Assumptions
Electrodynamics can be presented in a physics course as a chapter, or special case, of continuum mechanics. At the macroscopic level of description it is the mechanics of an incompressible elastic-plastic medium with point defects. The key point is the properties of these defects. Currently, however, their derivation from first principles does not seem to be feasible. So, in the present report I discuss the minimum requirements that must be imposed on the "external" force term in the Lame equation in order that the resulting system of equations be isomorphic to Maxwell's equations.
[5306] vixra:1112.0031 [pdf]
Gews Interactions in Strong Nuclear Gravity
In atomic or nuclear space, to date no one has measured the value of the gravitational constant. To bring the Planck mass scale down to the observed elementary particle mass scale, a large scale factor is required. The ratio of the Planck mass and the electron mass is close to $\textrm{Avogadro number}/8 \pi\cong N/8 \pi$. The idea of strong gravity originally referred specifically to the mathematical approach of Abdus Salam to the unification of gravity and quantum chromodynamics, but is now often used for any particle-level gravity approach. In this connection it is suggested that the key conceptual link connecting the gravitational force and the non-gravitational forces is the classical force limit $\left(\frac{c^{4} }{G} \right)$. For a mole number of particles, if the strength of gravity is $\left(N.G\right),$ any one particle's weak force magnitude is $F_{W} \cong \frac{1}{N} \cdot \left(\frac{c^{4} }{N.G} \right)\cong \frac{c^{4} }{N^{2} G} $. The ratio of the `classical force limit' and the `weak force magnitude' is $N^{2} $. This is another significance of the Avogadro number. If $R_0\cong 1.21$ fermi is the nuclear charge radius, to a very good accuracy it is noticed that in the hydrogen atom the ratio of the total energy of the electron and the nuclear potential is equal to the electromagnetic and gravitational force ratio of the electron, where the operating gravitational constant is $N^2G_C$ but not $G_C.$ The square root of the ratio of the strong and weak force magnitudes can be expressed as $2 \pi \ln\left(N^2\right).$ With the defined strong and weak force magnitudes, the observed elementary particle masses and their magnetic moments can be generated. An interesting application is that the characteristic building block of the cosmological `dark matter' can be quantified in terms of fundamental physical constants. No extra dimensions are required in this new approach.
[5307] vixra:1112.0030 [pdf]
Nuclear Mass Density in Strong Gravity and Grand Unification
It is noticed that, when the black hole mass density reaches the nuclear mass density, the mass of the black hole approaches $1.81\times 10^{31} \rm{\;Kg\;} \cong 9.1M_\odot.$ This characteristic mass can be called the Fermi black hole mass. This proposed mass unit plays an interesting role in grand unification and primordial black holes. The mass ratio of the Fermi black hole mass and Chandrasekhar's mass limit is $2\pi.$ The mass ratio of the Fermi black hole mass and the neutron star mass limit is $\sqrt{2\pi}.$ Considering strong nuclear gravity, the Fermi black hole mass can be obtained in a grand unified approach.
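The quoted mass follows from equating the mean density of a Schwarzschild black hole to a typical nuclear density; a minimal sketch, assuming standard SI constants and a nuclear density of about 2.3e17 kg/m^3 (an assumed textbook value, not taken from the paper):

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
rho_nuc = 2.3e17      # typical nuclear mass density, kg/m^3 (assumed)

# Mean density of a Schwarzschild black hole: rho = M / ((4/3) pi R_s^3)
# with R_s = 2GM/c^2.  Setting rho = rho_nuc and solving for M gives:
M = math.sqrt(3 * c**6 / (32 * math.pi * G**3 * rho_nuc))
print(M, M / 1.989e30)   # ~1.8e31 kg, i.e. roughly 9 solar masses
```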
[5308] vixra:1112.0027 [pdf]
Manipulating Standard and Inverse Chladni Patterns by Modulating Adhesive, Frictional, and Damping Forces
Particles on a plate form Chladni patterns when the plate is acoustically excited. To better understand these patterns and their possible real-world applications, I present a new analytical and numerical study of the transition between standard and inverse Chladni patterns on an adhesive surface at any magnitude of acceleration. By spatial autocorrelation analysis, I examine the effects of surface adhesion and friction on the rate of pattern formation. Next, I explore displacement models of particles translating on a frictional surface with both adhesive and internal particle-plate frictions. In addition, I find that both adhesion and damping forces serve as exquisite particle sorting mechanisms. Finally, I discuss the possible real-world applications of these sorting mechanisms, such as separating nanoparticles, organelles, or cells.
[5309] vixra:1112.0026 [pdf]
Strong Nuclear Gravity a Very Brief Report
The key conceptual link that connects the gravitational force and the non-gravitational forces is the classical force limit $\left(\frac{c^{4} }{G} \right)$. For a mole number of particles, if the strength of gravity is $\left(N.G\right),$ any one particle's weak force magnitude is $F_{W} \cong \frac{1}{N} \cdot \left(\frac{c^{4} }{N.G} \right)\cong \frac{c^{4} }{N^{2} G} $. The ratio of the `classical force limit' and the `weak force magnitude' is $N^{2} $. This can be considered as the beginning of `strong nuclear gravity'. The assumed relation for the strong force and weak force magnitudes is $\sqrt{\frac{F_{S} }{F_{W} } } \cong 2\pi \ln \left(N^{2} \right)$. If $m_{p} $ is the rest mass of the proton it is noticed that $\ln \sqrt{\frac{e^{2} }{4\pi \varepsilon _{0} Gm_{P}^{2} } } \cong \sqrt{\frac{m_{p} }{m_{e} } -\ln \left(N^{2} \right)}.$ From the SUSY point of view, the `integral charge quark fermion' and `integral charge quark boson' mass ratio is $\Psi=2.262218404$ but not unity. With these advanced concepts, everything from nuclear stability to the origin of charged leptons, quarks, electroweak bosons and the charged Higgs boson can be understood. Finally, an ``alternative'' to the `standard model' can be developed.
[5310] vixra:1112.0025 [pdf]
Nucleus in Strong Nuclear Gravity
Based on strong nuclear gravity, with $N$ being the Avogadro number and $\left(\frac{c^4}{N^2G}\right)$ being the weak force magnitude, the electron's gravitational mass generator is $X_{E} \cong m_{e} c^{2} \div \sqrt{\frac{e^{2} }{4\pi \varepsilon _{0} } \left(\frac{c^4}{N^2G}\right) } \cong 295.0606338$. The weak coupling angle is $\sin \theta _{W} \cong \frac{1}{\alpha X_{E} } \cong 0.464433353\cong \frac{{\rm Up}\; {\rm quark}\; {\rm mass}}{{\rm Down}\; {\rm quark}\; {\rm mass}} $. $X_{S} \cong \ln \left(X_{E}^{2} \sqrt{\alpha } \right)\cong 8.91424\cong \frac{1}{\alpha _{s} } $ can be considered as the `inverse of the strong coupling constant'. The proton-nucleon stability relation is $A_{S} \cong 2Z+\frac{Z^{2} }{S_{f} }$ where $S_{f} \cong X_{E} -\frac{1}{\alpha } -1\cong 157.0246441$. With reference to the proton rest energy, the semi-empirical mass formula coulombic energy constant is $E_{c} \cong \frac{\alpha }{X_{S} } \cdot m_{p} c^{2} \cong \alpha \cdot \alpha _{s} \cdot m_{p} c^{2} \cong {\rm 0}.7681\; MeV.$ The pairing energy constant is $E_{p} \cong \frac{m_{p} c^{2} +m_{n} c^{2} }{S_{f} } \cong 11.959\; {\rm M}eV$ and the asymmetry energy constant is $E_{a} \cong 2E_{p} \cong 23.918\; {\rm M}eV$. It is also noticed that $\frac{E_{a} }{E_{v} } \cong 1+\sin \theta _{W} $ and $\frac{E_{a} }{E_{s} } \cong 1+\mathop{\sin }\nolimits^{2} \theta _{W} $. Thus $E_{v} \cong 16.332$ MeV and $E_{s} \cong 19.674\; {\rm M}eV.$ The nuclear binding energy can be fitted with 2 terms. In scattering experiments the minimum distance between the electron and the nucleus is $R_0 \cong \left(\frac{\hbar c}{\left(N.G\right)m_e^2}\right)^2 \frac{2Gm_e}{c^2}.$
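The quoted values of $X_E$ and $\sin\theta_W$ can be reproduced numerically from the abstract's own definitions; a sketch assuming standard SI values for the constants (the constants are assumptions of the check, not taken from the paper):

```python
import math

# Standard SI values (assumed)
G = 6.674e-11          # gravitational constant
c = 2.998e8            # speed of light, m/s
N = 6.022e23           # Avogadro number
m_e = 9.109e-31        # electron rest mass, kg
k_e2 = 2.307e-28       # e^2 / (4 pi eps0), J m
alpha = 1 / 137.036    # fine structure constant

F_W = c**4 / (N**2 * G)                    # the abstract's weak force magnitude
X_E = m_e * c**2 / math.sqrt(k_e2 * F_W)   # "gravitational mass generator"
sin_theta_W = 1 / (alpha * X_E)            # the abstract's weak coupling angle
print(X_E, sin_theta_W)   # ~295.06 and ~0.4644, matching the quoted figures
```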
[5311] vixra:1112.0011 [pdf]
Modified Hubble's Law and the Primordial Cosmic Black Hole
The concept of `dark energy' still faces and raises a number of fundamental unresolved problems. `Cosmic acceleration', `dark energy' and `inflation' are the results of Edwin Hubble's incomplete conclusions. If there is a misinterpretation in Hubble's law, the flat model of cosmology cannot be considered a correct model of cosmology. \textbf{If the \textit{primordial universe} is a natural setting for the creation of black holes and other nonperturbative gravitational entities, it is also possible to assume that, throughout its journey, the whole universe is a \textit{primordial cosmic black hole.}} The Planck particle can be considered as the baby universe. The key assumption is that ``at any time, the cosmic black hole rotates with light speed''. The cosmic temperature is inversely proportional to the geometric mean of the cosmic mass and the Planck mass. For this growing cosmic sphere as a whole, while in light speed rotation, the `rate of decrease' in temperature is a ``primary'' measure of the cosmic `rate of expansion'. It can be suggested that the `rate of increase in galaxy red shift' from and about the cosmic center is a ``secondary'' measure of the cosmic `rate of expansion'. The present `cosmic mass density' and `cosmic time' are fitted with the natural logarithm of the ratio of the cosmic volume and the Planck particle's volume. If the present CMBR temperature is isotropic at 2.725 ${}^{0}$Kelvin, the present angular velocity is 2.17 x 10${}^{-18}$ rad/sec = 67 km/sec/Mpc.
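The final unit conversion can be checked directly; a minimal sketch using the standard metre-per-megaparsec value (an assumed constant, not from the paper):

```python
Mpc_m = 3.0857e22            # metres per megaparsec (standard value, assumed)
H0_kms_Mpc = 67.0            # km/s/Mpc, as quoted in the abstract

omega = H0_kms_Mpc * 1e3 / Mpc_m   # convert to s^-1 (rad/s for the quoted rotation)
print(omega)                 # ~2.17e-18 s^-1, matching the abstract's figure
```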
[5312] vixra:1112.0008 [pdf]
General Relativity and Holographic Conjecture Tangles with Quantum Mechanics and Sub Quantum Theories: New Insights and Interpretations About Strings and Quantization with Cosmological Significance
General relativity and the various possibilities for its proper combination with quantum mechanics is a central topic in theoretical physics. In this article we explain various links which bridge between different views about Quantum Gravity theories. It turns out that these have a very close relation with different interpretations of what quantization means. We try to address basic questions like what a string tension means. First, we start with qualitative analysis and then we confirm these with quantitative results. These are shown to have relations with Quantum Gravity noise, Dark Energy, Hogan's noise and the Holographic principle. We clarify a long standing confusion in entropic Gravity: whether an increase in entropy ($\triangle s$) makes the particle move ($\triangle x$) or vice-versa. We indicate the possibility of a non-local entanglement operation carried out by space-time which will increase entropy, so that a particle in space will move, causing $\triangle x$. We relate noise with dissipation, much like in a statistical system. In this context, we discuss the hierarchy issue. We try to see the string as a natural consequence of intrinsic computation in space-time at the Planck scale. We clarify that although we talk about dissipative sub-quantum models, as quantization tracks only equivalence classes (according to 't Hooft), it does not contradict information conservation in quantum mechanics; equivalence-class information is conserved. We try to describe why we can attach entropy to an arbitrary spatial surface (like a black hole horizon) and can derive Einstein's equation as the second law of thermodynamics.
[5313] vixra:1112.0006 [pdf]
The Simple Universe
This paper presents a unified theory for the universe which encompasses the dominant theories of physics. Our work is strictly based on the philosophy that the workings of the universe are extremely simple at the fundamental level. We provide a minimal set of elements as the fundamental constituents of the universe, and demonstrate that all natural phenomena can be explained by a minimum number of laws governing these fundamental elements and their minimal set of properties.
[5314] vixra:1112.0004 [pdf]
Are The Concepts of Mass in Quantum Theory and in General Relativity the Same?
The predominant approaches to understanding how quantum theory and General Relativity are related to each other implicitly assume that both theories use the same concept of mass. Given that despite great efforts such approaches have not yet produced a consistent falsifiable quantum theory of gravity, this paper entertains the possibility that the concepts of mass in the two theories are in fact distinct. It points out that if the concept of mass in quantum mechanics is defined such that it always exists in a superposition and is not a gravitational source, then this sharply segregates the domains of quantum theory and of general relativity. This concept of mass violates the equivalence principle applied to active gravitational mass, but may still produce effects consistent with the equivalence principle when applied to passive gravitational mass (in agreement with observations) by the correspondence principle applied to a weak field in the appropriate limit. An experiment that successfully measures the gravity field of quantum objects in a superposition, and in particular of photons, would not only falsify this distinction but also constitute the first direct empirical test that gravity must in fact be described fundamentally by a quantum theory.
[5315] vixra:1111.0115 [pdf]
Sheldrake's Morphic Fields and TGD View about Quantum Biology
<p> This article is inspired by the study of two books of Rupert Sheldrake. What makes the study of the books of Sheldrake so rewarding is that Sheldrake starts from problems of the existing paradigm, analyzes them thoroughly, and proposes solutions in the framework provided by his vision. There is no need to accept Sheldrake's views; just the reading of his arguments teaches a lot about the fundamental ideas and dogmas underlying present-day biology and forces the reader to realize how little we really know - not only about biology but even about so-called established areas of physics such as condensed matter physics. These books are precious gems for anyone trying to build an overall view. </p><p> The idea that Nature would have habits just as we do is probably one of those aspects of Sheldrake's work which generate most irritation in physicalists believing that Nature is governed by deterministic laws, with classical determinism replaced by quantum statistical determinism. Sheldrake is one of those very few scientists able to see the reality rather than only the model of reality. Morphic resonance would make it possible to establish the habits of Nature, and the past would determine the present to a high extent, but in an organic manner and in a totally different sense than in the world of the physicalist. </p><p> In this article I propose an interpretation of the vision of Sheldrake based on zero energy ontology and the TGD based view about geometric time and experienced time, forcing one to accept the notions of a 4-dimensional brain and society. In this framework the problem is to understand why our sensory perception is 3-dimensional, whereas the standard problems related to memory disappear since memory corresponds to 4-D aspects of perception and of conscious experience and memory storage is 4-dimensional. 
The vision of gene expression as something to some extent analogous to a democratic decision of a 4-D society looks rather natural in this framework and would explain some still poorly understood aspects of gene expression known since the days of Mendel. Therefore the term "the presence of the past", appearing in the title of one of Sheldrake's books, has quite a concrete meaning in the TGD Universe.</p>
[5316] vixra:1111.0114 [pdf]
Quantum Model for Remote Replication
<p> A model for remote replication of DNA is proposed. The motivating experimental discoveries are phantom DNA, the evidence for remote gene activation by laser light scattered from a similar genome, and the recent findings of Montagnier's and Gariaev's groups suggesting remote DNA replication. </p><p> Phantom DNA is identified as dark nucleon sequences predicted by quantum TGD, with dark nucleons naturally defining the analogs of DNA, RNA, tRNA, and amino acids and a realization of the vertebrate genetic code. The notion of the magnetic body, defining a hierarchy of flux quanta realized as flux tubes connecting DNA nucleotides, contained inside flux tubes connecting DNA codons, and condensed at flux sheets connecting DNA strands, is an essential element of the model. Dark photons with a large value of Planck constant, coming as an integer multiple of the ordinary Planck constant, propagate along the flux quanta connecting biomolecules: this realizes the idea of wave DNA. Biomolecules act as quantum antennas, and those with common antenna frequencies interact resonantly. </p><p> Biomolecules interacting strongly - in particular DNA nucleotides - would be characterized by the same frequency. An additional coding is needed to distinguish between nucleotides: in the model for DNA as a topological quantum computer, quarks (u,d) and their antiquarks would code for the nucleotides A, T, C, and G and would take care of this. The proposed role of quarks in biophysics of course makes sense only if one accepts the new physics predicted by quantum TGD. DNA codons (nucleotide triplets) would be coded by different frequencies, which correspond to different values of Planck constant for photons with the same photon energy propagating along the corresponding flux tubes. This allows one to interpret the previously proposed TGD based realization of the so-called divisor code proposed by Khrennikov and Nilsson in terms of the quantum antenna mechanism. </p><p> In this framework the remote replication of DNA can be understood. 
DNA nucleotides interact resonantly with the DNA strand and attach to the ends of the flux tubes emerging from the DNA strand and organized on 2-D flux sheets. In Montagnier's experiment the interaction between test tubes A and B would be mediated by dark photons between DNA and dark nucleon sequences, which would amplify the dark photon beam, which in turn would induce remote replication. In the experiment of Gariaev, scattered laser light would help to achieve the same purpose. Dark nucleon sequences would be generated in Montagnier's experiment by the homeopathic treatment of test tube B. </p><p> Dark nucleon sequences could characterize the magnetic body of any polar molecule in water and give it a "name" written in terms of genetic codons, so that the genetic code would be much more general than usually thought. The dark nucleon sequence would most naturally be assigned with the hydrogen bonds between the molecule and the surrounding ordered water, being perhaps generated when this layer of ordered water melts as the molecule becomes biologically active. Water memory and the basic mechanism of homeopathy would be due to the "dropping" of the magnetic bodies of polar molecules as the water is treated homeopathically, and the dark nucleon sequences could define an independent life form evolving during the sequence of repeated dilutions and mechanical agitations, which take the role of environmental catastrophes as the driving force of evolution. The association of DNA, RNA, and amino acid sequences with the corresponding dark nucleon sequences would be automatic since they too are polar molecules surrounded by ordered water layers. </p><p> The transcription of the dark nucleon sequences associated with the polar invader molecule to ordinary DNA sequences, in turn coding for proteins attaching to the invader molecules by the quantum antenna mechanism, could define the basic mechanism for the functioning and evolution of the immune system. </p>
[5317] vixra:1111.0110 [pdf]
Oil Droplets as a Primitive Life Form?
<p> The origin of life is one of the most fascinating problems of biology. The classic Miller-Urey experiment was carried out almost 60 years ago. In the experiment sparks were shot through a primordial atmosphere consisting of methane, ammonia, hydrogen, and water, and the outcome was many of the amino acids essential for life. The findings raised optimism that the key to understanding the origins of life had been found. After the death of Miller in 2007, scientists re-examined sealed test tubes from the experiment using modern methods and found that well over 20 amino acids - more than the 20 occurring in life - were produced in the experiments. </p><p> The Miller-Urey experiments have also yielded another surprise: the black tar, consisting mostly of hydrogen cyanide polymer, produced in the experiments has turned out to be much more interesting than originally thought and suggests a direction where candidates for precursors of living cells might be found. In earlier experiments nitrobenzene droplets doped with oleic anhydride exhibited some signatures of life. The droplets were capable of metabolism, using oleic anhydride as "fuel" to make the droplet move. Droplets can move along chemical gradients, sense each other's presence and react to it, and have also demonstrated rudimentary memory. Droplets can even "solve" a maze having "food" at its other end. </p><p> The basic objection against identification as a primitive life form is that the droplets have no genetic code and do not replicate. The model for dark nucleons however predicts that the states of the nucleon are in one-to-one correspondence with DNA, RNA, tRNA, and amino acid molecules and that the vertebrate genetic code is naturally realized. The question is whether the realization of the genetic code in terms of dark nuclear strings might provide the system with a genetic code and whether the replication could occur at the level of dark nucleon strings. 
In this article a model for oil droplets as a primitive life form is developed on the basis of the TGD inspired quantum model of biology. In particular, a proposal for how dark genes could couple to the chemistry of oil droplets is developed.</p>
[5318] vixra:1111.0109 [pdf]
Generalization of Thermodynamics Allowing Negentropic Entanglement and a Model for Conscious Information Processing
<p> Costa de Beauregard considers a model for information processing by a computer based on an analogy with Carnot's heat engine. As such, Beauregard's model for the computer does not look convincing as a model of what happens in biological information processing. </p><p> Combined with the TGD based vision about living matter, the model however inspires a model for how conscious information is generated and how the second law of thermodynamics must be modified in the TGD framework. The basic formulas of thermodynamics remain as such, since the modification means only the replacement S → S - N, where S is the thermodynamical entropy and N the negentropy associated with negentropic entanglement. This allows one to circumvent the basic objections against the application of Beauregard's model to living systems. One can also understand why living matter is such an effective entropy producer as compared to inanimate matter, and also the characteristic decomposition of living systems into highly negentropic and entropic parts as a consequence of the generalized second law.</p>
[5319] vixra:1111.0108 [pdf]
DNA Waves and Water
<p> The group of HIV Nobelist L. Montagnier has published two articles challenging the standard views about the genetic code and providing strong support for the notion of water memory. Already the results of the first article implicitly suggested the existence of a new kind of nano-scale representation of the genetic code, and the recent article makes this claim explicitly. The TGD based model for the findings was based on the notion of the magnetic body representing biologically relevant aspects of molecules in terms of cyclotron frequencies. The model also involved the realization of the genetic code using electromagnetic field patterns and as dark nucleon strings, and led to a proposal that the analogs of transcription and translation are realized for the dark variants of DNA, RNA, tRNA, and amino acids represented in terms of dark nucleon strings. Also processes transcribing the ordinary and dark variants of the biomolecules to each other were proposed. This would make possible R&D-like controlled evolution based on experimentation using dark representations of biomolecules defining a kind of virtual world. </p><p> The recent findings of Montagnier's group allow a more detailed formulation of the model and suggest a general mechanism for generalized transcription and translation processes based on the reconnection of magnetic flux tubes between the molecules in question. A new element is the proposed role of ordered water and hydrogen bonds in the formation of water memories. These representations would result from the dropping of the magnetic bodies of molecules as the hydrogen bonds connecting the molecule to the water molecules of the ordered water layer around it - analogous to an ice layer - are split during the mechanical agitation. A similar process occurs quite generally when external energy feed excites the resting state of the cell and induces protein folding and its reversal and the formation of protein aggregates. 
Good metaphors for the resting state and excited states are cellular winter and summer. The necessity of repeated dilution and mechanical agitation could be understood if the agitation provides metabolic energy for the replication of the magnetic bodies filling the diluted water volume and gives rise to a series of "environmental catastrophes" inducing evolutionary leaps that increase the typical value of the Planck constant associated with the magnetic bodies, until the energy E = hf of 7 Hz dark photons exceeds the thermal energy at room temperature.</p>
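As a back-of-the-envelope check of the last condition (my own arithmetic illustration, not taken from the abstract): with the ordinary Planck constant, a 7 Hz photon carries an energy far below k<sub>B</sub>T at room temperature, so the Planck-constant multiplier h<sub>eff</sub>/h the abstract requires would have to be of order 10<sup>12</sup>:

```python
# Illustration only: how large must a hypothetical Planck-constant
# multiplier be for a 7 Hz photon energy E = h_eff * f to reach
# the thermal energy k_B * T at room temperature?

h = 6.62607015e-34    # Planck constant (J*s)
k_B = 1.380649e-23    # Boltzmann constant (J/K)
f = 7.0               # frequency quoted in the abstract (Hz)
T = 300.0             # room temperature (K)

E_photon = h * f      # ~4.6e-33 J for an ordinary photon
E_thermal = k_B * T   # ~4.1e-21 J

multiplier = E_thermal / E_photon
print(f"required h_eff/h ~ {multiplier:.2e}")  # of order 1e12
```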
[5320] vixra:1111.0107 [pdf]
Model for the Findings about Hologram Generating Properties of DNA
<p> A TGD inspired model is proposed for the strange replica structures observed when a DNA sample is irradiated with red, IR, and UV light using two methods by Peter Gariaev and collaborators. The first method produces what is tentatively interpreted as replica images of either the DNA sample or of the five red lamps used to irradiate the sample. The second method produces a replica image of the environment, with replication in the horizontal direction but only at the right-hand side of the apparatus. Also a white phantom variant of the replica trajectory observed in the first experiment is observed and has in the vertical direction the size scale of the apparatus. </p><p> The model is developed in order to explain the characteristic features of the replica patterns. The basic notions are the magnetic body, the massless extremal (topological light ray), the existence of Bose-Einstein condensates of Cooper pairs at magnetic flux tubes, and dark photons with a large value of Planck constant for which macroscopic quantum coherence is possible. The hypothesis is that the first method makes part of the magnetic body of the DNA sample visible, whereas method II would produce a replica hologram of the environment using dark photons and also produce a phantom image of the magnetic tubes becoming visible by method I. Replicas would result as a mirror hall effect in the sense that the dark photons would move back and forth between the part of the magnetic body becoming visible by method I and serving as a mirror and the objects of the environment serving also as mirrors. What is however required is that not only the outer boundaries of objects visible via ordinary reflection act as mirrors but also the parts of the outer boundary not usually visible perform a mirror function, so that an essentially 3-D vision providing information about the geometry of the entire object would be in question. Many-sheeted space-time allows this. 
</p><p> The presence of the hologram image for method II requires the self-sustainment of the reference beam only, whereas the presence of the phantom DNA image for method I requires the self-sustainment of both beams. Non-linear dynamics for the energy feed from DNA to the magnetic body could make possible the self-sustainment of both beams simultaneously. Non-linear dynamics for the beams themselves could allow for the self-sustainment of the reference beam and/or the reflected beam. The latter option is favored by the data. </p>
[5321] vixra:1111.0091 [pdf]
Langlands Conjectures in TGD Framework
<p> The arguments of this article support the view that in the TGD Universe the number theoretic and geometric Langlands conjectures could be understood very naturally. The basic notions are the following. </p><p> <OL> <LI>Zero energy ontology (ZEO) and the related notion of causal diamond CD (CD is shorthand for the Cartesian product of a causal diamond of M<sup>4</sup> and of CP<sub>2</sub>). ZEO leads to the notion of partonic 2-surfaces at the light-like boundaries of CD and to the notion of string world sheet. These notions are central in the recent view about TGD. One can assign to the partonic 2-surfaces a conformal moduli space having as additional coordinates the positions of braid strand ends (punctures). By electric-magnetic duality this moduli space must correspond closely to the moduli space of string world sheets. </p><p> <LI>Electric-magnetic duality realized in terms of string world sheets and partonic 2-surfaces. The group G and its Langlands dual <sup>L</sup>G would correspond to the time-like and space-like braidings. Duality predicts that the moduli space of string world sheets is very closely related to that for the partonic 2-surfaces. The strong form of 4-D general coordinate invariance, implying electric-magnetic duality and S-duality as well as the strong form of holography, indeed predicts that the collection of string world sheets is fixed once the collection of partonic 2-surfaces at the light-like boundaries of CD and its sub-CDs is known. </p><p> <LI> The proposal is that finite measurement resolution is realized in terms of inclusions of hyperfinite factors of type II<sub>1</sub> at the quantum level and represented in terms of a confining effective gauge group. This effective gauge group could be some associate of G: the gauge group, Kac-Moody group or its quantum counterpart, or the so-called twisted quantum Yangian strongly suggested by twistor considerations. 
At the space-time level the finite measurement resolution would be represented in terms of braids, which come in two varieties corresponding to braids assignable to the space-like surfaces at the two light-like boundaries of CD and to the light-like 3-surfaces at which the signature of the induced metric changes, identified as orbits of partonic 2-surfaces connecting the future and past boundaries of CDs. </p><p> There are several steps leading from G to its twisted quantum Yangian. The first step replaces point-like particles with partonic 2-surfaces: this brings in the Kac-Moody character. The second step brings in finite measurement resolution, meaning that the Kac-Moody type algebra is replaced with its quantum version. The third step brings in zero energy ontology: one cannot treat a single partonic surface or string world sheet as an independent unit; it is always the collection of partonic 2-surfaces and the corresponding string world sheets that defines the geometric structure, so that multilocality, and therefore a quantum Yangian algebra with multilocal generators, is unavoidable. </p><p> In finite measurement resolution the geometric Langlands duality and the number theoretic Langlands duality are very closely related, since the partonic 2-surface is effectively replaced with the punctures representing the ends of braid strands, and the orbit of this set under a discrete subgroup of G defines effectively a collection of "rational" 2-surfaces. The number of the "rational" surfaces in the geometric Langlands conjecture replaces the number of rational points of the partonic 2-surface in its number theoretic variant. The ability to compute both these numbers is very relevant for quantum TGD. </p><p> <LI>The natural identification of the associate of G is as the quantum Yangian of a Kac-Moody type group associated with the Minkowskian open string model assignable to the string world sheet representing a string moving in the moduli space of the partonic 2-surface. 
The dual group corresponds to the Euclidian string model with the partonic 2-surface representing the string orbit in the moduli space of the string world sheets. The Kac-Moody algebra assigned with simply laced G is obtained using the standard tachyonic free field representation, obtained as ordered exponentials of Cartan algebra generators identified as the transversal parts of the M<sup>4</sup> coordinates for the braid strands. The importance of the free field representation, generalizing to the case of non-simply laced groups, in the realization of finite measurement resolution in terms of Kac-Moody algebra cannot be over-emphasized. </p><p> <LI>Langlands duality involves besides the harmonic analysis side also the number theoretic side: Galois groups (collections of them) defined by infinite primes and integers having a representation as symplectic flows defining braidings. I have earlier proposed that the hierarchy of these Galois groups defines what might be regarded as a non-commutative homology and cohomology. Also G has this kind of representation, which explains why the representations of these two kinds of groups are so intimately related. This relationship could be seen as a generalization of the McKay correspondence between finite subgroups of SU(2) and simply laced Lie groups. </p><p> <LI>The symplectic group of the light-cone boundary acting as isometries of the WCW geometry allows one to represent projectively both the Galois groups and the symmetry groups as symplectic flows, so that the non-commutative cohomology would have a braided representation. This leads to braided counterparts of both the Galois group and the effective symmetry group. </p><p> <LI>The moduli space of the Higgs bundle, playing a central role in the approach of Witten and Kapustin to the geometric Langlands program, is in the TGD framework replaced with the conformal moduli space of partonic 2-surfaces. 
It is however not possible to speak about a Higgs field, although the moduli define the analog of a Higgs vacuum expectation value. Note that in the TGD Universe the most natural assumption is that all Higgs-like states are "eaten" by gauge bosons, so that also the photon and gluons become massive. This mechanism would be very general and mean that massless representations of the Poincare group organize into massive ones via the formation of bound states. It might however be possible to see the contribution of p-adic thermodynamics, depending on genus, as analogous to the Higgs contribution, since the conformal moduli are analogous to the vacuum expectation value of the Higgs field. </OL></p>
[5322] vixra:1111.0090 [pdf]
How Infinite Primes Relate to Other Views About Mathematical Infinity?
<p> Infinite primes are a purely TGD inspired notion. The notion of infinity here is number theoretical, and infinite primes have well defined divisibility properties. One can partially order them by the real norm; p-adic norms of infinite primes are well defined and finite. The construction of infinite primes is a hierarchical procedure structurally equivalent to a repeated second quantization of a supersymmetric arithmetic quantum field theory. At the lowest level bosons and fermions are labelled by ordinary primes. At the next level one obtains free Fock states plus states having an interpretation as bound many-particle states. The many-particle states of a given level become the single-particle states of the next level, and one can repeat the construction ad infinitum. The analogy with quantum theory is intriguing, and I have proposed that the quantum states in the TGD Universe correspond to octonionic generalizations of infinite primes. It is interesting to compare infinite primes (and integers) to the Cantorian view about infinite ordinals and cardinals. The basic problems of Cantor's approach relate to the axiom of choice, the continuum hypothesis, and Russell's antinomy: all of these problems relate to the definition of ordinals as sets. In the TGD framework infinite primes, integers, and rationals are defined purely algebraically, so that these problems are avoided. It is not surprising that these approaches are not equivalent: for instance, sum and product for Cantorian ordinals are not commutative, unlike for infinite integers defined in terms of infinite primes. </p><p> Set theory defines the foundations of modern mathematics. Set theory relies strongly on classical physics, and the obvious question is whether one should reconsider the foundations of mathematics in light of quantum physics. Is set theory really the correct approach to axiomatization? 
</p><p> <OL> <LI> The quantum view about consciousness and cognition leads to a proposal that p-adic physics serves as a correlate for cognition. Together with the notion of infinite primes this suggests that number theory should play a key role in the axiomatics. <LI> Algebraic geometry allows an algebraization of set theory, and this kind of approach suggests itself strongly in a physics inspired approach to the foundations of mathematics. This means powerful limitations on the notion of set. <LI> Finite measurement resolution and the finite resolution of cognition could have implications also for the foundations of mathematics, and relate directly to the fact that all numerical approaches reduce to an approximation using rationals with a cutoff on the number of binary digits. <LI> The TGD inspired vision about consciousness implies evolution by quantum jumps, and hence also the evolution of mathematics, so that no fixed system of axioms can ever catch all the mathematical truths, for the simple reason that mathematicians themselves evolve with mathematics. </OL> I will discuss the possible impact of these observations on the foundations of physical mathematics, assuming that one accepts the TGD inspired view about infinity, about the notion of number, and the restrictions on the notion of set suggested by classical TGD. </p>
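For readers unfamiliar with the p-adic norm invoked in the abstract above, here is a minimal illustrative sketch for ordinary rationals (the extension to infinite primes is the author's own generalization; this example only shows the standard notion |x|<sub>p</sub> = p<sup>-v<sub>p</sub>(x)</sup>):

```python
from fractions import Fraction

def padic_valuation(n, p):
    """Exponent of the prime p in the factorization of a nonzero integer n."""
    n, v = abs(n), 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_norm(x, p):
    """Standard p-adic norm |x|_p = p**(-v_p(x)) of a nonzero rational x."""
    x = Fraction(x)
    v = padic_valuation(x.numerator, p) - padic_valuation(x.denominator, p)
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

# |12|_2 = 1/4 since 12 = 2^2 * 3, while |1/12|_2 = 4: p-adically,
# high divisibility by p means a *small* norm.
print(padic_norm(12, 2), padic_norm(Fraction(1, 12), 2))
```

Note how the norm is always a finite power of p; this is the sense in which p-adic norms stay "well defined and finite" even for numbers that are large in the real norm.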
[5323] vixra:1111.0089 [pdf]
Motives and Infinite Primes
<p> In this article the goal is to find whether the general mathematical structures associated with the twistor approach, superstring models, and M-theory could have a generalization or a modification in the TGD framework. The contents of the article are an outcome of a rather spontaneous process and represent rather unexpected new insights about TGD resulting from the comparisons. </p><p> <I>1. Infinite primes, Galois groups, algebraic geometry, and TGD</I> </p><p> In algebraic geometry the notion of a variety defined by an algebraic equation is very general: all number fields are allowed. One of the challenges is to define the counterparts of homology and cohomology groups for them. The notion of cohomology, giving rise also to homology if Poincare duality holds true, is central. The number of various cohomology theories has inflated, and one of the basic challenges is to find a sufficiently general approach allowing one to interpret various cohomology theories as variations of the same motive, as Grothendieck, the pioneer of the field responsible for many of the basic notions and visions, expressed it. </p><p> Cohomology requires a definition of the integral of forms for all number fields. In the p-adic context the lack of well-ordering of p-adic numbers implies difficulties both in homology and cohomology, since the notion of boundary does not exist in the topological sense. The notion of definite integral is problematic for the same reason. This has led to a proposal of reducing integration to Fourier analysis, which works for symmetric spaces but requires algebraic extensions of p-adic numbers and an appropriate definition of the p-adic symmetric space. The definition is not unique, and the interpretation is in terms of the varying measurement resolution. </p><p> The notion of infinite primes has gradually turned out to be more and more important for quantum TGD. 
Infinite primes, integers, and rationals form a hierarchy completely analogous to a hierarchy of second quantizations of a supersymmetric arithmetic quantum field theory. The simplest infinite primes, representing elementary particles at a given level, are in one-to-one correspondence with the many-particle states of the previous level. More complex infinite primes have an interpretation in terms of bound states. </p><p> <OL> </p><p> <LI>What makes infinite primes interesting from the point of view of algebraic geometry is that infinite primes, integers, and rationals at the n:th level of the hierarchy are in 1-1 correspondence with rational functions of n arguments. One can solve the roots of the associated polynomials and perform a root decomposition of infinite primes at various levels of the hierarchy and assign to them Galois groups acting as automorphisms of the field extensions of polynomials defined by the roots coming as restrictions of the basic polynomial to the planes x<sub>n</sub>=0, x<sub>n</sub>=x<sub>n-1</sub>=0, etc. </p><p> <LI>These Galois groups are suggested to define a non-commutative generalization of homotopy and homology theories and a non-linear boundary operation, for which a geometric interpretation in terms of the restriction to a lower-dimensional plane is proposed. The Galois group G<sub>k</sub> would be analogous to the relative homology group relative to the plane x<sub>k-1</sub>=0 representing the boundary, and makes sense for all number fields also geometrically. One can ask whether the invariance of the complex of groups under permutations of the orders of variables in the reduction process is necessary. Physical interpretation suggests that this is not the case and that all the groups obtained by the permutations are needed for a full description. 
</p><p> <LI>The algebraic counterpart of the boundary map would map the elements of G<sub>k</sub>, identified as the analog of a homotopy group, to the commutator group [G<sub>k-2</sub>,G<sub>k-2</sub>] and therefore to the unit element of the abelianized group defining the cohomology group. In order to obtain something analogous to the ordinary homology and cohomology groups one must however replace the Galois groups by their group algebras with values in some field or ring. This allows one to define the analogs of homotopy and homology groups as their abelianizations. Cohomotopy and cohomology would emerge as duals of homotopy and homology in the dual of the group algebra. </p><p> <LI>That the algebraic representation of the boundary operation is not expected to be unique turns into a blessing when one keeps the TGD-as-almost-topological-QFT vision as the guideline. One can include all boundary homomorphisms subject to the condition that the anticommutator δ<sup>i</sup><sub>k</sub>δ<sup>j</sup><sub>k-1</sub>+δ<sup>j</sup><sub>k</sub>δ<sup>i</sup><sub>k-1</sub> maps to the group algebra of the commutator group [G<sub>k-2</sub>,G<sub>k-2</sub>]. By adding dual generators one obtains what looks like a generalization of an anticommutative fermionic algebra, and what comes to mind is the spectrum of quantum states of a SUSY algebra spanned by bosonic states realized as group algebra elements and fermionic states realized in terms of homotopy and cohomotopy, and in the abelianized version in terms of homology and cohomology. The Galois group action allows one to organize quantum states into multiplets of the Galois groups acting as symmetry groups of physics. Poincare duality would map the analogs of fermionic creation operators to annihilation operators and vice versa, and the counterpart of the pairing of the k:th and n-k:th homology groups would be an inner product analogous to that given by Grassmann integration. 
The interpretation in terms of fermions turns out, however, to be wrong, and the more appropriate interpretation is in terms of Dolbeault cohomology, applying to forms with holomorphic and antiholomorphic indices. </p><p> <LI> The intuitive idea that the Galois group is analogous to the 1-D homotopy group, which is the only non-commutative homotopy group, the structure of infinite primes analogous to a braids-of-braids-of-braids-of-... structure, the fact that the Galois group is a subgroup of the permutation group, and the possibility to lift the permutation group to a braid group suggest a representation as flows of a 2-D plane with punctures, giving a direct connection with topological quantum field theories for braids, knots, and links. The natural assumption is that the flows are induced from transformations of the symplectic group acting on δ M<sup>4</sup><sub>+/-</sub>× CP<sub>2</sub> representing quantum fluctuating degrees of freedom associated with WCW ("world of classical worlds"). Discretization of WCW and a cutoff in the number of modes would be due to the finite measurement resolution. The outcome would be rather far reaching: finite measurement resolution would allow one to construct WCW spinor fields explicitly using the machinery of number theory and algebraic geometry. </p><p> <LI>A connection with operads is highly suggestive. What is nice from the TGD perspective is that the non-commutative generalization of homology and homotopy has a direct connection to the basic structure of quantum TGD as an almost topological quantum theory, where braids are basic objects, and also to hyper-finite factors of type II<sub>1</sub>. This notion of Galois group makes sense only for the algebraic varieties whose coefficient field is an algebraic extension of some number field. The braid group approach however allows one to generalize the approach to completely general polynomials, since the braid group makes sense also when the end points of the braid are not algebraic points (roots of the polynomial). 
</p><p> </OL> </p><p> This construction would realize the number theoretical, algebraic geometric, and topological content in the construction of quantum states in the TGD framework, in accordance with the TGD as almost TQFT philosophy, TGD as infinite-D geometry, and TGD as generalized number theory visions. </p><p> <I>2. p-Adic integration and cohomology</I> </p><p> This picture also leads to a proposal for how p-adic integrals could be defined in the TGD framework. </p><p> <OL> <LI> The calculation of twistorial amplitudes reduces to multi-dimensional residue calculus. Motivic integration gives excellent hopes for the p-adic existence of this calculus, and the braid representation would give a space-time representation for the residue integrals in terms of the braid points representing poles of the integrand: this would conform with quantum classical correspondence. The power of 2π appearing in a multiple residue integral is problematic unless it disappears from the scattering amplitudes. Otherwise one must allow an extension of p-adic numbers to a ring containing powers of 2π. </p><p> <LI> The weak form of electric-magnetic duality and the general solution ansatz for preferred extremals reduce the Kähler action defining the Kähler function for WCW to the integral of the Chern-Simons 3-form. Hence the reduction to cohomology takes place at the space-time level, and since p-adic cohomology exists, there are excellent hopes for the existence of a p-adic variant of Kähler action. The existence of the exponent of the Kähler function gives additional powerful constraints on the value of the Kähler function in the intersection of real and p-adic worlds consisting of algebraic partonic 2-surfaces, and allows one to guess the general form of the Kähler action in the p-adic context. </p><p> <LI>One should also define p-adic integration for the vacuum functional at the level of WCW. 
p-Adic thermodynamics serves as a guideline, leading to the condition that in the p-adic sector the exponent of Kähler action is of the form (m/n)<sup>r</sup>, where m/n is divisible by a positive power of the p-adic prime p. This implies that one has a sum over contributions coming as powers of p, and the challenge is to calculate the integral over K= constant surfaces using the integration measure defined by an infinite power of the Kähler form of WCW, reducing the integral to cohomology, which should make sense also p-adically. The p-adicization of the WCW integrals has already been discussed earlier using an approach based on harmonic analysis in symmetric spaces, and these two approaches should be equivalent. One could also consider a more general quantization of Kähler action as a sum K=K<sub>1</sub>+K<sub>2</sub>, where K<sub>1</sub>=rlog(m/n) and K<sub>2</sub>=n, with n divisible by p since exp(n) exists in this case, and one has exp(K)= (m/n)<sup>r</sup> × exp(n). Also transcendental extensions of p-adic numbers involving n+p-2 powers of e<sup>1/n</sup> can be considered. </p><p> <LI>If the Galois group algebras indeed define a representation for WCW spinor fields in finite measurement resolution, also WCW integration would reduce to summations over the Galois groups involved, so that the integrals would be well-defined in all number fields. </OL> </p><p> <I>3. Floer homology, Gromov-Witten invariants, and TGD</I> </p><p> Floer homology defines a generalization of Morse theory, making it possible to deduce symplectic homology groups by studying Morse theory in the loop space of the symplectic manifold. Since the symplectic transformations of the boundary of δ M<sup>4</sup><sub>+/-</sub>× CP<sub>2</sub> define the isometry group of WCW, it is very natural to expect that Kähler action defines a generalization of Floer homology making it possible to understand the symplectic aspects of quantum TGD. 
The hierarchy of Planck constants implied by the one-to-many correspondence between canonical momentum densities and time derivatives of the imbedding space coordinates leads naturally to singular coverings of the imbedding space, and the resulting symplectic Morse theory could characterize the homology of these coverings. </p><p> One ends up with a more precise definition of the vacuum functional: Kähler action reduces to Chern-Simons terms (imaginary in Minkowskian regions and real in Euclidian regions), so that it has both a phase and a real exponent, which makes the functional integral well-defined. Both the phase factor and its conjugate must be allowed, and the resulting degeneracy of the ground state could make it possible to understand qualitatively the delicacies of CP breaking and its sensitivity to the parameters of the system. The critical points with respect to zero modes correspond to those for the Kähler function. Critical points with respect to the complex coordinates associated with quantum fluctuating degrees of freedom are not allowed by the positive definiteness of the Kähler metric of WCW. One can say that the Kähler and Morse functions define the real and imaginary parts of the exponent of the vacuum functional. </p><p> The generalization of Floer homology inspires several new insights. In particular, the space-time surface as a hyper-quaternionic surface could define the 4-D counterpart of the pseudo-holomorphic 2-surfaces of Floer homology. Holomorphic partonic 2-surfaces could in turn correspond to the extrema of the Kähler function with respect to zero modes, and holomorphy would be accompanied by super-symmetry. </p><p> Gromov-Witten invariants appear in Floer homology and topological string theories, and this inspires an attempt to build an overall view about their role in TGD. A generalization of topological string theories of type A and B to the TGD framework is proposed. 
The TGD counterpart of the mirror symmetry would be the equivalence of the formulations of TGD in H=M<sup>4</sup>× CP<sub>2</sub> and in CP<sub>3</sub>× CP<sub>3</sub>, with space-time surfaces replaced with 6-D sphere bundles. </p><p> <I>4. K-theory, branes, and TGD </I> </p><p> K-theory and its generalizations play a fundamental role in super-string models and M-theory since they allow a topological classification of branes. After presenting some physical objections against the notion of brane, more technical problems of this approach are discussed briefly, and it is proposed how TGD makes it possible to overcome these problems. A more precise formulation of the weak form of electric-magnetic duality emerges: the original formulation was not quite correct for space-time regions with Euclidian signature of the induced metric. The question about possible TGD counterparts of R-R and NS-NS fields and of S, T, and U dualities is discussed. </p><p> <I>5. p-Adic space-time sheets as correlates for Boolean cognition</I> </p><p> p-Adic physics is interpreted as a physical correlate for cognition. The so-called Stone spaces are in one-to-one correspondence with Boolean algebras and have typically 2-adic topologies. A generalization to the p-adic case, with the interpretation of the pinary digits as physically representable Boolean statements of a Boolean algebra with 2<sup>n</sup>>p>2<sup>n-1</sup> statements, is encouraged by the p-adic length scale hypothesis. Stone spaces are synonymous with profinite spaces, of which both finite and infinite Galois groups are basic examples. This provides strong support for the connection between Boolean cognition and p-adic space-time physics. The Stone space character of Galois groups suggests also a deep connection between number theory and cognition, and some arguments providing support for this vision are discussed. </p>
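The Boolean counting condition invoked here - a prime p squeezed between consecutive powers of two, 2<sup>n-1</sup> < p < 2<sup>n</sup> - is easy to check numerically. The following minimal sketch is our own illustration (the function name and example primes are not from the abstract): for an odd prime p, the number n of Boolean statements is just the bit length of p, and for a Mersenne prime M<sub>k</sub>=2<sup>k</sup>-1 one gets n=k.

```python
def boolean_bits(p):
    """Return n such that 2**(n-1) < p < 2**n, for an odd prime p.

    n counts the Boolean statements representable by the pinary digits
    in the p-adic length scale hypothesis picture sketched above.
    """
    n = p.bit_length()
    assert 2 ** (n - 1) < p < 2 ** n
    return n

print(boolean_bits(127))         # 7, since M_7 = 2^7 - 1 = 127
print(boolean_bits(2**127 - 1))  # 127, for the Mersenne prime M_127
```

For Mersenne primes the p-adic topology is thus as close to the 2-adic Stone space topology as possible, which is the point of the correspondence suggested in the abstract.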
[5324] vixra:1111.0088 [pdf]
Could One Generalize Braid Invariant Defined by Vacuum Expectation of Wilson Loop to an Invariant of Braid Cobordisms and of 2-Knots?
<p> Witten was awarded the Fields medal partly for a construction recipe for the Jones polynomial based on a topological QFT assigned to braids and on Chern-Simons action. Recently Witten has been working on an attempt to understand in terms of quantum theory the so-called Khovanov polynomial, associated with a much more abstract link invariant whose interpretation and real understanding still remain open. </p><p> The attempts to understand Witten's thoughts lead to a series of questions unavoidably culminating in the frustrating "Why don't I have the brain of Witten, which would perhaps make it possible to answer these questions?". This one must just accept. In this article I summarize some thoughts inspired by the associations of Witten's talk with quantum TGD and with the model of DNA as a topological quantum computer. In my own childish manner I dare to believe that these associations are interesting, and I dare also to hope that some more brainy individual might take them seriously. </p><p> An idea inspired by the TGD approach, which also a main streamer might find interesting, is that the Jones invariant defined as a vacuum expectation for a Wilson loop in 2+1-D space-time generalizes to a vacuum expectation for a collection of Wilson loops in 2+2-D space-time and could define an invariant for 2-D knots and for cobordisms of braids analogous to the Jones polynomial. As a matter of fact, it turns out that a generalization of the gauge field known as a gerbe is needed, and that in the TGD framework classical color gauge fields would define the gauge potentials of this field. Also topological string theory in 4-D space-time could define this kind of invariants. Of course, it might well be that these kinds of ideas have already been discussed in the literature. </p>
[5325] vixra:1111.0087 [pdf]
Could the Notion of Hyperdeterminant be Useful in TGD Framework?
<p> The vanishing of an ordinary determinant tells that a system of linear equations possesses non-trivial solutions. The hyperdeterminant generalizes this notion to a situation in which one has homogeneous multilinear equations. The notion has applications to the description of quantum entanglement and has stimulated interest in physics blogs. The hyperdeterminant applies to hyper-matrices with n matrix indices defined for an n-fold tensor power of a vector space - or more generally - for a tensor product of vector spaces with varying dimensions. The hyperdeterminant is a polynomial in the entries of the hyper-matrix that vanishes exactly when all partial derivatives of the associated multilinear form vanish at some non-trivial point, which corresponds to a non-trivial solution of the multilinear equations. A simple example is a potential function of n arguments linear in each argument. </p><p> Why the notion of hyperdeterminant - or rather its infinite-dimensional generalization - might be interesting in the TGD framework relates to the quantum criticality of TGD, stating that the TGD Universe involves a fractal hierarchy of criticalities: phase transitions inside phase transitions inside... At the classical level the lowest order criticality means that the extremal of Kähler action possesses non-trivial second variations for which the action is not affected. The system is critical. In the QFT context one speaks about zero modes. The vanishing of the so-called Gaussian (or functional) determinant associated with the second variations is the condition for the existence of critical deformations. In the QFT context this situation corresponds to the presence of zero modes. </p><p> The simplest physical model for a critical system is the cusp catastrophe defined by a potential function V(x) which is a fourth order polynomial. At the edges of the cusp a stable and an unstable extremum of the potential function coincide, and the rank of the matrix of second derivatives of the potential function is reduced. This means the vanishing of its determinant. 
At the tip of the cusp also the third derivative of the potential function vanishes. This situation is, however, not describable in terms of a hyperdeterminant since it is genuinely non-linear rather than only multilinear. </p><p> In complete analogy, one can also consider the vanishing of the n:th variations in the TGD framework as higher order criticality, so that the vanishing of the hyperdeterminant might serve as a criterion for a higher order critical point and for the occurrence of a phase transition. The reason why multilinearity might replace non-linearity in the TGD framework could be non-locality. Multilinearity with respect to imbedding space coordinates at different space-time points would also imply the vanishing of the standard local divergences of quantum field theory, known to be absent in the TGD framework on the basis of very general arguments. In this article an attempt to concretize this idea is made. The challenge is highly non-trivial since even in a finite measurement resolution one must work with an infinite-dimensional system. </p>
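The simplest non-trivial case of the notion used above is Cayley's hyperdeterminant of a 2×2×2 hyper-matrix, which also quantifies three-qubit entanglement (the 3-tangle is proportional to its absolute value). The sketch below is our own illustration, not from the abstract; the formula is Cayley's classical one. It verifies that the hyperdeterminant vanishes for a separable tensor a<sub>ijk</sub>=x<sub>i</sub>y<sub>j</sub>z<sub>k</sub> (no non-trivial criticality) and is non-zero for a GHZ-type tensor.

```python
def cayley_hyperdet(a):
    """Cayley's hyperdeterminant of a 2x2x2 hyper-matrix a[i][j][k].

    Vanishes exactly when the trilinear form sum a_ijk x_i y_j z_k
    has a non-trivial critical point, e.g. for separable tensors.
    """
    A = lambda i, j, k: a[i][j][k]
    d = (A(0,0,0)**2 * A(1,1,1)**2 + A(0,0,1)**2 * A(1,1,0)**2
         + A(0,1,0)**2 * A(1,0,1)**2 + A(1,0,0)**2 * A(0,1,1)**2)
    d -= 2 * (A(0,0,0)*A(0,0,1)*A(1,1,0)*A(1,1,1)
            + A(0,0,0)*A(0,1,0)*A(1,0,1)*A(1,1,1)
            + A(0,0,0)*A(1,0,0)*A(0,1,1)*A(1,1,1)
            + A(0,0,1)*A(0,1,0)*A(1,0,1)*A(1,1,0)
            + A(0,0,1)*A(1,0,0)*A(0,1,1)*A(1,1,0)
            + A(0,1,0)*A(1,0,0)*A(0,1,1)*A(1,0,1))
    d += 4 * (A(0,0,0)*A(0,1,1)*A(1,0,1)*A(1,1,0)
            + A(0,0,1)*A(0,1,0)*A(1,0,0)*A(1,1,1))
    return d

# Separable tensor a_ijk = x_i y_j z_k: hyperdeterminant vanishes.
x, y, z = (1, 2), (3, -1), (2, 5)
sep = [[[x[i]*y[j]*z[k] for k in range(2)] for j in range(2)] for i in range(2)]
print(cayley_hyperdet(sep))  # 0

# GHZ-type tensor (a_000 = a_111 = 1, rest 0): non-vanishing hyperdeterminant.
ghz = [[[1 if (i, j, k) in {(0, 0, 0), (1, 1, 1)} else 0
         for k in range(2)] for j in range(2)] for i in range(2)]
print(cayley_hyperdet(ghz))  # 1
```

The infinite-dimensional generalization contemplated in the abstract would replace the finite index sets by the degrees of freedom of the n:th variations of Kähler action.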
[5326] vixra:1111.0086 [pdf]
Yangian Symmetry, Twistors, and TGD
<p> There have been impressive steps in the understanding of N=4 maximally supersymmetric YM theory possessing 4-D super-conformal symmetry. This theory is related by AdS/CFT duality to a certain string theory in the AdS<sub>5</sub>× S<sup>5</sup> background. A second stringy representation was discovered by Witten and is based on a 6-D Calabi-Yau manifold defined by twistors. The unifying proposal is that the so-called Yangian symmetry is behind the mathematical miracles involved. </p><p> In the following I will discuss briefly the notion of Yangian symmetry and suggest its generalization in the TGD framework by replacing the conformal algebra with appropriate super-conformal algebras. Also a possible realization of the twistor approach and the construction of scattering amplitudes in terms of Yangian invariants defined by Grassmannian integrals is considered in the TGD framework, based on the idea that in zero energy ontology one can represent massive states as bound states of massless particles. There is also a proposal for a physical interpretation of the Cartan algebra of the Yangian algebra, making it possible to understand at the fundamental level how the mass spectrum of n-particle bound states could be understood in terms of the n-local charges of the Yangian algebra. </p><p> Twistors were originally introduced by Penrose to characterize the solutions of Maxwell's equations. Kähler action is Maxwell action for the induced Kähler form of CP<sub>2</sub>. The preferred extremals allow a very concrete interpretation in terms of modes of a massless non-linear field. Both conformally compactified Minkowski space, identifiable as the so-called causal diamond, and CP<sub>2</sub> allow a description in terms of twistors. 
These observations inspire the proposal that a generalization of Witten's twistor string theory, relying on the identification of twistor string world sheets with certain holomorphic surfaces assigned with Feynman diagrams, could allow a formulation of quantum TGD in terms of 3-dimensional holomorphic surfaces of CP<sub>3</sub>× CP<sub>3</sub> mapped to 6-surfaces in the dual CP<sub>3</sub>× CP<sub>3</sub>, which are sphere bundles so that they are projected in a natural manner to 4-D space-time surfaces. Very general physical and mathematical arguments lead to a highly unique proposal for the holomorphic differential equations defining the complex 3-surfaces conjectured to correspond to the preferred extremals of Kähler action. </p>
[5327] vixra:1111.0084 [pdf]
Does the Opera Experiment Reveal a Systematic Error in the Satellite Ephemeris of the Global Positioning System ?
With respect to the speed of light, the speed excess of the neutrinos (7.2 ± 0.6 km/s) measured in the OPERA experiment is observed to be close, if not exactly equal, to two times the orbital velocity of the GPS satellites (≈ 3.9 km/s), strongly suggesting that this anomaly is due to an error made in some of the GPS-based measurements involved in the OPERA experiment. Moreover, when this error is assumed to arise from a systematic error made in the measurements of GPS satellite velocities, the origin of the factor of two becomes obvious. So it seems likely that the OPERA experiment, instead of revealing a new, unexpected and challenging aspect of the physics of neutrinos, has demonstrated that the Global Positioning System still suffers from a rather important error, which remained unnoticed until now, probably as a consequence of its systematic nature.
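The numerical claim can be checked in one line using only the figures quoted in the abstract: twice the GPS orbital velocity is 7.8 km/s, exactly one quoted standard deviation above the measured excess. A minimal arithmetic sketch (our own, with the abstract's numbers):

```python
# Figures quoted in the abstract.
excess, sigma = 7.2, 0.6   # km/s, OPERA neutrino speed excess and its error
v_gps = 3.9                # km/s, approximate GPS satellite orbital velocity

predicted = 2 * v_gps      # 7.8 km/s, the "factor of two" hypothesis
n_sigma = abs(predicted - excess) / sigma

print(f"2*v_gps = {predicted} km/s, deviation = {n_sigma:.1f} sigma")
# 2*v_gps = 7.8 km/s, deviation = 1.0 sigma
```

So the hypothesis is compatible with the measurement at the one-sigma level, which is what "close, if not exactly equal" amounts to quantitatively.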
[5328] vixra:1111.0071 [pdf]
Do We Really Understand the Solar System?
Recent experimental findings have shown that our understanding of the solar system is surprisingly fragmentary. As a matter of fact, so fragmentary that even new physics might find a place in the description of phenomena like the precession of equinoxes and the recent discoveries about the bullet-like shape of the heliosphere and the strong magnetic fields near its boundary, bringing to mind incompressible fluid flow around an obstacle. The TGD inspired model is based on the heuristic idea that stars are like pearls in a necklace defined by long magnetic flux tubes carrying dark matter and a strong magnetic field responsible for dark energy, possibly accompanied by the analog of the solar wind. The heliosphere would be like a bubble in the flow defined by the magnetic field inside the flux tube, inducing its local thickening. A possible interpretation is as a bubble of ordinary and dark matter in the flux tube containing dark energy. This would provide a beautiful overall view about the emergence of stars and their heliospheres as a phase transition transforming dark energy into dark and visible matter. Among other things, the magnetic walls surrounding the solar system would shield it from cosmic rays.
[5329] vixra:1111.0070 [pdf]
TGD Inspired Vision About Entropic Gravitation
<p> Entropic gravity (EG), introduced by Verlinde, has stimulated great interest. One of the most interesting reactions is the commentary of Sabine Hossenfelder. The article of Kobakhidze, relying on experiments supporting the existence of Schrödinger amplitudes of the neutron in the gravitational field of the Earth, develops an argument suggesting that the EG hypothesis, in the form in which it excludes gravitons, is wrong. Indeed, the mere existence of gravitational bound states strongly suggests the existence of transitions between them by graviton emission. The following arguments represent a TGD inspired view about what entropic gravity (EG) could be if one throws out the unnecessary assumptions such as the emerging dimensions and the absence of gravitons. Also the GRT limit of TGD is discussed, leading to rather strong implications concerning the TGD counterparts of blackholes. </p><p> <OL> </p><p> <LI> If one does not believe in TGD, one could start from the idea that stochastic quantization or something analogous to it might imply something analogous to entropic gravity (EG). What is required is the replacement of the path integral with a functional integral. More precisely, one has a functional integral in which the real contribution to the Kähler action of the preferred extremal from the Euclidian regions of the space-time surface to the exponent represents the Kähler function, and the imaginary contribution from the Minkowskian regions serves as a Morse function, so that the counterpart of Morse theory in WCW is obtained in the stationary phase approximation, in accordance with the vision about TGD as an almost topological QFT. The exponent of the Kähler function is the new element making the functional integral well-defined, and the presence of the phase factor gives rise to the interference effects characteristic of quantum field theories, although one does not integrate over all space-time surfaces. 
In zero energy ontology one has, however, pairs of 3-surfaces at the opposite light-like boundaries of CD, so that something very much analogous to a path integral is obtained. </p><p> <LI>Holography requires that everything reduces to the level of 3-metrics and, more generally, to the level of 3-D field configurations. Something like this happens if one can approximate the path integral with the integral over small deformations of the minima of the action. This also happens in completely integrable quantum field theories. </p><p> The basic vision behind quantum TGD is that this approximation is much nearer to reality than the original theory. In other words, holography is realized in the sense that to a given 3-surface the metric of WCW assigns a unique space-time, and this space-time serves as the analog of a Bohr orbit and makes it possible to realize 4-D general coordinate invariance in the space of 3-surfaces, so that the classical theory becomes an exact part of the quantum theory. This point of view will be adopted in the following also in the framework of general relativity, where one considers abstract 4-geometries instead of 4-surfaces: the functional integral should be over 3-geometries, with the definition of the Kähler metric assigning to a given 3-geometry a unique 4-geometry. </p><p> <LI>A powerful constraint is that the functional integral is free of divergences. Both the 4-D path integral and stochastic quantization for gravitation fail in this respect due to the local divergences (in supergravity the situation might be different). The TGD inspired approach, reducing quantum TGD to an almost topological QFT with a Chern-Simons term and a constraint term depending on the metric associated with preferred 3-surfaces, makes it possible to circumvent this difficulty. This picture will be applied to the quantization of GRT, and one could see the resulting theory as a guess for what the GRT limit of TGD could be. 
The first guess that the Kähler function corresponds to Einstein-Maxwell action for this kind of preferred extremal turns out to be correct. An essential and radically new element of TGD is the possibility of space-time regions with Euclidian signature of the induced metric replacing the interiors of blackholes: this element will be assumed also now. The condition that CP<sub>2</sub> represents an extremal of EYM action requires a cosmological constant in the Euclidian regions determined by the constant curvature of CP<sub>2</sub>, and one can ask whether the average value of the cosmological constant over 3-space could correspond to the cosmological constant explaining the accelerating cosmic expansion. </p><p> <LI> Entropic gravity is generalized in the TGD framework so that all interactions are entropic: the reason is that in zero energy ontology (ZEO) the S-matrix is replaced with the M-matrix, defining a square root of thermodynamics in a well-defined sense. </OL> </p>
[5330] vixra:1111.0062 [pdf]
A New Koide Triplet: Strange, Charm, Bottom.
With the negative sign for $\sqrt{m_s}$, the quarks strange, charm and bottom make a Koide tuple. It continues the c-b-t tuple recently found by Rodejohann and Zhang and, more peculiarly, it is quasi-orthogonal to the original charged lepton triplet.
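The Koide relation referred to here states that Q = (m₁+m₂+m₃)/(s₁√m₁+s₂√m₂+s₃√m₃)² = 2/3 for a suitable choice of signs sᵢ = ±1. A sketch of the check (our own; the quark masses are rough present-day MSbar values in MeV, not necessarily those used in the paper, so the quark ratio should only be expected to land near 2/3 within the sizable mass uncertainties):

```python
from math import sqrt

def koide(masses, signs=(1, 1, 1)):
    """Koide ratio Q = sum(m_i) / (sum(s_i * sqrt(m_i)))**2; the relation is Q = 2/3."""
    num = sum(masses)
    den = sum(s * sqrt(m) for s, m in zip(signs, masses)) ** 2
    return num / den

# Charged leptons (approximate PDG masses, MeV): the original Koide triplet.
print(koide((0.511, 105.658, 1776.86)))           # ≈ 0.66666, i.e. 2/3

# s, c, b quarks (rough masses, MeV) with the negative sign for sqrt(m_s):
print(koide((95.0, 1275.0, 4180.0), (-1, 1, 1)))  # ≈ 2/3 within mass uncertainties
```

With these rough inputs the s-c-b ratio comes out within a couple of per cent of 2/3, consistent with the abstract's claim once the quark-mass uncertainties are taken into account.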
[5331] vixra:1111.0057 [pdf]
Is Kähler Action Expressible in Terms of Areas of Minimal Surfaces?
<p> The general form of the ansatz for preferred extremals implies that the Coulombic term in Kähler action vanishes, so that it reduces to 3-dimensional surface terms in accordance with general coordinate invariance and holography. The weak form of electric-magnetic duality in turn reduces this term to Chern-Simons terms. </p><p> The strong form of General Coordinate Invariance implies effective 2-dimensionality (holding true in finite measurement resolution), so that also a strong form of holography emerges. The expectation is that the Chern-Simons terms in turn reduce to 2-dimensional surface terms. </p><p> The only physically interesting possibility is that these 2-D surface terms correspond to the areas of minimal surfaces defined by string world sheets and partonic 2-surfaces appearing in the solution ansatz for the preferred extremals. String world sheets would give Kähler action an imaginary contribution having an interpretation as a Morse function. This contribution would be proportional to their total area and assignable to the Minkowskian regions of the space-time surface. A similar but real string world sheet contribution defining the Kähler function comes from the Euclidian space-time regions and should be equal to the contribution of the partonic 2-surfaces. A natural conjecture is that the absolute values of all three areas are identical: this would realize the duality between string world sheets and partonic 2-surfaces and the duality between Euclidian and Minkowskian space-time regions. </p><p> Zero energy ontology combined with the TGD analog of the large N<sub>c</sub> expansion inspires an educated guess about the coefficient of the minimal surface terms, and a beautiful connection with p-adic physics and with the notion of finite measurement resolution emerges. The 't Hooft coupling λ should be proportional to the p-adic prime p characterizing the particle. 
This means extremely fast convergence of the counterpart of the large N<sub>c</sub> expansion in TGD, since it becomes completely analogous to the pinary expansion of the partition function in p-adic thermodynamics. Also the twistor description and its dual have a nice interpretation in terms of zero energy ontology. This duality permutes massive wormhole contacts, which can be off mass shell, with wormhole throats, which are always massive (also for the internal lines of the generalized Feynman graphs). </p>
[5332] vixra:1111.0056 [pdf]
An Attempt to Understand Preferred Extremals of Kähler Action
<p> There are pressing motivations for understanding the preferred extremals of Kähler action. For instance, the conformal invariance of string models naturally generalizes to a 4-D invariance defined by the quantum Yangian of a quantum affine algebra (Kac-Moody type algebra) characterized by two complex coordinates and therefore explaining naturally the effective 2-dimensionality. One problem is how to assign a complex coordinate with the string world sheet having Minkowskian signature of the metric. One can hope that the understanding of preferred extremals could make it possible to identify two preferred complex coordinates, whose existence is also suggested by the number theoretical vision giving a preferred role for the rational points of partonic 2-surfaces in preferred coordinates. The best one could hope for is a general solution of the field equations in accordance with the hints that TGD is an integrable quantum theory. </p><p> A lot is known about the properties of preferred extremals, and just by trying to integrate all this understanding one might gain new visions. The problem is that all these arguments are heuristic and rely heavily on physical intuition. The following considerations relate to the space-time regions having Minkowskian signature of the induced metric. The attempt to generalize the construction also to Euclidian regions could be very rewarding. Only a humble attempt to combine various ideas into a more coherent picture is in question. </p><p> The core observations and visions are the following. </p><p> <OL> <LI>Hamilton-Jacobi coordinates for M<sup>4</sup> define natural preferred coordinates for a Minkowskian space-time sheet and might make it possible to identify the string world sheets for X<sup>4</sup> as those for M<sup>4</sup>. 
Hamilton-Jacobi coordinates consist of a light-like coordinate m and its dual, defining a local 2-plane M<sup>2</sup>⊂ M<sup>4</sup>, and transversal complex coordinates (w,w*) for a plane E<sup>2</sup><sub>x</sub> orthogonal to M<sup>2</sup><sub>x</sub> at each point of M<sup>4</sup>. Clearly, hyper-complex analyticity and complex analyticity are in question. </p><p> <LI> Space-time sheets allow a slicing by string world sheets (partonic 2-surfaces) labelled by partonic 2-surfaces (string world sheets). </p><p> <LI>The quaternionic planes of octonion space containing a preferred hyper-complex plane are labelled by CP<sub>2</sub>, which might be called CP<sub>2</sub><sup>mod</sup>. The identification CP<sub>2</sub>=CP<sub>2</sub><sup>mod</sup> motivates the notion of M<sup>8</sup>-M<sup>4</sup>× CP<sub>2</sub> duality. It also inspires a concrete solution ansatz assuming the equivalence of two different identifications of the quaternionic tangent space of the space-time sheet and implying that string world sheets can be regarded as strings in the 6-D coset space G<sub>2</sub>/SU(3). The group G<sub>2</sub> of octonion automorphisms has already appeared earlier in the TGD framework. </p><p> <LI>The duality between partonic 2-surfaces and string world sheets in turn suggests that the CP<sub>2</sub>=CP<sub>2</sub><sup>mod</sup> conditions reduce to a string model for partonic 2-surfaces in CP<sub>2</sub>=SU(3)/U(2). The string model in both cases could mean just hypercomplex/complex analyticity for the coordinates of the coset space as functions of the hyper-complex/complex coordinate of the string world sheet/partonic 2-surface. </OL> </p><p> The considerations of this section lead to a revival of an old, very ambitious and very romantic number theoretic idea. <OL> <LI> To begin with, express octonions in the form o=q<sub>1</sub>+Iq<sub>2</sub>, where q<sub>i</sub> is a quaternion and I is an octonionic imaginary unit in the complement of a fixed quaternionic sub-space of octonions. 
Map the preferred coordinates of H=M<sup>4</sup>× CP<sub>2</sub> to an octonionic coordinate, form an arbitrary octonion analytic function having an expansion with real Taylor or Laurent coefficients to avoid problems due to non-commutativity and non-associativity, and map the outcome to a point of H to get a map H→ H. This procedure is nothing but a generalization of the Wick rotation to get an 8-D generalization of an analytic map. </p><p> <LI> Identify the preferred extremals of Kähler action as surfaces obtained by requiring the vanishing of the imaginary part of an octonion analytic function. Partonic 2-surfaces and string world sheets would correspond to commutative sub-manifolds of the space-time surface and of the imbedding space and would emerge naturally. The ends of braid strands at the partonic 2-surface would naturally correspond to the poles of the octonion analytic functions. This would mean a huge generalization of the conformal invariance of string models to octonionic conformal invariance and an exact solution of the field equations of TGD, and presumably of quantum TGD itself. </OL> </p>
[5333] vixra:1111.0055 [pdf]
The Master Formula for the U-Matrix Finally Found?
In zero energy ontology the U-matrix replaces the S-matrix as the fundamental object characterizing the predictions of the theory. The U-matrix is defined between zero energy states, and its orthogonal rows define what I call M-matrices, which are analogous to the thermal S-matrices of thermal QFTs. The M-matrix defines the time-like entanglement coefficients between the positive and negative energy parts of the zero energy state. M-matrices are identifiable as hermitian square roots of density matrices. In this article it is shown that M-matrices form in a natural manner a generalization of a Kac-Moody type algebra acting as symmetries of M-matrices and the U-matrix, and that the space of zero energy states therefore has a Lie algebra structure, so that quantum states act as their own symmetries. The generators of this algebra are multilocal with respect to partonic 2-surfaces, just as Yangian algebras are multilocal with respect to points of Minkowski space, and they therefore define a generalization of the Yangian algebra appearing in the Grassmannian twistor approach to N=4 SUSY.
[5334] vixra:1111.0038 [pdf]
On a Strengthened Hardy-Hilbert's Type Inequality
In this paper, by using the Euler-Maclaurin expansion for the zeta function and estimating the weight function effectively, we derive a strengthening of a Hardy-Hilbert's type inequality proved by W.Y. Zhong. As applications, some particular results are considered.
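For orientation, the classical Hardy-Hilbert inequality that such strengthened versions refine states that Σ<sub>m,n≥1</sub> a<sub>m</sub>b<sub>n</sub>/(m+n) ≤ π (Σ a<sub>m</sub>²)<sup>1/2</sup>(Σ b<sub>n</sub>²)<sup>1/2</sup>, with π the best possible constant. Below is a quick numerical sanity check of this classical form on truncated sequences (our own sketch; it does not reproduce Zhong's sharper weight function):

```python
from math import pi, sqrt

# Truncated check of the classical Hardy-Hilbert inequality:
#   sum_{m,n>=1} a_m b_n / (m+n)  <=  pi * ||a||_2 * ||b||_2,
# applied to the finite sequences a_m = 1/m, b_n = 1/sqrt(n), m,n <= N
# (extending by zeros reduces it to the classical infinite-sequence case).
N = 500
a = [1.0 / m for m in range((1), N + 1)]
b = [1.0 / sqrt(n) for n in range(1, N + 1)]

lhs = sum(a[i] * b[j] / ((i + 1) + (j + 1)) for i in range(N) for j in range(N))
rhs = pi * sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))

print(lhs < rhs)  # True
```

The strengthened inequalities of the kind proved in the paper replace the constant-π bound by a strictly smaller, weight-dependent right-hand side, so they pass the same check with less room to spare.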
[5335] vixra:1111.0032 [pdf]
The Black Hole Catastrophe: A Short Reply to J. J. Sharples
A recent Letter to the Editor (Sharples J. J., Coordinate transformations and metric extension: a rebuttal to the relativistic claims of Stephen J. Crothers, Progress in Physics, v.1, 2010) has analysed a number of my papers. Dr. Sharples has committed errors in both mathematics and physics. His notion that r = 0 in the so-called "Schwarzschild solution" marks the point at the centre of the related manifold is false, as is his related claim that Schwarzschild's actual solution describes a manifold that is extendible. His post hoc introduction of Newtonian concepts and related mathematical expressions into the "Schwarzschild solution" is invalid; for instance, Newtonian two-body relations into what is alleged to be a one-body problem. Each of the objections is treated in turn and its invalidity fully demonstrated. Black hole theory is riddled with contradictions. This article provides definitive proof that black holes do not exist.
[5336] vixra:1111.0021 [pdf]
Do X and Y Mesons Provide Evidence for Color Excited Quarks or Squarks?
This article was motivated by a blog posting in Quantum Diaries with the title "Who ordered that?! An X-traordinary particle?". I learned from it that the spectroscopy of ccbar type mesons is understood except for some troublesome mesons christened with the letters X and Y. X(3872) is the first discovered troublemaker, and what is known about it can be found in the blog posting and also in the Particle Data Tables. The problems are the following. <OL> <LI> These mesons should not be there. </LI> <LI> Their decay widths seem to be narrow taking into account their mass.</LI> <LI> Their decay characteristics are strange: in particular, the kinematically allowed decay to DDbar, which dominates the decays of Ψ(3770) with a branching ratio of 93 per cent, has not been observed, whereas the decay to DDbarπ<sup>0</sup> occurs with a branching fraction >3.2× 10<sup>-3</sup>. Why is the pion needed? </LI> <LI> X(3872) should decay to a photon and a charmonium state in a predictable way but it does not. </LI> </OL> <br/> One of the basic predictions of TGD is that both leptons and quarks should have color excitations. In the case of leptons there is considerable support in the form of carefully buried anomalies: the first ones date from the seventies. But in the case of quarks anomalies of this kind have been lacking. Could these mysterious X:s and Y:s provide the first signatures of the existence of color excited quarks? <br/> <OL> <LI> The first basic objection is that the decay widths of intermediate gauge bosons do not allow new light particles. This objection is encountered already in the model of leptohadrons. The solution is that the light exotic states are possible only if they are dark in the TGD sense, having therefore a non-standard value of Planck constant and behaving as dark matter. The value of Planck constant is only effective and has a purely geometric interpretation in the TGD framework. </LI> <LI> The second basic objection is that light quarks do not seem to have such excitations. The answer is that gluon exchange transforms the exotic quark pair to an ordinary one and vice versa, and considerable mixing of the ordinary and exotic mesons takes place. At low energies, where the color coupling strength becomes very large, this gives rise to a mass squared matrix with a very large non-diagonal component; the second eigenstate of mass squared is a tachyon and therefore drops from the spectrum. For heavy quarks the situation is different and one expects that charmonium states also have exotic counterparts. </LI> <LI> The selection rules can also be understood. The decays to DDbar involve at least two gluon emissions decaying to quark pairs and producing an additional pion, unlike the decays of an ordinary charmonium state, which involve only the emission of a single gluon decaying to a quark pair so that DDbar results. </LI> <LI> The decay of the lightest X to a photon and charmonium is not possible in the lowest order, since at least one gluon exchange is needed to transform the exotic quark pair to an ordinary one. Exotic charmonia can however transform to exotic charmonia. </LI> </OL> Therefore the basic constraints seem to be satisfied. The above arguments apply with minimal modifications also to the squark option, and at this moment I am not able to distinguish between these options. The SUSY option is however favored by the fact that it would explain why SUSY has not been observed at the LHC, in terms of shadronization and subsequent decay to hadrons by gluino exchanges, so that jets plus missing energy would not serve as a signature of SUSY. Note that the decay of a gluon to a dark squark pair would require a phase transition to a dark gluon first.
[5337] vixra:1111.0020 [pdf]
Are Neutrinos Superluminal?
The OPERA collaboration at CERN has reported that neutrinos travelling from CERN to Gran Sasso in Italy move with a super-luminal speed. There exists also earlier evidence for the super-luminality of neutrinos: for instance, the neutrinos from SN1987A arrived a few hours earlier than the photons. The standard model based on tachyonic neutrinos is formally possible but breaks causality and is unable to explain all the results. The TGD based explanation relies on sub-manifold geometry replacing abstract manifold geometry as the space-time geometry. The notion of many-sheeted space-time predicts this kind of effect, plus many other effects for which evidence exists in the form of various anomalies which have not been taken seriously by mainstream theorists. In this article the TGD based model is discussed in some detail.
[5338] vixra:1111.0019 [pdf]
First Evidence for M_89 Hadron Physics
The p-adic length scale hypothesis strongly suggests a fractal hierarchy of copies of hadron physics labelled by Mersenne primes. M<sub>89</sub> hadron physics, whose mass scale relates by a factor of 512 to that of ordinary M<sub>107</sub> hadron physics, was predicted already 15 years ago, but only now has the TeV energy region been reached at the LHC, making it possible to test the prediction. Pions of any hadron physics are produced copiously in hadronic reactions, and their detection is the most probable manner in which the new hadron physics will be discovered, if Nature has realized it. Neutral pions produce monochromatic gamma pairs, whereas heavy charged pions decay to a W boson and a gluon pair or quark pair. The first evidence (or should we say indication) for the existence of M<sub>89</sub> hadron physics has now emerged from CDF, which more than two years ago also provided evidence for the colored excitations of the tau lepton and for leptohadron physics. What CDF has observed is evidence for the production of quark antiquark pairs in association with W bosons, and the following arguments demonstrate that the interpretation in terms of M<sub>89</sub> hadron physics might make sense.
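The quoted factor of 512 between the two mass scales can be checked by quick arithmetic, assuming (as the p-adic length scale hypothesis does) that the mass scale associated with a Mersenne prime M<sub>k</sub> = 2<sup>k</sup> - 1 goes as 2<sup>-k/2</sup>; the variable names below are illustrative:

```python
# Sketch: ratio of M_89 to M_107 hadronic mass scales, assuming the
# p-adic mass scale for Mersenne prime M_k = 2^k - 1 scales as 2^(-k/2).
k_ordinary = 107  # M_107: ordinary hadron physics
k_new = 89        # M_89: conjectured scaled-up copy

ratio = 2 ** ((k_ordinary - k_new) / 2)
print(ratio)  # 512.0
```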
[5339] vixra:1111.0018 [pdf]
Explanation for the Soft Photon Excess in Hadron Production
There is quite a recent article entitled "Study of the Dependence of Direct Soft Photon Production on the Jet Characteristics in Hadronic Z<sup>0</sup> Decays" discussing one particular manifestation of an anomaly of hadron physics known for two decades: the soft photon production rate in hadronic reactions is by an average factor of about four higher than expected. In the article the soft photons are assigned to the decays of Z<sup>0</sup> to quark-antiquark pairs. This anomaly has not reached the attention of particle physicists, which seems to be the fate of anomalies quite generally nowadays: large extra dimensions and blackholes at LHC are much sexier topics of study than the anomalies about which both existing and speculative theories must remain silent. <br/> TGD leads to an explanation of the anomaly in terms of the basic differences between TGD and QCD. <br/> <OL> <LI> The first difference is due to the induced gauge field concept: both classical color gauge fields and the U(1) part of the electromagnetic field are proportional to the induced Kähler form. The second difference is topological field quantization, meaning that electric and magnetic fluxes are associated with flux tubes. Taken together this means that for neutral hadrons color flux tubes and electric flux tubes can be, and will be, assumed to be one and the same thing. In the case of charged hadrons the em flux tubes must connect different hadrons: this is essential for understanding why neutral hadrons seem to contribute much more effectively to the bremsstrahlung than charged hadrons, which is just the opposite of the prediction of the hadronic inner bremsstrahlung model, in which only charged hadrons contribute. Now both sea and valence quarks of neutral hadrons contribute, but in the case of charged hadrons only valence quarks do so. </LI> <LI> Sea quarks of neutral hadrons seem to give the largest contribution to the bremsstrahlung. The p-adic length scale hypothesis, predicting that quarks can appear in several mass scales, represents the third difference, and the experimental findings suggest that sea quarks are by a factor of 1/2 lighter than valence quarks, implying that the bremsstrahlung for a given sea quark is by a factor of 4 more intense than for the corresponding valence quark. </LI> </OL>
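The factor of 4 in the last item follows directly from a 1/m² scaling of the soft-photon intensity; a minimal sketch, assuming that scaling (the function name and unit normalization are illustrative):

```python
# Soft-photon bremsstrahlung intensity is assumed here to scale as 1/m^2
# for a quark of fixed charge, so halving the mass quadruples the intensity.
def relative_intensity(m_sea, m_valence):
    """Intensity of sea-quark bremsstrahlung relative to a valence quark."""
    return (m_valence / m_sea) ** 2

m_valence = 1.0          # arbitrary units
m_sea = 0.5 * m_valence  # sea quarks assumed lighter by a factor 1/2
print(relative_intensity(m_sea, m_valence))  # 4.0
```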
[5340] vixra:1111.0017 [pdf]
The Incredibly Shrinking Proton
The recent discovery that the charge radius of the proton deduced from the quantum average of the nuclear charge density in the muonic version of the hydrogen atom is 4 per cent smaller than the radius deduced from the hydrogen atom challenges either QED or the view about the proton, or both. In the TGD framework topological quantization leads to the notion of field body as a characteristic of any system. The field body is expected to contain substructures with sizes given by at least the primary and secondary p-adic length scales. u and d quarks would have field bodies with a size much larger than the proton itself. In a muonic atom the p-adic size scale of the field body of the u quark, having a mass of 2 MeV according to the latest estimates, would be roughly twice the Bohr radius, so that the anomaly might be understood as a signature of the field body.
[5341] vixra:1111.0016 [pdf]
Could Neutrinos Appear in Several Mass Scales?
There are some indications from neutrino oscillations that neutrinos can appear in several mass scales. These oscillations can be classified into vacuum oscillations and solar neutrino oscillations, believed to be due to the so-called MSW effect in the dense matter of the Sun. There are also indications that the mixing is different for neutrinos and antineutrinos. In the following the possibility that the p-adic length scale hypothesis might explain these findings is discussed.
[5342] vixra:1111.0012 [pdf]
How Did the 17 GeV OPERA Superluminal Neutrino from CERN Arrive at Gran Sasso Without Disintegration? It Was Carried by a Natario Warp Drive. Explanation for the Results Obtained by Glashow-Cohen and Gonzalez-Mestres
Recently superluminal neutrinos have been observed in the OPERA experiment at CERN. Since the neutrino possesses a non-zero rest mass, then according to the Standard Model, Relativity and Lorentz Invariance this superluminal speed would be impossible to achieve. This superluminal OPERA result seems to be confirmed and cannot be explained by errors in the measurements or break-ups in the Standard Model, Relativity or Lorentz Invariance. In order to reconcile the Standard Model, Relativity and Lorentz Invariance with the OPERA superluminal neutrino we propose a different approach. Some years ago Gauthier, Gravel and Melanson introduced the idea of the micro Warp Drive: microscopic particle-sized Warp Bubbles carrying sub-atomic particles inside them at superluminal speeds. These micro Warp Bubbles, according to them, may have formed spontaneously in the Early Universe after the Big Bang, and they used the Alcubierre Warp Drive geometry in their mathematical model. We propose exactly the same idea of Gauthier, Gravel and Melanson to explain the superluminal neutrino at OPERA, however using the Natario Warp Drive geometry. Our point of view can be summarized in the following statement: in a process that modern science still needs to understand, the OPERA experiment generated a micro Natario Warp Bubble around the neutrino that pushed it beyond the light speed barrier. Inside the Warp Bubble the neutrino is below the light speed and no break-ups of the Standard Model, Relativity or Lorentz Invariance occur, but outside the Warp Bubble the neutrino would be seen at superluminal speeds. Remember that the CERN particle accelerators were constructed to reproduce on a laboratory scale the physical conditions we believe may have existed in the Early Universe, so these micro Warp Bubbles generated after the Big Bang may perhaps be re-created or reproduced inside particle accelerators. We believe that our idea borrowed from Gauthier, Gravel and Melanson can explain what really happened with the neutrinos in the OPERA experiment. We also explain here the results obtained by Glashow-Cohen and Gonzalez-Mestres.
[5343] vixra:1111.0011 [pdf]
Question of Planck's Constant in Dark Matter Direct Detection Experiments
Recent astronomical observations have revealed important new clues regarding dark matter's behavior. However, the fact remains that all experimental efforts to detect dark matter directly, in a laboratory setting, have failed. A natural explanation for these failed efforts may be possible by postulating that dark matter's behavior is governed by a non-Planckian "action." It is pointed out, as a preliminary to advancing this possibility, that no purely dark matter measurement of Planck's constant exists. The resulting hypothesis advocates the existence of a new, experimentally verifiable, dark matter candidate. An extension of this hypothesis to the cosmological realm suggests that dark matter may have come into existence 10 to the minus 44 seconds after the big bang; an order of magnitude prior to the Planck era.
[5344] vixra:1111.0009 [pdf]
Programming Relativity and Gravity via a Discrete Pixel Space in Planck Level Simulation Hypothesis Models
Outlined here is a programming approach for use in Planck level simulation hypothesis models. It is based around an expanding (the simulation clock-rate measured in units of Planck time) 4-axis hyper-sphere and mathematical particles that oscillate between an electric wave-state and a mass (unit of Planck mass per unit of Planck time) point-state. Particles are assigned a spin axis which determines the direction in which they are pulled by this (hyper-sphere pilot wave) expansion, thus all particles travel at, and only at, the velocity of expansion (the origin of $c$), however only the particle point-state has definable co-ordinates within the hyper-sphere. Photons are the mechanism of information exchange, as they lack a mass state they can only travel laterally (in hypersphere co-ordinate terms) between particles and so this hypersphere expansion cannot be directly observed, relativity then becomes the mathematics of perspective translating between the absolute (hypersphere) and the relative motion (3D space) co-ordinate systems. A discrete `pixel' lattice geometry is assigned as the gravitational space. Units of $\hbar c$ `physically' link particles into orbital pairs. As these are direct particle to particle links, a gravitational force between macro objects is not required, the gravitational orbit as the sum of these individual orbiting pairs. A 14.6 billion year old hyper-sphere (the sum of Planck black-hole units) has similar parameters to the cosmic microwave background. The Casimir force is a measure of the background radiation density.
[5345] vixra:1110.0068 [pdf]
Why there is no Symmetry in Physical Vacuum Between the Overall Number of Particles and Twin Antiparticles
Physical vacuum can be seen as a turbulent ideal fluid. Particles of matter originate from primordial inclusions of empty space in the fluid. The proton is modeled by a hollow bubble stabilized due to a positive perturbation of the averaged turbulence energy and the accompanying drop of the pressure on the wall of the cavity.<br> The antiproton can be created only in a pair with the proton: extracting from the medium a ball <i>V</i> of the fluid and inserting it into another place in the medium. The intrusion into the medium of the redundant void thus performed requires a huge amount of energy <i>p</i><sub>0</sub><i>V</i>, needed in order to expand the fluid by the volume <i>V</i> against the background pressure <i>p</i><sub>0</sub>. Still, because of the tendency of the free energy of the system to decrease, the redundant void will shortly be canceled to the continuum.<br> Creation of the electron-positron pair requires a relatively small energy ~<i>p</i><sub>0</sub>Δ<i>V</i>, where Δ<i>V</i> << <i>V</i>, which is the work of the elastic deformation of the turbulent medium. The resulting radial stress arising in the turbulent fluid corresponds to the electric field of the elementary charge. The system can be stabilized by merging the small cavity Δ<i>V</i> of the positron with the large bubble <i>V</i> of a neutron.<br> Thus, the total number of protons turns out to be equal to the total number of electrons, where, being a void, the proton should be classified as a particle, and, being an islet of the fluid, the electron should be classified as an antiparticle.<br><br> Key words: physical vacuum, turbulent fluid, cavities, particles, antiparticles.
[5346] vixra:1110.0060 [pdf]
Dirac and Higher-Spin Equations of Negative Energies
It is easy to check that both algebraic equations Det(p̂ - m) = 0 and Det(p̂ + m) = 0 for the 4-spinors u- and v- have solutions with (see paper). The same is true for higher-spin equations. Meanwhile, every book considers p<sub>0</sub> = E<sub>p</sub> only, for both u- and v- spinors of the (see paper) representation, thus applying the Dirac-Feynman-Stueckelberg procedure for the elimination of negative-energy solutions. Recent works of Ziino (and, independently, of several others) show that the Fock space can be doubled. We re-consider this possibility on the quantum field level for both s = 1/2 and higher-spin particles.
[5347] vixra:1110.0057 [pdf]
Do We Need Dark Energy to Explain the Cosmological Acceleration?
We argue that the phenomenon of the cosmological acceleration can be easily and naturally explained from first principles of quantum theory without involving empty space-time background, dark energy and other artificial notions.
[5348] vixra:1110.0052 [pdf]
Superluminal Effect with Oscillating Neutrinos
A simple quantum relativistic model of muon-tau neutrino oscillations in the OPERA experiment is presented. This model suggests that the two components in the neutrino beam are separated in space. Being created in a meson decay, the muon neutrino emerges 18 meters ahead of the beam's center of energy, while the tau neutrino is behind. Both neutrinos have subluminal speeds, however the advanced start of the muon neutrino explains why it arrives in the detector 60 ns earlier than expected. Our model does violate the special-relativistic ban on superluminal signals. However, usual arguments about violation of causality in moving reference frames are not applicable here. The invalidity of standard special-relativistic arguments is related to the inevitable interaction-dependence of the boost operator, which implies that boost-transformed space-time coordinates of events with interacting particles do not obey linear and universal Lorentz formulas.
[5349] vixra:1110.0047 [pdf]
A Comment on Arxiv:1110.2685
This brief paper comments on the article [2]. This article, a preprint, has recently received attention by raising alleged errors related to the timing process within the OPERA Collaboration results in [1]; this turns out to be a route by which serious science should not proceed. A peer-reviewed status should first be obtained before asserting that [2] provides a solution for the superluminal results in [1]. Within [2] there seems to be an intrinsic misconception in its claimed solution, since an intrinsic proper time reasoning leads to the assumption that the OPERA collaboration interprets a time variation as a proper time when correcting time intervals between a GPS frame and the grounded baseline frame. Furthermore, the author of [2] seems to double the radio signals, doubling the alleged half of the truly observed time of flight, since the Lorentz transformations consider radio signals intrinsically by construction.
[5350] vixra:1110.0044 [pdf]
Fermion-Antifermion Asymmetry
An event with positive energy transfers this energy to photons, which carry it to the recorders of observers. Observers know that this event occurs only after it happens. But an event with negative energy would have to absorb this energy from the observers. Consequently, observers would know that this event happens before it happens. Since time is irreversible, only events with positive energy can occur. In single-particle states, events with a fermion have positive energy and events with an antifermion have negative energy. In two-particle states, events with a pair of antifermions have negative energy, while events with a pair of fermions or with a fermion-antifermion pair have positive energy.
[5351] vixra:1110.0033 [pdf]
Can the Natario Warp Drive Explain the OPERA Superluminal Neutrino at CERN?
Recently superluminal neutrinos have been observed in the OPERA experiment at CERN. Since the neutrino possesses a non-zero rest mass, then according to the Standard Model, Relativity and Lorentz Invariance this superluminal speed would be impossible to achieve. This superluminal OPERA result seems to be confirmed and cannot be explained by errors in the measurements or break-ups in the Standard Model, Relativity or Lorentz Invariance. In order to reconcile the Standard Model, Relativity and Lorentz Invariance with the OPERA superluminal neutrino we propose a different approach. Some years ago Gauthier, Gravel and Melanson introduced the idea of the micro Warp Drive: microscopic particle-sized Warp Bubbles carrying sub-atomic particles inside them at superluminal speeds. These micro Warp Bubbles, according to them, may have formed spontaneously in the Early Universe after the Big Bang, and they used the Alcubierre Warp Drive geometry in their mathematical model. We propose exactly the same idea of Gauthier, Gravel and Melanson to explain the superluminal neutrino at OPERA, however using the Natario Warp Drive geometry. Our point of view can be summarized in the following statement: in a process that modern science still needs to understand, the OPERA experiment generated a micro Natario Warp Bubble around the neutrino that pushed it beyond the light speed barrier. Inside the Warp Bubble the neutrino is below the light speed and no break-ups of the Standard Model, Relativity or Lorentz Invariance occur, but outside the Warp Bubble the neutrino would be seen at superluminal speeds. Remember that the CERN particle accelerators were constructed to reproduce on a laboratory scale the physical conditions we believe may have existed in the Early Universe, so these micro Warp Bubbles generated after the Big Bang may perhaps be re-created or reproduced inside particle accelerators. We believe that our idea borrowed from Gauthier, Gravel and Melanson can explain what really happened with the neutrinos in the OPERA experiment.
[5352] vixra:1110.0028 [pdf]
On Superluminal Particles and the Extended Relativity Theories
Superluminal particles are studied within the framework of the Extended Relativity theory in Clifford spaces (C-spaces). In the simplest scenario, it is found that it is the contribution of the Clifford scalar component π of the poly-vector-valued momentum which is responsible for the superluminal behavior in ordinary spacetime, due to the fact that the effective mass M = (see paper) is imaginary (tachyonic). However, from the point of view of C-space, there is no superluminal (tachyonic) behavior because the true physical mass still obeys M<sup>2</sup> > 0. Therefore, there are no violations of the Clifford-extended Lorentz invariance and the extended Relativity principle in C-spaces. Furthermore, to lowest order, there is no contribution of terms involving powers of the Planck mass (1/m<sup>2</sup><sub>P</sub>), indicating that quantum gravitational effects do not play a role at this order. A Born's Reciprocal Relativity theory in Phase Spaces leads to modified dispersion relations involving both coordinates and momenta, whose truncations furnish Lorentz-violating dispersion relations which appear in Finsler Geometry, rainbow-metric models and Double (deformed) Special Relativity. These models also admit superluminal particles. A numerical analysis based on the recent OPERA experimental findings on alleged superluminal muon neutrinos is made. For the average muon neutrino energy of 17 GeV, we find a value π = 119.7 MeV that, coincidentally, is close to the mass of the muon, m<sub>μ</sub> = 105.7 MeV.
[5353] vixra:1110.0026 [pdf]
Instrumentalism Vs. Realism and Social Construction
An important debate in the philosophy of science, whether an instrumentalist or realist view of science correctly characterizes science, is examined in this paper through the lens of a related debate, namely whether science is a social construct or not. The latter debate arose in response to Kuhn's work The Structure of Scientific Revolutions, in which he argued that while there exists a process through which scientific understanding evolves from primitive to increasingly refined ideas, it does not describe progress 'toward' anything. Kuhn's work was then used to argue that there is no such thing as a knowable objective reality, a view much in agreement with that of the instrumentalist. This paper argues that a generalized version of the correspondence principle applied to a theory's domain of validity is an exclusive feature of science which distinguishes it from socially constructed phenomena and thereby supports the realist position. According to this argument, progress in science can be characterized as the replacement of old paradigms by new ones with greater domains of validity which obey the correspondence principle where the two paradigms overlap. This characterization, however, is susceptible to the instrumentalist objection that it does not fit the transition from Aristotelian to Newtonian physics. In response, it is required that this argument depend on the intactness of certain core concepts in the face of experimental challenge within some regions of the theory's original domain of validity. While this requirement saves the argument and even offers an answer to the question of what it would take for our most established theories in physics, relativity and quantum theory, to suffer the same fate as Aristotelian physics, it also defers a conclusive resolution to the debate between instrumentalists and realists until it can be determined whether an ultimate theory of nature can be found.
[5354] vixra:1110.0024 [pdf]
A Dimensional Theory of Quantum Mechanics
Ever since quantum mechanics was first developed, it has been unclear what it really tells us about reality. A novel framework, based on 5 axioms, is presented here which offers an interpretation of quantum mechanics unlike any considered thus far: It is postulated that physical objects can exist in one of two distinct modes, based on whether they have an intrinsic actual spacetime history or not. If they do, their mode of existence is actual and they can be described by classical physics. If they do not, then their mode of existence is called actualizable and they must be described in terms of an equal-weight superposition of all possible actualizable (not actual) histories. The distinction is based on an axiom according to which there exists a limit in which spacetime reduces to a version reduced by one dimension, called areatime, and that objects which merely actualizably exist in spacetime actually exist in areatime. The operational comparison of the passage of time for such objects to the passage of time for a spacetime observer is postulated to be made possible by what is called an angular dual bilateral symmetry. This symmetry can be decomposed into the superposition of two imaginary phase angles of opposite sign. To mathematically describe the spacetime manifestation of objects which actually exist in areatime, each actualizable spacetime history is associated with an actualizable path, which in turn is associated with the imaginary phases. For a single free particle, the complex exponent is identified with a term proportional to its relativistic action, thus recovering the path integral formulation of quantum mechanics. Although based on some highly unfamiliar ideas, this framework appears to render at least some of the usual mysteries connected with quantum mechanics amenable to simple conceptual understanding. 
It also appears to connect the foundations of quantum theory to the foundation of the special theory of relativity while clarifying its relationship to the general theory of relativity and yields a testable prediction about a type of experiment, as yet unperformed, which under the current paradigm is utterly unexpected, namely, that the gravitational field of radiation is zero. The paper concludes with some speculations about how the theory may be extended to a metatheory of nature.
[5355] vixra:1110.0023 [pdf]
EPR Paradox as Evidence for the Emergent Nature of Spacetime
After providing a review of the EPR paradox which draws a distinction between what are here called the locality and the influence paradoxes, this paper presents a qualitative overview of a framework recently introduced by this author in which spacetime is assumed to emerge from areatime. Two key assumptions from this framework allow one to make the notion of quantum effects originating from 'outside' spacetime intelligible. In particular, this framework assumes that until a quantum object is measured, it does not actually exist in spacetime, and that there are connections between quantum particles in areatime which are independent of metric relations in spacetime. These assumptions are then shown to permit one to conceptually understand both the locality and the influence paradoxes, and lead to the overall conclusion that spacetime is emergent in the sense that a very large number of discrete events which correspond to 'measurements' in quantum mechanics aggregate to give rise on a large scale to the apparently smooth reality we experience in our daily lives.
[5356] vixra:1110.0022 [pdf]
Quantum Superposition, Mass and General Relativity
The quantum superposition principle, which expresses the idea that a system can exist simultaneously in two or more mutually exclusive states, is at the heart of the mystery of quantum mechanics. This paper presents an axiom, called the principle of actualizable histories, which naturally leads to the quantum superposition principle. However, in order to be applicable to massive systems, it requires introducing a novel distinction between actualizable and actual mass. By arriving, in conjunction with two previously introduced axioms, at the path integral formulation of quantum mechanics, it is shown that actualizable mass is the central concept of mass in quantum theory, whereas actual mass is the central concept in classical theories, and in particular general relativity. This distinction sharply segregates the domains of validity of the two theories, making it incompatible with any theory of quantum gravity which does not respect this segregation. Finally, an experiment is suggested to test this idea.
[5357] vixra:1110.0021 [pdf]
A Derivation of the Quantum Phase
The quantum phase has profound effects on quantum mechanics but its physical origin is currently unexplained. This paper derives its general form from two physical axioms: 1) in the limit in which space goes to zero, spacetime reduces to a constant quantity of areatime, and 2) the proper time dimensions of areatime and of spacetime are orthogonal but can be compared to each other according to what will here be called an angular dual bilateral symmetry. The mathematical derivation and the explanation of the physical origin of the quantum phase from these two axioms is straightforward and implies that the quantum phase is intimately related to the quantization of spacetime.
[5358] vixra:1110.0020 [pdf]
Ontology and the Wave Function Collapse
This paper makes a case for ontology, the study of existence, to be explicitly and formally incorporated into foundational physics in general and the wave function collapse of quantum mechanics in particular. It introduces a purely ontological distinction between two modes of physical existence, actualizable and actual, into the conventional mathematical representation of the wave function collapse, and examines the implications of doing so, arguing that this may lead to insights that permit one to understand seemingly mysterious aspects of the wave function collapse, such as 'Schrödinger's cat paradox', as well as how quantum theory in general and Einstein's general theory of relativity relate to one another. A specific empirical prediction is given which, if confirmed, may move ontology outside the exclusive purview of philosophy.
[5359] vixra:1110.0019 [pdf]
The Change of Gravitational Potential Energy and Dark Energy in the Zero Energy Universe
Gravity is the force governing the structure of the universe. We estimate the quantities of the components composing the universe through the size of gravity and gravitational potential energy (GPE). In this paper, it is shown that the universe can be born and expanded through pair creation of positive energy (mass) and negative energy (mass) from a zero energy condition. When negative and positive energy exist, GPE is composed of three terms, U++, U--, and U-+; U-+ (the GPE between negative mass and positive mass) has positive values and is the component producing a repulsive gravitational effect. U-+ corresponds to the inner energy of the system and can be interpreted as dark energy. The force from U-+ has the form F = (4πG/3)k_h(t)Mρr = (1/3)Λ(t)Mc^2r. Also, situations in which U-+ has a much higher value than |U--| + |U++|, depending on the distribution of negative and positive mass, are possible. This does not mean that 72.1% of dark energy independently exists, but means that an explanation from the GPE arising from 4.6% of negative energy, equal in magnitude to 4.6% of positive energy, is possible. Moreover, 4.6% of negative energy is the energy which is inevitably required from zero energy, which is the most natural total energy value of the universe. This discovery implies that the belief that the size of the gravitational effect would always correspond 1:1 to the size of the universe's components was wrong. We set up models from the birth of the universe to the present, and calculated the GPE using computer simulation at each stage. As a result, we could verify that the "pair creation model of negative mass and positive mass" explains, in time series, the inflation and decelerating expansion of the early universe and the present accelerating expansion.
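As a minimal illustration of the sign claims in this abstract (not the paper's simulation), the Newtonian potential energy U = -G·m1·m2/r is negative for two like-sign masses (U++, U--) but positive for a positive-negative pair (U-+):

```python
# Illustrative sketch only: sign of the Newtonian gravitational potential
# energy for the three pairings U++, U--, U-+ named in the abstract.
G = 6.674e-11  # gravitational constant, SI units

def gpe(m1, m2, r):
    """Newtonian gravitational potential energy of two point masses."""
    return -G * m1 * m2 / r

m_pos, m_neg, r = 1.0e30, -1.0e30, 1.0e20

U_pp = gpe(m_pos, m_pos, r)  # positive-positive: negative (attractive)
U_mm = gpe(m_neg, m_neg, r)  # negative-negative: also negative
U_pm = gpe(m_pos, m_neg, r)  # positive-negative: positive (repulsive effect)

print(U_pp < 0, U_mm < 0, U_pm > 0)  # True True True
```

The positive sign of the mixed term is what the abstract identifies with a dark-energy-like repulsion; the numerical masses and separation above are arbitrary placeholders.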
[5360] vixra:1110.0014 [pdf]
On Nonzero Photon Mass Within Wave-Particle Duality
The mass of a photon is one of the most intriguing ideas of theoretical physics, and its existence is consistently justified in the light of certain experimental data. In this paper a proposal for explaining the nonzero photon mass within the framework of wave-particle duality is concisely presented. The standard formulation of wave-particle duality is modified by a constant frequency field, which can be interpreted as the Zero-Point Frequency field.
[5361] vixra:1110.0013 [pdf]
Gravitational Waves Versus Cosmological Perturbations: Commentary to Mukhanov's Talk
Recently, at the conference "Quantum Theory and Gravitation" held in Zürich on June 14-24, 2011, V.F. Mukhanov presented a talk, "Massive Gravity", discussing the relationships between massive gravitational waves and cosmological perturbations of the Minkowski background. His crucial result was a modification of the Newtonian potential of universal gravitation by a multiplicative constant equal to 4/3. However, this presentation stirred up my negative opinion. The controversy was caused by the absence of many details, which made the talk manifestly misleading. The lecturer did not respond to my questions satisfactorily. Mukhanov's deductions are at most half-true, and they can easily be verified by straightforward calculations. In this paper I explain briefly what is right and what is wrong in the approach propagated by Mukhanov. In particular, I show that restoration of the Newton law of universal gravitation is unambiguous.
[5362] vixra:1110.0012 [pdf]
Two Charged Scalar Field-Based Mass Generation Mechanisms
Despite the consistency of the Higgs mechanism, experimental data have not revealed the existence of the Higgs particle. Moreover, the Higgs mechanism explains why the photon is massless, while other experimental data reveal a very small but detectable photon mass. In this manner the crucial problem is to combine abstract ideas of the Standard Model with the verified experimental data to obtain a constructive physical picture. In this paper we discuss two alternative consistent mass generation mechanisms which are based on a charged scalar field and the O(2)-symmetric Higgs potential. Both mechanisms, applied to the abelian fields of the Standard Model, lead to a nonzero photon mass, but predict distinguishable masses for the new neutral scalar boson. Both models are similar to the Higgs mechanism. The scenarios are based on the existence of a new scalar neutral boson c and an auxiliary scalar neutral field j which can be interpreted as a dilaton. In the first model the new scalar particle is massive, and the value of its mass can be estimated by the present-day experimental limits on the photon mass. In the second model the dilaton is massless and the new scalar particle has a mass which can be determined only by experimental data. The mass of the photon in this model does not depend on the mass of the Higgs-like particle.
[5363] vixra:1110.0011 [pdf]
Higher Dimensional Quantum Gravity Model
In this paper the constructive and consistent formulation of quantum gravity as a quantum field theory for the case of higher dimensional ADM space-times, which is based on the author's previous works, is presented. The present model contains a certain new contribution which, however, does not change the general idea, which leads to an extraordinarily simple treatment of quantum gravity in terms of fundamental notions of quantum field theory, e.g. the Fock space, quantum correlations, etc. We discuss the way to establish the dimension of space and the model's relation to string theory.
[5364] vixra:1110.0010 [pdf]
Complement to Special Relativity at Superluminal Speeds: CERN Neutrinos Explained
The most recent notifications from the OPERA Collaboration at CERN, Geneva, report the highly probable existence of faster-than-light neutrinos. Such a state of affairs has also been detected earlier in radio galaxies, quasars and recently in microquasars. The usual scenario explaining superluminal speeds is based on a black hole contained in these sources producing the high-speed mass ejection. Superluminal speeds are, however, plainly and efficiently explainable within the framework of Special Relativity, in which the Einstein postulates, the Minkowski energy-momentum space, and both the Poincaré and the Lorentz symmetries remain unchanged, but the energy-momentum relation is deformed. In this paper superluminal deformations of Special Relativity, complementing the Einstein theory at faster-than-light speeds, are studied in the context of CERN neutrinos. For full consistency we propose to apply the non-parallelism hypothesis, the deformation derived ab initio, and the concept of measured speed of light, which can be higher than c. We show that such a theory is able to explain both the superluminal speed and the mass of the neutrino.
[5365] vixra:1110.0005 [pdf]
A Novel Way of 'Understanding' Quantum Mechanics
Written at a level appropriate for an educated lay audience, this paper gives a primarily conceptual overview of a framework recently introduced in reference [3] by this author, which attempts to clarify what quantum mechanics tells us about reality. Physicists may find this paper useful because it focuses on the central ideas of the framework at a conceptual level, thereby lessening their unfamiliarity, an unavoidable feature of truly novel ideas. The author hopes that this article will motivate physicists to seriously evaluate the mathematical details of the framework given in the original reference.
[5366] vixra:1109.0056 [pdf]
On the Neutrino Opera in the CNGS Beam
In this brief paper, we solve the relativistic kinematics of the intersection between a relativistic beam of particles (e.g., neutrinos) and consecutive detectors. Gravitational effects are neglected, but the effect of the Earth's rotation is taken into consideration under a simple approach in which we consider two instantaneous inertial reference frames relative to the fixed stars: on one side, an instantaneous inertial frame having the instantaneous velocity of rotation (about the Earth's axis of rotation) of CERN, the lab frame in which the beam propagates; on the other side, an instantaneous inertial frame having the instantaneous velocity of rotation of the detectors at Gran Sasso, the frame of the detectors. As derived in this paper, Einstein's relativity theory provides a velocity of intersection between the beam and the detectors greater than the velocity of light in empty space, in virtue of the Earth's rotation. We provide a simple calculation of the discrepancy between a correct measure for the experiment and a measure arising from the effect derived in this paper.
[5367] vixra:1109.0052 [pdf]
Rigorous Testing of Fair Sampling Assumption
The fair sampling assumption is used in photonic tests of Bell inequalities. However, rigorous testing of this assumption has yet to be performed. Here it is argued that, without rigorous testing, bias can be introduced that would mask indications of unfair sampling. For the purpose of argument, a local realistic model for polarization-entangled photons is outlined. According to the model, coincidence rate and correlation visibility are complementary.
[5368] vixra:1109.0051 [pdf]
Possible Explanation for Speed of Neutrinos, Faster Than Light
The recent measurement of the speed of muon neutrinos shows that it is possible that the speed of neutrinos is faster than the speed of light. Here an explanation is suggested: that this is a consequence of the Scharnhorst effect. This effect shows that photons propagating through vacuum transform into virtual electron-positron pairs for short intervals, and thus the measured speed of light is really lower than the maximal possible speed of light. Because neutrinos do not transform in this way, the author supposes that their speed is larger than the speed of photons.
[5369] vixra:1109.0039 [pdf]
Scope of Center of Charge in Electrostatics
The notion of the center of an electrostatic charge distribution is introduced. It is then investigated in which problems the notion may be useful. It is seen that in many problems with positive and negative charge contents (for example, image problems) the notion works nicely.
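The abstract does not reproduce the definition; a natural candidate, by analogy with the center of mass, is r_c = Σ qᵢ rᵢ / Σ qᵢ. The sketch below uses this assumed formula, which may differ in detail from the paper's:

```python
# Assumed definition (analogy with center of mass), not necessarily the
# paper's exact one: r_c = sum(q_i * r_i) / sum(q_i).

def center_of_charge(charges, positions):
    """Charge-weighted mean position of a set of point charges."""
    Q = sum(charges)
    if Q == 0:
        raise ValueError("total charge is zero; center of charge undefined")
    return tuple(sum(q * r[k] for q, r in zip(charges, positions)) / Q
                 for k in range(len(positions[0])))

# Charges +2q and -q on the x-axis: the center lies outside the pair,
# as is familiar from image-charge configurations.
print(center_of_charge([2.0, -1.0], [(0.0, 0.0), (1.0, 0.0)]))  # (-1.0, 0.0)
```

Note the definition fails for a net-neutral distribution (a pure dipole), which is one reason its scope has to be investigated problem by problem.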
[5370] vixra:1109.0036 [pdf]
An OpenCL Fast Fourier Transformation Implementation Strategy
This paper describes an implementation strategy in preparation for an implementation of an OpenCL FFT. The two factors (memory bandwidth and locality) most crucial to obtaining high performance on a GPU for an FFT implementation are highlighted. Theoretical upper bounds for performance in terms of the locality factor are derived. An implementation strategy is proposed that takes these factors into consideration so that the resulting implementation has the potential to achieve high performance.
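The abstract does not reproduce its bounds, but a bandwidth-limited lower bound of the general kind it alludes to can be sketched as follows. The model is an illustrative assumption: each radix-r pass is taken to stream the whole array through global memory once in each direction.

```python
import math

def fft_min_time(n, radix, bandwidth_bytes_per_s, bytes_per_elem=8):
    """Bandwidth-only lower bound on FFT runtime, assuming each radix-`radix`
    pass reads and writes the full length-`n` array in global memory.
    Ignores compute throughput and latency entirely."""
    passes = math.ceil(math.log(n, radix))
    bytes_moved = passes * 2 * n * bytes_per_elem  # read + write per pass
    return bytes_moved / bandwidth_bytes_per_s

# A higher radix means fewer passes, hence less global-memory traffic:
# this is the locality lever the abstract highlights.
t_radix2 = fft_min_time(2**20, 2, 150e9)   # 20 passes
t_radix8 = fft_min_time(2**20, 8, 150e9)   # 7 passes
print(t_radix8 < t_radix2)  # True
```

The 150 GB/s bandwidth and the transform size are placeholder numbers; keeping more butterfly stages in local memory per pass is what raises the effective radix in a real kernel.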
[5371] vixra:1109.0010 [pdf]
Noncommutative Complex Scalar Field and Casimir Effect
Using noncommutative deformed canonical commutation relations, a model describing a noncommutative complex scalar field theory is proposed. The noncommutative field equations are solved, and the vacuum energy is calculated to second order in the parameter of noncommutativity. As an application of this model, the Casimir effect, due to the zero-point fluctuations of the noncommutative complex scalar field, is considered. It turns out that in spite of its smallness, the noncommutativity gives rise to a repulsive force at the microscopic level, leading to a modified Casimir potential with a minimum at the point ...
[5372] vixra:1108.0042 [pdf]
Majorana Neutrino: Chirality and Helicity
We introduce the Majorana spinors in the momentum representation. They obey a Dirac-like equation with eight components, which was first introduced by Markov. Thus, the Fock space for the corresponding quantum fields is doubled (as shown by Ziino). Particular attention is paid to the questions of chirality and helicity (two concepts which are frequently confused in the literature) for Dirac and Majorana states.
[5373] vixra:1108.0032 [pdf]
Key Evidence for the Accumulative Model of High Solar Influence on Global Temperature
Here we present three key pieces of empirical evidence for a solar origin of recent and paleoclimate global temperature change, caused by amplification of forcings over time by the accumulation of heat in the ocean. Firstly, variations in global temperature at all time scales are more correlated with the accumulated solar anomaly than with direct solar radiation. Secondly, accumulated solar anomaly and sunspot count fit the global temperature from 1900, including the rapid increase in temperature since 1950, and the flat temperature since the turn of the century. The third, crucial piece of evidence is a 90$^{\circ}$ shift in the phase of the response of temperature to the 11 year solar cycle. These results, together with previous physical justifications, show that the accumulation of solar anomaly is a viable explanation for climate change without recourse to changes in heat-trapping greenhouse gases.
[5374] vixra:1108.0030 [pdf]
Deceleration of Massive Bodies by the Isotropic Graviton Background as a Possible Alternative to Dark Matter
Deceleration of massive bodies by the isotropic graviton background is considered here as a possible alternative to dark matter. This deceleration has the same order of magnitude as a small additional acceleration of NASA deep-space probes.
[5375] vixra:1108.0028 [pdf]
One More Step Towards Generalized Graph-Based Weakly Relational Domains
This paper proposes to extend graph-based weakly relational domains to a generalized relational context. Using a new definition of coherence, we show that the definition of a normal form for this domain is simplified. A transitive closure algorithm for combined relations is constructed and a proof of its correctness is given. Using the observed similarity between transitive closure of a combined relation and the normal form closure of a graph-based weakly relational domain, we extract a mathematical property that a relational abstract domain must satisfy in order to allow us to use an algorithm with the same form as the transitive closure algorithm to compute the normal form of a graph-based weakly relational domain.
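The paper's closure algorithm operates on combined relations in a weakly relational abstract domain; purely as a generic point of reference for what "transitive closure of a relation" means here, a Warshall-style closure of a plain binary relation can be sketched as:

```python
def transitive_closure(pairs):
    """Warshall-style transitive closure of a binary relation given as a set
    of (a, b) pairs. Generic illustration only: the paper's algorithm works
    on combined relations in a graph-based weakly relational domain."""
    closure = set(pairs)
    nodes = {x for p in pairs for x in p}
    for k in nodes:            # pivot node, as in Floyd-Warshall
        for i in nodes:
            for j in nodes:
                if (i, k) in closure and (k, j) in closure:
                    closure.add((i, j))
    return closure

print(sorted(transitive_closure({(1, 2), (2, 3)})))  # [(1, 2), (1, 3), (2, 3)]
```

The paper's observation is that the normal-form closure of a weakly relational domain has the same algorithmic shape as this, once the domain satisfies a suitable coherence property.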
[5376] vixra:1108.0020 [pdf]
Accumulation of Solar Irradiance Anomaly as a Mechanism for Global Temperature Dynamics
Global temperature (GT) changes over the 20th century and glacial-interglacial periods are commonly thought to be dominated by feedbacks, with relatively small direct effects from variation of solar insolation. Here is presented a novel empirical and physically-based auto-regressive AR(1) model, where temperature response is the integral of the magnitude of solar forcing over its duration, and amplification increases with depth in the atmospheric/ocean system. The model explains 76% of the variation in GT from the 1950s by solar heating at a rate of $0.06\pm 0.03K W^{-1}m^{-2}Yr^{-1}$ relative to the solar constant of $1366Wm^{-2}$. Mis-specification of long-equilibrium dynamics by empirical fitting methods (as shown by poor performance on simulated time series) and atmospheric forcing assumptions have likely resulted in underestimation of solar influence. The solar accumulation model is proposed as a credible mechanism for explaining both paleoclimatic temperature variability and present-day warming through high sensitivity to solar irradiance anomaly.
[5377] vixra:1108.0004 [pdf]
On the Dynamics of Global Temperature
In this alternative theory of global temperature dynamics over the annual to the glacial time scales, the accumulation of variations in solar irradiance dominates the dynamics of global temperature change. A straightforward recurrence matrix representation of the atmosphere/surface/deep ocean system models temperature changes by (1) the size of a forcing, (2) its duration (due to accumulation of heat), and (3) the depth of forcing in the atmosphere/surface/deep ocean system (due to increasing mixing losses and increasing intrinsic gain with depth). The model can explain most of the rise in temperature since 1950, and more than 70\% of the variance with correct phase shift of the 11-year solar cycle. Global temperature displays the characteristics of an accumulative system over 6 temporal orders of magnitude, as shown by a linear $f^{-1}$ log-log relationship of frequency to the temperature range, and other statistical relationships such as near random-walk and distribution asymmetry. Over the last century, annual global surface temperature rises or falls $0.063\pm 0.028C/W/m^2$ per year when solar irradiance is greater or less than an equilibrium value of $1366W/m^2$ at top-of-atmosphere. Due to an extremely slow characteristic time scale, the notion of 'equilibrium climate sensitivity' is largely superfluous. The theory does not require a range of distinctive feedback and lag parameters. Mixing losses attenuate the effectiveness of greenhouse gases, and the amplification of solar variations by slow accumulation of heat dominates the dynamics of global temperature at all time-scales.
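The accumulation idea described in this abstract can be sketched as a minimal scalar recurrence. This is an illustrative reduction, not the paper's model, which is a recurrence matrix over the atmosphere/surface/deep-ocean system; the constants below are the values quoted in the abstract.

```python
# Minimal scalar sketch of the accumulation recurrence: temperature moves
# each year in proportion to the solar irradiance anomaly relative to an
# equilibrium value, and the anomaly accumulates rather than relaxing.
K = 0.063      # K per (W/m^2) per year (value quoted in the abstract)
S_EQ = 1366.0  # equilibrium top-of-atmosphere irradiance, W/m^2

def accumulate(irradiance_series, t0=0.0):
    """Integrate the solar anomaly year by year into a temperature series."""
    t = t0
    temps = []
    for s in irradiance_series:
        t += K * (s - S_EQ)
        temps.append(t)
    return temps

# A constant +1 W/m^2 anomaly produces a linear ramp, not a plateau:
print([round(x, 3) for x in accumulate([1367.0] * 3)])  # [0.063, 0.126, 0.189]
```

The ramp under constant forcing is the signature that distinguishes an accumulative (integrating) system from an equilibrium-relaxation one, which is the contrast the abstract draws with 'equilibrium climate sensitivity'.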
[5378] vixra:1107.0057 [pdf]
Theory of Electron
A singularity-free solution of the wave equation for electromagnetic fields is obtained, not in terms of Bessel functions, whose geometrical size is small enough to explain the effects in the structure of matter: the strong and weak interactions, or even new ones. The mathematical calculation, guided by quantum theory, reveals the weak and strong decays and static properties of elementary particles, all of which coincide with experimental data, and a covariant equation incorporating curved space is proposed to explain mass.
[5379] vixra:1107.0054 [pdf]
Introductory Questions About Symbolic Values
Several distinct concepts, including data, memes, and information, are introduced. This manuscript aims to highlight the important role that unconscious physical reality plays in the creation and transmission of symbolic values.
[5380] vixra:1107.0045 [pdf]
A Different Approach to Logic: Absolute Logic
The paper is about 'absolute logic': an approach to logic that differs from standard first-order logic and other known approaches. It is a new approach, created by the author, that aims to provide a general and unifying approach to logic and a faithful model of the human mathematical deductive process. In first-order logic there exist two different concepts, term and formula; in place of these two concepts, our approach has just one notion of expression. In our system the set-builder notation is an expression-building pattern. In our system we can easily express second-order, third-order and any-order conditions. The meaning of a sentence depends solely on the meaning of the symbols it contains; it does not depend on external 'structures'. Our deductive system is based on a very simple definition of proof and provides a good model of the human mathematical deductive process. The soundness and consistency of the system are proved. We also discuss how our system relates to the best-known types of paradoxes; no specific vulnerability to paradoxes emerges from the discussion. The paper provides both the theoretical material and a fully documented example of deduction.
[5381] vixra:1107.0039 [pdf]
Pseudo-Smarandache Functions of First and Second Kind
In this paper we define two kinds of pseudo-Smarandache functions. We have investigated more than fifty terms of each pseudo-Smarandache function. We prove some interesting results and properties of these functions.
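The abstract does not specify the two variants; for orientation, the classical pseudo-Smarandache function Z(n), on which such variants build, is the smallest m with n dividing the triangular number m(m+1)/2, and can be computed directly:

```python
def Z(n):
    """Classical pseudo-Smarandache function: the smallest positive m such
    that n divides 1 + 2 + ... + m = m*(m+1)/2. The paper defines two
    variants of its own, which are not given in the abstract."""
    m = 1
    while (m * (m + 1) // 2) % n != 0:
        m += 1
    return m

print([Z(n) for n in range(1, 9)])  # [1, 3, 2, 7, 4, 3, 6, 15]
```

Investigating "more than fifty terms" of such a function, as the abstract describes, amounts to tabulating it this way and inspecting the resulting sequence for patterns.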
[5382] vixra:1107.0020 [pdf]
Proving the Theorem of Wigner by an Exercise in Simple Geometry
The leading idea of this paper is to prove the theorem of Wigner with concepts and methods inspired by geometry. The exercise mentioned in the title has two functions: On the one hand it can serve as a pedagogical text in order to make the reader acquainted with the essential features of the theorem and its proof. On the other hand it will turn out to be the core of the general proof.
[5383] vixra:1107.0016 [pdf]
A Self-Similar Model of the Universe Unveils the Nature of Dark Energy
This work presents a critical yet previously unnoticed property of the units of some constants, capable of supporting a new, self-similar model of the universe. This model displays a variation of scale with invariance of dimensionless parameters, a characteristic of self-similar phenomena displayed by cosmic data. The model is deduced from two observational results (expansion of space and invariance of constants) and has just one parameter, the Hubble parameter. Somewhat surprisingly, classic physical laws hold both in standard and comoving units, except for a small new term in the angular momentum law that is beyond present possibilities of direct measurement. In spite of having just one parameter, the model is as successful as the ΛCDM model in the classic cosmic tests, and a value of H<sub>0</sub> = 64 kms<sup>-1</sup>Mpc<sup>-1</sup> is obtained from the fitting with supernovae Ia data from the Union compilation. It is shown that in standard units the model corresponds to Big Bang cosmologies, namely to the ΛCDM model, unveiling what dark energy stands for. This scaling (dilation) model is a one-parameter model that seems capable of fitting cosmic data, that does not conflict with fundamental physical laws, and that is not dependent on hypotheses, being straightforwardly deduced from the two observational results mentioned above.
[5384] vixra:1107.0013 [pdf]
Identification of Nature's Rationality via Galaxy NGC 5921
Weakly interacting galaxies present very simple body structure. They are either three-dimensional objects resembling ellipsoids or flat-shaped disks showing spiral disturbance. Elliptical galaxies present little dust and gas, but spiral galaxies demonstrate arms and rings, which are characterized by containing a huge amount of dust and gas. Since arms and rings are linear-shaped, the body structure of spiral galaxies may be a textured one, as earth-bound materials always are. This led to the concept of rational structure, which is based on proportion curves. The proportion curves for normal spiral galaxies are all equiangular spirals which trace or cut through arms consistently. This paper demonstrates the spider-shaped proportion curves for barred spiral galaxies. It shows for the seventh galaxy (NGC 5921) that the curves do trace or cut through arms or rings consistently. More examples of barred galaxies will be studied for the verification of Nature's rationality.
[5385] vixra:1107.0012 [pdf]
Identification of Nature's Rationality Through Galaxy NGC 5701
Weakly interacting galaxies present very simple body structure. They are either three-dimensional objects resembling ellipsoids or flat-shaped disks showing spiral disturbance. Elliptical galaxies present little dust and gas, but spiral galaxies demonstrate arms and rings, which are characterized by containing a huge amount of dust and gas. Since arms and rings are linear-shaped, the body structure of spiral galaxies may be a textured one, as earth-bound materials always are. This led to the concept of rational structure, which is based on proportion curves. The proportion curves for normal spiral galaxies are all equiangular spirals which trace or cut through arms consistently. This paper demonstrates the spider-shaped proportion curves for barred spiral galaxies. It shows for the sixth galaxy (NGC 5701) that the curves do trace or cut through arms or rings consistently. More examples of barred galaxies will be studied for the verification of Nature's rationality.
[5386] vixra:1107.0011 [pdf]
Identification of Nature's Rationality with Galaxy NGC 4930
Weakly interacting galaxies present very simple body structure. They are either three-dimensional objects resembling ellipsoids or flat-shaped disks showing spiral disturbance. Elliptical galaxies present little dust and gas, but spiral galaxies demonstrate arms and rings, which are characterized by containing a huge amount of dust and gas. Since arms and rings are linear-shaped, the body structure of spiral galaxies may be a textured one, as earth-bound materials always are. This led to the concept of rational structure, which is based on proportion curves. The proportion curves for normal spiral galaxies are all equiangular spirals which trace or cut through arms consistently. This paper demonstrates the spider-shaped proportion curves for barred spiral galaxies. It shows for the fifth galaxy (NGC 4930) that the curves do trace or cut through arms or rings consistently. More examples of barred galaxies will be studied for the verification of Nature's rationality.
[5387] vixra:1107.0004 [pdf]
Born's Reciprocal Gravity in Curved Phase-Spaces and the Cosmological Constant
The main features of how to build a Born's Reciprocal Gravitational theory in curved phase-spaces are developed. The scalar curvature of the 8D cotangent bundle (phase space) is explicitly evaluated and a generalized gravitational action in 8D is constructed that yields the observed value of the cosmological constant and the Brans-Dicke-Jordan Gravity action in 4D as two special cases. It is found that the geometry of the momentum space can be linked to the observed value of the cosmological constant when the curvature in momentum space is very large, namely the small size of P is of the order of (1/R<sub>Hubble</sub>). More general 8D actions can be developed that involve sums of 5 distinct types of torsion squared terms and 3 distinct curvature scalars R, P, S. Finally we develop a Born's reciprocal complex gravitational theory as a local gauge theory in 8D of the deformed Quaplectic group that is given by the semi-direct product of U(1, 3) with the deformed (noncommutative) Weyl-Heisenberg group involving four noncommutative coordinates and momenta. The metric is complex with symmetric real components and antisymmetric imaginary ones. An action in 8D involving 2 curvature scalars and torsion squared terms is presented.
[5388] vixra:1106.0065 [pdf]
Identify the Nature's Rationality Through Galaxy NGC 6782
Weakly interacting galaxies present very simple body structure. They are either three-dimensional objects resembling ellipsoids or flat-shaped disks showing spiral disturbance. Elliptical galaxies present little dust and gas, but spiral galaxies demonstrate arms and rings, which are characterized by containing a huge amount of dust and gas. Since arms and rings are linear-shaped, the body structure of spiral galaxies may be a textured one, as earth-bound materials always are. This led to the concept of rational structure, which is based on proportion curves. The proportion curves for normal spiral galaxies are all equiangular spirals which trace or cut through arms consistently. This paper demonstrates the spider-shaped proportion curves for barred spiral galaxies. It shows for the third galaxy NGC 6782 that the curves do trace or cut through arms or rings consistently. More examples of barred galaxies will be studied for the verification of Nature's rationality.
[5389] vixra:1106.0064 [pdf]
Identify the Nature's Rationality via Galaxy NGC 4665
Weakly interacting galaxies present very simple body structure. They are either three-dimensional objects resembling ellipsoids or flat-shaped disks showing spiral disturbance. Elliptical galaxies present little dust and gas, but spiral galaxies demonstrate arms and rings, which are characterized by containing a huge amount of dust and gas. Since arms and rings are linear-shaped, the body structure of spiral galaxies may be a textured one, as earth-bound materials always are. This led to the concept of rational structure, which is based on proportion curves. The proportion curves for normal spiral galaxies are all equiangular spirals which trace or cut through arms consistently. This paper demonstrates the spider-shaped proportion curves for barred spiral galaxies. It shows for the fourth galaxy NGC 4665 that the curves do trace or cut through arms or rings consistently. More examples of barred galaxies will be studied for the verification of Nature's rationality.
[5390] vixra:1106.0053 [pdf]
An Interstellar Position Fixing Method
This paper presents a method to fix a ship's position in charted interstellar space with the assistance of a three-dimensional computer-based stellar chart and star camera spectrometers capable of measuring angular separations between three sets of star pairs. The method offers another tool for the navigator to rely on if alternative position-fixing methods are not available, or if the navigator wishes to verify the validity of a position given by other means.
[5391] vixra:1106.0049 [pdf]
A Note on the Quantization Mechanism Within the Cold Big Bang Cosmology
In my paper [3], I obtain a Cold Big Bang Cosmology fitting the cosmological data, with an absolute zero primordial temperature and a natural cutoff for the cosmological data to a vanishingly small entropy at a singular microstate of a comoving domain of the cosmological fluid. This solution rests on a negative pressure solution of the general relativity field equations and on a postulate regarding a Heisenberg indeterminacy mechanism related to the energy fluctuation obtained from the solution of the field equations under the Robertson-Walker comoving elementary line element, in virtue of the adoption of the Cosmological Principle. In this paper, we show that the positive differential energy fluctuation, purely obtained from the general relativity cosmological solution in [3], leads to the quantum mechanical argument of the postulate in [3], provided this energy fluctuation is quantized, strongly supporting the postulate in [3]. I discuss the postulate in [3], showing that the result for the energy fluctuation follows from a discreteness hypothesis.
[5392] vixra:1106.0047 [pdf]
Comments on the Statistical Nature and on the Irreversibility of the Wave Function Collapse
In a previous preprint, [1], reproduced here in the appendix in its revised version, we were confronted, in order to reach the validity of the second law of thermodynamics for a unique collapse of a unique quantum object, with the necessity of an ensemble of measurements to be accomplished on copies of identical isolated systems. The validity of the second law of thermodynamics within the context of the wave function collapse was sustained by the large number of microstates related to a given collapsed state. Now, we will consider just one pure initial state containing just one initial state of the quantum subsystem, not an ensemble of identically prepared initial quantum subsystems; e.g., just one photon from a very low intensity beam prepared with an equiprobable eigenset containing two elements, a single observation yielding two possible outcomes. Again, we will show that the statistical interpretation must prevail, even though the quantum subsystem is a singular, unique, pure-state element within its unitary quantum subsystem ensemble set. This feature leads to an inherent probabilistic character, even for a pure one-element quantum subsystem object.
[5393] vixra:1106.0045 [pdf]
Identify the Nature's Rationality with Galaxy NGC 4548
Weakly interacting galaxies present very simple body structure. They are either three-dimensional objects resembling ellipsoids or flat-shaped disks showing spiral disturbance. Elliptical galaxies present little dust and gas, but spiral galaxies demonstrate arms and rings, which are characterized by containing a huge amount of dust and gas. Since arms and rings are linear-shaped, the body structure of spiral galaxies may be a textured one, as earth-bound materials always are. This led to the concept of rational structure, which is based on proportion curves. The proportion curves for normal spiral galaxies are all equiangular spirals which trace or cut through arms consistently. This paper demonstrates the spider-shaped proportion curves for barred spiral galaxies. It shows for the second galaxy NGC 4548 that the curves do trace or cut through arms or rings consistently. More examples of barred galaxies will be studied for the verification of Nature's rationality.
[5394] vixra:1106.0041 [pdf]
A Lagrangian Which Models Lambda CDM Cosmology and Explains the Null Results of Direct Detection Efforts
The purpose of this paper is to reconcile observations of dark matter effects on the galactic and cosmological scales with the null results of astroparticle physics observations such as CDMS and ANTARES. This paper also provides a candidate unified and simpler mathematical formulation for the Lambda CDM model. Unification is achieved by combining the f(R) approach with the standard LCDM approach and inflationary models. It is postulated that dark matter-energy fields depend on the Ricci curvature R. Standard methods of classical and quantum field theory on curved space-time are applied. When this model is treated as a quantum field theory in curved space-time, the dark matter-dark matter fermion annihilation cross section grows as the square of the Ricci scalar. It is proposed and mathematically demonstrated that in this model dark matter particles could have shorter lifetimes in regions of relatively strong gravity, such as near the Sun, near the Earth, or near any other large mass. The unexpected difficulties in directly observing fermionic dark matter particles in Earth-based observatories are explained by this theory. The gravitational fields of the Sun and Earth may affect them in ways the standard WIMP models would never predict.
[5395] vixra:1106.0040 [pdf]
Identifying Nature's Rationality with Galaxy NGC 3275
Weakly interacting galaxies present very simple body structures. They are either three-dimensional objects resembling ellipsoids or flat disks showing spiral disturbance. Elliptical galaxies present little dust and gas, while spiral galaxies display arms and rings characterized by a huge amount of dust and gas. Since arms and rings are linear in shape, the body structure of spiral galaxies may be textured, as earth-bound materials always are. This leads to the concept of rational structure, which is based on proportion curves. The proportion curves for normal spiral galaxies are all equiangular spirals which trace or cut through arms consistently. This paper demonstrates the spider-shaped proportion curves for barred spiral galaxies. It shows for the galaxy NGC 3275 that the curves do trace or cut through arms or rings consistently. More examples of barred galaxies will be studied to test Nature's rationality.
[5396] vixra:1106.0033 [pdf]
Towards a Group Theoretical Model for Algorithm Optimization
This paper proposes to use a group theoretical model for the optimization of algorithms. We first investigate some of the fundamental properties that are required in order to allow the optimization of parallelism and communication. Next, we explore how a group theoretical model of computations can satisfy these requirements. As an application example, we demonstrate how this group theoretical model can uncover new optimization possibilities in the polyhedral model.
[5397] vixra:1106.0029 [pdf]
Foundations of a Theory of Quantum Gravity
This adventure started out as a paper, but it soon grew considerably in size and there was no choice left but to present it as a full-blown book, written in a style intermediate between that of an original research paper and that of a book. More precisely, I opted for a style somewhere between the historical and axiomatic approaches, and this manuscript can therefore be read from different perspectives depending upon the knowledge and skills of the reader. Since quantum gravity is more than a technical problem, the mandatory sections constitute the introduction as well as the technical and axiomatic framework of sections seven through eleven. However, the reader who is also interested in the philosophical aspects, as well as a general overview of the problem, is advised to study sections two and three as well. The critical reader who is not willing to take any statement for granted should also include sections four through six, since these are somewhat transitional in nature, closing the gap between the conservative initial point of view and the new theory developed later on. Lecturing about this work made me aware that there is also a more direct way to arrive in Rome, and for that very reason this introduction is split into two parts. The first takes the conservative point of view held by the very large majority of researchers, which necessitates a careful and precise way of phrasing the content; the second approach is more bold and direct but goes, in my humble opinion, much more economically to the heart of the matter. I believe that presenting the same material in these two ways in the introduction will allow the reader to choose which way he prefers to follow.
[5398] vixra:1106.0023 [pdf]
A Lattice Model for the Optimization of Communication in Parallel Algorithms
This paper describes a unified model for the optimization of communication in parallel algorithms and architectures. Based on a property that provides a unified view of locality in space and time, an algorithm is constructed that generates a parallel architecture optimized for communication for a given computation. The optimization algorithm is constructed using the lattice-algebraic properties of congruence relations and is therefore applicable in a general context. An application to a bioinformatics algorithm demonstrates the value of the model and optimization algorithm.
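As a loose illustration of the congruence-based partitioning the abstract alludes to (a minimal sketch under our own assumptions, not the paper's actual algorithm), iterations of a loop can be grouped into congruence classes modulo m; iterations in one class can then be mapped to one processor so that stride-m data reuse stays local:

```python
# Illustrative sketch only (hypothetical helper, not the paper's method):
# partition a 1-D iteration space into congruence classes modulo m.
# Iterations congruent mod m form one block; mapping each block to one
# processor keeps stride-m accesses local to that processor.

def congruence_partition(n_iterations, m):
    """Return m classes; class r holds every iteration i with i % m == r."""
    classes = [[] for _ in range(m)]
    for i in range(n_iterations):
        classes[i % m].append(i)
    return classes

parts = congruence_partition(10, 3)
# parts[0] == [0, 3, 6, 9]; parts[1] == [1, 4, 7]; parts[2] == [2, 5, 8]
```

The congruence relation i ≡ j (mod m) is what makes the partition well defined: it is reflexive, symmetric and transitive, so the classes are disjoint and cover the whole iteration space.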
[5399] vixra:1106.0022 [pdf]
Parallelisation with a Grid Abstraction
This paper describes a new technique for automatic parallelisation in the Z-polyhedral model. The presented technique is applicable to arbitrarily nested loop nests with iteration spaces that can be represented as unions of Z-polyhedra and affine modular data-access functions. The technique partitions both the iteration and data spaces of the computation. The maximal amount of parallelism that can be represented using grid partitions is extracted.
[5400] vixra:1106.0006 [pdf]
How Electrons Consist of Electromagnetic Waves
In this paper we investigate the connection between electrons and electromagnetic waves. We then propose how electrons could consist of electromagnetic waves. From this proposal we explain why electron-positron annihilation results in only gamma rays being formed, as well as how gamma rays can form electron-positron pairs.
[5401] vixra:1106.0005 [pdf]
An Experiment to Determine Whether Electromagnetic Waves have Mass
In previous papers we have proposed that the mass of an electromagnetic wave is dependent upon its speed. This relationship is such that when the wave is travelling at the speed of light it has no mass, but its mass monotonically increases as the wave slows down (i.e. the wave has mass when it is passing through a medium). In this paper we propose an experiment that would be able to determine whether electromagnetic waves have mass when they are not travelling at the speed of light. The experiment would also be able to determine whether the wave's frequency affects the wave's mass for a given speed.
[5402] vixra:1106.0004 [pdf]
Comments on the Entropy of the Wave Function Collapse
Academically, among students, an apparent paradox may arise when one tries to interpret the second law of thermodynamics within the context of the quantum mechanical wave function collapse. This is so because a quantum mechanical system suddenly seems to pass from a less restrictive state, constructed from a superposition of eigenstates of a given operator, to a more restrictive one: the collapsed state. This paper is intended to show how this picture turns out to be a misconception and, albeit briefly, to further discuss the scope of Max Born's probabilistic interpretation within the second law of thermodynamics.
[5403] vixra:1106.0001 [pdf]
Inconsistency of the Beckwith Entropy Formula
In my recent paper [1] published by Prespacetime Journal I discussed certain consequences of the entropy formula presented by A.W. Beckwith and his coauthors [2]. The main results of the deductions were bonons and the inflaton constant. However, I now consider the Beckwith entropy formula to be wrong, and deductions based on this relation can therefore be at most half-true. In this brief paper the right way to deduce the entropy formula is concisely discussed, the results obtained previously are revised, and certain new results are presented.
[5404] vixra:1105.0042 [pdf]
What is Mass?
In this paper we investigate the connection between energy and mass. From this we propose that mass is "generated" when a volume of space contains a sufficient amount of localised energy. We then show how this definition explains various phenomena, for example why mass increases with velocity.
[5405] vixra:1105.0041 [pdf]
Mass of an Electromagnetic Wave
In this paper we investigate whether an electromagnetic wave can have mass, whilst also still having a maximum velocity equal to the speed of light. We find that its mass is inversely proportional to its velocity, such that it has no mass when travelling at the speed of light. This proportionality may also help explain the duality of light.
[5406] vixra:1105.0027 [pdf]
An Effective Temperature for Black Holes
The physical interpretation of a black hole's quasinormal modes is fundamental for realizing a unitary quantum gravity theory, as black holes are considered theoretical laboratories for testing models of such an ultimate theory and their quasinormal modes are natural candidates for an interpretation in terms of quantum levels. The spectrum of a black hole's quasinormal modes can be re-analysed by introducing a black hole effective temperature which takes into account the fact that, as shown by Parikh and Wilczek, the radiation spectrum cannot be strictly thermal. This issue changes in a fundamental way the physical understanding of such a spectrum and enables a re-examination of various results in the literature, yielding important modifications to the quantum physics of black holes. In particular, the formula for the quantization of the horizon area and the number of quanta of area are modified, becoming functions of the quantum "overtone" number n. Consequently, the famous Bekenstein-Hawking entropy formula, its sub-leading corrections and the number of microstates are also modified. Black hole entropy becomes a function of the quantum overtone number too. We emphasize that this is the first time that black hole entropy is directly connected with a quantum number. Previous results in the literature are re-obtained in the limit n → ∞.
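For context, the classical Bekenstein-Hawking formula that the abstract says acquires modifications relates the entropy to the horizon area A (this is the standard textbook result, not the paper's modified, n-dependent version):

```latex
S_{BH} \;=\; \frac{k_B\, c^{3} A}{4\, \hbar\, G}
```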
[5407] vixra:1105.0024 [pdf]
A Note on the Gravity Screening in Quantum Systems
We discuss how, in the theoretical scenario presented in [1], the gravity screening and the gravity impulse which seem to be produced under certain conditions by high temperature superconductors are expected to be an entropic response to the flow of part of the system into a deeper quantum regime.
[5408] vixra:1105.0021 [pdf]
Ternary Octonionic Gauge Field Theories
A ternary gauge field theory is explicitly constructed based on a totally antisymmetric ternary-bracket structure associated with a 3-Lie algebra. It is shown that the ternary infinitesimal gauge transformations do obey the key closure relations [δ<sub>1</sub>, δ<sub>2</sub>] = δ<sub>3</sub>. Invariant actions for the 3-Lie-algebra-valued gauge fields and scalar fields are displayed. We analyze and point out the difficulties in formulating a nonassociative octonionic ternary gauge field theory based on a ternary bracket associated with the octonion algebra and defined earlier by Yamazaki. It is shown that a Yang-Mills-like quadratic action is invariant under global (rigid) transformations involving the Yamazaki ternary octonionic bracket, and that there is closure of these global (rigid) transformations based on constant antisymmetric parameters Λ<sup>ab</sup> = -Λ<sup>ba</sup>. Promoting the latter parameters to spacetime-dependent ones Λ<sup>ab</sup>(x<sup>μ</sup>) allows one to build an octonionic ternary gauge field theory when one imposes gauge-covariant constraints on the latter gauge parameters, leading to field-dependent gauge parameters and nonlinear gauge transformations. In this fashion one does not spoil the gauge invariance of the quadratic action under this restricted set of gauge transformations, which are tantamount to spacetime-dependent scalings (homothecy) of the gauge fields.
[5409] vixra:1105.0015 [pdf]
On Time: Trying to Go Beyond Endless Confusions... Comment on the Paper arXiv:0903.3489
It is mentioned that in physics, much like in everyday life, we are vitally interested in certain abstract concepts, such as geometry, number, time, or for that matter, monetary value. And contrary to usual views, we can never really know what such abstract concepts are. Instead, all that we may know are specific models of such concepts. This state of affairs has direct relevance to the long-ongoing disputes related to time in physics. In particular, the paper points out the exaggeration in claims according to which "time as an independent concept has no place in physics".
[5410] vixra:1105.0013 [pdf]
On Octonionic Nonassociative Ternary Gauge Field Theories
A novel (to our knowledge) nonassociative octonionic ternary gauge field theory is explicitly constructed based on a ternary-bracket structure involving the octonion algebra. The ternary bracket was defined earlier by Yamazaki. The antisymmetric rank-two field strength F<sub>μν</sub> is defined in terms of the ternary bracket (... see paper) involving an auxiliary octonionic-valued coupling (...). The ternary bracket cannot be rewritten in terms of 2-brackets, [A,B,C] ≠ 1/4[[A,B],C]. It is found that gauge-invariant matter kinetic terms for an octonionic-valued scalar field can be introduced in the action if one starts instead with an octonionic-valued rank-three antisymmetric field strength (...) permutations, which is defined in terms of an antisymmetric tensor field of rank two (...) and (...). We conclude with some preliminary steps towards the construction of generalized ternary gauge field theories involving both 3-Lie algebras and octonions.
[5411] vixra:1105.0009 [pdf]
Why the Colombeau Algebras Cannot Handle Arbitrary Lie Groups?
It is briefly shown that, due to the growth conditions in their definition, the Colombeau algebras cannot handle arbitrary Lie groups, and in particular cannot allow the formulation, let alone the solution, of Hilbert's Fifth Problem.
[5412] vixra:1105.0007 [pdf]
Why the Colombeau Algebras Cannot Formulate, Let Alone Prove, the Global Cauchy-Kovalevskaia Theorem?
It is briefly shown that, due to the growth conditions in their definition, the Colombeau algebras cannot handle arbitrary analytic nonlinear PDEs, and in particular cannot allow the formulation, let alone the proof, of the global Cauchy-Kovalevskaia theorem.
[5413] vixra:1105.0006 [pdf]
The Local-Nonlocal Dichotomy Is but a Relative and Local View Point
As argued earlier elsewhere, what the Geometric Straight Line, or in short the GSL, is we shall never know; instead, we can only deal with various mathematical models of it. The so-called standard model, given by the usual linearly ordered field R of real numbers, is essentially based on the ancient Egyptian assumption of the Archimedean Axiom, which has no known reason to be assumed in modern physics. Setting aside this axiom, a variety of linearly ordered fields F<sub>U</sub> becomes available for the mathematical modelling of the GSL. These fields, which are larger than R, have a rich self-similar structure due to the presence of infinitely small and infinitely large numbers. One of the consequences is the obviously relative and local nature of the long-ongoing local-versus-nonlocal dichotomy, which still keeps having foundational implications in quantum mechanics.
[5414] vixra:1104.0082 [pdf]
Experiments on Electron Bremsstrahlung When Passing Through Narrow Slits and Their Interpretation in Terms of Inverse Photoelectric Effect
In special experiments on slowing down soft electrons from the energy <i>E</i><sub>1</sub> at the entry of a narrow slit down to <i>E</i><sub>2</sub><<i>E</i><sub>1</sub> at the exit, it was concluded that the source of the retardation radiation with the energy Δ<i>E</i><sub>12</sub>=<i>E</i><sub>1</sub>–<i>E</i><sub>2</sub> in the opening of the narrow slit is not the passing electrons, but radiation due to the inverse photoelectric effect of valence electrons in the stationary structure of the edge of the hole. Here we consider only the central-axial flight of electrons through a narrow slit (of width <0.2 μm), which generates light quanta with the energy Δ<i>E</i><sub>12</sub>. If, with the aid of external electrodes inside a wider slit (>2 μm), one creates a field with the same retardation potential φ=Δ<i>E</i><sub>12</sub>, then despite the same slowing down of central-axial passing electrons, no emission of light quanta with the energy Δ<i>E</i><sub>12</sub> is observed. This enables us to interpret in a different way the mechanism of induced radiation of matter under quantum transitions of particles in it. It appears that the passing electrons excite around themselves spherical zones of nonlinearity with radius ∼0.2 μm. The orbitals (with energies <i>E</i><sub>1</sub> and <i>E</i><sub>2</sub><<i>E</i><sub>1</sub>) of stationary valence electrons in the edge of the narrow orifice (of width <0.2 μm), falling within these zones, give, in accord with the Ritz combination rule, from the difference of the terms ν<sub>1</sub>=<i>E</i><sub>1</sub>/<i>h</i> and ν<sub>2</sub>=<i>E</i><sub>2</sub>/<i>h</i>, the monochromatic radiation of frequency ν<sub>12</sub>=ν<sub>1</sub>–ν<sub>2</sub> observed in the experiments. The passage of central-axial electrons through wider gaps (>2 μm) is not affected by the nonlinearity zones of the orbitals of stationary valence electrons in the edge of the slit. Thence, despite the dragging by the external field of the diaphragm φ=Δ<i>E</i><sub>12</sub>, in this case the passing electrons do not radiate at the frequency ν<sub>12</sub>=Δ<i>E</i><sub>12</sub>/<i>h</i>.
[5415] vixra:1104.0079 [pdf]
A Multi-Space Model for Chinese Bids Evaluation with Analyzing
A tendering is a negotiating process for a contract, whereby a tenderer issues an invitation, bidders submit bidding documents and the tenderer accepts a bid by sending out a notification of award. As a useful way of purchasing, there are many norms and rules for it in the purchasing guides of the World Bank, the Asian Development Bank, ..., and also in the contract conditions of various consultant associations. In China, there is a law and regulation system for tendering and bidding. However, few works on the mathematical model of a tendering and its evaluation can be found in publication. The main purpose of this paper is to construct a Smarandache multi-space model for a tendering, establish an evaluation system for bidding based on the ideas in references [7] and [8], and analyze its solution by applying the decision approach for multiple objectives and value engineering. Open problems for pseudo-multi-spaces are also presented in the final section.
[5416] vixra:1104.0078 [pdf]
Smarandache Multi-Space Theory(IV)
A Smarandache multi-space is a union of n different spaces equipped with some different structures for an integer n ≥ 2, and can be used both for discrete and for connected spaces, particularly for geometries and spacetimes in theoretical physics. This monograph concentrates on characterizing various multi-spaces and comprises three parts altogether. The first part is on algebraic multi-spaces with structures, such as those of multi-groups, multi-rings, multi-vector spaces, multi-metric spaces, multi-operation systems and multi-manifolds, also multi-voltage graphs, multi-embeddings of a graph in an n-manifold, ..., etc. The second discusses Smarandache geometries, including those of map geometries, planar map geometries and pseudo-plane geometries, in which Finsler geometry, and particularly Riemann geometry, appears as a special case of these Smarandache geometries. The third part of this book considers the applications of multi-spaces to theoretical physics, including relativity theory, M-theory and cosmology. Multi-space models for p-branes and the cosmos are constructed and some questions in cosmology are clarified by multi-spaces. The first two parts can be read relatively independently, and in each part open problems are included for further research by interested readers (part IV)
[5417] vixra:1104.0077 [pdf]
Smarandache Multi-Space Theory(III)
A Smarandache multi-space is a union of n different spaces equipped with some different structures for an integer n ≥ 2, and can be used both for discrete and for connected spaces, particularly for geometries and spacetimes in theoretical physics. This monograph concentrates on characterizing various multi-spaces and comprises three parts altogether. The first part is on algebraic multi-spaces with structures, such as those of multi-groups, multi-rings, multi-vector spaces, multi-metric spaces, multi-operation systems and multi-manifolds, also multi-voltage graphs, multi-embeddings of a graph in an n-manifold, ..., etc. The second discusses Smarandache geometries, including those of map geometries, planar map geometries and pseudo-plane geometries, in which Finsler geometry, and particularly Riemann geometry, appears as a special case of these Smarandache geometries. The third part of this book considers the applications of multi-spaces to theoretical physics, including relativity theory, M-theory and cosmology. Multi-space models for p-branes and the cosmos are constructed and some questions in cosmology are clarified by multi-spaces. The first two parts can be read relatively independently, and in each part open problems are included for further research by interested readers (part III)
[5418] vixra:1104.0076 [pdf]
Smarandache Multi-Space Theory(II)
A Smarandache multi-space is a union of n different spaces equipped with some different structures for an integer n ≥ 2, and can be used both for discrete and for connected spaces, particularly for geometries and spacetimes in theoretical physics. This monograph concentrates on characterizing various multi-spaces and comprises three parts altogether. The first part is on algebraic multi-spaces with structures, such as those of multi-groups, multi-rings, multi-vector spaces, multi-metric spaces, multi-operation systems and multi-manifolds, also multi-voltage graphs, multi-embeddings of a graph in an n-manifold, ..., etc. The second discusses Smarandache geometries, including those of map geometries, planar map geometries and pseudo-plane geometries, in which Finsler geometry, and particularly Riemann geometry, appears as a special case of these Smarandache geometries. The third part of this book considers the applications of multi-spaces to theoretical physics, including relativity theory, M-theory and cosmology. Multi-space models for p-branes and the cosmos are constructed and some questions in cosmology are clarified by multi-spaces. The first two parts can be read relatively independently, and in each part open problems are included for further research by interested readers (part II).
[5419] vixra:1104.0075 [pdf]
Smarandache Multi-Space Theory(I)
A Smarandache multi-space is a union of n different spaces equipped with some different structures for an integer n ≥ 2, and can be used both for discrete and for connected spaces, particularly for geometries and spacetimes in theoretical physics. This monograph concentrates on characterizing various multi-spaces and comprises three parts altogether. The first part is on algebraic multi-spaces with structures, such as those of multi-groups, multi-rings, multi-vector spaces, multi-metric spaces, multi-operation systems and multi-manifolds, also multi-voltage graphs, multi-embeddings of a graph in an n-manifold, ..., etc. The second discusses Smarandache geometries, including those of map geometries, planar map geometries and pseudo-plane geometries, in which Finsler geometry, and particularly Riemann geometry, appears as a special case of these Smarandache geometries. The third part of this book considers the applications of multi-spaces to theoretical physics, including relativity theory, M-theory and cosmology. Multi-space models for p-branes and the cosmos are constructed and some questions in cosmology are clarified by multi-spaces. The first two parts can be read relatively independently, and in each part open problems are included for further research by interested readers (part I).
[5420] vixra:1104.0074 [pdf]
On Multi-Metric Spaces
A Smarandache multi-space is a union of n spaces A1, A2, ..., An with some additional conditions holding. Combining Smarandache multi-spaces with classical metric spaces, the concept of a multi-metric space is introduced. Some characteristics of a multi-metric space are obtained and Banach's fixed-point theorem is generalized in this paper.
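For context, the classical result being generalized is Banach's fixed-point theorem; its standard statement (independent of the multi-metric construction) is:

```latex
\textbf{Banach's fixed-point theorem.} Let $(X,d)$ be a complete metric
space and $T\colon X \to X$ a contraction, i.e.
\[
  d(Tx,\,Ty) \;\le\; q\, d(x,\,y) \quad \text{for some } 0 \le q < 1
  \text{ and all } x, y \in X.
\]
Then $T$ has a unique fixed point $x^{*}$ with $Tx^{*}=x^{*}$, and for any
$x_0 \in X$ the iterates $x_{n+1} = Tx_n$ converge to $x^{*}$, with
\[
  d(x_n,\, x^{*}) \;\le\; \frac{q^{\,n}}{1-q}\, d(x_1,\, x_0).
\]
```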
[5421] vixra:1104.0073 [pdf]
On Algebraic Multi-Vector Spaces
A Smarandache multi-space is a union of n spaces A1, A2, ..., An with some additional conditions holding. Combining Smarandache multi-spaces with linear vector spaces in classical linear algebra, the concept of multi-vector spaces is introduced. Some characteristics of a multi-vector space are obtained in this paper.
[5422] vixra:1104.0072 [pdf]
On Algebraic Multi-Ring Spaces
A Smarandache multi-space is a union of n spaces A1, A2, ..., An with some additional conditions holding. Combining Smarandache multi-spaces with rings in classical ring theory, the concept of multi-ring spaces is introduced. Some characteristics of a multi-ring space are obtained in this paper.
[5423] vixra:1104.0071 [pdf]
On Algebraic Multi-Group Spaces
A Smarandache multi-space is a union of n spaces A1, A2, ..., An with some additional conditions holding. Combining the classical concept of a group with Smarandache multi-spaces, the concept of a multi-group space is introduced in this paper, which is a generalization of classical algebraic structures such as the group, field, body, ..., etc. Similar to groups, some characteristics of a multi-group space are obtained in this paper.
[5424] vixra:1104.0069 [pdf]
A Generalization of Stokes Theorem on Combinatorial Manifolds
For an integer m > 1, a combinatorial manifold M̃ is defined to be a geometrical object M̃ such that for (...) there is a local chart (see paper), where Bnij is an nij-ball for integers 1 < j < s(p) < m. Integral theory on these smoothly combinatorial manifolds is introduced. Some classical results, such as Stokes' theorem and Gauss' theorem, are generalized to smoothly combinatorial manifolds in this paper.
[5425] vixra:1104.0068 [pdf]
Geometrical Theory on Combinatorial Manifolds
For an integer m ≥ 1, a combinatorial manifold M̃ is defined to be a geometrical object M̃ such that for (...), there is a local chart (see paper), where Bnij is an nij-ball for integers 1 ≤ j ≤ s(p) ≤ m. Topological and differential structures such as those of d-pathwise connectedness, homotopy classes and fundamental d-groups in topology, and tangent vector fields, tensor fields, connections and Minkowski norms in differential geometry, are introduced on these finitely combinatorial manifolds. Some classical results are generalized to finitely combinatorial manifolds. The Euler-Poincaré characteristic is discussed, and geometrical inclusions in Smarandache geometries for various geometries are also presented via the geometrical theory of finitely combinatorial manifolds in this paper.
[5426] vixra:1104.0065 [pdf]
Gravity as a Manifestation of de Sitter Invariance over a Galois Field
We consider a system of two free bodies in de Sitter invariant quantum mechanics. De Sitter invariance is understood in the sense that the representation operators satisfy the commutation relations of the de Sitter algebra. Our approach does not involve quantum field theory, de Sitter space or its geometry (metric and connection). At very large distances the standard relative distance operator describes the well-known cosmological acceleration. In particular, the cosmological constant problem does not exist and there is no need to invoke dark energy or other fields for solving this problem. At the same time, for systems of macroscopic bodies this operator does not have correct properties at smaller distances and should be modified. We propose a modification which has correct properties, reproduces Newton's gravity, the gravitational redshift of light and the precession of Mercury's perihelion if the width of the de Sitter momentum distribution δ for a macroscopic body is inversely proportional to its mass m. We argue that fundamental quantum theory should be based on a Galois field with a large characteristic p, which is a fundamental constant characterizing the laws of physics in our Universe. Then one can give a natural explanation that δ = const·R/(mG), where R is the radius of the Universe (such that λ = 3/R<sup>2</sup> is the cosmological constant) and G is the quantity defining Newton's gravity. A very rough estimate gives G ~ R/(m<sub>N</sub> ln p), where m<sub>N</sub> is the nucleon mass. If R is of order 10<sup>26</sup> m then ln p is of order 10<sup>80</sup> and therefore p is of order exp(10<sup>80</sup>). In the formal limit p → ∞ gravity disappears, i.e. in our approach gravity is a consequence of the finiteness of nature.
[5427] vixra:1104.0064 [pdf]
The Many Novel Physical Consequences of Born's Reciprocal Relativity in Phase-Spaces
We explore the many novel physical consequences of Born's reciprocal relativity theory in flat phase-space and generalize the theory to the curved phase-space scenario. We present six specific novel physical results of Born's reciprocal relativity which are not present in special relativity. These are: momentum-dependent time delay in the emission and detection of photons; an energy-dependent notion of locality; superluminal behavior; relative rotation of photon trajectories due to the aberration of light; invariance of area-cells in phase-space; and modified dispersion relations. We conclude by constructing a Born reciprocal general relativity theory in curved phase-spaces, which requires the introduction of a complex Hermitian metric, torsion and nonmetricity.
[5428] vixra:1104.0062 [pdf]
Pseudo-Manifold Geometries with Applications
A Smarandache geometry is a geometry which has at least one Smarandachely denied axiom (1969), i.e., an axiom that behaves in at least two different ways within the same space, i.e., is validated and invalidated, or only invalidated but in multiple distinct ways, and a Smarandache n-manifold is an n-manifold that supports a Smarandache geometry. Iseri provided a construction for Smarandache 2-manifolds by equilateral triangular disks on a plane, and a more general way for Smarandache 2-manifolds on surfaces, called map geometries, was presented by the author in [9]-[10] and [12]. However, few observations for the cases n ≥ 3 are found in the journals. As a kind of Smarandache geometry, a general way of constructing n-dimensional pseudo-manifolds is presented for any integer n ≥ 2 in this paper. Connections and principal fiber bundles are also defined on these manifolds. Following these constructions, nearly all existent geometries, such as Euclid geometry, Lobachevsky-Bolyai geometry, Riemann geometry, Weyl geometry, Kähler geometry and Finsler geometry, ..., etc., are their sub-geometries.
[5429] vixra:1104.0061 [pdf]
Combinatorial Speculations and the Combinatorial Conjecture for Mathematics
Combinatorics is a powerful tool for dealing with relations among objects that mushroomed in the past century. However, a more important task for mathematicians is to apply combinatorics to other branches of mathematics and other sciences, not merely to find combinatorial behavior in objects. Recently, such research works have appeared in journals of mathematics and of theoretical physics on the cosmos. The main purpose of this paper is to survey this thinking and these ideas for mathematics and cosmological physics, such as those of multi-spaces, map geometries and combinatorial cosmoses, as well as the combinatorial conjecture for mathematics proposed by myself in 2005. Some open problems are included for 21st-century mathematics under a combinatorial speculation.
[5430] vixra:1104.0060 [pdf]
Parallel Bundles in Planar Map Geometries
Parallel lines are very important objects in Euclidean plane geometry and their behavior is intuitively clear. But in a planar map geometry, a kind of Smarandache geometry, the situation is more complex, since it may contain elliptic or hyperbolic points. This paper concentrates on the behavior of parallel bundles in planar map geometries, a generalization of parallel lines in plane geometry, and obtains characteristics for parallel bundles.
[5431] vixra:1104.0059 [pdf]
A New View of Combinatorial Maps by Smarandache's Notion
From a geometrical viewpoint, the conception of map geometries is introduced; it is a nice model of the Smarandache geometries, and also a new and more general kind of intrinsic geometry of surfaces. Some open problems relating combinatorial maps to Riemann geometry and Smarandache geometries are presented.
[5432] vixra:1104.0056 [pdf]
Crystal Power: Piezo Coupling to the Quantum Zero Point
We consider electro-optical constructions in which the Casimir force is modulated in opposition to piezo-crystal elasticity, as in a stack of alternating tunably conductive and piezo layers. Adjacent tunably conducting layers tuned to conduct attract via the Casimir force, compressing the intermediate piezo; when subsequently detuned to insulate, the sandwiched piezo layers expand elastically to restore their original dimension. In each cycle some electrical energy is made available from the quantum zero point (zp). We estimate that the maximum power that could be derived at semiconductor THz modulation rates is megawatts/cm<sup>3</sup>! Similarly, a permittivity wave generated by a THz acoustic wave in a single crystal by the acousto-optic effect produces multiple coherent Casimir wave mode overtones and a bulk mode. We model the Casimir effect in a sinusoidally graded medium, finding it to be greatly enhanced over what is found in a multilayer stack of equivalent permittivity contrast, and more slowly decreasing with scale, going as 1/λ<sup>2</sup> in the wavelength. Acoustic waves give comparable theoretical power levels of MW/cm<sup>3</sup> below normal crystal damage thresholds. Piezo thermodynamic relations give conditions for effective coupling of the Casimir bulk mode to an external electrical load. Casimir wave modes may also exchange energy with the main acoustic wave, which may partially account for THz attenuation seen in materials. We outline feasibility issues for building a practical crystal power generator.
[5433] vixra:1104.0054 [pdf]
Microscopes and Telescopes for Theoretical Physics: How Rich Locally and Large Globally is the Geometric Straight Line?
One is reminded in this paper of the often overlooked fact that the geometric straight line, or GSL, of Euclidean geometry is not necessarily identical with its usual Cartesian coordinatisation given by the real numbers in <b>R</b>. Indeed, the GSL is an abstract idea, while the Cartesian, or for that matter any other, specific coordinatisation of it is but one of the possible mathematical models chosen upon certain reasons. And as is known, there is a variety of mathematical models of the GSL, among them those given by nonstandard analysis, reduced power algebras, the topological long line, and the surreal numbers. As shown in this paper, the GSL can allow coordinatisations which are arbitrarily richer locally and larger globally, being given by corresponding linearly ordered sets of arbitrarily large cardinality. Thus one can obtain in relatively simple ways structures which are richer locally and larger globally than in nonstandard analysis, or in various reduced power algebras. Furthermore, vector space structures can be defined in such coordinatisations. Consequently, one can define an extension of the usual Differential Calculus. This fact can have a major importance in physics, since such locally richer and globally larger coordinatisations of the GSL do allow new physical insights, just as the introduction of various microscopes and telescopes has done. Among other things, it can allow a reassessment of special and general relativity with respect to their independence of the mathematical models used for the GSL. Also, it can allow the more appropriate modelling of certain physical phenomena. The long-vexing issue of so-called "infinities in physics" can obtain a clarifying reconsideration. It indeed all comes down to looking at the GSL with suitably constructed microscopes and telescopes, and applying the resulting new modelling possibilities in theoretical physics.
One may as well consider that in string theory, for instance, where several dimensions are supposed to be compact to the extent of not being observable on classical scales, their mathematical modelling may benefit from the presence of infinitesimals in the mathematical models of the GSL presented here. However, beyond all such particular considerations, and not unlikely also above them, is the following one: theories of physics should be not only background independent, but quite likely should also be independent of the specific mathematical models used when representing geometry, numbers, and in particular the GSL. One of the consequences of considering the essential difference between the GSL and its various mathematical models is that what appears to be the definitive answer is given to the intriguing question raised by Penrose: "Why is it that physics never uses spaces with a cardinality larger than that of the continuum?"
[5434] vixra:1104.0053 [pdf]
A New Proof of Menelaus's Theorem of Hyperbolic Quadrilaterals in the Poincaré Model of Hyperbolic Geometry
In this study, we present a proof of the Menelaus theorem for quadrilaterals in hyperbolic geometry, and a proof for the transversal theorem for triangles.
[5435] vixra:1104.0045 [pdf]
On the Cold Big Bang Cosmology an Alternative Solution Within the GR Cosmology
We solve the general relativity (GR) field equations in the cosmological setting via one extra postulate. The plausibility of the postulate resides within the Heisenberg indeterminacy principle, and it is heuristically analysed in the appendix. Under this approach, a negative energy density may provide the positive energy content of the universe via fluctuation, since the question of conservation of energy in cosmology is weakened, supported by the known lack of scope of Noether's theorem in cosmology. The initial condition of the primordial universe turns out to have a natural cutoff such that the temperature of the cosmological substratum converges to absolute zero, instead of the established divergence at the very beginning. The adopted postulate provides an explanation for the open question of cosmological dark energy. The solution agrees with cosmological observations, including a 2.7 K CMB temperature prediction.
[5436] vixra:1104.0036 [pdf]
The Geometry of Large Rotating Systems
This paper presents an analytical solution to the geometry of large rotating systems which reconciles the peculiar rotation profiles of distant galaxies with Einstein's principle of General Relativity. The resulting mathematical solution shows that large rotating systems are distorted in the space of a non-rotating observer into a spiral pattern with tangential velocities that behave in agreement with those observed in distant galaxies. This paper also demonstrates how the scale of the spiral structure of a rotating system can be used to determine its distance from the observer. The authors' proposed equations for the rotation profile and the distance measure are compared with the observed rotation profiles and Cepheid distance measurements of several galaxies, with strong agreement. A formal error analysis is not included; however, the authors suggest a method for better qualifying the accuracy of the theorems.
[5437] vixra:1104.0035 [pdf]
The Archaeological Search for Tartessos-Tarshish-Atlantis and Other Human Settlements in the Doñana National Park
Adolf Schulten suggested that Tartessos-Tarshish was the model for Plato's Atlantis. I argued that its capital was situated in what is now the Marisma de Hinojos within the central part of the Andalucian Doñana National Park in south-west Spain. This article reports the preliminary results of an archaeological expedition to test this theory. They include evidence of either a tsunami or a storm flood during the third millennium BC and evidence of human settlements from the Neolithic Age to the Middle Ages.
[5438] vixra:1104.0009 [pdf]
The Tetron Model in 6+1 Dimensions
The possibility of a 6+1 dimensional spacetime model being the fundamental theory for elementary particle interactions is explored. The dynamical object is an (octonion) spinor defined over a spacetime lattice with S8 permutation symmetry which gets broken to S4 x S4. Electroweak parity violation is argued to arise from the interplay of the two permutation groups S4, or eventually from the definition of the octonion product. It corresponds to a change in sign for odd permutation lattice transformations and is shown to suggest a form for the Hamiltonian.
[5439] vixra:1103.0128 [pdf]
Possible Nonstandard Effects in Z-Gamma Events at LEP2
We point out that the so-called "radiative return" events e+e- → Zγ are suited to the study of nonstandard physics, particularly if the vector bosons are emitted into the central detector region. An effective vertex is constructed which contains the most general gauge invariant e+e-Zγ interaction, and its phenomenological consequences are examined. Low-energy constraints on the effective vertex are discussed as well.
[5440] vixra:1103.0127 [pdf]
Complete Helicity Decomposition of the B-T-Tbar Vertex Including Higher Order QCD Corrections and Applications to E+e- → T+tbar
The complete density matrix for all polarization configurations in the process B* → t+tbar, where B* is an off-shell Z or photon and t is the top quark, is calculated numerically including one-loop QCD corrections. The analysis is done in the framework of the helicity formalism. The results are particularly suited for top quark production at the Linear Collider, but may be useful in other circumstances as well. Relations to LEP and Tevatron physics are pointed out.
[5441] vixra:1103.0126 [pdf]
New Interactions in Top Quark Production and Decay at the Tevatron Upgrade
New interactions in top-quark production and decay are studied under the conditions of the Tevatron upgrade. Studying the process q+qbar → t+tbar → b+mu+nu+tbar, it is shown how the lepton rapidity and transverse energy distributions are modified by nonstandard modifications of the g-t-tbar and t-b-W vertices.
[5442] vixra:1103.0125 [pdf]
NLO QCD Corrections and Triple Gauge Boson Vertices at the NLC
We study NLO QCD corrections as relevant to hadronic W decay in W pair production at a future 500 GeV e+e- linac, with particular emphasis on the determination of triple gauge boson vertices. We find that hard gluon bremsstrahlung may mimic signatures of anomalous triple gauge boson vertices in certain distributions. The size of these effects can depend strongly on the polarisation of the initial e+e- beams.
[5443] vixra:1103.0124 [pdf]
A Low-Energy Compatible SU(4)-Type Model for Vector Leptoquarks of Mass ≤ 1 TeV
The Standard Model is extended by an SU(2)L singlet of vector leptoquarks. An additional SU(4) gauge symmetry between right-handed up quarks and right-handed leptons is introduced to render the model renormalizable. The arrangement is made in such a way that no conflict with low energy restrictions is encountered. The SU(2)L singlet mediates interactions between the right-handed leptons and up-type quarks, for which only moderate low energy restrictions exist.
[5444] vixra:1103.0123 [pdf]
A Note on QCD Corrections to A<sub>FB</sub><sup>b</sup> Using Thrust to Determine the B-Quark Direction
I discuss one-loop QCD corrections to the forward-backward asymmetry of the b-quark in a way appropriate to the present experimental procedure. I try to give insight into the structure of the corrections and elucidate some questions which have been raised by experimental experts. Furthermore, I complete and comment on results given in the literature.
[5445] vixra:1103.0122 [pdf]
Complete Description of Polarization Effects in Top Quark Decays Including Higher Order QCD Corrections
The complete set of matrix elements for all polarization configurations in top quark decays is presented including higher order QCD corrections. The analysis is done in the framework of the helicity formalism. The results can be used in a variety of circumstances, e.g. in the experimental analysis of top quark production and decay at Tevatron, LHC and NLC. Relations to LEP1 and LEP2 physics are pointed out.
[5446] vixra:1103.0121 [pdf]
Directions in High Energy Physics
The future goals of particle physics are classified from a theorist's point of view. The prospects of mass and mixing angle determination and of the top quark and Higgs boson discovery are discussed. It is shown that the most important progress will come from the LHC and NLC. These machines should be planned and developed as quickly as possible.
[5447] vixra:1103.0120 [pdf]
The First Moment of δg(x) - a Comparative Study
The sensitivity of various future polarization experiments to the first moment Δg of the polarized gluon density is elucidated in detail. It is shown to what extent the first moment can be extracted from the future data as compared to the higher moments. We concentrate on two processes which in the near future will become an important source of information on the polarized gluon density, namely the photoproduction of open charm to be studied at CERN (COMPASS) and SLAC, and the production of direct photons at RHIC.
[5448] vixra:1103.0116 [pdf]
QCD Corrections to W Pair Production at LEP200
One-loop QCD corrections to hadronic W decay are calculated for arbitrary W polarizations. The results are applied to W pair production and decay at LEP200. We focus on the corrections to angular distributions, with particular emphasis on azimuthal distributions and correlations. The relevance of our results to the experimental determination of possible nonstandard triple gauge boson interactions is discussed.
[5449] vixra:1103.0114 [pdf]
The Finite Element Method (FEM) for Finding the Reverberation Times of Irregular Rooms
In this paper we apply a finite element method to find the effects on reverberation times of common irregularities found in auditoria, such as curved surfaces, non-parallel walls and large open-walled ante-rooms. The number of modes having a reverberation time in a specified time interval is expressed as a function of the total allowed degrees of freedom, and it is shown that even when the number of degrees of freedom of the model is large there is, in general, no one dominant group. Curved surfaces in particular lead to a situation where some modes have very long reverberation times, leading to bad acoustics. In such situations a knowledge of the offending mode shapes gives an indication of where to position absorptive material for optimum effect.
[5450] vixra:1103.0111 [pdf]
Some Studies on K-Essence Lagrangian
It has by now been established that the universe consists of roughly 25 percent dark matter and 70 percent dark energy. A parametric Lagrangian derived from an exact k-essence Lagrangian is studied for a unified dark matter and dark energy model.
[5451] vixra:1103.0110 [pdf]
The Structuring Force of Galaxies
The concept of rational structure was suggested in 2000. A flat material distribution is called a rational structure if there exists a special net of orthogonal curves on the plane such that the ratio of the mass density on one side of a curve (from the net) to that on the other side is constant along the curve. Such a curve is called a proportion curve, and such a net of curves is called an orthogonal net of proportion curves. Eleven years have passed, and a sufficient condition for rationality of a given material distribution has finally been obtained. This completes the mathematical basis for the study of rational structure and its application to galaxies. People can fit the stellar distribution of a barred spiral galaxy with exponential disk and dual-hand structure by varying the parameter values. If the conjecture that barred galaxies satisfy a rational sufficient condition is proved, then the assumption of the rational origin of galaxies will be established.
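The defining property of a proportion curve quoted above (a constant density ratio along the curve) is easy to check for a simple profile; the exponential disk below is our own illustrative choice, not taken from the paper:

```python
import math

# For the density profile rho(x, y) = exp(-x), the vertical line x = c is a
# proportion curve: the ratio rho(c + d, y) / rho(c - d, y) = exp(-2d) is the
# same at every point (c, y) of the line, as the definition above requires.
def rho(x, y):
    return math.exp(-x)

c, d = 1.0, 0.25
ratios = [rho(c + d, y) / rho(c - d, y) for y in (-2.0, 0.0, 3.5)]
# every entry of `ratios` equals exp(-0.5), independent of y
```

Together with the horizontal lines orthogonal to them, such vertical lines form an orthogonal net of proportion curves for this toy distribution.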
[5452] vixra:1103.0104 [pdf]
Quantum Field Theory with Electric-Magnetic Duality and Spin-Mass Duality But Without Grand Unification and Supersymmetry
I present a generalization of quantum electrodynamics which includes Dirac magnetic monopoles and the Salam magnetic photon. This quantum electromagnetodynamics has many attractive features. (1) It explains the quantization of electric charge. (2) It describes symmetrized Maxwell equations. (3) It is manifestly covariant. (4) It describes local four-potentials. (5) It avoids the unphysical Dirac string. (6) It predicts a second kind of electromagnetic radiation which can be verified by a tabletop experiment. An effect of this radiation may have been observed by August Kundt in 1885. Furthermore I discuss a generalization of General Relativity which includes Cartan's torsion. I discuss the mathematical definition, concrete description, and physical meaning of Cartan's torsion. I argue that the electric-magnetic duality of quantum electromagnetodynamics is analogous to the spin-mass duality of Einstein-Cartan theory. A quantum version of this theory requires that the torsion tensor corresponds to a spin-3 boson called the tordion, which is shown to have a rest mass close to the Planck mass. Moreover I present an empirically satisfied fundamental equation of unified field theory which includes the fundamental constants of electromagnetism and gravity. I conclude with the remark that the concepts presented here require neither Grand Unification nor supersymmetry.
[5453] vixra:1103.0089 [pdf]
A Prediction Loophole in Bell's Theorem
We consider the Bell's Theorem setup of Gill et al. (2002). We present a "proof of concept" that if the source emitting the particles can predict the settings of the detectors with sufficiently large probability, then there is a scenario consistent with local realism that violates the Bell inequality for the setup.
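The flavour of such a scenario can be sketched with a toy Monte Carlo (our own construction, not the authors' model; `p_predict` and the deterministic strategies below are hypothetical): a local-realist source that learns the detector settings in advance with probability `p_predict` can push the CHSH statistic above the classical bound of 2.

```python
import random

def chsh_with_prediction(p_predict, n_trials=20000, seed=0):
    """Toy 'prediction loophole' simulation: with probability p_predict the
    source knows the setting pair (a, b) in advance and tailors its outcomes;
    otherwise it falls back to a fixed local strategy (classical bound S = 2)."""
    rng = random.Random(seed)
    sums = {(a, b): [0.0, 0] for a in (0, 1) for b in (0, 1)}
    for _ in range(n_trials):
        a, b = rng.randint(0, 1), rng.randint(0, 1)   # detector settings
        if rng.random() < p_predict:
            # Predicted settings: outcome product +1, except -1 for (1, 1),
            # which drives the CHSH combination toward its algebraic maximum 4.
            prod = -1 if (a, b) == (1, 1) else 1
        else:
            # Fallback local strategy: both sides always output +1.
            prod = 1
        s, n = sums[(a, b)]
        sums[(a, b)] = [s + prod, n + 1]
    E = {k: s / n for k, (s, n) in sums.items()}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]
```

With perfect prediction the statistic reaches 4, and any nonzero prediction probability lifts it above the local-realist bound of 2.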
[5454] vixra:1103.0087 [pdf]
Reduced Total Energy Requirements for the Original Alcubierre and Natario Warp Drive Spacetimes: The Role of Warp Factors
Warp Drives are solutions of the Einstein Field Equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre Warp Drive, discovered in 1994, and the Natario Warp Drive, discovered in 2001. However, as stated by both Alcubierre and Natario themselves, the Warp Drive violates all the known energy conditions, because the stress-energy-momentum tensor (the right side of the Einstein Field Equations) for the Einstein tensor G<sub>00</sub> is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, quantum theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. But the stress-energy-momentum tensors of both the Alcubierre and Natario Warp Drives have the speed of the ship squared inside their mathematical structure, which means that the faster the ship goes, the larger the amount of negative energy needed to maintain the Warp Drive. Since the total energy requirements to maintain the Warp Drive are enormous, and since quantum theory only allows small amounts of negative energy, many authors have regarded the Warp Drive as unphysical and impossible to achieve. We compute the negative energy density requirements for a Warp Bubble with a radius of 100 meters (large enough to contain a ship) moving at 200 times light speed (fast enough to reach stars 20 light-years away in months, not years), and we verify that the negative energy density requirements are about 10<sup>28</sup> times the positive energy density of Earth! (We multiply the mass of Earth by c<sup>2</sup> and divide by the Earth volume for a radius of 6300 km.) However, both the Alcubierre and Natario Warp Drives, as members of the same family of solutions of the Einstein Field Equations, require the so-called Shape Functions in order to be mathematically defined.
We present in this work two new Shape Functions, one for the Alcubierre and another for the Natario Warp Drive Spacetime, that allow arbitrary superluminal speeds while keeping the negative energy density at "low" and "affordable" levels. We do not violate any known law of quantum physics, and we maintain the original geometries of both the Alcubierre and Natario Warp Drive Spacetimes.
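The Earth energy-density figure quoted in the abstract is a one-line computation; the sketch below uses standard constant values (the 10<sup>28</sup> multiple itself is the abstract's claim, not ours):

```python
from math import pi

M_EARTH = 5.972e24   # kg, mass of Earth
C = 2.998e8          # m/s, speed of light
R_EARTH = 6.3e6      # m, the 6300 km radius used in the abstract

volume = (4.0 / 3.0) * pi * R_EARTH**3        # m^3, Earth volume
rho_earth = M_EARTH * C**2 / volume           # J/m^3, Earth's positive energy density
# rho_earth comes out on the order of 5e20 J/m^3; 1e28 times this would be
# roughly 5e48 J/m^3 of negative energy density for the warp bubble.
```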
[5455] vixra:1103.0075 [pdf]
Probability Distribution Function of the Particle Number in a System with Concurrent Existence of Temperature T and Potential Φ
In a system, coupling among a large number of charged particles will induce a potential Φ. When temperature T and potential Φ exist concurrently in the system, the particle potential energy and kinetic energy satisfy a probabilistic statistical distribution. Based on such considerations, we establish the quantum statistical distribution for the particles. When temperature T → 0 and the potential is extremely low, all the particles in the system approach the ground-state-level distribution.
[5456] vixra:1103.0066 [pdf]
Exceptional Jordan Strings/Membranes and Octonionic Gravity/p-branes
Nonassociative Octonionic Ternary Gauge Field Theories are revisited, paving the way to an analysis of the many physical applications of Exceptional Jordan Strings/Membranes and Octonionic Gravity. The old octonionic gravity constructions based on the split octonion algebra Os (which, strictly speaking, is not a division algebra) are extended to the full-fledged octonion division algebra O. A real-valued analog of the Einstein-Hilbert Lagrangian L = R involving sums of all the possible contractions of the Ricci tensors plus their octonionic-complex conjugates is presented. A discussion follows of how to extract the Standard Model group (the gauge fields) from the internal part of the octonionic gravitational connection. The role of Exceptional Jordan algebras and their automorphism and reduced structure groups, which play the roles of the rotation and Lorentz groups, is also re-examined. Finally, we construct (to our knowledge) novel generalized octonionic string and p-brane actions and raise the possibility that our generalized 3-brane action (based on a quartic product) in octonionic flat backgrounds of 7 or 8 octonionic dimensions may display an underlying E7 or E8 symmetry, respectively. We conclude with some final remarks pertaining to developments related to Jordan exceptional algebras, octonions, black holes in string theory, and quantum information theory.
[5457] vixra:1103.0062 [pdf]
On the Measurement, Statistics and Uncertainty
It is intended here to propose descriptive explanations for the basic statistical concepts. Although most of them are highly familiar to us, their conventional descriptions have vague aspects. Special focus is placed on the absolute probabilistic uncertainty, which is characterized by the momentum of the measurement device and of the system being measured.
[5458] vixra:1103.0057 [pdf]
Serious Anomalies in the Reported Geometry of Einstein's Gravitational Field
Careful reading of the reported geometry of Einstein's gravitational field reveals that physicists have committed fatal errors in the elementary differential geometry of a pseudo-Riemannian metric manifold. These elementary errors in mathematics invalidate much of the reported physics of Einstein's gravitational field. The consequences for astrophysical theory are significant.
[5459] vixra:1103.0056 [pdf]
Fundamental Errors in the General Theory of Relativity
The notion of black holes voraciously gobbling up matter, twisting spacetime into contortions that trap light, stretching the unwary into long spaghetti-like strands as they fall inward to ultimately collide and merge with an infinitely dense point-mass singularity, has become a mantra of the astrophysical community. There are almost daily reports of scientists claiming that they have again found black holes here and there. It is asserted that black holes range in size from micro to mini, to intermediate and on up through to supermassive behemoths, and it is accepted as scientific fact that they have been detected at the centres of galaxies. Images of black holes interacting with surrounding matter are routinely included with reports of them. Some physicists even claim that black holes will be created in particle accelerators, such as the Large Hadron Collider, potentially able to swallow the Earth if care is not taken in their production. Yet contrary to the assertions of the astronomers and astrophysicists of the black hole community, nobody has ever found a black hole, anywhere, let alone imaged one. The pictures adduced to convince are actually either artistic impressions (i.e. drawings) or photos of otherwise unidentified objects imaged by telescopes and merely asserted to be due to black holes, ad hoc.
[5460] vixra:1103.0055 [pdf]
On Line-Elements and Radii: A Correction
Using a manifold with boundary, various line-elements have been proposed as solutions to Einstein's gravitational field. It is from such line-elements that black holes, expansion of the Universe, and big bang cosmology have been alleged. However, it has been proved that black holes, expansion of the Universe, and big bang cosmology are not consistent with General Relativity. In a previous paper disproving the black hole theory, the writer made an error which, although minor and having no effect on the conclusion that black holes are inconsistent with General Relativity, is corrected herein for the record.
[5461] vixra:1103.0054 [pdf]
Planck Particles and Quantum Gravity
The alleged existence of so-called Planck particles is examined. The various methods for deriving the properties of these "particles" are examined and it is shown that their existence as genuine physical particles is based on a number of conceptual flaws which serve to render the concept invalid.
[5462] vixra:1103.0053 [pdf]
On Theoretical Contradictions and Physical Misconceptions in the General Theory of Relativity
It is demonstrated herein that:<br/> 1. The quantity "r" appearing in the so-called "Schwarzschild solution" is neither a distance nor a geodesic radius in the manifold but is in fact the inverse square root of the Gaussian curvature of the spatial section, and does not generally determine the geodesic radial distance (the proper radius) from the arbitrary point at the centre of the spherically symmetric metric manifold;<br/> 2. The Theory of Relativity forbids the existence of point-mass singularities because they imply infinite energies (or equivalently, that a material body can acquire the speed of light in vacuo);<br/> 3. Ric=R<sub>μν</sub>=0 violates Einstein's "Principle of Equivalence" and so does not describe Einstein's gravitational field;<br/> 4. Einstein's conceptions of the conservation and localisation of gravitational energy are invalid;<br/> 5. The concepts of black holes and their interactions are ill-conceived;<br/> 6. The FRW line-element actually implies an open, infinite Universe in both time and space, thereby invalidating the Big Bang cosmology.
[5463] vixra:1103.0052 [pdf]
Geometric and Physical Defects in the Theory of Black Holes
The so-called "Schwarzschild solution" is not Schwarzschild's solution, but a corruption of the Schwarzschild/Droste solution due to David Hilbert (December 1916), wherein m is allegedly the mass of the source of the alleged associated gravitational field and the quantity r is alleged to be able to go down to zero (although no proof of this claim has ever been advanced), so that there are two alleged "singularities", one at r=2m and another at r=0. It is routinely alleged that r=2m is a "coordinate" or "removable" singularity which denotes the so-called "Schwarzschild radius" (event horizon) and that the "physical" singularity is at r=0. The quantity r in the usual metric has never been rightly identified by the physicists, who effectively treat it as a radial distance from the alleged source of the gravitational field at the origin of coordinates. The consequence of this is that the intrinsic geometry of the metric manifold has been violated in the procedures applied to the associated metric by which the black hole has been generated. It is easily proven that the said quantity r is in fact the inverse square root of the Gaussian curvature of a spherically symmetric geodesic surface in the spatial section of Schwarzschild spacetime and so does not denote radial distance in the Schwarzschild manifold. With the correct identification of the associated Gaussian curvature it is also easily proven that there is only one singularity associated with all Schwarzschild metrics, of which there is an infinite number that are equivalent. Thus, the standard removal of the singularity at r=2m is actually a removal of the wrong singularity, as very simply demonstrated herein.
[5464] vixra:1103.0051 [pdf]
The Schwarzschild Solution and Its Implications for Gravitational Waves
The so-called "Schwarzschild solution" is not Schwarzschild's solution, but a corruption, due to David Hilbert (December 1916), of the Schwarzschild/Droste solution, wherein m is allegedly the mass of the source of a gravitational field and the quantity r is alleged to be able to go down to zero (although no proof of this claim has ever been advanced), so that there are two alleged "singularities", one at r=2m and another at r=0. It is routinely asserted that r=2m is a "coordinate" or "removable" singularity which denotes the so-called "Schwarzschild radius" (event horizon) and that the "physical" singularity is at r=0. The quantity r in the so-called "Schwarzschild solution" has never been rightly identified by the physicists, who, although proposing many and varied concepts for what r therein denotes, effectively treat it as a radial distance from the claimed source of the gravitational field at the origin of coordinates. The consequence of this is that the intrinsic geometry of the metric manifold has been violated. It is easily proven that the said quantity r is in fact the inverse square root of the Gaussian curvature of a spherically symmetric geodesic surface in the spatial section of the "Schwarzschild solution" and so does not in itself define any distance whatsoever in that manifold. With the correct identification of the associated Gaussian curvature it is also easily proven that there is only one singularity associated with all Schwarzschild metrics, of which there is an infinite number that are equivalent. Thus, the standard removal of the singularity at r=2m is, in a very real sense, removal of the wrong singularity, as very simply demonstrated herein. This has major implications for the localisation of gravitational energy, i.e. gravitational waves.
[5465] vixra:1103.0046 [pdf]
Concerning Fundamental Mathematical and Physical Defects in the General Theory of Relativity
The physicists have misinterpreted the quantity "r" appearing in the so-called "Schwarzschild solution": it is neither a distance nor a geodesic radius but is in fact the inverse square root of the Gaussian curvature of a spherically symmetric geodesic surface in the spatial section of the Schwarzschild manifold, and so it does not directly determine any distance at all in the Schwarzschild manifold; in other words, it determines the Gaussian curvature at any point of a spherically symmetric geodesic surface in the spatial section of the manifold. The concept of the black hole is consequently invalid. It is also shown herein that the Theory of Relativity forbids the existence of point-mass singularities because they imply infinite energies (or equivalently, that a material body can acquire the speed of light in vacuo), and so the black hole is forbidden by the Theory of Relativity. That Ric=R<sub>μν</sub> = 0 violates Einstein's "Principle of Equivalence" and so does not describe Einstein's gravitational field is demonstrated. It immediately follows that Einstein's conceptions of the conservation and localisation of gravitational energy are invalid: the General Theory of Relativity violates the usual conservation of energy and momentum.
[5466] vixra:1103.0025 [pdf]
Guessed Formulae for the Elementary Particle Masses, Interpretation and Arguments of Them and a New View on Quantum Gravity
Formulae for the masses of elementary particles obtained by guessing are presented in this article. A derivation which tries to physically interpret these formulae was also made afterward. Said simply, it is obtained that particle masses vary so quickly that they are mainly zero and sometimes roughly equal to the Planck mass, while we, as measurers, measure a particle mass only as a time average. Then arguments are listed for why it is possible that these formulae do not contradict known physical facts, and why it is necessary that something like them should exist to describe the masses of elementary particles. The formulae and models are realistic according to known physics. An example is also shown which can be verified with a statistical analysis. Some findings are shown which may survive even if the formulae themselves do not.
[5467] vixra:1103.0018 [pdf]
Global Weather Control Using Nuclear Reactors on Geographic Poles
The geographic north and south poles are key points in global atmospheric dynamics. Taking chaos theory into account, any large perturbation of the local atmospheric velocity field at the geographic poles has the potential of affecting weather patterns all over the globe. Generating thermal upcurrents in the atmosphere at the geographic poles, using heat from nuclear reactors, opens up the possibility of benign global weather control - and a globally temperate climate.
[5468] vixra:1102.0058 [pdf]
The Powers of π are Irrational
Transcendence of a number implies the irrationality of its powers, but in the case of π there are no separate proofs that the powers of π are irrational. We investigate this curiosity. Transcendence proofs for e involve what we call Hermite's technique; for π's transcendence, Lindemann's adaptation of Hermite's technique is used. Hermite's technique is presented and its usage is demonstrated with irrationality proofs of e and powers of e. Applying Lindemann's adaptation to a complex polynomial, π is shown to be irrational. This use of a complex polynomial generalizes, and the powers of π are shown to be irrational. The complex polynomials used involve roots of i and yield regular polygons in the complex plane. One can use graphs of these polygons to visualize the various mechanisms used to prove that π<sup>2</sup>, π<sup>3</sup>, and π<sup>4</sup> are irrational. The transcendence of π and e are easy generalizations from these irrational cases.
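The geometric claim above - that the roots of i used in the polynomials yield regular polygons in the complex plane - is easy to check numerically. A minimal sketch (an illustration of the geometry only, not the paper's proof): the four solutions of z<sup>4</sup> = i lie on the unit circle, 90° apart, forming a square.

```python
import cmath
import math

# The 4th roots of i: z_k = exp(i*(pi/8 + k*pi/2)), solving z**4 = i.
roots = [cmath.exp(1j * (math.pi / 8 + k * math.pi / 2)) for k in range(4)]

for z in roots:
    assert abs(z**4 - 1j) < 1e-12        # each really is a 4th root of i
    assert abs(abs(z) - 1.0) < 1e-12     # all lie on the unit circle

# Consecutive roots are separated by 90 degrees: a regular polygon (square).
angles = sorted(cmath.phase(z) % (2 * math.pi) for z in roots)
gaps = [angles[k + 1] - angles[k] for k in range(3)]
assert all(abs(g - math.pi / 2) < 1e-12 for g in gaps)
```

The same construction with z<sup>n</sup> = i gives a regular n-gon, which is the kind of figure the abstract refers to.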
[5469] vixra:1102.0050 [pdf]
Energy Transfer and Differential Entropy of Two Charged Systems that Show Potential Difference
In the process of combining two charged systems with different potentials, we demonstrate that energy transfers from the high-potential system to the low-potential one, during which the entropies of the two systems show corresponding changes.
[5470] vixra:1102.0049 [pdf]
The Partition Function Z and Lagrangian Multiplier 1/(qΦ) of a Particle-Charged System
In a system consisting of a large number of charged particles, the potential energies of the in-system particles satisfy a probability distribution. Starting from this consideration, we define the partition function Z, obtain the in-system particle distribution function N<sub>i</sub>, and derive the Lagrangian multiplier 1/(qΦ) by the method of Lagrange multipliers.
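The abstract gives no formulas, but the general technique it names - fixing a Lagrange multiplier so that a Boltzmann-type distribution built from a partition function reproduces a mean-value constraint - can be sketched generically. In this sketch β stands in for the role the paper assigns to 1/(qΦ); the energy levels and target mean are made-up illustrative values, not from the paper.

```python
import math

def solve_beta(levels, mean_target, lo=1e-6, hi=50.0, tol=1e-12):
    """Bisection for the Lagrange multiplier beta such that the distribution
    p_i ∝ exp(-beta * e_i) over the given levels has the required mean."""
    def mean(beta):
        z = sum(math.exp(-beta * e) for e in levels)            # partition function Z
        return sum(e * math.exp(-beta * e) for e in levels) / z
    # mean(beta) decreases monotonically in beta, so bisection works
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [0.0, 1.0, 2.0, 3.0]                  # illustrative energy levels
beta = solve_beta(levels, mean_target=1.0)
z = sum(math.exp(-beta * e) for e in levels)
probs = [math.exp(-beta * e) / z for e in levels]
assert abs(sum(probs) - 1.0) < 1e-9            # normalized distribution
assert abs(sum(e * p for e, p in zip(levels, probs)) - 1.0) < 1e-6  # constraint met
```

The multiplier comes out of the constraint, exactly as 1/(kT) does in the standard canonical ensemble; the paper's claim is that 1/(qΦ) plays the analogous role for its potential-energy distribution.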
[5471] vixra:1102.0048 [pdf]
The Potential Function and Entropy Function of a System that Carries Large Number of Charged Particles
If a system consists of a large number of charged particles, any one of the system's particles couples with its neighbors with dissimilar strengths. Therefore, the system's particles produce dissimilar potentials, which satisfy a probability distribution. To make the potential induced by wave number k an exact differential, we introduce the function λ. In this way, we define the potential function Φ and entropy function S of the system.
[5472] vixra:1102.0035 [pdf]
Nature's Selection of Cubic Roots
The English naturalist Charles Darwin established that all species of life have descended over time from common ancestry, and proposed the scientific theory that this branching pattern of evolution resulted from a process that he called natural selection. In fact, Darwin's theory dealt with the evolutionary phenomena of the biosphere, not its origins. Furthermore, there exist natural worlds other than our beloved one. Compared to the large-scale structure of galaxies, the biosphere is "microscopic". The electromagnetic and nuclear forces which rule that world disappear in the formation of large-scale galaxy structure. Similarly, they disappear in the formation of the solar system. My previous papers showed that large-scale galaxy structure originates rationally from an algebraic cubic equation. This paper presents nature's selection of the cubic roots and its application to the galaxy NGC 3275.
[5473] vixra:1102.0033 [pdf]
Projective Properties of Informational Voltage in the Periodic System of the Socion
The periodic system of the socion (PSS) gives the direction and intensity of the informational voltage between two people. The first version of the PSS (G. A. Shulman) was a flat table with two poles, with some voltage existing between the two poles. The new version of the PSS (by the present author) is a two-dimensional projective manifold with two maps. The second map defines a gluing of the two poles of the PSS, which means the informational voltage propagates through the glued poles, i.e. through some abstract infinity. One can see a correlation with Euler-Varshamov projective arithmetic (for Euler, negative numbers are greater than infinity). An algebraic interpretation of intuition is given.
[5474] vixra:1102.0032 [pdf]
Programming Relativity as the Mathematics of Perspective in a Planck Simulation Hypothesis Universe
The Simulation Hypothesis proposes that all of reality is in fact an artificial simulation, analogous to a computer simulation, and that as such our reality is an illusion. Outlined here is a method for programming relativistic mass, space and time at the Planck level, as applicable for use in a (Planck-level) Universe-as-a-Simulation Hypothesis. For the virtual universe the model uses a 4-axis hyper-sphere that expands in incremental steps (the simulation clock-rate). Virtual particles that oscillate between an electric wave-state and a mass point-state are mapped within this hyper-sphere, the oscillation driven by this expansion. Particles are assigned an N-S axis which determines the direction in which they are pulled along by the expansion; thus an independent particle momentum may be dispensed with. Only in the mass point-state do particles have fixed hyper-sphere co-ordinates. The rate of expansion translates to the speed of light, and so in terms of the hyper-sphere co-ordinates all particles (and objects) travel at the speed of light; time (as the clock-rate) and velocity (as the rate of expansion) are therefore constant. Photons, however, as the means of information exchange, are restricted to lateral movement across the hyper-sphere, thus giving the appearance of a 3-D space. Lorentz formulas are used to translate between this '3-D space' and the hyper-sphere co-ordinates, relativity resembling the mathematics of perspective.
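The final claim - that all objects travel at c in hyper-sphere co-ordinates, with Lorentz formulas translating to the apparent 3-D space - has a simple arithmetical consequence that can be sketched: if the total speed is always c, a lateral 3-D velocity v leaves a radial (expansion-axis) component c·sqrt(1 − v²/c²), which is exactly the inverse Lorentz factor. A minimal sketch under that assumption (an illustration of the geometry, not the paper's derivation):

```python
import math

C = 1.0  # speed of light in natural units

def radial_rate(v_lateral):
    """If the total speed through the hyper-sphere is always c, the radial
    (expansion-direction) component remaining after a lateral 3-D velocity v
    is c*sqrt(1 - v**2/c**2) -- the inverse Lorentz factor."""
    return math.sqrt(C**2 - v_lateral**2)

for v in (0.0, 0.6, 0.8, 0.99):
    gamma = 1.0 / math.sqrt(1.0 - v**2 / C**2)
    # the proper-time (clock) rate along the expansion axis equals 1/gamma
    assert abs(radial_rate(v) - C / gamma) < 1e-12

# e.g. at v = 0.8c the radial component is 0.6c, matching gamma = 1/0.6
assert abs(radial_rate(0.8) - 0.6) < 1e-12
```

This is the same Pythagorean decomposition that underlies the familiar "everything moves at c through spacetime" picture of special relativity, which is presumably why the model can borrow the Lorentz formulas directly.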
[5475] vixra:1102.0028 [pdf]
The Planck Scale in the Light of Psychological Enquiry
<p>A psychological enquiry into the Planck scale in quantum gravity, as guided by the application of Garrett Hardin's three filters against folly: literacy (what are the words?), numeracy (what are the numbers?), and ecolacy (and then, what?).</p>
[5476] vixra:1102.0021 [pdf]
A Mathematical Model of the Quark and Lepton Mixing Angles (2011 Update)
A single mathematical model encompassing both quark and lepton mixing is described. This model exploits the fact that when a 3 × 3 rotation matrix whose elements are squared is subtracted from its transpose, a matrix is produced whose non-diagonal elements have a common absolute value, where this value is an intrinsic property of the rotation matrix. For the traditional CKM quark mixing matrix with its second and third rows interchanged (i.e., c - t interchange), this value equals one-third the corresponding value for the leptonic matrix (roughly, 0.05 versus 0.15). By imposing this and two additional related constraints on mixing, and letting leptonic φ<sub>23</sub> equal 45<sup>°</sup>, a framework is defined possessing just two free parameters. A mixing model is then specified using values for these two parameters that derive from an equation that reproduces the fine structure constant. The resultant model, which possesses no constants adjusted to fit experiment, has mixing angles of θ<sub>23</sub> = 2.367445<sup>°</sup>, θ<sub>13</sub> = 0.190987<sup>°</sup>, θ<sub>12</sub> = 12.920966<sup>°</sup>, φ<sub>23</sub> = 45<sup>°</sup>, φ<sub>13</sub> = 0.013665<sup>°</sup>, and φ<sub>12</sub> = 33.210911<sup>°</sup>. A fourth, newly-introduced constraint of the type described above produces a Jarlskog invariant for the quark matrix of 2.758 ×10<sup>−5</sup>. Collectively these achieve a good fit with the experimental quark and lepton mixing data. The model predicts the following CKM matrix elements: |V<sub>us</sub>| = √0.05 = 2.236 × 10<sup>−1</sup>, |V<sub>ub</sub>| = 3.333 × 10<sup>−3</sup>, and |V<sub>cb</sub>| = 4.131 × 10<sup>−2</sup>. For leptonic mixing the model predicts sin<sup>2</sup>φ<sub>12</sub> = 0.3, sin<sup>2</sup>φ<sub>23</sub> = 0.5, and sin<sup>2</sup>φ<sub>13</sub> = 5.688 × 10<sup>−8</sup>.
At the time of its 2007 introduction the model's values for |V<sub>us</sub>| and |V<sub>ub</sub>| had disagreements with experiment of an improbable 3.6σ and 7.0σ, respectively, but 2010 values from the same source now produce disagreements of just 2.4σ and 1.1σ, the absolute error for |V<sub>us</sub>| having been reduced by 53%, and that for |V<sub>ub</sub>| by 78%.
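The matrix property this model exploits - square the elements of a 3 × 3 rotation matrix, subtract the transpose, and the three off-diagonal magnitudes coincide - holds because the element-wise square of a rotation matrix is doubly stochastic (its rows and columns each sum to 1). A quick numerical check of the stated property (an illustration only, not the paper's derivation):

```python
import math
import random

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(a, b, c):
    """3x3 rotation as a product of elemental rotations about x, y, z."""
    Rx = [[1, 0, 0], [0, math.cos(a), -math.sin(a)], [0, math.sin(a), math.cos(a)]]
    Ry = [[math.cos(b), 0, math.sin(b)], [0, 1, 0], [-math.sin(b), 0, math.cos(b)]]
    Rz = [[math.cos(c), -math.sin(c), 0], [math.sin(c), math.cos(c), 0], [0, 0, 1]]
    return matmul(Rz, matmul(Ry, Rx))

random.seed(1)
for _ in range(100):
    R = rotation(*(random.uniform(0, 2 * math.pi) for _ in range(3)))
    M = [[x * x for x in row] for row in R]   # element-wise squares: doubly stochastic
    # off-diagonal magnitudes of M - M^T
    d01 = abs(M[0][1] - M[1][0])
    d02 = abs(M[0][2] - M[2][0])
    d12 = abs(M[1][2] - M[2][1])
    # the three magnitudes coincide -- the "intrinsic property" the model uses
    assert max(d01, d02, d12) - min(d01, d02, d12) < 1e-12
```

The proof is two lines: since rows and columns of M both sum to 1, (M − Mᵀ)₁₂ = −(M − Mᵀ)₁₃ = (M − Mᵀ)₂₃, so all three off-diagonal entries share one absolute value.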
[5477] vixra:1102.0012 [pdf]
The Fine Structure Constant Derived from the Broken Symmetry of Two Simple Algebraic Identities
The fine structure constant is shown to arise naturally in the course of altering the symmetry of two algebraic identities. Specifically, the symmetry of the identity <i>M</i><sup>2</sup> = <i>M</i><sup>2</sup> is "broken" by making the substitution <i>M</i> → <i>M</i> − <i>y</i> on its left side, and the substitution <i>M</i><sup><i>n</i></sup> → <i>M</i><sup><i>n</i></sup> − <i>x</i><sup><i>p</i></sup> on its right side, where <i>p</i> equals the order of the identity; these substitutions convert the above identity into the equation (<i>M</i> − <i>y</i>)<sup>2</sup> = <i>M</i><sup>2</sup> − <i>x</i><sup>2</sup>. These same substitutions are also applied to the only slightly more complicated identity (<i>M</i> / <i>N</i>)<sup>3</sup> + <i>M</i><sup>2</sup> = (<i>M</i> / <i>N</i>)<sup>3</sup> + <i>M</i><sup>2</sup> to produce this second equation (<i>M</i> − <i>y</i>)<sup>3</sup> / <i>N</i><sup>3</sup> + (<i>M</i> − <i>y</i>)<sup>2</sup> = (<i>M</i><sup>3</sup> − <i>x</i><sup>3</sup>) / <i>N</i><sup>3</sup> + <i>M</i><sup>2</sup> − <i>x</i><sup>3</sup>. These two equations are then shown to share a mathematical property relating to <i>dy</i>/<i>dx</i>, where, on the second equation's right side this property helps define the special case (<i>M</i><sup>3</sup> − <i>x</i><sup>3</sup>) / <i>N</i><sup>3</sup> + <i>M</i><sup>2</sup> − <i>x</i><sup>3</sup> = (10<sup>3</sup> − 0.1<sup>3</sup>) / 3<sup>3</sup> + 10<sup>2</sup> − 0.1<sup>3</sup> = 137.036, which incorporates a value close to the experimental fine structure constant inverse.
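The quoted special case is plain arithmetic and can be checked directly; with M = 10, x = 0.1, N = 3 it evaluates to exactly 137.036, which is close to (but not equal to) the CODATA value 137.035999... of the inverse fine structure constant.

```python
# Right-hand side of the special case quoted in the abstract (M=10, x=0.1, N=3):
# (M**3 - x**3) / N**3 + M**2 - x**3
value = (10**3 - 0.1**3) / 3**3 + 10**2 - 0.1**3
assert abs(value - 137.036) < 1e-9
```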
[5478] vixra:1102.0010 [pdf]
The Mirror Neutrino Dark Matter Hypothesis
In a quantum information approach to quantum gravity, one naturally extends the Bilson-Thompson braid particle spectrum by a right handed neutrino sector. This suggests a parity restoring non-local form of mirror matter, considered as a novel contributor to the dark matter sector. In the non-standard Riofrio cosmology, where the entire dark matter sector is approximated by black hole states, the mirror matter should occupy a space on the other side of our conformal horizons, which are present everywhere in our universe. In particular we note that the Koide matrix antineutrino rest mass prediction of 0.00117 eV corresponds precisely to a black body peak temperature of 2.73 K, the CMB temperature, as a result of its annihilation with mirror antineutrinos. Initial consequences of these ideas for dark matter profiles are discussed.
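The stated numerical coincidence can be checked: a photon at the Wien-peak wavelength of a 2.73 K blackbody carries roughly 0.00117 eV. The sketch below verifies only this arithmetic, using standard CODATA-style constants; the proposed annihilation mechanism itself is the paper's claim and is not checked here.

```python
# Standard constants (SI)
h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # J per electron-volt
b = 2.897771955e-3       # Wien displacement constant, m K

T_cmb = 2.73             # CMB temperature quoted in the abstract, K

lam_peak = b / T_cmb               # wavelength of the blackbody peak
E_peak = h * c / lam_peak / eV     # photon energy at that wavelength, eV

# Compare with the quoted Koide antineutrino mass prediction of 0.00117 eV
assert abs(E_peak - 0.00117) < 5e-5
```

Note the check depends on which "peak" is meant: the wavelength-form Wien law used here gives ~0.00117 eV, whereas the frequency-form peak would give a different number.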
[5479] vixra:1101.0100 [pdf]
Chemical Potential of Equilibrium Electromagnetic Radiation and the Means for Electromagnetic Waves to Propagate in Free Space
The article shows that, if the photon is viewed as a particle moving in empty space, the zero value of the chemical potential of equilibrium electromagnetic radiation cannot be explained based only on the first principles of statistical physics. On the contrary, explaining why the chemical potential of equilibrium electromagnetic radiation equals zero is rather simple if the photon is considered as a quasi-particle, that is, a way to describe the collective motion of a system consisting of a fixed number of particles. Collective motions of the particles of the mentioned system are interpreted in the article as oscillations of an electromagnetic field, which corresponds to the observational data of modern astronomy, according to which the space that fills the gaps, both between massive objects and between the massive particles forming them, should be attributed the characteristics of a continuous medium.
[5480] vixra:1101.0091 [pdf]
Fermat Last Theorem And Riemann Hypothesis (6)
In 1637 Fermat wrote: "It is impossible to separate a cube into two cubes, or a biquadrate into two biquadrates, or in general any power higher than the second into powers of like degree: I have discovered a truly marvelous proof, which this margin is too small to contain." (6)
[5481] vixra:1101.0090 [pdf]
Fermat Last Theorem And Riemann Hypothesis (5)
In 1637 Fermat wrote: "It is impossible to separate a cube into two cubes, or a biquadrate into two biquadrates, or in general any power higher than the second into powers of like degree: I have discovered a truly marvelous proof, which this margin is too small to contain." (5)
[5482] vixra:1101.0089 [pdf]
Fermat Last Theorem And Riemann Hypothesis (4)
In 1637 Fermat wrote: "It is impossible to separate a cube into two cubes, or a biquadrate into two biquadrates, or in general any power higher than the second into powers of like degree: I have discovered a truly marvelous proof, which this margin is too small to contain." (4)
[5483] vixra:1101.0085 [pdf]
Horizons(Causally Disconnected Regions of Spacetime) and Infinite Doppler Blueshifts in both Alcubierre and Natario Warp Drive Spacetimes
Although both the Alcubierre and Natario spacetimes belong to the same family of solutions of the Einstein field equations of General Relativity and have many resemblances to each other, the energy density distribution in the Alcubierre warp drive is different from the one found in the Natario warp drive. Horizons arise in both spacetimes when approaching superluminal (warp) speeds; however, due to the different distribution of energy density, the Natario warp drive behaves slightly differently from the Alcubierre one. The major differences between the two spacetimes occur when we study the infinite Doppler blueshifts that affect the Alcubierre spacetime but not the Natario one. In the Alcubierre spacetime the negative energy is distributed in a toroidal region above and below the ship, perpendicular to the direction of motion, while in front of the ship the space is empty, with nothing to prevent a photon from reaching the horizon, because in this case the horizon lies in empty space. In the Natario spacetime the energy density is distributed in a spherical shell that covers the entire ship, and a photon sent to the front will be deflected by this shell of negative energy before reaching the horizon, because the horizon also lies inside this shell and not in "empty" space. This shell avoids the occurrence of infinite Doppler blueshifts in the Natario warp drive spacetime. We examine in this work the major differences between the Natario and Alcubierre spacetimes, outlining the repulsive character of the negative energy density. The creation of a warp bubble in either spacetime is beyond the scope of classical General Relativity and will have to wait for the arrival of a real quantum gravity theory that encompasses superluminal non-local quantum entanglement effects, in order to deal with the horizon problem, in addition to the geometrical features of classical General Relativity, and that also provides a way to generate large outputs of negative energy density. Since such a theory is beyond our current scientific capabilities, we discuss at the end of this work an approach that could be performed by our science in the short term to increase our knowledge of the warp drive as a dynamical spacetime.
[5484] vixra:1101.0077 [pdf]
A Simple Flat-Universe Model Recovering Mach Principle
Mach's Principle posits an absolute universe. For example, when you stand on the ground relaxed, your arms fall down naturally. However, if you rotate your body, your arms are lifted up as the rotation becomes faster and faster. Mach's principle is that the matter of the whole universe can affect local dynamic systems; that is, the matter of the whole universe sets up the local absolute reference frames. However, both the theory of general relativity and the Big Bang theory are against the absolute reference frames of Mach's Principle. Here I present a simple model of a flat universe which is consistent with most cosmic laws, and in which Mach's Principle is recovered amazingly.
[5485] vixra:1101.0076 [pdf]
Quantum Gravity Based on Mach Principle and its Solar Application
The starting point of quantum mechanics is the classical algebraic formula connecting energy to momentum: energy is proportional to the squared momentum. As a result, energy and momentum are not treated equally. The wave equation of quantum mechanics (a differential equation) results from replacing the classical energy quantity with a derivative with respect to time and the momentum quantity with a derivative with respect to space. Both replacements have a scale factor, which is the Planck constant. Similar to the classical formula, the wave equation does not treat time and space equally, and the Planck constant is not canceled out from both sides of the equation. That is, the Planck constant remains, and it describes the microscopic world. My theory of gravity is the local bending of background spacetime based on Mach's principle which, as suggested by Einstein, is described by a classical form of second order treating time and space equally. Therefore, the Planck constant is completely canceled out in the wave equation. In other words, the quantization of gravity does not need the Planck constant. This is because gravity obeys the Equivalence Principle. But I keep the scale factor, which describes the hierarchical structure of the local universe as suggested by Laurent Nottale.
[5486] vixra:1101.0072 [pdf]
Smarandache GT-Algebras
We introduce the notion of Smarandache GT-algebras and the notion of Smarandache GT-filters of a Smarandache GT-algebra related to the Tarski algebra, and some related properties are investigated.
[5487] vixra:1101.0060 [pdf]
Krugman's War for Supporting a Long Festering Populist Superficiality
It is suggested that, unlike the long practiced traditional superficial approaches to human issues, like those pursued by Krugman among many others, one should give serious consideration to the frequent fact that individuals coming from the same family background, thus with the same racial, ethnic, cultural, economic and social initial conditions, often diverge immensely in their adult lives across ranges between honest work and crime, extremist and moderate politics, or atheism and religion, among others. Presently, no social, political or economic science deals in the least with that issue.
[5488] vixra:1101.0059 [pdf]
When Coupled with Another Particle, a Charged Particle Produces the Coulomb Potential, Weak Coupling Potential and Yukawa Potential
When a charged particle couples through its field with another particle, then, depending on the dissimilar coupling strengths, it produces three different potential functions, namely the Coulomb potential, the weak coupling potential and the Yukawa potential. The three potentials show a common characteristic: they are all periodic functions in time-space. Under the influence of those potentials, a particle takes periodic motions in space. The author notes that the electric field strength of an isolated charged particle would not show the divergent phenomenon.
[5489] vixra:1101.0058 [pdf]
Particle Characteristics Energy and Action of the Free Electromagnetic Field
When discussing the free electromagnetic field, the author takes the electric field and the magnetic field as two physical events in the time-space coordinate system. In line with the special theory of relativity, the author discusses the particle characteristics of the free electromagnetic field, and finds that the electromagnetic particle and the form of particle-captured energy are in perfect conformity with the Planck quantum assumption. In the ending part of the paper, the author discusses the value of the action exerted by dissimilar particles.
[5490] vixra:1101.0057 [pdf]
Electromagnetic Field Equation and Field Wave Equation
Starting from the Gauss field equation, the author of this paper sets up a group of electromagnetic field functions and a continuity equation which depicts the electric and magnetic fields. This group of equations is in perfect conformity with the Maxwell equations. By using these function groups we derive another group of wave equations for the free electromagnetic field, in which the wave amplitude is a function of the frequency ω and the wave number k.
[5491] vixra:1101.0054 [pdf]
On the Resolution \textit{of} the Azimuthally Symmetric Theory of Gravitation's $\lambda$-Parameters
In Newtonian gravitational physics, as currently understood, the spin of a gravitating body has no effect on the nature of the gravitational field emergent from this gravitating body. This position has been questioned by the Azimuthally Symmetric Theory of Gravitation (ASTG-model; Nyambuya $2010$). From the ASTG-model -- a theory resulting from the consideration of the azimuthally symmetric solutions of the well known and well accepted Poisson-Laplace equation for gravitation -- it has been argued that it is possible to explain the unexpected perihelion shift of Solar planetary orbits. However, as it stands at present, the ASTG-model suffers from the apparent diabolic defect that there are unknown parameters ($\lambda$'s) in the theory that up to now have not been adequately deduced from theory. If this defect is not taken care of, it would consume the theory altogether, bringing it to a complete standstill, to nothing but an obsolete theory. Effort in resolving this defect was made in the genesis reading of the theory, \textit{i.e.} in \cite{nyambuya10a}. That initial effort is not complete. In this short reading, we present what we believe is a significant improvement to the resolution of this problem. If this effort proves itself correct, then the ASTG-model is set on a sure pedestal to make predictions without having to rely on observations to deduce these unknown parameters. Other than resolving the $\lambda$-parameter problem, this reading is designed to serve as an exposition of the ASTG-model as it currently stands.
[5492] vixra:1101.0053 [pdf]
New Human Civilization Glittering Under Galaxy "Snowflakes"
What is human life? In the material sense, it is a material structure composed of oxygen, carbon and other atoms. From the biological point of view, it is a kind of advanced animal that understands the natural world and recognizes and creates products. Over the last tens of thousands of years, human beings created languages and tools, and achieved a near-perfect understanding of the microscopic world of elementary particles. However, in the 21st century, mankind has experienced irreversible crises such as environmental pollution, erratic weather, food shortage, and population explosion. Yet the crisis is also an opportunity. In the complexity of this world, a Chinese scientist opened a window for the understanding of human beings ourselves as well as the universe. The Earth is the direct environment for the survival of human life, but the root cause of human creation is the Milky Way. Surprisingly enough, the life of galaxies is determined by a cubic algebraic equation. Therefore, the general public all have the potential to understand the lives of galaxies. Coincidentally, with the human invention of computers, the general public has the potential to run the simple computer program (see the Appendix of this paper) to generate and study the galaxy snowflake chart (a simple graph expressing the internal structure of galaxies). Therefore, we see the hope of mankind: a harmonious society of new civilization, administered by the general public rather than controlled by a few elites, will inevitably be born!
[5493] vixra:1101.0045 [pdf]
How Logical Foundation of Quantum Theory Derives from Foundational Anomalies in Pure Mathematics
The logical foundation of quantum theory is considered. I claim that quantum theory correctly represents Nature when mathematical physics embraces, and indeed features, logical anomalies inherent in pure mathematics. This approach links undecidability in arithmetic with the logic of quantum experiments. The undecidability occupies an algebraic environment which is the missing foundation for the 3-valued logic predicted by Hans Reichenbach, shown by him to resolve `causal anomalies' of quantum mechanics, such as: inconsistency between prepared and measured states, complementarity between pairs of observables, and the EPR-paradox of action at a distance. Arithmetic basic to mathematical physics is presented formally as a logical system consisting of axioms and propositions. Of special interest are all propositions asserting the existence of particular numbers. All numbers satisfying the axioms permeate the arithmetic indistinguishably, but these logically partition into two distinct sets: numbers whose existence the axioms determine by proof, and numbers whose existence the axioms cannot determine, being neither provable nor negatable. Failure of mathematical physics to incorporate this logical distinction is seen as the reason for quantum theory being logically at odds with quantum experiments. Nature is interpreted as having rules isomorphic to the abovementioned axioms, with these governing arithmetical combinations of necessary and possible values or effects in experiments. Soundness and Completeness theorems from mathematical logic emerge as profoundly fundamental principles for quantum theory, making good intuitive sense of the subject.
[5494] vixra:1101.0043 [pdf]
A Simple Presentation of Derivation of Harmonic Oscillator and a Different Derivation of the Pythagorean Theorem
Sergio Rojas wrote an article about the derivation of the classical harmonic oscillator. His derivation is clearer for students. Some sentences are added here which further visualize his derivation. A derivation of the Pythagorean theorem from the kinetic energy law is added. Such derivations are a way to improve the visualization of the fundamental theories of physics, and of their derivations and problems.
[5495] vixra:1101.0041 [pdf]
A Clifford Cl(5,C) Unified Gauge Field Theory of Conformal Gravity, Maxwell and U(4) x U(4) Yang-Mills in 4D
A Clifford Cl(5,C) Unified Gauge Field Theory of Conformal Gravity, Maxwell and U(4)xU(4) Yang-Mills in 4D is rigorously presented extending our results in prior work. The Cl(5,C) = Cl(4,C)⊕Cl(4,C) algebraic structure of the Conformal Gravity, Maxwell and U(4)xU(4) Yang-Mills unification program advanced in this work is that the group structure given by the direct products U(2, 2)xU(4)xU(4) = [SU(2, 2)]<sub>spacetime</sub>x [U(1) x U(4) x U(4)]<sub>internal</sub> is ultimately tied down to four-dimensions and does not violate the Coleman-Mandula theorem because the spacetime symmetries (conformal group SU(2, 2) in the absence of a mass gap, Poincare group when there is mass gap) do not mix with the internal symmetries. Similar considerations apply to the supersymmetric case when the symmetry group structure is given by the direct product of the superconformal group (in the absence of a mass gap) with an internal symmetry group so that the Haag-Lopuszanski-Sohnius theorem is not violated. A generalization of the de Sitter and Anti de Sitter gravitational theories based on the gauging of the Cl(4, 1,R),Cl(3, 2,R) algebras follows. We conclude with a few remarks about the complex extensions of the Metric Affine theories of Gravity (MAG) based on GL(4,C) x<sub>s</sub> C<sup>4</sup>, the realizations of twistors and the N = 1 superconformal su(2, 2|1) algebra purely in terms of Clifford algebras and their plausible role in Witten's formulation of perturbative N = 4 super Yang-Mills theory in terms of twistor-string variables.
[5496] vixra:1101.0037 [pdf]
Fine Structure Constant α ~ 1/137.036 and Blackbody Radiation Constant α<sub>R</sub> ~ 1/157.555
The fine structure constant α = e<sup>2</sup>/hc ~ 1/137.036 and the blackbody radiation constant α<sub>R</sub> = e<sup>2</sup>(a<sub>R</sub>/k<sup>4</sup><sub>B</sub>)<sup>1/3</sup> ~ 1/157.555 are linked by prime numbers. The blackbody radiation constant is a new method to measure the fine structure constant. It also links the fine structure constant to the Boltzmann constant.
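The quoted value 1/157.555 can be reproduced from the stated definition. Assuming the radiation constant takes its standard form a<sub>R</sub> = π<sup>2</sup>k<sup>4</sup><sub>B</sub>/(15ℏ<sup>3</sup>c<sup>3</sup>) and that e<sup>2</sup> is the Gaussian-units combination with e<sup>2</sup>/(ℏc) = α, the definition collapses to α<sub>R</sub> = α(π<sup>2</sup>/15)<sup>1/3</sup>, so the Boltzmann and Planck constants cancel:

```python
import math

alpha_inv = 137.035999   # inverse fine structure constant (CODATA-style)

# With a_R = pi**2 * k_B**4 / (15 * hbar**3 * c**3), the definition
# alpha_R = e**2 * (a_R / k_B**4)**(1/3) reduces to alpha * (pi**2/15)**(1/3),
# i.e. 1/alpha_R = (1/alpha) * (15/pi**2)**(1/3).
alpha_R_inv = alpha_inv * (15 / math.pi**2) ** (1.0 / 3.0)

assert abs(alpha_R_inv - 157.555) < 0.01
```

This makes the relation between the two constants a pure number, (π<sup>2</sup>/15)<sup>1/3</sup> ≈ 0.8697, independent of unit system.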
[5497] vixra:1101.0036 [pdf]
Open Letter to P Krugman
It is suggested that psychological aspects of the behaviour of the general population with respect to economics - advocated by Krugman, following the economic crises of 2008 - should be extended to economists as well, in order to avoid the excessive and mostly senseless polarization of economists in opposing ideological camps, a polarization still so dominant nowadays.
[5498] vixra:1101.0026 [pdf]
Discrete Time and Kleinian Structures in Duality Between Spacetime and Particle Physics
The interplay between continuous and discrete structures results in a duality between the moduli space for black hole types and AdS<sub>7</sub> spacetime. The 3 and 4 Q-bit structures of quantum black holes are equivalent to the conformal completion of AdS.
[5499] vixra:1101.0024 [pdf]
How to Use the Cosmological Schwinger Principle for Energy Flux, Entropy, and "Atoms of Space-Time" to Create a Thermodynamic Space-Time and Multiverse
We make explicit an idea by Padmanabhan in DICE 2010 [1] as to finding "atoms of space-time" permitting a thermodynamic treatment of emergent structure, similar to Gibbs' treatment of statistical physics. That is, an ensemble of gravitons is used to give an "atom" of space-time congruent with relic gravitational waves. The idea is to reduce the number of independent variables to get a simple emergent space-time structure of entropy. An electric field, based upon the cosmological Schwinger principle, is linked to relic heat flux, with entropy production tied in with candidate inflaton potentials. The effective electric field links with Schwinger's 1951 result of an E field leading to pairs of e+e- charges nucleated in a space-time volume V·t. Note that in most inflationary models, the assumption is of a magnetic field, not an electric field. An electric field permits a kink-anti-kink construction of an emergent structure, which includes Glinka's recent pioneering approach to a Multiverse. Also, an E field allows for an emergent relic particle frequency range between one and 100 GHz. The novel contribution is a relic E field, instead of a B field, in relic space-time "atom" formation and vacuum nucleation of the same.
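Schwinger's 1951 result invoked here comes with a well-known critical field, E<sub>c</sub> = m<sup>2</sup>c<sup>3</sup>/(eℏ) ≈ 1.3 × 10<sup>18</sup> V/m, above which vacuum e+e- pair creation ceases to be exponentially suppressed; the suppression factor for a field E goes as exp(−πE<sub>c</sub>/E). A quick check with standard constants (an illustration of the background result, not part of the paper):

```python
import math

# Standard constants (SI)
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s

# Schwinger critical field for e+e- pair creation from the vacuum
E_crit = m_e**2 * c**3 / (e * hbar)   # V/m
assert 1.30e18 < E_crit < 1.35e18     # the textbook ~1.3e18 V/m value

def suppression(E):
    """Leading exponential factor in the Schwinger pair-production rate."""
    return math.exp(-math.pi * E_crit / E)

# Far below the critical field the process is enormously suppressed
assert 0.0 < suppression(1e17) < 1e-15
```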
[5500] vixra:1101.0015 [pdf]
Introducing Distance and Measurement in General Relativity: Changes for the Standard Tests and the Cosmological Large-Scale
Relativistic motion in the gravitational field of a massive body is governed by the external metric of a spherically symmetric extended object. Consequently, any solution for the point-mass is inadequate for the treatment of such motions since it pertains to a fictitious object. I therefore develop herein the physics of the standard tests of General Relativity by means of the generalised solution for the field external to a sphere of incompressible homogeneous fluid.
[5501] vixra:1101.0014 [pdf]
On the General Solution to Einstein's Vacuum Field for the Point-Mass When λ ≠ 0 and Its Consequences for Relativistic Cosmology
It is generally alleged that Einstein's theory leads to a finite but unbounded universe. This allegation stems from an incorrect analysis of the metric for the point-mass when λ ≠ 0. The standard analysis has incorrectly assumed that the variable r denotes a radius in the gravitational field. Since r is in fact nothing more than a real-valued parameter for the actual radial quantities in the gravitational field, the standard interpretation is erroneous. Moreover, the true radial quantities lead inescapably to λ = 0 so that, cosmologically, Einstein's theory predicts an infinite, static, empty universe.
[5502] vixra:1101.0013 [pdf]
The Kruskal-Szekeres "Extension": Counter-Examples
The Kruskal-Szekeres "coordinates" are said to "extend" the so-called "Schwarzschild solution", to remove an alleged "coordinate singularity" at the event horizon of a black hole at r = 2m, leaving an infinitely dense point-mass singularity at "the origin" r = 0. However, the assumption that the point at the centre of spherical symmetry of the "Schwarzschild solution" is at "the origin" r = 0 is erroneous, and so the Kruskal-Szekeres "extension" is invalid; demonstrated herein by simple counter-examples.
[5503] vixra:1101.0012 [pdf]
On the Regge-Wheeler Tortoise and the Kruskal-Szekeres Coordinates
The Regge-Wheeler tortoise "coordinate" and the Kruskal-Szekeres "extension" are built upon a latent set of invalid assumptions. Consequently, they have led to fallacious conclusions about Einstein's gravitational field. The persistent unjustified claims made for the aforesaid alleged coordinates are not sustained by mathematical rigour. They must therefore be discarded.
[5504] vixra:1101.0011 [pdf]
On the Vacuum Field of a Sphere of Incompressible Fluid
The vacuum field of the point-mass is an unrealistic idealization which does not occur in Nature - Nature does not make material points. A more realistic model must therefore encompass the extended nature of a real object. This problem has also been solved for a particular case by K. Schwarzschild in his neglected paper on the gravitational field of a sphere of incompressible fluid. I revive Schwarzschild's solution and generalise it. The black hole is necessarily precluded. A body cannot undergo gravitational collapse to a material point.
[5505] vixra:1101.0010 [pdf]
On the Generalisation of Kepler's 3rd Law for the Vacuum Field of the Point-Mass
I derive herein a general form of Kepler's 3rd Law for the general solution to Einstein's vacuum field. I also obtain stable orbits for photons in all the configurations of the point-mass. Contrary to the accepted theory, Kepler's 3rd Law is modified by General Relativity and leads to a finite angular velocity as the proper radius of the orbit goes down to zero, without the formation of a black hole. Finally, I generalise the expression for the potential function of the general solution for the point-mass in the weak field.
[5506] vixra:1101.0008 [pdf]
On Isotropic Coordinates and Einstein's Gravitational Field
It is proved herein that the metric in the so-called "isotropic coordinates" for Einstein's gravitational field is a particular case of an infinite class of equivalent metrics. Furthermore, the usual interpretation of the coordinates is erroneous, because in the usual form given in the literature, the alleged coordinate length (see paper) is not a coordinate length. This arises from the fact that the geometrical relations between the components of the metric tensor are invariant and therefore bear the same relations in the isotropic system as those of the metric in standard Schwarzschild coordinates.
[5507] vixra:1101.0007 [pdf]
The Black Hole Catastrophe And the Collapse of Spacetime
The notion of black holes voraciously gobbling up matter, twisting spacetime into contortions that trap light, stretching the unwary into long spaghetti-like strands as they fall inward to ultimately collide and merge with an infinitely dense point-mass singularity, has become a mantra of the astrophysical community, so much so that even primary-school children know about the sinister black hole. There are almost daily reports of scientists claiming that they have again found black holes here and there. It is asserted that black holes range in size from micro to mini, to intermediate and on up through to supermassive behemoths. Black holes are spoken of as scientific facts and it is routinely claimed that they have been detected at the centres of galaxies. Images of black holes having their wicked ways with surrounding matter are routinely included with reports of them. Some physicists even claim that black holes will be created in particle accelerators, such as the Large Hadron Collider, potentially able to swallow the Earth. Despite the assertions of the astronomers and astrophysicists, nobody has ever found a black hole, anywhere, let alone "imaged" one. The pictures adduced to convince are actually either artistic impressions (i.e. drawings) or photos of otherwise unidentified objects imaged by telescopes and merely asserted, ad hoc, to be due to black holes.
[5508] vixra:1101.0006 [pdf]
On Certain Conceptual Anomalies in Einstein's Theory of Relativity
There are a number of conceptual anomalies occurring in the standard exposition of Einstein's Theory of Relativity. These anomalies relate to issues in both mathematics and physics and penetrate to the very heart of Einstein's theory. This paper reveals and amplifies a few such anomalies, including the fact that Einstein's field equations for the so-called static vacuum configuration, R<sub>μν</sub> = 0, violate his Principle of Equivalence and are therefore erroneous. This has a direct bearing on the usual concept of conservation of energy for the gravitational field and the conventional formulation for localisation of energy using Einstein's pseudo-tensor. Misconceptions as to the relationship between Minkowski spacetime and Special Relativity are also discussed, along with their relationships to the pseudo-Riemannian metric manifold of Einstein's gravitational field, and their fundamental geometric structures pertaining to spherical symmetry.
[5509] vixra:1101.0005 [pdf]
Gravitation on a Spherically Symmetric Metric Manifold
The usual interpretations of solutions for Einstein's gravitational field satisfying the spherically symmetric condition contain anomalies that are not mathematically permissible. It is shown herein that the usual solutions must be modified to account for the intrinsic geometry associated with the relevant line elements.
[5510] vixra:1101.0004 [pdf]
A Brief History of Black Holes
Neither the layman nor the specialist, in general, has any knowledge of the historical circumstances underlying the genesis of the idea of the Black Hole. Essentially, almost all and sundry simply take for granted the unsubstantiated allegations of some ostentatious minority of the relativists. Unfortunately, that minority has been rather careless with the truth and is quite averse to having its claims corrected, notwithstanding the documentary evidence on the historical record. Furthermore, not a few of that vainglorious and disingenuous coterie, particularly amongst those of some notoriety, attempt to dismiss the testimony of the literature with contempt, and even deliberate falsehoods, claiming that history is of no importance. The historical record clearly demonstrates that the Black Hole has been conjured up by a combination of confusion, superstition and ineptitude, and is sustained by widespread suppression of facts, both physical and theoretical. The following essay provides a brief but accurate account of events, verifiable by reference to the original papers, by which the scandalous manipulation of both scientific and public opinion is revealed.
[5511] vixra:1101.0003 [pdf]
On the Geometry of the General Solution for the Vacuum Field of the Point-Mass
The black hole, which arises solely from an incorrect analysis of the Hilbert solution, is based upon a misunderstanding of the significance of the coordinate radius r. This quantity is neither a coordinate nor a radius in the gravitational field and cannot of itself be used directly to determine features of the field from its metric. The appropriate quantities on the metric for the gravitational field are the proper radius and the curvature radius, both of which are functions of r. The variable r is actually a Euclidean parameter which is mapped to non-Euclidean quantities describing the gravitational field, namely, the proper radius and the curvature radius.
[5512] vixra:1101.0002 [pdf]
On the Ramifications of the Schwarzschild Space-Time Metric
In a previous paper I derived the general solution for the simple point-mass in a true Schwarzschild space. I extend that solution to the point-charge, the rotating point-mass, and the rotating point-charge, culminating in a single expression for the general solution for the point-mass in all its configurations when Λ = 0. The general exact solution is proved regular everywhere except at the arbitrary location of the source of the gravitational field. In no case does the black hole manifest. The conventional solutions giving rise to various black holes are shown to be inconsistent with General Relativity.
[5513] vixra:1101.0001 [pdf]
The Stability of Electron Orbital Shells based on a Model of the Riemann-Zeta Function
It is shown that the atomic number Z is prime at the beginning of each s<sup>1</sup>, p<sup>1</sup>, d<sup>1</sup>, and f<sup>1</sup> energy level of electrons, with some fluctuation in the actinide and lanthanide series. The periodic prime-number boundary of s<sup>1</sup>, p<sup>1</sup>, d<sup>1</sup>, and f<sup>1</sup> is postulated to occur because of the stability of Schrödinger's wave equation due to a fundamental relationship with the Riemann zeta function.
[5514] vixra:1012.0046 [pdf]
Black Hole Complementarity as a Condition on Pre and Post Selected String States
The holographic principle of black holes tells us that the field-theoretic information of strings on the event horizon is completely equivalent to the field-theoretic information in the spacetime of one dimension larger outside. This physics is observed in a frame stationary with respect to the black hole. The question naturally arises: what physics is accessed by an observer falling through the event horizon on an inertial frame? This paper examines this and demonstrates a duality between the two perspectives. This question is important for a black hole small enough to exhibit fluctuations comparable to its scale. A sufficiently small quantum black hole will be composed of strings in a superposition of interior and exterior configurations or states.
[5515] vixra:1012.0038 [pdf]
Units of a Metric Tensor and Physical Interpretation of the Gravitational Constant.
It is shown that writing the metric tensor in dimensionful form is mathematically more appropriate and allows a simple interpretation of the gravitational constant as an emergent parameter. It is also shown that the value of the gravitational constant is due to the contribution of all the particles in the Universe. Newton's law of gravitation is derived from atomic considerations only. Dirac's large number is related to the number of particles in the Universe.
[5516] vixra:1012.0028 [pdf]
Intuitionistic Fuzzy Γ-Ideals of Γ-LA-Semigroups.
We consider the intuitionistic fuzzification of the concept of several Γ-ideals in a Γ-LA-semigroup S, and investigate some related properties of such Γ-ideals. We also prove in this paper that the set of all intuitionistic fuzzy left (right) Γ-ideals of S becomes an LA-semigroup. We prove that in a Γ-LA-band, intuitionistic fuzzy right and left Γ-ideals coincide.
[5517] vixra:1012.0026 [pdf]
The Equivalence Between Gauge and Non-Gauge Abelian Models
This work is intended to establish the equivalence between gauge and non-gauge abelian models. Following a technique proposed by Harada and Tsutsui, it is shown that the Proca and chiral Schwinger models may be equivalent to corresponding gauge-invariant ones. Finally, it is shown that a gauge-invariant version of the chiral Schwinger model, after the fermions are integrated out, can be identified with the 2-D Stueckelberg model without the gauge-fixing term.
[5518] vixra:1012.0025 [pdf]
Path-Integral Gauge Invariant Mapping: From Abelian Gauge Anomalies to the Generalized Stueckelberg Mechanism
Reviewing a path-integral procedure for recovering gauge invariance from anomalous effective actions developed by Harada and Tsutsui in the 1980s, it is shown that there is another way to achieve gauge symmetry, besides the one presented by the authors, which may be anomaly-free, preserving current conservation. It is also shown that the generalization of the Harada-Tsutsui technique to other models which are not anomalous but do not exhibit gauge invariance allows the identification of the gauge-invariant formulation of the Proca model with the Stueckelberg model, leading to the interpretation of the gauge-invariant mapping as a generalization of the Stueckelberg mechanism.
[5519] vixra:1012.0023 [pdf]
On the Failure of Particle Dark Matter Experiments to Yield Positive Results
It is argued that the failure of particle dark matter experiments to verify its existence may be attributable to a non-Planckian "action," which renders dark matter's behavior contradictory to the consequences of quantum mechanics as it applies to luminous matter. It is pointed out that such a possibility cannot be convincingly dismissed in the absence of a physical law that prohibits an elementary "action" smaller than Planck's. It is further noted that no purely dark matter measurement of Planck's constant exists. Finally, the possibility of a non-Planckian cold dark matter particle is explored, and found to be consistent with recent astronomical observations.
[5520] vixra:1012.0020 [pdf]
Non-Cartesian Systems : an Open Problem
The following open problem is presented and motivated: Are there physical systems whose state spaces do not compose according to either the Cartesian product, as classical systems do, or the usual tensor product, as quantum systems do?
[5521] vixra:1012.0019 [pdf]
A Second Measurement Problem ?
Within quantum measurement there is a sharp difference in the dynamics between the case when the eigenstate of the prepared quantum system is different from any of those of the measuring device and, on the other hand, the case when it is the same as one of those of the measuring device. It is argued that here one may face a "second measurement problem".
[5522] vixra:1012.0018 [pdf]
On the General Solution to Einstein's Vacuum Field and Its Implications for Relativistic Degeneracy
The general solution to Einstein's vacuum field equations for the point-mass in all its configurations must be determined in such a way as to provide a means by which an infinite sequence of particular solutions can be readily constructed. It is from such a solution that the underlying geometry of Einstein's universe can be rightly explored. I report here on the determination of the general solution and its consequences for the theoretical basis of relativistic degeneracy, i.e. gravitational collapse and the black hole.
[5523] vixra:1012.0014 [pdf]
Four Departures in Mathematics and Physics
Much of Mathematics, and therefore Physics as well, has been limited by four rather consequential restrictions. Two of them are ancient taboos, one is an ancient bondage no longer felt as such, and the fourth is a surprising omission in Algebra. The paper brings these four restrictions to the attention of those interested, as well as the fact that each of them can by now be overcome, even if the ways of doing so are hardly yet known.
[5524] vixra:1012.0006 [pdf]
Reasons for Relativistic Mass and Its Influence on Duff's Claims that Dimensionful Quantities Are Physically Nonexistent
The main argument against relativistic mass is that it does not tell us anything more than the total energy tells us, although it is not incorrect. It is shown that this is not true, because new aspects of special relativity (SR) can be noticed. One reason for this definition is to show a relation between time dilation and relativistic mass. This relation can be further used to present a connection between space-time and matter more clearly, and to show that space-time does not exist without matter. This means a simpler presentation than is shown with Einstein's general covariance. Therefore, this opposes the view that SR is only a theory of space-time geometry. The phenomenon of relativistic mass increasing with speed can be used for a gradual transition from Newtonian mechanics to SR. The postulates which are used for the definition of SR are therefore still clearer, and the overall derivation of the Lorentz transformation is clearer. Such a derivation also gives a more realistic example for the debate regarding Duff's claims. It also gives some counter-arguments for some details of the debate about the physical nonexistence of dimensionful units and quantities. These details are why three elementary units exist and why a direct physical measurement is not the only possibility for physical existence. Such a derivation thus shows that relativistic mass is presented to us differently than relativistic energy.
[5525] vixra:1012.0002 [pdf]
Some Comments on Projective Quadrics Subordinate to Pseudo-Hermitian Spaces
We study in some detail the structure of the projective quadric Q' obtained by taking the quotient of the isotropic cone in a standard pseudo-hermitian space H<sub>p,q</sub> with respect to the positive real numbers R<sup>+</sup> and, further, by taking the quotient ~Q = Q'/U(1). The case of signature (1,1) serves as an illustration. ~Q is studied as a compactification of R×H<sub>p-1,q-1</sub>.
[5526] vixra:1011.0077 [pdf]
On "Discovering and Proving that π is Irrational"
We discuss the logical fallacies in an article appeared in The American Mathematical Monthly [6], and present the historical origin and motivation of the simple proofs of the irrationality of π.
[5527] vixra:1011.0068 [pdf]
Nonassociative Octonionic Ternary Gauge Field Theories
A novel (to our knowledge) nonassociative and noncommutative octonionic ternary gauge field theory is explicitly constructed, based on a ternary-bracket structure involving the octonion algebra. The ternary bracket was defined earlier by Yamazaki. The field strengths F<sub>μν</sub> are given in terms of the 3-bracket [B<sub>μ</sub>, B<sub>ν</sub>, Φ] involving an auxiliary octonionic-valued scalar field Φ = Φ<sup>a</sup>e<sub>a</sub> which plays the role of a "coupling" function. In the concluding remarks, a list of relevant future investigations is briefly outlined.
[5528] vixra:1011.0062 [pdf]
The Black Hole Catastrophe: A Reply to J. J. Sharples
A recent Letter to the Editor (Sharples J. J., Coordinate transformations and metric extension: a rebuttal to the relativistic claims of Stephen J. Crothers, Progress in Physics, v.1, 2010) has analysed a number of my publications in Progress in Physics. There are serious problems with this treatment which should be brought to the attention of the Journal's readership. Dr. Sharples has committed errors in both mathematics and physics. For instance, his notion that r = 0 in the so-called "Schwarzschild solution" marks the point at the centre of the related manifold is false, as is his related claim that Schwarzschild's actual solution describes a manifold that is extendible. His post hoc introduction of Newtonian concepts and related mathematical expressions into Schwarzschild's actual solution is invalid; for instance, the insertion of Newtonian two-body relations into what is alleged to be a one-body problem. Each of the objections is treated in turn and its invalidity fully demonstrated. Black hole theory is riddled with contradictions. This article provides definitive proof that black holes do not exist.
[5529] vixra:1011.0060 [pdf]
New Lewis Structures Through the Application of the Hypertorus Electron Model
The hypertorus electron model is applied to the chemical bond. As a consequence, the bond topology can be determined. A linear correlation is found between the normalized bond area and the bond energy. The normalization number is a whole number, interpreted as Lewis's electron pair. A new electron distribution in the molecule follows. This discovery prompts a review of the chemical bond as it is understood in chemistry and physics.
[5530] vixra:1011.0057 [pdf]
Is an Algebraic Cubic Equation the Primitive Instinct Beyond Electromagnetic and Nuclear World?
Everyone lives his or her life instinctively. Does instinct originate from the natural world? If instinct is a rational process, is the natural world rational? Unfortunately, people have not found any rational principle behind the natural world. Because human activities are realized directly through the entropy-increasing electromagnetic and nuclear forces, it is difficult for people to recognize the principle. Compared to the large-scale structure of galaxies, human bodies and their immediate environment are the "microscopic" world. The electromagnetic and nuclear forces which rule this world, however, disappear in the formation of large-scale galaxy structures. Similarly, they disappear in the formation of the solar system. My previous papers found much evidence that galaxies are rational. This paper shows that large-scale galaxy structure must originate from an algebraic cubic equation.
[5531] vixra:1011.0054 [pdf]
Four Comments on "The Road to Reality" by R Penrose
Four comments are presented on the book of Roger Penrose entitled "The Road to Reality, A Complete Guide to the Laws of the Universe". The first comment answers a concern raised in the book. The last three point to important omissions in the book.
[5532] vixra:1011.0053 [pdf]
Champs, Vide, et Univers Miroir
This work is the French translation of the book "Fields, Vacuum and the Mirror Universe", originally published in English in 2009 by the physicists Larissa Borissova and Dmitri Rabounski, enriched with new expositions. The book proposes a new physico-mathematical analysis, developing a theory of observables within the framework of general relativity. In their celebrated reference book "Théorie des Champs" (The Classical Theory of Fields), Lev Landau and Evgeny Lifshitz described very completely the motion of particles in electromagnetic and gravitational fields. The methods of covariant analysis in use since the mid-1930s did not yet take into account the concepts of physically observable quantities (chronologically invariant quantities, or more precisely so-called "chronometric" quantities) of general relativity. The authors therefore wished to insist on the necessity of extending this mathematical perspective to the existing physical theory by applying it to the motion of particles moving in electromagnetic and gravitational fields. Moreover, the study of the motion of a particle endowed with an intrinsic rotational moment was not undertaken in this context by Landau and Lifshitz; a separate exposition of the book is therefore devoted entirely to this particular type of motion. The authors have also added a chapter redefining the elements of tensor algebra and analysis within the framework of chronometric invariants. The whole work thus presents itself as a further contribution to the "Théorie des Champs".
[5533] vixra:1011.0039 [pdf]
Some Orbital and Other Properties of the 'Special Gravitating Annulus'
Our obtaining the analytical equations for the gravitation of a particular type of mathematical annulus, which we called a 'Special Gravitating Annulus' (SGA), greatly facilitates studying its orbital properties by computer programming. This includes isomorphism, periodic and chaotic polar orbits, and orbits in three dimensions. We provide further insights into the gravitational properties of this annulus and describe our computer algorithms and programs. We study a number of periodic orbits, giving them names to aid identification. 'Ellipses extraordinaires', which are bisected by the annulus, have no gravitating matter at either focus and represent a fundamental departure from the normal association of elliptical orbits with Keplerian motion. We describe how we came across this type of orbit and the analysis we performed. We present the simultaneous differential equations of motion of 'ellipses extraordinaires' and other orbits as a mathematical challenge. The 'St. Louis Gateway Arch' orbit contains two 'instantaneous static points' (ISP). Polar elliptical orbits can wander considerably without tending to form other kinds of orbit. If this type of orbit is favoured, then this gives a similarity to spiral galaxies containing polar orbiting material. Annular oscillatory orbits and rotating polar elliptical orbits are computed in isometric projection. A 'daisy' orbit is computed in stereo-isometric projection. The singularity at the centre of the SGA is discussed in relation to mechanics and computing, and it appears mathematically different from a black hole. In the Appendix, we prove by a mathematical method that a thin plane self-gravitating Newtonian annulus, free from external influence, exhibiting radial gravitation that varies inversely with the radius in the annular plane, must have an area mass density which also varies inversely with the radius, and that this is the only exact solution.
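The kind of in-plane orbit computation this abstract describes is easy to sketch. The snippet below (an illustration, not the authors' program) integrates a planar test-particle orbit in a central field whose radial attraction varies inversely with radius, the in-plane law derived in the Appendix; the coupling constant k, the time step, and the initial conditions are arbitrary choices made here for the example.

```python
# Sketch: leapfrog integration of a planar orbit under a radial acceleration
# of magnitude k/r (attraction inversely proportional to radius).
# For this force law the potential is k*ln(r), so total energy should be
# conserved, which gives a built-in accuracy check.
import math

def accel(x, y, k=1.0):
    """Acceleration components of a = -(k/r) r_hat, i.e. -k*x/r^2, -k*y/r^2."""
    r2 = x * x + y * y
    return -k * x / r2, -k * y / r2

def integrate(x, y, vx, vy, dt=1e-3, steps=50_000, k=1.0):
    """Kick-drift-kick leapfrog; returns the final state (x, y, vx, vy)."""
    ax, ay = accel(x, y, k)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y, k)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return x, y, vx, vy

def energy(x, y, vx, vy, k=1.0):
    """Conserved energy for the k/r force: kinetic term plus k*ln(r)."""
    r = math.hypot(x, y)
    return 0.5 * (vx * vx + vy * vy) + k * math.log(r)

# Start on the x-axis with a purely tangential velocity: a bounded,
# precessing non-Keplerian orbit (the ln(r) potential confines it).
state0 = (1.0, 0.0, 0.0, 0.8)
e0 = energy(*state0)
state1 = integrate(*state0)
print(f"energy drift after integration: {abs(energy(*state1) - e0):.2e}")
```

Because leapfrog is symplectic, the energy drift stays tiny over many orbits, which is the practical check one would use before trusting exotic orbit shapes such as the 'ellipses extraordinaires' described above.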
[5534] vixra:1011.0025 [pdf]
The Parameters of S. Marinov's Curve (Evidence for My Three-Dimensional Time and My New Wave Formula)
There are various physics phenomena which can find a simple explanation in linking gravity and electromagnetism. Einstein's relativity can simply explain only the mass, because he considered time as a scalar rather than a vector. The intention of this paper is to propose a new point of view: treating time as a three-dimensional vector, finding a three-vector-valued formula from the three-dimensional space-time (curvature) formula, and finding a new symmetry in the plane for a wave equation to substitute for Maxwell's symmetrical wave, with only E and only B rather than E and B linked. This linking is in error; in fact they are the sum of two effects. The electromagnetic wave in space has, as is well known, three energy components: the B field, the E field, and the wavelength (frequency). This energy is acknowledged, but we must see the wave as an elastic chain of single wavelets (a string of individual wavelets).
[5535] vixra:1011.0016 [pdf]
In Support of Comte-Sponville
When I was a small boy, about two years old, on occasion, I had some nightmares. One morning I mentioned that to my Mother, and she replied, in a most natural matter-of-course manner, as if it had been about a simple and trivial issue, that next time, when I would again have such a nightmare, I should simply remember that I was dreaming, that it would all only be in a dream, and then I should just wake up ... Since my Mother's reply came so instantly, smoothly, and without the least emotion, let alone dramatization, I simply took it as such ... And next time, when a nightmare came upon me during my dream, I simply did, and yes, I managed rather naturally to do, what my Mother had told me to do ... And never ever would I again have nightmares for more than a mere moment, before I would manage to wake up from them ... Later, nightmares would, so to say, even avoid me completely...
[5536] vixra:1011.0011 [pdf]
Re-Identification of the Many-World Background of Special Relativity as Four-World Background. Part II.
The re-identification of the many-world background of the special theory of relativity (SR) as a four-world background in the first part of this paper (instead of the two-world background isolated in the initial papers) is concluded in this second part. The flat two-dimensional proper intrinsic spacetime, which underlies the flat four-dimensional proper spacetime in each universe, introduced as an ansatz in the initial paper, is derived formally within the four-world picture. The identical magnitudes of masses, identical sizes and identical shapes of the four members of every quartet of symmetry-partner particles or objects in the four universes are shown. The immutability of Lorentz invariance on the flat spacetime of SR in each of the four universes is shown to arise as a consequence of the perfect symmetry of relative motion at all times among the four members of every quartet of symmetry-partner particles and objects in the four universes. This perfect symmetry of relative motion at all times, coupled with the identical magnitudes of masses, identical sizes and identical shapes of the members of every quartet of symmetry-partner particles and objects in the four universes, guarantees a perfect symmetry of state among the universes.
[5537] vixra:1011.0010 [pdf]
Re-Identification of the Many-World Background of Special Relativity as Four-World Background. Part I.
The pair of co-existing symmetrical universes, referred to as our (or positive) universe and the negative universe, isolated and shown to constitute a two-world background for the special theory of relativity (SR) in previous papers, encompasses another pair of symmetrical universes, referred to as the positive time-universe and the negative time-universe. The Euclidean 3-spaces (in the context of SR) of the positive time-universe and the negative time-universe constitute the time dimensions of our (or positive) universe and the negative universe respectively, relative to observers in the Euclidean 3-spaces of our universe and the negative universe; conversely, the Euclidean 3-spaces of our universe and the negative universe constitute the time dimensions of the positive time-universe and the negative time-universe respectively, relative to observers in the Euclidean 3-spaces of the positive time-universe and the negative time-universe. Thus time is a secondary concept derived from the concept of space according to this paper. The one-dimensional particle or object in the time dimension corresponding to every three-dimensional particle or object in 3-space in our universe is a three-dimensional particle or object in 3-space in the positive time-universe. Perfect symmetry of natural laws is established among the resulting four universes, and two outstanding issues about the new spacetime/intrinsic-spacetime geometrical representation of the Lorentz transformation/intrinsic Lorentz transformation in the two-world picture, developed in the previous papers, are resolved within the larger four-world picture in this first part of this paper.
[5538] vixra:1011.0009 [pdf]
Generalized Uncertainty Principle
Quantum theory brought an irreducible lawlessness into physics. This is accompanied by a lack of specification of the state of a system. We cannot measure states, even if they ever existed; we can measure only the transition from one state into another. We deduce this lack of determination of state mathematically, and thus provide a formalism for the maximum precision of the determination of mixed states. The results thus obtained show consistency with Heisenberg's uncertainty relations.
[5539] vixra:1011.0007 [pdf]
A New Face of the Multiverse Hypothesis: Bosonic-Phononic Inflaton Quantum Universes
The boson-phonon duality due to inflaton energy is presented in the context of quantum universes discussed recently by the author. The duality leads to bonons, i.e. bosonic-phononic quantum universes. This state of things manifestly corresponds to Lewis-Kripke modal realism and the physical presence of the Multiverse in Nature.
[5540] vixra:1010.0056 [pdf]
The Tetron Model as a Lattice Structure: Applications to Astrophysics
The tetron model is reinterpreted as an inner symmetry lattice model where quarks, leptons and gauge fields arise as lattice excitations. On this basis a modification of the standard Big Bang scenario is proposed, where the advent of a spacetime manifold is connected to the appearance of a permutation lattice. The metric tensor is constructed from lattice excitations and a possible reason for cosmic inflation is elucidated. Furthermore, there are natural dark matter candidates in the tetron model. The ratio of ordinary to dark matter in the universe is estimated to be 1:5.
[5541] vixra:1010.0053 [pdf]
Relativity in Combinatorial Gravitational Fields
A combinatorial spacetime (C<sub>G</sub>|t) is a smoothly combinatorial manifold C underlying a graph G evolving on a time vector t. As we know, Einstein's general relativity is suitable for use only in one spacetime. What is its disguise in a combinatorial spacetime? Applying combinatorial Riemannian geometry enables us to present a combinatorial spacetime model for the Universe and suggest a generalized Einstein gravitational equation in such a model. For finding its solutions, a generalized relativity principle, called the projective principle, is proposed, i.e., a physical law in a combinatorial spacetime is invariant under a projection on a subspace; spherically symmetric multi-solutions of the generalized Einstein gravitational equations in vacuum or for a charged body are then found. We also consider the geometrical structure of such solutions with physical formations, and conclude that an ultimate theory for the Universe may be established if all such spacetimes lie in R<sup>3</sup>. Otherwise, our theory is only an approximate theory, endless forever.
[5542] vixra:1010.0039 [pdf]
Fusion of Imprecise Qualitative Information
In this paper, we present a new 2-tuple linguistic representation model, i.e. the Distribution Function Model (DFM), for combining imprecise qualitative information using fusion rules drawn from the Dezert-Smarandache Theory (DSmT) framework. This new approach preserves the precision and efficiency of the combination of linguistic information for both equidistant and unbalanced label models. Some basic operators on 2-tuple labels are presented, together with their extensions to imprecise 2-tuple labels. We also give simple examples to show how precise and imprecise qualitative information can be combined for reasoning under uncertainty. It is concluded that DSmT can deal efficiently with both precise and imprecise quantitative and qualitative beliefs, which extends the scope of this theory.
[5543] vixra:1010.0035 [pdf]
A Philosophical And Mathematical Theory Of Everything
In this theory I measure the "light speed" per duration of "X particle motions". This basic definition of C excludes the term time (the fourth dimension). Instead it includes the term "motion inside a particle" ("a particle's spin" may be a better term). Then, in chapters B1 to B9, I first show the 9 consequences of this new expression in a philosophical description. In chapter C, I show how these consequences can be used to explain the quantum theory of wave/particle duality and the phenomenon of wave collapse. In chapter D the consequences are described in depth in mathematical form. I will especially draw your attention to chapter D3, which presents two clear-cut predictions. 1. The gravity ratio between two particle positions, relative to a reference object/particle (for instance a sun), will sharply drop for the particles farthest away from us, from 12 billion light-years and farther away. 2. C, the speed limit, is slightly higher inside dense matter than the observed light speed in vacuum out in the universe. This theory shows that not only are mass and "time" relative, in reference to the "constant" C, but gravity and electromagnetism are also relative, here in reference to the constant edge of our universe.
[5544] vixra:1010.0029 [pdf]
On Neutral Particle Gravity with Nonassociative Braids
Some years ago, Bilson-Thompson [1] characterised the fundamental leptons and quarks using simple three strand ribbon diagrams. These diagrams are interpreted in an abstract categorical language, which underlies an information theoretic quantum gravity. More recently, Graham Dungworth [2] has discussed the astrophysical consequences of this non local quantum gravity, under a symmetry restoring extension of the braid set that doubles the matter sector to include mirror matter. The resulting low energy particle set is reinterpreted in the categorical framework for localization, which considers neutral particle oscillations to be responsible for gravity. A few quantitative observational consequences, such as CPT violation in the neutrino sector, are discussed.
[5545] vixra:1010.0025 [pdf]
Combinatorial Maps with Normalized Knot
We consider combinatorial maps with a fixed combinatorial knot numbered with an augmenting numeration, called a normalized knot. We show that knot normalization does not affect the generality of the combinatorial map. Knot normalization leads to a more concise numeration of corners in maps; e.g., odd or even corners make it easy to follow the distinguished cycles in the map caused by the fixation of the knot. Knot normalization may be applied to the edge-structuring knot too. If both are normalized, then one is fully, and the other partially, normalized.
[5546] vixra:1010.0019 [pdf]
Research on Number Theory and Smarandache Notions
This book is devoted to the proceedings of the Sixth International Conference on Number Theory and Smarandache Notions held in Tianshui during April 24-25, 2010. The organizers were myself and Professor Wangsheng He from Tianshui Normal University. The conference was supported by Tianshui Normal University and there were more than 100 participants. We had one foreign guest, Professor K. Chakraborty from India. The conference was a great success and will have a strong impact on the development of number theory in general and Smarandache Notions in particular. We hope this will become a tradition in our country and will continue to grow. Indeed, we are planning to organize the seventh conference this coming March in Weinan, a beautiful city of Shaanxi.
[5547] vixra:1010.0012 [pdf]
On a Generalized Theory of Relativity
The General Theory of Relativity (GTR) is essentially a theory of gravitation, built on the Principle of Relativity. It is bona fide knowledge, known even to Einstein the founder, that the GTR violates the very principle upon which it is founded, i.e., the Principle of Relativity: a central equation emerging from the GTR, the geodesic law, is well known to be in conflict with the Principle of Relativity because it must, in complete violation of that principle, be formulated in special (or privileged) coordinate systems, i.e., Gaussian coordinate systems. The Principle of Relativity clearly and strictly forbids the existence/use of special (or privileged) coordinate systems in the same way the Special Theory of Relativity forbids the existence of privileged and/or special reference systems. In the pursuit of a more Generalized Theory of Relativity, i.e., an all-encompassing unified field theory to include the Electromagnetic, Weak and Strong forces, Einstein and many other researchers failed to resolve this problem. In this reading, we propose a solution to this dilemma faced by Einstein and many other researchers, i.e., the dilemma of obtaining a more Generalized Theory of Relativity. Our solution brings together the Gravitational, Electromagnetic, Weak and Strong forces under a single roof via an extension of Riemann geometry to a new hybrid geometry that we have coined the Riemann-Hilbert Space (RHS). This geometry is a fusion of Riemann geometry and the Hilbert space. Unlike Riemann geometry, the RHS preserves both the length and the angle of a vector under parallel transport because the affine connection of this new geometry is a tensor. This tensorial affine connection leads us to a geodesic law that truly upholds the Principle of Relativity.
The unified field equations derived herein reduce to the well-known Maxwell-Proca equation, the non-Abelian nuclear force field equations, the Lorentz equation of motion for charged particles, and the Dirac equation.
[5548] vixra:1010.0011 [pdf]
The Inner Connection Between Gravity, Electromagnetism and Light
In this paper, we prove the existence of an inner connection between gravity and electromagnetism using a different procedure than the standard approaches. Under the assumption of the invariance of the ratio of the Gravitational force to the Electric force in an expanding space-time, we prove that gravity is naturally traceable to the surrounding expanding medium.
[5549] vixra:1010.0006 [pdf]
Power Structures in Finite Fields and the Riemann Hypothesis
Some tools are discussed for building power structures of primitive roots in finite fields of any order q<sup>k</sup>; relations between distinct roots are deduced from m- and shift-and-add sequences. Some heuristic computational techniques, where information in an m-sequence is built from below, are proposed. A full settlement is finally viewed in a physical scenario, where a path leading to the Riemann Hypothesis can be illuminated.
[5550] vixra:1010.0003 [pdf]
On the Experimental Failure to Detect Dark Matter
It is argued that the failure of dark matter experiments to verify its existence may be attributable to a non-Planckian 'action,' which renders dark matter's behavior contradictory to the consequences of quantum mechanics as it applies to luminous matter. It is pointed out that such a possibility cannot be convincingly dismissed in the absence of a physical law that prohibits an elementary 'action' smaller than Planck's. It is further noted that no purely dark matter measurement of Planck's constant exists.
[5551] vixra:1009.0068 [pdf]
A Map to Unified Grand Model for Space Time, Particles, Fields and Universe Builded on a Trial Mathematical Physics Interpretation of the Holy Quran Creation Story
Since the early 1980s there have been no new discoveries of basic laws of physics [1]; the main problem behind this is the lack of a complete physical description of our universe. The concepts of space time, particles (mass), fields, and energy carry a great ambiguity in our deep understanding of physics. One of the reasons for this is the limitation of our experimental technology in high energy accelerators and in cosmological observations. In this research the author follows the creative-thinking rules discovered in the field of human psychology studies [2-4] for solving difficult problems throughout human history. The main idea in creative thinking is to borrow ideas and concepts from fields away from the problem's own field; those new ideas and concepts are brought to the field of the problem to help go inside the problem and reach the solution [5].
[5552] vixra:1009.0057 [pdf]
Why Over 30 Years Aether Wind Was not Detected in Michelson-Type Experiments with Resonators
We show that the small (but finite) value of the relative variation (δν/ν>0) of the resonance frequency of an evacuated optical resonator when changing its orientation in space, measured by S. Herrmann et al., Phys. Rev. D 80, 105011 (2009), cannot serve as an indication of the absence of a preferred direction associated with the absolute motion of the setup. On the contrary, the finiteness δν/ν>0 testifies to spatial anisotropy of the velocity of light. In order to detect the absolute motion and determine the value and direction of its velocity, the volume of the resonator should be regarded, at any degree of evacuation, as an optical medium, with its refractive index <i>n</i>>1 necessarily taken into account, however tenuous the medium. In this event the residual pressure of the evacuated medium should be controlled: that will ensure the magnitude of <i>n</i> is known at least to the first significant digit after 1.00000... <br> If the working body is a gas then, as in the case of the fringe shift in the interferometer, the shift δν of the resonance frequency of the volume resonator is proportional to <i>n</i><sup>2</sup>–1=Δε and to the square of the velocity υ of absolute motion of the resonator. At sufficiently large values of optical density, δν is proportional to (<i>n</i><sup>2</sup>–1)(2–<i>n</i><sup>2</sup>)=Δε(1–Δε), and at <i>n</i>>1.5 it may take so great a value that a jump of the automatic laser frequency trimmer from the chosen <i>m</i>-mode of the reference resonator to its adjacent <i>m</i>±1 modes even becomes possible. Taking into account the effect of the medium permittivity by introducing into the calculation the actual value <i>n</i>>1, in experiments with resonators performed by the scheme of the Michelson experiment, enabled us to estimate the absolute speed of the Earth as several hundred kilometers per second.
[5553] vixra:1009.0056 [pdf]
On Conformal Infinity and Compactifications of the Minkowski Space
Using the standard Cayley transform and elementary tools it is reiterated that the conformal compactification of the Minkowski space involves not only the "cone at infinity" but also the 2-sphere that is at the base of this cone. We represent this 2-sphere by two additionally marked points on the Penrose diagram for the compactified Minkowski space. Lacks and omissions in the existing literature are described; Penrose diagrams are derived for both the simple compactification and its double covering space, which is discussed in some detail using both the U(2) approach and the exterior and Clifford algebra methods. Using the Hodge ☆ operator, twistors (i.e. vectors of the pseudo-Hermitian space H<sub>2,2</sub>) are realized as spinors (i.e., vectors of a faithful irreducible representation of the even Clifford algebra) for the conformal group SO(4,2)/Z<sub>2</sub>. Killing vector fields corresponding to the left action of U(2) on itself are explicitly calculated. Isotropic cones and corresponding projective quadrics in H<sub>p,q</sub> are also discussed. Applications to flat conformal structures, including the normal Cartan connection and conformal development, are discussed in some detail.
[5554] vixra:1009.0045 [pdf]
Generalized Gravity in Clifford Spaces, Vacuum Energy and Grand Unification
Polyvector-valued gauge field theories in Clifford spaces are used to construct a novel Cl(3, 2) gauge theory of gravity that furnishes modified curvature and torsion tensors leading to important modifications of the standard gravitational action with a cosmological constant. Vacuum solutions exist which allow a cancellation of the contributions of a very large cosmological constant term and the extra terms present in the modified field equations. Generalized gravitational actions in Clifford spaces are provided and some of their physical implications are discussed. It is shown how the 16 fermions and their masses in each family can be accommodated within a Cl(4) gauge field theory. In particular, the Higgs fields admit a natural Clifford-space interpretation that differs from the one in the Chamseddine-Connes spectral action model of Noncommutative geometry. We conclude with a discussion of the relationship with the Pati-Salam color-flavor model group SU(4)<sub>C</sub> x SU(4)<sub>F</sub> and its symmetry breaking patterns. An Appendix is included with useful Clifford algebraic relations.
[5555] vixra:1008.0061 [pdf]
Some Properties of the Pseudo-Smarandache Function
Charles Ashbacher [1] has posed a number of questions relating to the pseudo-Smarandache function Z(n). In this note we show that the ratios of consecutive values Z(n + 1)/Z(n) and Z(n - 1)/Z(n) are unbounded; that Z(2n)/Z(n) is unbounded; that n/Z(n) takes every integer value infinitely often; and that the series Σ<sub>n</sub> 1/Z(n)<sup>α</sup> converges for any α > 1.
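For readers unfamiliar with Z(n), a minimal computational sketch, assuming the standard definition (Z(n) is the smallest positive integer m such that n divides the triangular number m(m+1)/2):

```python
def Z(n):
    """Pseudo-Smarandache function: smallest m >= 1 such that
    n divides the triangular number m*(m+1)/2."""
    m = 1
    while (m * (m + 1) // 2) % n != 0:
        m += 1
    return m

# A few small values: Z(1) = 1, Z(2) = 3, Z(3) = 2, Z(10) = 4
```

Brute force always terminates, since m = 2n - 1 works: (2n-1)(2n)/2 = n(2n-1) is divisible by n.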
[5556] vixra:1008.0052 [pdf]
Is Initial Data for Cosmological Arrow of Time Emerging Due to Inflation Start?
We ask: if we set the vanishing chemical potential limit μ -> 0 with entropy S ∝ T<sup>3</sup> [1] for a number of degrees of freedom significantly greater than the standard electroweak value of g<sub>star</sub> ~ 100 - 120, do we have a new foundation for the arrow of time in quantum cosmology with inflation?
[5557] vixra:1008.0050 [pdf]
Could Gravitons from Prior Universe Survive Quantum Bounce to Reappear in Present Universe
We ask whether an entropy S = E/T with the usual ascribed value of initial entropy S ~ 10<sup>5</sup> at the onset of inflation can allow an order-of-magnitude resolution of the question of whether a graviton could survive from a prior universe to the present one, using typical Planckian peak temperature values of T ~ 10<sup>19</sup> GeV. We obtain values consistent with up to 10<sup>38</sup> gravitons contributing to an energy value of E ~ 10<sup>24</sup> GeV if we assume a relic energy contribution based upon each graviton initially exhibiting a frequency spike of 10<sup>10</sup> Hz. The value of E ~ 10<sup>24</sup> GeV is picked from looking at the aftermath of what happens if there exists a quantum bounce with a peak density value of ρ<sub>maximum</sub> ~ 2.07 ρ<sub>Planck</sub>, as has been considered recently by P. Malkiewicz and W. Piechocki [15], in the LQG bounce regime at radii of the order of magnitude of l ~ 10<sup>-35</sup> meters. In this paper, estimates are made specifically avoiding S = (E - μN)/T, by setting a vanishing chemical potential μ = 0 for ultra-high temperatures. Finally we briefly compare the obtained results with the ones recently investigated by G. 't Hooft [20] and L.A. Glinka [21, 22].
[5558] vixra:1008.0045 [pdf]
Transition of Expansion Acceleration of the Universe Through Negative Mass
This letter explains that the density of positive mass and negative mass is almost uniform throughout the whole universe, but the densities of positive mass and negative mass included within a given universe radius R can differ from each other. Accordingly, positive, zero, and negative values of the total gravitational potential energy are all possible due to the density difference of positive mass and negative mass included within a given universe radius R. This letter shows the possibility of explaining the decelerating expansion and accelerating expansion through the density difference of positive and negative mass, because negative mass and positive mass undergo different forms of motion depending on that density difference. A change of sign of the total gravitational potential energy occurs at U<sub>T</sub>=0, and the total gravitational potential energy oscillates about 0. This provides a valid explanation of the problems of the flatness of the universe and fine tuning.
[5559] vixra:1008.0043 [pdf]
Differentiable Structures on Real Grassmannians
Given a vector space V of dimension n and a natural number k < n, the grassmannian G<sub>k</sub>(V) is defined as the set of all subspaces W ⊂ V such that dim(W) = k. In the case of V = R<sup>n</sup>, G<sub>k</sub>(V) is the set of k-flats in R<sup>n</sup> and is called the real grassmannian [1]. Recently the study of these manifolds has found applicability in several areas of mathematics, especially in Modern Differential Geometry and Algebraic Geometry. This work will build two differential structures on the real grassmannian, one of which is obtained as a quotient space of a Lie group [1], [2], [3], [7].
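For context, the quotient construction mentioned in this abstract rests on a standard fact not stated there: the orthogonal group O(n) acts transitively on k-planes in R<sup>n</sup>, with the stabilizer of a fixed k-plane being O(k) × O(n-k), giving

```latex
\mathrm{G}_k(\mathbb{R}^n)\;\cong\; O(n)\,/\,\bigl(O(k)\times O(n-k)\bigr),
\qquad
\dim \mathrm{G}_k(\mathbb{R}^n) \;=\; k\,(n-k).
```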
[5560] vixra:1008.0025 [pdf]
Survey on Singularities and Differential Algebras of Generalized Functions : A Basic Dichotomic Sheaf Theoretic Singularity Test
It is shown how the infinity of differential algebras of generalized functions is naturally subjected to a basic dichotomic singularity test regarding their significantly different abilities to deal with large classes of singularities. In this respect, a review is presented of the way singularities are dealt with in four of the infinitely many types of differential algebras of generalized functions. These four algebras, in the order they were introduced in the literature, are: the nowhere dense, Colombeau, space-time foam, and local ones. So far, the first three of them have turned out to be the ones most frequently used in a variety of applications. The issue of singularities is naturally not a simple one. Consequently, there are different points of view, as well as occasional misunderstandings. In order to set aside, and preferably avoid, such misunderstandings, two fundamentally important issues related to singularities are pursued. Namely, 1) how large are the sets of singularity points of various generalized functions, and 2) how are such generalized functions allowed to behave in the neighbourhood of their points of singularity. Following such a twofold clarification on singularities, it is further pointed out that, once one represents generalized functions - and thus also a large class of usual singular functions - as elements of suitable differential algebras of generalized functions, one of the main advantages is the resulting freedom to perform globally arbitrary algebraic and differential operations on such functions, simply as if they did not have any singularities at all. With the same freedom from singularities, one can perform globally operations such as limits, series, and so on, which involve infinitely many generalized functions. The property of a space of generalized functions of being a flabby sheaf proves to be essential in being able to deal with large classes of singularities.
The first and third type of the mentioned differential algebras of generalized functions are flabby sheaves, while the second type fails to be so. The fourth type has not yet been studied in this regard.
[5561] vixra:1008.0024 [pdf]
Hydrodynamics of the Rotating Spherical Matter Fields and Atomic Structure
An atomic structure model is proposed as a rotating stratified fluidic matter field, with particles corresponding to solitary waves in the field. A mathematical formulation of the proposed structure was constructed on the model of thermal convection in rotating spherical shells of conducting fluids, using magnetohydrodynamic Navier-Stokes equations. The acceleration term was derived using the Coulomb potential. The novel model showed that the internal structure of atoms is subject to complex fluid dynamics.
[5562] vixra:1008.0015 [pdf]
Arithmetic Information in Particle Mixing
Quantum information theory motivates certain choices for parameterizations of the CKM and MNS mixing matrices. In particular, we consider the rephasing invariant parameterization of Kuo and Lee, which is given by a sum of real circulants. Noting the relation of this parameterization to complex rotation matrices we find a potential reduction in the degrees of freedom required for the CKM matrix.
[5563] vixra:1008.0011 [pdf]
Reflections on an Asymmetry on the Occasion of Arnold's Passing Away ...
Authorities in science keeping silent about breakthroughs made by less well known scientists creates a massively asymmetric situation, to the detriment of science.
[5564] vixra:1008.0003 [pdf]
Why Shamir and Fox Did not Detect "Aether Wind" in 1969?
Up to the 1960s, measurement of the aether wind velocity by Michelson's technology presumed that a medium placed across the path of light rays has no substantial significance (except as an obstacle) for obtaining the expected shift of the interference fringe from the brought-together orthogonal rays on the interference of the turnabout device. In the 1960s several authors began independent research on Michelson-type interferometers with different optical media used as light carriers. J. Shamir and R. Fox declared the results of their measurements on plexiglas "negative" (though they registered a fringe shift of 1/3000 of the fringe's width and determined the corresponding aether wind velocity of 6.5 km/s). The authors considered this result as "enhancing the experimental basis of special relativity", and their report was published. My results of the same years turned out to be positive. I managed to register, on gaseous, liquid and solid optically transparent bodies, relative fringe shifts hundreds of times greater (0.01-5.0), giving for the horizontal projection of the aether wind velocity values of hundreds of km/s. At different times of day and night at the latitude of the city of Obninsk I registered variation of this velocity in the interval 140-480 km/s. Insofar as my results "weaken the experimental basis of special relativity", their publication is still refused. I will show in the present report, based on my experimental experience, that in reality Shamir and Fox obtained positive results. The historical precedent of misunderstanding the positive measurements of aether wind of the order of 200-400 km/s, performed by Michelson and Miller in the 1920s-1930s with air light carriers lengthened to 32 m, described by me in arXiv:0910.5658v3, 24 June 2010, was repeated in the work by Shamir and Fox. Misunderstood was another artifact, manifesting itself in an interferometer with a solid light carrier.
In the current work, I explain the nature of this artifact, which hid from Shamir and Fox their experimental success in detecting an aether wind velocity of hundreds of km/s. I also discuss the inadequacy of their own interpretation of the results.
[5565] vixra:1007.0054 [pdf]
Possibility to Explain Aether and Gravitational Wave from Electromagnetic-Dynamics Equations
The generalized Maxwell equations in vacuum are basically the equations for steady states, which satisfy both energy and force conservation laws. However, superpositions of the steady states often break those conservation laws, although the generalized Maxwell equations are kept. To study those cases, we derived electromagnetic-dynamics equations, which include the generalized Maxwell equations, energy and force conservation laws, and dynamics of scalar fields. These equations explain that the scalar fields may work as the aether propagating the electromagnetic wave, and scalar waves may work as the gravitational waves.
[5566] vixra:1007.0047 [pdf]
Using Gravitation to Emulate Electromagnetism.
The possibility of Universe-scale black holes living in closed 3D space of constant positive curvature was briefly considered in previous work. Further consideration of this possibility is given here. A possible link between gravitation and electromagnetism is discussed.
[5567] vixra:1007.0043 [pdf]
Special Theory of Relativity in Absolute Space and the Symmetric Twin Paradox ( On the Possibility of Absolute Motion )
Departing from the traditional case where one twin stays put while the other rockets into space, we consider the case of identically accelerated twins. Both twins depart at uniform relativistic speeds in opposite directions for a round trip from the Earth on their 21st birthday, destined into space to some distant constellation a distance L<sub>0</sub> away in the rest frame of the Earth. A "proper" application of the Special Theory of Relativity (STR) tells us that the Earth-bound observers will conclude that on the day of reunion both twins must have aged the same, albeit their clocks (which were initially synchronized with that of the Earth-bound observers) will have registered a duration less than that registered by the Earth-bound observers. In the traditional twin paradox, it is argued that the stay-at-home twin will have aged more than the travelling twin, and the asymmetry is attributed to the fact that the travelling twin's frame of reference is not an inertial reference frame during the periods of acceleration and deceleration, making it illegal for the travelling twin to use the STR in their frame, thus "resolving" the paradox. This same argument does not hold in the case considered here, as both twins undergo identical experiences, where each twin sees the other as the one in motion. This means each twin must conclude that the other twin is the one that is younger. They will conclude that their ages must be numerically different, thus disagreeing with the Earth-bound observers that their ages are the same. This leads us to a true paradox whose resolution is found in the deduction that motion must be absolute. We provide a thought-experiment on how to measure absolute motion. Through this thought-experiment, we extend the second postulate of the STR to include the direction of propagation of light, namely that not only is the speed of light the same for all observers, but the direction of propagation as well.
Succinctly, the speed of light along its direction of motion in the absolute frame of reference is the same for all observers in the Universe. In an effort to try to resolve the symmetric twin paradox, we set forth a relativistic aether model, which at best can be described as the Special Theory of Relativity in Absolute Space. By recalibrating several experiments performed by other researchers in the past, we find that the Earth's speed through the aether is in the range 240 ± 80 km s<sup>-1</sup>.
[5568] vixra:1007.0039 [pdf]
Can the Edges of a Complete Graph Form a Radially Symmetric Field in Closed Space of Constant Positive Curvature?
In earlier work, it was found that the edges of a complete graph can very nearly form a radially symmetric field at long distance in flat 2D and 3D space if the number of graph vertices is great enough. In this work, it is confirmed that the edges of a complete graph can also very nearly form a radially symmetric field in closed 2D and 3D space of constant positive curvature if the graph is small compared to the entirety of the space in which it lives and if the number of graph vertices is great enough.
[5569] vixra:1007.0034 [pdf]
On the Gini Mean Difference Test for Circular Data
In this paper, we propose a new test of uniformity on the circle based on the Gini mean difference of the sample arc-lengths. These sample arc-lengths, which are the gaps between successive observations on the circumference of the circle, are analogous to sample spacings on the real line. The Gini mean difference, which compares these arc-lengths between themselves, is analogous to Rao's spacings statistic, which has been used to test the uniformity of circular data. We obtain both the exact and asymptotic distributions of the Gini mean difference arc-lengths test, under the null hypothesis of circular uniformity. We also provide a table of upper percentile values of the exact distribution for small to moderate sample sizes. Illustrative examples in circular data analysis are also given. It is shown that a generalized Gini mean difference test has better asymptotic efficiency than the corresponding generalized Rao's test in the sense of Pitman asymptotic relative efficiency.
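The statistic described above can be sketched computationally. A minimal version, assuming the Gini mean difference is taken as the plain mean absolute pairwise difference of the n arc-lengths (normalization conventions vary and the paper's exact scaling is not stated in the abstract):

```python
import math
from itertools import combinations

def gini_arc_length_statistic(angles):
    """Gini mean difference of the sample arc-lengths (the gaps
    between successive points on the circle, including the
    wrap-around gap).  `angles` are in radians."""
    n = len(angles)
    s = sorted(a % (2 * math.pi) for a in angles)
    # n arc-lengths between successive order statistics
    arcs = [s[i + 1] - s[i] for i in range(n - 1)]
    arcs.append(2 * math.pi - s[-1] + s[0])  # wrap-around gap
    # Gini mean difference: average |x - y| over all unordered pairs
    pairs = list(combinations(arcs, 2))
    return sum(abs(x - y) for x, y in pairs) / len(pairs)
```

For perfectly equispaced points all arc-lengths are equal and the statistic is 0; clustering inflates it, which is what makes it usable as a test of circular uniformity.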
[5570] vixra:1007.0024 [pdf]
Verification of Cepheid Variable Distance Measurements Using Roxy's Ruler
There has been some controversy over the validation of using the period/luminosity relationship of Cepheid variables to measure the distance to galaxies [2]. We present here a statistical analysis of distance variations for 21 galaxies between Cepheid variables and Roxy's Ruler. The analysis shows there is no systematic error in measurements to galaxies using Cepheid variables and that such measurements are valid within well-defined degrees of error.
[5571] vixra:1007.0012 [pdf]
Surmounting the Cartesian Cut Further: Torsion Fields, the Extended Photon, Quantum Jumps, The Klein-Bottle, Multivalued Logic, the Time Operator Chronomes, Perception, Semiosis, Neurology and Cognition
We present a conception that surmounts the Cartesian Cut - prevailing in science - based on a representation of the fusion of the physical 'objective' and the 'subjective' realms. We introduce a mathematical-physics and philosophical theory for the physical realm and its mapping to the cognitive and perceptual realms, and a philosophical reflection on the bearings of this fusion on cosmology, cognitive sciences, human and natural systems, and its relations with a time operator and the existence of time cycles in Nature's and human systems. This conception stems from the self-referential construction of spacetime through torsion fields and their singularities, in particular the photon's self-referential character, basic to the embodiment of cognition; we shall elaborate this in detail for perception and neurology. We discuss the relations between this embodiment, bio-photons and wave genetics, and the relation with the enactive approach in cognitive sciences due to Varela. We further discuss the relation of the present conception with Penrose's theory of consciousness related to non-computability - in the sense of the Goedel-Turing thesis - of quantum processes in the brain.
[5572] vixra:1006.0066 [pdf]
Possibility of Gravitational Wave from Generalized Maxwell Equations
The generalized Maxwell equations in vacuum are basically the same as Dirac's extended Maxwell equations, although the intrinsic charges and currents are defined by the time derivative and gradient of scalar fields, respectively. Consequently, the electromagnetic stress-energy tensors yield important conservation laws. We then find scalar fields acting like a gravitational wave interacting with the electromagnetic wave. Interestingly, those gravitational waves due to the scalar fields push out the electromagnetic waves. Moreover, there may exist materials from which we feel no gravitational force, even though the electromagnetic waves are kicked out by those gravitational waves. We also discuss the relation to weight.
[5573] vixra:1006.0064 [pdf]
A Clifford Algebra Realization of Supersymmetry and Its Polyvector Extension in Clifford Spaces
It is shown explicitly how to construct a novel (to our knowledge) realization of the Poincare superalgebra in 2D. These results can be extended to other dimensions and to (extended) superconformal and (anti) de Sitter superalgebras. There is a fundamental difference between the findings of this work and the other approaches to Supersymmetry (over the past four decades) using Grassmannian calculus, which is based on anti-commuting numbers. We provide an algebraic realization of the anticommutators and commutators of the 2D super-Poincare algebra in terms of the generators of the tensor product Cl<sub>1,1</sub>(R) x A of a two-dimensional Clifford algebra and an internal algebra A whose generators can be represented in terms of powers of a 3 x 3 matrix Q, such that Q<sup>3</sup> = 0. Our realization differs from the standard realization of superalgebras in terms of differential operators in Superspace involving Grassmannian (anticommuting) coordinates θ<sup>α</sup> and bosonic coordinates x<sup>μ</sup>. We conclude in the final section with an analysis of how to construct Polyvector-valued extensions of supersymmetry in Clifford Spaces involving spinor-tensorial supercharge generators Q<sub>μ1μ2...μn</sub> and momentum polyvectors P<sub>μ1μ2...μn</sub>. Clifford-Superspace is an extension of Clifford-space whose symmetry transformations are generalized polyvector-valued supersymmetries.
[5574] vixra:1006.0046 [pdf]
U-Statistics Based on Spacings
In this paper, we investigate the asymptotic theory for U-statistics based on sample spacings, i.e. the gaps between successive observations. The usual asymptotic theory for U-statistics does not apply here because spacings are dependent variables. However, under the null hypothesis, the uniform spacings can be expressed as conditionally independent Exponential random variables. We exploit this idea to derive the relevant asymptotic theory both under the null hypothesis and under a sequence of close alternatives. The generalized Gini mean difference of the sample spacings is a prime example of a U-statistic of this type. We show that such a Gini spacings test is analogous to Rao's spacings test. We find the asymptotically locally most powerful test in this class, and it has the same efficacy as the Greenwood statistic.
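The key device in the abstract above - that uniform spacings have the same joint distribution as i.i.d. Exp(1) variables normalized by their sum - is easy to check numerically. The following small simulation is our own illustration of that classical representation, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 9  # n ordered Uniform(0,1) observations create n + 1 spacings

# Direct construction: spacings (gaps) of n sorted uniform observations,
# including the boundary gaps at 0 and 1.
u = np.sort(rng.uniform(size=n))
spacings = np.diff(np.concatenate(([0.0], u, [1.0])))

# Equivalent construction: n + 1 i.i.d. Exp(1) variables divided by their
# sum have the same joint distribution as the uniform spacings above.
e = rng.exponential(size=n + 1)
exp_spacings = e / e.sum()
```

A U-statistic based on spacings (such as the Gini mean difference) can then be analyzed by treating the spacings as conditionally independent exponentials, which is exactly the idea the paper exploits.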
[5575] vixra:1006.0042 [pdf]
The Geometry of CP<sub>2</sub> and its Relationship to Standard Model
This appendix contains basic facts about CP<sub>2</sub> as a symmetric space and Kähler manifold. The coding of the standard model symmetries into the geometry of CP<sub>2</sub>, the physical interpretation of the induced spinor connection in terms of electro-weak gauge potentials, and basic facts about induced gauge fields are discussed.
[5576] vixra:1006.0041 [pdf]
Could the Dynamics of Kähler Action Predict the Hierarchy of Planck Constants?
The original justification for the hierarchy of Planck constants came from the indications that Planck constant could have large values in both astrophysical systems involving dark matter and also in biology. The realization of the hierarchy in terms of the singular coverings and possibly also factor spaces of CD and CP<sub>2</sub> emerged from consistency conditions. It however seems that TGD actually predicts this hierarchy of covering spaces. The extreme non-linearity of the field equations defined by Kähler action means that the correspondence between canonical momentum densities and time derivatives of the imbedding space coordinates is 1-to-many. This leads naturally to the introduction of the covering space of CD x CP<sub>2</sub>, where CD denotes causal diamond defined as intersection of future and past directed light-cones.
[5577] vixra:1006.0040 [pdf]
Weak Form of Electric-Magnetic Duality and Its Implications
The notion of electric-magnetic duality emerged already two decades ago in the attempts to formulate the Kähler geometry of the "world of classical worlds". Quite recently a considerable step of progress took place in the understanding of this notion. This concept leads to the identification of the physical particles as string-like objects defined by magnetically charged wormhole throats connected by magnetic flux tubes. The second end of the string contains a particle whose electroweak isospin neutralizes that of the elementary fermion, and the size scale of the string would be the electro-weak scale. Hence the screening of the electro-weak force takes place via weak confinement. This picture generalizes to magnetic color confinement. Electric-magnetic duality leads also to a detailed understanding of how TGD reduces to almost topological quantum field theory. A surprising outcome is the necessity to replace the CP<sub>2</sub> Kähler form in Kähler action with its sum with the S<sup>2</sup> Kähler form.
[5578] vixra:1006.0039 [pdf]
How to Define Generalized Feynman Diagrams?
Generalized Feynman diagrams have become the central notion of quantum TGD and one might even say that space-time surfaces can be identified as generalized Feynman diagrams. The challenge is to assign a precise mathematical content to this notion, show their mathematical existence, and develop a machinery for calculating them. Zero energy ontology has led to dramatic progress in the understanding of generalized Feynman diagrams at the level of fermionic degrees of freedom. In particular, manifest finiteness in these degrees of freedom follows trivially from the basic identifications, as do unitarity and non-trivial coupling constant evolution. There are however several formidable-looking challenges left. <ol> <li>One should perform the functional integral over WCW degrees of freedom for fixed values of on mass shell momenta appearing in the internal lines. After this one must perform the integral or summation over loop momenta.</li> <li>One must define the functional integral also in the p-adic context. p-Adic Fourier analysis relying on algebraic continuation raises hopes in this respect. p-Adicity suggests strongly that the loop momenta are discretized, and ZEO predicts this kind of discretization naturally.</li> </ol> In this article a proposal giving excellent hopes for meeting these challenges is discussed.
[5579] vixra:1006.0038 [pdf]
Physics as Generalized Number Theory: Infinite Primes
<p> The focus of this book is the number theoretical vision about physics. This vision involves three loosely related parts. </p><p> <OL><LI> The fusion of real physics and various p-adic physics to a single coherent whole by generalizing the number concept by fusing real numbers and various p-adic number fields along common rationals. Extensions of p-adic number fields can be introduced by gluing them along common algebraic numbers to reals. Algebraic continuation of the physics from rationals and their extensions to various number fields (a generalization of the completion process for rationals) is the key idea, and the challenge is to understand how one could achieve this dream. A profound implication is that purely local p-adic physics would code for the p-adic fractality of long length-scale real physics and vice versa, and one could understand the origins of the p-adic length scale hypothesis. <LI> The second part of the vision involves hyper counterparts of the classical number fields defined as subspaces of their complexifications with Minkowskian signature of metric. Allowed space-time surfaces would correspond to what might be called hyper-quaternionic sub-manifolds of a hyper-octonionic space, mappable to M<sup>4</sup>× CP<sub>2</sub> in a natural manner. One could assign to each point of a space-time surface a hyper-quaternionic 4-plane, which is the plane defined by the modified gamma matrices but not the tangent plane in general. Hence the basic variational principle of TGD would have deep number theoretic content. <LI> The third part of the vision involves infinite primes, identifiable in terms of an infinite hierarchy of second quantized arithmetic quantum field theories on one hand, and as having representations as space-time surfaces analogous to zero loci of polynomials on the other hand. 
A single space-time point would have an infinitely complex structure since real unity can be represented as a ratio of infinite numbers in infinitely many manners, each having its own number theoretic anatomy. A single space-time point would in principle be able to represent in its structure the quantum state of the entire universe. This number theoretic variant of the Brahman=Atman identity would make the Universe an algebraic hologram. </p><p> The number theoretical vision suggests that infinite hyper-octonionic or -quaternionic primes could correspond directly to the quantum numbers of elementary particles, and a detailed proposal for this correspondence is made. Furthermore, the generalized eigenvalue spectrum of the Chern-Simons Dirac operator could be expressed in terms of hyper-complex primes, in turn defining basic building bricks of infinite hyper-complex primes from which hyper-octonionic primes are obtained by discrete SU(3) rotations performed for finite hyper-complex primes. </OL> </p><p> Besides this holy trinity I will discuss loosely related topics. Included are possible applications of category theory in the TGD framework; TGD inspired considerations related to the Riemann hypothesis; topological quantum computation in the TGD Universe; and a TGD inspired approach to the Langlands program. </p>
[5580] vixra:1006.0037 [pdf]
Physics as Generalized Number Theory: Classical Number Fields
Physics as a generalized number theory program involves three threads: various p-adic physics and their fusion together with real number based physics to a larger structure, the attempt to understand basic physics in terms of classical number fields discussed in this article, and infinite primes whose construction is formally analogous to a repeated second quantization of an arithmetic quantum field theory. In this article the connection between standard model symmetries and classical number fields is discussed. The basic vision is that the geometry of the infinite-dimensional WCW ("world of classical worlds") is unique from its mere existence. This leads to its identification as a union of symmetric spaces whose Kähler geometries are fixed by generalized conformal symmetries. This fixes the space-time dimension and the decomposition M<sup>4</sup> x S, and the idea is that the symmetries of the Kähler manifold S make it somehow unique. The motivating observations are that the dimensions of classical number fields are the dimensions of partonic 2-surfaces, space-time surfaces, and imbedding space, and that M<sup>8</sup> can be identified as hyper-octonions, a sub-space of complexified octonions obtained by adding a commuting imaginary unit. This stimulates some questions. Could one understand S = CP<sub>2</sub> number theoretically in the sense that M<sup>8</sup> and H = M<sup>4</sup> x CP<sub>2</sub> would be in some deep sense equivalent ("number theoretical compactification" or M<sup>8</sup> - H duality)? Could associativity define the fundamental dynamical principle, so that space-time surfaces could be regarded as associative or co-associative (defined properly) sub-manifolds of M<sup>8</sup> or equivalently of H? One can indeed define the associative (co-associative) 4-surfaces using the octonionic representation of gamma matrices of 8-D spaces as surfaces for which the modified gamma matrices span an associative (co-associative) sub-space at each point of the space-time surface. 
Also M<sup>8</sup> - H duality holds true if one assumes that this associative sub-space at each point contains preferred plane of M<sup>8</sup> identifiable as a preferred commutative or co-commutative plane (this condition generalizes to an integral distribution of commutative planes in M<sup>8</sup>). These planes are parametrized by CP<sub>2</sub> and this leads to M<sup>8</sup> - H duality. WCW itself can be identified as the space of 4-D local sub-algebras of the local Clifford algebra of M<sup>8</sup> or H which are associative or co-associative. An open conjecture is that this characterization of the space-time surfaces is equivalent with the preferred extremal property of Kähler action with preferred extremal identified as a critical extremal allowing infinite-dimensional algebra of vanishing second variations.
[5581] vixra:1006.0036 [pdf]
Physics as Generalized Number Theory: P-Adic Physics and Number Theoretic Universality
Physics as a generalized number theory program involves three threads: various p-adic physics and their fusion together with real number based physics to a larger structure, the attempt to understand basic physics in terms of classical number fields (in particular, identifying the associativity condition as the basic dynamical principle), and infinite primes whose construction is formally analogous to a repeated second quantization of an arithmetic quantum field theory. In this article p-adic physics and the technical problems related to the fusion of p-adic physics and real physics to a larger structure are discussed. The basic technical problems relate to the notion of definite integral at the space-time level, the imbedding space level, and the level of WCW (the "world of classical worlds"). The expressibility of WCW as a union of symmetric spaces leads to a proposal that harmonic analysis of symmetric spaces can be used to define various integrals as sums over Fourier components. This leads to the proposal that the p-adic variant of a symmetric space is obtained by an algebraic continuation through a common intersection of these spaces, which basically reduces to an algebraic variant of the coset space involving an algebraic extension of rationals by roots of unity. This brings in the notion of angle measurement resolution coming as Δφ = 2π/p<sup>n</sup> for a given p-adic prime p. Also a proposal for how one can complete the discrete version of a symmetric space to a continuous p-adic version emerges: each point is effectively replaced with the p-adic variant of the symmetric space identifiable as a p-adic counterpart of the real discretization volume, so that a fractal p-adic variant of the symmetric space results. If the Kähler geometry of WCW is expressible in terms of rational or algebraic functions, it can in principle be continued to the p-adic context. 
One can however consider the possibility that the integrals over partonic 2-surfaces defining flux Hamiltonians exist p-adically as Riemann sums. This requires that the geometries of the partonic 2-surfaces effectively reduce to finite sub-manifold geometries in the discretized version of δM<sub>+</sub><sup>4</sup>. If Kähler action is required to exist p-adically, the same kind of condition applies to the space-time surfaces themselves. These strong conditions might make sense in the intersection of the real and p-adic worlds assumed to characterize living matter.
[5582] vixra:1006.0035 [pdf]
Construction of Configuration Space Spinor Structure
There are three separate approaches to the challenge of constructing WCW Kähler geometry and spinor structure. The first approach relies on a direct guess of the Kähler function. The second approach relies on the construction of the Kähler form and metric utilizing the huge symmetries of the geometry needed to guarantee the mathematical existence of the Riemann connection. The third approach discussed in this article relies on the construction of spinor structure based on the hypothesis that complexified WCW gamma matrices are representable as linear combinations of fermionic oscillator operators for the second quantized free spinor fields at the space-time surface, and on the geometrization of super-conformal symmetries in terms of spinor structure. This implies a geometrization of fermionic statistics. The basic philosophy is that at the fundamental level the construction of WCW geometry reduces to the second quantization of the induced spinor fields using Dirac action. This assumption is parallel with the bosonic emergence stating that all gauge bosons are pairs of fermion and antifermion at opposite throats of a wormhole contact. The vacuum function is identified as a Dirac determinant and the conjecture is that it reduces to the exponent of the Kähler function. In order to achieve internal consistency, the induced gamma matrices appearing in the Dirac operator must be replaced by the modified gamma matrices defined uniquely by Kähler action, and one must also assume that extremals of Kähler action are in question, so that the classical space-time dynamics reduces to a consistency condition. This implies also super-symmetries, and the fermionic oscillator algebra at partonic 2-surfaces has an interpretation as an N = 1 generalization of the space-time supersymmetry algebra, different however from the standard SUSY algebra in that Majorana spinors are not needed. This algebra serves as a building brick of the various super-conformal algebras involved. 
The requirement that there exist deformations giving rise to conserved Noether charges requires that the preferred extremals are critical in the sense that the second variation of the Kähler action vanishes for these deformations. Thus the Bohr orbit property could correspond to criticality or at least involve it. Quantum classical correspondence demands that quantum numbers are coded to the properties of the preferred extremals given by the Dirac determinant, and this requires a linear coupling to the conserved quantum charges in the Cartan algebra. Effective 2-dimensionality allows a measurement interaction term only in the 3-D Chern-Simons Dirac action assignable to the wormhole throats and the ends of the space-time surfaces at the boundaries of CD. This also allows physical propagators reducing to the Dirac propagator, which is not possible without the measurement interaction term. An essential point is that the measurement interaction corresponds formally to a gauge transformation for the induced Kähler gauge potential. If one accepts the weak form of electric-magnetic duality, the Kähler function reduces to a generalized Chern-Simons term and the effect of the measurement interaction term on the Kähler function reduces effectively to the same gauge transformation. The basic vision is that WCW gamma matrices are expressible as super-symplectic charges at the boundaries of CD. The basic building brick of WCW is the product of infinite-D symmetric spaces assignable to the ends of the propagator line of the generalized Feynman diagram. The WCW Kähler metric has in this case "kinetic" parts associated with the ends and an "interaction" part between the ends. General expressions for the super-counterparts of WCW flux Hamiltonians and for the matrix elements of the WCW metric in terms of their anticommutators are proposed on the basis of this picture.
[5583] vixra:1006.0034 [pdf]
Construction of Configuration Space Geometry from Symmetry Principles
There are three separate approaches to the challenge of constructing WCW Kähler geometry and spinor structure. The first one relies on a direct guess of the Kähler function. The second approach relies on the construction of the Kähler form and metric utilizing the huge symmetries of the geometry needed to guarantee the mathematical existence of the Riemann connection. The third approach relies on the construction of spinor structure assuming that complexified WCW gamma matrices are representable as linear combinations of fermionic oscillator operators for the second quantized free spinor fields at the space-time surface, and on the geometrization of super-conformal symmetries in terms of spinor structure. In this article the construction of the Kähler form and metric based on symmetries is discussed. The basic vision is that WCW can be regarded as the space of generalized Feynman diagrams with lines thickened to light-like 3-surfaces and vertices identified as partonic 2-surfaces. In zero energy ontology the strong form of General Coordinate Invariance (GCI) implies effective 2-dimensionality and the basic objects are pairs of partonic 2-surfaces X<sup>2</sup> at opposite light-like boundaries of causal diamonds (CDs). The hypothesis is that WCW can be regarded as a union of infinite-dimensional symmetric spaces G/H labeled by zero modes having an interpretation as classical, non-quantum fluctuating variables. A crucial role is played by the metric 2-dimensionality of the light-cone boundary δM<sub>+</sub><sup>4</sup> and of light-like 3-surfaces, implying a generalization of conformal invariance. The group G acting as isometries of WCW is tentatively identified as the symplectic group of δM<sub>+</sub><sup>4</sup> x CP<sub>2</sub> localized with respect to X<sup>2</sup>. H is identified as a Kac-Moody type group associated with isometries of H = M<sub>+</sub><sup>4</sup> x CP<sub>2</sub> acting on light-like 3-surfaces and thus on X<sup>2</sup>. 
An explicit construction for the Hamiltonians of the WCW isometry algebra as so-called flux Hamiltonians is proposed, and the elements of the Kähler form can also be constructed in terms of these. Explicit expressions for WCW flux Hamiltonians as functionals of complex coordinates of the Cartesian product of the infinite-dimensional symmetric spaces having as points the partonic 2-surfaces defining the ends of the light-like 3-surface (line of a generalized Feynman diagram) are proposed.
[5584] vixra:1006.0033 [pdf]
Identification of the Configuration Space Kähler Function
There are two basic approaches to quantum TGD. The first approach, which is discussed in this article, is a generalization of Einstein's geometrization program of physics to an infinite-dimensional context. The second approach is based on the identification of physics as a generalized number theory. The first approach relies on the vision of quantum physics as infinite-dimensional Kähler geometry for the "world of classical worlds" (WCW) identified as the space of 3-surfaces in a certain 8-dimensional space. There are three separate approaches to the challenge of constructing WCW Kähler geometry and spinor structure. The first approach relies on a direct guess of the Kähler function. The second approach relies on the construction of the Kähler form and metric utilizing the huge symmetries of the geometry needed to guarantee the mathematical existence of the Riemann connection. The third approach relies on the construction of spinor structure based on the hypothesis that complexified WCW gamma matrices are representable as linear combinations of fermionic oscillator operators for second quantized free spinor fields at the space-time surface, and on the geometrization of super-conformal symmetries in terms of WCW spinor structure. In this article the proposal for the Kähler function is discussed, based on the requirement of 4-dimensional General Coordinate Invariance, which implies that its definition must assign to a given 3-surface a unique space-time surface. Quantum classical correspondence requires that this surface is a preferred extremal of some general coordinate invariant action, and the so-called Kähler action is a unique candidate in this respect. The preferred extremal has an interpretation as an analog of a Bohr orbit so that classical physics becomes an exact part of WCW geometry and therefore also of quantum physics. The basic challenge is the explicit identification of the WCW Kähler function K. 
Two assumptions lead to the identification of K as a sum of Chern-Simons type terms associated with the ends of the causal diamond and with the light-like wormhole throats at which the signature of the induced metric changes. The first assumption is the weak form of electric-magnetic duality. The second assumption is that the Kähler current for preferred extremals satisfies the condition j<sub>K</sub> ∧ dj<sub>K</sub> = 0, implying that the flow parameter of the flow lines of j<sub>K</sub> defines a global space-time coordinate. This would mean that the vision about reduction to almost topological QFT would be realized. The second challenge is the understanding of the space-time correlates of quantum criticality. Electric-magnetic duality helps considerably here. The realization that the hierarchy of Planck constants realized in terms of coverings of the imbedding space follows from basic quantum TGD leads to a further understanding. The extreme non-linearity of canonical momentum densities as functions of time derivatives of the imbedding space coordinates implies that the correspondence between these two variables is not 1-1, so that it is natural to introduce coverings of CD x CP<sub>2</sub>. This leads also to a precise geometric characterization of the criticality of the preferred extremals.
[5585] vixra:1006.0032 [pdf]
Physics as Infinite-dimensional Geometry and Generalized Number Theory: Basic Visions
There are two basic approaches to the construction of quantum TGD. The first approach relies on the vision of quantum physics as infinite-dimensional Kähler geometry for the "world of classical worlds" identified as the space of 3-surfaces in a certain 8-dimensional space. Essentially a generalization of Einstein's geometrization of physics program is in question. The second vision is the identification of physics as a generalized number theory. This program involves three threads: various p-adic physics and their fusion together with real number based physics to a larger structure, the attempt to understand basic physics in terms of classical number fields (in particular, identifying the associativity condition as the basic dynamical principle), and infinite primes whose construction is formally analogous to a repeated second quantization of an arithmetic quantum field theory. In this article brief summaries of physics as infinite-dimensional geometry and generalized number theory are given, to be followed by more detailed articles.
[5586] vixra:1006.0028 [pdf]
Warp Drive Basic Science Written For "Aficionados". Chapter II - Jose Natario.
Natario Warp Drive is one of the most exciting Spacetimes of General Relativity. It was the second Spacetime Metric able to develop Superluminal Velocities. However, in the literature about Warp Drives the Natario Spacetime is only marginally quoted. Almost all the available literature covers the Alcubierre Warp Drive. It is our intention to present here the fully developed Natario Warp Drive Spacetime and its very interesting features. Our presentation is given in a more accessible mathematical formalism following the style of the current Warp Drive literature aimed at graduate students of physics, since the original Natario Warp Drive paper of 2001 was presented in a sophisticated mathematical formalism not accessible to average students. Like the Alcubierre Warp Drive Spacetime, which requires a continuous function f(rs) in order to be completely analyzed or described, we introduce here the Natario Shape Function n(r) that allows us to study the amazing physical features of the Natario Warp Drive. The non-existence of a continuous Shape Function for the Natario Warp Drive in the original 2001 work was the reason why the Natario Warp Drive was not covered by the standard literature with the same degree of coverage dedicated to the Alcubierre Warp Drive. We hope to change the situation because the Natario Warp Drive looks very promising.
[5587] vixra:1006.0019 [pdf]
Probabilistic Interpretation of Quantum Mechanics with Schrödinger Quantization Rule
Quantum theory is a probabilistic theory in which certain variables are hidden or non-accessible. This results in a lack of representation of the systems under study. However, I deduce the system's representation in a probabilistic manner, introducing a probability of existence w, and quantize it by exploiting Schrödinger's quantization rule. The formalism enriches probabilistic quantum theory and enables a system's representation in a probabilistic manner.
[5588] vixra:1006.0012 [pdf]
No-Time-Dilation Corrected Supernovae 1a and GRBs Data and Low-Energy Quantum Gravity
Earlier it was shown that in the author's model of low-energy quantum gravity, observations of Supernovae 1a and GRBs, which are corrected by observers for the time dilation characteristic of the standard cosmological model, may be fitted with the theoretical luminosity distance curve only up to z ~ 0.5; for higher redshifts the predicted luminosity distance is essentially bigger. The model itself has no time dilation, due to another redshift mechanism. It is shown here that a correction of the observations for no time dilation leads to a good accordance of observations and theoretical predictions for all achieved redshifts.
[5589] vixra:1006.0007 [pdf]
Surmounting the Cartesian Cut: Torsion Fields, the Extended Photon, Quantum Jumps, The Klein Bottle, Multivalued Logic, the Time Operator Chronomes, Perception, Semiotics, Neurology and Cognition
We present a conception that surmounts the Cartesian Cut - prevailing in science - based on a representation of the fusion of the physical 'objective' and the 'subjective' realms. We introduce a mathematical-physics and philosophical theory for the physical realm and its mapping to the cognitive and perceptual realms, and a philosophical reflection on the bearings of this fusion on cosmology, cognitive sciences, human and natural systems, and its relations with a time operator and the existence of time cycles in Nature's and human systems. This conception arises from the self-referential construction of spacetime through torsion fields and their singularities, in particular the photon's self-referential character, basic to the embodiment of cognition; we shall elaborate this in detail for perception and neurology.
[5590] vixra:1006.0002 [pdf]
Detector of Aether Operating on Transverse Doppler Effect
By rotating the source of light around a point lying on the light's beam, we can observe the transverse Doppler effect with a spectrometer located at the center of rotation. An anomalous shift of the electromagnetic wave's frequency was found in this experiment (performed in 1969-1974) that appeared to be much higher than anticipated from the standard relativistic expression, which takes into account solely the linear velocity of rotation of the source in the laboratory. The interpretation of the experimental observations, admitting absolute motion of the Earth and correspondingly accounting for the reality of Lorentz contraction and time dilation, enabled us to determine the speed of the Earth relative to the luminiferous aether. It appeared to be somewhat above 400 km/s, which agrees well with the value formerly found by me using three methods of determining the speed of the "aether wind" with Michelson-type interferometers, thoroughly accounting for the refractive indices of the optical media.
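For context on the "standard relativistic expression" the abstract compares against, the textbook special-relativistic transverse Doppler shift can be evaluated at the speeds involved. This sketch is our own back-of-the-envelope check, not the author's analysis; the function name is ours.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def transverse_doppler_ratio(v):
    """Standard special-relativistic prediction f_observed / f_source for a
    source moving transversely to the line of sight at speed v:
    f_obs = f_src * sqrt(1 - v^2 / c^2), i.e. a pure time-dilation redshift."""
    beta = v / C
    return math.sqrt(1.0 - beta * beta)

# Fractional redshift at the ~400 km/s speed quoted in the abstract;
# to leading order this is beta^2 / 2, i.e. of order 1e-6.
v = 400e3
shift = 1.0 - transverse_doppler_ratio(v)
```

The smallness of this second-order effect is why a spectrometer-based rotation experiment needs very high frequency resolution, and why any claimed excess over the standard expression is a strong claim.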
[5591] vixra:1005.0104 [pdf]
Factors and Primes in Two Smarandache Sequences
Using a personal computer and freely available software, the author factored some members of the Smarandache consecutive sequence and the reverse Smarandache sequence. Nearly complete factorizations are given up to Sm(80) and RSm(80). Both sequences were extensively searched for prime members, with only one prime found up to Sm(840) and RSm(750): RSm(82) = 828180 ... 10987654321.
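The two sequences are straightforward to generate by digit concatenation. The following minimal sketch (our own illustration; the helper names are not from the paper) builds Sm(n) and RSm(n) and reproduces the leading and trailing digits of the reported prime RSm(82):

```python
def sm(n: int) -> int:
    """Smarandache consecutive number: concatenation of 1, 2, ..., n."""
    return int("".join(str(k) for k in range(1, n + 1)))

def rsm(n: int) -> int:
    """Reverse Smarandache number: concatenation of n, n-1, ..., 1."""
    return int("".join(str(k) for k in range(n, 0, -1)))

# RSm(82) = 828180...10987654321 is the single prime reported
# up to Sm(840) and RSm(750).
s = str(rsm(82))
print(s[:6], "...", s[-11:])  # 828180 ... 10987654321
```

Primality testing of numbers this size (RSm(82) has 155 digits) needs a probabilistic test such as Miller-Rabin, which is what freely available factoring software typically applies first.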
[5592] vixra:1005.0052 [pdf]
Tetron Model Building
Spin models are considered on a discretized inner symmetry space with tetrahedral symmetry as possible dynamical schemes for the tetron model. Parity violation, which corresponds to a change of sign for odd permutations, is shown to dictate the form of the Hamiltonian. It is further argued that such spin models can be obtained from more fundamental principles by considering a (6+1)- or (7+1)-dimensional spacetime with octonion multiplication.
[5593] vixra:1005.0024 [pdf]
Warp Drive Basic Science Written For "Aficionados". Chapter I - Miguel Alcubierre.
The Alcubierre Warp Drive is one of the most exciting spacetimes of General Relativity. It was the first spacetime metric able to develop superluminal velocities. However, some physical problems associated with the Alcubierre Warp Drive seemed to deny its superluminal behaviour. We demonstrate in this work that some of these problems can be overcome, and we arrive at some interesting results, although we used two different shape functions: a continuous g(rs) as an alternative to the original Alcubierre f(rs), and a piecewise shape function f<sub>pc</sub>(rs) as an alternative to the Ford-Pfenning piecewise shape function, with a behaviour similar to the Natario Warp Drive, effectively producing an Alcubierre Warp Drive without expansion/contraction of the spacetime. Horizons will exist and cannot be avoided; however, we found a way to "overcome" this problem. We also introduce here the Casimir Warp Drive.
[5594] vixra:1005.0020 [pdf]
Confidence Intervals for the Pythagorean Formula in Baseball
In this paper, we investigate the problem of obtaining confidence intervals for a baseball team's Pythagorean expectation, i.e. their expected winning percentage and expected games won. We study this problem from two different perspectives. First, in the framework of regression models, we obtain confidence intervals for prediction, i.e. more formally, prediction intervals for a new observation, on the basis of historical binomial data for Major League Baseball teams from the 1901 through 2009 seasons, and apply this to the 2009 MLB regular season. We also obtain a Scheffé-type simultaneous prediction band and use it to tabulate predicted winning percentages and their prediction intervals, corresponding to a range of values for log(RS/RA). Second, parametric bootstrap simulation is introduced as a data-driven, computer-intensive approach to numerically computing confidence intervals for a team's expected winning percentage. Under the assumption that runs scored per game and runs allowed per game are random variables following independent Weibull distributions, we numerically calculate confidence intervals for the Pythagorean expectation via parametric bootstrap simulation on the basis of each team's runs scored per game and runs allowed per game from the 2009 MLB regular season. The interval estimates, from either framework, allow us to infer with greater certainty which teams are performing above or below expectations. The bootstrap confidence intervals appear to be better at detecting which teams are performing above or below expectations than the prediction intervals obtained in the regression framework.
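As a hedged illustration of the two quantities involved, the sketch below computes the Pythagorean expectation RS²/(RS² + RA²) and a parametric-bootstrap percentile interval under an independent-Weibull model of per-game runs, in the spirit of the abstract's second approach. The per-game values, the Weibull shape 2.0, and the use of the averages directly as scale parameters are hypothetical placeholders, not the paper's fitted values:

```python
import random

def pythagorean(rs: float, ra: float, k: float = 2.0) -> float:
    """Pythagorean expected winning percentage: RS^k / (RS^k + RA^k)."""
    return rs**k / (rs**k + ra**k)

def bootstrap_ci(rs_per_game, ra_per_game, games=162, sims=2000, seed=1):
    """Parametric-bootstrap 95% interval sketch: model runs scored and
    allowed per game as independent Weibull draws, resample whole seasons,
    and take the 2.5% / 97.5% percentiles of the winning fraction."""
    rng = random.Random(seed)
    wins = []
    for _ in range(sims):
        # Hypothetical parameters: per-game averages as Weibull scale,
        # shape fixed at 2.0 (the paper fits both to each team's data).
        w = sum(
            rng.weibullvariate(rs_per_game, 2.0)
            > rng.weibullvariate(ra_per_game, 2.0)
            for _ in range(games)
        )
        wins.append(w / games)
    wins.sort()
    return wins[int(0.025 * sims)], wins[int(0.975 * sims)]

print(round(pythagorean(800, 700), 4))   # 0.5664
print(bootstrap_ci(5.0, 4.3))
```

The interval width, roughly sqrt(p(1-p)/162) on either side, is what lets one judge whether an observed record is genuinely above or below expectation.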
[5595] vixra:1005.0006 [pdf]
Neutrality and Many-Valued Logics
This book, written by A. Schumann & F. Smarandache, is devoted to advances of the non-Archimedean multiple-validity idea and its applications to logical reasoning. Leibniz was the first to propose that Archimedes' axiom be rejected. He postulated infinitesimals (infinitely small numbers) of the unit interval [0, 1] which are larger than zero, but smaller than each positive real number. Robinson applied this idea to modern mathematics in [117] and developed so-called non-standard analysis. Within the framework of non-standard analysis, many interesting results were obtained, examined in [37], [38], [74], [117].
[5596] vixra:1004.0136 [pdf]
On the 5D Extra-Force According to the Basini-Capozziello-Ponce De Leon Formalism and Three Important Features: Chung-Freese Superluminal Braneworld, Strong Gravitational Fields and the Pioneer Anomaly
We use the 5D extra-dimensional force according to the Basini-Capozziello-Ponce De Leon, Overduin-Wesson and Mashoon-Wesson-Liu formalisms to study the behaviour of the Chung-Freese superluminal braneworld compared to the Alcubierre Warp Drive, and we arrive at some interesting results from the point of view of the Alcubierre ansatz, although we used two different shape functions: a continuous g(rs) as an alternative to the original Alcubierre f(rs), and a piecewise shape function fpc(rs) with a behaviour similar to the Natario Warp Drive. We introduce here the Casimir Warp Drive. We also demonstrate that in flat 5D Minkowski spacetime, or in weak gravitational fields, we cannot tell whether we live in a 5D or a 4D universe according to the Basini-Capozziello-Ponce De Leon, Overduin-Wesson and Mashoon-Wesson-Liu dimensional reduction; but in the extreme conditions of strong gravitational fields we demonstrate that the effects of the 5D extra dimension become visible, and perhaps the study of the extreme conditions in black holes can tell whether we live in a higher-dimensional universe. We use a 5D Maartens-Clarkson Schwarzschild cosmic black string centered on the Sun, coupled to the 5D extra force from Ponce De Leon together with Mashoon-Wesson-Liu and the definitions of Basini-Capozziello and Bertolami-Paramos for the warp fields, in order to demonstrate that the anomalous effect disturbing two American space probes, known as the Pioneer Anomaly, is a force of 5D extra-dimensional nature. As a matter of fact, the Pioneer Anomaly is the first experimental evidence of the "Fifth Force" predicted years ago by Mashoon-Wesson-Liu, and we also demonstrate that this extra force is coming from the Sun.
[5597] vixra:1004.0094 [pdf]
Neutrosophy in Situation Analysis
In situation analysis (SA), an agent observing a scene receives information from heterogeneous sources of information including for example remote sensing devices, human reports and databases. The aim of this agent is to reach a certain level of awareness of the situation in order to make decisions. For the purpose of applications, this state of awareness can be conceived as a state of knowledge in the classical epistemic logic sense. Considering the logical connection between belief and knowledge, the challenge for the designer is to transform the raw, imprecise, conflictual and often paradoxical information received from the different sources into statements understandable by both man and machines. Hence, quantitative (i.e. measuring the world) and qualitative (i.e. reasoning about the structure of the world) information processing coexist in SA. A great challenge in SA is the conciliation of both aspects in mathematical and logical frameworks. As a consequence, SA applications need frameworks general enough to take into account the different types of uncertainty and information present in the SA context, doubled with a semantics allowing meaningful reasoning on situations. The aim of this paper is to evaluate the capacity of neutrosophic logic and Dezert- Smarandache theory (DSmT) to cope with the ontological and epistemological problems of SA.
[5598] vixra:1004.0086 [pdf]
Smarandache Spaces as a New Extension of the Basic Space-Time of General Relativity
This short letter demonstrates how Smarandache geometries can be employed to extend the "classical" (Riemannian geometry) basis of the General Theory of Relativity by joining the properties of two or more (different) geometries in the same single space. Perspectives in this direction seem quite fruitful: the basic space-time of General Relativity can be extended not only to metric geometries, but even to non-metric ones (where no distances can be measured), or to spaces of a mixed kind which possess the properties of both metric and non-metric spaces (the latter should be referred to as "semi-metric spaces"). If both metric and non-metric properties are possessed at the same (at least one) point of a space, it is one of the Smarandache geometries, and should be referred to as a "Smarandache semi-metric space". Such spaces can be introduced according to the mathematical apparatus of physically observable quantities (chronometric invariants), by considering a breaking of the observable space metric on the continuous background of the fundamental metric tensor.
[5599] vixra:1004.0083 [pdf]
Quark Lepton Braids and Heterotic Supersymmetry
A unique matrix is easily assigned to each Bilson-Thompson braid diagram. The quark and lepton matrices are then related to bosons via a twisted quantum Fourier transform, for which fermion and boson multiplets fit the dimension structure of heterotic strings.
[5600] vixra:1004.0065 [pdf]
S-Denying a Theory
In this paper we introduce the operators of validation and invalidation of a proposition, and we extend the operator of S-denying a proposition, or an axiomatic system, from geometric spaces to any theory in any domain of knowledge; we show six examples in geometry, mathematical analysis, and topology.
[5601] vixra:1004.0052 [pdf]
A Simple Proportional Conflict Redistribution Rule
We propose a first alternative rule of combination to WAO (Weighted Average Operator), recently proposed by Josang, Daniel and Vannoorenberghe, called the Proportional Conflict Redistribution rule (denoted PCR1). PCR1 and WAO are particular cases of WO (the Weighted Operator), because the conflicting mass is redistributed with respect to some weighting factors. In this first PCR rule, the proportionalization is done for each non-empty set with respect to the non-zero sum of its corresponding mass matrix column - instead of its mass column average as in WAO - but the results are the same, as Ph. Smets has pointed out. We also extend WAO (which herein gives no solution) to the degenerate case when all column sums of all non-empty sets are zero; the conflicting mass is then transferred to the non-empty disjunctive form of all non-empty sets together, but if this disjunctive form happens to be empty, one considers an open world (i.e. the frame of discernment might contain new hypotheses) and thus all conflicting mass is transferred to the empty set. In addition to WAO, we propose a general formula for PCR1 (which coincides with WAO in non-degenerate cases). Several numerical examples and comparisons with other combination rules published in the literature are also presented. Another distinction between these alternative rules is that WAO is defined on the power set, while PCR1 is defined on the hyper-power set (Dedekind's lattice). A nice feature of PCR1 is that it works not only in non-degenerate cases but also in the degenerate cases appearing in dynamic fusion, where WAO gives a sum of masses less than 1 (WAO does not work in these cases). Meanwhile, we show that PCR1 and WAO unfortunately do not preserve the neutrality property of the vacuous belief assignment through the fusion process. This severe drawback can, however, be easily circumvented by new PCR rules presented in a companion paper.
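A minimal numerical sketch of the redistribution principle described above (our own toy example, not taken from the paper): on a two-hypothesis frame {A, B} with ignorance A∪B, the sources are combined conjunctively and the total conflicting mass is then redistributed to each non-empty set in proportion to the sum of its masses across the two sources (the "column sum"):

```python
def pcr1(m1: dict, m2: dict) -> dict:
    """PCR1-style combination on the frame {A, B} with ignorance 'AB'."""
    sets = ["A", "B", "AB"]
    # Set intersections on this tiny frame; None marks the empty set.
    inter = {("A", "A"): "A", ("B", "B"): "B", ("AB", "AB"): "AB",
             ("A", "AB"): "A", ("AB", "A"): "A",
             ("B", "AB"): "B", ("AB", "B"): "B",
             ("A", "B"): None, ("B", "A"): None}
    m12 = {s: 0.0 for s in sets}
    conflict = 0.0
    for x in sets:
        for y in sets:
            z = inter[(x, y)]
            if z is None:
                conflict += m1[x] * m2[y]   # mass falling on the empty set
            else:
                m12[z] += m1[x] * m2[y]     # conjunctive (DSm classic) part
    col = {s: m1[s] + m2[s] for s in sets}  # column sums of the mass matrix
    total = sum(col.values())               # equals 2 for normalized sources
    return {s: m12[s] + (col[s] / total) * conflict for s in sets}

m1 = {"A": 0.6, "B": 0.3, "AB": 0.1}
m2 = {"A": 0.2, "B": 0.7, "AB": 0.1}
print(pcr1(m1, m2))  # masses sum back to 1 without renormalization
```

Unlike Dempster's rule, no division by (1 - conflict) occurs, so the rule remains defined even under total conflict.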
[5602] vixra:1004.0028 [pdf]
Disproofs of Riemann's Hypothesis
As is well known, the Riemann hypothesis on the zeros of the ζ(s) function has been assumed to be true in various basic developments of 20th-century mathematics, although it has never been proved correct. The need for a resolution of this open historical problem has been voiced by several distinguished mathematicians. Building on preceding works, in this paper we present comprehensive disproofs of the Riemann hypothesis. Moreover, in 1994 the author discovered the arithmetic function J<sub>n</sub>(ω) that can replace Riemann's ζ(s) function in view of its proved features: if J<sub>n</sub>(ω) ≠ 0, then the function has infinitely many prime solutions; and if J<sub>n</sub>(ω) = 0, then the function has finitely many prime solutions. Using the Jiang J<sub>2</sub>(ω) function, we prove the twin prime theorem, Goldbach's theorem and the prime theorem for the form x<sup>2</sup> + 1. Given the importance of resolving the historically open status of the Riemann hypothesis, comments by interested colleagues are solicited here.
[5603] vixra:1004.0027 [pdf]
Foundations of Santilli's Isonumber Theory
In my works (see the bibliography at the end of the Preface) I have often expressed the view that the protracted lack of resolution of fundamental problems in science signals the need for fundamentally new mathematics. This is the case, for example, for: quantitative representations of biological structures; resolution of the vexing problem of grand unification; invariant treatment of irreversibility at the classical and operator levels; identification of hadronic constituents definable in our spacetime; achievement of a classical representation of antimatter; and other basic open problems.
[5604] vixra:1003.0275 [pdf]
Can the External Directed Edges of a Complete Graph Form a Radially Symmetric Field at Long Distance?
Using a numerical method, the external directed edges of a complete graph are tested for their level of fitness in terms of how well they form a radially symmetric field at long distance (e.g., a test for the inverse square law in 3D space). It is found that the external directed edges of a complete graph can very nearly form a radially symmetric field at long distance if the number of graph vertices is great enough.
[5605] vixra:1003.0247 [pdf]
Deceleration Parameter Q(Z) in 4D and 5D Geometries, and Implications of Graviton Mass in Mimicking Dark Energy in Both Geometries
The case for a four-dimensional graviton mass (non zero) influencing reacceleration of the universe in both four and five dimensions is stated, with particular emphasis on the question whether 4D and 5D geometries as given here yield new physical insight as to cosmological evolution. Both cases give equivalent reacceleration one billion years ago, which leads to the question whether other criteria can determine the relative benefits of adding additional dimensions to cosmology models.
[5606] vixra:1003.0214 [pdf]
Non-Associative SU(3)<sub>L</sub> X U(1)<sub>N</sub> Gauge Model and Predictions
A classical gauge model based on the Lie group SU(3)<sub>L</sub> X U(1)<sub>N</sub> with exotic quarks is reformulated within the formalism of non-associative geometry associated to an L-cycle. The N charges of the fermionic particles and the related parameter constraints are uniquely determined as algebraic consequences. Moreover, the number of scalar particles is dictated by the non-associativity of the geometry. As a byproduct of this formalism, the scalar, charged and neutral gauge boson masses as well as the mixing angles are derived. Furthermore, various expressions for the vector and axial couplings of the quarks and leptons with the neutral gauge bosons, and lower bounds for the very heavy gauge bosons, are also obtained.
[5607] vixra:1003.0208 [pdf]
Advances and Applications of DSmT for Information Fusion Collected Works Volume 1
This book is devoted to an emerging branch of Information Fusion based on a new approach for modelling the fusion problem when the information provided by the sources is both uncertain and (highly) conflicting. This approach, known in the literature as DSmT (standing for Dezert-Smarandache Theory), proposes new useful rules of combination. We gather in this volume a presentation of DSmT from its beginnings to its latest developments. Part 1 of this book presents the current state of the art of theoretical investigations, while Part 2 presents several applications of this new theory. We hope that this first book on DSmT will stir up interest among researchers and engineers working in data fusion and in artificial intelligence. Many simple but didactic examples are proposed throughout the book. As a young emerging theory, DSmT is probably not exempt from improvements and its development will continue to evolve over the years. Through this book we simply want to propose a new look at the Information Fusion problem and open a new track for attacking the combination of information.
[5608] vixra:1003.0197 [pdf]
Application of Probabilistic PCR5 Fusion Rule for Multisensor Target Tracking
This paper defines and implements a non-Bayesian fusion rule for combining densities of probabilities estimated by local (non-linear) filters for tracking a moving target by passive sensors. This rule is the restriction to a strict probabilistic paradigm of the recent and efficient Proportional Conflict Redistribution rule no. 5 (PCR5), developed in the DSmT framework for fusing basic belief assignments. A sampling method for probabilistic PCR5 (p-PCR5) is defined. It is shown that p-PCR5 is more robust to erroneous modeling and allows one to keep the modes of the local densities and to preserve as much as possible the information inherent in each density being combined. In particular, p-PCR5 is capable of maintaining multiple hypotheses/modes after fusion, when the hypotheses are too distant with regard to their deviations. This new p-PCR5 rule has been tested on a simple example of a distributed non-linear filtering application to show the interest of such an approach for future developments. The non-linear distributed filter is implemented through a basic particle filtering technique. The results obtained in our simulations show the ability of this p-PCR5-based filter to track the target even when the models are not consistent with the initialization and the real kinematics. Keywords: Filtering, Robust estimation, non-Bayesian fusion rule, PCR5, Particle filtering.
[5609] vixra:1003.0196 [pdf]
Qualitative Belief Conditioning Rules (QBCR)
In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR), recently developed in the Dezert-Smarandache Theory (DSmT), to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow one to revise beliefs directly with words and linguistic labels and thus avoid the introduction of ad-hoc translations of qualitative beliefs into quantitative ones for solving the problem.
[5610] vixra:1003.0195 [pdf]
Enrichment of Qualitative Beliefs for Reasoning Under Uncertainty
This paper deals with enriched qualitative belief functions for reasoning under uncertainty and for combining information expressed in natural language through linguistic labels. In this work, two possible enrichments (quantitative and/or qualitative) of linguistic labels are considered, and operators (addition, multiplication, division, etc.) for dealing with them are proposed and explained. We denote them qe-operators, qe standing for "qualitative-enriched" operators. These operators can be seen as a direct extension of the classical qualitative operators (q-operators) proposed recently in the Dezert-Smarandache Theory of plausible and paradoxist reasoning (DSmT). The q-operators are also justified in detail in this paper. The quantitative enrichment of a linguistic label is a numerical supporting degree in [0,∞), while the qualitative enrichment takes its values in a finite ordered set of linguistic values. Quantitative enrichment is less precise than qualitative enrichment, but it is expected to be closer to what human experts can easily provide when expressing linguistic labels with supporting degrees. Two simple examples are given to show how the fusion of qualitative-enriched belief assignments can be done.
[5611] vixra:1003.0192 [pdf]
Funcoids and Reloids
This article is part of my Algebraic General Topology research. In it, I introduce the concept of funcoids, which generalize proximity spaces, and reloids, which generalize uniform spaces. The concept of a funcoid is a generalized concept of proximity; the concept of a reloid is the concept of uniformity cleared of superfluous details (generalized). Funcoids also generalize pretopologies and preclosures. Funcoids and reloids are, moreover, generalizations of binary relations whose domains and ranges are filters (instead of sets). They can also be considered a generalization of (oriented) graphs, which provides us with a common generalization of analysis and discrete mathematics. The concept of continuity is defined by an algebraic formula (instead of the old messy epsilon-delta notation) for arbitrary morphisms (including funcoids and reloids) of a partially ordered category. A single formula generalizes continuity, proximity continuity, and uniform continuity.
[5612] vixra:1003.0191 [pdf]
On a Heuristic Approach to Mechanics and Electrodynamics of Moving Bodies
The determination to make Einstein's treatment of simultaneity and the relativistic notions of length and time-interval measurement more intuitive and illustrative led to the creation of a model in which light impulses are substituted with sound signals. The model uncovers the substance of Einstein's mathematical constructs and the mechanisms that give rise to relativistic effects. Consistent application of the model resulted in new constructions. The paper examines known mechanical and electromagnetic phenomena that can be clarified by this model. The use of such an approach leads to the notion of a distinguished frame of reference. In particular, the theory calls for the existence of an electromagnetic interaction that contradicts the principle of relativity. The paper contains a description of an experimental apparatus built to test this prediction, as well as the results of the experiments.
[5613] vixra:1003.0190 [pdf]
A Brief Note on "Un-Particle" Physics
The possibility of a hidden sector of particle physics that lies beyond the energy range of the Standard Model has been recently advocated by many authors. A bizarre implication of this conjecture is the emergence of a continuous spectrum of massless fields with non-integral scaling dimensions called "un-particles". The purpose of this Letter is to show that the idea of "un-particles" was considered in at least two previous independent publications, prior to its first claimed disclosure.
[5614] vixra:1003.0165 [pdf]
A Neutrosophic Description Logic
Description Logics (DLs) are appropriate, widely used logics for managing structured knowledge. They allow reasoning about individuals and concepts, i.e. sets of individuals with common properties. Typically, DLs are limited to dealing with crisp, well-defined concepts, that is, concepts for which the question of whether an individual is an instance is a yes/no question. More often than not, the concepts encountered in the real world do not have a precisely defined criterion of membership: we may say that an individual is an instance of a concept only to a certain degree, depending on the individual's properties. The DLs that deal with such fuzzy concepts are called fuzzy DLs. In order to deal with fuzzy, incomplete, indeterminate and inconsistent concepts, we need to extend the capabilities of fuzzy DLs further. In this paper we present an extension of fuzzy ALC, combining Smarandache's neutrosophic logic with a classical DL. In particular, concepts become neutrosophic (here, neutrosophic means fuzzy, incomplete, indeterminate and inconsistent); thus, reasoning about such neutrosophic concepts is supported. We define its syntax and semantics, describe its properties and present a constraint propagation calculus for reasoning in it.
[5615] vixra:1003.0161 [pdf]
DSmT: a New Paradigm Shift for Information Fusion
The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains, of primal importance for the development of reliable information fusion systems. In this short survey paper, we present the theory of plausible and paradoxical reasoning, known as DSmT (Dezert-Smarandache Theory) in the literature, developed for dealing with imprecise, uncertain and potentially highly conflicting sources of information. DSmT is a new paradigm shift for information fusion, and recent publications have shown the interest and potential of DSmT for solving fusion problems where Dempster's rule, used in Dempster-Shafer Theory (DST), provides counter-intuitive results or fails to provide any useful result at all. This paper focuses on the foundations of DSmT and on its main rules of combination (classic, hybrid and Proportional Conflict Redistribution rules). Shafer's model, on which DST is based, appears as a particular and specific case of the DSm hybrid model, which can be easily handled by DSmT as well. Several simple but illustrative examples are given throughout this paper to show the interest and generality of this new theory.
[5616] vixra:1003.0159 [pdf]
An Introduction to the DSm Theory for the Combination of Paradoxical, Uncertain, and Imprecise Sources of Information
The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains today, of primal importance for the development of reliable modern information systems involving artificial reasoning. In this introduction, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT) in the literature, developed for dealing with imprecise, uncertain and paradoxical sources of information. We focus our presentation on the foundations of DSmT and on the two important new rules of combination, rather than on browsing the specific applications of DSmT available in the literature. Several simple examples are given throughout the presentation to show the efficiency and generality of this new approach.
[5617] vixra:1003.0157 [pdf]
Fusion of Qualitative Beliefs Using DSmT
This paper introduces the notion of qualitative belief assignment to model beliefs of human experts expressed in natural language (with linguistic labels). We show how qualitative beliefs can be efficiently combined using an extension of Dezert-Smarandache Theory (DSmT) of plausible and paradoxical quantitative reasoning to qualitative reasoning. We propose a new arithmetic on linguistic labels which allows a direct extension of classical DSm fusion rule or DSm Hybrid rules. An approximate qualitative PCR5 rule is also proposed jointly with a Qualitative Average Operator. We also show how crisp or interval mappings can be used to deal indirectly with linguistic labels. A very simple example is provided to illustrate our qualitative fusion rules.
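A hedged sketch of the kind of label arithmetic the abstract describes (conventions vary between papers; this follows the common DSmT convention of an ordered label set L0 < L1 < ... < L_{n+1}, where L0 and L_{n+1} are the bottom and top bounds and addition saturates at the top label):

```python
N = 5  # number of "inner" linguistic labels L1..L5

# 'L0' (bottom) through 'L6' (top) for N = 5.
labels = [f"L{k}" for k in range(N + 2)]

def q_add(i: int, j: int) -> int:
    """Label addition by index: L_i + L_j = L_{i+j}, capped at L_{n+1}."""
    return min(i + j, N + 1)

print(labels[q_add(2, 2)])  # L4
print(labels[q_add(4, 4)])  # saturates at the top label L6
```

Such a saturated arithmetic lets fusion rules like DSm classic be applied symbolically: masses are labels rather than numbers, and accumulation never leaves the ordered label set.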
[5618] vixra:1003.0156 [pdf]
Target Type Tracking with PCR5 and Dempster's Rules: a Comparative Analysis
In this paper we consider and analyze the behavior of two combination rules for temporal (sequential) attribute data fusion for target type estimation. Our comparative analysis is based on Dempster's fusion rule, proposed in Dempster-Shafer Theory (DST), and on the Proportional Conflict Redistribution rule no. 5 (PCR5), recently proposed in Dezert-Smarandache Theory (DSmT). We show, through a very simple scenario and Monte-Carlo simulation, how PCR5 allows very efficient Target Type Tracking and drastically reduces the latency delay for a correct Target Type decision with respect to Dempster's rule. For cases presenting some short Target Type switches, Dempster's rule is proved to be unable to detect the switches and thus to track the Target Type changes correctly. The approach proposed here is totally new, efficient and promising for incorporation in real-time Generalized Data Association - Multi Target Tracking systems (GDA-MTT), and provides an important result on the behavior of PCR5 with respect to Dempster's rule. The MatLab source code is provided in [5].
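The behavioural difference between the two rules can be sketched on a toy two-target frame (our own illustration, not the paper's tracking scenario): Dempster's rule renormalizes the conflicting mass away, while PCR5 sends each partial conflict back to the two sets that generated it, proportionally to the masses that produced it:

```python
def dempster(m1: dict, m2: dict) -> dict:
    """Dempster's rule for Bayesian BBAs on the frame {A, B}."""
    k = m1["A"] * m2["B"] + m1["B"] * m2["A"]  # total conflicting mass
    return {x: m1[x] * m2[x] / (1.0 - k) for x in ("A", "B")}

def pcr5(m1: dict, m2: dict) -> dict:
    """PCR5 for Bayesian BBAs on the frame {A, B}."""
    out = {x: m1[x] * m2[x] for x in ("A", "B")}  # conjunctive part
    # Each partial conflict m_i(A) m_j(B) is redistributed back to A and B
    # proportionally to the masses m_i(A) and m_j(B) that produced it.
    for mi, mj in ((m1, m2), (m2, m1)):
        c = mi["A"] * mj["B"]
        out["A"] += c * mi["A"] / (mi["A"] + mj["B"])
        out["B"] += c * mj["B"] / (mi["A"] + mj["B"])
    return out

m1 = {"A": 0.6, "B": 0.4}
m2 = {"A": 0.2, "B": 0.8}
print(dempster(m1, m2))  # conflict (0.56) divided out
print(pcr5(m1, m2))      # total mass stays 1 with no renormalization
```

In sequential fusion the renormalization in Dempster's rule lets a long run of consistent reports dominate, which is the mechanism behind the latency the abstract reports; PCR5's proportional redistribution reacts faster to a type switch.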
[5619] vixra:1003.0154 [pdf]
The Combination of Paradoxical, Uncertain and Imprecise Sources of Information based on DSmT and Neutro-Fuzzy Inference
The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains today, of primal importance for the development of reliable modern information systems involving artificial reasoning. In this chapter, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT) in the literature, developed for dealing with imprecise, uncertain and paradoxical sources of information. We focus our presentation on the foundations of DSmT and on the two important new rules of combination, rather than on browsing the specific applications of DSmT available in the literature. Several simple examples are given throughout the presentation to show the efficiency and generality of this new approach. The last part of this chapter is devoted to the presentation of neutrosophic logic, the neutro-fuzzy inference and its connection with DSmT. Fuzzy logic and neutrosophic logic are useful tools in decision making after fusing the information using the DSm hybrid rule of combination of masses.
[5620] vixra:1003.0152 [pdf]
The Generalized Pignistic Transformation
This paper presents in detail the generalized pignistic transformation (GPT), succinctly developed in the Dezert-Smarandache Theory (DSmT) framework as a tool for the decision process. The GPT makes it possible to derive a subjective probability measure from any generalized basic belief assignment given by any corpus of evidence. We mainly focus our presentation on the 3D case, and provide the complete result obtained by the GPT together with its validation drawn from probability theory.
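For reference, the classical-frame version of the pignistic transformation, which the GPT generalizes to DSmT's hyper-powerset, distributes each focal mass uniformly over the elements of its focal set: BetP({w}) = Σ_{B ∋ w} m(B)/|B|. A minimal sketch on an ordinary power set:

```python
def betp(m: dict) -> dict:
    """Pignistic transformation on a classical frame.

    m maps focal sets (frozensets of elements) to masses; returns the
    pignistic probability BetP of each singleton.
    """
    frame = set().union(*m)  # union of all focal sets
    return {w: sum(mass / len(b) for b, mass in m.items() if w in b)
            for w in frame}

# Frame {a, b, c}: mass on a singleton, a pair, and the full ignorance.
m = {frozenset("a"): 0.4, frozenset("ab"): 0.3, frozenset("abc"): 0.3}
print(sorted(betp(m).items()))  # a ≈ 0.65, b ≈ 0.25, c ≈ 0.10
```

The generalized version replaces |B| and |A ∩ B| with the DSm cardinals of hyper-powerset elements, but the uniform-spreading principle is the same.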
[5621] vixra:1003.0150 [pdf]
On the Tweety Penguin Triangle Problem
In this paper, we study the famous, well-known and challenging Tweety Penguin Triangle Problem (TPTP or TP2) pointed out by Judea Pearl in one of his books. We first present the solution of the TP2 based on fallacious Bayesian reasoning and prove that this reasoning cannot be used to conclude on the ability of the penguin-bird Tweety to fly or not to fly. Then we present in detail the counter-intuitive solution obtained from the Dempster-Shafer Theory (DST). Finally, we show how the solution can be obtained with our new theory of plausible and paradoxical reasoning (DSmT).
[5622] vixra:1003.0149 [pdf]
Infinite Classes of Counter-Examples to the Dempster's Rule of Combination
This paper presents several classes of fusion problems which cannot be directly attacked by the classical mathematical theory of evidence, also known as the Dempster-Shafer Theory (DST), either because Shafer's model for the frame of discernment is impossible to obtain or because Dempster's rule of combination fails to provide coherent results (or any result at all). We present and discuss the potential of the DSmT, combined with its classical (or hybrid) rule of combination, to attack these infinite classes of fusion problems.
[5623] vixra:1003.0148 [pdf]
Combining Uncertain and Paradoxical Evidences for DSm Hybrid Models
This paper presents a general method for combining uncertain and paradoxical sources of evidence for a wide class of fusion problems. Starting from the foundations of the Dezert-Smarandache Theory (DSmT), we show how the DSm rule of combination can be adapted to take into account all possible integrity constraints (if any) of the problem under consideration, due to the true nature of the elements/concepts involved in it. We show how Shafer's model can be considered as a specific DSm hybrid model and be easily handled by our approach, and a new efficient rule of combination, different from Dempster's rule, is obtained. Several simple examples are also provided to show the efficiency and generality of the approach proposed in this work.
[5624] vixra:1003.0146 [pdf]
Partial Ordering of Hyper-Powersets and Matrix Representation of Belief Functions Within DSmT
In this paper, we examine several issues for ordering or partially ordering elements of the hyper-powersets involved in the recent theory of plausible, uncertain and paradoxical reasoning (DSmT) developed by the authors. We show the benefit of some of these orderings for obtaining a compact and useful matrix representation of belief functions.
[5625] vixra:1003.0145 [pdf]
Non-equilibrium Dynamics as Source of Asymmetries in High Energy Physics
Understanding the origin of certain symmetry-breaking scenarios in high-energy physics remains an open challenge. Here we argue that, at least in some cases, symmetry violation is an effect of non-equilibrium dynamics that is likely to develop somewhere above the energy scale of the electroweak interaction. We also find that imposing Poincaré symmetry in non-equilibrium field theory leads to fractalization of the space-time continuum via a period-doubling transition to chaos.
[5626] vixra:1003.0140 [pdf]
New Seiberg-Witten Fields Maps Through Weyl Symmetrization and The Pure Geometric Extension of The Standard Model
A unified description of the symmetrized and anti-symmetrized Moyal star products of the non-commutative infinitesimal gauge transformations is presented, and the corresponding Seiberg-Witten maps are derived. Moreover, the noncommutative covariant derivative, field strength tensor and gauge transformations are shown to be consistently constructed not on the enveloping algebra but on the Lie and/or Poisson algebra. As an application, a pure geometric extension of the standard model is shown explicitly.
[5627] vixra:1003.0100 [pdf]
On the Blackman's Association Problem
Modern multitarget-multisensor tracking systems involve the development of reliable methods for data association and the fusion of multiple-sensor information, and more specifically the partitioning of observations into tracks. This paper discusses and compares the application of Dempster-Shafer Theory (DST) and Dezert-Smarandache Theory (DSmT) methods to the fusion of multiple sensor attributes for target identification purposes. We focus our attention on the paradoxical Blackman's association problem and propose several approaches to outperform Blackman's solution. We clarify some preconceived ideas about the use of the degree of conflict between sources as a potential criterion for partitioning evidence.
[5628] vixra:1003.0080 [pdf]
On Nonlinear Quantum Mechanics, Noncommutative Phase Spaces, Fractal-Scale Calculus and Vacuum Energy
A novel (to our knowledge) Generalized Nonlinear Schrödinger equation, based on the modifications of Nottale-Cresson's fractal-scale calculus and resulting from the noncommutativity of the phase space coordinates, is explicitly derived. The modifications to the ground state energy of a harmonic oscillator yield the observed value of the vacuum energy density. In the concluding remarks we discuss how nonlinear and nonlocal QM wave equations arise naturally from this fractal-scale calculus formalism, which may have a key role in the final formulation of Quantum Gravity.
[5629] vixra:1003.0064 [pdf]
Adaptive Combination Rule and Proportional Conflict Redistribution Rule for Information Fusion
This paper presents two new promising combination rules for the fusion of uncertain and potentially highly conflicting sources of evidence in the theory of belief functions, established first in Dempster-Shafer Theory (DST) and recently extended in Dezert-Smarandache Theory (DSmT). Our aim here is to provide new tools that palliate the well-known limitations of Dempster's rule and work beyond its limits of applicability. Since Zadeh's famous criticism of Dempster's rule in 1979, many researchers have proposed interesting alternative rules of combination to palliate the weakness of Dempster's rule and to provide acceptable results, especially in highly conflicting situations. In this work, we present two new combination rules: the class of Adaptive Combination Rules (ACR) and a new efficient Proportional Conflict Redistribution (PCR) rule. Both rules can deal with highly conflicting sources for static and dynamic fusion applications. We present some interesting properties of the ACR and PCR rules and discuss simulation results obtained with both rules for Zadeh's problem and for a target identification problem.
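The limitation of Dempster's rule that motivates such alternative rules is easy to reproduce. The following minimal sketch (not code from the paper; the mass assignments are the standard ones from Zadeh's 1979 example, not values taken from this abstract) implements the classical rule and shows the counter-intuitive result that conflict-redistribution rules such as PCR are designed to avoid:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule: conjunctive combination of two
    basic belief assignments, then normalization by (1 - conflict).
    Focal elements are frozensets; masses are floats."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Zadeh's 1979 example: two sources over the frame {A, B, C}
m1 = {frozenset('A'): 0.99, frozenset('C'): 0.01}
m2 = {frozenset('B'): 0.99, frozenset('C'): 0.01}
m = dempster_combine(m1, m2)
# Although both sources consider C almost impossible, the combined
# mass assigns m(C) ~ 1 -- the counter-intuitive behaviour discussed above.
print(m)
```

The conflict here is 0.9999, so nearly all the product mass is discarded and renormalization concentrates everything on the single surviving intersection {C}.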
[5630] vixra:1003.0059 [pdf]
Combination of Qualitative Information with 2-Tuple Linguistic Representation in Dezert-Smarandache Theory
Modern systems for information retrieval, fusion and management increasingly need to deal with information coming from human experts, usually expressed qualitatively in natural language with linguistic labels. In this paper, we propose and use two new 2-Tuple linguistic representation models (i.e., a distribution function model (DFM) and an improved Herrera-Martínez model), jointly with the fusion rules developed in Dezert-Smarandache Theory (DSmT), in order to combine efficiently qualitative information expressed in terms of qualitative belief functions. Both models preserve precision and improve the efficiency of the fusion of linguistic information expressing the experts' global opinion. However, the DFM is more general and efficient than the latter, especially for unbalanced linguistic labels. Some simple examples are also provided to show how the 2-Tuple qualitative fusion rules are performed and what their advantages are.
[5631] vixra:1003.0055 [pdf]
On PT-Symmetric Periodic Potential, Quark Confinement, and Other Impossible Pursuits
As we know, it has become quite common nowadays for particle physicists to think of six impossible things before breakfast, just like their cosmology fellows used to do. In the present paper, we discuss a number of those impossible things, including the PT-symmetric periodic potential, its link with condensed matter nuclear science, and a possible neat link with quark confinement theory. In recent years, PT-symmetry and its related periodic potential have gained considerable interest among physicists. We begin with a review of some results from a preceding paper discussing the derivation of a PT-symmetric periodic potential from the biquaternion Klein-Gordon equation, and proceed further with the remaining issues. Further observation is of course recommended in order to refute or verify this proposition.
[5632] vixra:1003.0054 [pdf]
A Note on Extended Proca Equations and Superconductivity
It has been known for quite a long time that the electrodynamics of Maxwell's equations can be extended and generalized further into the Proca equations. The implications of introducing the Proca equations include an alternative description of superconductivity, via an extension of the London equations. In the light of another paper suggesting that Maxwell's equations can be written using quaternion numbers, we discuss a plausible extension of the Proca equations using biquaternion numbers. Further implications and experiments are recommended.
[5633] vixra:1003.0053 [pdf]
On Emergent Physics, "Unparticles" and Exotic "Unmatter" States
Emergent physics refers to the formation and evolution of collective patterns in systems that are nonlinear and out-of-equilibrium. This type of large-scale behavior often develops as a result of simple interactions at the component level and involves a dynamic interplay between order and randomness. On account of its universality, there are credible hints that emergence may play a leading role in the Tera-ElectronVolt (TeV) sector of particle physics. Following this path, we examine the possibility of hypothetical high-energy states that have a fractional number of quanta per state and consist of arbitrary mixtures of particles and antiparticles. These states are similar to "unparticles", massless fields of non-integral scaling dimensions that were recently conjectured to emerge in the TeV sector of particle physics. They are also linked to "unmatter", exotic clusters of matter and antimatter introduced a few years ago in the context of Neutrosophy. The connection between 'unmatter' and 'unparticle' is explained in detail in this paper. Unparticles have very odd properties which result from the fact that they represent fractional field quanta. Unparticles are manifested as mixed states that contain arbitrary mixtures of particles and antiparticles (therefore they simultaneously evolve "forward" and "backward" in time); hence the connection with unmatter. Using fractal operators of differentiation and integration we obtain the connection between unparticle and unmatter. The term 'unmatter' was coined in 2004 by F. Smarandache, who published three papers on the subject.
[5634] vixra:1003.0052 [pdf]
International Injustice in Science
In scientific research, it is important to keep our freedom of thinking and not be yoked by others' theories without checking them, no matter where they come from. Cogito, ergo sum (I think, therefore I am), said Descartes (1596-1650), and this Latin aphorism became his first principle in philosophy.
[5635] vixra:1003.0049 [pdf]
Numerical Solution of Radial Biquaternion Klein-Gordon Equation
In the preceding article we argued that the biquaternionic extension of the Klein-Gordon equation has a solution containing an imaginary part, which differs appreciably from the known solution of the KGE. In the present article we present a numerical (computer) solution of the radial biquaternionic KGE (radial BQKGE), which differs appreciably from the conventional Yukawa potential. Further observation is of course recommended in order to refute or verify this proposition.
[5636] vixra:1003.0048 [pdf]
Thirty Unsolved Problems in the Physics of Elementary Particles
Contrary to what some physicists and graduate students used to think, namely that physics has come to the point where the only improvement needed is merely adding more decimal places to the masses of elementary particles or the gravitational constant, there is a number of unsolved problems in this field that may require the whole theory to be reassessed. In the present article we discuss thirty of those unsolved problems and their likely implications. In the first section we discuss some well-known problems in cosmology and particle physics, and the other unsolved problems are discussed in the next section.
[5637] vixra:1003.0047 [pdf]
Yang-Mills Field from Quaternion Space Geometry, and Its Klein-Gordon Representation
Analysis of the covariant derivatives of vectors in quaternion (Q-) spaces, performed using the Q-unit spinor-splitting technique and the SL(2C)-invariance of quaternion multiplication, reveals a close connexion between Q-geometry objects and the characteristics of the Yang-Mills (YM) field principle. In particular, it is shown that the Q-connexion (with quaternion non-metricity) and the related curvature of four-dimensional (4D) space-times with 3D Q-space sections are formally equivalent, respectively, to the YM field potential and strength, traditionally emerging from the minimal action assumption. Plausible links between the YM field equation and the Klein-Gordon equation, in particular via its known isomorphism with the Duffin-Kemmer equation, are also discussed.
[5638] vixra:1003.0046 [pdf]
Reply to "Notes on Pioneer Anomaly Explanation by Satellite-Shift Formula of Quaternion Relativity"
In the present article we would like to make a few comments on a recent paper by A. Yefremov in this journal [1]. It is interesting to note that he concludes his analysis by pointing out that, using the full machinery of Quaternion Relativity, it is possible to explain the Pioneer XI anomaly in excellent agreement with the observed data, and to explain around 45% of the Pioneer X anomalous acceleration. We argue that perhaps it will be necessary to consider an extension of the Lorentz transformation to a Finsler-Berwald metric, as discussed by a number of authors in the past few years. In this regard, it would be interesting to see if the use of the extended Lorentz transformation could also elucidate the long-lasting problem known as the Ehrenfest paradox. Further observation is of course recommended in order to refute or verify this proposition.
[5639] vixra:1003.0045 [pdf]
Numerical Solution of Time-Dependent Gravitational Schrödinger Equation
In recent years, there have been attempts to describe the quantization of planetary distances based on the time-independent gravitational Schrödinger equation, including Rubcic & Rubcic's method and Nottale's Scale Relativity method. Nonetheless, there was no solution yet for the time-dependent gravitational Schrödinger equation (TDGSE). In the present paper, a numerical solution of the time-dependent gravitational Schrödinger equation is presented, apparently for the first time. This numerical solution leads to the gravitational Bohr radius, as expected. In the subsequent section, we also discuss a plausible extension of this gravitational Schrödinger equation to include the effect of a phion condensate via the Gross-Pitaevskii equation, as described recently by Moffat. Alternatively, one can consider this condensate from the viewpoint of Bogoliubov-deGennes theory, which can be approximated with a coupled time-independent gravitational Schrödinger equation. Further observation is of course recommended in order to refute or verify this proposition.
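The abstract does not say which numerical scheme the paper uses. As a generic illustration of how a time-dependent Schrödinger equation can be integrated numerically, the sketch below applies the standard Crank-Nicolson method on a 1D grid with a placeholder harmonic potential; the paper's gravitational version would instead use an attractive Newtonian-type potential, and all grid and time-step values here are illustrative assumptions:

```python
import numpy as np

# Minimal Crank-Nicolson sketch for a 1D time-dependent Schrödinger
# equation  i psi_t = -psi_xx / 2 + V psi  (units with hbar = m = 1).
# V is a placeholder potential, not the paper's gravitational one.

def crank_nicolson_step(psi, V, dx, dt):
    n = len(psi)
    # Hamiltonian: central-difference Laplacian plus diagonal potential
    H = np.zeros((n, n), dtype=complex)
    for i in range(n):
        H[i, i] = 1.0 / dx**2 + V[i]
        if i > 0:
            H[i, i - 1] = -0.5 / dx**2
        if i < n - 1:
            H[i, i + 1] = -0.5 / dx**2
    I = np.eye(n)
    A = I + 0.5j * dt * H      # implicit half-step
    B = I - 0.5j * dt * H      # explicit half-step
    return np.linalg.solve(A, B @ psi)

x = np.linspace(-10.0, 10.0, 200)
dx = x[1] - x[0]
psi = np.exp(-x**2).astype(complex)          # Gaussian initial state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize to unit probability
V = 0.5 * x**2                               # harmonic placeholder potential
for _ in range(50):
    psi = crank_nicolson_step(psi, V, dx, dt=0.01)
# Crank-Nicolson is a Cayley transform of a Hermitian H, hence unitary,
# so the total probability stays at 1 up to round-off:
print(np.sum(np.abs(psi)**2) * dx)
```

The norm-conservation check at the end is the usual sanity test for any such integrator; an explicit Euler scheme on the same equation would fail it.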
[5640] vixra:1003.0044 [pdf]
Less Mundane Explanation of Pioneer Anomaly from Q-Relativity
There have been various explanations of the Pioneer blueshift anomaly in the past few years; nonetheless no explanation has been offered from the viewpoint of Q-relativity physics. In the present paper it is argued that the Pioneer anomalous blueshift may be caused by the Pioneer spacecraft experiencing an angular shift induced by a similar Q-relativity effect which may also affect Jupiter's satellites. By taking into consideration the "aether drift" effect, the proposed method as described herein could explain the Pioneer blueshift anomaly within a ~0.26% error range, which speaks for itself. Another new proposition of redshift quantization is also put forward from the gravitational Bohr radius, which is consistent with Bohr-Sommerfeld quantization. Further observation is of course recommended in order to refute or verify this proposition.
[5641] vixra:1003.0043 [pdf]
Plausible Explanation of Quantization of Intrinsic Redshift from Hall Effect and Weyl Quantization
Using the phion condensate model as described by Moffat [1], we consider a plausible explanation of the (Tifft) intrinsic redshift quantization described by Bell [6] as a result of the Hall effect in a rotating frame. We also discuss another alternative to explain redshift quantization from the viewpoint of Weyl quantization, which could yield Bohr-Sommerfeld quantization.
[5642] vixra:1003.0042 [pdf]
A Note on Geometric and Information Fusion Interpretation of Bell's Theorem and Quantum Measurement
In this paper we present four possible extensions of Bell's Theorem: a Bayesian and Fuzzy Bayesian interpretation, an Information Fusion interpretation, a Geometric interpretation, and the viewpoint of the photon fluid as a medium for quantum interaction.
[5643] vixra:1003.0041 [pdf]
Schrödinger Equation and the Quantization of Celestial Systems
In the present article, we argue that it is possible to generalize the Schrödinger equation to describe the quantization of celestial systems. While this hypothesis has been described by some authors, including Nottale, here we argue that such a macroquantization was formed by topological superfluid vortices. We also provide a derivation of the Schrödinger equation from the Gross-Pitaevskii-Ginzburg equation, which supports this superfluid-dynamics interpretation.
[5644] vixra:1003.0039 [pdf]
Unmatter Entities Inside Nuclei, Predicted by the Brightsen Nucleon Cluster Model
Applying the R. A. Brightsen Nucleon Cluster Model of the atomic nucleus, we discuss how unmatter entities (conjugations of matter and antimatter) may be formed as clusters inside a nucleus. The model supports the hypothesis that antimatter nucleon clusters are present as a parton (sensu Feynman) superposition within the spatial confinement of the proton (<sup>1</sup>H<sub>1</sub>), the neutron, and the deuteron (<sup>1</sup>H<sub>2</sub>). If the model's predictions can be confirmed both mathematically and experimentally, a new physics is suggested. A proposed experiment is connected to orthopositronium annihilation anomalies which, being related to a known unmatter entity, orthopositronium (built on an electron and a positron), open a way to expand the Standard Model.
[5645] vixra:1003.0038 [pdf]
Verifying Unmatter by Experiments, More Types of Unmatter, and a Quantum Chromodynamics Formula
As shown, experiments have registered unmatter: a new kind of matter whose atoms include both nucleons and anti-nucleons, with a very short life span of no more than 10<sup>-20</sup> sec. Stable states of unmatter can be built on quarks and anti-quarks: applying the unmatter principle here, one obtains a quantum chromodynamics formula that gives many combinations of unmatter built on quarks and anti-quarks.
[5646] vixra:1003.0037 [pdf]
Entangled States and Quantum Causality Threshold in the General Theory of Relativity
This article shows that Synge-Weber's classical problem statement about two particles interacting by a signal can be reduced to the case where the same particle is located at two different points A and B of the basic space-time at the same moment of time, so that the states A and B are entangled. This particle, being actually two particles in the entangled states A and B, can interact with itself by radiating a photon (signal) at the point A and absorbing it at the point B. That is our goal: to introduce entangled states into General Relativity. Under specific physical conditions the entangled particles in General Relativity can reach a state where neither particle A nor particle B can be the cause of future events. We call this specific state the Quantum Causality Threshold.
[5647] vixra:1003.0036 [pdf]
There Is No Speed Barrier for a Wave Phase Nor for Entangled Particles
In this short paper, as an extension and consequence of the Einstein-Podolsky-Rosen paradox and Bell's inequality, one promotes the hypothesis (it has been called the Smarandache Hypothesis [1, 2, 3]) that there is no speed barrier in the Universe and one can construct arbitrary speeds; one also asks whether it is possible to have an infinite speed (instantaneous transmission). Future research: to study the composition of faster-than-light velocities and what happens with the laws of physics at faster-than-light velocities.
[5648] vixra:1003.0035 [pdf]
A New Form of Matter-Unmatter, Composed of Particles and Anti-Particles
Besides matter and antimatter there must exist unmatter (as a new form of matter), in accordance with the neutrosophy theory that between an entity <A> and its opposite <AntiA> there exist intermediate entities <NeutA>. Unmatter is neither matter nor antimatter, but something in between. An atom of unmatter is formed either by (1) electrons, protons, and antineutrons, or by (2) antielectrons, antiprotons, and neutrons. At CERN it will be possible to test the production of unmatter. The existence of unmatter in the universe has a chance similar to that of antimatter, and its production is also difficult for present technologies.
[5649] vixra:1003.0034 [pdf]
Quantum Quasi-Paradoxes and Quantum Sorites Paradoxes
There can be generated many paradoxes or quasi-paradoxes that may occur from the combination of quantum and non-quantum worlds in physics. Even the passage from the micro-cosmos to the macro-cosmos, and reciprocally, can generate unsolved questions or counter-intuitive ideas. We define a quasi-paradox as a statement which has a prima facie self-contradictory support or an explicit contradiction, but which is not completely proven as a paradox. We present herein four elementary quantum quasi-paradoxes and their corresponding quantum Sorites paradoxes, which form a class of quantum quasi-paradoxes.
[5650] vixra:1003.0023 [pdf]
Neutrosophic Methods in General Relativity
In this work the authors apply concepts of Neutrosophic Logic to the General Theory of Relativity to obtain a generalisation of Einstein's four-dimensional pseudo-Riemannian differentiable manifold in terms of Smarandache Geometry (Smarandache manifolds), by which new classes of relativistic particles and non-quantum teleportation are developed. Fundamental features of Neutrosophic Logic are its denial of the Law of the Excluded Middle, and open (or estimated) levels of truth, falsity and indeterminacy. Both Neutrosophic Logic and Smarandache Geometry were invented some years ago by one of the authors (F. Smarandache). The application of these purely mathematical theories to General Relativity reveals hitherto unknown possibilities for Einstein's theory. The issue of how closely the new theoretical possibilities account for physical phenomena, and indeed the viability of the concept of a four-dimensional space-time continuum itself as a fundamental model of Nature, must of course be explored by experiment.
[5651] vixra:1003.0021 [pdf]
S-Denying of the Signature Conditions Expands General Relativity's Space
We apply the S-denying procedure to the signature conditions in a four-dimensional pseudo-Riemannian space, i.e., we change one (or even all) of the conditions to be partially true and partially false. We obtain five kinds of expanded space-time for General Relativity. Kind I permits the space-time to be in collapse. Kind II permits the space-time to change its own signature. Kind III has peculiarities linked to the third signature condition. Kind IV permits regions where the metric fully degenerates: there may be non-quantum teleportation, and a home for virtual photons. Kind V is common to kinds I, II, III, and IV.
[5652] vixra:1003.0020 [pdf]
Positive, Neutral and Negative Mass-Charges in General Relativity
As shown, any four-dimensional proper vector has two observable projections onto the time line, attributed to our world and the mirror world (for a mass-bearing particle, the projections are attributed to positive and negative mass-charges). As predicted, there should be a class of neutrally mass-charged particles that inhabit neither our world nor the mirror world. Inside the space-time area (membrane) the space rotates at the light speed, and all particles also move at the light speed. So, the predicted particles of the neutrally mass-charged class should appear as light-like vortices.
[5653] vixra:1003.0019 [pdf]
A Note on Unified Statistics Including Fermi-Dirac, Bose-Einstein, and Tsallis Statistics, and Plausible Extension to Anisotropic Effect
In the light of some recent hypotheses suggesting a plausible unification of thermostatistics, in which Fermi-Dirac, Bose-Einstein and Tsallis statistics become its special subsets, we consider a further plausible extension to include a non-integer Hausdorff dimension, which becomes a realization of the fractal entropy concept. In the subsequent section, we also discuss a plausible extension of this unified statistics to include an anisotropic effect by using a quaternion oscillator, which may be observed in the context of the Cosmic Microwave Background Radiation. Further observation is of course recommended in order to refute or verify this proposition.
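The idea that Fermi-Dirac, Bose-Einstein and Tsallis statistics arise as special cases of one family can be illustrated with the textbook parametrization below. This is a generic sketch, not the specific unification proposed in the paper: the parameter `a` switches between the standard statistics, and the Tsallis q-exponential reduces to the ordinary exponential as q approaches 1.

```python
import math

# Occupation number n(x) = 1 / (exp_q(x) + a), with x = (E - mu)/kT:
#   a = +1  -> Fermi-Dirac
#   a = -1  -> Bose-Einstein
#   a =  0  -> Maxwell-Boltzmann
# exp_q is the Tsallis q-exponential; q -> 1 recovers exp(x).

def exp_q(x, q):
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def occupation(x, a, q=1.0):
    """Unified occupation number; a selects the statistics family."""
    return 1.0 / (exp_q(x, q) + a)

x = 2.0  # illustrative value of (E - mu)/kT
fd = occupation(x, a=+1)   # Fermi-Dirac
be = occupation(x, a=-1)   # Bose-Einstein
mb = occupation(x, a=0)    # Maxwell-Boltzmann
fd_q = occupation(x, a=+1, q=1.000001)  # q near 1 recovers ordinary FD
print(fd, mb, be)
```

For any x > 0 the expected ordering fd < mb < be holds, since the +1 in the denominator suppresses fermionic occupation while the -1 enhances bosonic occupation.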
[5654] vixra:1003.0018 [pdf]
A New Derivation of Biquaternion Schrödinger Equation and Plausible Implications
In the preceding article we argued that the biquaternionic extension of the Klein-Gordon equation has a solution containing an imaginary part, which differs appreciably from the known solution of the KGE. In the present article we discuss some possible interpretations of this imaginary part of the solution of the biquaternionic KGE (BQKGE); thereafter we offer a new derivation of the biquaternion Schrödinger equation using this method. Further observation is of course recommended in order to refute or verify this proposition.
[5655] vixra:1003.0017 [pdf]
An Exact Mapping from Navier-Stokes Equation to Schrödinger Equation via Riccati Equation
In the present article we argue that it is possible to write down a Schrödinger representation of the Navier-Stokes equation via the Riccati equation. The proposed approach, while differing appreciably from other methods such as the one proposed by R. M. Kiehn, has an advantage: it enables us to extend further to quaternionic and biquaternionic versions of the Navier-Stokes equation, for instance via Kravchenko's and Gibbon's routes. Further observation is of course recommended in order to refute or verify this proposition.
[5656] vixra:1003.0016 [pdf]
A Note on Computer Solution of Wireless Energy Transmit via Magnetic Resonance
In the present article we argue that it is possible to find a numerical solution of the coupled magnetic resonance equations describing wireless energy transmission, as discussed recently by Karalis (2006) and Kurs et al. (2007). The proposed approach may be found useful for understanding the phenomena of magnetic resonance. Further observation is of course recommended in order to refute or verify this proposition.
[5657] vixra:1003.0015 [pdf]
What Gravity Is. Some Recent Considerations
It is well known that when it comes to discussions among physicists concerning the meaning and nature of gravitation, the room temperature can get rather hot. Therefore, for the sake of clarity, it seems worthwhile to put all the choices on the table and consider each choice's features and problems. The present article describes a non-exhaustive list of such gravitation theories for the purpose of inviting further and clearer discussion.
[5658] vixra:1003.0013 [pdf]
Numerical Solution of Quantum Cosmological Model Simulating Boson and Fermion Creation
A numerical solution of Wheeler-De Witt equation for a quantum cosmological model simulating boson and fermion creation in the early Universe evolution is presented. This solution is based on a Wheeler-DeWitt equation obtained by Krechet, Fil'chenkov, and Shikin, in the framework of quantum geometrodynamics for a Bianchi-I metric.
[5659] vixra:1003.0010 [pdf]
A Derivation of Maxwell Equations in Quaternion Space
Quaternion space and its respective Quaternion Relativity (also called Rotational Relativity) have been defined in a number of papers, including [1], and it can be shown that this new theory is capable of describing relativistic motion in an elegant and straightforward way. Nonetheless there are subsequent theoretical developments which remain open questions, for instance the derivation of Maxwell's equations in Q-space. The purpose of the present paper is therefore to derive a consistent description of Maxwell's equations in Q-space. First we consider a simplified method similar to Feynman's derivation of Maxwell's equations from the Lorentz force. Then we present another derivation method using the Dirac decomposition introduced by Gersten (1999). Further observation is of course recommended in order to refute or verify some implications of this proposition.
[5660] vixra:1002.0050 [pdf]
The Sea of Super-Strong Interacting Gravitons as the Cause of Gravity
The Newtonian attraction turns out to be the main statistical effect in a sea of super-strong interacting gravitons, with bodies themselves not being sources of gravitons: only the correlational properties of the in- and out-fluxes of gravitons in their neighbourhood are changed due to interaction with bodies. Other quantum effects of low-energy quantum gravity are the following: redshifts; their analog, a deceleration of massive bodies; and an additional relaxation of any light flux.
[5661] vixra:1002.0048 [pdf]
Why Do the Electron and the Positron Possess the Same Rest Mass but Charges of Equal Modulus and Opposite Signs? And Why Do Both Annihilate?
We demonstrate how rest masses and electric charges are generated by the 5D extra dimension of a Universe possessing a higher-dimensional nature, using the Hamilton-Jacobi equation, in agreement with the point of view of Ponce De Leon. In the generation process we explain how and why antiparticles have the same rest mass m<sub>0</sub> but charges of equal modulus and opposite signs when compared to particles, and we also explain why both annihilate.
[5662] vixra:1002.0047 [pdf]
Gravitational Field of a Condensed Matter Model of the Sun: The Space Breaking Meets the Asteroid Strip
This seminal study deals with the exact solution of Einstein's field equations for a sphere of incompressible liquid without the additional limitation initially introduced in 1916 by Karl Schwarzschild, according to which the space-time metric must have no singularities. The obtained exact solution is then applied to the Universe, the Sun, and the planets, by the assumption that these objects can be approximated as spheres of incompressible liquid. It is shown that gravitational collapse of such a sphere is permitted for an object whose characteristics (mass, density, and size) are close to the Universe. Meanwhile, there is a spatial break associated with any of the mentioned stellar objects: the break is determined as the approaching to infinity of one of the spatial components of the metric tensor. In particular, the break of the Sun's space meets the Asteroid strip, while Jupiter's space break meets the Asteroid strip from the outer side. Also, the space breaks of Mercury, Venus, Earth, and Mars are located inside the Asteroid strip (inside the Sun's space break).
[5663] vixra:1002.0046 [pdf]
On the Speed of Rotation of the Isotropic Space: Insight into the Redshift Problem
This study applies the mathematical method of chronometric invariants, which are physically observable quantities in the four-dimensional space-time (Zelmanov A.L., Soviet Physics Doklady, 1956, vol.1, 227-230). The isotropic region of the space-time is considered (it is known as the isotropic space). This is the home of massless light-like particles (e.g. photons). It is shown that the isotropic space rotates with a linear velocity equal to the velocity of light. The rotation slows in the presence of gravitation. Even under the simplified conditions of Special Relativity, the isotropic space still rotates with the velocity of light. A manifestation of this effect is the observed Hubble redshift explained as energy loss of photons with distance, for work against the non-holonomity (rotation) field of the isotropic space wherein they travel (Rabounski D. The Abraham Zelmanov Journal, 2009, vol.2, 11-28). It is shown that the light-speed rotation of the isotropic space has a purely geometrical origin due to the space-time metric, where time is presented as the fourth coordinate, expressed through the velocity of light.
[5664] vixra:1002.0045 [pdf]
Hubble Redshift due to the Global Non-Holonomity of Space
In General Relativity, the change in energy of a freely moving photon is given by the scalar equation of the isotropic geodesic equations, which manifests the work produced on a photon being moved along a path. I solved the equation in terms of physical observables (Zelmanov A. L., Soviet Physics Doklady, 1956, vol. 1, 227-230) and in the large scale approximation, i.e. with gravitation and deformation neglected, while supposing the isotropic space to be globally non-holonomic (the time lines are non-orthogonal to the spatial section, a condition manifested by the rotation of the space). The solution is E = E<sub>0</sub> exp(-Ωat/c), where Ω is the angular velocity of the space (it meets the Hubble constant H<sub>0</sub> = c/a = 2.3x10<sup>-18</sup> sec<sup>-1</sup>), a is the radius of the Universe, t = r/c is the time of the photon's travel. Thus, a photon loses energy with distance due to the work against the field of the space non-holonomity. According to the solution, the redshift should be z = exp(H<sub>0</sub> r/c)-1 ≈ H<sub>0</sub> r/c. This solution explains both the redshift z = H<sub>0</sub> r/c observed at small distances and the non-linearity of the empirical Hubble law due to the exponent (at large r). The ultimate redshift in a non-expanding universe, according to the theory, should be z = exp(π)-1 = 22.14.
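The arithmetic quoted in this abstract can be checked directly. The sketch below uses only the values stated in the text (H<sub>0</sub> = 2.3x10<sup>-18</sup> sec<sup>-1</sup>, z = exp(H<sub>0</sub>r/c) - 1, ultimate z = exp(π) - 1); the distance r is an illustrative assumption:

```python
import math

# Values taken from the abstract above
c = 2.998e8        # speed of light, m/s
H0 = 2.3e-18       # Hubble constant, s^-1
a = c / H0         # implied radius of the Universe (H0 = c/a), ~1.3e26 m

# Small-distance limit: z = exp(H0*r/c) - 1 reduces to the linear
# Hubble law z = H0*r/c when H0*r/c << 1.
r = 1e24           # m; an illustrative "small" cosmological distance
z_exact = math.exp(H0 * r / c) - 1.0
z_linear = H0 * r / c
print(z_exact, z_linear)   # nearly equal at small r

# Ultimate redshift claimed for a non-expanding universe:
z_max = math.exp(math.pi) - 1.0
print(round(z_max, 2))     # 22.14, matching the abstract
```

At the chosen r the exponential and linear forms differ only at second order in H<sub>0</sub>r/c, which is the non-linearity of the Hubble law the abstract attributes to large distances.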
[5665] vixra:1002.0035 [pdf]
Two-World Background of Special Relativity. Part II
The two-world background of the Special Theory of Relativity started in part one of this article is continued in this second part. Four-dimensional inversion is shown to be a special Lorentz transformation that transforms the positive spacetime coordinates of a frame of reference in the positive universe into the negative spacetime coordinates of the symmetry-partner frame of reference in the negative universe in the two-world picture, contrary to the conclusion that four-dimensional inversion is impossible as an actual transformation of the coordinates of a frame of reference in the existing one-world picture. By starting with the negative spacetime dimensions in the negative universe derived in part one, the signs of mass and other physical parameters and physical constants in the negative universe are derived by application of the symmetry of laws between the positive and negative universes. The invariance of natural laws in the negative universe is demonstrated. The derived negative sign of mass in the negative universe is the conclusion of an over century-old effort towards the development of the concept of negative mass in physics.
[5666] vixra:1002.0034 [pdf]
Two-World Background of Special Relativity. Part I
A new sheet of spacetime is isolated and added to the existing sheet, thereby yielding a pair of co-existing sheets of spacetimes, which are four-dimensional inversions of each other. The separation of the spacetimes by the special-relativistic event horizon compels an interpretation of the existence of a pair of symmetrical worlds (or universes) in nature. Furthermore, a flat two-dimensional intrinsic spacetime that underlies the flat four-dimensional spacetime in each universe is introduced. The four-dimensional spacetime is the outward manifestation of the two-dimensional intrinsic spacetime, just as the Special Theory of Relativity (SR) on four-dimensional spacetime is merely the outward manifestation of the intrinsic Special Theory of Relativity (φSR) on two-dimensional intrinsic spacetime. A new set of diagrams in the two-world picture that involves relative rotation of the coordinates of the two-dimensional intrinsic spacetime is drawn, and the intrinsic Lorentz transformation is derived from it. The Lorentz transformation in SR is then written directly from the intrinsic Lorentz transformation in φSR without any need to draw diagrams involving relative rotation of the coordinates of four-dimensional spacetime, as usually done until now. Indeed every result of SR can be written directly from the corresponding result of φSR. The non-existence of the light cone concept in the two-world picture is shown, and a good prospect for making the Lorentz group SO(3,1) compact in the two-world picture is highlighted.
[5667] vixra:1002.0033 [pdf]
Advanced Topics in Information Dynamics
This work is a sequel to the book "A Treatise in Information Geometry", submitted to vixra in late 2009. The aim of this dissertation is to continue the development of fractal geometry initiated in the former volume. This culminates in the construction of first order self-referential geometry, which is a special form of 8-tensor construction on a differential manifold with nice properties. The associated information theory has many powerful and interesting consequences. Additionally within this treatise, various themes in modern mathematics are surveyed: Galois theory, Category theory, K-theory, and Sieve theory, and various connections between these structures and information theory are investigated. In particular it is demonstrated that the exotic geometric analogues of these constructions - save for Category theory, which is foundational - form special cases of the self-referential calculus.
[5668] vixra:1002.0024 [pdf]
A Derivation of π(n) Based on a Stability Analysis of the Riemann-Zeta Function
The prime-number counting function π(n), which is significant in the prime number theorem, is derived by analyzing the region of convergence of the real part of the Riemann-Zeta function using the unilateral z-transform. In order to satisfy the stability criteria of the z-transform, it is found that the real part of the Riemann-Zeta function must converge to the prime-counting function.
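For reference, the target function π(n) is simple to compute directly by sieving; the following is a standard textbook implementation of the counting function (an illustration added here, not the z-transform derivation the abstract describes).

```python
def prime_pi(n):
    """Prime-counting function pi(n): number of primes <= n,
    computed with a sieve of Eratosthenes."""
    if n < 2:
        return 0
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Mark all multiples of p starting at p*p as composite.
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

print(prime_pi(100))   # 25
```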
[5669] vixra:1002.0023 [pdf]
The Dark Energy Problem
The proposal for dark energy based on Type Ia Supernovae redshift is examined. It is found that the linear and non-linear portions of the Hubble Redshift are easily explained by the use of the Hubble Sphere model, where two interacting Hubble spheres sharing a common mass-energy density result in a decrease in energy as a function of distance from the object being viewed. Interpreting the non-linear portion of the redshift curve as a decrease in interacting volume between neighboring Hubble Spheres removes the need for dark energy.
[5670] vixra:1002.0021 [pdf]
The Mass of the Universe and Other Relations in the Idea of a Possible Cosmic Quantum Mechanics
General Relativity predicts the existence of relativistic corrections to the static Newtonian potential which can be calculated and verified experimentally. The idea leading to quantum corrections at large distances is that of the interactions of massless particles which only involve their coupling energies at low energies. In this short paper we attempt to propose the Sagnac interferometric technique as a way of detecting the relativistic correction suggested for the Newtonian potential, and thus obtaining an estimate for the phase difference using a satellite orbiting at an altitude of 250 km above the surface of the Earth.
[5671] vixra:1002.0020 [pdf]
Satellite Motion in a Non-Singular Potential
We study the effects of a non-singular gravitational potential on satellite orbits by deriving the corresponding time rates of change of its orbital elements. This is achieved by expanding the non-singular potential into a power series up to second order. This series contains three terms, the first being the Newtonian potential and the other two, here R1 (first order term) and R2 (second order term), expressing deviations of the non-singular potential from the Newtonian. These deviations from the Newtonian potential are taken as disturbing potential terms in the Lagrange planetary equations that provide the time rates of change of the orbital elements of a satellite in a non-singular gravitational field. We split these effects into secular, low and high frequency components and we evaluate them numerically using the low Earth orbiting mission Gravity Recovery and Climate Experiment (GRACE). We show that the secular effects of the second-order disturbing term R2 on the perigee and the mean anomaly are 4".307*10<sup>-9</sup>/a, and -2".533*10<sup>-15</sup>/a, respectively. These effects are far too small and most likely cannot easily be observed with today's technology. Numerical evaluation of the low and high frequency effects of the disturbing term R2 on low Earth orbiters like GRACE shows they are very small and undetectable by current observational means.
[5672] vixra:1002.0016 [pdf]
Detection of the Relativistic Corrections to the Gravitational Potential Using a Sagnac Interferometer
General Relativity predicts the existence of relativistic corrections to the static Newtonian potential which can be calculated and verified experimentally. The idea leading to quantum corrections at large distances is that of the interactions of massless particles which only involve their coupling energies at low energies. In this short paper we attempt to propose the Sagnac interferometric technique as a way of detecting the relativistic correction suggested for the Newtonian potential, and thus obtaining an estimate for the phase difference using a satellite orbiting at an altitude of 250 km above the surface of the Earth.
[5673] vixra:1002.0015 [pdf]
Geodetic Precession of the Spin in a Non-Singular Gravitational Potential
Using a non-singular gravitational potential which appears in the literature, we analytically derived and investigated the equations describing the precession of a body's spin orbiting around a main spherical body of mass M. The calculation has been performed using a non-exact Schwarzschild solution, and further assuming that the gravitational field of the Earth is more than that of a rotating mass. The general theory of relativity predicts that the direction of the gyroscope will change at a rate of 6.6 arcsec/year for a gyroscope in a 650 km high polar orbit. In our case a precession rate of the spin of a magnitude very similar to that predicted by general relativity was calculated, resulting in a ΔS<sub>geo</sub>/S<sub>geo</sub> = -5.570*10<sup>-2</sup>
[5674] vixra:1002.0014 [pdf]
Particles Here and Beyond the Mirror
This is a research on all kinds of particles which could be conceivable in the space-time of General Relativity. In addition to mass-bearing particles and light-like particles, zero-particles are predicted: such particles can exist in a fully degenerate space-time region (zero-space). Zero-particles appear as standing light waves, which travel instantly (non-quantum teleportation of photons); they might be observed in a further development of the "stopped light experiment" which was first conducted in 2001, at Harvard, USA. The theoretical existence of two separate regions in the space-time is also shown, where the observable time flows into the future and into the past (our world and the mirror world). These regions are separated by a space-time membrane wherein the observable time stops. A few other problems are also considered. It is shown, through Killing's equations, that geodesic motion of particles is a result of stationary geodesic rotation of the space which hosts them. Concerning the theory of gravitational wave detectors, it is shown that both a free-mass detector and a solid-body detector may register a gravitational wave only if such a detector bears an oscillation of the butt-ends.
[5675] vixra:1002.0013 [pdf]
Fields, Vacuum, and the Mirror Universe
In this book, we build the theory of non-geodesic motion of particles in the space-time of General Relativity. Motion of a charged particle in an electromagnetic field is constructed in curved space-time (in contrast to the regular considerations held in Minkowski's space of Special Relativity). Spin particles are explained in the framework of the variational principle: this approach distinctly shows that elementary particles should have masses governed by a special quantum relation. Physical vacuum and forces of non-Newtonian gravitation acting in it are determined through the lambda-term in Einstein's equations. A cosmological concept of the inversion explosion of the Universe from a compact object with the radius of an electron is suggested. Physical conditions inside a membrane that separates space-time regions where the observable time flows into the future and into the past (our world and the mirror world) are examined.
[5676] vixra:1002.0009 [pdf]
Do Ultra High Energy Cosmic Rays Form a Part of Dark Matter?
It is considered whether or not recent ultra high energy cosmic ray observations hint at the possibility that the unaccounted for higher energy rays have become dark matter.
[5677] vixra:1001.0016 [pdf]
On N-ary Algebras, Polyvector Gauge Theories in Noncommutative Clifford Spaces and Deformation Quantization
Polyvector-valued gauge field theories in noncommutative Clifford spaces are presented. The noncommutative binary star products are associative and require the use of the Baker-Campbell-Hausdorff formula. An important relationship among the n-ary commutators of noncommuting spacetime coordinates [X<sup>1</sup>,X<sup>2</sup>, ......,X<sup>n</sup>] and the poly-vector valued coordinates X<sup>123...n</sup> in noncommutative Clifford spaces is explicitly derived and is given by [X<sup>1</sup>,X<sup>2</sup>, ......,X<sup>n</sup>] = n! X<sup>123...n</sup>. It is argued how the large N limit of n-ary commutators of n hyper-matrices X<sub>i<sub>1</sub>i<sub>2</sub>...i<sub>n</sub></sub> leads to Eguchi-Schild p-brane actions when p+1 = n. A noncommutative n-ary generalized star product of functions is provided which is associated with the deformation quantization of n-ary structures. Finally, brief comments are made about the mapping of the Nambu-Heisenberg n-ary commutation relations of linear operators into the deformed Nambu-Poisson brackets of their corresponding symbols.
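The quoted relation [X<sup>1</sup>, ..., X<sup>n</sup>] = n! X<sup>12...n</sup> can be spot-checked in the simplest case n = 2 using Pauli matrices as anticommuting Clifford generators; this is an illustrative sketch of the commutative-coordinate limit, not the noncommutative C-space construction of the paper.

```python
import numpy as np

# Pauli matrices satisfy s_i s_j + s_j s_i = 2 delta_ij I,
# so they serve as anticommuting Clifford-algebra generators.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

# n = 2 instance of [X^1, ..., X^n] = n! X^{12...n}:
# for anticommuting generators, the bivector X^{12} is just s1 s2,
# and the commutator equals 2! times it.
commutator = s1 @ s2 - s2 @ s1
bivector = s1 @ s2
assert np.allclose(commutator, 2 * bivector)   # 2! = 2
print("n = 2 check passed: [X1, X2] = 2! X^{12}")
```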
[5678] vixra:1001.0007 [pdf]
Antimatter in Voids Might Explain Dark Matter and Dark Energy
Traditional theories on cosmology require a sufficient amount of CP violation, undiscovered matter particles and missing energy to explain what is observed in our universe today. Traditional theories on antimatter assume that if antimatter atoms existed, they would distort space-time in the same way as normal matter. However, gravitational forces between antimatter atoms have not yet been experimentally measured. This paper speculates on what might happen if antimatter distorts space-time opposite to normal matter. The repulsive force of the anti-hydrogen atoms in the voids between galaxies would cause those voids to expand and would exert additional forces pressing inward on the galaxies. Simulations of this model produce galaxy rotation curves which match what is observed today without the need for any Dark Matter. An explanation of the MOND paradigm is also provided.
[5679] vixra:0912.0046 [pdf]
Essays on Science Management
Several recent essays are presented on the difficulties scientific research and science researchers have by now been facing for a number of decades due to what goes by the name of "science management".
[5680] vixra:0912.0044 [pdf]
Tidal Charges From BraneWorld Black Holes As An Experimental Proof Of The Higher Dimensional Nature Of The Universe.
If the Universe has more than 4 dimensions, then its Extra Dimensional Nature generates in our 4D Spacetime a projection of a 5D Bulk Weyl Tensor. We demonstrate that this happens not only in the Randall-Sundrum BraneWorld Model, where this idea first appeared (developed by Shiromizu, Maeda and Sasaki), but also in the Kaluza-Klein 5D Induced Matter Formalism. As a matter of fact, this 5D Bulk Weyl Tensor appears in every Extra Dimensional Formalism (e.g. the Basini-Capozziello-Wesson-Overduin Dimensional Reduction from 5D to 4D), because the Bulk Weyl Tensor is generated by the Extra Dimensional Nature of the Universe regardless and independently of the mathematical formalism used, and the Dimensional Reductions from 5D to 4D of the Einstein and Ricci Tensors in the Kaluza-Klein and Randall-Sundrum Formalisms are similar. Also, as in the Randall-Sundrum Model, this 5D Bulk Weyl Tensor generates in the Kaluza-Klein Formalism a Tidal "Electric" Charge "seen" in 4D as an Extra Term in the Schwarzschild Metric resembling the Reissner-Nordstrom Metric. 
We analyze the Gravitational Bending of Light in this BraneWorld Black Hole Metric (known as the Dadhich, Maartens, Papadopoulos and Rezania metric), affected by an Extra Term due to the presence of the Tidal Charge, and compare it to the Bending of Light in the Reissner-Nordstrom Metric, with the Electric Charge also being generated by the Extra Dimension in agreement with the point of view of Ponce De Leon (explaining in the generation process how and why antiparticles have the same rest mass m<sub>0</sub> but charges of equal modulus and opposite signs when compared to particles). Unlike in the Reissner-Nordstrom Metric, the terms G/(c<sup>4</sup>) do not appear in the Tidal Charge Extra Term. We thereby conclude that the Extra Term produced by the Tidal Charge in the Bending of Light, due to the presence of the Extra Dimensions, is more suitable for detection than its Reissner-Nordstrom counterpart; this line of reasoning is one of the best approaches to testing the Higher Dimensional Nature of the Universe, and we describe a possible experiment using Artificial Satellites and the rotating BraneWorld Black Hole Metric to do so.
[5681] vixra:0912.0042 [pdf]
Wealth Creation and Science Research :Science Research, the Root of Wealth in Our Knowledge Society, is Endangered
Two vastly different historical stages in wealth creation are the traditional one based on agriculture during past millennia, and the one based on science research in our present globalizing knowledge society. The differences happen to be so considerable, and the emergence of the second stage relatively so recent, that the awareness of the full range of consequences regarding the proper pursuit of science research, which is the root of wealth in our knowledge society, is missing to an extent that may, even in the medium term, seriously endanger the sustainability of modern human society.
[5682] vixra:0912.0039 [pdf]
A Gravity, U(4) x U(4) Yang-Mills and Matter Unification in Clifford Spaces
A brief review of a Conformal Gravity and U(4) x U(4) Yang-Mills Unification model in four dimensions from a Clifford Gauge Field Theory in C-spaces (Clifford spaces) is presented. It is based on the (complex) Clifford Cl(4,C) algebra underlying a complexified four dimensional spacetime (8 real dimensions). The 16 fermions of each generation can be arranged into the 16 entries of a 4 x 4 matrix associated with the A = 1, 2, 3, ....., 16 indices corresponding to the dimensions of the Cl(4) gauge algebra. The Higgs sector is also part of the Cl(4)-algebra polyvector valued gauge field in C-space. The Yukawa couplings which furnish masses to the fermions (after symmetry breaking) admit a C-space geometric interpretation as well.
[5683] vixra:0912.0020 [pdf]
Further on the Black Hole War, or To Make the World Deep, Precise and Large Enough for Physics
Two somewhat long overdue arguments presented here may help in further clarifying the so-called "Black Hole War" and, beyond that, may be useful in Physics at large.
[5684] vixra:0911.0054 [pdf]
Is Entropy Related to the Synchronization of the Input/output Power of a System of Oscillators?
The objective of this paper is to identify a way to relate entropy with the synchronization of the input/output power of a system of oscillators. This view is ultimately reconciled through an examination of the geometric differences that exist between 2D shell and 3D lattice oscillator arrangements.
[5685] vixra:0911.0049 [pdf]
Yang-Mills Interactions and Gravity in Terms of Clifford Algebra
A model of Yang-Mills interactions and gravity in terms of the Clifford algebra Cl<sub>0,6</sub> is presented. The gravity and Yang-Mills actions are formulated as different order terms in a generalized action. The feebleness of gravity as well as the smallness of the cosmological constant and theta terms are discussed at the classical level. The invariance groups, including the de Sitter and the Pati-Salam SU(4) subgroups, consist of gauge transformations from either side of an algebraic spinor. Upon symmetry breaking via the Higgs fields, the remaining symmetries are the Lorentz SO(1,3), color SU(3), electromagnetic U(1)<sub>EM</sub>, and an additional U(1). The first generation leptons and quarks are identified with even and odd parts of spinor idempotent projections. There are still several shortcomings with the current model. Further research is needed to fully recover the standard model results.
[5686] vixra:0911.0039 [pdf]
Cosmological Implications of the Tetron Model of Elementary Particles
Based on a possible solution to the tetron spin problem, a modification of the standard Big Bang scenario is suggested, where the advent of a spacetime manifold is connected to the appearance of tetronic bound states. The metric tensor is constructed from tetron constituents and the reason for cosmic inflation is elucidated. Furthermore, there are natural dark matter candidates in the tetron model. The ratio of ordinary to dark matter in the universe is calculated to be 1:5.
[5687] vixra:0911.0038 [pdf]
Towards A Moyal Quantization Program of the Membrane
A Moyal deformation quantization approach to a spherical membrane (moving in flat target backgrounds) in the light cone gauge is presented. The physical picture behind this construction relies on viewing the two spatial membrane coordinates σ<sub>1</sub>, σ<sub>2</sub> as the two phase space variables q, p, and the temporal membrane coordinate τ as time. Solutions to the Moyal-deformed equations of motion are explicitly constructed in terms of elliptic functions. A knowledge of the Moyal-deformed light-cone membrane's Hamiltonian density H(q, p, τ) allows one to construct a time-dependent Wigner function ρ(q, p, τ) as a solution of the Moyal-Liouville equation, and from it one can obtain the expectation values of the operator < H > = Trace (ρH) that define the quantum average values of the energy density configurations of the membrane at any instant of time. It is shown how a time-dependent quartic oscillator with q<sup>4</sup>, p<sup>4</sup>, q<sup>2</sup>p<sup>2</sup> terms plays a fundamental role in the quantum treatment of membranes and displays an important p ↔ q duality symmetry.
[5688] vixra:0911.0030 [pdf]
On the Radiation Problem of High Mass Stars
A massive star is defined to be one with mass greater than ~ 8-10M<sub>☉</sub>. Central to the on-going debate on how these objects [massive stars] come into being is the so-called Radiation Problem. For nearly forty years, it has been argued that the radiation field emanating from massive stars is high enough to cause a global reversal of direct radial in-fall of material onto the nascent star. We argue that only in the case of a non-spinning isolated star does the gravitational field of the nascent star overcome the radiation field. An isolated non-spinning star is a non-spinning star without any circumstellar material around it, and the gravitational field beyond its surface is described exactly by Newton's inverse square law. The supposed fact that massive stars have a gravitational field that is much stronger than their radiation field is drawn from the analysis of an isolated massive star. In this case the gravitational field is much stronger than the radiation field. This conclusion has been erroneously extended to the case of massive stars enshrouded in gas & dust. We find, for the case of a non-spinning gravitating body where we take into consideration the circumstellar material, that at ~ 8 - 10M<sub>☉</sub> the radiation field will not reverse the radial in-fall of matter, but rather a stalemate between the radiation and gravitational fields will be achieved, i.e. in-fall is halted but not reversed. This picture is very different from the common picture that is projected and accepted in the popular literature that at ~ 8-10M<sub>☉</sub>, all the circumstellar material, from the surface of the star right up to the edge of the molecular core, is expected to be swept away by the radiation field. 
We argue that massive stars should be able to start their normal stellar processes if the molecular core from which they form has some rotation, because a rotating core exhibits an Azimuthally Symmetric Gravitational Field which causes there to be an accretion disk; along this disk, the radiation field cannot be much stronger than the gravitational field, hence this equatorial accretion disk becomes the channel via which the nascent massive star accretes all of its material.
[5689] vixra:0911.0029 [pdf]
Is the Doubly Special Relativity Theory Necessary?
Giovanni Amelino-Camelia (2002) has proposed a theory whose hope (should it be confirmed by experiments) is to supersede Einstein's 1905 Special Theory of Relativity (STR). This theory is known as Doubly Special Relativity (DSR) and it proposes a new observer-independent scale-length. At this scale, it is agreed that a particle that has reached this scale-length has entered the Quantum Gravity regime. According to the STR, observers will, in principle, not agree on whether or not a particle has reached this length, hence they will not agree as to when a particle enters the Quantum Gravity regime. This presents the STR with a "paradox". Amongst others, the DSR is fashioned to solve this "puzzle/paradox". We argue/show here that the STR already implies such a scale-length - it is the complete embodiment of the STR - thus we are left to excogitate: "Is the Doubly Special Relativity theory necessary?".
[5690] vixra:0911.0025 [pdf]
Bipolar Outflows as a Repulsive Gravitational Phenomenon Azimuthally Symmetric Theory of Gravitation (II)
This reading is part of a series on the Azimuthally Symmetric Theory of Gravitation (ASTG) set out in Nyambuya (2010a). This theory is built on Laplace-Poisson's well known equation, and it has been shown therein (Nyambuya 2010a) that the ASTG is capable of explaining - from a purely classical physics standpoint - the precession of the perihelion of solar planets as a consequence of the azimuthal symmetry emerging from the spin of the Sun. This symmetry has, and must have, an influence on the emergent gravitational field. We show herein that the emergent equations of the ASTG - under some critical conditions determined by the spin - do possess repulsive gravitational fields in the polar regions of the gravitating body in question. This places the ASTG on an interesting pedestal from which to infer the origins of outflows as a repulsive gravitational phenomenon. Outflows are a ubiquitous phenomenon found in star forming systems, and their true origin is a question yet to be settled. Given the current thinking on their origins, the direction that the present reading takes is nothing short of an asymptotic break from conventional wisdom; at the very least, it is a complete paradigm shift, as gravitation is not at all associated with - let alone considered to have anything to do with - the out-pour of matter, but is thought to be an all-attractive force that tries only to squash matter together into a single point. Additionally, we show that the emergent Azimuthally Symmetric Gravitational Field of the ASTG strongly suggests a solution to the supposed Radiation Problem that is thought to be faced by massive stars in their process of formation. That is, at ~ 8 - 10M<sub>☉</sub>, radiation from the nascent star is expected to halt the accretion of matter onto the nascent star. 
We show that in-falling material will fall onto the equatorial disk and from there, this material will be channelled onto the forming star via the equatorial plane; thus accretion of mass continues well past the critical value of ~ 8-10M<sub>☉</sub>, albeit via the disk. Along the equatorial plane, the net force (with the radiation force included) on any material thereon, right up to the surface of the star, is directed toward the forming star, hence accretion of mass by the nascent star is unhampered.
[5691] vixra:0911.0024 [pdf]
On a Four Dimensional Unified Field Theory of the Gravitational, Electromagnetic, Weak and the Strong Force.
The Gravitational, Electromagnetic, Weak & the Strong force are here brought together under a single roof via an extension of Riemann geometry to a new geometry (coined Riemann-Hilbert Space) that, unlike Riemann geometry, preserves both the length and the angle of a vector under parallel transport. The affine connection of this new geometry - the Riemann-Hilbert Space - is a tensor, and this leads us to a geodesic law that truly upholds the Principle of Relativity. The geodesic law emerging from the General Theory of Relativity (GTR) is well known to be in contempt of the Principle of Relativity, which is a principle upon which the GTR is founded. The geodesic law for particles in the GTR must be formulated in special (or privileged) coordinate systems, i.e. gaussian coordinate systems, whereas the Principle of Relativity clearly forbids the existence of special (or privileged) coordinate systems, in a manner redolent of the way the Special Theory of Relativity forbids the existence of an absolute (or privileged) frame of reference. In the low energy regime and at low spacetime curvature, the unified field equations derived herein are seen to reduce to the well known Maxwell-Proca equation, the non-abelian nuclear force field equations, the Lorentz equation of motion for charged particles, and the Dirac Equation. Further, in addition to the already existing four known forces, the theory predicts the existence of yet another force. We have coined this the super-force, and this force obeys SU(4, 4) gauge invariance. Furthermore, unlike in the GTR, gravitation is here represented by a single scalar potential, and the electromagnetic field and the nuclear forces are described by the electromagnetic vector potential (A<sub>μ</sub>), which describes the metric tensor, i.e. g<sub>μν</sub> = A<sub>μ</sub>A<sub>ν</sub>. From this (g<sub>μν</sub> = A<sub>μ</sub>A<sub>ν</sub>), it is seen that gravity waves may not exist in the sense envisaged by the GTR.
[5692] vixra:0911.0023 [pdf]
Distance, Rotational Velocities, Red Shift, Mass, Length and Angular Momentum of 111 Spiral Galaxies in the Southern Hemisphere
To date, methods of direct measurement of the distance to galaxies have been limited in their range[1]. This paper makes direct measurements of distant galaxies by comparing spiral arm structures to the expected locus of gravitational influence along the geodesic in a centripetally accelerating reference frame. Such measurements provide a method of independent validation of the extragalactic distance ladder without presupposition of the uniformly expanding universe theory. The methodology of this paper avoids the use of Hubble's constant in the measurement of the distance to galaxies beyond the range of contemporary direct measurement methods. The measurements are validated by meaningful trends between distance and other variables such as mass, rotational velocity, size and angular momentum. A Hubble diagram calculated using this method is presented from data obtained from 111 spiral galaxies in the southern hemisphere to about 200 MPc distance. The galactic red shift from these galaxies appears independent of distance. Galactic structure, size, mass and angular momentum are seen to have a distinct relationship to the spin velocity, or tangential velocity, associated with each galaxy.
[5693] vixra:0911.0016 [pdf]
A Comparison of Distance Measurements to NGC 4258
The accurate measurement of extragalactic distances is a central challenge of modern astronomy, being required for any realistic description of the age, geometry and fate of the Universe. The measurement of relative extragalactic distances has become fairly routine, but estimates of absolute distances are rare.[1] In the vicinity of the Sun, direct geometric techniques for obtaining absolute distances, such as orbital parallax, are feasible, but heretofore such techniques have been difficult to apply to other galaxies. As a result, uncertainties in the expansion rate and age of the Universe are dominated by uncertainties in the absolute calibration of the extragalactic distance ladder[2]. Here we compare previous distance measurements to the galaxy NGC 4258 from both an estimate of Hubble's constant and a direct measurement of orbital motions in a disk of gas surrounding the nucleus of this galaxy to a direct measurement using a model of constant rotational velocity and galactic spiral morphology. The results of the comparison help validate methods of direct measurement of spiral galaxies to much greater distances.
[5694] vixra:0911.0014 [pdf]
New Curved Spacetime Dirac Equations
I propose three new curved spacetime versions of the Dirac Equation. These equations have been developed mainly to try to account in a natural way for the observed anomalous gyromagnetic ratio of Fermions. The derived equations suggest that particles, including the Electron, which is thought to be a point particle, do have a finite spatial size, which is the reason for the observed anomalous gyromagnetic ratio. A serendipitous result of the theory is that two of the equations exhibit an asymmetry in their positive and negative energy solutions, the first suggestion that a solution is possible to the problem of why the Electron and Muon, despite their acute similarities, exhibit an asymmetry in their masses. The Muon is often thought of as an Electron in a higher energy state. Another consequence of this asymmetry in the energy solutions of two of the three equations is that an explanation becomes possible of why Leptons exhibit a three-stage mass hierarchy.
[5695] vixra:0910.0064 [pdf]
On a General Spin Dirac Equation
In its bare and natural form, the Dirac Equation describes only spin-1/2 particles. The main purpose of this reading is to make a valid and justified mathematical modification to the Dirac Equation so that it describes any spin particle. We show that this mathematical modification is consistent with the Special Theory of Relativity (STR). We believe that the fact that this modification is consistent with the STR gives the present effort some physical justification that warrants further investigations. From the vantage point of unity, simplicity and beauty, it is natural to wonder why there should exist different equations to describe particles of different spins. For example, the Klein-Gordon equation describes spin-0 particles, the Dirac Equation describes spin-1/2 particles, and the Rarita-Schwinger Equation describes spin-3/2 particles. Does it mean we have to look for another equation to describe spin-2 particles, and then spin-5/2 particles, etc.? This does not look beautiful or simple, or at the very least suggest a Unification of the Natural Laws. Beauty of a theory is not a physical principle, but one thing is clear to the searching mind: a theory that possesses beauty appeals to the mind, and is (a posteriori) bound to have something to do with physical reality if it naturally submits itself to the test of experience. The effort of the present reading is to make the attempt to find this equation.
[5696] vixra:0910.0051 [pdf]
Derivation of Gauge Boson Masses from the Dynamics of Levy Flows
Gauge bosons are fundamental fields that mediate the electroweak interaction of leptons and quarks. The underlying mechanism explaining how gauge bosons acquire mass is neither definitively settled nor universally accepted and several competing theories coexist. The prevailing paradigm is that boson masses arise as a result of coupling to a hypothetical scalar field called the Higgs boson. Within the current range of accelerator technology, compelling evidence for the Higgs boson is missing. We discuss in this paper a derivation of boson masses that bypasses the Higgs mechanism and is formulated on the basis of complexity theory. The key premise of our work is that the dynamics of the gauge field may be described as a stochastic process caused by the short range of electroweak interaction. It is found that, if this process is driven by Levy statistics, mass generation in the electroweak sector can be naturally accounted for. Theoretical predictions are shown to agree well with experimental data.
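The paper's key premise is Lévy-driven stochastic dynamics. As a minimal illustration of Lévy statistics generally (not of the mass derivation itself), symmetric alpha-stable variates can be drawn with the standard Chambers-Mallows-Stuck transform; all parameter values below are illustrative:

```python
import math
import random

def levy_stable(alpha, rng):
    """Chambers-Mallows-Stuck sampler for a symmetric alpha-stable variate.

    Valid for alpha != 1. alpha = 2 recovers a Gaussian of variance 2;
    alpha < 2 yields the heavy power-law tails characteristic of Levy flights."""
    V = rng.uniform(-math.pi / 2.0, math.pi / 2.0)  # uniform phase
    W = rng.expovariate(1.0)                        # exponential amplitude
    return (math.sin(alpha * V) / math.cos(V) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

rng = random.Random(0)
heavy = [levy_stable(1.5, rng) for _ in range(20000)]   # Levy-flight increments
gauss = [levy_stable(2.0, rng) for _ in range(20000)]   # Gaussian limit
sample_var = sum(x * x for x in gauss) / len(gauss)      # should be close to 2
```

The alpha = 2 draws give a quick sanity check of the sampler, since the stable law then degenerates to an ordinary Gaussian.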
[5697] vixra:0910.0043 [pdf]
Pending Problems in QSOs
Quasars (Quasi-Stellar Objects, abbreviated as QSOs) are still nowadays, close to half a century after their discovery, objects which are not completely understood. In this brief review a description of the pending problems, inconsistencies and caveats in QSO research is presented. The standard paradigm, based on the existence of very massive black holes that are responsible for the QSOs' huge luminosities resulting from their cosmological redshifts, leaves many facts without explanation. There are several observations which lack a clear explanation, for instance: the absence of bright QSOs at low redshifts, a mysterious evolution not properly understood; the inconsistencies of the absorption lines, such as the different structure of the clouds along the QSO's line of sight and their tangential directions; the spatial correlation between QSOs and galaxies; and many others.
[5698] vixra:0910.0042 [pdf]
Chaotic Dynamics of the Renormalization Group Flow and Standard Model Parameters
Bringing closure to the host of open questions posed by the current Standard Model for particle physics (SM) continues to be a major challenge for the theoretical physics community. Motivated by recent advances in the study of complex systems, our work suggests that the pattern of particle masses and gauge couplings emerges from the critical dynamics of renormalization group equations. Using the ε-expansion method along with the universal path to chaos in unimodal maps, we find that the observed hierarchies of SM parameters amount to a series of scaling ratios depending on the Feigenbaum constant.
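The Feigenbaum constant invoked above is the universal limit of the ratios of successive period-doubling intervals in unimodal maps. It can be estimated directly from the superstable parameters of the logistic map, a standard numerical exercise independent of the paper's claims:

```python
import math

def logistic_iter(r, x, n):
    """Apply the logistic map x -> r x (1 - x) n times."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def g(r, n):
    # Vanishes at the superstable parameter whose 2^n-cycle contains x = 1/2.
    return logistic_iter(r, 0.5, 2 ** n) - 0.5

def superstable(n, guess):
    """Secant-method root of g(., n) starting near a predicted parameter."""
    a, b = guess, guess + 1e-4
    fa, fb = g(a, n), g(b, n)
    for _ in range(60):
        if fb == fa:
            break
        c = b - fb * (b - a) / (fb - fa)
        a, fa = b, fb
        b, fb = c, g(c, n)
    return b

# Period-1 and period-2 superstable parameters are known in closed form.
r = [2.0, 1.0 + math.sqrt(5.0)]
for n in range(2, 7):
    # Extrapolate with a rough guess of the Feigenbaum ratio, then refine.
    guess = r[-1] + (r[-1] - r[-2]) / 4.669
    r.append(superstable(n, guess))

delta_est = (r[-2] - r[-3]) / (r[-1] - r[-2])
print(delta_est)  # converges towards the Feigenbaum constant 4.6692...
```

The successive ratios converge geometrically, so even six doublings already reproduce the constant to three decimal places.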
[5699] vixra:0910.0030 [pdf]
Entropy, Neutrino Physics, and the Lithium Problem: Why Are There Stars with Essentially No Lithium Due to Serious Lithium Deficiency in Certain Spatial Regions in the Early Universe?
The consequences of abnormally low lithium abundance in a nearby population II star (which is almost as old as the supposed population III stars), as represented by HE0107-5240, are that standard BBN theory is out of sync with observations. Analysis of big bang nucleosynthesis may help explain the anomalously low value of lithium abundance in the star HE0107-5240, which, by orthodox BBN, should not exist, as explained by Shigeyama et al.
[5700] vixra:0910.0012 [pdf]
A New Formula for the Sum of the Sixth Powers of Fibonacci Numbers
Sloane's On-Line Encyclopedia of Integer Sequences incorrectly states a lengthy formula for the sum of the sixth powers of the first n Fibonacci numbers. In this paper we prove a more succinct formulation. We also provide an analogue for the Lucas numbers. Finally, we prove a divisibility result for the sum of certain even powers of the first n Fibonacci numbers.
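Any candidate closed form for such sums is easy to check by brute force. The sketch below tabulates the partial sums of the sixth powers of the first n Fibonacci numbers; it does not assume the paper's formula:

```python
def fib_sixth_power_sums(n_max):
    """Partial sums of the sixth powers of the first n Fibonacci numbers."""
    sums, total = [], 0
    a, b = 1, 1  # F_1, F_2
    for _ in range(n_max):
        total += a ** 6
        sums.append(total)
        a, b = b, a + b
    return sums

print(fib_sixth_power_sums(5))  # [1, 2, 66, 795, 16420]
```

Comparing such a table against a proposed identity for a few dozen values of n is a cheap sanity check before attempting an inductive proof.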
[5701] vixra:0910.0009 [pdf]
Chaos in Quantum Chromodynamics and the Hadron Spectrum
We present analytic evidence that the distribution of hadron masses follows from the universal transition to chaos in non-equilibrium field theory. It is shown that meson and baryon spectra obey a scaling hierarchy with critical exponents ordered in natural progression. Numerical predictions are found to be in close agreement with experimental data.
[5702] vixra:0909.0050 [pdf]
Hypothetical Dark Matter/axion Rockets: Dark Matter in Terms of Space Physics Propulsion
Currently proposed photon rocket designs include the nuclear photonic rocket and the antimatter photonic rocket (as proposed by Eugene Sanger in the 1950s, as reported in reference 1). This paper examines the feasibility of improving the thrust of a photon rocket via the use of WIMPs or a similar DM candidate. Would a WIMP, if converted to power and thrust, enable or improve the chances of interstellar travel?
[5703] vixra:0909.0049 [pdf]
On 2 + 2-Dimensional Spacetimes, Strings, Black-Holes and Maximal Acceleration in Phase Spaces
We study black-hole-like solutions (spacetimes with singularities) of Einstein field equations in 3+1 and 2+2-dimensions. In the 3+1-dim case, it is shown how the horizon of the standard black hole solution at r = 2G<sub>N</sub>M can be displaced to the location r = 0 of the point mass M source, when the radial gauge function is chosen to have an ultra-violet cutoff R(r = 0) = 2G<sub>N</sub>M if, and only if, one embeds the problem in the Finsler geometry of the spacetime tangent bundle (or in phase space) that is the proper arena where to incorporate the role of the physical point mass M source at r = 0. We find three different cases associated with hyperbolic homogeneous spaces. In particular, the hyperbolic version of Schwarzschild's solution contains a conical singularity at r = 0 resulting from pinching to zero size r = 0 the throat of the hyperboloid H<sup>2</sup> and which is quite different from the static spherically symmetric 3+1-dim solution. Static circular symmetric solutions for metrics in 2+2 are found that are singular at ρ = 0 and whose asymptotic ρ → ∞ limit leads to a flat 1+2-dim boundary of topology S<sup>1</sup> x R<sup>2</sup>. Finally we discuss the 1+1-dim Bars-Witten stringy black-hole solution and show how it can be embedded into our 3+1-dimensional solutions with a displaced horizon at r = 0 and discuss the plausible stringy nature of a point-mass, along with the maximal acceleration principle in the spacetime tangent bundle (maximal force in phase spaces). Black holes in a 2+2-dimensional "spacetime" from the perspective of complex gravity in 1+1 complex dimensions and their quaternionic and octonionic gravity extensions deserve further investigation. An appendix is included with the most general Schwarzschild-like solutions in D ≥ 4.
[5704] vixra:0909.0045 [pdf]
On n-ary Algebras, Branes and Polyvector Gauge Theories in Noncommutative Clifford Spaces
Polyvector-valued gauge field theories in noncommutative Clifford spaces are presented. The noncommutative star products are associative and require the use of the Baker-Campbell-Hausdorff formula. Actions for p-branes in noncommutative (Clifford) spaces and noncommutative phase spaces are provided. An important relationship between the n-ary commutators of noncommuting spacetime coordinates [X<sup>1</sup>,X<sup>2</sup>, ......,X<sup>n</sup>] and the poly-vector valued coordinates X<sup>123...n</sup> in noncommutative Clifford spaces is explicitly derived: [X<sup>1</sup>,X<sup>2</sup>, ......,X<sup>n</sup>] = n! X<sup>123...n</sup>. The large N limit of n-ary commutators of n hyper-matrices X<sub>i<sub>1</sub>i<sub>2</sub>...i<sub>n</sub></sub> leads to Eguchi-Schild p-brane actions for p + 1 = n. Noncommutative Clifford-space gravity as a poly-vector-valued gauge theory of twisted diffeomorphisms in Clifford spaces would require quantum Hopf algebraic deformations of Clifford algebras.
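The n-ary commutator appearing in the relation above is conventionally the signed sum over all permutations of the factors, which for n = 2 reduces to the ordinary commutator. A minimal numeric sketch of that definition with plain 2x2 matrices (the matrices are arbitrary illustrative choices, not the paper's coordinates):

```python
from itertools import permutations

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def perm_sign(perm):
    """Sign of a permutation, counted via transpositions."""
    s, p = 1, list(perm)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def nary_bracket(*Ms):
    """[X1, ..., Xn] = sum over sigma of sgn(sigma) X_sigma(1) ... X_sigma(n)."""
    total = [[0, 0], [0, 0]]
    for perm in permutations(range(len(Ms))):
        prod = Ms[perm[0]]
        for k in perm[1:]:
            prod = matmul(prod, Ms[k])
        s = perm_sign(perm)
        total = [[total[i][j] + s * prod[i][j] for j in range(2)]
                 for i in range(2)]
    return total

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
comm = nary_bracket(A, B)  # equals AB - BA = [[1, 0], [0, -1]]
```

For n = 2 the two permutations contribute +AB and -BA, recovering the familiar commutator; for larger n the bracket is the fully antisymmetrized n-fold product.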
[5705] vixra:0909.0034 [pdf]
On Strategies Towards the Riemann Hypothesis: Fractal Supersymmetric QM and a Trace Formula
The Riemann hypothesis (RH) states that the nontrivial zeros of the Riemann zeta-function are of the form s<sub>n</sub> = 1/2 + iλ<sub>n</sub>. An improvement of our previous construction to prove the RH is presented by implementing the Hilbert-Polya proposal and furnishing the Fractal Supersymmetric Quantum Mechanical (SUSY-QM) model whose spectrum reproduces the imaginary parts of the zeta zeros. We model the fractal fluctuations of the smooth Wu-Sprung potential (that capture the average level density of zeros) by recurring to a weighted superposition of Weierstrass functions Σ<sub>p</sub>W(x,p,D), where the summation has to be performed over all primes p in order to recapture the connection between the distribution of zeta zeros and prime numbers. We proceed next with the construction of a smooth version of the fractal QM wave equation by writing an ordinary Schroedinger equation whose fluctuating potential (relative to the smooth Wu-Sprung potential) has the same functional form as the fluctuating part of the level density of zeros. The second approach to prove the RH relies on the existence of a continuous family of scaling-like operators involving the Gauss-Jacobi theta series. An explicit completion relation ("trace formula") related to a superposition of eigenfunctions of these scaling-like operators is defined. If the completion relation is satisfied, this could be another test of the Riemann Hypothesis. In an appendix we briefly describe our recent findings showing why the Riemann Hypothesis is a consequence of CT-invariant Quantum Mechanics, because < Ψ<sub>s</sub> | CT | Ψ<sub>s</sub> > ≠ 0 where s are the complex eigenvalues of the scaling-like operators.
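The W(x, p, D) functions used above are a prime-weighted variant of the textbook Weierstrass series, which is easy to evaluate by truncation; the parameter values below are illustrative, not the paper's choices:

```python
import math

def weierstrass(x, a=0.5, b=7.0, n_terms=20):
    """Truncated Weierstrass series sum_n a^n cos(b^n pi x).

    The full series is continuous everywhere and, for suitable a and b
    (0 < a < 1 with ab large enough), differentiable nowhere: a classic
    fractal curve. Truncation keeps only the first n_terms scales."""
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(n_terms))

w0 = weierstrass(0.0)  # at x = 0 every cosine is 1, so this is a geometric sum
```

Sampling this function on a grid shows self-similar wiggles down to the finest retained scale, which is the qualitative behaviour the fractal-potential construction relies on.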
[5706] vixra:0909.0027 [pdf]
A Proposal for a Unified Field Theory
A proposal outlining an approach to a unified field theory is presented. A general solution to the time-dependent Schrödinger Equation using an alternative boundary condition is found to derive the Heisenberg uncertainty formulae. A general relativity/quantum mechanical interaction between a photon and a gravitational field is examined to determine the degree of red shifting of light passing through a gravitational field. The Einstein field equations, complete with an arrangement of Faraday tensors, are presented with suggestions to determine the energy of a photon from Einstein's and Maxwell's equations. Schrödinger's Equation is coupled with both the Einstein field equations and Maxwell's equations to derive a possible foundation for string theory.
[5707] vixra:0909.0026 [pdf]
A First Order Singular Perturbation Solution to a Simple One-Phase Stefan Problem with Finite Neumann Boundary Conditions
This paper examines the difference between infinite and finite domains of a Stefan Problem. It is pointed out that attributes of solutions to the Diffusion Equation suggest assumptions of an infinite domain are invalid during initial times for finite domain Stefan Problems. The paper provides a solution for initial and early times from an analytical approach using a perturbation. This solution can then easily be applied to numerical models for later times. The differences of the two domains are examined and discussed.
[5708] vixra:0909.0021 [pdf]
Observations of "Wisps" in Magnetohydrodynamic Simulations of the Crab Nebula
In this letter, we describe results of new high-resolution axisymmetric relativistic MHD simulations of Pulsar Wind Nebulae. The simulations reveal strong breakdown of the equatorial symmetry and highly variable structure of the pulsar wind termination shock. The synthetic synchrotron maps, constructed using a new more accurate approach, show striking similarity with the well known images of the Crab Nebula obtained by Chandra, and the Hubble Space Telescope. In addition to the jet-torus structure, these maps reproduce the Crab's famous moving wisps whose speed and rate of production agree with the observations. The variability is then analyzed using various statistical methods, including the method of structure function and wavelet transform. The results point towards the quasi-periodic behaviour with the periods of 1.5 - 3 yr and MHD turbulence on scales below 1 yr. The full account of this study will be presented in a follow up paper.
[5709] vixra:0909.0020 [pdf]
Polyvector-valued Gauge Field Theories and Quantum Mechanics in Noncommutative Clifford Spaces
The basic ideas and results behind polyvector-valued gauge field theories and Quantum Mechanics in Noncommutative Clifford spaces are presented. The star products are noncommutative and associative and require the use of the Baker-Campbell-Hausdorff formula. The construction of Noncommutative Clifford-space gravity as polyvector-valued gauge theories of twisted diffeomorphisms in Clifford-spaces would require quantum Hopf algebraic deformations of Clifford algebras.
[5710] vixra:0909.0013 [pdf]
On the Coupling Constants, Geometric Probability and Complex Domains
By recurring to Geometric Probability methods it is shown that the coupling constants, α<sub>EM</sub>, α<sub>W</sub>, α<sub>C</sub>, associated with the electromagnetic, weak and strong (color) force are given by the ratios of measures of the sphere S<sup>2</sup> and the Shilov boundaries Q<sub>3</sub> = S<sup>2</sup> x RP<sup>1</sup>, squashed S<sup>5</sup>, respectively, with respect to the Wyler measure Ω<sub>Wyler</sub>[Q<sub>4</sub>] of the Shilov boundary Q<sub>4</sub> = S<sup>3</sup> x RP<sup>1</sup> of the poly-disc D<sub>4</sub> (8 real dimensions). The latter measure Ω<sub>Wyler</sub>[Q<sub>4</sub>] is linked to the geometric coupling strength α<sub>G</sub> associated to the gravitational force. In the conclusion we discuss briefly other approaches to the determination of the physical constants, in particular, a program based on the Mersenne primes p-adic hierarchy. The most important conclusion of this work is the role played by higher dimensions in the determination of the coupling constants from pure geometry and topology alone, which does not require invoking the anthropic principle.
[5711] vixra:0909.0012 [pdf]
The Charge-Mass-Spin Relation of Clifford Polyparticles, Kerr-Newman Black Holes and the Fine Structure Constant
A Clifford-algebraic interpretation is proposed of the charge, mass, spin relationship found recently by Cooperstock and Faraoni, which was based on the Kerr-Newman metric solutions of the Einstein-Maxwell equations. The components of the polymomentum associated with a Clifford polyparticle in four dimensions provide for such a charge, mass, spin relationship without the problems encountered in Kaluza-Klein compactifications which furnish an unphysically large value for the electron charge. A physical reasoning behind such charge, mass, spin relationship is provided, followed by a discussion on the geometrical derivation of the fine structure constant by Wyler, Smith, Gonzalez-Martin and Smilga. To finalize, the renormalization of electric charge is discussed and some remarks are made pertaining to the modifications of the charge-scale relationship, when the spin of the polyparticle changes with scale, that may cast some light on the alleged Astrophysical variations of the fine structure constant.
[5712] vixra:0909.0011 [pdf]
On Nonextensive Statistics, Chaos and Fractal Strings
Motivated by the growing evidence of universality and chaos in QFT and string theory, we study the Tsallis non-extensive statistics (with a non-additive q-entropy) of an ensemble of fractal strings and branes of different dimensionalities. Non-equilibrium systems with complex dynamics in stationary states may exhibit large fluctuations of intensive quantities which are described in terms of generalized statistics. Tsallis statistics is a particular representative of such class. The non-extensive entropy and probability distribution of a canonical ensemble of fractal strings and branes is studied in terms of their dimensional spectrum which leads to a natural upper cutoff in energy and establishes a direct correlation among dimensions, energy and temperature. The absolute zero temperature (Kelvin) corresponds to zero dimensions (energy) and an infinite temperature corresponds to infinite dimensions. In the concluding remarks some applications of fractal statistics, quasi-particles, knot theory, quantum groups and number theory are briefly discussed within the framework of fractal strings and branes.
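The Tsallis statistics referenced above rests on the non-additive q-entropy, which reduces to the Shannon entropy in the limit q → 1. A minimal sketch of the definition:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum_i p_i^q) / (q - 1).

    Non-additive for q != 1; recovers the Shannon entropy as q -> 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

uniform = [0.25] * 4
s2 = tsallis_entropy(uniform, 2.0)               # 0.75 for a uniform 4-state system
s_near_1 = tsallis_entropy(uniform, 1.0 + 1e-6)  # approaches ln 4 ~ 1.3863
```

For a uniform distribution over W states the q-entropy equals the q-logarithm of W, which is the generalized additivity rule underlying the ensemble construction in the abstract.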
[5713] vixra:0909.0009 [pdf]
Discovery of a New Dimming Effect Specific to Supernovae and Gamma-Ray Bursts
Because type Ia supernovae (SNs) are anomalously dimmed with respect to the flat (q<sub>o</sub> = 0.5) Friedman Expanding Universe model, I was surprised to find that the brightest cluster galaxies (BCGs) are not anomalously dimmed. Based on the absence of anomalous dimming in BCGs, the following conclusions were reached: <ul> <li>⋅ Since the light from the SNs and BCGs traverses the same space, the current hypothesis of an accelerated expansion of the universe to explain the anomalous dimming of SNs is disproved.</li> <li>⋅ The cause of the anomalous dimming must be specific to the SNs.</li> </ul> The first conclusion is important since current research in dark energy and the cosmological constant was initiated based on the accelerated expansion hypothesis. The disproof of this hypothesis, therefore, casts serious doubts on the existence of dark energy and the cosmological constant. The second conclusion indicates that the occurrence of anomalous dimming depends on a basic difference between the SNs and BCGs. The only difference besides the obvious - that SNs are exploding stars and the BCGs are galaxies - is that the light curves of the SNs are limited in duration. Due to this difference, I discovered that SNs light curves are broadened at the observer by a new Hubble redshift effect. Since the total energy of the light curve is then spread over a longer time period, the apparent luminosity is reduced at the observer, causing the observed anomalous dimming of SNs. I also show that BCGs are not anomalously dimmed because their absolute luminosity is approximately constant over the time required for the light to reach the observer. The above conclusions also apply to Gamma Ray Bursts (GRBs) since gamma-ray "light" curves are limited in duration. Finally, the light curve broadening effect can be used to determine if the universe is expanding or static.
In the expanding universe model, a light curve broadening effect is predicted due to time-dilation for the SNs, GRBs and BCGs. Consequently, if the universe is expanding, two light curve broadening effects should occur for the SNs and GRBs. However, if the universe is static, only one light curve broadening effect will occur for the SNs and GRBs. Fortunately, Goldhaber has measured the widths of SNs light curves and conclusively showed that only one light curve broadening effect occurs. Consequently, the expanding universe model is logically falsified.
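For reference, the standard expanding-universe broadening that the author's argument contrasts with is a (1 + z) stretch of every observed time interval. A minimal sketch with illustrative numbers (the width and redshift are not from the paper):

```python
def observed_width(rest_frame_width_days, z):
    """Observed width of a transient's light curve under cosmological time dilation.

    In the expanding-universe model every observed time interval is stretched
    by the factor (1 + z); a static model without expansion predicts no such
    kinematic stretch."""
    return rest_frame_width_days * (1.0 + z)

w = observed_width(20.0, 0.5)  # a 20-day light curve at z = 0.5 appears 30 days wide
```

Comparing measured widths against this single (1 + z) factor is precisely the kind of test the abstract attributes to Goldhaber's supernova light-curve measurements.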
[5714] vixra:0909.0004 [pdf]
On Nonlinear Quantum Mechanics, Brownian Motion, Weyl Geometry and Fisher Information
A new nonlinear Schrödinger equation is obtained explicitly from the (fractal) Brownian motion of a massive particle with a complex-valued diffusion constant. Real-valued energy plane-wave solutions and solitons exist in the free particle case. One remarkable feature of this nonlinear Schrödinger equation based on a (fractal) Brownian motion model, over all the other nonlinear QM models, is that the quantum-mechanical energy functional coincides precisely with the field theory one. We finalize by showing why a complex momentum is essential to fully understand the physical implications of Weyl's geometry in QM, along with the interplay between Bohm's Quantum potential and Fisher Information which has been overlooked by several authors in the past.
[5715] vixra:0909.0003 [pdf]
Polyvector Super-Poincare Algebras, M, F Theory Algebras and Generalized Supersymmetry in Clifford-Spaces
Starting with a review of the Extended Relativity Theory in Clifford-Spaces, and the physical motivation behind this novel theory, we provide the generalization of the nonrelativistic Supersymmetric point-particle action in Clifford-space backgrounds. The relativistic Supersymmetric Clifford particle action is constructed that is invariant under generalized supersymmetric transformations of the Clifford-space background's polyvector-valued coordinates. To finalize, the Polyvector Super-Poincare and M, F theory superalgebras, in D = 11, 12 dimensions, respectively, are discussed followed by our final analysis of the novel Clifford-Superspace realizations of generalized Supersymmetries in Clifford spaces.
[5716] vixra:0908.0112 [pdf]
On Holography and Quantum Mechanics in Yang's Noncommutative Spacetime with a Lower and Upper Scale
We explore Yang's Noncommutative space-time algebra (involving two length scales) within the context of QM defined in Noncommutative spacetimes; the Holographic principle and the area-coordinates algebra in Clifford spaces. Casimir invariant wave equations corresponding to Noncommutative coordinates and momenta in d-dimensions can be recast in terms of ordinary QM wave equations in d+2-dimensions. It is conjectured that QM over Noncommutative spacetimes (Noncommutative QM) may be described by ordinary QM in higher dimensions. Novel Moyal-Yang-Fedosov-Kontsevich star products deformations of the Noncommutative Poisson Brackets (NCPB) are employed to construct star product deformations of scalar field theories. Finally, generalizations of the Dirac-Konstant and Klein-Gordon-like equations relevant to the physics of D-branes and Matrix Models are presented.
[5717] vixra:0908.0111 [pdf]
On Modified Weyl-Heisenberg Algebras, Noncommutativity Matrix-Valued Planck Constant and QM in Clifford Spaces
A novel Weyl-Heisenberg algebra in Clifford-spaces is constructed that is based on a matrix-valued H<sup>AB</sup> extension of Planck's constant. As a result of this modified Weyl-Heisenberg algebra one will no longer be able to measure, simultaneously, the pairs of variables (x, p<sub>x</sub>); (x, p<sub>y</sub>); (x, p<sub>z</sub>); (y, p<sub>x</sub>), ... with absolute precision. New Klein-Gordon and Dirac wave equations and dispersion relations in Clifford-spaces are presented. The latter Dirac equation is a generalization of the Dirac-Lanczos-Barut-Hestenes equation. We display the explicit isomorphism between Yang's Noncommutative space-time algebra and the area-coordinates algebra associated with Clifford spaces. The former Yang's algebra involves noncommuting coordinates and momenta with a minimum Planck scale λ (ultraviolet cutoff) and a minimum momentum p = ℏ/R (maximal length R, infrared cutoff). The double-scaling limit of Yang's algebra λ → 0, R → ∞, in conjunction with the large n → ∞ limit, leads naturally to the area quantization condition λR = L<sup>2</sup> = nλ<sup>2</sup> (in Planck area units) given in terms of the discrete angular-momentum eigenvalues n. It is shown how Modified Newtonian dynamics is also a consequence of Yang's algebra resulting from the modified Poisson brackets. Finally, another noncommutative algebra (which differs from Yang's algebra) and related to the minimal length uncertainty relations is presented. We conclude with a discussion of the implications of Noncommutative QM and QFT's in Clifford-spaces.
[5718] vixra:0908.0110 [pdf]
On Time Dependent Black Holes and Cosmological Models from a Kaluza-Klein Mechanism
Novel static, time-dependent and spatial-temporal solutions of Einstein field equations, displaying singularities, with and without horizons, and in several dimensions are found based on a dimensional reduction procedure widely used in Kaluza-Klein type theories. The Kerr-Newman black-hole entropy as well as the Reissner-Nordstrom, Kerr and Schwarzschild black-hole entropy are derived from the corresponding Euclideanized actions. A very special cosmological model based on the dynamical interior geometry of a Black Hole is found that has no singularities at t = 0 due to the smoothing of the mass distribution. We conclude with another cosmological model equipped also with a dynamical horizon and which is related to Vaidya's metric (associated with the Hawking-radiation of black holes) by interchanging t ↔ r which might render our universe as a dynamical black hole.
[5719] vixra:0908.0106 [pdf]
The Extended Relativity Theory in Born-Clifford Phase Spaces with a Lower and Upper Length Scales and Clifford Group Geometric Unification
We construct the Extended Relativity Theory in Born-Clifford-Phase spaces with an upper length scale R and a lower length scale λ (infrared/ultraviolet cutoff). The invariance symmetry leads naturally to the real Clifford algebra Cl(2, 6,R) and complexified Clifford Cl<sub>C</sub>(4) algebra related to Twistors. A unified theory of all Noncommutative branes in Clifford-spaces is developed based on the Moyal-Yang star product deformation quantization whose deformation parameter involves the lower/upper scale (ℏλ/R). Previous work led us to show from first principles why the observed value of the vacuum energy density (cosmological constant) is given by a geometric mean relationship ρ ~ L<sup>-2</sup><sub>Planck</sub>R<sup>-2</sup> = L<sup>-4</sup><sub>P</sub>(L<sub>Planck</sub>/R)<sup>2</sup> ~ 10<sup>-122</sup>M<sup>4</sup><sub>Planck</sub>, and can be obtained when the infrared scale R is set to be of the order of the present value of the Hubble radius. We proceed with an extensive review of Smith's 8D model based on the Clifford algebra Cl(1,7) that reproduces at low energies the physics of the Standard Model and Gravity, including the derivation of all the coupling constants, particle masses, mixing angles, ..., with high precision. Geometric actions are presented, like the Clifford-Space extension of Maxwell's Electrodynamics, and Brandt's action related to the 8D spacetime tangent bundle involving coordinates and velocities (Finsler geometries). Finally we outline the reasons why a Clifford-Space Geometric Unification of all forces is a very reasonable avenue to consider and propose an Einstein-Hilbert type action in Clifford-Phase spaces (associated with the 8D Phase space) as a Unified Field theory action candidate that should reproduce the physics of the Standard Model plus Gravity in the low energy limit.
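The quoted geometric-mean estimate ρ ~ L<sup>-4</sup><sub>P</sub>(L<sub>Planck</sub>/R)<sup>2</sup> ~ 10<sup>-122</sup>M<sup>4</sup><sub>Planck</sub> is easy to check numerically by setting R to the Hubble radius; the constants below are standard approximate values, assumed here rather than taken from the paper:

```python
# Illustrative order-of-magnitude check of the geometric-mean estimate:
# with R set to the Hubble radius, (L_Planck / R)^2 should land near 10^-122.
L_PLANCK = 1.616e-35       # Planck length, m (approximate)
C = 2.998e8                # speed of light, m/s
H0 = 70e3 / 3.086e22       # Hubble constant, s^-1 (assuming 70 km/s/Mpc)

R_HUBBLE = C / H0          # Hubble radius, m (~1.3e26 m)
suppression = (L_PLANCK / R_HUBBLE) ** 2
print(f"{suppression:.2e}")  # ~1e-122: the suppression factor in Planck units
```

The point of the check is only that the ratio of the two length scales, squared, reproduces the famous 122-order-of-magnitude suppression of the vacuum energy density relative to the Planck density.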
[5720] vixra:0908.0105 [pdf]
On Timelike Naked Singularities Associated with Noncompact Matter Sources
We show the existence of timelike naked singularities which are not hidden by a horizon and which are associated to spherically symmetric (noncompact) matter sources extending from r = 0 to r = ∞. Our asymptotically flat solutions do represent observable timelike naked singularities where the scalar curvature R and volume mass density ρ(r) are both singular at r = 0. To finalize we explain the Finsler geometric origins behind the matter field configuration obeying the weak energy conditions and that leads to a timelike naked singularity.
[5721] vixra:0908.0104 [pdf]
Strings and Membranes from Einstein Gravity, Matrix Models and W<sub>∞</sub> Gauge Theories as paths to Quantum Gravity
It is shown how w<sub>∞</sub>, w<sub>1+∞</sub> Gauge Field Theory actions in 2D emerge directly from 4D Gravity. Strings and Membranes actions in 2D and 3D originate as well from 4D Einstein Gravity after recurring to the nonlinear connection formalism of Lagrange-Finsler and Hamilton-Cartan spaces. Quantum Gravity in 3D can be described by a W<sub>∞</sub> Matrix Model in D = 1 that can be solved exactly via the Collective Field Theory method. We describe why a quantization of 4D Gravity could be attained via a 2D Quantum W<sub>∞</sub> gauge theory coupled to an infinite-component scalar-multiplet. A proof follows that non-critical W<sub>∞</sub> (super) strings are devoid of BRST anomalies in dimensions D = 27 (D = 11), respectively, which coincide with the critical (super) membrane dimensions D = 27 (D = 11). We establish the correspondence between the states associated with the quasi finite highest weights irreducible representations of W<sub>∞</sub>, <span style="text-decoration: overline;">W</span><sub>∞</sub> algebras and the quantum states of the continuous Toda molecule. Schroedinger-like QM wave functional equations are derived and solutions are found in the zeroth order approximation. Since higher-conformal spin W<sub>∞</sub> symmetries are very relevant in the study of 2D W<sub>∞</sub> Gravity, the Quantum Hall effect, large N QCD, strings, membranes, ..., it is warranted to explore further the interplay among all these theories.
[5722] vixra:0908.0102 [pdf]
On Generalized Yang-Mills Theories and Extensions of the Standard Model in Clifford (Tensorial) Spaces
We construct the Clifford-space tensorial-gauge-field generalizations of Yang-Mills theories and the Standard Model that allow one to predict the existence of new particles (bosons, fermions) and tensor-gauge fields of higher spins in the 10 Tev regime. We proceed with a detailed discussion of the unique D<sub>4</sub> - D<sub>5</sub> - E<sub>6</sub> - E<sub>7</sub> - E<sub>8</sub> model of Smith based on the underlying Clifford algebraic structures in D = 8, and which furnishes all the properties of the Standard Model and Gravity in four dimensions, at low energies. A generalization and extension of Smith's model to the full Clifford-space is presented when we write explicitly all the terms of the extended Clifford-space Lagrangian. We conclude by explaining the relevance of multiple-foldings of D = 8 dimensions related to the modulo 8 periodicity of the real Clifford algebras and display the interplay among Clifford, Division, Jordan and Exceptional algebras, within the context of D = 26, 27, 28 dimensions, corresponding to bosonic string, M and F theory, respectively, advanced earlier by Smith. To finalize we describe explicitly how the E<sub>8</sub> X E<sub>8</sub> Yang-Mills theory can be obtained from a Gauge Theory based on the Clifford(16) group.
[5723] vixra:0908.0100 [pdf]
Noncommutative (Super) P-Branes and Moyal-Yang Star Products with a Lower and Upper Scale
Noncommutative p-brane actions, for even p + 1 = 2n-dimensional world-volumes, are written explicitly in terms of the novel Moyal-Yang (Fedosov-Kontsevich) star product deformations of the Noncommutative Nambu Poisson Brackets (NCNPB) that are associated with the noncommuting world-volume coordinates q<sup>A</sup>, p<sup>A</sup> for A = 1, 2, 3, ...n. The latter noncommuting coordinates obey the noncommutative Yang algebra with an ultraviolet L<sub>P</sub> (Planck) scale and infrared (R) scale cutoff. It is shown why our p-brane actions in the "classical" limit ℏ<sub>eff</sub> = ℏL<sub>P</sub>/R → 0 still acquire nontrivial noncommutative corrections that differ from ordinary p-brane actions. Super p-brane actions in the light-cone gauge are also amenable to Moyal-Yang star product deformations, due to the fact that p-branes moving in flat spacetime backgrounds, in the light-cone gauge, can be recast as gauge theories of volume-preserving diffeomorphisms. The most general construction of noncommutative super p-brane actions based on non-(anti)commuting superspaces and quantum group methods remains an open problem.
[5724] vixra:0908.0099 [pdf]
The Euclidean Gravitational Action as Black Hole Entropy, Singularities and Spacetime Voids
We argue why the static spherically symmetric (SSS) vacuum solutions of Einstein's equations described by the textbook Hilbert metric g<sub>μν</sub>(r) is not diffeomorphic to the metric g<sub>μν</sub>(|r|) corresponding to the gravitational field of a point mass delta function source at r = 0. By choosing a judicious radial function R(r) = r + 2G|M|Θ(r) involving the Heaviside step function, one has the correct boundary condition R(r = 0) = 0, while displacing the horizon from r = 2G|M| to a location arbitrarily close to r = 0 as one desires, r<sub>h</sub> → 0, where stringy geometry and quantum gravitational effects begin to take place. We solve the field equations due to a delta function point mass source at r = 0, and show that the Euclidean gravitational action (in ℏ units) is precisely equal to the black hole entropy (in Planck area units). This result holds in any dimensions D ≥ 3. In the Reissner-Nordström (massive-charged) and Kerr-Newman black hole case (massive-rotating-charged) we show that the Euclidean action in a bulk domain bounded by the inner and outer horizons is the same as the black hole entropy. When one smears out the point-mass and point-charge delta function distributions by a Gaussian distribution, the area-entropy relation is modified. We postulate why these modifications should furnish the logarithmic corrections (and higher inverse powers of the area) to the entropy of these smeared Black Holes. To finalize, we analyse the Bars-Witten stringy black hole in 1 + 1 dim and its relation to the maximal acceleration principle in phase spaces and Finsler geometries.
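The area-entropy relation invoked in this abstract can be illustrated numerically. The sketch below is ours, not the paper's: it evaluates the standard Bekenstein-Hawking entropy S = A/(4 l<sub>P</sub><sup>2</sup>) for a solar-mass Schwarzschild horizon, using approximate CODATA constants.

```python
import math

# Rough check of the area-entropy relation S = A / (4 * l_P^2) that the
# abstract equates with the Euclidean gravitational action (in hbar units).
# Constants in SI units (approximate CODATA values).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
M_sun = 1.989e30   # solar mass, kg

l_P_sq = hbar * G / c**3          # Planck area, m^2
r_s = 2 * G * M_sun / c**2        # Schwarzschild radius, ~2.95 km
A = 4 * math.pi * r_s**2          # horizon area, m^2
S = A / (4 * l_P_sq)              # entropy in units of k_B, of order 1e77
print(f"r_s = {r_s:.3e} m, S = {S:.3e} k_B")
```

The result, of order 10<sup>77</sup> k<sub>B</sub> for one solar mass, shows the enormous scale of horizon entropy in Planck-area units.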
[5725] vixra:0908.0098 [pdf]
The Riemann Hypothesis is a Consequence of CT-Invariant Quantum Mechanics
The Riemann hypothesis (RH) states that the nontrivial zeros of the Riemann zeta-function are of the form s<sub>n</sub> = 1/2 + iλ<sub>n</sub>. By constructing a continuous family of scaling-like operators involving the Gauss-Jacobi theta series and by invoking a novel CT-invariant Quantum Mechanics, involving a judicious charge conjugation C and time reversal T operation, we show why the Riemann Hypothesis is true. An infinite family of theta series and their Mellin transforms lead to the same conclusion.
[5726] vixra:0908.0095 [pdf]
An Exceptional E<sub>8</sub> Gauge Theory of Gravity in D = 8, Clifford Spaces and Grand Unification
A candidate action for an Exceptional E<sub>8</sub> gauge theory of gravity in 8D is constructed. It is obtained by recasting the E<sub>8</sub> group as the semi-direct product of GL(8,R) with a deformed Weyl-Heisenberg group associated with canonical-conjugate pairs of vectorial and antisymmetric tensorial generators of rank two and three. Other actions are proposed, like the quartic E<sub>8</sub> group-invariant action in 8D associated with the Chern-Simons E<sub>8</sub> gauge theory defined on the 7-dim boundary of an 8D bulk. To finalize, it is shown how the E<sub>8</sub> gauge theory of gravity can be embedded into a more general extended gravitational theory in Clifford spaces associated with the Cl(16) algebra, providing a solid geometrical program for a grand unification of gravity with Yang-Mills theories. The key question remains whether this novel gravitational model based on gauging the E<sub>8</sub> group may still be renormalizable without spoiling unitarity at the quantum level.
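The decomposition described above can be checked by counting generators: GL(8,R) contributes 8<sup>2</sup> generators, and the canonical-conjugate pairs of vectorial, rank-two and rank-three antisymmetric tensorial generators contribute 2·8, 2·C(8,2) and 2·C(8,3), which together reproduce dim E<sub>8</sub> = 248. A minimal counting sketch (ours, not the paper's):

```python
from math import comb

# Generator count for recasting E8 as GL(8,R) plus canonical-conjugate
# pairs of vectorial (8), rank-2 antisymmetric (28) and rank-3 antisymmetric
# (56) tensorial generators, as described in the abstract.
dim_gl8 = 8 * 8                      # 64 generators of GL(8,R)
dim_vector_pairs = 2 * 8             # 16: conjugate pairs of vectors
dim_rank2_pairs = 2 * comb(8, 2)     # 56: conjugate rank-2 antisymmetric pairs
dim_rank3_pairs = 2 * comb(8, 3)     # 112: conjugate rank-3 antisymmetric pairs
total = dim_gl8 + dim_vector_pairs + dim_rank2_pairs + dim_rank3_pairs
print(total)  # -> 248, the dimension of E8
```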
[5727] vixra:0908.0094 [pdf]
Complex Gravitational Theory and Noncommutative Gravity
Born's reciprocal relativity in flat spacetimes is based on the principle of a maximal speed limit (speed of light) and a maximal proper force (which is also compatible with a maximal and minimal length duality), where coordinates and momenta are unified on a single footing. We extend Born's theory to the case of curved spacetimes and construct a deformed Born reciprocal general relativity theory in curved spacetimes (without the need to introduce star products) as a local gauge theory of the deformed Quaplectic group that is given by the semi-direct product of U(1,3) with the deformed (noncommutative) Weyl-Heisenberg group corresponding to noncommutative generators [Z<sub>a</sub>,Z<sub>b</sub>] ≠ 0. The Hermitian metric is complex-valued with symmetric and nonsymmetric components, and there are two different complex-valued Hermitian Ricci tensors R<sub>μν</sub>, S<sub>μν</sub>. The deformed Born reciprocal gravitational action linear in the Ricci scalars R, S with Torsion-squared terms and BF terms is presented. The plausible interpretation of Z<sub>μ</sub> = E<sub>μ</sub><sup>a</sup> Z<sub>a</sub> as noncommuting p-brane background complex spacetime coordinates is discussed in the conclusion, where E<sub>μ</sub><sup>a</sup> is the complex vielbein associated with the Hermitian metric G<sub>μν</sub> = g<sub>(μν)</sub> + ig<sub>[μν]</sub> = E<sub>μ</sub><sup>a</sup> Ē<sub>ν</sub><sup>b</sup>. This could be one of the underlying reasons why string theory involves gravity.
[5728] vixra:0908.0093 [pdf]
The Cosmological Constant and Pioneer Anomaly from Weyl Spacetimes and Mach's Principle
It is shown how Weyl's geometry and Mach's Holographic principle furnish both the magnitude and sign (towards the sun) of the Pioneer anomalous acceleration a<sub>P</sub> ~ -c<sup>2</sup>/R<sub>Hubble</sub> first observed by Anderson et al. Weyl's Geometry can account for both the origin and the value of the observed vacuum energy density (dark energy). The source of dark energy is just the dilaton-like Jordan-Brans-Dicke scalar field that is required to implement Weyl invariance of the simplest of all possible actions. A nonvanishing value of the vacuum energy density of the order of 10<sup>-123</sup>M<sup>4</sup><sub>Planck</sub> is found consistent with observations. Weyl's geometry also accounts for the phantom scalar field in modern Cosmology in a very natural fashion.
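As an order-of-magnitude check of the claimed scale (our sketch, with a conventional value of the Hubble constant assumed), a<sub>P</sub> ~ c<sup>2</sup>/R<sub>Hubble</sub> = cH<sub>0</sub> indeed lands near the Pioneer value of ≈ 8.74 × 10<sup>-10</sup> m/s² reported by Anderson et al.:

```python
# Order-of-magnitude check: c^2 / R_Hubble = c * H0 versus the anomalous
# Pioneer acceleration reported by Anderson et al. (~8.74e-10 m/s^2).
c = 2.998e8                      # speed of light, m/s
H0 = 70e3 / 3.086e22             # Hubble constant, s^-1 (70 km/s/Mpc assumed)
R_hubble = c / H0                # Hubble radius, ~1.3e26 m
a_P = c**2 / R_hubble            # equals c * H0, ~6.8e-10 m/s^2
print(f"a_P ~ {a_P:.2e} m/s^2")  # same order as the observed 8.74e-10 m/s^2
```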
[5729] vixra:0908.0090 [pdf]
On the Noncommutative and Nonassociative Geometry of Octonionic Spacetime, Modified Dispersion Relations and Grand Unification
The Octonionic Geometry (Gravity) developed long ago by Oliveira and Marques is extended to Noncommutative and Nonassociative Spacetime coordinates associated with octonionic-valued coordinates and momenta. The octonionic metric G<sub>μν</sub> already encompasses the ordinary spacetime metric g<sub>μν</sub>, in addition to the Maxwell U(1) and SU(2) Yang-Mills fields, in such a way that it implements the Kaluza-Klein Grand Unification program without introducing extra spacetime dimensions. The color group SU(3) is a subgroup of the exceptional G<sub>2</sub> group, which is the automorphism group of the octonion algebra. It is shown that the flux of the SU(2) Yang-Mills field strength F<sub>μν</sub> through the area momentum Σ<sup>μν</sup> in the internal isospin space yields corrections O(1/M<sup>2</sup><sub>Planck</sub>) to the energy-momentum dispersion relations without violating Lorentz invariance, as occurs with Hopf algebraic deformations of the Poincare algebra. The known Octonionic realizations of the Clifford Cl(8), Cl(4) algebras should permit the construction of octonionic string actions that should have a correspondence with ordinary string actions for strings moving in a curved Clifford-space target background associated with a Cl(3, 1) algebra.
[5730] vixra:0908.0089 [pdf]
The Large N Limit of Exceptional Jordan Matrix Models and M, F Theory
The large N → ∞ limit of the Exceptional F<sub>4</sub>, E<sub>6</sub> Jordan Matrix Models of Smolin-Ohwashi leads to novel Chern-Simons Membrane Lagrangians which are suitable candidates for a nonperturbative bosonic formulation of M Theory in D = 27 real, complex dimensions, respectively. Freudenthal algebras and triple Freudenthal products permit the construction of a novel E<sub>7</sub> X SU(N) invariant Matrix model whose large N limit yields generalized nonlinear sigma model actions on 28 complex dimensional backgrounds associated with a 56 real-dim phase space realization of the Freudenthal algebra. We argue why the latter Matrix Model, in the large N limit, might be the proper arena for a bosonic formulation of F theory. To finalize, we display generalized Dirac-Nambu-Goto membrane actions in terms of 3 X 3 X 3 cubic matrix entries that match the number of degrees of freedom of the 27-dim exceptional Jordan algebra J<sub>3</sub>[O].
[5731] vixra:0908.0085 [pdf]
The Clifford Space Geometry Behind the Pioneer and Flyby Anomalies
It is rigorously shown how the Extended Relativity Theory in Clifford spaces (C-spaces) can explain the variable radial dependence a<sub>p</sub>(r) of the Pioneer anomaly; its sign (pointing towards the sun); why planets don't experience the anomalous acceleration; and why the present day value of the Hubble scale R<sub>H</sub> appears. It is the curvature-spin coupling of the planetary motions that holds the key. The difference in the rate at which clocks tick in C-space translates into the C-space analog of Doppler shifts, which may explain the anomalous redshifts in Cosmology, where objects which are not that far apart from each other exhibit very different redshifts. We conclude by showing how the empirical formula for the Flyby anomalies obtained by Anderson et al. can be derived within the framework of Clifford geometry.
[5732] vixra:0908.0084 [pdf]
The Extended Relativity Theory in Clifford Spaces
An introduction to some of the most important features of the Extended Relativity theory in Clifford-spaces (C-spaces) is presented. Its "point" coordinates are non-commuting Clifford-valued quantities which incorporate the lines, areas, volumes, hyper-volumes, ... degrees of freedom associated with the collective particle, string, membrane, p-brane, ... dynamics of p-loops (closed p-branes) in target D-dimensional spacetime backgrounds. C-space Relativity naturally incorporates the ideas of an invariant length (Planck scale), maximal acceleration, non-commuting coordinates, supersymmetry, holography, higher derivative gravity with torsion, and variable dimensions/signatures. It permits the study of the dynamics of all (closed) p-branes, for all values of p, on a unified footing. It resolves the ordering ambiguities in QFT and the problem of time in Cosmology, and admits superluminal propagation (tachyons) without violations of causality. A discussion of the maximal-acceleration Relativity principle in phase-spaces follows, and the study of the invariance group of symmetry transformations in phase-space allows one to show why Planck areas are invariant under acceleration-boost transformations. This invariance feature suggests that a maximal-string-tension principle may be operating in Nature. We continue by pointing out how the relativity of signatures of the underlying n-dimensional spacetime results from taking different n-dimensional slices through C-space. The conformal group in spacetime emerges as a natural subgroup of the Clifford group, and Relativity in C-spaces involves natural scale changes in the sizes of physical objects without the introduction of forces or Weyl's gauge field of dilations. We finalize by constructing the generalization of the Maxwell theory of Electrodynamics of point charges to a theory in C-spaces that involves extended charges coupled to antisymmetric tensor fields of arbitrary rank.
In the concluding remarks we outline briefly the current promising research programs and their plausible connections with C-space Relativity.
[5733] vixra:0908.0081 [pdf]
The Clifford Space Geometry of Conformal Gravity and U(4) X U(4) Yang-Mills Unification
It is shown how a Conformal Gravity and U(4) X U(4) Yang-Mills Grand Unification model in four dimensions can be attained from a Clifford Gauge Field Theory in C-spaces (Clifford spaces) based on the (complex) Clifford Cl(4,C) algebra underlying a complexified four dimensional spacetime (8 real dimensions). Upon taking a real slice, and after symmetry breaking, it leads to ordinary Gravity and the Standard Model in four real dimensions. A brief conclusion about the Noncommutative star product deformations of this Grand Unified Theory of Gravity with the other forces of Nature is presented.
[5734] vixra:0908.0080 [pdf]
P-Branes as Antisymmetric Nonabelian Tensorial Gauge Field Theories of Diffeomorphisms in P + 1 Dimensions
Long ago, Bergshoeff, Sezgin, Tanii and Townsend showed that the light-cone gauge-fixed action of a super p-brane belongs to a new kind of supersymmetric gauge theory of p-volume preserving diffeomorphisms (diffs) associated with the p spatial dimensions of the extended object. These authors conjectured that this new kind of supersymmetric gauge theory must be related to an infinite-dim nonabelian antisymmetric gauge theory. It is shown in this work how this new theory should be part of an underlying antisymmetric nonabelian tensorial gauge field theory of (p+1)-dimensional diffs (upon supersymmetrization) associated with the world volume evolution of the p-brane. We conclude by embedding the latter theory into a more fundamental one based on the Clifford-space geometry of the p-brane configuration space.
[5735] vixra:0908.0079 [pdf]
On the Riemann Hypothesis, Area Quantization, Dirac Operators, Modularity and Renormalization Group
Two methods to prove the Riemann Hypothesis are presented. One is based on the modular properties of Θ (theta) functions and the other on the Hilbert-Polya proposal to find an operator whose spectrum reproduces the ordinates ρ<sub>n</sub> (imaginary parts) of the zeta zeros in the critical line: s<sub>n</sub> = 1/2 + iρ<sub>n</sub>. A detailed analysis of a one-dimensional Dirac-like operator with a potential V(x) is given that reproduces the spectrum of energy levels E<sub>n</sub> = ρ<sub>n</sub> when the boundary conditions Ψ<sub>E</sub> (x = -∞) = ± Ψ<sub>E</sub> (x = +∞) are imposed. Such a potential V(x) is derived implicitly from the relation x = x(V) = π/2(dN(V)/dV), where the functional form of N(V) is given by the full-fledged Riemann-von Mangoldt counting function of the zeta zeros, including the <i>fluctuating</i> as well as the O(E<sup>-n</sup>) terms. The construction is also extended to self-adjoint Schroedinger operators. Crucial is the introduction of an energy-dependent cut-off function Λ(E). Finally, the natural quantization of the phase space areas (associated to <i>nonperiodic</i> crystal-like structures) in <i>integer</i> multiples of π follows from the Bohr-Sommerfeld quantization conditions of Quantum Mechanics. It allows one to find a physical reasoning why the average density of the primes distribution for very large x (O(1/logx)) has a one-to-one correspondence with the asymptotic limit of the <i>inverse</i> average density of the zeta zeros in the critical line, suggesting intriguing connections to the Renormalization Group program.
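The smooth part of the Riemann-von Mangoldt counting function used above can be evaluated directly. A minimal sketch (ours, not the paper's), keeping only the leading terms N(T) ≈ (T/2π) log(T/2πe) + 7/8 and checking it against the known count of 29 zeros with 0 < ρ<sub>n</sub> < 100:

```python
import math

def smooth_zero_count(T):
    """Leading terms of the Riemann-von Mangoldt counting function N(T),
    omitting the fluctuating S(T) and O(T^-n) corrections mentioned above."""
    return (T / (2 * math.pi)) * math.log(T / (2 * math.pi * math.e)) + 7 / 8

# There are exactly 29 nontrivial zeros with imaginary part in (0, 100);
# the smooth part alone already reproduces this count to high accuracy.
print(round(smooth_zero_count(100.0)))  # -> 29
```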
[5736] vixra:0908.0076 [pdf]
Quantum Signatures of Solar System Dynamics
Let ω(i) be the period of rotation of the i-th planet around the Sun (or ω<sub>j</sub>(i) the period of rotation of the j-th satellite around the i-th planet). From empirical observations it is known that, within the margins of experimental error, Σ<sub>i</sub> n<sub>i</sub>ω(i) = 0 (or Σ<sub>j</sub> n<sub>j</sub>ω<sub>j</sub>(i) = 0) for some integers n<sub>i</sub> (or n<sub>j</sub>), different for different satellite systems. These conditions, known as resonance conditions, make the use of theories such as KAM difficult to implement. The resonances in the Solar System are similar to those encountered in old quantum mechanics, where applications of the methods of celestial mechanics to atomic and molecular physics were highly successful. Given such successes, the birth of the new quantum mechanics is difficult to understand. In short, the rationale for its birth lies in the simplicity with which the same type of calculations can be done using methods of quantum mechanics capable of taking care of resonances. The solution of the quantization puzzle was found by Heisenberg. In this paper new uses of Heisenberg's ideas are found. When superimposed with the equivalence principle of general relativity, they lead to a quantum mechanical treatment of the observed resonances in the Solar System. To test the correctness of the theoretical predictions, the number of allowed stable orbits for planets and for equatorial stable orbits of satellites of the heavy planets is calculated, resulting in good agreement with observational data. In addition, the paper briefly discusses the quantum mechanical nature of the rings of the heavy planets and the potential usefulness of the obtained results for cosmology.
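A concrete instance of the resonance condition Σ<sub>j</sub> n<sub>j</sub>ω<sub>j</sub>(i) = 0 is the Laplace resonance of Jupiter's inner moons, with integers (1, −3, 2). The sketch below (ours, using standard published orbital periods) verifies that the integer combination of mean motions nearly vanishes:

```python
# Check the Laplace resonance n_Io - 3*n_Europa + 2*n_Ganymede ≈ 0,
# an example of the integer resonance conditions discussed in the abstract.
# Sidereal orbital periods in days (standard published values).
T_io, T_europa, T_ganymede = 1.769138, 3.551181, 7.154553

def mean_motion(T):
    return 1.0 / T  # revolutions per day

residual = (mean_motion(T_io)
            - 3 * mean_motion(T_europa)
            + 2 * mean_motion(T_ganymede))
print(f"residual = {residual:.2e} rev/day")  # magnitude ~1e-6: zero within error
```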
[5737] vixra:0908.0073 [pdf]
A Treatise on Information Geometry
In early 1999, Professor Frieden of the University of Arizona published a book through Cambridge University Press titled "Physics from Fisher Information". It is the purpose of this dissertation to further develop some of his ideas, as well as to explore various exotic differentiable structures and their relationship to physics. In addition to the original component of this work, a series of survey chapters is provided, in the interest of keeping the treatise self-contained. The first summarises the main preliminary results on the existence of non-standard structures on manifolds from the Milnor-Steenrod school. The second is a standard introduction to semi-Riemannian geometry. The third introduces the language of geometric measure theory, which is important in justifying the existence of smooth solutions to variational problems with smooth structures and smooth integrands. The fourth is a short remark on PDE existence theory, which is needed for the fifth, which is essentially a typeset version of a series of lectures given by Ben Andrews and Gerhard Huisken on the Hamilton-Perelman program for proving the Geometrisation Conjecture of Bill Thurston.
[5738] vixra:0908.0070 [pdf]
A Wave-Based Polishing Theory
The molecules of the reflecting surface are sources of Huygens' wavelets which form the reflected wavefront. These molecules can be nonplanar to the extent of a fraction of the wavelength while a practically plane reflected wavefront still exists.
[5739] vixra:0908.0067 [pdf]
Why Evaporation Causes Coldness; a Challenge to the Second Law of Thermodynamics
In surface evaporation the liquid increases the potential energy of its molecules by taking heat while their kinetic energies remain unchanged. In such a state the molecules are in the form of a gas (vapor). We know that in an isothermal system of a liquid and a gas adjacent to it, the temperature of the gas decreases due to the surface evaporation while some net heat is transferred from the gas to the liquid. So, if the temperature of the gas is lower than the temperature of the liquid only to a sufficiently small extent, some net heat will still be transferred from the gas to the liquid due to the surface evaporation, and finally the gas and liquid (and vapor) will be isothermal (at a temperature lower than the initial temperature). This violates the Clausius (or refrigerator) statement of the second law of thermodynamics.
[5740] vixra:0908.0060 [pdf]
Compton Effect as a Doppler Effect
An electromagnetic wave with wavelength lambda, which carries some energy, descends on an electron and makes it move in the direction of propagation of the wave. The wave makes the moving electron oscillate with a lower frequency. A simple analysis shows that this moving oscillating electron radiates, in the direction making angle theta with the direction of the incident wave, an electromagnetic wave whose wavelength is larger by an amount proportional to lambda(1 − cos theta). The mechanism presented for pushing the electron requires that Compton scattering cease if the experiment is performed in vacuum. (I'm ready to prepare for doing such a critical test experiment in any university as a guest researcher.)
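For comparison with the standard account that this abstract challenges (our sketch, not the author's mechanism): the textbook Compton shift is Δλ = (h/m<sub>e</sub>c)(1 − cos θ), with the electron Compton wavelength h/m<sub>e</sub>c ≈ 2.43 pm setting the scale.

```python
import math

# Textbook Compton shift for photon scattering off an electron:
# delta_lambda = (h / (m_e * c)) * (1 - cos(theta)).
h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

lambda_C = h / (m_e * c)                           # Compton wavelength, ~2.43e-12 m
shift_90deg = lambda_C * (1 - math.cos(math.pi / 2))  # shift at theta = 90 degrees
print(f"shift at 90 degrees = {shift_90deg:.3e} m")
```

Note that the standard shift is additive and independent of the incident wavelength, whereas the abstract's proposed shift scales with lambda; this is the distinction a test experiment would probe.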
[5741] vixra:0908.0059 [pdf]
Actual Justification of the Crookes and Nichols Radiometers, and Failure of Solar Sails
Radiation energy causes fluctuation of the molecules in the vanes of the Crookes radiometer. Through this fluctuation, molecules of the vanes strike the adjacent air molecules and, as a reaction, cause recoil of the vanes. It seems that this is also the mechanism of the Nichols radiometer. But in the vacuum of space there are practically no such molecules to be pushed by the striking molecules of the vanes of solar sails. So the vanes will not recoil and will not be propelled. It is shown that a comet's tail and antitail are common tides produced by the Sun rather than by radiation pressure.
[5742] vixra:0908.0058 [pdf]
Geomagnetic Field Reason, Magnetic Inversions, and Extinction of Species
The conductive core of Earth is hot enough to free the valence electrons, after which these released electrons distribute themselves toward the core surface and move along with the rotation of Earth, producing the magnetic field that forms the big magnet inside Earth. This is a simple account of the geomagnetic field. Accepting this theory also leads to a conclusion justifying the magnetic inversions of Earth, based on the existence of several changes in the axial rotation of Earth, which most probably had a direct influence on the expansion of polar ice on one hemisphere and a permanent day on the other hemisphere, both causing extinction of species (including dinosaurs). Based on the presented discussions, a practical way for direct determination of the ionization energies of different elements is proposed.
[5743] vixra:0908.0054 [pdf]
Role of Air Pressure in the Force Between Currents
The density of lines of the magnetodynamic field arising from two parallel currents is greater in the regions outside the gap between the two wires; the molecular magnetic dipoles of air are therefore pulled toward these regions and create a bigger pressure there, which pushes (or attracts) the two wires toward each other. A similar reasoning, applied conversely to two antiparallel currents, justifies their repulsion as arising from the created air-pressure difference. Thus, most probably, a railgun will not work very well in the absence of air.
[5744] vixra:0908.0045 [pdf]
Cylindrical Wave, Wave Equation, and Method of Separation of Variables
It is shown that the wave equation cannot be solved for the general spreading of the cylindrical wave using the method of separation of variables. An equation is presented whose solution would accomplish this. Using this equation, the above-mentioned general spreading of the cylindrical wave for large distances is obtained, which, contrary to what is believed, consists of arbitrary functions.
[5745] vixra:0908.0043 [pdf]
Role of Air Pressure in Diamagnetism
In a gradient of magnetic field, magnetic dipoles of air are attracted toward the region of intense field. So, the air pressure is greater in the regions of more intense field. The resulting pressure gradient exerts a net force on a body placed in the air in this gradient of magnetic field, toward the region of low pressure, i.e. the region having the weaker field. This is like what takes place in the process of sink-float separation. To establish the presented theory we need only to perform the diamagnetism experiment in vacuum (according to the presented guidelines) to see if it will cease. (I'm ready to prepare for such an experiment in any university as a guest researcher.)
[5746] vixra:0908.0028 [pdf]
TGD and EEG
The TGD based general view about EEG developed in this book relies on the following general picture. </p><p> <OL> <LI> The magnetic body is the key actor in the TGD inspired model of EEG and nerve pulse. The magnetic body acts as an intentional agent using the biological body as a motor instrument and sensory receptor. There would be an entire hierarchy of magnetic bodies associated with various body parts, characterized by the p-adic length scale and the level of the dark matter hierarchy labeled by the value of Planck constant. The hierarchy of counterparts of EEGs associated with photons, Z<sup>0</sup> and W bosons, and gluons at various frequency scales, involving dark bosons with energies above the thermal threshold by virtue of the large value of hbar, would make communication and control possible. In particular, cyclotron radiation from Bose-Einstein condensates at the magnetic body and Josephson radiation from Josephson junctions associated with the cell membrane and other bio-electrets would be involved, and cyclotron and Josephson frequencies would correspond to EEG frequencies. <LI> The DNA as topological quantum computer vision suggests a rather detailed view about how the genome and cell membrane interact. Nucleotides and lipids would be connected by magnetic flux tubes carrying dark matter with varying values of Planck constant, defining a braiding affected by the 2-D flow of the lipids in the liquid crystal state and giving rise to a topological quantum computation with program modules defined by liquid flow patterns resulting via a self-organization process in the presence of metabolic energy feed. <LI> Sensory qualia could be associated with the generalized di-electric breakdowns between a sensory organ and its magnetic body. The cyclotron phase transitions of Bose-Einstein condensates of biologically important ions generated by the dark EEG photons at the magnetic body could generate the analogs of somatosensory qualia identifiable as our cognitive and emotional qualia. 
Long-ranged charge entanglement made possible by W MEs (topological light rays) could be an essential element of motor control and generate exotic ionization of nuclei (new nuclear physics predicted by TGD), in turn inducing classical electric fields at space-time sheets carrying ordinary matter. These fields generate various responses, such as ionic waves and nerve pulses, yielding the desired physiological responses. The recent view about the cell membrane as an almost vacuum extremal of Kähler action explains large parity breaking effects in living matter and also the peak frequencies of photoreceptors in the retina. Also a model for the cell membrane as a kind of sensory homunculus, with lipids identified as pixels of a sensory map representing basic qualia, follows naturally. Furthermore, EEG photons and biophotons can be identified as decay products of the same dark photons. </OL> </p><p> The plan of the book is roughly the following. The chapter describing the magnetic sensory canvas hypothesis is followed by a model for nerve pulse and by three chapters devoted to EEG.
[5747] vixra:0908.0027 [pdf]
TGD Inspired Theory of Consciousness
This book tries to give an overall view about TGD inspired theory of consciousness as it stands now. </p><p> The basic notions of TGD inspired theory of consciousness are the quantum jump identified as a moment of consciousness, the self identified as a sequence of quantum jumps analogous to a bound state of particles, the self hierarchy with sub-selves experienced by the self as mental images, and the sharing and fusion of mental images by quantum entanglement. The topics of the book are organized in the following manner. </p><p> <OL> <LI> In the first part of the book TGD inspired theory of consciousness is discussed. There are three summarizing chapters giving a view about how the ideas have evolved. There are also chapters about Negentropy Maximization Principle, about the notion of self, and about sensory representations. <LI> The second part of the book contains two chapters about the relationship between experienced and geometric time. The first one is more than a decade old. The second one - inspired by zero energy ontology and written quite recently - provides a rather detailed vision about how the arrow of geometric time, correlating with the arrow of experienced time, and the localization of the contents of sensory experience to a narrow time interval emerge. A chapter explaining the TGD based view about long term memory is also included. <LI> The third part of the book summarizes a roughly decade old view about intelligence and cognition. p-Adic physics as the physics of cognition and intentionality and many-fermion states as representations of Boolean statements are the key notions. In zero energy ontology also quantal versions of logical rules A&rarr; B realized as quantum variants of Boolean functions emerge at the fundamental level. A chapter about the role of the dark matter hierarchy, in particular about topological quantum computation as a universal information processing tool, would be needed to make the picture more complete. <LI> The fourth part is devoted to remote mental interactions. 
The theoretical motivation for taking remote mental interactions seriously is that exactly the same mechanisms which are involved in the interaction between the magnetic body and the biological body apply also to remote mental interactions in the TGD Universe. One could also understand why these phenomena are rare: a kind of immune system making it impossible for foreign magnetic bodies to control and communicate with the biological body possessed by a particular magnetic body would be a highly probable (but perhaps not unavoidable) outcome of the evolutionary process. </OL>
[5748] vixra:0908.0026 [pdf]
Mathematical Aspects of Consciousness Theory
This book discusses the general mathematical ideas behind TGD inspired theory of consciousness. </p><p> <I> PART I: New Physics And Mathematics Involved With TGD</I> </p><p> The Clifford algebra associated with a point of configuration space ("world of classical worlds") decomposes into a direct integral of von Neumann algebras known as hyper-finite factors of type II<sub>1</sub>. This implies strong physical predictions and deep connections with conformal field theories, knot-, braid- and quantum groups, and topological quantum computation. <br> In the TGD framework dark matter forms a hierarchy with levels characterized partially by the value of Planck constant labeling the pages of the book-like structure formed by singular covering spaces of the imbedding space M<sup>4</sup>&times; CP<sub>2</sub> glued together along a four-dimensional back. Particles at different pages are dark relative to each other since purely local interactions (vertices of Feynman diagrams) involve only particles at the same page. The phase transitions changing the value of Planck constant, having an interpretation as tunneling between different pages of the book, would induce phase transitions of the gel phases abundant in living matter. </p><p> <I> Part II: TGD Universe as topological quantum computer</I> </p><p> The braids formed by magnetic flux tubes are ideal for the realization of topological quantum computations (tqc). Bio-systems are basic candidates for topological quantum computers. In the DNA as tqc vision nucleotides and lipids are connected by flux tubes and the flow of lipids induces tqc programs. </p><p> <I> Part III: Categories, Number Theory And Consciousness</I> </p><p> Category theory could reflect the basic structures of conscious thought. The comparison of the inherent generalized logics associated with categories to the Boolean logic naturally associated with the configuration space spinor fields is also of interest. 
<br> The notion of infinite prime was the first mathematical invention inspired by TGD inspired theory of consciousness. The construction of infinite primes is very much analogous to a repeated second quantization of a super-symmetric arithmetic quantum field theory (with analogs of bound states included). Infinite primes form an infinite hierarchy, and the physical realization of this hierarchy implies an infinite hierarchy of conscious entities, with us representing only a single level in this hierarchy, looking infinitesimal from the point of view of the higher levels. The notion of infinite rational predicts an infinite number of real units with an infinitely rich number theoretical anatomy, and a single space-time point becomes a Platonia able to represent every quantum state of the entire Universe in its structure: a kind of algebraic Brahman=Atman identity.
[5749] vixra:0908.0025 [pdf]
Magnetospheric Consciousness
This book is about the notion of magnetic body, which is one of the key notions in the TGD inspired model of living matter and predicts a hierarchy of generalized EEGs associated with the magnetic bodies responsible for communication and control. Not only the personal magnetic bodies of living systems are relevant: the idea of the entire magnetosphere as a conscious system controlling the behavior of the biosphere emerges naturally. </p><p> <OL> <LI> Part I discusses the idea of the magnetosphere as a fractally scaled-up version of the biological body and brain. p-Adic fractality and the dark matter hierarchy give some plausibility to this idea. Also a vision about evolution in many-sheeted space-time is discussed. <LI> Part II introduces the notion of semitrance involving quantum entanglement of a subself of self, say a subsystem of the brain, with a remote system. The entanglement of subsystems of two unentangled systems, possible in many-sheeted space-time, means sharing and fusion of mental images (say stereo vision). The notion of finite measurement resolution justifies this concept.<br> Semitrance could have been the basic control and communication tool of collective levels of consciousness during the period of human consciousness which Jaynes calls bicamerality. The idea that human consciousness might have had a totally different character only a few millennia ago finds additional support from the notions of super- and hyper-genome implied naturally by the dark matter hierarchy and the notion of magnetic body. The identification of memes as hyper-genes looks attractive. The evolution of the hyper-genome could have driven the explosive evolution of human civilizations during the last two millennia. <LI> Part III entitled "Crazy Stuff" discusses the idea that crop circles are due to intentional action of a magnetospheric higher-level self, or a higher-level self using the magnetosphere as a tool to build them. Two special crop circles, the Chilbolton and Crabwood crop circles, are discussed. 
The proposal is that they provide information about the genomes of the life forms responsible for them. The most science-fictional identification of these life forms would be ourselves in the distant geometric future using a time mirror mechanism to affect the geometric past. </OL> </p><p> Most of the material of this book was written well before the dark matter revolution and the formulation of the zero energy ontology, and I have only later added comments to the existing text. I hope that I can later add new material in which the implications of the dark matter hierarchy are discussed in more detail.
[5750] vixra:0908.0024 [pdf]
Bio-Systems as Conscious Holograms
Brain as a hologram is an old idea, and it emerges naturally also in the TGD framework both at the quantum and classical level, which by quantum classical correspondence is expected to reflect what happens at the deeper quantum level. The book is organized as follows. </p><p> <OL> <LI> The new view about the relationship between experienced and geometric time underlies the notion of 4-D dynamical hologram. Therefore the first part of the book contains three chapters about this topic reflecting the development of ideas. The first one is more than a decade old. The third one, written quite recently, is inspired by zero energy ontology and provides a rather detailed vision about how the arrow of geometric time, correlating with the arrow of experienced time, and the localization of the contents of sensory experience to a narrow time interval emerge. It must be emphasized that the details of the model for the relationship between experienced and geometric time are still uncertain. <LI> The second part of the book is devoted to the development of the hologram idea. Also a general model of sensory qualia and a model for the sensory receptor as an electret, electrets being very abundant in living matter, are discussed. The recent view about the cell membrane as an almost vacuum extremal of K&auml;hler action explains large parity breaking effects in living matter and also the peak frequencies of photoreceptors in the retina. Also a model for the cell membrane as a kind of sensory homunculus, with lipids identified as pixels of a sensory map representing basic qualia, follows naturally. Furthermore, EEG photons and biophotons can be identified as decay products of the same dark photons. <LI> The third part of the book is devoted to water memory and metabolism. The title suggests a connection between these two, and this kind of connection has indeed emerged. 
A discovery - certainly one of the greatest surprises of my professional life - popped up as an outcome of an attempt to understand the mechanism behind water memory, for which rather strong support exists now. The idea was that dark nuclei whose sizes are zoomed up to atomic size scale could provide a representation of genes, with dark nucleons consisting of three quarks representing DNA codons. It turned out that the model for the dark nucleon consisting of three quarks predicts counterparts of 64 DNAs, 64 RNAs, and 20 aminoacids and allows one to identify the vertebrate genetic code as a natural mapping of DNA type states to amino-acid type states. The population of dark nuclei would be a new life-form possibly responsible for the water memory. The chapter about metabolism represents a model of metabolism based on the identification of universal metabolic energy quanta as increments of zero point kinetic energies emitted or absorbed as particles are transferred between space-time sheets characterized by different p-adic primes. </OL>
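The arithmetic behind the metabolic energy quanta is elementary: the zero-point kinetic energy of a particle confined to a region of size L scales as 1/L&sup2;, so if p-adic length scales grow as 2<sup>k/2</sup> the quanta come in powers of two. The following sketch (Python; the particle-in-a-box formula and the reference scale L0 are illustrative assumptions, not the model of the text itself) checks only this scaling:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_P = 1.67262192e-27    # proton mass, kg

def zero_point_energy(L, m=M_P):
    """Ground-state kinetic energy of a particle confined to scale L (box model)."""
    return math.pi ** 2 * HBAR ** 2 / (2 * m * L ** 2)

# p-adic length scales are taken proportional to 2**(k/2), so the
# zero-point energy scales as 2**(-k); L0 is an arbitrary reference scale.
def L(k, L0=1e-9):
    return L0 * 2 ** (k / 2)

E0, E2 = zero_point_energy(L(0)), zero_point_energy(L(2))
assert abs(E0 / E2 - 4.0) < 1e-9  # k -> k+2 divides the quantum by 4

# The metabolic quantum of the text would be the increment E(k) - E(k'), k' > k:
delta = E0 - E2                   # energy released on moving to the larger sheet
assert delta > 0
```

Only the 2<sup>-k</sup> scaling is asserted here; the absolute magnitudes depend on the particle mass and on which p-adic length scale one assigns to each sheet.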
[5751] vixra:0908.0023 [pdf]
Genes and Memes
The first part of the book discusses the new physics relevant to biology and the vision about the Universe as a topological quantum computer (tqc). </p><p> The second part describes concrete physical models. </p><p> <OL> <LI> The notion of many-sheeted DNA and a model of the genetic code inspired by the notion of Combinatorial Hierarchy, predicting what I call the memetic code, are introduced. The almost exact symmetries of the code table with respect to the third letter inspire the proposal that the genetic code could have evolved as a fusion of a two-letter code and a single-letter code. <LI> A model for how the genome and cell membrane could act as a tqc is developed. Magnetic flux tubes containing dark matter characterized by a large value of Planck constant would make living matter a macroscopic quantum system. DNA nucleotides and lipids of the cell membrane would be connected by magnetic flux tubes, and the flow of the 2-D liquid formed by lipids would induce a dynamical braiding defining the computation. <LI> The net of magnetic flux tubes could explain the properties of the gel phase. Phase transitions reducing Planck constant would induce a contraction of the flux tubes, explaining why bio-molecules manage to find each other in a dense soup of bio-molecules. The topology of the net would be dynamical and the ADP &harr; ATP transformation could affect it. The anomalies related to ionic currents, nerve pulse activity, and the interaction of ELF radiation with the vertebrate brain find an explanation in this framework. The number theoretic entanglement entropy, able to have negative values, could be seen as the real quintessence associated with the metabolic energy transfer, and the poorly understood high energy phosphate bond could be interpreted in terms of negentropic entanglement rather than ordinary bound state entanglement. 
<LI> The discoveries of Peter Gariaev about the interaction of ordinary and laser light with the genome, combined with ideas about dark matter and water memory, lead to a model for the interaction of photons with DNA. The dark &harr; ordinary transformation for photons could allow one to "see" dark matter by allowing ordinary light to interact with DNA. <LI> A physical model for the genetic code emerged from an attempt to understand the mechanism behind water memory. Dark nuclei whose sizes are zoomed up to atomic size scale could represent genes. The model for the dark nucleon consisting of three quarks predicts counterparts of 64 DNAs, 64 RNAs, and 20 aminoacids and allows one to identify the genetic code as a natural mapping of DNA type states to amino-acid type states consistent with the vertebrate genetic code. </OL> </p><p> The third part of the book discusses number theoretical models of the genetic code based on p-adic thermodynamics and maximization of entropy or negentropy. These models reproduce the genetic code but fail to make killer predictions.
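The counts quoted above - 64 codon states mapped to 20 amino acids - and the near-exact third-letter symmetry mentioned in the first item can be checked directly against the standard vertebrate code table. A small sketch (Python; the one-letter table below is the standard genetic code, used only as the arithmetic reference point, not the dark-nucleon model itself):

```python
from itertools import product

BASES = "TCAG"
# Standard (vertebrate nuclear) genetic code, codons ordered TTT, TTC,
# ..., GGG in TCAG order; '*' marks the three stop codons.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

codons = ["".join(c) for c in product(BASES, repeat=3)]
assert len(codons) == 4 ** 3 == 64            # 64 codon states
code = dict(zip(codons, AA))

assert len(set(code.values()) - {"*"}) == 20  # 20 amino acids
assert sum(a == "*" for a in code.values()) == 3  # 3 stop codons

# Near-symmetry in the third letter: the T<->C substitution in the third
# position is synonymous for every codon doublet, cf. the proposal that
# the code evolved as a fusion of a two-letter and a single-letter code.
doublets = ["".join(c) for c in product(BASES, repeat=2)]
assert sum(code[p + "T"] == code[p + "C"] for p in doublets) == 16
```

The A&harr;G substitution in the third position is synonymous for most but not all doublets (ATA/ATG and TGA/TGG break it), which is the "almost exact" part of the symmetry.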
[5752] vixra:0908.0022 [pdf]
Quantum Hardware of Living Matter
This book represents a view about the quantum hardware of living systems in the TGD framework. Since the vision is bound to look highly speculative, it is good to emphasize that the most important predictions follow almost without any reference to the classical field equations, using only quantum classical correspondence. </p><p> The new conceptual elements are the notion of many-sheeted space-time having a fractal hierarchical structure; the 4-D spin glass degeneracy of the preferred extremals of K&auml;hler action providing huge information storage capacity; topological field quantization; the notion of the field/magnetic body serving as an intentional agent using the biological body as a sensory receptor and motor instrument; zero energy ontology leading to a new view about energy and about the relationship between experienced and geometric time; the dark matter hierarchy realized in terms of a book-like structure of the 8-D imbedding space with pages labeled by values of Planck constant; the assumption that phase transitions changing the value of Planck constant provide a key control tool in living matter; and the p-adic length scale hypothesis allowing a quantitative grasp of the situation. </p><p> <OL> <LI> Three chapters of this book are devoted to the model of high T<sub>c</sub> super-conductivity relying strongly on the notions of quantum criticality and dark matter. <LI> Two chapters discuss the quantum antenna hypothesis inspired by topological light rays (M(assless) E(xtremals)) and the notion of wormhole magnetic fields. Notice that the notion of wormhole magnetic field was introduced well before the hypothesis that bosons and also interaction fermions have a natural identification as wormhole contacts emerged. 
The recent view about quantum TGD suggests the interpretation of wormhole magnetic fields as dark, scaled-up versions of elementary particles identified as K&auml;hler magnetic flux tubes: pairs of magnetically charged wormhole contacts carrying at the second wormhole throat a neutrino pair neutralizing the weak isospin of the fermion at the first wormhole throat. For the ordinary value of Planck constant the length of this flux tube would be given by the weak length scale. Therefore weak interactions in biological length scales are involved, giving rise to large parity breaking effects. A scaled-up QCD is also involved, implying hadron-like states in biological length scales. <LI> Two chapters are devoted to the possible biological implications of the hypothesis that dark matter corresponds to macroscopic quantum phases characterized by a large value of Planck constant and is the key actor in living matter. <LI> A possible identification of quantum correlates of sensory qualia is discussed assuming that qualia are in one-one correspondence with the increments of quantum numbers in the quantum jump. Also a simple model for the sensory receptor is introduced. The recent view about the cell membrane as an almost vacuum extremal of K&auml;hler action explains large parity breaking effects in living matter and also the peak frequencies of photoreceptors in the retina. Also a model for the cell membrane as a kind of sensory homunculus, with lipids identified as pixels of a sensory map representing basic qualia, follows naturally. Furthermore, EEG photons and biophotons can be identified as decay products of the same dark photons. </OL>
[5753] vixra:0908.0021 [pdf]
Bio-Systems as Self-Organizing Quantum Systems
The book describes basic quantum TGD in its recent form. <OL> <LI>The properties of the preferred extremals of Kähler action are crucial for the construction, and a discussion of the known extremals is therefore included. <LI>General coordinate invariance and generalized super-conformal symmetries - the latter present only for 4-dimensional space-time surfaces and for 4-D Minkowski space - define the basic symmetries of quantum TGD. <LI>In zero energy ontology the S-matrix is replaced with the M-matrix, identified as time-like entanglement coefficients between the positive and negative energy parts of zero energy states assignable to the past and future boundaries of 4-surfaces inside the causal diamond defined as the intersection of future and past directed light-cones. The M-matrix is a product of a diagonal density matrix and a unitary S-matrix, and there are reasons to believe that the S-matrix is universal. Generalized Feynman rules are based on the generalization of Feynman diagrams obtained by replacing lines with light-like 3-surfaces and vertices with 2-D surfaces at which the lines meet. <LI> A category theoretical formulation of quantum TGD is considered. Finite measurement resolution, realized in terms of a fractal hierarchy of causal diamonds inside causal diamonds, leads to a stringy formulation of quantum TGD involving an effective replacement of the 3-D light-like surface with a collection of braid strands representing the ends of strings. A formulation in terms of category theoretic concepts is proposed and leads to a hierarchy of algebras forming what is known as operads. <LI>Twistors emerge naturally in the TGD framework and could allow the formulation of the low energy limit of the theory in the approximation that particles are massless. The replacement of massless plane waves with states for which amplitudes are localized on light-rays is suggestive in the twistor theoretic framework. 
Twistors could also allow a dual representation of space-time surfaces in terms of surfaces in X×CP<sub>2</sub>, where X is the 8-D twistor space or its 6-D projective variant. These surfaces would have dimension higher than four in non-perturbative phases, meaning an analogy with branes. In the full theory massive particles must be included, but they represent a problem in the approach based on standard twistors. The interpretation of particles massive in the 4-D sense as massless particles in the 8-D sense would resolve the problem, and requires a generalization of the twistor concept involving in an essential manner the triality of vector and spinor representations of SO(7,1). <LI>In the TGD Universe bosons are in a well-defined sense bound states of a fermion and an anti-fermion. This leads to the notion of bosonic emergence, meaning that the fundamental action is just Dirac action coupled to gauge potentials, and the bosonic action emerges as part of the effective action as one functionally integrates over the spinor fields. This kind of approach predicts the evolution of all coupling constants if one is able to fix the necessary UV cutoffs of mass and hyperbolic angle in loop integrations. The guess for the hyperbolic cutoff motivated by the geometric view about finite measurement resolution predicts a coupling constant evolution consistent with that predicted by the standard model. The condition that all N-vertices defined by fermionic loops vanish for N>3 when the incoming particles are massless gives hope of fixing the hyperbolic cutoff completely from fundamental principles. </OL>
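The stated factorization of the M-matrix into a density-matrix part and a unitary part is easy to realize concretely in finite dimensions: build a hermitian density matrix &rho; and a unitary S that commute (diagonal in a common basis), and set M = &radic;&rho;&middot;S. A toy numerical sketch (Python/NumPy; the dimension and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Eigenvalues of the density matrix (positive, summing to 1) and of the
# unitary S-matrix (unimodular phases), shared eigenbasis Q.
p = rng.random(n); p /= p.sum()
phases = np.exp(2j * np.pi * rng.random(n))
Q, _ = np.linalg.qr(rng.random((n, n)) + 1j * rng.random((n, n)))

rho = Q @ np.diag(p) @ Q.conj().T          # hermitian, trace 1
S = Q @ np.diag(phases) @ Q.conj().T       # unitary

assert np.allclose(S @ S.conj().T, np.eye(n))  # S is unitary
assert np.allclose(rho @ S, S @ rho)           # [rho, S] = 0

# Toy M-matrix: square root of the density matrix times S.
M = Q @ np.diag(np.sqrt(p)) @ Q.conj().T @ S

assert np.allclose(M.conj().T @ M, rho)        # M "remembers" rho
assert np.isclose(np.trace(M.conj().T @ M).real, 1.0)
```

The point of the construction is that M is in general neither hermitian nor unitary, yet M&dagger;M reproduces the density matrix exactly because &rho; and S commute.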
[5754] vixra:0908.0020 [pdf]
P-Adic Length Scale Hypothesis and Dark Matter Hierarchy
The book is devoted to the applications of the p-adic length scale hypothesis and the dark matter hierarchy. </p><p> <OL> <LI> The p-adic length scale hypothesis states that primes p&asymp; 2<sup>k</sup>, k integer, in particular prime, define preferred p-adic length scales. Physical arguments supporting this hypothesis are based on the generalization of Hawking's area law for blackhole entropy so that it applies to elementary particles. A deeper, number theory based justification for this hypothesis relies on the generalization of the number concept fusing real number fields and p-adic number fields along common rationals or numbers in their non-trivial algebraic extensions. This approach also justifies the notion of multi-p-fractality and allows one to understand the scaling law in terms of simultaneous p&asymp; 2<sup>k</sup>- and 2-fractality. <LI> In the TGD framework the levels of the dark matter hierarchy are labeled by the values of a dynamical, quantized Planck constant. The justification for the hypothesis is provided by quantum classical correspondence and by the fact that the sizes of space-time sheets identifiable as quantum coherence regions can be arbitrarily large. <LI> The weak form of electric-magnetic duality is the newest building brick of the vision and leads to a detailed view about electro-weak screening and color confinement, and predicts new physics below weak scales. The weak form of electric-magnetic duality allows one to identify Higgs bosons and to understand how they provide the longitudinal polarizations of gauge bosons. The most natural option is that the photon eats the remaining Higgs scalar and receives a small mass. This is true for all bosons regarded as massless and allows exact Yangian symmetry requiring the vanishing of IR divergences. Higgs potential and vacuum expectation of Higgs are not needed in the model. <LI> Twistors emerge naturally in the TGD framework, and several proposals for the twistorialization of TGD are discussed in two chapters devoted to the topic. 
The twistorial approach combined with zero energy ontology, bosonic emergence, and the properties of the Chern-Simons Dirac operator leads to the conjecture that all particles - also string like objects - can be regarded as bound states of massless particles identifiable as wormhole throats. Also virtual particles would consist of massless wormhole throats, but the bound state property is not assumed anymore, and the energies of the wormhole throats can have opposite signs so that space-like momentum exchanges become possible. This implies extremely strong constraints on loop momenta and manifest finiteness of loop integrals. </OL> </p><p> The first part of the book is about the description of elementary particle massivation in terms of p-adic thermodynamics and a Higgsy contribution affecting the vacuum conformal weight. In the first chapter the view about quantum TGD from the particle physics perspective is discussed, and the remaining chapters are devoted to the detailed calculation of the masses of elementary particles and hadrons, and to various new physics suggested or predicted by the resulting scenario. </p><p> The second part of the book is devoted to the application of the p-adic length scale hypothesis above elementary particle length scales. The so called leptohadron physics, originally developed on the basis of experimental anomalies, is discussed as a particular instance of an infinite fractal hierarchy of copies of standard model physics, predicted by TGD and consistent with what is known about ordinary elementary particle physics. The TGD based view about nuclear physics involves light exotic quarks as an essential element, and dark nuclear physics could have implications also at the level of condensed matter physics and biology. The TGD based view about high T<sub>c</sub> superconductors also involves dark matter in an essential manner and is summarized in the closing chapter.
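The hypothesis p &asymp; 2<sup>k</sup> can be probed numerically. Mersenne primes 2<sup>k</sup>-1 realize it exactly; the exponents k = 89, 107, 127 checked below are the ones the TGD literature associates with physically preferred scales (an assumption imported from outside this abstract), and for other k a prime still sits very close to 2<sup>k</sup> on a logarithmic scale. A stdlib-only sketch using a Miller-Rabin primality test (the sample values of k are arbitrary):

```python
def is_prime(n):
    """Miller-Rabin with fixed bases; deterministic for n < 3.3e24
    and with no false negatives for any actual prime n."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2; s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# Mersenne primes realize p ~ 2**k exactly.
for k in (89, 107, 127):
    assert is_prime(2 ** k - 1)
assert not is_prime(2 ** 11 - 1)   # 2047 = 23 * 89: k prime is necessary, not sufficient

# For generic k the nearest prime is tiny compared to 2**k itself.
def nearest_prime_offset(k):
    d = 0
    while True:
        d += 1
        if is_prime(2 ** k - d) or is_prime(2 ** k + d):
            return d

assert all(nearest_prime_offset(k) < 2 ** 10 for k in (20, 30, 40))
```

For k up to 40 the assertion on the offset is guaranteed by known maximal prime gaps; the point is that |p - 2<sup>k</sup>|/2<sup>k</sup> is negligible, which is all the hypothesis needs.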
[5755] vixra:0908.0019 [pdf]
Towards M-Matrix
This book is devoted to a detailed representation of the recent state of quantum TGD. </p><p> The first part of the book summarizes quantum TGD in its recent form. </p><p> <OL> <LI> General coordinate invariance and generalized super-conformal symmetries are the basic symmetries of TGD, and Equivalence Principle can be generalized using a generalized coset construction. <LI> In zero energy ontology the basis of classical WCW spinor fields forms a unitary U-matrix having M-matrices as its orthogonal rows. The M-matrix defines time-like entanglement coefficients between the positive and negative energy parts of the zero energy states. The M-matrix is a product of a hermitian density matrix and a unitary S-matrix commuting with it. The hermitian density matrices define an infinite-dimensional Lie algebra extending to a generalization of a Kac-Moody type algebra with generators defined as products of hermitian density matrices and powers of the S-matrix. A Yangian type algebra is obtained if only non-negative powers of S are allowed. The interpretation is in terms of the hierarchy of causal diamonds with size scales coming as integer multiples of the CP<sub>2</sub> size scale. Zero energy states define their own symmetry algebra. For generalized Feynman diagrams lines correspond to light-like 3-surfaces and vertices to 2-D surfaces. <LI> Finite measurement resolution, realized using a fractal hierarchy of causal diamonds (CDs) inside CDs, implies a stringy formulation of quantum TGD involving the replacement of 3-D light-like surfaces with braids representing the ends of strings. A category theoretical formulation leads to a hierarchy of algebras forming an operad. <LI> Twistors emerge naturally in the TGD framework, and several proposals for the twistorialization of TGD are discussed in two chapters devoted to the topic. 
The twistorial approach combined with zero energy ontology, bosonic emergence, and the properties of the Chern-Simons Dirac operator leads to the conjecture that all particles - also string like objects - can be regarded as bound states of massless particles identifiable as wormhole throats. Also virtual particles would consist of massless wormhole throats, but the bound state property is not assumed anymore, and the energies of the wormhole throats can have opposite signs so that space-like momentum exchanges become possible. This implies extremely strong constraints on loop momenta and manifest finiteness of loop integrals. </p><p> An essential element of the formulation is exact Yangian symmetry, obtained by replacing the loci of the multilocal symmetry generators of the Yangian algebra with partonic 2-surfaces, so that the conformal algebra of Minkowski space is extended to an infinite-dimensional algebra bringing in also the conformal algebra assigned to the partonic 2-surfaces. Yangian symmetry requires the vanishing of both UV and IR divergences, achieved if the physical particles are bound states of massless wormhole throats. </p><p> Rather general arguments suggest the formulation of TGD in terms of holomorphic 6-surfaces in the product CP<sub>3</sub>&times; CP<sub>3</sub> of twistor spaces, leading to unique partial differential equations determining these surfaces in terms of homogeneous polynomials of the projective complex coordinates of the two twistor spaces. </OL> </p><p> The second part of the book is devoted to hyper-finite factors and the hierarchy of Planck constants. </p><p> <OL> <LI> The Clifford algebra of WCW is a hyper-finite factor of type II<sub>1</sub>. The inclusions provide a mathematical description of finite measurement resolution. The included factor is analogous to a gauge symmetry group, since the action of the included factor creates states not distinguishable from the original one. 
The TGD Universe would be analogous to a Turing machine able to emulate any internally consistent gauge theory (or more general theory), so that finite measurement resolution would provide the TGD Universe with huge simulational powers. <LI> In the TGD framework dark matter corresponds to ordinary particles with a non-standard value of Planck constant. The simplest view about the hierarchy of Planck constants is as an effective hierarchy describable in terms of local, singular coverings of the imbedding space. The basic observation is that for K&auml;hler action the time derivatives of the imbedding space coordinates are many-valued functions of the canonical momentum densities. If all branches for given values of the canonical momentum densities are allowed, one obtains the analogs of many-sheeted Riemann surfaces, with each sheet giving the same contribution to the K&auml;hler action, so that Planck constant is effectively a multiple of the ordinary Planck constant. Dark matter could be in a quantum Hall like phase localized at light-like 3-surfaces with macroscopic size, analogous to black-hole horizons. </OL>
[5756] vixra:0908.0018 [pdf]
TGD as a Generalized Number Theory
The focus of this book is the number theoretical vision about physics. This vision involves three loosely related parts. </p><p> <OL><LI> The fusion of real physics and various p-adic physics to a single coherent whole by generalizing the number concept by fusing real numbers and various p-adic number fields along common rationals. Extensions of p-adic number fields can be introduced by gluing them along common algebraic numbers to reals. Algebraic continuation of the physics from rationals and their extensions to various number fields (a generalization of the completion process for rationals) is the key idea, and the challenge is to understand how one could achieve this dream. A profound implication is that purely local p-adic physics would code for the p-adic fractality of long length scale real physics and vice versa, and one could understand the origins of the p-adic length scale hypothesis. <LI> The second part of the vision involves hyper counterparts of the classical number fields defined as subspaces of their complexifications with Minkowskian signature of metric. Allowed space-time surfaces would correspond to what might be called hyper-quaternionic sub-manifolds of a hyper-octonionic space, mappable to M<sup>4</sup>&times; CP<sub>2</sub> in a natural manner. One could assign to each point of a space-time surface a hyper-quaternionic 4-plane, the plane defined by the induced or modified gamma matrices defined by the canonical momentum currents of K&auml;hler action. Induced gamma matrices seem to be preferred mathematically: they correspond to modified gamma matrices assignable to 4-volume action, and one can develop arguments for why K&auml;hler action defines the dynamics. </p><p> Also a general vision about preferred extremals of K&auml;hler action emerges. 
The basic idea is that the imbedding space allows an octonionic structure and that the field equations in a given space-time region reduce to the associativity of the tangent space or normal space: space-time regions should be quaternionic or co-quaternionic. The first formulation is in terms of the octonionic representation of the imbedding space Clifford algebra and states that the octonionic gamma "matrices" span a complexified quaternionic sub-algebra. Another formulation is in terms of octonion real-analyticity. An octonion real-analytic function f is expressible as f=q<sub>1</sub>+Iq<sub>2</sub>, where q<sub>i</sub> are quaternions and I is an octonionic imaginary unit analogous to the ordinary imaginary unit. q<sub>2</sub> (q<sub>1</sub>) would vanish for quaternionic (co-quaternionic) space-time regions. The local number field structure of octonion real-analytic functions, with composition of functions as an additional operation, would be realized as geometric operations for space-time surfaces. The conjecture is that these two formulations are equivalent. <LI> The third part of the vision involves infinite primes, identifiable in terms of an infinite hierarchy of second quantized arithmetic quantum field theories on one hand, and as having representations as space-time surfaces analogous to zero loci of polynomials on the other hand. A single space-time point would have an infinitely complex structure, since real unity can be represented as a ratio of infinite numbers in infinitely many manners, each having its own number theoretic anatomy. A single space-time point would in principle be able to represent in its structure the quantum state of the entire universe. This number theoretic variant of the Brahman=Atman identity would make the Universe an algebraic hologram. 
</p><p> The number theoretical vision suggests that infinite hyper-octonionic or -quaternionic primes could correspond directly to the quantum numbers of elementary particles, and a detailed proposal for this correspondence is made. Furthermore, the generalized eigenvalue spectrum of the Chern-Simons Dirac operator could be expressed in terms of hyper-complex primes, in turn defining the basic building bricks of infinite hyper-complex primes, from which hyper-octonionic primes are obtained by discrete SU(3) rotations performed for finite hyper-complex primes. </OL> </p><p> Besides this holy trinity I will discuss in the first part of the book loosely related topics such as the relationship between infinite primes and non-standard numbers. </p><p> The second part of the book is devoted to the mathematical formulation of p-adic TGD. The p-adic counterpart of integration is certainly the basic mathematical challenge. Number theoretical universality and the notion of algebraic continuation from rationals to various continuous number fields is the basic idea behind the attempts to solve the problems. p-Adic integration is also a central problem of modern mathematics, and the relationship of the TGD approach to motivic integration and cohomology theories in p-adic number fields is discussed. </p><p> The correspondence between real and p-adic numbers is the second fundamental problem. The key problem is to understand whether and how this correspondence could be at the same time continuous and respect symmetries at least in a discrete sense. The proposed explanation of the Shnoll effect suggests that the notion of quantum rational number could tie together p-adic physics and quantum groups and could allow one to define a real-p-adic correspondence satisfying the basic conditions. </p><p> The third part is devoted to possible applications. 
Included are category theory in TGD framework; TGD inspired considerations related to Riemann hypothesis; topological quantum computation in TGD Universe; and TGD inspired approach to Langlands program.
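The decomposition f=q<sub>1</sub>+Iq<sub>2</sub> invoked above is exactly the Cayley-Dickson doubling of the quaternions: an octonion is a pair (q<sub>1</sub>, q<sub>2</sub>) of quaternions with a twisted product. The sketch below (Python; pure-tuple arithmetic, one standard sign convention among several) shows that pairs with q<sub>2</sub>=0 close under multiplication, while the full algebra is non-associative:

```python
def q_mul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = p; w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

# Cayley-Dickson doubling: an octonion is a pair (q1, q2) of quaternions,
# i.e. precisely the decomposition f = q1 + I q2 of the text.
def o_mul(a, b):
    p, q = a; r, s = b
    left = tuple(u - v for u, v in zip(q_mul(p, r), q_mul(q_conj(s), q)))
    right = tuple(u + v for u, v in zip(q_mul(s, p), q_mul(q, q_conj(r))))
    return (left, right)

i = ((0, 1, 0, 0), (0, 0, 0, 0))
j = ((0, 0, 1, 0), (0, 0, 0, 0))
I = ((0, 0, 0, 0), (1, 0, 0, 0))   # the extra imaginary unit

# Quaternionic pairs (q2 = 0) stay quaternionic: an associative subalgebra...
assert o_mul(i, j) == ((0, 0, 0, 1), (0, 0, 0, 0))   # ij = k, still q2 = 0

# ...but the full octonion product is non-associative:
assert o_mul(o_mul(i, j), I) != o_mul(i, o_mul(j, I))
```

The "quaternionic region" of the text corresponds to the associative subalgebra q<sub>2</sub>=0 here; the loss of associativity once I enters is what makes the associativity condition on tangent spaces a genuine restriction.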
[5757] vixra:0908.0017 [pdf]
Physics in Many-Sheeted Space-Time
This book is devoted to what might be called classical TGD. </p><p> <OL> <LI> Classical TGD identifies space-time surfaces as a kind of generalized Bohr orbits. It is an exact part of quantum TGD. <LI> The notions of many-sheeted space-time, topological field quantization and the notion of the field/magnetic body follow from simple topological considerations. Space-time sheets can have arbitrarily large sizes, and their interpretation as quantum coherence regions implies that in the TGD Universe macroscopic quantum coherence is possible in arbitrarily long scales. Also long ranged classical color and electro-weak fields are predicted. <LI> The TGD Universe is a fractal containing fractal copies of standard model physics at various space-time sheets, labeled by p-adic primes assignable to elementary particles and by the level of the dark matter hierarchy characterized partially by the value of Planck constant labeling the pages of the book-like structure formed by singular covering spaces of the imbedding space M<sup>4</sup>&times; CP<sub>2</sub> glued together along a four-dimensional back. Particles at different pages are dark relative to each other, since local interactions defined in terms of the vertices of Feynman diagrams involve only particles at the same page. </p><p> The simplest view about the hierarchy of Planck constants is as an effective hierarchy describable in terms of local, singular coverings of the imbedding space. The basic observation is that for K&auml;hler action the time derivatives of the imbedding space coordinates are many-valued functions of the canonical momentum densities. If all branches for given values of the canonical momentum densities are allowed, one obtains the analogs of many-sheeted Riemann surfaces, with each sheet giving the same contribution to the K&auml;hler action, so that Planck constant is effectively a multiple of the ordinary Planck constant. <LI> Zero energy ontology brings in an additional powerful interpretational principle. 
</OL> </p><p> The topics of the book are organized as follows. <OL> <LI> In Part I the extremals of K&auml;hler action are discussed, and the notions of many-sheeted space-time, topological field quantization, and topological condensation and evaporation are introduced. <LI> In Part II many-sheeted cosmology and astrophysics are summarized. The p-adic and dark matter hierarchies imply that TGD inspired cosmology is fractal. Cosmic strings and their deformations giving rise to magnetic flux tubes are the basic objects of TGD inspired cosmology. Magnetic flux tubes can in fact be interpreted as carriers of dark energy giving rise to accelerated expansion via negative magnetic "pressure". The study of imbeddings of Robertson-Walker cosmology shows that critical and over-critical cosmologies are unique apart from their duration. The idea about the dark matter hierarchy was originally motivated by the observation that planetary orbits could be interpreted as Bohr orbits with an enormous value of Planck constant, and this picture leads to a rather detailed view about macroscopically quantum coherent dark matter in astrophysics and cosmology. </p><p> <LI> Part III includes old chapters about the implications of TGD for condensed matter physics. The phases of CP<sub>2</sub> complex coordinates could define phases of order parameters of macroscopic quantum phases manifesting themselves in the properties of living matter and even in hydrodynamics. For instance, the Z<sup>0</sup> magnetic gauge field could make itself visible in hydrodynamics, and Z<sup>0</sup> magnetic vortices could be involved with super-fluidity. </OL>
[5758] vixra:0908.0016 [pdf]
Physics as Infinite-Dimensional Geometry
The topic of this book is a vision about physics as infinite-dimensional K&auml;hler geometry of the "world of classical worlds" (WCW), with "classical world" identified either as a light-like 3-D surface X<sup>3</sup> of a unique Bohr orbit like 4-surface X<sup>4</sup>(X<sup>3</sup>) or as X<sup>4</sup>(X<sup>3</sup>) itself. The non-determinism of K&auml;hler action defining the K&auml;hler function forces one to generalize the notion of 3-surface. Zero energy ontology allows one to formulate this generalization elegantly using a hierarchy of causal diamonds (CDs) defined as intersections of future and past directed light-cones, and a geometric realization of coupling constant evolution and finite measurement resolution emerges. </p><p> The general vision about quantum dynamics is that the basis for WCW spinor fields defines in zero energy ontology a unitary U-matrix having M-matrices as orthogonal rows. A given M-matrix is expressible as a product of a hermitian square root of a density matrix and an S-matrix. M-matrices define time-like entanglement coefficients between positive and negative energy parts of zero energy states represented by the modes of WCW spinor fields. </p><p> One encounters two challenges. <OL><LI> Provide WCW with a K&auml;hler geometry consistent with 4-dimensional general coordinate invariance. Clearly, the definition of the metric must assign to a given light-like 3-surface X<sup>3</sup> a 4-surface X<sup>4</sup>(X<sup>3</sup>) as a kind of Bohr orbit. <LI> Provide WCW with a spinor structure. The idea is to express configuration space gamma matrices using super algebra generators expressible using second quantized fermionic oscillator operators for induced free spinor fields at X<sup>4</sup>(X<sup>3</sup>). Isometry generators and contractions of Killing vectors with gamma matrices would generalize the Super Kac-Moody algebra. </OL> </p><p> The condition of mathematical existence poses stringent conditions on the construction. 
</p><p> <OL> <LI> The experience with loop spaces suggests that a well-defined Riemann connection exists only if this space is a union of infinite-dimensional symmetric spaces. Finiteness requires that vacuum Einstein equations are satisfied. The coordinates labeling these symmetric spaces do not contribute to the line element and have an interpretation as non-quantum-fluctuating classical variables. <LI> The construction of the K&auml;hler structure requires the identification of a complex structure. Direct construction of the K&auml;hler function as the action associated with a preferred extremal of K&auml;hler action leads to a unique result. The group theoretical approach relies on a direct guess of the isometries of the symmetric spaces involved. The isometry group generalizes the Kac-Moody group by replacing the finite-dimensional Lie group with the group of symplectic transformations of &delta; M<sup>4</sup><sub>+</sub>&times; CP<sub>2</sub>, where &delta; M<sup>4</sup><sub>+</sub> is the boundary of the 4-dimensional future light-cone. The generalized conformal symmetries assignable to light-like 3-surfaces and boundaries of causal diamonds bring in a stringy aspect, and the Equivalence Principle can be generalized in terms of a generalized coset construction. <LI> Configuration space spinor structure geometrizes fermionic statistics and the quantization of spinor fields. Quantum criticality can be formulated in terms of the modified Dirac equation for induced spinor fields, allowing a realization of super-conformal symmetries and quantum gravitational holography. <LI> Zero energy ontology combined with the weak form of electric-magnetic duality led to a breakthrough in the understanding of the theory. The boundary conditions at light-like wormhole throats and at space-like 3-surfaces defined by the intersection of the space-time surface with the light-like boundaries of causal diamonds reduce the classical quantization of K&auml;hler electric charge to that for K&auml;hler magnetic charge. 
The integrability of field equations for the preferred extremals reduces to the condition that the flow lines of various isometry currents define Beltrami fields, for which the flow parameter by definition defines a global coordinate. The assumption that isometry currents are proportional to the instanton current for K&auml;hler action reduces the K&auml;hler function to a boundary term, which by the weak form of electric-magnetic duality reduces to a Chern-Simons term. This realizes TGD as an almost topological QFT. <LI> There are also number theoretical conjectures about the character of the preferred extremals. The basic idea is that the imbedding space allows an octonionic structure and that the field equations in a given space-time region should reduce to the associativity of the tangent space or normal space, so that space-time regions should be quaternionic or co-quaternionic. The first formulation is in terms of the octonionic representation of the imbedding space Clifford algebra and states that the octonionic gamma "matrices" span a quaternionic sub-algebra. Another formulation is in terms of octonion real-analyticity. An octonion real-analytic function f is expressible as f=q<sub>1</sub>+Iq<sub>2</sub>, where q<sub>i</sub> are quaternions and I is an octonionic imaginary unit analogous to the ordinary imaginary unit. q<sub>2</sub> (q<sub>1</sub>) would vanish for quaternionic (co-quaternionic) space-time regions. The local number field structure of octonion real-analytic functions, with composition of functions as an additional operation, would be realized as geometric operations for space-time surfaces. The conjecture is that these two formulations are equivalent. <LI> An important new interpretational element is the identification of the K&auml;hler action from Minkowskian space-time regions as a purely imaginary contribution identified as a Morse function making possible quantal interference effects. 
The contribution from the Euclidian regions, interpreted in terms of generalized Feynman graphs, is real and identified as the K&auml;hler function. These contributions give, apart from a coefficient, identical Chern-Simons terms at wormhole throats and at the space-like ends of the space-time surface: it is not clear whether only the contributions from these 3-surfaces are present. <LI> Effective 2-dimensionality suggests a reduction of Chern-Simons terms to a sum of real and imaginary terms corresponding to the total areas of string world sheets and partonic 2-surfaces from the Euclidian and Minkowskian regions; these surfaces are an essential element of the proposal for what preferred extremals should be. The duality between partonic 2-surfaces and string world sheets suggests that the total area of partonic 2-surfaces is the same as that for string world sheets. <LI> The approach also leads to a highly detailed understanding of the Chern-Simons Dirac equation at the wormhole throats and space-like 3-surfaces and of the K&auml;hler Dirac equation in the interior of the space-time surface. The effective metric defined by the anticommutators of the modified gamma matrices has an attractive interpretation as a geometrization of parameters like the sound velocity assigned with condensed matter systems, in accordance with effective 2-dimensionality and the strong form of holography. </OL>
[5759] vixra:0908.0015 [pdf]
TGD and Fringe Physics
The topics of this book could be called fringe physics, involving claimed phenomena which have no explanation in terms of standard physics. </p><p> Many-sheeted space-time with a p-adic length scale hierarchy, the predicted dark matter hierarchy with levels partially characterized by a quantized dynamical Planck constant, and the prediction of long ranged color and weak forces alone predict a vast variety of new physics effects. Zero energy ontology predicts that energy can have both signs and that classical signals can propagate in the reversed time direction at negative energy space-time sheets; an attractive identification for negative energy signals would be as generalizations of phase conjugate laser beams. This vision leads to a coherent view about metabolism, memory, and bio-control, and it is natural to ask whether the reported anomalies might be explained in terms of the mechanisms giving hopes about understanding the behavior of living matter. </p><p> <OL> <LI> The effects involving coined words like antigravity, strong gravity, and electro-gravity motivate the discussion of possible anomalous effects related to long range electro-weak fields and many-sheeted gravitation. For instance, TGD leads to a model for the strange effects reported in rotating magnetic systems. <LI> Tesla did not believe that Maxwell's theory was an exhaustive description of electromagnetism. He claimed that experimental findings related to pulsed systems require the assumption of what he called scalar waves, not allowed by Maxwell's electrodynamics. TGD indeed allows scalar wave pulses propagating with light velocity. The dropping of particles to larger space-time sheets liberating metabolic energy, the transformation of ordinary charged matter to dark matter and vice versa, dark photons, etc., might be needed to explain Tesla's findings. Also phase conjugate, possibly dark, photons making possible communications with the geometric past might be involved. 
<br> These speculative ideas receive unexpected support from the TGD inspired view about particle physics. The recent TGD inspired view about the Higgs mechanism suggests strongly that the photon eats the remaining component of the Higgs boson and in this manner gets a longitudinal polarization and a small mass, allowing one to avoid the infrared divergences of scattering amplitudes. <LI> The reports about ufos represent a further application for the TGD based view about the Universe. Taking seriously the predicted presence of an infinite self hierarchy represented by the dark matter hierarchy makes it almost obvious that higher civilizations are here, there, and everywhere, and that their relationship to us is like that of our brain to its neurons, so that the Fermi paradox (Where are they all?) would disappear. Although space travel might be far too primitive an idea for the civilizations at higher levels of the hierarchy, ufos might be real objects representing more advanced technology rather than plasmoid like life forms serving as mediums in telepathic communications. </OL>
[5760] vixra:0908.0014 [pdf]
Topological Geometrodynamics: Overview
This book tries to give an overall view about quantum TGD as it stands now. The topics of this book are the following. <OL><LI> Part I: An overall view about the evolution of TGD and about quantum TGD in its recent form. Two visions about physics are discussed at a general level. According to the first vision, physical states of the Universe correspond to classical spinor fields in the world of classical worlds (WCW) identified as 3-surfaces or, equivalently, as the corresponding 4-surfaces analogous to Bohr orbits and identified as special extrema of K&auml;hler action. TGD as a generalized number theory vision, leading naturally also to the emergence of p-adic physics as the physics of cognitive representations, is the second vision. <LI> Part II: The vision about physics as infinite-dimensional configuration space geometry. The basic idea is that classical spinor fields in WCW describe the quantum states of the Universe. Quantum jump remains the only purely quantal aspect of quantum theory in this approach, since there is no quantization at the level of the configuration space. Space-time surfaces correspond to special extremals of the K&auml;hler action analogous to Bohr orbits and define what might be called classical TGD, discussed in the first chapter. The construction of the configuration space geometry and spinor structure is discussed in the remaining chapters. <LI> Part III: Physics as generalized number theory. The number theoretical vision involves three loosely related approaches: the fusion of real and various p-adic physics to a larger whole as algebraic continuations of what might be called rational physics; space-time as a hyper-quaternionic surface of hyper-octonion space; and space-time surfaces as representations of infinite primes. <LI> Part IV: The first chapter summarizes the basic ideas related to von Neumann algebras known as hyper-finite factors of type II<sub>1</sub>, of which the configuration space Clifford algebra represents a canonical example. 
The second chapter is devoted to the basic ideas related to the hierarchy of Planck constants and the related generalization of the notion of imbedding space to a book like structure. <LI> Part V: Physical applications of TGD. Cosmological and astrophysical applications are summarized, and applications to elementary particle physics are discussed at the general level. TGD explains particle families in terms of the generation-genus correspondence (particle families correspond to 2-dimensional topologies labeled by genus). The general theory for particle massivation based on p-adic thermodynamics is discussed at the general level. </OL>
[5761] vixra:0907.0037 [pdf]
The Graviton Background Vs. Dark Energy
In the model of low-energy quantum gravity by the author, cosmological redshifts are caused by interactions of photons with gravitons. Non-head-on collisions with gravitons will lead to an additional relaxation of any photonic flux. This gives a possibility of another interpretation of the supernovae 1a data. Every massive body would be decelerated due to collisions with gravitons, which may be connected with the Pioneer 10 anomaly. This mechanism needs graviton pairing and "an atomic structure" of matter to work. Also, the existence of black holes contradicts the equivalence principle: any black hole should have a gravitational mass much bigger, by about three orders of magnitude, than its inertial one.
[5762] vixra:0907.0036 [pdf]
Gravitational Asymptotic Freedom and Matter Filling of Black Holes
The property of asymptotic freedom of the model of low-energy quantum gravity by the author leads to an unexpected consequence: if a black hole arises due to a collapse of matter with some characteristic mass of particles, its full mass should be bounded from below. For usual baryonic matter, this limit of mass is of the order of 10<sup>7</sup>M<sub>☉</sub>.
[5763] vixra:0907.0030 [pdf]
Signal Photon Flux and Background Noise in a Coupling Electromagnetic Detecting System for High Frequency Gravitational Waves
A coupling system between a Gaussian type microwave photon flux, a static magnetic field and fractal membranes (or other equivalent microwave lenses) can be used to detect high-frequency gravitational waves (HFGWs) in the microwave band. We study the signal photon flux, the background photon flux and the requisite minimal accumulation time of the signal in the coupling system. Unlike the pure inverse Gertsenshtein effect (G-effect) caused by the HFGWs in the GHz band, the electromagnetic (EM) detecting scheme (EDS) proposed by the Chinese and US HFGW groups is based on the composite effect of the synchro-resonance effect and the inverse G-effect. The key parameter in the scheme is the first-order perturbative photon flux (PPF) and not the second-order PPF; the distinguishable signal is the transverse first-order PPF and not the longitudinal PPF; the photon flux focused by the fractal membranes or other equivalent microwave lenses is not only the transverse first-order PPF but the total transverse photon flux, and these photon fluxes have different signal-to-noise ratios at the different receiving surfaces. Theoretical analysis and numerical estimation show that the requisite minimal accumulation time of the signal at the special receiving surfaces and in the background noise fluctuation would be ~ 10<sup>3</sup> - 10<sup>5</sup> seconds for the typical laboratory condition and parameters of h<sub>r.m.s.</sub> ~ 10<sup>-26</sup> - 10<sup>-30</sup>/√Hz at 5GHz with bandwidth ~1Hz. In addition, we review the inverse G-effect in the EM detection of the HFGWs, and it is shown that an EM detecting scheme based only on the pure inverse G-effect in laboratory conditions would not be useful for detecting HFGWs in the microwave band.
[5764] vixra:0907.0023 [pdf]
Properties of the Geometric Phases on Qbits
Since Berry demonstrated that the standard description of adiabatic processes in quantum mechanics was incomplete, geometric phases have been studied in many areas of physics. Both the adiabatic and the non-adiabatic phase are described in detail, with the mathematical background. Then we study the qbit, the principal unit of information in quantum computation, and its representation on the Bloch sphere. Finally we find the general expression for geometric phases for qbits. These final expressions are in fact related to the solid angle enclosed by the circuit on the Bloch sphere.
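The solid-angle relation stated in this abstract can be checked numerically for the simplest case: a qbit transported around a circle of constant latitude on the Bloch sphere acquires a geometric phase equal to minus half the enclosed solid angle. A minimal sketch (the discretized-holonomy formula and all function names are illustrative, not taken from the paper):

```python
import numpy as np

def bloch_state(theta, phi):
    # Spin-1/2 state pointing along direction (theta, phi) on the Bloch sphere
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def berry_phase(theta, n=20000):
    # Discretized holonomy around a circle of constant polar angle theta:
    # gamma = -sum_k arg <psi_k | psi_{k+1}>, which converges to -Omega/2
    phis = np.linspace(0.0, 2 * np.pi, n + 1)
    states = [bloch_state(theta, p) for p in phis]
    total = 0.0
    for a, b in zip(states[:-1], states[1:]):
        total += np.angle(np.vdot(a, b))
    return -total

theta = 1.0
solid_angle = 2 * np.pi * (1 - np.cos(theta))  # spherical cap above the circuit
print(berry_phase(theta), -solid_angle / 2)    # the two values agree
```

The agreement between the discretized product of overlaps and the analytic value -Ω/2 is the content of the abstract's final claim for this special circuit.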
[5765] vixra:0907.0015 [pdf]
Hypothesis of Dark Matter and Dark Energy with Negative Mass
From the 1998 observations of the HSS team and the SCP team, one obtains a negative mass density (HSS: Ω<sub>M</sub> = -0.38(±0.22), SCP: Ω<sub>M</sub> = -0.4(±0.1)), using field equations which do not have the cosmological constant. In their view, the mass could not take a negative value, so the value was discarded. We have to recognize that it was not the field equations that disposed of the value, but our preconceptions. In the world of positive mass, the ground state is the point where energy is lowest, but in the case of negative mass, the ground state is the point where energy is highest. Accordingly, in the world of negative mass, energy levels are filled from the highest to the lowest, and the stable state is the highest energy state, so a catastrophic fall to an energy level of minus infinity never happens even if negative mass spontaneously emits energy. Assuming that negative mass exists, Newton's law of motion was derived between negative and positive masses and also between negative and negative masses. As a method for proving the existence of negative mass, an explanation of the revolution velocity of the galaxy through negative mass has been presented. In this process, a spherical mass distribution was assumed; furthermore, this was used to show observation results where the dark matter effect through negative mass is proportional to the distance r. If Ω<sub>M</sub> is -0.38, the universe's age is 14.225 Gyr. It is in the range estimated by other observations. The universe's radius R is 96.76<sub>-11.44</sub><sup>+12.13</sup> Gly = (85.32 ~ 108.89) Gly. 
Assuming that negative mass and positive mass were born together at the beginning of the universe, the hypothesis addresses the various problems that previous dark matter and dark energy models possess, such as the centripetal force effects of galaxies and galaxy clusters attributed to dark matter, mass effects proportional to the distance r, the repulsive force needed for expansion, dark energy with a positive value, the low interaction between dark matter particles when dark matter collides, the deceleration and acceleration of the universe's expansion, the formation of voids, the inflation mechanism, the fine tuning problem of mass density, the collision of the Bullet cluster, the universe's age, the universe's size, and the reason why dark energy seems to have a small and non-zero value. We also account for the observed dark energy value (10<sup>-47</sup>GeV<sup>4</sup>). As a result, the necessity of observations focusing on exact computation and detection of negative mass is stated.
[5766] vixra:0907.0005 [pdf]
U(1) Axial as a Force Between Neutrinos
We show that when left and right handed neutrinos have a Majorana mass matrix, local gauge invariance produces a fifth force acting between chiral charges on neutrinos and quarks. The force is carried by a massless (or low mass) spin-1 gauge boson, which we call an axiphoton. The force is caused by a U(1) axial gauge symmetry in the same way as the electromagnetic force. We expect from renormalisation that the force constant α<sub>a</sub> is about 1/60 of the electromagnetic force constant α. We show that this force can explain dark energy. Our model predicts decaying right handed neutrinos in the eV-MeV range, which can explain the heating of the solar corona. Finally we show that the Tajmar experiment, detecting a force due to a rotating superconductor, may be a detection of our force.
[5767] vixra:0904.0006 [pdf]
Koide Mass Equations for Hadrons
Koide's mass formula relates the masses of the charged leptons. It is related to the discrete Fourier transform. We analyze bound states of colored particles and show that they come in triplets also related by the discrete Fourier transform. Mutually unbiased bases are used in quantum information theory to generalize the Heisenberg uncertainty principle to finite Hilbert spaces. The simplest complete set of mutually unbiased bases is that of 2 dimensional Hilbert space. This set is compactly described using the Pauli SU(2) spin matrices. We propose that the six mutually unbiased basis states be used to represent the six color states R, G, B, R-bar, G-bar, and B-bar. Interactions between the colors are defined by the transition amplitudes between the corresponding Pauli spin states. We solve this model and show that we obtain two different results depending on the Berry-Pancharatnam (topological) phase that, in turn, depends on whether the states involved are singlets or doublets under SU(2). A postdiction of the lepton masses is not convincing, so we apply the same method to hadron excitations and find that their discrete Fourier transforms follow similar mass relations. We give 39 mass fits for 137 hadrons.
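Koide's relation for the charged leptons, which this abstract takes as its starting point, can be verified directly. A minimal sketch (the lepton masses below are assumed PDG values, quoted for illustration):

```python
import math

# Charged-lepton masses in MeV (assumed PDG values)
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86

masses = (m_e, m_mu, m_tau)
# Koide's ratio Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
Q = sum(masses) / sum(math.sqrt(m) for m in masses) ** 2
print(Q)  # remarkably close to the predicted 2/3
```

The closeness of Q to exactly 2/3 is what motivates looking for analogous discrete-Fourier-transform mass relations among hadron triplets.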
[5768] vixra:0903.0006 [pdf]
Conformal Gravity, Maxwell and Yang-Mills Unification in 4D from a Clifford Gauge Field Theory
A model of Emergent Gravity with the observed Cosmological Constant from a BF-Chern-Simons-Higgs Model is revisited, which allows one to show how a Conformal Gravity, Maxwell and SU(2) x SU(2) x U(1) x U(1) Yang-Mills unification model in four dimensions can be attained from a Clifford Gauge Field Theory in a very natural and geometric fashion.
[5769] vixra:0901.0001 [pdf]
On Dark Energy, Weyl Geometry and Brans-Dicke-Jordan Scalar Field
We review firstly why Weyl's geometry, within the context of Friedman-Lemaitre-Robertson-Walker cosmological models, can account for both the origin and the value of the observed vacuum energy density (dark energy). The source of dark energy is just the dilaton-like Jordan-Brans-Dicke scalar field that is required to implement Weyl invariance of the simplest of all possible actions. A nonvanishing value of the vacuum energy density of the order of 10<sup>-123</sup>M<sup>4</sup><sub>Planck</sub> is derived, in agreement with the experimental observations. Next, a Jordan-Brans-Dicke gravity model within the context of ordinary Riemannian geometry also yields the observed vacuum energy density (cosmological constant) to very high precision. One finds that the temporal flow of the scalar field φ(t) in ordinary Riemannian geometry, from t = 0 to t = t<sub>o</sub>, has the same numerical effects (as far as the vacuum energy density is concerned) as if there were Weyl scalings from the field configuration φ(t) to the constant field configuration φ<sub>o</sub> in Weyl geometry. Hence, Weyl scalings in Weyl geometry can recapture the flow of time in a manner consistent with Segal's Conformal Cosmology, such that an expanding universe may be visualized as Weyl scalings of a static universe. The main novel result of this work is that one is able to reproduce the observed vacuum energy density to such a degree of precision, 10<sup>-123</sup>M<sup>4</sup><sub>Planck</sub>, while still having a Big-Bang singularity at t = 0, when the vacuum energy density blows up. This temporal flow of the vacuum energy density, from very high values in the past to very small values today, is not a numerical coincidence but is the signal of an underlying Weyl geometry (conformal invariance) operating in cosmology, combined with the dynamics of a Brans-Dicke-Jordan scalar field.
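The quoted figure of 10<sup>-123</sup>M<sup>4</sup><sub>Planck</sub> is easy to reproduce as an order-of-magnitude estimate from the critical density. A rough sketch, assuming round values H<sub>0</sub> = 70 km/s/Mpc and Ω<sub>Λ</sub> = 0.7 (not parameters from the paper itself):

```python
import math

# Order-of-magnitude check that the observed vacuum energy density is
# ~1e-123 in Planck units (SI constants, assumed round cosmological values).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34       # reduced Planck constant, J s
c = 2.998e8            # speed of light, m/s
H0 = 70e3 / 3.086e22   # Hubble constant (70 km/s/Mpc) in s^-1
Omega_L = 0.7          # assumed dark energy fraction

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_vac = Omega_L * rho_crit               # vacuum energy density, kg/m^3
rho_planck = c**5 / (hbar * G**2)          # Planck density, kg/m^3

ratio = rho_vac / rho_planck
print(f"{ratio:.1e}")                      # ~1e-123, as the abstract states
```

Any reasonable choice of H<sub>0</sub> and Ω<sub>Λ</sub> lands within an order of magnitude of the same ratio, which is the "cosmological constant problem" number the paper sets out to derive.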
[5770] vixra:0812.0006 [pdf]
Quantum Hall Effect and Hierarchy of Planck Constants
I have earlier proposed an explanation of the FQHE, anyons, and the fractionization of quantum numbers in terms of a hierarchy of Planck constants realized as a generalization of the imbedding space H = M<sup>4</sup> x CP<sub>2</sub> to a book like structure. The book like structure applies separately to CP<sub>2</sub> and to causal diamonds (CD ⊂ M<sup>4</sup>) defined as intersections of future and past directed light-cones. The pages of the Big Book correspond to singular coverings and factor spaces of CD (CP<sub>2</sub>) glued along a 2-D subspace of CD (CP<sub>2</sub>) and are labeled by the values of the Planck constants assignable to CD and CP<sub>2</sub> and appearing in Lie algebra commutation relations. The observed Planck constant h, whose square defines the scale of the M<sup>4</sup> metric, corresponds to the ratio of these Planck constants. The key observation is that a fractional filling factor results if h is scaled up by a rational number. In this chapter I try to formulate this idea more precisely. The outcome is a rather detailed view about anyons on one hand, and about the Kähler structure of the generalized imbedding space on the other hand.
[5771] vixra:0812.0003 [pdf]
New Evidence for Colored Leptons
The recent discovery of the CDF anomaly suggests the existence of a new long-lived particle, which would mean a dramatic deviation from the standard model. This article summarizes a quantum model of the CDF anomaly. The anomaly is interpreted in terms of the production of τ-pions, which can be regarded as pion like bound states of color octet excitations of τ-leptons and the corresponding neutrinos. Colored leptons are one of the basic predictions of TGD distinguishing it from the standard model, and 18 years ago they were applied to explain the anomalous production of electron-positron pairs in heavy ion collisions near the Coulomb wall. First it is shown that the model explains the basic characteristics of the anomaly. Then various alternatives generalizing the earlier model for electro-pion production are discussed and a general formula for the differential cross section is deduced. Three alternatives inspired by the eikonal approximation generalize the earlier model, inspired by the Born approximation, to a perturbation series in the Coulomb interaction potential of the colliding charges. The requirement of manifest relativistic invariance for the formula of the differential cross section leaves only two options, call them I and II. The production cross section for the τ-pion is estimated and found to be consistent with the reported cross section of order 100 pb for option I, using the same energy cutoff for lepto-pions as in the model for electro-pion production. For option II the production cross section is by several orders of magnitude too small under these assumptions. Since the model involves only fundamental coupling constants, the result can be regarded as a further success of the τ-pion model of the CDF anomaly. Analytic expressions for the production amplitude are deduced in the Appendix as a Fourier transform for the inner product of the non-orthogonal magnetic and electric fields of the colliding charges in various kinematical situations. 
This allows one to reduce the numerical integrations to an integral over the phase space of the lepto-pion and gives a tight analytic control over the numerics.
[5772] vixra:0811.0004 [pdf]
Formation of Extrasolar Systems and Moons of Large Planets in Clusters
Two models, the membrane model and the equivalent model, were used for the solution of some of the questions related to the formation of the Solar System. Both models show that the planets create clusters in which lies a higher probability of the origination of large masses. The rings and the belt of asteroids between Mars and Jupiter and the belt of asteroids beyond the orbit of Neptune are the beginnings of these clusters. According to the equivalent model, the Solar System went through a different development than other extrasolar systems. Both models show the wave principle, which is the same for other planetary systems and the systems of moons of large planets [1].
[5773] vixra:0810.0013 [pdf]
Water Memory and the Realization of Genetic Code at Elementary Particle Level
This article represents a speculative model in which a connection between homeopathy and water memory with the phantom DNA effect is proposed, and on the basis of this connection a vision about how the hardware of topological quantum computation (tqc) represented by the genome is actively developed by subjecting it to evolutionary pressures represented by a virtual world representation of the physical environment and internal milieu, including basic bio-molecules. The most important result is the discovery that the analogs of DNA, RNA and aminoacids are realized as dark nuclei, that is, as neutral dark nuclear strings. The vertebrate nuclear genetic code is predicted correctly. The result suggests a deep and totally unexpected connection between elementary particle physics and biology.
[5774] vixra:0810.0012 [pdf]
About the Nature of Time
The identification of the experienced time t<sub>e</sub> and the geometric time t<sub>g</sub> involves well-known problems. The physicist is troubled by the reversibility of t<sub>g</sub> contra the irreversibility of t<sub>e</sub>, by the conflict between the determinism of the Schrödinger equation and the non-determinism of state function reduction, and by the poorly understood origin of the arrow of t<sub>g</sub>. In biology the second law of thermodynamics might be violated in its standard form for short time intervals. The neuroscientist knows that the moment of sensory experience has a finite duration, does not understand what memories really are, and is bothered by Libet's puzzling finding that neural activity seems to precede conscious decision. These problems are discussed in the framework of Topological Geometrodynamics (TGD) and the TGD inspired theory of consciousness constructed as a generalization of quantum measurement theory. In TGD space-times are regarded as 4-dimensional surfaces of the 8-dimensional space-time H = M<sup>4</sup>&times;CP<sub>2</sub> and obey classical field equations. The basic notions of the consciousness theory are quantum jump and self. Subjective time is identified as a sequence of quantum jumps. Self has as a geometric correlate a fixed volume of H, a "causal diamond", defining the perceptive field of the self. Quantum states are regarded as quantum superpositions of space-time surfaces of H and by quantum classical correspondence assumed to shift towards the geometric past of H quantum jump by quantum jump. This creates the illusion that the perceiver moves in the direction of the geometric future. The self is curious about the geometric future and induces the shift bringing it to its perceptive field. Macroscopic quantum coherence and the identification of space-times as surfaces in H play a crucial role in this picture, allowing one to understand also other problematic aspects in the relationship between experienced and geometric time.
[5775] vixra:0810.0010 [pdf]
The Notion of Wave-Genome and DNA as Topological Quantum Computer
Peter Gariaev and collaborators have reported several strange effects of laser light, and also of ordinary light, on DNA. These findings include the rotation of the polarization plane of laser light by DNA, the phantom DNA effect, the transformation of laser light to radio-wave photons having biological effects, the coding of DNA sequences to the modulated polarization plane of laser light and the ability of this kind of light to induce gene expression in other organisms provided the modulated polarization pattern corresponds to an "address" characterizing the organism, and the formation of images of what is believed to be the DNA sample itself and of objects of the environment by a DNA sample in a cell irradiated by ordinary light in the UV-IR range. In this chapter a TGD based model for these effects is discussed. A speculative picture proposing a connection between homeopathy, water memory, and the phantom DNA effect is discussed, and on the basis of this connection a vision about how the tqc hardware represented by the genome is actively developed by subjecting it to evolutionary pressures represented by a virtual world representation of the physical environment. The speculation inspired by this vision is that the genetic code as well as DNA, RNA and amino-acid sequences should have representations in terms of nuclear strings. The model for dark baryons indeed leads to an identification of these analogs, and the basic numbers of the genetic code, including the numbers of aminoacids coded by a given number of codons, are predicted correctly. Hence it seems that the genetic code is universal rather than being an accidental outcome of the biological evolution.
[5776] vixra:0810.0009 [pdf]
A Model for Protein Folding and Bio-catalysis
The model for the evolution of the genetic code leads to the idea that the folding of proteins obeys a folding code inherited from the genetic code. After some trials one ends up with a general conceptualization of the situation, with the identification of wormhole magnetic flux tubes as correlates of attention at the molecular level, so that a direct connection with the TGD inspired theory of consciousness emerges at the quantitative level. This allows a far reaching generalization of the DNA as topological quantum computer paradigm and makes it much more detailed. By their asymmetric character, hydrogen bonds are excellent candidates for magnetic flux tubes serving as correlates of attention at the molecular level.
[5777] vixra:0810.0008 [pdf]
Evolution in Many-Sheeted Space-Time
The topics of this chapter have been restricted to those which seem to represent the most well-established ideas. There are many other, more speculative, ideas, such as the strong form of the hypothesis that plasmoid-like life forms preceding molecular life forms have evolved in "Mother Gaia's womb", maybe even in the hot environment defined by the boundary of mantle and core.
[5778] vixra:0810.0007 [pdf]
DNA as Topological Quantum Computer
This article represents a vision about how DNA might act as a topological quantum computer (tqc). Tqc means that the braidings of braid strands define tqc programs, and the M-matrix (a generalization of the S-matrix in zero energy ontology) defining the entanglement between states assignable to the end points of strands defines the tqc, usually coded as a unitary time evolution for the Schrödinger equation. One ends up with the model in the following manner.
[5779] vixra:0810.0006 [pdf]
TGD Inspired Quantum Model of Living Matter
Basic ideas of the TGD inspired view about quantum biology are discussed. TGD inspired theory of consciousness provides the basic conceptual framework, besides a new view about space-time and quantum theory, in particular the dark matter hierarchy, whose levels are labelled by increasing values of Planck constant, so that macroscopic quantum systems are predicted to be present in all length scales. This gives a justification for the notion of field body having an onion-like fractal structure with astrophysical size and using the biological body as a sensory receptor and motor instrument. Great evolutionary leaps correspond naturally to increases of Planck constant for the highest level in the "personal" hierarchy of Planck constants, implying a scaling up of the time scales of long term memory and planned action. The notion of magnetic body leads to a generalization of the notion of genome: one can assign a coherently expressed super genome to a single organ and a hyper genome to an entire population. Important experimental input comes from high Tc superconductivity, from the strange findings related to the cell membrane, and from the effects of ELF em fields on the vertebrate brain. The model for EEG based on "dark" photons predicts correctly its band structure and also narrow resonances in the theta and beta bands in terms of cyclotron resonance frequencies of biologically important ions.
[5780] vixra:0810.0005 [pdf]
Topological Geometrodynamics: What Might Be the First Principles?
A brief summary of various competing visions about the basic principles of quantum Topological Geometrodynamics (TGD), and about the tensions between them, is given with emphasis on recent developments. These visions are the following. Quantum physics as classical spinor field geometry of the "world of classical worlds" consisting of light-like 3-surfaces of the 8-D imbedding space H = M4xCP2; zero energy ontology in which physical states correspond to physical events; TGD as an almost topological quantum field theory for light-like 3-surfaces; physics as a generalized number theory with associativity defining the fundamental dynamical principle, involving a generalization of the number concept based on the fusion of real and p-adic number fields to a larger book-like structure, the identification of real and various p-adic physics as algebraic completions of rational physics, and the notion of infinite prime; the identification of configuration space Clifford algebra elements as hyper-octonionic conformal fields, with the associativity condition implying what might be called number theoretic compactification; a generalization of quantum theory based on the introduction of a hierarchy of Planck constants, realized geometrically via a generalization of the notion of imbedding space H to a book-like structure with pages which are coverings and orbifolds of H; the notion of finite measurement resolution, realized in terms of inclusions of hyperfinite factors, as the fundamental dynamical principle, implying a generalization of the S-matrix to an M-matrix identified as a Connes tensor product for the positive and negative energy parts of zero energy states; two different kinds of extended super-conformal symmetries, assignable to the light-cone of H and to the light-like 3-surfaces, leading to a concrete construction recipe for the M-matrix in terms of generalized Feynman diagrams having light-like 3-surfaces as lines, and allowing one to formulate generalized Einstein's equations in terms of a coset construction.
[5781] vixra:0808.0004 [pdf]
Black Holes and Quantum Theory: The Fine Structure Constant Connection
The new dynamical theory of space is further confirmed by showing that the effective "black hole" masses M<sub>BH</sub> in 19 spherical star systems, from globular clusters to galaxies, with masses M, satisfy the prediction that M<sub>BH</sub> = (α/2)M, where α is the fine structure constant. As well, the necessary and unique generalisations of the Schrödinger and Dirac equations permit the first derivation of gravity from a deeper theory, showing that gravity is a quantum effect of quantum matter interacting with the dynamical space. As well, the necessary generalisation of Maxwell's equations displays the observed light bending effects. Finally it is shown from the generalised Dirac equation where the spacetime mathematical formalism, and the accompanying geodesic prescription for matter trajectories, comes from. The new theory of space is non-local and we see many parallels between this and quantum theory, in addition to the fine structure constant manifesting in both, so supporting the argument that space is a quantum foam system, as implied by the deeper information-theoretic theory known as Process Physics. The spatial dynamics also provides an explanation for the "dark matter" effect, and as well the non-locality of the dynamics provides a mechanism for generating the uniformity of the universe, so explaining the cosmological horizon problem.
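The stated prediction M<sub>BH</sub> = (α/2)M can be sketched numerically; with α ≈ 1/137 the predicted ratio is about 1/274. The cluster mass below is a hypothetical example value for illustration, not a data point from the paper:

```python
# Numerical sketch of the abstract's stated prediction M_BH = (alpha/2) * M.
# The system mass used here is a hypothetical example, not data from the paper.

ALPHA = 1 / 137.035999  # fine structure constant (CODATA-style value)

def predicted_bh_mass(m_system: float) -> float:
    """Effective 'black hole' mass the relation predicts for a system of mass m_system."""
    return (ALPHA / 2.0) * m_system

m_cluster = 1.0e6  # hypothetical globular-cluster mass, in solar masses
m_bh = predicted_bh_mass(m_cluster)

print(f"M_BH/M = {ALPHA / 2:.6f}  (about 1/274)")
print(f"predicted M_BH = {m_bh:.0f} solar masses")
```

So under this relation any spherical system would carry an effective central mass of roughly 0.36% of its total mass.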
[5782] vixra:0808.0003 [pdf]
3-Space In-Flow Theory of Gravity: Boreholes, Blackholes and the Fine Structure Constant
A theory of 3-space explains the phenomenon of gravity as arising from the time-dependence and inhomogeneity of the differential flow of this 3-space. The emergent theory of gravity has two gravitational constants: G<sub>N</sub>, Newton's constant, and a dimensionless constant α. Various experiments and astronomical observations have shown that α is the fine structure constant ≈ 1/137. Here we analyse the Greenland Ice Shelf and Nevada Test Site borehole g anomalies, and confirm with increased precision this value of α. This and other successful tests of this theory of gravity, including the supermassive black holes in globular clusters and galaxies, and the "dark-matter" effect in spiral galaxies, show the validity of this theory of gravity. This success implies that non-relativistic Newtonian gravity was fundamentally flawed from the beginning, and that this flaw was inherited by the relativistic General Relativity theory of gravity.
[5783] vixra:0807.0012 [pdf]
A Theory of Gravity Based on Quantum Clocks and Moving Space
A theory of gravity based on quantum clocks and moving space is proposed. The theory is based on the hypothesis of the quantum clock equivalence principle (QCEP): it is impossible for a locally isolated observer to distinguish between a red-shift in a moving inertial frame of reference and a red-shift in a reference frame that is at rest in a field of gravity, if the red-shift is all the information he has. This allows us to formulate a time-dilatation measurement based definition of the speed of space in a gravity field. The QCEP is then used to predict the frequency shift of a quantum clock at rest in a g-field, moving in a closed circular orbit, and in free fall. The cosmological and quantum gravitational possibilities of the QCEP hypothesis are briefly mentioned.
[5784] vixra:0807.0011 [pdf]
Biquaternion Formulation of Relativistic Tensor Dynamics
In this paper we show how relativistic tensor dynamics and relativistic electrodynamics can be formulated in a biquaternion tensor language. The treatment is restricted to mathematical physics; known facts such as the Lorentz Force Law and the Lagrange Equation are presented in a relatively new formalism. The goal is to fuse anti-symmetric tensor dynamics, as used for example in relativistic electrodynamics, and symmetric tensor dynamics, as used for example in introductions to general relativity, into one single formalism: a specific kind of biquaternion tensor calculus.
[5785] vixra:0807.0010 [pdf]
A Remark on an Ansatz by M.W. Evans and the so-Called Einstein-Cartan-Evans Unified Field Theory
M.W. Evans tried to relate the electromagnetic field strength to the torsion of a Riemann-Cartan spacetime. We show that this ansatz is untenable for at least two reasons: (i) Geometry: Torsion is related to the (external) translation group and cannot be linked to an internal group, like the U(1) group of electrodynamics. (ii) Electrodynamics: The electromagnetic field strength as a 2-form carries 6 independent components, whereas Evans' electromagnetic construct F<sup>α</sup> is a vector-valued 2-form with 24 independent components. This doesn't match. One of these reasons is already enough to disprove the ansatz of Evans.
[5786] vixra:0807.0001 [pdf]
Dynamical 3-Space: A Review
For some 100 years physics has modelled space and time via the spacetime concept, with space being merely an observer dependent perspective effect of that spacetime - space itself had no observer independent existence - it had no ontological status, and it certainly had no dynamical description. In recent years this has all changed. In 2002 it was discovered that a dynamical 3-space had been detected many times, including in the Michelson-Morley 1887 light-speed anisotropy experiment. Here we review the dynamics of this 3-space, tracing its evolution from that of an emergent phenomenon in the information-theoretic Process Physics to the phenomenological description in terms of a velocity field describing the relative internal motion of the structured 3-space. The new physics of the dynamical 3-space is extensively tested against experimental and astronomical observations, including the necessary generalisation of the Maxwell, Schrödinger and Dirac equations, leading to a derivation and explanation of gravity as a refraction effect of the quantum matter waves. Phenomena now explainable include the borehole anomaly, the systematics of black hole masses, the flat rotation curves of spiral galaxies, gravitational light bending and lensing, and the supernova and Gamma-Ray Burst magnitude-redshift data, for the dynamical 3-space possesses a Hubble expanding 3-space solution. Most importantly, none of these phenomena now require dark matter or dark energy. The flat and curved spacetime formalism is derived from the new physics, so explaining the apparent many successes of those formalisms, which have now proven to be ontologically and experimentally flawed.
[5787] vixra:0805.0002 [pdf]
Distribution of Distances in the Solar System
The recently published application of a diffusion equation to prediction of distances of planets in the solar system has been identified as a two-dimensional Coulomb problem. A different assignment of quantum numbers in the solar system has been proposed. This method has been applied to the moons of Jupiter on rescaling.
[5788] vixra:0804.0008 [pdf]
Quantum and Hadronic Mechanics, the Diffusion and Iso-Diffusion Representations
It is appropriate to start by quoting page xv: "a first meaning of the novel hadronic mechanics is that of providing the first known methods for quantitative studies of the interplay between matter and the underlying substratum. The understanding is that space is the final frontier of human knowledge, with potential outcomes beyond the most vivid science fiction of today". In this almost prophetic observation, Prof. Santilli has pointed to the essential role of the substratum, its geometrical structure and the link with consciousness. In the present article, which we owe to the kind invitation of Prof. Santilli, we shall present similar views, specifically in presenting both quantum and hadronic mechanics as spacetime fluctuations, and we shall discuss the role of the substratum. As for the problem of human knowledge, we shall very briefly indicate how the present approach may be related to the fundamental problem of consciousness, which is that of self-reference.
[5789] vixra:0804.0007 [pdf]
Torsion Fields, Brownian Motions, Quantum and Hadronic Mechanics
We review the relation between space-time geometries with torsion fields (the so-called Riemann-Cartan-Weyl (RCW) geometries) and their associated Brownian motions. In this setting, the metric conjugate of the trace-torsion one-form is the drift vector field of the Brownian motions. Thus, in the present approach, space-time fluctuations as Brownian motions are, in distinction with Nelson's Stochastic Mechanics, space-time structures. Thus, space and time have a fractal structure. We discuss the relations with Nottale's theory of Scale Relativity, which stems from Nelson's approach. We characterize the Schroedinger equation in terms of the RCW geometries and Brownian motions. In this work, the Schroedinger field is a torsion generating field. The potential functions in Schroedinger equations can be alternatively linear or nonlinear in the wave function, leading to nonlinear and linear creation-annihilation of particles by diffusion systems.
[5790] vixra:0804.0004 [pdf]
Quantum and Hadronic Mechanics, the Diffusion and Iso-Heisenberg Representations
It is appropriate to start by quoting Prof. Santilli: "a first meaning of the novel hadronic mechanics is that of providing the first known methods for quantitative studies of the interplay between matter and the underlying substratum. The understanding is that space is the final frontier of human knowledge, with potential outcomes beyond the most vivid science fiction of today". In this almost prophetic observation, Prof. Santilli has pointed to the essential role of the substratum, its geometrical structure and the link with consciousness. In the present article, which we owe to the kind invitation of Prof. Santilli, we shall present similar views, specifically in presenting both quantum and hadronic mechanics as space-time fluctuations, and we shall discuss the role of the substratum. As for the problem of human knowledge, we shall very briefly indicate how the present approach may be related to the fundamental problem of consciousness, which is that of self-reference.
[5791] vixra:0802.0002 [pdf]
An Objection to Copenhagen Interpretation and an Explanation of the Two-Slit Experiment from the Viewpoint of Waviness.
We suggest that the electron is a wave in the whole process between the electron gun and the sensor. Between the two-slit and the sensor, the following two phenomena happen to the waves: interference and Fraunhofer diffraction. Due to these two phenomena, a considerably sharp shape of wave is finally made in front of the sensor, and a bright spot appears on the sensor. The experimental result that a bright spot appears at random can be explained by the above-mentioned two phenomena and the "fluctuation" of the potential energy that the filament of the biprism makes. All are wave motion phenomena, and, put simply, the particle called an electron does not exist.
[5792] vixra:0712.0004 [pdf]
A Thermodynamical Approach to a Ten Dimensional Inflationary Universe
The inflationary phase of the evolution of the ten dimensional universe is considered. The form of the stress-energy tensor of the matter in the very early universe is determined by making use of some thermodynamical arguments. In this way, the Einstein field equations are written and some inflationary cosmological solution is found to these equations in which, while the actual dimensions are exponentially expanding, the others are contracting.
[5793] vixra:0712.0001 [pdf]
Born's Reciprocal General Relativity Theory and Complex Nonabelian Gravity as Gauge Theory of the Quaplectic Group: A Novel Path to Quantum Gravity
Born's Reciprocal Relativity in flat spacetimes is based on the principle of a maximal speed limit (speed of light) and a maximal proper force (which is also compatible with a maximal and minimal length duality), where coordinates and momenta are unified on a single footing. We extend Born's theory to the case of curved spacetimes and construct a Reciprocal General Relativity theory (in curved spacetimes) as a local Gauge Theory of the Quaplectic Group Q(1, 3), given by the semidirect product of U(1, 3) with the Nonabelian Weyl-Heisenberg group H(1, 3). The gauge theory has the same structure as that of Complex Nonabelian Gravity. Actions are presented, and it is argued why such actions, based on Born's Reciprocal Relativity principle involving a maximal speed limit and a maximal proper force, constitute a very promising avenue to Quantize Gravity that does not rely on breaking the Lorentz symmetry at the Planck scale, in contrast to other approaches based on deformations of the Poincare algebra and Quantum Groups. It is discussed how one could embed the Quaplectic gauge theory into one based on the U(1, 4), U(2, 3) groups, where the observed cosmological constant emerges in a natural way. We conclude with a brief discussion of Complex coordinates and of Finsler spaces with symmetric and nonsymmetric metrics, studied by Eisenhart, as relevant closed-string target space backgrounds where Born's principle may be operating.
[5794] vixra:0707.0001 [pdf]
4D Quantum Gravity via W_infinity Gauge Theories in 2D, Collective Fields and Matrix Models
It is shown how Quantum Gravity in D = 3 can be described by a W_infinity Matrix Model in D = 1 that can be solved exactly via the Collective Field Theory method. A quantization of 4D Gravity can be attained via a 2D Quantum W_infinity gauge theory coupled to an infinite-component scalar multiplet; i.e. the quantization of Einstein Gravity in 4D admits a reformulation in terms of a 2D Quantum W_infinity gauge theory coupled to an infinite family of scalar fields. Since higher-spin W_infinity symmetries are very relevant in the study of 2D W_infinity Gravity, the Quantum Hall effect, large N QCD, strings, membranes, topological QFT, gravitational instantons, Noncommutative 4D Gravity, Modular Matrix Models and the Monster group, it is warranted to explore further the interplay among all these theories.
[5795] vixra:0706.0001 [pdf]
Quantum Astrophysics
The vision that the quantum dynamics for dark matter is behind the formation of the visible structures suggests that the formation of the astrophysical structures could be understood as a consequence of Bohr rules.
[5796] vixra:0704.0004 [pdf]
Langlands Program and TGD
The number theoretic Langlands program can be seen as an attempt to unify number theory on one hand and the theory of representations of reductive Lie groups on the other hand. So called automorphic functions, to which various zeta functions are closely related, define the common denominator. The geometric Langlands program tries to achieve a similar conceptual unification in the case of function fields. This program has caught the interest of physicists during the last few years.
[5797] vixra:0704.0002 [pdf]
Nuclear String Hypothesis
The nuclear string hypothesis is one of the most dramatic almost-predictions of TGD. The hypothesis in its original form assumes that nucleons inside the nucleus form closed nuclear strings, with neighboring nucleons of the string connected by exotic meson bonds consisting of a color magnetic flux tube with a quark and an anti-quark at its ends. The lengths of the flux tubes correspond to the p-adic length scale of the electron, and therefore the mass scale of the exotic mesons is around 1 MeV, in accordance with the general scale of nuclear binding energies. The long lengths of the em flux tubes increase the distance between nucleons and reduce the Coulomb repulsion. A fractally scaled up variant of ordinary QCD with respect to the p-adic length scale would be in question, and the usual wisdom about ordinary pions and other mesons as the origin of the nuclear force would be simply wrong in the TGD framework, as the large mass scale of the ordinary pion indeed suggests.
[5798] vixra:0703.0053 [pdf]
A Hidden Dimension, Clifford Algebra, and Centauro Events
This paper fleshes out the arguments given in a 20 minute talk at the Phenomenology 2005 meeting at the University of Wisconsin at Madison, Wisconsin on Monday, May 2, 2005. The argument goes as follows: A hidden dimension is useful for explaining the phase velocity of quantum waves. The hidden dimension corresponds to the proper time parameter of standard relativity. This theory has been developed into a full gravitational theory, "Euclidean Relativity", by other authors. Euclidean relativity matches the results of Einstein's gravitation theory. This article outlines a compatible theory for elementary particles. The massless Dirac equation can be generalized from an equation of matrix operators operating on vectors to an equation of matrix operators operating on matrices. This allows the Dirac equation to model four particles simultaneously. We then examine the natural quantum numbers of the gamma matrices of the Dirac equation, and generalize this result to arbitrary complexified Clifford algebras. Fitting this "spectral decomposition" to the usual elementary particles, we find that one hidden dimension is needed, as was similarly needed by Euclidean relativity, and that we need a set of eight subparticles to make up the elementary fermions. These elementary particles will be called "binons", and each comes in three possible subcolors. The details of the binding force between binons will be given in a paper associated with a talk by the author at the APSNW 2005 meeting at the University of Victoria, at British Columbia, Canada on May 15, 2005. After an abbreviated introduction, this paper will concentrate on the phenomenological aspects of the binons, particularly as applied to Centauro type cosmic rays and gamma-ray bursts.
[5799] vixra:0703.0050 [pdf]
Chern-Simons (Super) Gravity and E8 Yang-Mills from a Clifford Algebra Gauge Theory
It is shown why the E8 Yang-Mills can be constructed from a Cl(16) algebra Gauge Theory and why the 11D Chern-Simons (Super) Gravity theory is a very small sector of a more fundamental theory based on a Cl(11) algebra Gauge theory. These results may shed some light into the origins behind the hidden E8 symmetry of 11D Supergravity and reveal more important features of a Clifford-algebraic structure underlying M, F theory.
[5800] vixra:0703.0049 [pdf]
Noncommutative Branes in Clifford-Space Backgrounds and Moyal-Yang Star Products with uv-ir Cutoffs
A novel Moyal-Yang star product deformation of generalized p-brane actions in Clifford-space target backgrounds involving multivectors ( polyvectors, antisymmetric tensors ) valued coordinates is constructed based on the novel Moyal-Yang star product deformations of Generalized-Yang-Mills theories. ...
[5801] vixra:0703.0019 [pdf]
About Correspondence Between Infinite Primes, Space-time Surfaces, and Configuration Space Spinor Fields
The idea that the configuration space CH of 3-surfaces, "the world of classical worlds", could be realized in terms of number theoretic anatomies of a single space-time point, using the real units formed from infinite rationals, is very attractive.
[5802] vixra:0703.0012 [pdf]
Weyl�s Geometry Solves the Riddle of Dark Energy
We rigorously prove why the proper use of Weyl's Geometry within the context of Friedman-Lemaitre-Robertson-Walker cosmological models can account for both the origins and the value of the observed vacuum energy density (dark energy)...
[5803] vixra:0703.0011 [pdf]
The Total Differential Integral of Calculus
I deduce a series which satisfies the fundamental theorem of calculus without dependence on an explicit function. I prove Taylor's theorem and show that it is closely related. I deduce a series for the logarithm function and from this series deduce the power series representation of the logarithm function along with the interval of convergence. I also solve an ordinary differential equation.
[5804] vixra:0703.0009 [pdf]
Viscous and Magneto Fluid-Dynamics, Torsion Fields, and Brownian Motions Representations on Compact Manifolds and the Random Symplectic Invariants
We reintroduce the Riemann-Cartan-Weyl geometries with trace torsion and their associated Brownian motions on spacetime, and extend them to Brownian motions on the tangent bundle and exterior powers of them. We characterize the diffusion of differential forms, for the case of manifolds without boundaries and the smooth boundary case. We present implicit representations for the Navier-Stokes equations (NS) for an incompressible fluid in a smooth compact manifold without boundary, as well as for the kinematic dynamo equation (KDE, for short) of magnetohydrodynamics. We derive these representations from stochastic differential geometry, unifying gauge theoretical structures and the stochastic analysis on manifolds (the Ito-Elworthy formula for differential forms). From the diffeomorphism property of the random flow given by the scalar Lagrangian representations for the viscous and magnetized fluids, we derive the representations for NS and KDE, using the generalized Hamilton and Ricci random flows (for arbitrary compact manifolds without boundary) and the gradient diffusion processes (for isometric immersions in Euclidean space of these manifolds). We solve these equations implicitly in 2D and 3D. Continuing with this method, we prove that NS and KDE, in any dimension other than 1, can be represented as purely (geometrical) noise processes, with a diffusion tensor depending on the fluid's velocity, and we represent the solutions of NS and KDE in terms of these processes. We discuss the relations between these representations and the problem of infinite-time existence of solutions of NS and KDE. We finally discuss the relations between this approach and the low dimensional chaotic dynamics describing the asymptotic regime of the solutions of NS. We present the random symplectic theory for the Brownian motions generated by these Riemann-Cartan-Weyl geometries, and the associated random Poincare-Cartan invariants.
We apply this to the Navier-Stokes and kinematic dynamo equations. In the case of 2D and 3D, we solve the Hamiltonian equations.
[5805] vixra:0703.0008 [pdf]
Torsion Fields, The Quantum Potential, Cartan-Weyl Space-Time and State-space Geometries and their Brownian Motions
We review the relation between space-time geometries with torsion fields (the so-called Riemann-Cartan-Weyl (RCW) geometries) and their associated Brownian motions. In this setting, the metric conjugate of the trace-torsion one-form is the drift vector field of the Brownian motions. Thus, in the present approach, Brownian motions, in distinction with Nelson's Stochastic Mechanics, are spacetime structures. We extend this to the state-space of non-relativistic quantum mechanics and discuss the relation with a noncanonical quantum RCW geometry in state-space associated with the gradient of the quantum-mechanical expectation value of a self-adjoint operator given by the generalized Laplacian operator defined by a RCW geometry. We discuss the reduction of the wave function in terms of a RCW quantum geometry in state-space. We characterize the Schroedinger equation for both observed and unobserved quantum systems in terms of the RCW geometries and Brownian motions. Thus, in this work, the Schroedinger field is a torsion generating field, and the U and R processes, in the sense of Penrose, are associated, the former with spacetime geometries and their associated Brownian motions, and the latter with their extension to the state-space of nonrelativistic quantum mechanics given by the projective Hilbert space. In this setting, the Schroedinger equation can be either linear or nonlinear. We discuss the problem of the many-times variables and the relation with dissipative processes. We present, as an additional example of RCW geometries and their Brownian motion counterparts, the dynamics of viscous fluids obeying the invariant Navier-Stokes equations. We introduce in the present setting an extension of R. Kiehn's approach to dynamical systems, starting from the notion of the topological dimension of one-forms, to apply it to the trace-torsion one-form whose metric conjugate is the Brownian motion's drift vector field, and discuss the topological notion of turbulence.
We discuss the relation between our setting and the Nottale theory of Scale Relativity, and the work of Castro and Mahecha in this volume in nonlinear quantum mechanics, Weyl geometries and the quantum potential.
[5806] vixra:0703.0006 [pdf]
On Geometric Probability, Holography, Shilov Boundaries and the Four Physical Coupling Constants of Nature
By recurring to Geometric Probability methods, it is shown that the coupling constants α<sub>EM</sub>, α<sub>W</sub>, α<sub>C</sub> associated with Electromagnetism, the Weak and the Strong (color) force are given by the ratios of the ratios of the measures of the Shilov boundaries...
[5807] vixra:0703.0005 [pdf]
Martingale Problem Approach to the Representations of the Navier-Stokes Equations on Smooth-Boundary Manifolds and Semispace
We present the random representations for the Navier-Stokes vorticity equations for an incompressible fluid in a smooth manifold with smooth boundary and reflecting boundary conditions for the vorticity.
[5808] vixra:0703.0004 [pdf]
Running Newtonian Coupling and Horizonless Solutions in Quantum Einstein Gravity
It is shown how the exact Nonperturbative Renormalization Group flow of the running Newtonian coupling G(r) in Quantum Einstein Gravity is consistent with the existence of an ultra-violet cutoff...
[5809] vixra:0703.0001 [pdf]
On (Anti) de-Sitter-Schwarzschild Metrics, the Cosmological Constant and Dirac-Eddington's Large Numbers
A class of proper generalizations of the (Anti) de Sitter solutions are presented that could provide a very plausible resolution of the cosmological constant problem along with a natural explanation of the ultraviolet/infrared ( UV/IR) entanglement required to solve this problem. A nonvanishing value of the vacuum energy density of the order of 10^-121 M^4_Planck is derived in perfect agreement with the experimental observations. Exact solutions of the cubic equations associated with the location of the horizons of this class of ( Anti ) de Sitter-Schwarzschild metrics are found.
[5810] vixra:0702.0059 [pdf]
Isospin Doctoring
In the standard model, isospin is not defined for all elementary particles nor is it conserved in all interactions. A study of the isospin subalgebra in the author's U(3, 2) theory of matter shows that the standard model assigned the wrong isospin values to many elementary particles. The redefined isospin is defined for all particles and is conserved in all interactions. This leads to a new interpretation of the isospin algebra as a model of pion exchange between protons in the nucleus.
[5811] vixra:0702.0055 [pdf]
Towards an Einsteinian Quantum Theory
A theory of quantum mechanics in terms of a quantized spacetime shows that Einstein was correct in his debate with Bohr. The conflict between the axioms of quantum field theory and the axioms of general relativity may be resolved by modifying both and equating quantum field theory with harmonic analysis on the complex space-time QAdS = U(3, 2)/U(3, 1)xU(1). This is consistent with the geometry of particle interactions introduced in Love.
[5812] vixra:0702.0053 [pdf]
Elementary Particles as Oscillations in Anti-de Sitter Space-Time
Using the spinor differential operator representation of U(3, 2) to explore the hidden symmetries of the complex space-time U(3, 2)/U(3, 1)x U(1) leads to an interpretation of this complex space-time as excited states of Anti-de Sitter space-time. This in turn leads to a new Lie Algebraic Quantum Field Theory and a mathematical model of the internal structure of elementary particles as oscillations of complex space-time. This is a quantum theory of gravity which satisfies Einstein's criteria for a unified field theory.
[5813] vixra:0702.0052 [pdf]
Koide Mass Formula for Neutrinos
Since 1982 the Koide mass relation has provided an amazingly accurate relation between the masses of the charged leptons. In this note we show how the Koide relation can be expanded to cover the neutrinos, and we use the relation to predict neutrino masses.
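For context, the charged-lepton relation referred to here (the note's contribution is its extension to the neutrinos) is, in its standard form:

```latex
% Koide's relation for the charged leptons
\frac{m_e + m_\mu + m_\tau}{\left(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\right)^{2}} \;=\; \frac{2}{3}
```

which the measured charged-lepton masses satisfy to high precision.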
[5814] vixra:0702.0047 [pdf]
The Triality of Electromagnetic-Condensational Waves in a Gas-Like Ether
In a gas-like ether, the duality between the oscillating electric and magnetic fields, which are transverse to the direction of propagation of electromagnetic waves, becomes a triality with the longitudinal oscillations of motion of the ether, if electric field, magnetic field and motion are coexistent and mutually perpendicular. It must be shown, therefore, that if electromagnetic waves comprise also longitudinal condensational oscillations of a gas-like ether, analogous to sound waves in a material gas, then all three aspects of such waves must propagate together along identical wave-fronts. To this end, the full characteristic hyperconoids are derived for the equations governing the motion and the electric and magnetic field-strengths in a gas-like ether, in three space variables and time. It is shown that they are, in fact, identical. The equations governing the motion and the electric and magnetic field-strengths in such an ether, and their common characteristic hyperconoid, are all invariant under Galilean transformation.
[5815] vixra:0702.0045 [pdf]
The Refraction of Light in Stationary and Moving Refractive Media
A new theory of the refraction of light is presented, using the mathematical fact that the equations of acoustics and optics are identical and that light may therefore be treated as waves in a fluid ether. Light waves are penetrated by the more slowly moving constituents of a refractive medium and so the rays behind them are perturbed and made wavy as they are diffracted around material particles. The arc-length along a wavy ray is thus increased by a factor...
[5816] vixra:0702.0044 [pdf]
Real or Imaginary Space-Time? Reality or Relativity?
The real space-time of Newtonian mechanics and the ether concept is contrasted with the imaginary space-time of the non-ether concept and relativity. In real space-time (x, y, z, ct) characteristic theory shows that Maxwell's equations and sound waves in any uniform fluid at rest have identical wave surfaces. Moreover, without charge or current, Maxwell's equations reduce to the same standard wave equation which governs such sound waves. This is not a general and invariant equation but it becomes so by Galilean transformation to any other reference-frame. So also do Maxwell's equations which are, likewise, not general but unique to one reference-frame. The mistake of believing that Maxwell's equations were invariant led to the Lorentz transformation and to relativity; and to the misinterpretation of the differential equation for the wave cone through any point as the quadratic differential form of a Riemannian metric in imaginary space-time (x, y, z, ict). Mathematics is then required to tolerate the same equation being transformed in different ways for different applications. Otherwise, relativity is untenable and recourse must then be made to real space-time, normal Galilean transformation and an ether with Maxwellian statistics and Planck's energy distribution.
[5817] vixra:0702.0043 [pdf]
Real and Apparent Invariants in the Transformation of the Equations Governing Wave-Motion in the General Flow of a General Fluid
The ten equations are derived that govern, to the first order, the propagation of small general perturbations in the general unsteady flow of a general fluid, in three space variables and time. The condition that any hypersurface is a wave hypersurface of these equations is obtained, and the envelope of all such wave hypersurfaces that pass through a given point at a given time, i.e. the wave hyperconoid, is determined. These results, which are all invariant under Galilean transformation, are progressively specialized, through homentropic flow and irrotational homentropic flow, to steady uniform flow, for which both the convected wave equation and the standard wave equation, with their wave hypersurfaces, are finally recovered. A special class of reference-frames is considered, namely those whose origins move with the fluid. It is then shown that, for observers at the origins of all such reference frames, the wave hypersurfaces satisfy specially simple equations locally. These equations are identical with those for waves in a uniform fluid at rest relative to the reference frame, except that the wave speed is not constant but varies with position and time in accordance with the variable mean flow. These specially simple equations appear to be invariant for Galilean transformations between all such observers. These results are briefly applied, in reverse order, to Maxwell's equations, and to equations more general than Maxwell's, for the electric and magnetic field-strengths.
[5818] vixra:0702.0040 [pdf]
The Kinetic Theory of Electromagnetic Radiation
It is shown that Planck's energy distribution for a black-body radiation field can be simply derived for a gas-like ether with Maxwellian statistics. The gas consists of an infinite variety of particles, whose masses are integral multiples n of the mass of the unit particle, the abundance of n-particles being proportional to...
[5819] vixra:0702.0038 [pdf]
The Foundations of Relativity
Maxwell's equations were, and still are, derived for a uniform stationary ether and are not, therefore, the general equations of electromagnetism. The true general equations, for an ether in general motion, have been derived and given in the literature for many years but are continually ignored. Here, a further attempt is made to bring home irrefutably the mathematics which negates the concepts of no-ether and non-Newtonian relativity. Alternative derivations of the general equations of electromagnetism are given in the simplest possible terms, from basic principles. It is shown that the mathematical techniques required are exactly the same as those which were used to derive the general equations of fluid motion, long before the advent of Maxwell's equations.
[5820] vixra:0702.0036 [pdf]
The New Aspects of General Relativity
One might think that the General Theory of Relativity is a fossilized science, all of whose achievements were reached decades ago. In one respect this is true: the mathematical apparatus of Riemannian geometry, the basis of the theory, remains unchanged. At the same time, the mathematical techniques take many forms: general covariant methods, the tetrad method, etc. By developing these techniques we can open up possibilities in theoretical physics unknown before.
[5821] vixra:0702.0031 [pdf]
Anomalous Spacetimes
The usual interpretations of solutions for Einstein's gravitational field satisfying the static vacuum conditions contain anomalies that are not mathematically permissible. It is shown herein that the usual solutions must be modified to account for the intrinsic geometry associated with the relevant line-elements.
[5822] vixra:0702.0030 [pdf]
Relativistic Cosmology Revisited
In a previous paper the writer treated of particular classes of cosmological solutions for certain Einstein spaces and claimed that no such solutions exist in relation thereto. In that paper the assumption that the proper radius is zero when the line-element is singular was generally applied. This general assumption is unjustified and must be dropped. Consequently, solutions do exist in relation to the aforementioned types, and are explored herein. The concept of the Big Bang cosmology is found to be inconsistent with General Relativity.
[5823] vixra:0702.0026 [pdf]
Unification of Four Approaches to the Genetic Code
A proposal unifying four approaches to the genetic code is discussed. The first approach is introduced by Pitkanen and is geometric: the genetic code is interpreted as an imbedding of the amino acid space into DNA space possessing a fiber bundle like structure, with the DNAs coding for a given amino acid forming a discrete fiber with a varying number of points. Khrennikov has also proposed an analogous approach based on the identification of the DNAs coding for a given amino acid as an orbit of a discrete flow defined by iteration of a map of DNA space to itself.
[5824] vixra:0702.0012 [pdf]
Three Solar System Anomalies Indicating the Presence of Macroscopically Quantum Coherent Dark Matter in Solar System
Three anomalies associated with the solar system, namely the Pioneer anomaly [3], the evidence for shrinking of planetary orbits [7, 8, 9], and the flyby anomaly [4], are discussed. The first anomaly is explained by a universal 1/r distribution of dark matter; the second finds a trivial explanation in the TGD based quantum model for planetary orbits as Bohr orbits, with Bohr quantization reflecting the macroscopically quantum coherent character of dark matter with a gigantic value of Planck's constant [11]. The flyby anomaly can be understood if planetary orbits are surrounded by a flux tube containing quantum coherent dark matter. Spherical shells can also be considered.
[5825] vixra:0702.0003 [pdf]
Nonlinear Classical Fields
We regard a classical field as a medium. An additional parameter then appears: the local four-velocity vector of the field. If this vector is itself regarded as a potential of the same field, then all of the field's self-energies become finite. As examples, electromagnetic, mechanical, pionic, and, to some extent, gluonic fields are considered.
[5826] vixra:0702.0002 [pdf]
Could Q-Laguerre Equation Explain the Claimed Fractionation of the Principal Quantum Number for Hydrogen Atom?
In [G2] a semiclassical model based on dark matter and a hierarchy of Planck constants is developed for the fractionized principal quantum number n claimed by Mills [1] to have at least the values n =