Quantitative Biology
[1] vixra:2203.0087 [pdf]
Frequentist and Bayesian Analysis Methods for Case Series Data and Application to Early Outpatient COVID-19 Treatment Case Series of High-Risk Patients
When confronted with a public health emergency, significant innovative treatment protocols based on repurposed medications can sometimes be discovered by medical doctors at the front lines. We propose a very simple hybrid statistical framework for analyzing case series of patients treated with such new protocols, one that enables a comparison with our prior knowledge of expected outcomes in the absence of treatment. The goal of the proposed methodology is not to provide a precise measurement of treatment efficacy, but to establish the existence of treatment efficacy, in order to facilitate the binary decision of whether the treatment protocol should be adopted on an emergency basis. The methodology consists of a frequentist component that compares a treatment group against the probability of an adverse outcome in the absence of treatment, and calculates an efficacy threshold that this probability has to exceed in order to control the corresponding $p$-value and reject the null hypothesis. The efficacy threshold is further adjusted with a Bayesian technique in order to also control the false positive rate. A selection bias threshold is then calculated from the efficacy threshold to control for random selection bias. Exceeding the efficacy threshold establishes efficacy by the preponderance of evidence, and exceeding the more demanding selection bias threshold establishes efficacy by the clear and convincing evidentiary standard. The combined techniques are applied to case series of high-risk COVID-19 outpatients treated with the early Zelenko protocol and the more enhanced McCullough protocol. The resulting efficacy thresholds are then compared against our prior knowledge of mortality and hospitalization rates of untreated high-risk COVID-19 patients, as reported in the research literature.
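The core of the frequentist component — testing an observed case series against an assumed untreated event probability, and solving for the threshold that probability must exceed — can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact procedure; the function names, the one-sided exact binomial test, and the 0.05 level are assumptions.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p): the chance of seeing at most
    k adverse outcomes among n patients if each occurs with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def efficacy_threshold(n, k, alpha=0.05, tol=1e-6):
    """Smallest untreated event probability p0 that the observed series
    (k events among n treated patients) rejects at one-sided level alpha,
    found by bisection (binom_tail is decreasing in p)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_tail(n, k, mid) <= alpha:
            hi = mid
        else:
            lo = mid
    return hi

# Zero deaths among 100 treated patients: any untreated mortality
# rate above roughly 3% would be rejected at the 0.05 level.
p_threshold = efficacy_threshold(100, 0)
```

In the paper's terms, such a value would then be tightened further by the Bayesian adjustment and the selection bias threshold before comparison with published untreated rates.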
[2] vixra:2108.0156 [pdf]
The Effect of Artificial Amalgamates on Identifying Pathogenesis
The purpose of this research was to accelerate diagnostic procedures for airborne diseases. Airborne pathogens can be troublesome to diagnose due to intrinsic variation and overlapping symptoms; Coronavirus testing was an instance of a flawed diagnostic biomarker. The levels of the independent variable (IV) were vanilla, sparse, and dense amalgamates formed from multilayer perceptrons and image-processing algorithms. The dependent variable (DV) was the classification accuracy. It was hypothesized that if a dense amalgamate were trained to identify Coronavirus, its accuracy would be the highest. The amalgamates were trained to analyze morphological patches within radiologist-verified medical imaging retrieved from online databanks. Using cross-validation simulations, the DV was measured for each amalgamate. Self-calculated t-tests supported the research hypothesis, with the dense amalgamate achieving an 85.37% correct classification rate; the null hypothesis was rejected. Flaws within the databanks were possible sources of error. A new multivariate algorithm invented here performed better than the IV, identifying Coronavirus and other airborne diseases with 96-99% accuracy. The model was also adept at identifying the heterogeneity and malignancy of lung cancer, as well as differentiating viral from bacterial infections. Future modifications would involve extending the algorithm to diseases in other anatomical structures, such as osteopenia/osteoporosis in the vertebral column.
[3] vixra:2103.0147 [pdf]
The Equations of Life
This study will first define the "equation of life" via the principle of least action. Then the paper will show how this "equation of life" can be used to derive smaller equations, involving transcription and translation, for [computer] modeling and simulation of a cell. The conclusion will provide a terse description of its uses in the realm of Systems Biology.
[4] vixra:2102.0138 [pdf]
Asymptotic Analysis of the SIR Model: Applications to COVID-19 Modelling
The SIR (Susceptible-Infected-Removed) model can be very useful in modelling epidemic outbreaks. The present paper derives the parametric solution of the model in terms of quadratures. The paper demonstrates a simple analytical asymptotic solution for the I-variable, which is valid on the entire real line. Moreover, the solution can be used successfully for parametric estimation either in stand-alone mode or as a preliminary step in the parametric estimation using numerical inversion of the parametric solution. The approach is applied to the ongoing coronavirus disease 2019 (COVID-19) pandemic in three European countries -- Belgium, Italy and Sweden.
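For readers who want to experiment, the underlying SIR dynamics are easy to simulate numerically. The sketch below integrates the standard equations with explicit Euler steps; it is not the paper's parametric or asymptotic solution, and the parameter values are illustrative.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One explicit-Euler step of dS/dt = -beta*S*I,
    dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I (population fractions)."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def simulate(beta=0.3, gamma=0.1, i0=1e-3, days=200, dt=0.1):
    """Run the epidemic forward and track the peak prevalence."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return s, i, r, peak

# With beta/gamma = 3, roughly 94% of the population is eventually
# infected and prevalence peaks near 30%.
final_s, final_i, final_r, peak_i = simulate()
```

The analytic asymptotics described in the paper can be checked against such a numerical run for the I-variable.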
[5] vixra:2011.0027 [pdf]
Comments on "Analytical Features of the SIR Model and Their Applications to COVID-19"
In their article, Kudryashov et al. (2021) attempt to establish the analytical solution of the SIR epidemiological model. One of the equations given there is wrong, which invalidates the presented solution derived from it. The objective of the present letter is to point out this error and to present the correct analytical solution of the SIR epidemiological model.
[6] vixra:2004.0089 [pdf]
Lower Bound for the Number of Asymptomatics Among Those Infected by COVID-19
We propose a method for estimating the number of asymptomatics in a COVID-19 outbreak. The method gives only a lower bound for the true number.
[7] vixra:2003.0554 [pdf]
Early Evaluation and Effectiveness of Social Distancing Measures for Controlling COVID-19 Outbreaks
Based on real data, we study the effectiveness of COVID-19 social distancing measures and propose a method for their early evaluation. Version v2 posted on 26/03/20. Version v3 posted on 26/04/20. In version v3, sections 7 and 8 have been added, leaving the previous sections unchanged.
[8] vixra:1906.0245 [pdf]
The Contributions of the Gallo Team and the Montagnier Team to the Discovery of the AIDS Virus
In this paper I review the main works of the teams headed by Robert Gallo and Luc Montagnier which led to the discovery of the HIV retrovirus and to the blood test with which one can prove HIV infection. I show that this discovery, which saved millions of human lives (and perhaps the survival of mankind), was made possible only (i) because Gallo's team discovered the T-cell lymphocyte growth factor, with which they were able to discover the first retrovirus that infects humans (HTLV-I), and because of their hypothesis that AIDS is caused by a retrovirus, and (ii) because Montagnier's team employed an antibody against alpha interferon to enhance retrovirus production, with which they were able to discover the HIV retrovirus, and because of their examination and blood test that gave evidence that HIV causes AIDS. Their examination was improved by the Gallo team, who proved beyond doubt that HIV is the cause of AIDS. I leave open the question whether Gallo deserved the Nobel Prize or whether the Nobel committee's decision to award the prize only to Montagnier and Barre-Sinoussi was correct.
[9] vixra:1905.0006 [pdf]
Arguments that Prehistorical and Modern Humans Belong to the Same Species
I argue that the evidence of the Out-of-Africa hypothesis and the evidence of multiregional evolution of prehistorical humans can be understood if there has been interbreeding between Homo erectus, Homo neanderthalensis, and Homo sapiens at least during the preceding 700,000 years. These interbreedings require descendants who are capable of reproduction and therefore parents who belong to the same species. I suggest that a number of prehistorical humans who are at present regarded as belonging to different species belong in fact to one single species.
[10] vixra:1811.0352 [pdf]
Failure of Complex Systems, Cascading Disasters, and the Onset of Disease
Complex systems can fail through different routes, often progressing through a series of (rate-limiting) steps and modified by environmental exposures. The onset of disease, cancer in particular, is no different. A simple but very general mathematical framework is described for studying the failure of complex systems, or equivalently, the onset of disease. It includes the Armitage-Doll multi-stage cancer model as a particular case, and has potential to provide new insights into how diseases arise and progress. A method described by E.T. Jaynes is developed to provide an analytical solution for the models, and highlights connections between the convolution of Laplace transforms, sums of random samples, and Schwinger/Feynman parameterisations. Examples include: exact solutions to the Armitage-Doll model, the sum of Gamma-distributed variables with integer-valued shape parameters, a clonal-growth cancer model, and a model for cascading disasters. The approach is sufficiently general to be used in many contexts, such as engineering, project management, disease progression, and disaster risk, allowing the estimation of failure rates in complex systems and projects. The intended result is a mathematical toolkit for the study of failure rates in complex systems and the onset of disease, cancer in particular.
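One of the listed examples — that a sum of independent Gamma variables with a common scale is again Gamma, with the shape parameters added — is easy to check numerically. The sketch below is a Monte Carlo illustration of that closure property, not the paper's Laplace-transform derivation; the shape and scale values are illustrative.

```python
import random

def sample_gamma_sum(shapes, scale, n, rng):
    """n Monte Carlo draws of sum_j Gamma(shape_j, scale); with a common
    scale this sum is distributed as Gamma(sum(shapes), scale)."""
    return [sum(rng.gammavariate(a, scale) for a in shapes)
            for _ in range(n)]

rng = random.Random(0)
xs = sample_gamma_sum([2, 3], 1.0, 20000, rng)
mean = sum(xs) / len(xs)  # theory: mean = (2 + 3) * 1.0 = 5
```

In a multi-stage failure model, each Gamma term would stand for the waiting time of one rate-limiting step, and the sum for the time to overall failure.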
[11] vixra:1701.0300 [pdf]
Golden and Harmonic Mean in the Genetic Code
In two previous works [1], [2] we have shown the determination of the genetic code by the golden and harmonic mean within the standard Genetic Code Table (GCT), i.e. the nucleotide triplet table, whereas in this paper we show the same determination through a specific connection between two tables, the nucleotide doublet Table (DT) and the triplet Table (TT), over the polarity of amino acids as measured by Cloister energy. (Miloje M. Rakočević) (Belgrade, 6.01.2017) (www.rakocevcode.rs) (mirkovmiloje@gmail.com)
[12] vixra:1508.0110 [pdf]
Estimating the PML Risk on Natalizumab: a Simple Approach
In this short note, we show how to quickly verify the correctness of the estimates of the PML risk on natalizumab established in [Borchardt 2015]. Our approach is simple and elementary in that it requires virtually no knowledge of either statistics or probability theory. A Kaplan-Meier curve of the PML incidence, based on postmarketing data as of early August 2013, may be found in [O'Connor et al 2014]; using just the information from that chart, it is possible to directly derive estimates of the risk of PML in JCV-seropositive natalizumab-treated patients according to prior or no prior immunosuppression. The resulting figures are almost identical to the ones in [Borchardt 2015], even though the latter were obtained in a very different fashion.
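Reading risk figures off a Kaplan-Meier chart presupposes the estimator itself. The following is a minimal, generic sketch of the product-limit estimator with made-up event data — not the calculation in [Borchardt 2015] or the curve in [O'Connor et al 2014].

```python
def kaplan_meier(records):
    """Product-limit estimator. records: (time, observed) pairs, with
    observed=1 for an event (e.g. a PML case) and 0 for censoring.
    Returns the survival curve as (event_time, S(t)) steps."""
    s = 1.0
    curve = []
    for t in sorted(set(t for t, _ in records)):
        at_risk = sum(1 for ti, _ in records if ti >= t)
        events = sum(1 for ti, e in records if ti == t and e)
        if events:
            s *= 1 - events / at_risk
            curve.append((t, s))
    return curve

# Four patients: events at t=1 and t=3, censored at t=2 and t=4.
curve = kaplan_meier([(1, 1), (2, 0), (3, 1), (4, 0)])
```

The cumulative incidence at time t is then 1 - S(t), which is the quantity one reads directly off such a chart.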
[13] vixra:1505.0226 [pdf]
WhyWhere2.0: An R Package for Modeling Species Distributions on Big Environmental Data
Previous studies have indicated that multi-interval discretization (segmentation) of continuous-valued attributes for classification learning might provide a robust machine learning approach to modelling species distributions. Here we apply a segmentation model to Bradypus variegatus, the brown-throated three-toed sloth, using the species occurrence and climatic data sets provided in the niche modelling R package dismo and a set of 940 global data sets of mixed type from the Global Ecosystems Database. The primary measure of performance was the area under the curve of the receiver operating characteristic (AUC) on a k-fold validation of the predictions of the segmented model and a third-order generalized linear model (GLM). This paper also presents further advances in the WhyWhere algorithm, available as an R package from the development site at http://github.com/davids99us/whywhere.
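The AUC performance measure reduces to a rank statistic (the Mann-Whitney identity) and can be computed directly from the model scores at presence and absence sites. A minimal sketch, in Python rather than the paper's R, with illustrative scores:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen presence site scores above a
    randomly chosen absence site, counting ties as one half."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 1.0 means perfect separation of presences from absences, 0.5 means chance performance; averaging this over the k validation folds gives the figure of merit used above.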
[14] vixra:1504.0148 [pdf]
Carefully Estimating the Incidence of Natalizumab-Associated PML
We show that the quarterly updates about the risk of PML during natalizumab therapy, while in principle helpful, underestimate the real incidences systematically and significantly. Calculating the PML incidences using an appropriate method and under realistic assumptions, we obtain estimates that are up to 80% higher. In fact, with the recent paper [Plavina et al 2014], our approximate incidences are up to ten times as high. The present article describes the shortcomings of the methods used in [Bloomgren et al 2012] and by Plavina et al for computing incidences, and demonstrates how to properly estimate the true (prospective) risk of developing PML during natalizumab treatment. One application is that the newest data concerning advances in risk mitigation through the extension of dosing intervals, although characterised as not quite statistically significant, are in fact significant. Lastly, we discuss why the established risk-stratification algorithms, even when assessing the PML incidences correctly, are no longer state-of-the-art; in the light of all the progress made so far, it is already possible today to reliably identify over 95% of patients in whom (a personalised regimen of) natalizumab should be very safe.
[15] vixra:1304.0162 [pdf]
Theoretical Basis of in Vivo Tomographic Tracer Kinetics
In vivo tracer kinetics, as probed by current tomographic techniques, is revisited from the point of view of fluid kinematics. Proofs of the standard intravascular advective perfusion model from first premises reveal underlying assumptions and demonstrate that all single input models apply at best to undefined tube-like systems, not to the ones defined by tomography, i.e. the voxels. In particular, they do not and cannot account for the circulation across them. More generally, it is simply not possible to define a single non-zero steady volumetric flow rate per voxel. Restarting from the fact that kinematics requires the definition of six volumetric flow rates per voxel, one for each face, minimalist, 4D spatiotemporal analytic models of the advective transport of intravascular tracers in the whole organ of interest are obtained. Their many parameters, plasmatic volumetric flow rates and volumes, can be readily estimated at least in some specific cases. Estimates should be quasi-absolute in homogeneous tissue regions, regardless of the tomographic technique. Potential applications such as dynamic angio-tractography are presented. By contrast, the transport of mixed intra/extravascular tracers cannot be described by conservation of the mass alone and requires further investigation. Should this theory eventually supersede the current one(s), it shall have a deep impact on our understanding of the circulatory system, hemodynamics, perfusion, permeation and metabolic processes and on the clinical applications of tracer tracking tomography to numerous pathologies.
[16] vixra:1304.0056 [pdf]
Discovering Taxon Specific Oligomer Repeats in Microbial Genomes
Using a computational approach, we studied oligonucleotide repeats in the currently available bacterial whole genomes. Though repeats account for only a small portion of bacterial genomes, they still prevail. Our study shows that some of these oligonucleotides have a large copy number in genomes while maintaining taxon specificity. Generally, a length greater than 12 is enough to make an oligonucleotide repeat genus-specific; longer oligonucleotides become more specific and can serve as species or strain marker sequences. We show here some examples in archaea and bacteria at different taxon levels. As we have a large volume of computational results, we make them available online through our TSOR server, which handles user queries; we give examples of how to use this server. Moreover, as these TSOR sequences are both specific and highly repeated, they are promising candidates for biased amplification of microbial community genomes.
[17] vixra:1302.0092 [pdf]
A Theoretical Solution for Ventricular Septal Defects and Pulmonary Vein Stenosis
Ventricular Septal Defects (VSD) and Pulmonary Vein Stenosis (PVS) are both normally non-life-threatening problems for survivors of early childhood. However, they can be a large hindrance to many patients who want a normal life. With this proposed solution, patients should be able to achieve a life mostly free of problems; hopefully, only regular check-ups will be required after the initial treatment.
[18] vixra:1302.0027 [pdf]
On the K-Mer Frequency Spectra of Organism Genome and Proteome Sequences with a Preliminary Machine Learning Assessment of Prime Predictability
A regular expression and region-specific filtering system for biological records at the National Center for Biotechnology database is integrated into an object-oriented sequence counting application, and a statistical software suite is designed and deployed to interpret the resulting k-mer frequencies---with a priority focus on nullomers. The proteome k-mer frequency spectra of ten model organisms and the genome k-mer frequency spectra of two bacteria and virus strains for the coding and non-coding regions are comparatively scrutinized. We observe that the naturally-evolved (NCBI/organism) and the artificially-biased (randomly-generated) sequences exhibit a clear deviation from the artificially-unbiased (randomly-generated) histogram distributions. Furthermore, a preliminary assessment of prime predictability is conducted on chronologically ordered NCBI genome snapshots over an 18-month period using an artificial neural network; three distinct supervised machine learning algorithms are used to train and test the system on customized NCBI data sets to forecast future prime states---revealing that, to a modest degree, it is feasible to make such predictions.
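The k-mer counting that underlies such frequency spectra — including the identification of nullomers, the k-mers absent from a sequence — can be sketched as follows. This is an illustrative toy, not the authors' object-oriented counting application; the alphabet and example sequence are assumptions.

```python
from collections import Counter
from itertools import product

def kmer_spectrum(seq, k, alphabet="ACGT"):
    """Count every overlapping k-mer in seq; the k-mers over the
    alphabet that never occur (count zero) are the nullomers."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    nullomers = [''.join(w) for w in product(alphabet, repeat=k)
                 if ''.join(w) not in counts]
    return counts, nullomers

counts, nullomers = kmer_spectrum("ACGTACGT", 2)
# counts: AC=2, CG=2, GT=2, TA=1; the other 12 dinucleotides are nullomers.
```

The frequency spectrum itself is then the histogram of these counts, which is what the naturally-evolved and randomly-generated sequences are compared on.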
[19] vixra:1202.0076 [pdf]
Life as Evolving Software
In this paper we present an information-theoretic analysis of Darwin's theory of evolution, modeled as a hill-climbing algorithm on a fitness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing fitness made by a single mutating organism. In two different models we are able to show that evolution will occur and to characterize the rate of evolutionary progress, i.e., the rate of biological creativity.
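The random walk of increasing fitness described here can be illustrated with a toy hill climber: a bit string stands in for the mutating organism, and only fitness-improving mutations survive. This is a schematic stand-in for the paper's software organisms, not its actual models; the genome encoding and fitness function are assumptions.

```python
import random

def hill_climb(fitness, genome, mutate, steps, rng=None):
    """Random walk of non-decreasing fitness: propose a random
    mutation and keep it only if fitness strictly improves."""
    rng = rng or random.Random(0)
    best = fitness(genome)
    for _ in range(steps):
        candidate = mutate(genome, rng)
        f = fitness(candidate)
        if f > best:
            genome, best = candidate, f
    return genome, best

def flip_one(genome, rng):
    """Point mutation: flip one randomly chosen bit."""
    g = list(genome)
    g[rng.randrange(len(g))] ^= 1
    return g

# Toy organism: a 32-bit string whose fitness is its number of 1-bits.
ones = lambda g: sum(g)
g, f = hill_climb(ones, [0] * 32, flip_one, steps=2000)
```

In the paper's setting the organisms are programs and the fitness landscape is far richer, but the accept-only-improvements dynamic is the same.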