Digital Signal Processing

[1] vixra:2309.0075 [pdf]
Microcontrollers A Comprehensive Overview and Comparative Analysis of Diverse Types
This review paper provides a comprehensive overview of five popular microcontrollers: AVR, 8052, PIC, ESP32, and STM32. Each microcontroller is analyzed in terms of its architecture, peripherals, development environment, and application areas. A comparison is provided to highlight the key differences between these microcontrollers and to assist engineers in selecting the most appropriate one for their specific needs. This paper serves as a valuable resource for beginners and experienced engineers alike, providing a clear understanding of the different microcontrollers available and their respective applications.
[2] vixra:2208.0172 [pdf]
A Novel 1D State Space for Efficient Music Rhythmic Analysis
Inferring music time structures has a broad range of applications in music production, processing and analysis. Scholars have proposed various methods to analyze different aspects of time structures, such as beat, downbeat, tempo and meter. Many state-of-the-art (SOTA) methods, however, are computationally expensive. This makes them inapplicable in real-world industrial settings where the scale of the music collections can be millions. This paper proposes a new state space and a semi-Markov model for music time structure analysis. The proposed approach turns the commonly used 2D state spaces into a 1D model through a jump-back reward strategy, reducing the state space size drastically. We then utilize the proposed method for causal, joint beat, downbeat, tempo, and meter tracking, and compare it against several previous methods. The proposed method delivers performance similar to the SOTA joint causal models with a much smaller state space and a more than 30-fold speedup.
[3] vixra:2205.0050 [pdf]
FC1: A Powerful, Non-Deterministic, Symmetric Key Cipher
In this paper we describe a symmetric key algorithm that offers an unprecedented degree of confidentiality. Based on the uniqueness of the modular multiplicative inverse of a positive integer a modulo n and on its computability in polynomial time, this non-deterministic cipher can easily and quickly handle keys of millions or billions of bits whose length an attacker does not even know. The algorithm's primary key is the modulus, while the ciphertext is given by the concatenation of the modular inverses of blocks of plaintext whose length is randomly chosen within a predetermined range. In addition to the full specification, we present a working implementation in the Julia programming language, accompanied by real examples of encryption and decryption.
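The core primitive named above, the modular multiplicative inverse, can be sketched in a few lines of Python. This illustrates only the mathematical building block, not the FC1 specification; the modulus and block values are made up:

```python
# Illustration of the modular-inverse primitive behind FC1 (not the cipher
# itself). pow(a, -1, n) computes the inverse in polynomial time (Python 3.8+).
def mod_inverse(a: int, n: int) -> int:
    """Return the unique b in [1, n) with (a * b) % n == 1, for gcd(a, n) == 1."""
    return pow(a, -1, n)

modulo = 2**61 - 1                             # hypothetical primary key: a prime modulus
block = 123456789                              # a plaintext block interpreted as an integer
cipher_block = mod_inverse(block, modulo)      # "encrypt": take the modular inverse
recovered = mod_inverse(cipher_block, modulo)  # "decrypt": invert again
assert recovered == block
```

Because the inverse of the inverse is the original value, decryption is simply a second inversion under the same modulus.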
[4] vixra:2204.0110 [pdf]
Ġasaq: Provably Secure Key Derivation
This paper proposes Ġasaq: a provably secure key derivation method that, when given access to a true random number generator (TRNG), allows communicating parties that have a pre-shared secret password p to agree on a secret key k that is indistinguishable from truly random numbers, with a guaranteed entropy of min(H(p), |k|). Ġasaq's security guarantees hold even in a post-quantum world under Grover's algorithm, and even if it turns out that P = NP. Such strong security guarantees, similar to those of the one-time pad (OTP), became attractive after the introduction of Băhēm, a similarly provably secure symmetric cipher that is strong enough to shift the cipher's security bottleneck to the key derivation function. State-of-the-art key derivation functions, such as the PBKDF, or even memory-hard variants such as Argon2, are not provably secure, but rather not fully broken yet. They do not guarantee against needlessly losing password entropy; that is, the output key could have an entropy lower than the password's entropy, even if that entropy is less than the key's bit length. They also assume that P != NP and, even then, have their key space square-rooted under Grover's algorithm; neither limitation applies to Ġasaq. Using such key derivation functions, like the PBKDF or Argon2, is acceptable with conventional ciphers, such as ChaCha20 or AES, as they, too, suffer the same limitations, hence none of them is a bottleneck for the others, similarly to how a glass door is not a security bottleneck for a glass house. However, a question remains: why would people secure their belongings in a glass structure, to justify a glass door, when they could use a reinforced steel structure at a similar cost? This is where Ġasaq comes in, offering Băhēm the reinforced steel door that matches its security.
[5] vixra:2204.0064 [pdf]
Băhēm: A Provably Secure Symmetric Cipher
This paper proposes Băhēm: a symmetric cipher such that, when it is used with a pre-shared secret key k, no cryptanalysis can degrade its security below H(k) bits of entropy, even under Grover's algorithm or even if it turned out that P = NP. Băhēm's security is very similar to that of the one-time pad (OTP), except that it does not impose on the communicating parties the inconvenient constraint of generating a large random pad in advance of their communication. Instead, Băhēm allows the parties to agree on a small pre-shared secret key, such as |k| = 128 bits, and then generate their random pads in the future as they go. For any operation, be it encryption or decryption, Băhēm performs only 4 exclusive-or operations (XORs) per cleartext bit, including its 2 overhead bits. If it takes a CPU 1 cycle to perform an XOR between a pair of 64-bit variables, then a Băhēm operation takes 4 / 8 = 0.5 cycles per byte. Further, all Băhēm operations are independent, so a system with n CPU cores can perform an operation in 0.5 / n CPU cycles per byte of wall-clock time. While Băhēm has an overhead of 2 extra bits per encrypted cleartext bit, its early single-threaded prototype implementation achieves faster decryption than OpenSSL's ChaCha20, despite the fact that Băhēm's ciphertext is 3 times larger than ChaCha20's. This supports the claim that the 2-bit overhead is practically negligible for most applications. Băhēm's early prototype has a slower encryption time than OpenSSL's ChaCha20 due to its use of a true random number generator (TRNG). However, this can be trivially optimised by gathering the true random bits in advance, so Băhēm gets the entropy conveniently when it runs. Aside from Băhēm's usage as a provably-secure general-purpose symmetric cipher, it can also be used in some applications, such as password verification, to enhance existing hash functions to become provably one-way, by using Băhēm to encrypt a predefined string using the hash as the key.
A password is then verified if its hash decrypts the Băhēm ciphertext to retrieve the predefined string.
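The password-verification idea in the last two sentences can be sketched as follows. Here a simple SHA-256-derived XOR keystream stands in for Băhēm (the real cipher also mixes in true random pad bits), and the marker string and passwords are made up:

```python
# Hedged sketch of hash-based password verification: encrypt a predefined
# string using the password hash as the key; a candidate password is accepted
# iff its hash decrypts the stored ciphertext back to the predefined string.
import hashlib

PREDEFINED = b"known-plaintext-marker"   # hypothetical predefined string

def xor_with_hash(data: bytes, key: bytes) -> bytes:
    """XOR data against a keystream expanded from the key (stand-in cipher)."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

# enrolment: store the ciphertext produced with the correct password's hash
stored = xor_with_hash(PREDEFINED, hashlib.sha256(b"correct horse").digest())

# verification: XOR is its own inverse, so the right hash recovers the marker
candidate = hashlib.sha256(b"correct horse").digest()
assert xor_with_hash(stored, candidate) == PREDEFINED
```

Since XOR encryption and decryption are the same operation, verification is a single pass over the stored ciphertext with the candidate hash.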
[6] vixra:2201.0141 [pdf]
A Blind Source Separation Technique for Document Restoration Based on Edge Estimation
In this paper we study a Blind Source Separation (BSS) problem, and in particular we deal with document restoration. We consider the classical linear model. To this aim, we analyze the derivatives of the images instead of the intensity levels. Thus, we can establish non-overlapping constraints on the document sources. Moreover, we impose that the rows of the mixture matrices of the sources sum to 1, in order to keep the lightness of the estimated sources equal to that of the data. Here we present a technique based on the symmetric factorization, whose effectiveness is tested by the experimental results.
[7] vixra:2201.0050 [pdf]
Blind Source Separation in Document Restoration: an Interference Level Estimation
We deal with the problem of blind separation of components, in particular for documents corrupted by bleed-through and show-through. We analyze a regularization technique which estimates the original sources, the interference levels and the blur operators. We treat the estimation of the interference levels, given the original sources and the blur operators. In particular, we investigate several GNC-type algorithms for minimizing the energy function. In the experimental results, we find which algorithm gives the more precise estimates of the interference levels.
[8] vixra:2110.0151 [pdf]
Foundations for Strip Adjustment of Airborne Laserscanning Data with Conformal Geometric Algebra
Typically, airborne laser scanning involves a laser mounted on an airplane or drone (its pulsed beam direction can scan in the flight direction and perpendicular to it), an inertial positioning system of gyroscopes, and a global navigation satellite system. The data, relative orientation and relative distance of these three systems are combined in computing strips of ground-surface point locations in an earth-fixed coordinate system. Finally, all laser-scanning strips are combined via iterative closest point methods into an interactive three-dimensional terrain map. In this work we describe the mathematical framework for how to use the iterative closest point method for the adjustment of the airborne laser-scanning data strips in the framework of conformal geometric algebra.
[9] vixra:2102.0028 [pdf]
HRIDAI: A Tale of Two Categories of ECGs
This work presents a geometric study of computational disease tagging of ECGs. Using ideas like the Earthmover's distance (EMD) and the Euclidean distance, it clusters category 1 and category −1 ECGs into two clusters, computes their averages and then predicts whether each of 100 test ECGs belongs to category 1 or category −1. We report an 80% success rate using the Euclidean distance, at the cost of an intense computational investment, and 69% success using the EMD. We suggest further ways to augment and enhance this automated classification scheme using bio-markers like Troponin isoforms, CKMB and BNP. Future directions include the study of larger sets of ECGs from diverse populations, collected from a heterogeneous mix of patients with different CVD conditions. Further, we advocate the robustness of this programmatic approach as compared to deep-learning schemes, which are amenable to dynamic instabilities. This work is a part of our ongoing framework, the Heart Regulated Intelligent Decision Assisted Information (HRIDAI) system.
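The cluster-average-and-predict scheme described above amounts to a nearest-centroid classifier under the Euclidean distance. A minimal sketch, on synthetic stand-in data rather than real ECGs:

```python
# Nearest-centroid classification sketch: average each category's signals,
# then label a test signal by its Euclidean distance to the two averages.
# The "ECGs" here are synthetic arrays, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
cat_pos = rng.normal(1.0, 0.1, size=(50, 200))    # hypothetical category 1 ECGs
cat_neg = rng.normal(-1.0, 0.1, size=(50, 200))   # hypothetical category -1 ECGs
avg_pos = cat_pos.mean(axis=0)                    # cluster average, category 1
avg_neg = cat_neg.mean(axis=0)                    # cluster average, category -1

def predict(ecg: np.ndarray) -> int:
    """Return 1 or -1 depending on which cluster average is closer."""
    d_pos = np.linalg.norm(ecg - avg_pos)
    d_neg = np.linalg.norm(ecg - avg_neg)
    return 1 if d_pos < d_neg else -1

label = predict(np.full(200, 1.0))   # a clearly category-1-like test signal
```

Swapping `np.linalg.norm` for an Earthmover's distance routine yields the EMD variant the abstract compares against.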
[10] vixra:2005.0126 [pdf]
Detecting a Valve Spring Failure of a Piston Compressor with the Help of the Vibration Monitoring.
The article presents problems related to vibration diagnostics in reciprocating compressors. This paper presents the evaluation of several digital signal processing techniques, such as spectrum calculation with the Discrete Fourier Transform (DFT), the Continuous Wavelet Transform (CWT), and segmented analysis, for detecting a spring failure in a reciprocating compressor valve with the help of vibration monitoring. An experimental investigation was conducted to collect data from the compressor with both a faultless valve and a valve with a spring failure. Three 112DV1 vibration acceleration probes manufactured by TIK were mounted on the cylinder of the compressor. The keyphasor probe was mounted on the compressor's flywheel. The signal of the vibration acceleration probe mounted on the top of the cylinder was used for condition monitoring and fault detection of the valve. The TIK-RVM monitoring and data acquisition system was used for gathering the signal samples from the probes. The sampling frequency was 30193.5 Hz and the signal length was 65535 samples. To imitate the spring fault, the exhaust valve spring was replaced by a shortened one with the same stiffness. As can be seen from the signal processing results in the article, the techniques used show quite different results for the cases of the normal valve spring and the short one. It seems that for this type of compressor and valve, a valve spring failure can be quite reliably detected with the help of vibration monitoring. To see whether this is the case for other compressor and valve types, additional experiments are needed.
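The spectrum-calculation step mentioned above can be sketched with a DFT at the sampling rate reported in the abstract. The vibration signal below is synthetic (two made-up sinusoids), purely to show the mechanics; in the paper a real spring fault shifts energy between components of the spectrum:

```python
# DFT spectrum sketch for a vibration signal sampled at 30193.5 Hz.
# The signal content is invented; only the analysis pipeline is illustrated.
import numpy as np

fs = 30193.5                  # sampling frequency from the experiment
n = 65536                     # close to the 65535-sample length in the paper
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 25 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)

spectrum = np.abs(np.fft.rfft(signal)) / n      # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(n, d=1 / fs)            # bin centres in Hz
peak_hz = freqs[np.argmax(spectrum)]            # dominant component, near 25 Hz
```

Comparing such spectra for the faultless and faulty valve recordings is what reveals the differing signatures the article reports.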
[11] vixra:2005.0089 [pdf]
The Theoretical Average Encoding Length for Micro-States in Boltzmann System Based on Deng Entropy
Because of its good performance in handling uncertainty, Dempster-Shafer evidence theory (evidence theory) has been widely used. Recently, a novel entropy, named Deng entropy, was proposed in evidence theory as a generalization of Shannon entropy. Deng entropy and the maximum Deng entropy have been applied in many fields due to their efficiency and reliability in measuring uncertainty. However, the maximum Deng entropy lacks a proper explanation in physics, which limits its further application. Thus, in this paper, with respect to thermodynamics and Shannon's source coding theorem, the theoretical average encoding length for micro-states in a Boltzmann system based on Deng entropy is proposed, which is a possible physical interpretation of the maximum Deng entropy.
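For context, Deng entropy of a basic probability assignment $m$ over a frame of discernment $X$ is commonly written as (recalled from the evidence-theory literature, not stated in the abstract):

```latex
E_d(m) = -\sum_{\substack{A \subseteq X \\ m(A) > 0}} m(A) \log_2 \frac{m(A)}{2^{|A|}-1}
```

When every focal element $A$ is a singleton, $2^{|A|}-1 = 1$ and the expression reduces to Shannon entropy, which is the sense in which Deng entropy generalizes it.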
[12] vixra:2004.0578 [pdf]
How to Read Faces Without Looking at Them
Face reading is the most intuitive aspect of emotion recognition. Unfortunately, digital analysis of facial expression requires digitally recording personal faces. As emotional analysis is particularly required in more poised scenarios, capturing faces becomes a gross violation of privacy. In this paper, we use the concept of compressive analysis introduced in [1] to conceptualise a system which compressively acquires faces in order to ensure unusable reconstruction, while allowing for acceptable (and adjustable) accuracy in inference.
[13] vixra:2004.0257 [pdf]
Image Reconstruction with a Non-Parallelism Constraint
We consider the problem of restoring images from blur and noise. We find the minimum of the primal energy function, which has two terms, related to faithfulness to the data and to smoothness constraints, respectively. In general, we do not know the discontinuities of the ideal image and have to estimate them. We require that the obtained images are piecewise continuous with thin edges. We associate with the primal energy function a dual energy function, which treats discontinuities implicitly. We determine a dual energy function which is convex and takes into account non-parallelism constraints, in order to obtain thin edges. The proposed dual energy can be used as the initial function in a GNC (Graduated Non-Convexity)-type algorithm, to obtain reconstructed images with Boolean discontinuities. In the experimental results, we show that the formation of parallel lines is inhibited.
[14] vixra:2004.0225 [pdf]
A New Method for Image Super-Resolution
The aim of this paper is to demonstrate that it is possible to reconstruct coherent human faces from very degraded pixelated images with a very fast algorithm, much faster than compressed sensing (CS) algorithms, easier to compute and without deep learning, and therefore without substantial information technology resources, i.e. a large database of thousands of training images (see https://arxiv.org/pdf/2003.13063.pdf). This technological breakthrough was patented in 2018 with the French patent application FR 1855485 (https://patents.google.com/patent/FR3082980A1). Face Super-Resolution (FSR) has many applications, in particular in a remote-surveillance context which already exists in China but which could become a reality in the USA and European countries. Today, deep learning methods and artificial intelligence (AI) appear in this context, but these methods are difficult to integrate into such systems because of the need for large amounts of data. The Chinese patent application CN107563965 and the scientific publication "Pixel Recursive Super Resolution" by R. Dahl, M. Norouzi and J. Shlens propose such methods (see https://arxiv.org/pdf/1702.00783.pdf). In this context, this new method could help governments, institutions and enterprises to accelerate the generalisation of automatic facial identification and to save time in the reconstruction process in industrial settings such as terahertz imaging, medical imaging or spatial imaging.
[15] vixra:2004.0221 [pdf]
Multi-Key Homomorphic Encryption based Blockchain Voting System
During the COVID-19 pandemic, with more than 70 national elections scheduled for the rest of the year worldwide, the coronavirus is putting into question whether some of these elections will happen on time, or at all. We propose a novel solution based on multi-key homomorphic encryption and blockchain technology, which is unhackable, privacy-preserving and decentralized. We first introduce the importance of a feasible voting system in this special era, then demonstrate how we construct the system, and finally make a thorough comparison of the possible solutions.
[16] vixra:1911.0406 [pdf]
Improved Methodology for Computing Coastal Upwelling Indices from Satellite Data
The article discusses an improved methodology for determining coastal upwelling indices from satellite maps of sea surface temperature and near-surface wind. The main difference of this technique is the determination of upwelling parameters by monthly climatic masks. The algorithm for choosing the monthly climatic masks is considered in detail. The choice of a region remote from the upwelling waters, in the open sea, is substantiated for calculating the thermal upwelling index and its modifications. In addition to the generally accepted upwelling indices (thermal and Ekman), new indices have been introduced: the cumulative and apparent upwelling power, which take the upwelling surface area into account. The technique is illustrated on the example of the Canary Upwelling. It allows the boundaries of the upwelling to be determined in each climatic month, and therefore its indices and the environmental parameters of the upwelling region (surface wind, sea level, geostrophic current, etc.) to be calculated more accurately.
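A thermal upwelling index of the kind referred to above is, at its simplest, the SST difference between a reference region in the open sea and the coastal upwelling region. A sketch with made-up temperatures (the paper's refinement is to delimit the regions with monthly climatic masks):

```python
# Thermal upwelling index sketch: mean offshore SST minus mean coastal SST.
# All temperature values below are invented, purely for illustration.
import numpy as np

sst_open_sea = np.array([21.4, 21.6, 21.5])    # hypothetical offshore SSTs, deg C
sst_upwelling = np.array([17.2, 17.0, 17.5])   # hypothetical coastal SSTs, deg C

thermal_index = sst_open_sea.mean() - sst_upwelling.mean()
# positive values indicate cooler coastal water, i.e. active upwelling
```

The cumulative and apparent upwelling-power indices introduced in the article additionally weight such differences by the upwelling surface area.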
[17] vixra:1910.0532 [pdf]
Preprocessing Quaternion Data in Quaternion Spaces Using the Quaternion Domain Fourier Transform
Recently a new type of hypercomplex Fourier transform has been suggested. It transforms quaternion-valued signals (for example electromagnetic scalar-vector potentials, color data, space-time data, etc.) defined over a quaternion domain (space-time or other 4D domains) from a quaternion "position" space to a quaternion "frequency" space. The quaternion domain Fourier transform (QDFT) therefore uses the full potential provided by hypercomplex algebra in higher dimensions, such as 3D and 4D transformation covariance. The QDFT is explained together with its main properties relevant for applications such as quaternionic data preprocessing.
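The algebraic backbone of any quaternionic transform is the (non-commutative) Hamilton product, which can be sketched directly; the abstract does not give the QDFT kernel, so this only illustrates the underlying algebra:

```python
# Hamilton product of quaternions represented as (w, x, y, z) tuples.
def qmul(p, q):
    """Quaternion product p * q; note it is non-commutative."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k                  # i * j = k
assert qmul(j, i) == (0, 0, 0, -1)      # j * i = -k: order matters
```

It is this non-commutativity that distinguishes quaternionic "frequency" analysis from the familiar complex case.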
[18] vixra:1910.0140 [pdf]
Remote Sensing and Computer Science
The implications of optimal archetypes have been far-reaching and pervasive. In fact, few analysts would disagree with the visualization of neural networks. While such a hypothesis is largely an appropriate objective, it is supported by existing work in the field. Our focus in this paper is not on whether A* search can be made peer-to-peer and pseudorandom, but rather on presenting a real-time tool for visualizing RAID [1].
[19] vixra:1909.0448 [pdf]
Face Alignment Using a Three Layer Predictor
Face alignment is an important precursor for most algorithms related to facial images, such as expression analysis, face recognition and detection. Also, some images lose information due to factors such as occlusion and lighting, and it is important to recover those lost features. This paper proposes an innovative method for automatic face alignment that utilizes deep learning. First, we use second-order Gaussian derivatives along with an RBF-SVM and AdaBoost to classify a first layer of landmark points. Next, we use branching-based cascaded regression to obtain a second layer of points, which is further used as input to a parallel and multi-scale CNN that gives us the complete output. Results showed that the algorithm compares excellently with state-of-the-art algorithms.
[20] vixra:1908.0486 [pdf]
Minimizing Acquisition Maximizing Inference a Demonstration on Print Error Detection
Is it possible to detect a feature in an image without ever being able to look at it? Images are known to be very redundant in the spatial domain. When transformed to bases like the Discrete Cosine Transform (DCT) or wavelets, they acquire a sparser (more effective) representation. Compressed Sensing is a technique which proposes simultaneous acquisition and compression of any signal by taking very few random linear measurements (M) instead of uniform samples at more than twice the bandwidth frequency (the Shannon-Nyquist theorem). The quality of reconstruction relates directly to M, which should be above a certain threshold (determined by the level of sparsity, k) for a reliable recovery. Since these measurements can non-adaptively reconstruct the signal to a faithful extent using purely analytical methods like Basis Pursuit, Matching Pursuit, Iterative Thresholding, etc., we can be assured that these compressed samples contain enough information about any relevant macro-level feature contained in the (image) signal. Thus if we choose to deliberately acquire an even lower number of measurements, in order to thwart the possibility of a comprehensible reconstruction, but high enough to infer whether a relevant feature exists in an image, we can achieve accurate image classification while preserving its privacy. Through the print error detection problem, it is demonstrated that such a novel system can be implemented in practice.
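The acquisition step described above, M random linear measurements of a k-sparse signal, can be sketched in a few lines; the dimensions below are illustrative choices, not values from the paper:

```python
# Compressed-sensing acquisition sketch: y = Phi @ x with M << N.
# N, M, k are made-up illustrative sizes.
import numpy as np

rng = np.random.default_rng(1)
N, M, k = 256, 32, 4

x = np.zeros(N)                                       # k-sparse signal
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(M, N)) / np.sqrt(M)            # random Gaussian measurement matrix
y = Phi @ x                                           # the M compressed measurements

# inference (e.g. classification) can operate directly on y; a comprehensible
# reconstruction via Basis Pursuit etc. would need M above a k-dependent threshold
```

Deliberately choosing M below the reconstruction threshold, as the abstract proposes, keeps `y` informative for classification while making faithful recovery of `x` infeasible.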
[21] vixra:1906.0561 [pdf]
Emerging Trends in Digital Authentication
This manuscript attempts to shed light on the evolution of authentication systems towards Multi-Factor Authentication (MFA) from traditional text-based password systems. The evolution of authentication systems is commensurate with that of security-breaching techniques. While many strong authentication products, such as multi-factor authentication (MFA), single sign-on (SSO), biometrics and privileged access management (PAM), have existed for a long time, the constant deluge of data breaches and password database leaks has re-illustrated the weakness in many authentication paradigms. As a result, the industry is both re-thinking the way we approach authentication and making efforts to simplify previously complex or expensive authentication technologies for the everyday user.
[22] vixra:1905.0023 [pdf]
Fast Frame Rate Up-conversion Using Video Decomposition
Video is one of the most popular media in the world. However, the video standards followed by different broadcasting companies and devices differ in several parameters, which results in compatibility issues across hardware when handling a particular video type. One such major parameter is the frame rate of a video. Though it is easy to reduce the frame rate of a video by dropping frames at a particular interval, frame rate up-conversion is a non-trivial yet important problem in video communication. In this paper, we apply a video decomposition algorithm to extract the moving regions in a video and interpolate the background and the sparse information separately for a fast up-conversion. We test our algorithm on different video contents and establish that the proposed algorithm performs faster than the existing up-conversion method without producing any visual distortion.
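To make the up-conversion task concrete, here is a toy sketch in which each inserted frame is simply the average of its two neighbours; the paper's method instead decomposes the video and interpolates the background and sparse (moving) parts separately:

```python
# Toy frame-rate up-conversion: insert an averaged in-between frame between
# every pair of consecutive frames, nearly doubling the frame rate.
import numpy as np

# three tiny synthetic "frames" with increasing brightness
frames = [np.full((4, 4), v, dtype=float) for v in (0.0, 10.0, 20.0)]

def upconvert(frames):
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a + b) / 2.0)   # naive interpolated in-between frame
    out.append(frames[-1])
    return out

doubled = upconvert(frames)   # 3 frames -> 5 frames
```

Plain averaging blurs moving objects, which is exactly why separating the static background from the sparse moving regions before interpolating, as the paper does, pays off.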
[23] vixra:1904.0525 [pdf]
An Analysis of Noise Folding for Low-Rank Matrix Recovery
Previous work regarding low-rank matrix recovery has concentrated on scenarios in which the matrix is noise-free and the measurements are corrupted by noise. However, in practical applications, the matrix itself is usually perturbed by random noise prior to measurement. This paper concisely investigates this scenario and shows that, for most measurement schemes utilized in compressed sensing, the two models are equivalent, with the central distinction that the noise associated with (\ref{eq.3}) is larger by a factor of $mn/M$, where $m,~n$ are the dimensions of the matrix and $M$ is the number of measurements. Additionally, this paper discusses the reconstruction of low-rank matrices in this setting, presents sufficient conditions based on the associated null space property to guarantee robust recovery, and obtains the required number of measurements. Furthermore, for the non-Gaussian noise scenario, we further explore it and give the corresponding result. The simulation experiments conducted show, on the one hand, the effect of the noise variance on the recovery performance and, on the other hand, demonstrate the verifiability of the proposed model.
[24] vixra:1904.0471 [pdf]
On the In-Band Full-Duplex Gain Scalability in On-demand Spectrum Wireless Local Area Networks
The advent of Self-Interference Cancellation (SIC) techniques has turned in-band Full-Duplex (FD) radios into a reality. FD radios double the theoretical capacity of a half-duplex wireless link by enabling simultaneous transmission and reception in the same channel. A challenging question raised by that advent is whether it is possible to scale the FD gain in Wireless Local Area Networks (WLANs). Precisely, the question concerns how a random-access Medium Access Control (MAC) protocol can sustain the FD gain over an increasing number of stations. Also, to ensure bandwidth resources match traffic demands, the MAC protocol design is expected to enable On-Demand Spectrum Allocation (ODSA) policies in the presence of the FD feature. In this sense, we survey the related literature and find that a coupled FD-ODSA MAC solution is lacking. We also identify a prevailing practice in the design of FD MAC protocols, which we refer to as the 1:1 FD MAC guideline. Under this guideline, an FD MAC protocol 'sees' the whole FD bandwidth through a single FD PHYsical (PHY) layer. The protocol attempts to occupy the entire available bandwidth with up to two arbitrary simultaneous transmissions. With this, the resulting communication range impairs the spatial reuse offer, which penalizes network throughput. Also, modulating each data frame across the entire wireless bandwidth demands a stronger Received Signal Strength Indication (RSSI) in comparison to narrower bandwidths. These drawbacks can prevent 1:1 FD MAC protocols from scaling the FD gain. To face these drawbacks, we propose the 1:N FD MAC design guideline. Under the 1:N guideline, FD MAC protocols 'see' the FD bandwidth through N > 1 orthogonal narrow-channel PHY layers. Channel orthogonality increases the spatial reuse offer, and narrow channels relax the RSSI requisites. Also, the multi-channel arrangement we adopt facilitates the development of ODSA policies at the MAC layer.
To demonstrate how an FD MAC protocol can operate under the 1:N design guideline, we propose two case studies. One case study consists of a novel random-access protocol under the 1:N design guideline called the Piece-by-Piece Enhanced Distributed Channel Access (PbP-EDCA). The other case study consists of adapting an existing FD Wi-Fi MAC protocol [Jain et al., 2011], which we name the 1:1 FD Busy Tone MAC protocol (FDBT), to the 1:N design guideline. Through analytical performance evaluation studies, we verify that the 1:N MAC protocols can outperform the 1:1 FDBT MAC protocol's saturation throughput even in scenarios where 1:1 FDBT is expected to maximize the FD gain. Our results indicate that the capacity upper bound of an arbitrary 1:1 FD MAC protocol improves if the protocol's functioning can be adapted to work under the 1:N MAC design guideline. To check whether that assertion is valid, we propose an analytical study and a proof-of-concept software-defined radio experiment. Our results show that the capacity upper-bound gains of the 1:1 and 1:N design guidelines correspond to 2× and 2.2×, respectively, the capacity upper bound achieved by a standard half-duplex WLAN at the MAC layer. With these results, we believe our proposal can inspire a new generation of MAC protocols that scale the FD gain in WLANs.
[25] vixra:1902.0135 [pdf]
Emerging NUI-based Methods for User Authentication: A New Taxonomy and Survey
As the convenience and cost benefits of Natural User Interface (NUI) technologies are hastening their wide adoption, computing devices equipped with such interfaces are becoming ubiquitous. Used for a broad range of applications, from accessing email and bank accounts to home automation and interacting with a healthcare provider, such devices require, more than ever before, a secure yet convenient user authentication mechanism. This paper introduces a new taxonomy and presents a survey of "point-of-entry" user-device authentication mechanisms that employ a natural user interaction. The taxonomy allows a grouping of the surveyed techniques based on the sensor type used to capture user input, the actuator a user applies during interaction, and the credential type used for authentication. A set of security and usability evaluation criteria is then proposed based on the Bonneau, Herley, Van Oorschot and Stajano framework. An analysis of a selection of techniques, and, more importantly, of the broader taxonomy elements they belong to, based on these evaluation criteria, is provided. This analysis and taxonomy provide a framework for the comparison of different authentication alternatives given an application and a targeted threat model. The taxonomy and analysis also offer insights into possibly unexplored, yet potentially rewarding, research avenues for NUI-based user authentication.
[26] vixra:1808.0037 [pdf]
Generalization of Pollack Rule and Alternative Power Equation
After showing that only one of the different versions of Pollack's rule found in the literature agrees with the experimental behavior of a CPU running at stock frequency versus the same CPU overclocked, we introduce a formal simplified model of a CPU and derive a generalized Pollack's rule that is also valid for multithreaded architectures, caches, clusters of processors, and other computational devices described by this model. A companion equation for power consumption is also proposed.
[27] vixra:1807.0224 [pdf]
A Fast Algorithm for the Demosaicing Problem Concerning the Bayer Pattern
In this paper we deal with the demosaicing problem when the Bayer pattern is used. We propose a fast heuristic algorithm consisting of three parts. In the first, we initialize the green channel by means of an edge-directed and weighted-average technique. In the second, the red and blue channels are updated thanks to an equality constraint on the second derivatives. The third part consists of a constant-hue-based interpolation. We show experimentally that the proposed algorithm gives on average better reconstructions than more computationally expensive algorithms.
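To make the green-channel initialization concrete, here is the simplest possible baseline: estimating a missing green value on a Bayer mosaic by averaging its four green neighbours. The paper's algorithm is edge-directed and weighted rather than this plain average, and the mosaic values below are made up:

```python
# Naive green interpolation on a Bayer mosaic: at a red/blue site, average
# the four adjacent green samples. Interior pixels only, toy data.
import numpy as np

# tiny Bayer mosaic; in this layout green sits where (row + col) is even
mosaic = np.array([
    [10, 50, 12, 52],
    [80, 14, 82, 16],
    [18, 54, 20, 56],
    [84, 22, 86, 24],
], dtype=float)

def green_at(r: int, c: int) -> float:
    """Average the 4 green neighbours of a non-green (red/blue) site."""
    return (mosaic[r - 1, c] + mosaic[r + 1, c]
            + mosaic[r, c - 1] + mosaic[r, c + 1]) / 4.0

g = green_at(1, 2)   # interpolated green at a red/blue site
```

Replacing the uniform weights with edge-dependent ones, so that averaging happens along edges rather than across them, is what the paper's first stage adds.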
[28] vixra:1805.0284 [pdf]
Minimum Amount of Text Overlapping in Document Separation
We consider a Blind Source Separation problem. In particular, we focus on the reconstruction of digital documents degraded by bleed-through and show-through effects. In this case, since the mixing matrix and the source and data images are nonnegative, the solution is given by a Nonnegative Factorization. As the problem is ill-posed, further assumptions are necessary to estimate the solution. In this paper we propose an iterative algorithm to estimate the correct overlapping level from the verso to the recto of the involved document. Thus, the proposed method is a Correlated Component Analysis technique. The method has low computational cost and is fully unsupervised. Moreover, we give an extension of the proposed algorithm to deal with a model that is not translation-invariant. Our experimental results confirm the effectiveness of the method.
[29] vixra:1803.0393 [pdf]
Microstrip Quad-Channel Diplexer Using Quad-Mode Square Ring Resonators
A new compact microstrip quad-channel diplexer (2.15/3.60 GHz and 2.72/5.05 GHz) using quad-mode square ring resonators is proposed. The quad-channel diplexer is composed of two quad-mode square ring resonators (QMSRRs) with one common input and two output coupled-line structures. By adjusting the impedance ratio and length of the QMSRR, the resonant modes can be easily controlled to implement a dual-band bandpass filter. The diplexer has a small circuit size, since it is constructed from only two QMSRRs and a common input coupled-line structure, while keeping good isolation (> 28 dB). Good agreement is achieved between measurement and simulation.
[30] vixra:1712.0534 [pdf]
Horizontal Planar Motion Mechanism (HPMM) Incorporated to the Existing Towing Carriage for Ship Manoeuvring Studies
Planar Motion Mechanism (PMM) equipment is a facility, generally attached to a towing tank, for performing experimental studies with ship models to determine the manoeuvring characteristics of a ship. The ship model is oscillated at prescribed amplitude and frequency in different modes of operation while it is towed along the towing tank at a predefined speed. The hydrodynamic forces and moments are recorded, analyzed and processed to obtain the hydrodynamic derivatives appearing in the manoeuvring equations of motion of a ship. This paper presents the details of the Horizontal Planar Motion Mechanism (HPMM) equipment which was designed, developed and installed in the Towing Tank laboratory at IIT Madras.
[31] vixra:1710.0029 [pdf]
Perturbations of Compressed Data Separation with Redundant Tight Frames
In the era of big data, multi-modal data can be seen everywhere, and research on such data has attracted extensive attention in the past few years. In this paper, we investigate perturbations of compressed data separation with redundant tight frames via Φ̃-ℓq-minimization. By exploiting the properties of the redundant tight frame and the perturbation matrix, i.e., mutual coherence, the null space property and the restricted isometry property, a condition for the reconstruction of sparse signals with redundant tight frames is established, and an error estimate between the local optimal solution and the original signal is also provided. Numerical experiments are carried out to show that Φ̃-ℓq-minimization is robust and stable for the reconstruction of sparse signals with redundant tight frames. To our knowledge, our work may be the first study concerning perturbations of the measurement matrix and the redundant tight frame for compressed data separation.
[32] vixra:1709.0402 [pdf]
Detection and Prevention of Non-PC Botnets
Botnet attacks are a serious and well-established threat to the internet community. These attacks are not restricted to PCs or laptops, but are spreading their roots to devices such as smartphones, refrigerators, and medical instruments. Users consider these to be the devices least prone to attack; on the other hand, a device that is expected to be least vulnerable often has weak security, which attracts attackers. In this paper, we list the details of the latest botnet attacks and the common vulnerabilities behind them. We also explain and suggest proven detection methods based on attack type. After an analysis of attacks and detection techniques, we offer recommendations that can be utilized to mitigate such attacks.
[33] vixra:1709.0359 [pdf]
A Sharp Sufficient Condition of Block Signal Recovery Via $l_2/l_1$-Minimization
This work obtains a sharp sufficient condition on the block restricted isometry property for the recovery of sparse signals. Under this assumption, a signal with block structure can be stably recovered in the noisy case, and a block sparse signal can be exactly reconstructed in the noise-free case. Besides, an example is given to show that the condition is sharp. As a byproduct, when $t=1$, the result improves the bound on the block restricted isometry constant $\delta_{s|\mathcal{I}}$ in Lin and Li (Acta Math. Sin. Engl. Ser. 29(7): 1401-1412, 2013).
[34] vixra:1709.0164 [pdf]
RSA Cryptography over Polynomials (II)
We present a cryptosystem similar to the RSA cryptosystem but for polynomials over a finite field; more precisely, it uses two irreducible polynomials instead of two prime numbers.
[35] vixra:1705.0157 [pdf]
OPRA Technique for M-QAM over Nakagami-m Fading Channel with Imperfect CSI
An analysis of an Optimum Power and Rate Adaptation (OPRA) technique has been carried out for Multilevel Quadrature Amplitude Modulation (M-QAM) over Nakagami-m flat fading channels, considering imperfect channel estimation at the receiver side. The optimal solution has been derived for continuous adaptation; it is a specific bound function that cannot be expressed in closed mathematical form. Therefore, a sub-optimal solution is derived for the continuous adaptation, and it has been observed that it tends to the optimum solution as the correlation coefficient between the true channel gain and its estimate tends to one. It has also been observed that the receiver performance degrades with an increase in estimation error.
[36] vixra:1612.0241 [pdf]
Ds-Bidens: a Novel Computer Program for Studying Bacterial Colony Features
Optical forward-scattering systems supported by image analysis methods are increasingly being used for rapid identification of bacterial colonies (Vibrio parahaemolyticus, Vibrio vulnificus, Vibrio cholerae, etc.). The conventional detection and identification of bacterial colonies comprises a variety of methodologies based on biochemical, serological or DNA/RNA characterization. Such methods involve laborious and time-consuming procedures to achieve confirmatory results. In this article we present ds-Bidens, a novel software for studying bacterial colony features. ds-Bidens was programmed using the C++, Perl and wxBasic programming languages. A graphical user interface (GUI), an image processing tool and functions to compute bacterial colony features were programmed. The result is versatile software that provides key tools for studying bacterial colony images, such as texture analysis and invariant moment and color (CIELab) calculation, simplifying operations previously carried out by MATLAB applications. The new software can be of particular interest in microbiology, both for identifying bacterial colonies and for studying their growth and changes in color and textural features. Additionally, ds-Bidens offers users a versatile environment to study bacterial colony images. ds-Bidens is freely available from: http://ds-bidens.sourceforge.net/
[37] vixra:1610.0330 [pdf]
Method for Organizing Wireless Computer Network in Chemical System
Method for organizing wireless computer network in chemical system. This invention relates to physical chemistry and computer technology. The nodes of this network are computers with connected chemical feed systems set up to feed substances into the chemical system and online chemical analyzers set up to conduct the chemical analysis of the substance located in the chemical system and register the results of chemical analysis of the substance located in the chemical system. The invention is method for organizing wireless computer network in chemical system, comprising the fact that the transmission of electronic messages from one node to another node of this network is produced through communication channel of this wireless network, created in the chemical system which is organized by connecting a source computer to the chemical feed system, feeding substances into the chemical system by means of the operation of the chemical feed system in accordance with the finite sequence of settings modes of chemical feed system representing electronic message transmitted from the source computer and which is received from the source computer, and by connecting to the receiving computer an online chemical analyzer by which the chemical analysis of the substance located in the chemical system is conducted and the results of chemical analysis of the substance located in the chemical system are registered, and through which, on the receiving computer, the results of registration of the results of chemical analysis of the substance located in the chemical system are received, and the electronic message is restored from the results of registration of the results of chemical analysis of the substance located in the chemical system. 
In addition, each node of this wireless computer network is given the capability to receive electronic messages through the connected online chemical analyzer from another node of this wireless network, and to transmit electronic messages through the connected chemical feed system to another node of this wireless computer network through communication channels of this wireless computer network, in the chemical system. The technical result of this invention is that radio systems are not used in any wireless communication channel of this wireless computer network in the chemical system. This article is identical to the patent application ”Method for organizing wireless computer network in chemical system”, number 2015113357, which was published in Russian and filed at the Russian Patent Office: Federal Institute For Intellectual Property, Federal Service For Intellectual Property (Rospatent), Russian Federation.
[38] vixra:1610.0041 [pdf]
Method for Organizing Wireless Computer Network in Biological Tissue
Method for organizing wireless computer network in biological tissue. This invention relates to computer technology and biophysics, and can be used for the establishment and operation of a wireless computer network in biological tissue. The nodes of this network are computers connected to the vibration meters and vibration generators. The contact surfaces of vibration generators and vibration meters are brought into contact with the biological tissue. The invention is method for organizing wireless computer network in biological tissue, comprising the fact that the transmission of electronic messages from one node to another node of this network is produced through communication channel of this wireless network, created in the biological tissue which is organized by connecting a source computer to the vibration generator, bringing the contact surface of the vibration generator in contact with the biological tissue, creating and transferring the controlled mechanical motions to the biological tissue through the contact surface of the vibration generator by means of the operation of the vibration generator in accordance with the finite sequence of settings modes of vibration generator representing electronic message transmitted from the source computer and which is received from the source computer, and by connecting to the receiving computer a vibration meter by which the parameters of mechanical motions are registered and which are received by the vibration meter from biological tissue through the contact surface of the vibration meter which is brought into contact with the biological tissue, and through which, on the receiving computer, the results of registration of parameters of mechanical motions are received, and the electronic message is restored from the results of registration of mechanical motions parameters. 
In addition, each node of this wireless computer network is given the capability to receive electronic messages through the connected vibration meter from another node of this wireless computer network, and to transmit electronic messages through the connected vibration generator to another node of this wireless computer network through communication channels of this wireless computer network, through biological tissue. The technical result is that radio systems are not used in any wireless communication channel of this wireless computer network in the biological tissue.
[39] vixra:1609.0019 [pdf]
Recognition and Tracking Analytics for Crowd Control
We explore and apply several forms of image analysis to monitor the condition and health of a crowd. Stampedes, congestion, and traffic all occur as a result of inefficient crowd management. Our software identifies congested areas and determines solutions to avoid congestion based on live data. The data is processed by a local device fed via camera. This method was tested in simulation and proved to create a more efficient and congestion-free scenario. Future plans include depth sensing for automatic calibration and suggested courses of action.
[40] vixra:1608.0223 [pdf]
Computational Fluid Dynamic Analysis of Aircraft Wing with Assorted Flap Angles at Cruising Speed
An aircraft wing is typically manufactured from composite materials, with the fibres of each ply aligned in multiple directions. Different airfoil thicknesses and layer orientations were considered to study the effect of bending-torsion coupling. Such laminates are usually designed by varying the layers, the stacking sequence, and the geometrical and mechanical properties; a finite number of layers can be combined to form many laminates. The wing loading due to self-weight, the weight of the propulsion systems, and gravity was considered, and the resulting deflection can be found; this is studied by aeroelasticity. The aircraft wing is severely affected by loads along the spanwise and vertical directions. The NACA 2412 airfoil was taken for designing the wing, and it was scaled along a calculated wingspan to obtain the wing model. FLUENT and CFX were used for computational fluid dynamic analysis to determine the lift and drag of the wing with zero-degree and angled flaps. By this we intend to show how fast-retracting flaps affect the drag and lift of an aircraft at cruising speed.
[41] vixra:1607.0107 [pdf]
Analog Computer Understanding of Hamiltonian Paths, and a Possible Digitization
This paper explores finding the existence of undirected Hamiltonian paths in a graph using lumped/ideal circuits, specifically low-pass filters. While other alternatives are possible, a first-order RC low-pass filter is chosen to describe the process. The paper proposes a way of obtaining the time complexity for counting the number of Hamiltonian paths in a graph, and then shows that the time complexity of the circuits is around $O(n \log n)$ where $n$ is the number of vertices in the graph. Because analog computation is often undesirable due to several aspects, a possible digitization scheme is also proposed.
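For context, the quantity the circuit estimates can be pinned down with a brute-force digital count of undirected Hamiltonian paths; this exhaustive baseline is illustrative only and has factorial cost, unlike the circuit scheme proposed in the paper:

```python
from itertools import permutations

def count_hamiltonian_paths(n, edges):
    """Count undirected Hamiltonian paths by exhaustive enumeration."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    count = sum(
        all(adj[p[i]][p[i + 1]] for i in range(n - 1))
        for p in permutations(range(n))
    )
    return count // 2  # each undirected path was counted in both directions
```

For example, a triangle graph has three distinct Hamiltonian paths, one per pair of endpoints.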
[42] vixra:1605.0212 [pdf]
Electromagnetic Force Modification in Fault Current Limiters under Short-Circuit Condition Using Distributed Winding Configuration
The electromagnetic forces caused by short-circuits, consisting of radial and axial forces, impose mechanical damage and failures on the windings. Engineers have tried to decrease these forces using different techniques and innovations. Utilization of various kinds of winding arrangements is one of these methods, which enables transformers and fault current limiters to tolerate higher forces without a substantial increase in construction and fabrication costs. In this paper, a distributed winding arrangement is investigated in terms of axial and radial forces during short-circuit conditions in a three-phase FCL. To calculate the force magnitudes of the AC- and DC-supplied windings, a model based on the finite element method with a time-stepping procedure is employed. The three-phase AC- and DC-supplied windings are split into multiple sections for more accuracy in calculating the forces. The simulation results are compared with a conventional winding arrangement in terms of leakage flux and radial and axial force magnitudes. The comparisons show that the distributed winding arrangement mitigates radial and especially axial force magnitudes significantly.
[43] vixra:1601.0184 [pdf]
Closed Loop Current Control of Three Phase Photovoltaic Grid Connected System
The paper presents a closed-loop current control technique for three-phase grid-connected systems with a renewable energy source. The proposal optimizes the system design, permitting a reduction of system losses and harmonics for the three-phase grid-connected system. The performance of the proposed controller for a grid-connected PV array with a DC-DC converter and multilevel inverter is evaluated through MATLAB simulation. The results obtained with the proposed method are compared with those obtained without a current controller for a three-phase photovoltaic multilevel inverter, in terms of THD and switching frequency. Experimental work was carried out with the PV module WAREE WS 100, which has a power rating of 10 W, a 17 V output voltage and 1000 W/m² irradiance. The test results show that the proposed design exhibits good performance.
[44] vixra:1508.0099 [pdf]
Encrypted Transmission of a PGP Public Key to Destinations
To protect your private information, you may use a data encryption and decryption computer program like PGP. But for an espionage agency, even a PGP public key is not completely unbreakable. So you may prefer to encipher the public key before you send it to the destination; deciphering the contents would then probably become an impossible goal for Internet fraud operatives.
[45] vixra:1507.0028 [pdf]
Pathchecker: an RFID Application for Tracing Products in Supply-Chains
In this paper, we present an application of RFIDs for supply-chain management. In our application, we consider two types of readers. On one hand, we have readers that mark tags at given points. After that, these tags can be checked by another type of reader to tell whether a tag has followed the correct path in the chain. We formalize this notion and define adequate adversaries. Moreover, we derive requirements in order to meet security against counterfeiting, cloning and impersonation attacks.
[46] vixra:1504.0188 [pdf]
Erratum: Single and Cross-Channel Nonlinear Interference in the Gaussian Noise Model with Rectangular Spectra
We correct a typo in the key equation (20) of reference [Opt. Express 21(26), 32254–32268 (2013)], which gives an upper bound on the cross-channel interference nonlinear coefficient in coherent optical links for which the Gaussian Noise model applies.
[47] vixra:1504.0112 [pdf]
A Segmented DAC based Sigma-Delta ADC by Employing DWA
Data weighted averaging (DWA) algorithms work well for relatively low quantization levels, but begin to present significant problems when the internal quantization is extended further. Each additional bit of internal quantization causes an exponential increase in the complexity, size, and power dissipation of the DWA logic and DAC. This is because DWA algorithms work with unit-element DACs: the DAC must have 2^N − 1 elements (where N is the number of bits of internal quantization), and the DWA logic must deal with the control signals feeding those 2^N − 1 unit elements. This paper discusses the prospect of using a segmented feedback path with coarse and fine signals to reduce DWA complexity for modulators with large internal quantizers. However, segmentation also creates additional problems. A mathematical analysis of the problems involved with segmenting the digital word in a sigma-delta ADC feedback path is presented, along with a potential solution that frequency-shapes the resulting mismatch error. A circuit design for the frequency-shaping method is presented in detail, together with mathematical analysis and behavioral simulation results.
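For reference, the element-selection core of a conventional (unsegmented) DWA scheme is just a pointer that rotates through the unit elements; a minimal sketch, with naming and interface chosen for illustration:

```python
def dwa_select(code, pointer, n_elements):
    """Data Weighted Averaging: pick `code` unit elements starting at the
    rotating pointer, then advance the pointer past them."""
    selected = [(pointer + i) % n_elements for i in range(code)]
    next_pointer = (pointer + code) % n_elements
    return selected, next_pointer
```

Because every element is reused only after all the others, element mismatch averages out quickly and its error spectrum is pushed toward high frequencies (first-order shaped).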
[48] vixra:1504.0111 [pdf]
Analysis Bio-Potential to Ascertain Movements for Prosthetic Arm with Different Weights Using Labview
Prosthetics is a branch of biomedical engineering that deals with replacing missing human body parts with artificial ones. An SEMG-powered prosthetic requires SEMG signals; SEMG is a common method of measuring muscle activity. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. In the present work, SEMG signals are studied at different locations (below the elbow and at the biceps brachii muscle) for two hand operations: gripping different weights and lifting different weights. SEMG signals are extracted using a single-channel SEMG amplifier, and Biokit Datascope is used to acquire the signals from the hardware. After acquiring the data from the two selected locations, the parameters of the SEMG signal are estimated using LabVIEW 2012 (evaluation copy). Grip/lift operations are interpreted using time-domain features such as the root mean square (RMS) value, zero-crossing rate, mean absolute value, and integrated value of the EMG signal. For this study, 30 university students (12 female and 18 male) served as subjects, which will be very helpful for research into the behavior of SEMG for the development of a prosthetic hand.
[49] vixra:1504.0110 [pdf]
Design and Control of Grid Interfaced Voltage Source Inverter with Output LCL Filter
This paper presents the design and analysis of an LCL-based voltage source converter used for delivering power from a distributed generation source to the power utility and a local load. The LCL filter at the output of the converter is designed analytically, and its different transfer functions are obtained to assess the elimination of any possible parallel resonance in the power system. The power converter uses a controller system to work in two modes of operation, stand-alone and grid-connected, with seamless transfer between these two modes. Furthermore, a fast semiconductor-based protection system is designed for the power converter. The performance of the designed grid-interface converter is evaluated using an 85 kVA industrial setup.
[50] vixra:1504.0109 [pdf]
FF Algorithm for Design of SSSC-Based Facts Controller
Power-system stability improvement by a static synchronous series compensator (SSSC)-based damping controller, considering dynamic power system loads, is thoroughly investigated in this paper. Only a remote input signal is used as input to the SSSC-based controller. For the controller design, the Firefly algorithm is used to find the optimal controller parameters. To check the robustness and effectiveness of the proposed controller, the system is subjected to various disturbances for both a single-machine infinite-bus power system and a multi-machine power system. A detailed analysis regarding dynamic loads is carried out, taking practical power system loads into consideration. Simulation results are presented.
[51] vixra:1504.0106 [pdf]
Analysis of Histogram Based Shot Segmentation Techniques for Video Summarization
Content-based video indexing and retrieval has its foundations in the analysis of the primary temporal structures of video. Thus, technologies for video segmentation have become important for the development of such digital video systems. Dividing a video sequence into shots is the first step towards video content analysis (VCA) and content-based video browsing and retrieval. This paper presents an analysis of histogram-based techniques on compressed video features. A Graphical User Interface is also designed in MATLAB to demonstrate the performance using common performance parameters such as precision, recall and F1.
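A minimal histogram-difference shot-cut detector of the kind analyzed here can be sketched as follows; the bin count, the normalized L1 distance, and the fixed threshold are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def detect_shot_boundaries(frames, n_bins=16, threshold=0.5):
    """Flag frame i as a cut when the normalized grey-level histogram
    distance to frame i-1 exceeds `threshold` (0 = identical, 1 = disjoint)."""
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=n_bins, range=(0, 256))
        hist = hist / hist.sum()
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > threshold:
            cuts.append(i)
        prev = hist
    return cuts
```

Histograms discard spatial layout, which makes this measure robust to motion within a shot but sensitive to abrupt changes in overall intensity distribution, i.e., hard cuts.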
[52] vixra:1504.0105 [pdf]
Sliding Mode based D.C.Motor Position Control using Multirate Output Feed back Approach
The paper presents discrete-time sliding mode position control of a DC motor using MROF. A discrete state-space model is obtained from the continuous-time system of the DC motor. Discrete state variables and control inputs are used for the sliding mode controller design using the Multirate Output Feedback (MROF) approach with fast output sampling. In this system, the output is sampled at a faster rate than the control input. This approach does not use the present output or input. In this paper, simulations are carried out for position control of a separately excited DC motor.
[53] vixra:1502.0079 [pdf]
Channel Access-Aware User Association with Interference Coordination in Two-Tier Downlink Cellular Networks
The diverse transmit powers of the base stations (BSs) in a multi-tier cellular network lead, on one hand, to an uneven distribution of traffic loads among different BSs when received signal power (RSP)-based user association is used, causing underutilization of the resources at low-power BSs. On the other hand, strong interference from high-power BSs affects the downlink transmissions to users associated with low-power BSs. In this context, this paper proposes a channel access-aware (CAA) user association scheme that can simultaneously enhance the spectral efficiency (SE) of downlink transmission and achieve traffic load balancing among different BSs. The CAA scheme is a network-assisted user association scheme that requires traffic load information from different BSs in addition to channel quality indicators. We develop a tractable analytical framework to derive the SE of downlink transmission to a user who associates with a BS using the proposed CAA scheme. To mitigate strong interference, almost blank subframe (ABS)-based interference coordination is exploited first in the macrocell tier and then in the smallcell tier. The performance of the proposed CAA scheme is analyzed in the presence of these two interference coordination methods. The derived expressions provide approximate solutions of reasonable accuracy compared to results obtained from Monte-Carlo simulations. Numerical results comparatively analyze the gains of the CAA scheme over conventional RSP-based association and biased RSP-based association, with and without interference coordination. The results also reveal insights regarding the selection of the proportion of ABSs in the macrocell/smallcell tiers for various network scenarios.
[54] vixra:1412.0277 [pdf]
Analysis of Histogram Based Shot Segmentation Techniques for Video Summarization
Content-based video indexing and retrieval has its foundations in the analysis of the primary temporal structures of video. Thus, technologies for video segmentation have become important for the development of such digital video systems. Dividing a video sequence into shots is the first step towards video content analysis (VCA) and content-based video browsing and retrieval. This paper presents an analysis of histogram-based techniques on compressed video features. A Graphical User Interface is also designed in MATLAB to demonstrate the performance using common performance parameters such as precision, recall and F1.
[55] vixra:1410.0035 [pdf]
Efficient Linear Fusion of Partial Estimators
Many signal processing applications require performing statistical inference on large datasets, where computational and/or memory restrictions become an issue. In this big data setting, computing an exact global centralized estimator is often either unfeasible or impractical. Hence, several authors have considered distributed inference approaches, where the data are divided among multiple workers (cores, machines or a combination of both). The computations are then performed in parallel and the resulting partial estimators are finally combined to approximate the intractable global estimator. In this paper, we focus on the scenario where no communication exists among the workers, deriving efficient linear fusion rules for the combination of the distributed estimators. Both a constrained optimization perspective and a Bayesian approach (based on the Bernstein-von Mises theorem and the asymptotic normality of the estimators) are provided for the derivation of the proposed linear fusion rules. We concentrate on finding the minimum mean squared error (MMSE) global estimator, but the developed framework is very general and can be used to combine any type of unbiased partial estimators (not necessarily MMSE partial estimators). Numerical results show the good performance of the algorithms developed, both in problems where analytical expressions can be obtained for the partial estimators, and in a wireless sensor network localization problem where Monte Carlo methods are used to approximate the partial estimators.
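In the simplest special case (scalar, independent, unbiased partial estimators with known variances), the optimal linear fusion rule reduces to inverse-variance weighting; a sketch of that special case only, not the paper's full framework:

```python
import numpy as np

def fuse_estimators(estimates, variances):
    """Combine unbiased partial estimators with weights proportional to
    1/variance; this minimizes the variance of the fused estimator."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return float(np.dot(w, estimates))
```

Less reliable workers (larger variance) are automatically down-weighted, while equal-variance workers reduce to a plain average.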
[56] vixra:1409.0129 [pdf]
Sparse Representations and Its Applications in Digital Communication
Sparse representations are representations that account for most or all of the information in a signal as a linear combination of a small number of elementary signals called atoms. Often, the atoms are chosen from a so-called over-complete dictionary. Formally, an over-complete dictionary is a collection of atoms such that the number of atoms exceeds the dimension of the signal space, so that any signal can be represented by more than one combination of different atoms. Sparseness is one of the reasons for the extensive use of popular transforms such as the Discrete Fourier Transform, the wavelet transform and the Singular Value Decomposition. The aim of these transforms is often to reveal certain structures of a signal and to represent these structures in a compact and sparse form. Sparse representations have therefore increasingly become recognized as providing extremely high performance for applications as diverse as noise reduction, compression, feature extraction, pattern classification and blind source separation. Sparse representation ideas also form the foundations of wavelet denoising and of methods in pattern classification, such as the Support Vector Machine and the Relevance Vector Machine, where sparsity can be directly related to the learnability of an estimator. The technique of finding a representation with a small number of significant coefficients is often referred to as Sparse Coding. Decoding merely requires the summation of the relevant atoms, appropriately weighted. However, unlike a transform coder with its invertible transform, the generation of a sparse representation with an over-complete dictionary is non-trivial. Indeed, the general problem of finding a representation with the smallest number of atoms from an arbitrary dictionary has been shown to be NP-hard. This has led to considerable effort being put into the development of many sub-optimal schemes.
These include algorithms that iteratively build up the signal approximation one coefficient at a time, e.g. Matching Pursuit, Orthogonal Matching Pursuit, and those that process all the coefficients simultaneously, e.g. Basis Pursuit, Basis Pursuit De-Noising and the Focal Underdetermined System Solver family of algorithms.
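One of the greedy schemes named above, Orthogonal Matching Pursuit, fits in a few lines; this sketch assumes the dictionary columns (atoms) are reasonably normalized and `k` is the target sparsity:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select the atom most
    correlated with the residual, re-fit all selected atoms by least
    squares, and update the residual."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

The least-squares re-fit at every step is what distinguishes OMP from plain Matching Pursuit, which only subtracts the newly chosen atom's projection.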
[57] vixra:1403.0580 [pdf]
Direct Processing of Run-Length Compressed Document Image for Segmentation and Characterization of a Specified Block
Extracting a block of interest, referred to as segmenting a specified block in an image, and studying its characteristics is of general research interest, and can be challenging if such a segmentation task has to be carried out directly in a compressed image. This is the objective of the present research work. The proposal is to evolve a method which would segment and extract a specified block, and carry out its characterization, without decompressing a compressed image, for two major reasons: most image archives contain images in compressed format, and decompressing an image demands additional computing time and space. Specifically, in this research work the proposal is to work on run-length compressed document images.
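The idea of operating on runs instead of pixels can be illustrated with a toy routine that extracts a horizontal span of a row directly from its run-length code; the representation and names are illustrative, not the paper's actual procedure:

```python
def rle_encode_row(row):
    """Run-length encode one row as (value, length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def extract_span(runs, start, end):
    """Extract pixels [start, end) of a row directly from its runs,
    without decompressing the whole row."""
    out, pos = [], 0
    for v, length in runs:
        lo, hi = max(start, pos), min(end, pos + length)
        if hi > lo:
            out.extend([v] * (hi - lo))
        pos += length
        if pos >= end:  # the requested span is fully covered
            break
    return out
```

Only the runs overlapping the requested span are touched, which is the source of the time and space savings claimed for compressed-domain processing.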
[58] vixra:1306.0144 [pdf]
Physical-Layer Encryption on the Public Internet: a Stochastic Approach to the Kish-Sethuraman Cipher
While information-theoretic security is often associated with the one-time pad and quantum key distribution, noisy transport media leave room for classical techniques and even covert operation. Transit times across the public internet exhibit a degree of randomness, and cannot be determined noiselessly by an eavesdropper. We demonstrate the use of these measurements for information-theoretically secure communication over the public internet.
[59] vixra:1301.0120 [pdf]
Preliminary Study in Healthy Subjects of Arm Movement Speed
Many clinical studies have shown that the arm movement of patients with neurological injury is often slow. In this paper, a speed analysis of arm movement is presented, with the aim of evaluating arm movement automatically using a Kinect camera. The consideration of arm movement appears trivial at first glance, but in reality it is a very complex neural and biomechanical process that can potentially be used for detecting a neurological disorder. This is a preliminary study, on healthy subjects, which investigates three different arm-movement speeds: fast, medium and slow. With a sample size of 27 subjects, our developed algorithm is able to classify the three different speed classes (slow, normal, and fast) with an overall error of 5.43% for interclass speed classification and 0.49% for intraclass classification. This is the first step towards enabling future studies that investigate abnormality in arm movement via use of a Kinect camera.
[60] vixra:1301.0058 [pdf]
Revisiting QRS Detection Methodologies for Portable, Wearable, Battery-Operated, and Wireless ECG Systems
Cardiovascular diseases are the number one cause of death worldwide. Currently, portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used in continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help unveil and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited for an implementation on a mobile device. We investigate current QRS detection algorithms based on three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, in order to target a universal fast-robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or amid particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by future recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices.
[61] vixra:1301.0057 [pdf]
Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases
The current state-of-the-art in automatic QRS detection shows high robustness and almost negligible error rates. In return, the methods are usually based on machine-learning approaches that require sufficient computational resources. However, simple, fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. A typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design.
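The two-moving-averages idea can be sketched as follows. The window lengths and threshold offset below are illustrative placeholders, not the knowledge-base-calibrated parameters of the paper: a short average tracks QRS-width energy, a long average tracks beat-width energy, and regions where the first exceeds the second are candidate QRS blocks.

```python
import numpy as np

def detect_qrs(ecg, fs, w1=0.097, w2=0.611, beta=0.08):
    """Toy two-moving-average QRS detector; window sizes (seconds) and
    the offset beta are illustrative, not the paper's calibrated values."""
    sig = ecg - np.mean(ecg)
    squared = np.clip(sig, 0, None) ** 2        # emphasise R peaks
    n1 = max(1, int(w1 * fs))                   # QRS-width window
    n2 = max(1, int(w2 * fs))                   # beat-width window
    ma_peak = np.convolve(squared, np.ones(n1) / n1, mode="same")
    ma_beat = np.convolve(squared, np.ones(n2) / n2, mode="same")
    blocks = ma_peak > ma_beat + beta * np.mean(squared)
    # take the maximum of the squared signal inside each block as the R peak
    peaks, i = [], 0
    while i < len(blocks):
        if blocks[i]:
            j = i
            while j < len(blocks) and blocks[j]:
                j += 1
            peaks.append(i + int(np.argmax(squared[i:j])))
            i = j
        else:
            i += 1
    return peaks
```

Because the detector is just two FIR averaging filters plus a comparison, it maps naturally onto the digital-filter implementation the abstract mentions.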
[62] vixra:1301.0056 [pdf]
Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves
Background: There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Methods: Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). Results: The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). Conclusions: We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design.
[63] vixra:1301.0055 [pdf]
Can Heart Rate Variability (HRV) be Determined Using Short-Term Photoplethysmograms?
To date, there have been no studies that investigate the independent use of the photoplethysmogram (PPG) signal to determine heart rate variability (HRV). However, researchers have demonstrated that PPG signals offer an alternative way of measuring HRV when electrocardiogram (ECG) and PPG signals are collected simultaneously. Based on these findings, we take the use of PPGs to the next step and investigate a different approach to show the potential independent use of short 20-second PPG signals collected from healthy subjects after exercise in a hot environment to measure HRV. Our hypothesis is that if the PPG–HRV indices are negatively correlated with age, then short PPG signals are appropriate measurements for extracting HRV parameters. The PPGs of 27 healthy male volunteers at rest and after exercise were used to determine the HRV indices: standard deviation of heartbeat interval (SDNN) and the root-mean square of the difference of successive heartbeats (RMSSD). The results indicate that the use of the aa interval, derived from the acceleration of PPG signals, is promising in determining the HRV statistical indices SDNN and RMSSD over 20-second PPG recordings. Moreover, the post-exercise SDNN index shows a negative correlation with age. There tends to be a decrease of the PPG–SDNN index with increasing age, whether at rest or after exercise. This new outcome validates the negative relationship between HRV in general with age, and consequently provides further evidence that short PPG signals have the potential to be used in heart rate analysis without the need to measure lengthy sequences of either ECG or PPG signals.
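The two HRV indices named above have standard definitions and can be computed directly from a short sequence of beat-to-beat intervals. This sketch assumes intervals in milliseconds; in the paper these would be aa intervals derived from the APG rather than ECG RR intervals.

```python
import numpy as np

def sdnn(intervals_ms):
    """Standard deviation of beat-to-beat intervals (SDNN), sample std."""
    return float(np.std(intervals_ms, ddof=1))

def rmssd(intervals_ms):
    """Root mean square of successive interval differences (RMSSD)."""
    diffs = np.diff(intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# A 20-second recording at ~75 bpm yields roughly 25 intervals;
# five are shown here for brevity.
rr = np.array([800, 810, 790, 805, 795], dtype=float)  # ms
print(round(sdnn(rr), 2), round(rmssd(rr), 2))  # 7.91 14.36
```

With only ~25 intervals in a 20-second window, both indices remain well defined, which is what makes the short-recording hypothesis testable.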
[64] vixra:1301.0054 [pdf]
Detection of c, d, and e Waves in the Acceleration Photoplethysmogram
Analyzing the acceleration photoplethysmogram (APG) is becoming increasingly important for diagnosis. However, processing an APG signal is challenging, especially if the goal is to detect its small components (c, d, and e waves). Accurate detection of c, d, and e waves is an important first step for any clinical analysis of APG signals. In this paper, a novel algorithm was developed that can detect c, d, and e waves simultaneously in APG signals of healthy subjects that have low amplitude waves, contain fast rhythm heart beats, and suffer from non-stationary effects. The performance of the proposed method was tested on 27 records collected during rest, resulting in 97.39% sensitivity and 99.82% positive predictivity.
[65] vixra:1301.0053 [pdf]
Detection of a and b Waves in the Acceleration Photoplethysmogram
Background: Analyzing acceleration photoplethysmogram (APG) signals measured after exercise is challenging. In this paper, a novel algorithm that can detect a waves and consequently b waves under these conditions is proposed. Accurate a and b wave detection is an important first step for the assessment of arterial stiffness and other cardiovascular parameters. Methods: Nine algorithms based on fixed thresholding are compared, and a new algorithm is introduced to improve the detection rate using a testing set of heat-stressed APG signals containing a total of 1,540 heart beats. Results: The new a detection algorithm demonstrates the highest overall detection accuracy (99.78% sensitivity, 100% positive predictivity) over signals that suffer from 1) non-stationary effects, 2) irregular heartbeats, and 3) low amplitude waves. In addition, the proposed b detection algorithm achieved an overall sensitivity of 99.78% and a positive predictivity of 99.95%. Conclusions: The proposed algorithm presents an advantage for real-time applications by avoiding human intervention in threshold determination.
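The fixed-thresholding baseline that the paper compares against can be sketched as a simple peak picker: local maxima above a fixed amplitude threshold, separated by a refractory period. The threshold and refractory values below are illustrative, not the paper's; the paper's contribution is precisely to remove the manual threshold choice.

```python
import numpy as np

def detect_a_waves(apg, fs, threshold, refractory_s=0.3):
    """Naive fixed-threshold a-wave picker: local maxima above `threshold`
    separated by at least `refractory_s` seconds (illustrative values)."""
    min_gap = int(refractory_s * fs)
    peaks = []
    for i in range(1, len(apg) - 1):
        # local maximum test plus the fixed amplitude threshold
        if apg[i] >= threshold and apg[i] >= apg[i - 1] and apg[i] > apg[i + 1]:
            # enforce the refractory period between accepted peaks
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks
```

The weakness this exposes is visible immediately: a threshold tuned for resting signals misses low-amplitude post-exercise a waves, which motivates the adaptive scheme proposed in the paper.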
[66] vixra:1212.0087 [pdf]
The Double-Padlock Problem: is Secure Classical Information Transmission Possible Without Key Exchange?
The idealized Kish-Sethuraman (KS) cipher is theoretically known to offer perfect security through a classical information channel. However, realization of the protocol is hitherto an open problem, as the required mathematical operators have not been identified in the previous literature. A mechanical analogy of this protocol is sending a message in a box secured by two padlocks: one locked by the sender and the other locked by the receiver, so that theoretically the message remains secure at all times. We seek a mathematical representation of this process, considering that it would be very unusual if there were a physical process with no mathematical description, and indeed we find a solution within a four-dimensional Clifford algebra. The significance of finding a mathematical description of the protocol is that it is a possible step toward a physical realization, with the benefits of increased security and reduced complexity.
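A classical number-theoretic illustration of the padlock analogy (distinct from the paper's Clifford-algebra construction) is Shamir's three-pass protocol, in which commuting exponentiation "locks" modulo a shared prime play the role of the two padlocks. The prime and message below are toy values.

```python
# Shamir's three-pass protocol as a realisation of the padlock analogy:
# each party applies, and later removes, a commuting exponentiation lock
# modulo a shared prime p. Toy parameters for illustration only.
from math import gcd
import random

p = 2**13 - 1  # 8191, a small prime

def make_lock(p):
    """Pick a locking exponent e with gcd(e, p-1) = 1 and its inverse d."""
    while True:
        e = random.randrange(3, p - 1)
        if gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)  # d unlocks: e*d = 1 mod p-1

m = 1234                      # message, 0 < m < p
ea, da = make_lock(p)         # sender's padlock
eb, db = make_lock(p)         # receiver's padlock

x1 = pow(m, ea, p)            # pass 1: sender locks the box
x2 = pow(x1, eb, p)           # pass 2: receiver adds a second lock
x3 = pow(x2, da, p)           # pass 3: sender removes own lock (locks commute)
recovered = pow(x3, db, p)    # receiver removes the last lock
print(recovered == m)         # True
```

Note that, unlike the idealized KS cipher, this discrete-log-based variant offers only computational rather than information-theoretic security, which is exactly the gap the paper's search for new operators addresses.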
[67] vixra:1208.0149 [pdf]
Access Control for Healthcare Data Using Extended XACML-SRBAC Model
In the modern health service, data are accessed by doctors and nurses using mobile phones, personal digital assistants, and other electronic handheld devices. An individual's health-related information is normally stored in a central health repository and can be accessed only by authorized doctors. However, this data is exposed to a number of mobile attacks while being accessed. This paper proposes a framework that uses XACML and XML security to support secure, embedded, and fine-grained access control policies governing the privacy and access of health service data accessed through handheld devices. We also consider one such model, Spatial Role-Based Access Control (SRBAC), and model it using XACML.
[68] vixra:1208.0082 [pdf]
A Cryptosystem for XML Documents
In this paper, we propose a cryptosystem (encryption/decryption) for XML data using the Vigenère cipher and the ElGamal cryptosystem. The system is designed to achieve several security goals: confidentiality, authentication, integrity, and non-repudiation. We use XML data in our experimental work. Since the Vigenère cipher is polyalphabetic, the number of possible keywords of length m is 26^m, so even for relatively small values of m an exhaustive key search would require a long time.
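The exhaustive-search bound follows directly from counting keywords over a 26-letter alphabet. A minimal Vigenère sketch (uppercase A–Z only; the keyword is an arbitrary example, not one from the paper) makes the shift arithmetic concrete:

```python
# Minimal Vigenere over A-Z; keyword "KEY" is an arbitrary example.
def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        # each plaintext letter is shifted by the matching key letter
        shift = sign * (ord(key[i % len(key)]) - ord('A'))
        out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
    return ''.join(out)

ct = vigenere("XMLSECURITY", "KEY")
print(ct, vigenere(ct, "KEY", decrypt=True))

# keyspace for a keyword of length m = 7:
print(26 ** 7)  # 8031810176 possible keywords
```

Even at m = 7 the keyspace exceeds 8 billion keywords, and it multiplies by 26 for every additional key letter, which is the brute-force argument the abstract relies on.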
[69] vixra:1208.0080 [pdf]
Storing XML Documents and XML Policies in Relational Databases
In this paper, we explore how to support security models for XML documents using relational databases. Our model is based on the model in [6], but we use our own algorithm to store the XML documents in relational databases.