
2168 Communication, Networking and Broadcast Technologies Preprints

Please note: These are preprints and have not been peer reviewed. Data may be preliminary.
Parallel Computation of Shallow Water Flows Using Hybrid MPI/OpenACC
Syngman Rhee

February 06, 2024
A parallel shallow water flow model is introduced in this paper. An explicit-time finite volume approach is adopted to solve the 2D shallow water equations on an unstructured triangular mesh. The scheme is second-order accurate in time and space, using the two-stage Runge-Kutta method and the monotone upwind scheme for conservation laws (MUSCL), respectively. A multi-GPU model based on the Message Passing Interface (MPI) and OpenACC is presented, with the METIS library used to produce the domain decomposition. A CUDA-aware MPI library with GPUDirect peer-to-peer (P2P) transfers between GPUs, together with overlapping of computation and MPI communication, is used to speed up memory exchange and improve the performance of the code. A 2D circular dam-break test with wet and dry downstream beds and a grid resolution of about 2 million cells is considered to verify the accuracy of the code, and good agreement was achieved with numerical simulations from published studies. Compared with a multi-CPU version running on a 6-core CPU, maximum speedups of 56.18 and 331.51 were obtained with the single-GPU and multi-GPU versions, respectively. Results indicate that acceleration performance improves as the mesh resolution increases.
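
The heart of the multi-GPU strategy is exchanging ghost (halo) cells between partitions while useful work proceeds. The sketch below shows that overlap pattern with mpi4py on a toy 1D decomposition; the paper itself works on METIS partitions of an unstructured mesh with CUDA-aware MPI, so the sizes and the update stencil here are illustrative assumptions only.

```python
# Overlapped halo exchange (mpi4py): post non-blocking sends/receives, update
# the interior while messages are in flight, then finish the boundary cells.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1024                        # local cells per rank (illustrative)
u = np.random.rand(n + 2)       # state with one ghost cell on each side
left, right = (rank - 1) % size, (rank + 1) % size

reqs = [comm.Isend(u[1:2],   dest=left,    tag=0),
        comm.Isend(u[-2:-1], dest=right,   tag=1),
        comm.Irecv(u[-1:],   source=right, tag=0),
        comm.Irecv(u[0:1],   source=left,  tag=1)]

u[2:-2] -= 0.1 * (u[2:-2] - u[1:-3])    # interior update (toy stand-in for FV)

MPI.Request.Waitall(reqs)               # halos arrived; finish the boundaries
u[1] -= 0.1 * (u[1] - u[0])
u[-2] -= 0.1 * (u[-2] - u[-3])
```
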
Hypergraph-based Modelling for Coexistence-Aware Channel Allocation
Tawachi Nyasulu

and 3 more

January 31, 2024
In this paper, a novel technique for modelling a radio frequency (RF) environment based on hypergraph theory is investigated for managing the coexistence of heterogeneous networks and efficiently allocating channels for spectrum sharing. Conventionally, traditional graph theory is used to model interference relationships and exclusive channel allocation. The demand for wireless services is increasing, hence the need for efficient spectrum management techniques, such as spectrum sharing among coexisting networks. Coexistence-aware channel allocation, however, requires representing both interference and spectral coexistence relationships in the RF environment model, and a graph cannot represent multiple and multifaceted relationships without violating consistency with graph theory. This paper therefore proposes representing the RF environment using a hypergraph. The network coexistence method used is an implementation of the IEEE 802.19.1 method for co-sharing via network geometry classification. The simulation results show that the hypergraph-based model allocated channels to, on average, up to 8% more networks than the graph-based model. The results also show that, for the same RF environment, the hypergraph model requires up to 36% fewer channels than the graph model to achieve an average of 100% operational networks. The running time of the hypergraph-based algorithm grows quadratically with the input size, like that of the graph-based algorithm.
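
The gain of the hypergraph view is that one hyperedge can group several networks that may share a channel, while pairwise interference still forces distinct channels. A toy greedy allocator over such hyperedges is sketched below; it is only a minimal illustration of the modelling idea, not the paper's IEEE 802.19.1-based coexistence classifier, and the sets are invented.

```python
# Toy coexistence-aware allocation on a hypergraph: each hyperedge groups
# networks that can share one channel; interfering hyperedges must differ.
coexist_sets = [{"A", "B"}, {"C"}, {"D", "E"}]        # hyperedges
interference = {("A", "C"), ("B", "D")}               # conflicting pairs

def conflicts(set1, set2):
    return any((x, y) in interference or (y, x) in interference
               for x in set1 for y in set2)

channels = {}                      # hyperedge index -> channel number
for i, he in enumerate(coexist_sets):
    used = {channels[j] for j, other in enumerate(coexist_sets)
            if j in channels and conflicts(he, other)}
    channels[i] = min(c for c in range(len(coexist_sets)) if c not in used)

print(channels)   # {0: 0, 1: 1, 2: 1} -- coexisting networks share channels
```
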
Robust Virtual Network Function Optimization Under Post-Failure Uncertainty: Classica...
Mahzabeen Emu

and 2 more

January 29, 2024
Service Function Chaining (SFC) failures are rare events that come with escalating, unexpected costs. It is an overly simplistic assumption that the network conditions affecting the cost of redeployment will remain unchanged while a Virtual Network Function (VNF) recovery instance is installed. In this paper, we propose a deterministic optimization model using traditional Integer Linear Programming (ILP) that manages resource allocation for pre- and post-failure SFC deployment. We then design a robust optimization model that accounts for the uncertainty of the redeployment costs. Using the strong duality theorem, we derive the dual formulation of the robust optimization model to reduce computational complexity. Further along this line, we propose a quantum annealing-driven quadratic unconstrained binary optimization (QUBO) model that demonstrates inherent robustness even without explicitly considering the uncertainty bounds of SFC redeployment costs. Extensive simulation studies demonstrate the superiority of robust solutions over deterministic approaches and explore the potential strengths of quantum annealing in terms of intrinsic resiliency. Although quantum computing is not yet ready to solve large-scale SFC deployment problems, it can support VNF services that demand ultra-low running time and real-time decision-making.
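
For readers unfamiliar with the QUBO formulation mentioned above, the toy below encodes a single-VNF placement as a quadratic objective over binary variables, with a penalty term enforcing exactly one placement. Brute force stands in for the quantum annealer, and the costs are made up; the paper's multi-VNF model is far richer.

```python
# Toy QUBO for single-VNF placement: x_i = 1 if the VNF runs on node i.
# Objective: sum_i cost_i x_i + P * (sum_i x_i - 1)^2, constants dropped.
import itertools
import numpy as np

cost = np.array([3.0, 1.0, 2.5])      # illustrative (re)deployment costs
P = 10.0                              # penalty weight for "exactly one node"

# Diagonal picks up cost_i - P; off-diagonals 2P enforce mutual exclusion.
Q = np.diag(cost - P) + P * np.triu(2 * np.ones((3, 3)), k=1)

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)   # (0, 1, 0): the cheapest node wins
```
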
Sandwiched Compression: Repurposing Standard Codecs with Neural Network Wrappers
Onur G. Guleryuz

and 8 more

February 12, 2024
We propose sandwiching standard image and video codecs between pre- and post-processing neural networks. The networks are jointly trained through a differentiable codec proxy to minimize a given rate-distortion loss. This sandwich architecture not only improves the standard codec's performance on its intended content, but can also effectively adapt the codec to other types of image/video content and to other distortion measures. Essentially, the sandwich learns to transmit "neural code images" that optimize overall rate-distortion performance even when the overall problem is well outside the scope of the codec's design. Through a variety of examples, we apply the sandwich architecture to sources with different numbers of channels, higher resolution, higher dynamic range, and perceptual distortion measures. The results demonstrate substantial improvements (up to 9 dB gains or up to 30% bitrate reductions) compared to alternative adaptations. We derive VQ equivalents for the sandwich, establish optimality properties, and design differentiable codec proxies approximating current standard codecs. We further analyze model complexity, visual quality under perceptual metrics, and sandwich configurations that offer interesting potential in image/video compression and streaming.
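
A minimal sketch of one sandwich training step is given below, with a straight-through uniform quantizer standing in for the paper's learned codec proxies and single convolutions standing in for the pre/post-processors; the rate surrogate and all sizes are assumptions, not the authors' design.

```python
# One "sandwich" training step: pre-net -> differentiable codec proxy ->
# post-net, trained end to end on a rate-distortion loss.
import torch
import torch.nn as nn

pre  = nn.Conv2d(3, 3, 3, padding=1)   # toy pre-processing network
post = nn.Conv2d(3, 3, 3, padding=1)   # toy post-processing network
opt  = torch.optim.Adam(list(pre.parameters()) + list(post.parameters()), 1e-4)

def proxy(z, step=0.1):
    # Straight-through quantization: hard rounding forward, identity backward.
    return z + (torch.round(z / step) * step - z).detach()

x = torch.rand(8, 3, 64, 64)            # batch of source images
code = pre(x)                           # "neural code images"
rec  = post(proxy(code))                # decoded reconstruction
rate = code.abs().mean()                # crude rate surrogate (illustrative)
loss = nn.functional.mse_loss(rec, x) + 0.01 * rate
opt.zero_grad(); loss.backward(); opt.step()
```
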
A Comprehensive Survey of Data-Driven Solutions for LoRaWAN: Challenges & Future...
Poonam Maurya

and 4 more

January 29, 2024
LoRaWAN is an innovative and prominent communication protocol in the domain of Low Power Wide Area Networks (LPWAN), known for its ability to provide long-range communication with low energy consumption. However, the practical implementation of the LoRaWAN protocol, operating at the Medium Access Control (MAC) layer and built upon the LoRa physical (PHY) layer, presents numerous research challenges, including network congestion, interference, optimal resource allocation, collisions, scalability, and security. To mitigate these challenges effectively, the adoption of cutting-edge data-driven technologies such as Deep Learning (DL) and Machine Learning (ML) emerges as a promising approach. Interestingly, very few existing surveys or tutorials have addressed the importance of ML- or DL-based techniques for LoRaWAN in its current state. This article provides a comprehensive survey of current LoRaWAN challenges and recent solutions, particularly those using DL and ML algorithms. The primary objective of this survey is to stimulate further research efforts to enhance the performance of LoRa networks and facilitate their practical deployment. We start by providing technical background on the LoRa Alliance, LoRa, and LoRaWAN. Furthermore, we discuss an overview of the most utilized DL and ML algorithms for overcoming LoRaWAN challenges. We also present an interoperable reference architecture for LoRaWAN and validate its effectiveness across a wide range of applications. Additionally, we shed light on several evolving challenges of LoRa and LoRaWAN for future digital networks, along with possible solutions. Finally, we conclude our discussion by briefly summarizing our work.
Reconfigurable Intelligent Surface Constructing 6G Near-field Networks
Yajun Zhao

January 29, 2024
Near-field propagation, especially when facilitated by reconfigurable intelligent surfaces (RIS), has become a significant field of inquiry in recent times. Despite this, there remains a noticeable absence of an exhaustive literature review concerning RIS-empowered near-field technologies. This paper seeks to bridge that knowledge gap by presenting a concise overview of near-field principles, coupled with a systematic examination of the latest advancements in RIS-driven near-field technologies. It concentrates on three pivotal areas: the establishment of pervasive near-field wireless propagation environments through RIS, the introduction of novel near-field frameworks for 6G networks via RIS, and the array of challenges inherent in RIS-based near-field technologies. The objective of this technical review is to bolster progress and innovative exploration in the realm of RIS-oriented near-field technologies.
Online Scheduling for Exploratory Training Jobs in Deep Learning Clusters

Hongliang Li

and 5 more

January 26, 2024
Resource management for Deep Learning (DL) clusters is essential for system efficiency and model training quality. Existing schedulers provided by DL frameworks are mostly adaptations from traditional HPC clusters and usually optimize jobs' makespan, assuming that DL training jobs run to completion. Unfortunately, a fair number of training jobs are exploratory jobs that often finish unsuccessfully (over 30%) in production clusters. Existing DL cluster schedulers using offline algorithms are not suitable for exploratory jobs, because unexpected early terminations can cause noticeable resource waste. Moreover, DL training jobs are iterative and usually yield diminishing returns as they progress, which makes allocating resources equally among training iterations inefficient. The fundamental goal of a DL training job is to gain model quality improvement, usually indicated by the loss reduction (job profit) of a DNN model. This paper introduces a novel scheduling problem for exploratory jobs that seeks to maximize the overall profit of a DL cluster. To solve it, we propose a solution based on the primal-dual framework, coupled with a resource price function that emphasizes the ratio of job profit to resource consumption, resulting in a competitive ratio of 2α that belongs to O(ln n). We design an efficient online algorithm, ExplSched, which integrates Dynamic Programming (DP) and heuristic algorithms to jointly consider both scheduling performance and overhead, with a time complexity of O(nE_j). Experimental results show that ExplSched achieved an average system utility improvement of 83.82% compared with other related work.
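
The admission logic behind such primal-dual schedulers can be summarized in a few lines: keep a resource price that rises (typically exponentially) with utilization, and grant resources only while a job's marginal profit beats that price. The sketch below illustrates the rule with made-up parameters; it is not the ExplSched implementation.

```python
# Price-driven admission in online primal-dual scheduling: the GPU price
# grows exponentially with utilization, so low-profit (exploratory) jobs are
# admitted only while the cluster is lightly loaded.
CAPACITY = 64          # total GPUs (illustrative)
used = 0

def price(u, p_min=0.01, p_max=10.0):
    # Exponential price function of utilization u in [0, 1].
    return p_min * (p_max / p_min) ** u

def admit(job_profit_per_gpu):
    global used
    if used < CAPACITY and job_profit_per_gpu > price(used / CAPACITY):
        used += 1          # grant one GPU to the job
        return True
    return False

for profit in [5.0, 0.5, 0.02, 3.0]:
    print(profit, admit(profit))
```
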
A Secure Hybrid Deep Learning Technique for Anomaly Detection in IIoT Edge Computing

Bharath Konatham

and 4 more

January 26, 2024
IIoT networks involve smart sensors, actuators, and technologies that extend IoT capabilities across industrial sectors. With the rapid development of connected technology and communications in industrial applications, IIoT networks and devices are increasingly integrated into less secure physical environments, making anomaly detection crucial for cybersecurity. This paper proposes a novel anomaly detection model for IIoT systems that leverages a hybrid deep learning (DL) model combining Gated Recurrent Units (GRU) and Convolutional Neural Networks (CNN) for anomaly detection in IIoT edge computing. The proposed CNN+GRU model achieves a notable 94.94% accuracy, underscoring the importance of careful model selection for IIoT anomaly detection. The experimental results indicate a 96.41% accuracy, excelling in metrics such as false alarm rate (FAR), recall, precision, and F1-score. Based on these findings, we recommend that future researchers consider advanced hybrid architectures and enhance efficiency using XGBoost with hybrid CNN+GRU architectures for high accuracy in complex IIoT contexts. This approach holds promise for significant contributions to the security and performance of IIoT systems.
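
As a concrete picture of the hybrid model class described above, here is a minimal CNN+GRU network in PyTorch: 1D convolutions extract local patterns from windowed sensor/traffic features, and a GRU models their temporal evolution. All layer sizes and the feature count are illustrative assumptions, not the paper's architecture.

```python
# Minimal CNN+GRU anomaly classifier over windowed feature sequences.
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self, n_features=20, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, time, features)
        z = self.cnn(x.transpose(1, 2))        # -> (batch, 32, time/2)
        _, h = self.gru(z.transpose(1, 2))     # -> h: (1, batch, 64)
        return self.head(h[-1])                # anomaly / normal logits

logits = CNNGRU()(torch.rand(16, 50, 20))      # 16 windows of 50 time steps
print(logits.shape)                            # torch.Size([16, 2])
```
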
Reference-Plane Invariant Free Space Dielectric Material Characterization up to 330 G...

Salvador Moreno-Rodríguez

and 7 more

January 26, 2024
This paper describes the process followed to implement a system for characterizing the complex permittivity of materials in the 10-330 GHz frequency band. First, the method used and the system's calibration process are presented, consisting of a double TRL (Thru-Reflect-Line) and GRL (Gated-Reflect-Line) calibration. Subsequently, a smoothing technique is used to improve the accuracy of the results. Finally, tests are performed on quartz and glass fiber samples, showing that the results are reliable over the entire measured bandwidth.
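
The abstract does not name the smoothing technique, so the snippet below shows one plausible choice (an assumption) for post-processing an extracted permittivity trace over frequency: a Savitzky-Golay filter, which smooths ripple without flattening slow trends. The data and window settings are invented.

```python
# Smoothing a noisy extracted permittivity trace over frequency with a
# Savitzky-Golay filter (one plausible technique; the paper's is unnamed).
import numpy as np
from scipy.signal import savgol_filter

f = np.linspace(10e9, 330e9, 801)                   # frequency grid
eps_raw = 3.8 + 0.05 * np.random.randn(f.size)      # noisy Re(eps_r), toy
eps_smooth = savgol_filter(eps_raw, window_length=51, polyorder=3)
print(eps_smooth[:3])
```
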
Setup for Material Characterization in the 110-170 GHz Band

Salvador Moreno-Rodríguez

and 6 more

January 26, 2024
This work outlines the procedure to establish a system for assessing materials' complex permittivity and permeability within the 110-170 GHz frequency range. We present the employed methodology and the calibration procedure for the system, incorporating a dual approach using TRL (Thru-Reflect-Line) and GRL (Gated-Reflect-Line) methods. Following that, a smoothing technique is used to enhance the accuracy of the results. Tests were conducted on a HIPS sample to validate the system's performance, demonstrating the results' reliability across the entire measured bandwidth.
Unsupervised-based Distributed Machine Learning for Efficient Data Clustering and Pre...

Vishnu Baligodugula

and 3 more

January 26, 2024
Unsupervised ML-based approaches have emerged for driving critical decisions about training data samples to help solve challenges in many life-critical applications. This paper proposes parallel and distributed unsupervised ML techniques to improve the execution time of different ML algorithms. Various unsupervised ML models are developed, implemented, and tested to demonstrate the efficiency, in terms of execution time and accuracy, of the serial methods compared to the parallelized ones. We developed sequential, parallel, and distributed cloud-based unsupervised ML models and determined the most efficient model through comparative analysis. As a case study, sequential, parallel, and distributed versions of Simple K-Means, Minibatch K-Means, and Fuzzy C-Means are investigated, using country datasets from multiple organizations to train and test the developed models. The parallel and distributed models are built on a cloud computing architecture, Amazon SageMaker, to study their efficiency in execution time and model accuracy. The results show that the proposed parallel and distributed Fuzzy C-Means outperforms the other two clustering methods in terms of execution time, at 0.932 ms and 0.623 ms, with minimal impact on the accuracy of the developed models.
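
A minimal flavor of the serial-versus-scalable comparison can be reproduced with scikit-learn by timing full K-Means against Minibatch K-Means on synthetic data. Fuzzy C-Means and the SageMaker-distributed variants from the paper are deliberately omitted here, and the dataset is synthetic.

```python
# Timing KMeans vs MiniBatchKMeans on synthetic blobs: the minibatch variant
# trades a little fit quality for a large speedup, the theme of the paper.
import time
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100_000, centers=5, random_state=0)
for Model in (KMeans, MiniBatchKMeans):
    t0 = time.perf_counter()
    Model(n_clusters=5, n_init=10, random_state=0).fit(X)
    print(Model.__name__, f"{time.perf_counter() - t0:.3f}s")
```
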
AI-enabled Hardware Trojan Detection for Secure and Trusted Context-Aware Embedded Sy...

Ashutosh Ghimire

and 4 more

January 26, 2024
Context-aware computing applications depend on embedded hardware systems, using sensors embedded in the hardware to gather real-time data and interacting with specialized OS software (firmware) for autonomous processing and analysis of intelligent data. Securing embedded IC hardware systems and ensuring their trustworthiness requires an intelligent approach to effectively detect hardware Trojan (HT) insertions and modification attacks, which aim to compromise the system's integrity and potentially leak sensitive information or cause destruction. Implementing robust, advanced intrusion detection against supply-chain hardware Trojans, and continuously monitoring the behavior of this malicious hardware, is essential to enable trust in context-aware computing applications. AI-enabled hardware side-channel analysis, involving power and timing assessments, assists in detecting anomalies that may signify potential Trojans. This paper proposes an intelligent AI approach that uses unsupervised machine learning in conjunction with hardware side-channel analysis to eliminate the need for golden data samples and efficiently detect hardware Trojans. Employing unsupervised clustering, the methodology not only achieved a superior false positive rate but also demonstrated accuracy comparable to supervised counterparts such as the K-Nearest Neighbors (KNN) classifier, Support Vector Machine (SVM), and Gaussian classifier, methods that rely on the availability of golden data for training. Notably, the proposed model exhibited an impressive accuracy of 93%, particularly excelling at pinpointing small Trojans triggered by brief events, surpassing the capabilities of preceding techniques. In conclusion, this research advances a new paradigm in hardware Trojan detection and highlights its potential to bolster the integrity of semiconductor IC supply chains.
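
The golden-free idea can be illustrated in a few lines: cluster per-trace side-channel features and flag the small outlying cluster, without ever labeling a "known-good" chip. The feature construction and thresholds below are assumptions for illustration, not the paper's pipeline.

```python
# Golden-free Trojan flagging: cluster power-trace features, flag the
# minority cluster as suspect. Synthetic traces stand in for measurements.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal = rng.normal(1.00, 0.02, size=(950, 8))   # per-trace power features
trojan = rng.normal(1.08, 0.02, size=(50, 8))    # slightly shifted draws
traces = np.vstack([normal, trojan])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(traces)
minority = np.bincount(labels).argmin()
print("flagged traces:", np.sum(labels == minority))   # ~50
```
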
AI-assisted Distributed Cloud Services Framework for Enhanced Safety in Urban Smart C...

Niveshitha Niveshitha

and 3 more

January 26, 2024
Smart cities have emerged to tackle life-critical challenges accompanying the overwhelming urbanization process, such as expensive health care, increasing energy demand, traffic jams, and environmental pollution. This paper proposes efficient, high-quality cloud-based machine-learning solutions for a safe urban smart-city environment. Supervised ML-based models, i.e., regression and classification, are developed using cloud-based solutions to achieve fast execution times and high solution quality in terms of accuracy. To predict the air quality index (AQI), the ML models utilize data sets of air pollutants. The mean absolute error, mean squared error, root mean squared error, and R2 score are used to validate and test the regression models. As classification models, we use the support vector machine and random forest algorithms, which are evaluated using the accuracy score and confusion matrix. Execution times and accuracy of the developed models are computed and contrasted with those of the cloud-based versions of these models. The results show that, among the regression algorithms, lasso regression achieves an R2 score of 80 percent, while linear regression achieves 75 percent. Furthermore, among the classification models, the random forest algorithm performs better, with an accuracy of 99 percent, than the support vector machine approach, with 95 percent accuracy. In conclusion, our findings demonstrate that run time is minimized when models are executed on a cloud platform rather than a desktop machine, while the accuracy of our models is maintained.
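
The regression comparison can be reduced to a few lines with scikit-learn: fit Lasso and ordinary linear regression on pollutant-like features and compare R2 on held-out data. The synthetic data below stands in for the paper's air-quality data sets; the alpha value is an assumption.

```python
# Lasso vs linear regression on a synthetic AQI-style regression task.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=12, noise=15.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (Lasso(alpha=1.0), LinearRegression()):
    # .score() returns the R2 coefficient of determination on the test split.
    print(type(model).__name__,
          round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))
```
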
Analyze the Loss Utilization in Near-fmax Embedded Amplifiers Using Uniform 3D Gain S...
Fei He

and 3 more

January 26, 2024
In this paper, an analytical tool, the uniform 3D gain-space approach, is proposed to analyze the impact of lossy, linear, and reciprocal embedding networks, as well as lossy matching networks, on near-fmax embedded amplifiers. Based on the uniform gain-space approach, a super-gain-boosting technique, which employs cross conductance in the differential pair together with Y/Z-embedding networks, is proposed to significantly boost the power gain. Compared to conventional gain-boosting techniques, the proposed technique not only significantly boosts the Mason's U of a transistor, but also obtains further gain improvement benefiting from the intuitive uniform 3D gain-space approach. Finally, to validate the proposed analytical approach, a three-stage amplifier is implemented in a 65 nm CMOS process based on the proposed super-gain-boosting and over-push gain-boosting techniques. The three-stage amplifier demonstrates a measured Psat of -1.95 dBm and a maximum PAE of 2.87% at 189 GHz, along with a maximum power gain of 32.1 dB.
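
Mason's U, the figure that the gain-boosting technique above targets, is an invariant of a two-port and can be computed directly from Y-parameters. The helper below implements the standard formula; the example values are made up and do not correspond to the paper's device.

```python
# Mason's unilateral gain U from two-port Y-parameters:
#   U = |y21 - y12|^2 / (4 * (Re(y11)*Re(y22) - Re(y12)*Re(y21)))
import numpy as np

def masons_u(y11, y12, y21, y22):
    num = abs(y21 - y12) ** 2
    den = 4.0 * (y11.real * y22.real - y12.real * y21.real)
    return num / den

# Illustrative Y-parameters (siemens) for a transistor driven near fmax:
u = masons_u(0.02 + 0.03j, -0.001 - 0.002j, 0.05 - 0.04j, 0.01 + 0.02j)
print(f"U = {u:.1f}  ({10 * np.log10(u):.1f} dB)")
```
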
Disaggregated Optical Networks: A Survey
Sergio Cruzes

January 26, 2024
Disaggregated networks allow operators to select components from different vendors, promoting vendor neutrality. This flexibility enables the selection of best-of-breed solutions for specific network elements. By decoupling hardware and software, disaggregated networks can potentially reduce costs: operators can choose cost-effective devices and upgrade or replace them independently. Disaggregation also facilitates the adoption of new technologies and innovations and often adheres to open standards, promoting interoperability between different equipment vendors. The Yet Another Next Generation (YANG) data modeling language has been identified as the preferred language for interfacing with the management and control system. The Network Configuration Protocol (NETCONF) is gaining prominence as a Software-Defined Networking (SDN) protocol standardized by the Internet Engineering Task Force (IETF). This paper provides an overview, based on a survey, of the best practices employed in designing, planning, and operating a disaggregated optical network. It presents the general system architecture, including the open software tools: an SDN controller (based on the Open Network Operating System (ONOS)), an optical line system controller (OLC), a QoT estimator based on Gaussian Noise simulation in Python (GNPy), and the orchestrator module.
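
As a taste of the NETCONF/YANG interface the survey highlights, the snippet below fetches the running configuration from a node using ncclient. The host address, credentials, and the device's NETCONF support are all assumptions for illustration.

```python
# Fetching the running configuration (described by the device's YANG models)
# over NETCONF with ncclient. Host and credentials are placeholders.
from ncclient import manager

with manager.connect(host="198.51.100.10", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    print(m.get_config(source="running").data_xml[:500])
```
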
Predictive QoS for Cellular-Connected UAV Communications

Ann Varghese

and 4 more

January 26, 2024
Unmanned aerial vehicles (UAVs), or drones, are transforming industries due to their affordability, ease of use, and adaptability, which emphasizes the need for reliable communication links, especially in beyond-line-of-sight scenarios. This paper investigates the feasibility of predicting future quality of service (QoS) in UAV payload communication links, with a special focus on 5G cellular technology. Through field tests conducted in a suburban environment, we explore the challenges and trade-offs that cellular-connected UAVs face, particularly in the context of frequency band selection. We employ machine learning models to forecast uplink (UL) throughput for UAV payload communication, highlighting the significance of diverse training data for accurate predictions. The results reveal the effect of frequency band selection on UAV UL throughput at varying altitudes, and the influence of integrating diverse feature sets, including radio, network, and spatial features, on ML model performance. These insights provide a foundation for addressing the complexities of UAV communications and enhancing UAV operations in modern networks.
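
A toy version of the forecasting task reads as follows: a regressor (a random forest here, one plausible model choice) maps radio, network, and spatial features to UL throughput. The synthetic features (RSRP, SINR, band, altitude) and the linear ground truth are purely illustrative assumptions.

```python
# Forecasting UAV uplink throughput from radio/spatial features (toy data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
rsrp, sinr = rng.uniform(-120, -70, n), rng.uniform(-5, 25, n)
band, alt = rng.choice([0, 1], n), rng.uniform(0, 120, n)  # band id, meters
X = np.column_stack([rsrp, sinr, band, alt])
y = 2.0 * sinr + 5.0 * band - 0.01 * alt + rng.normal(0, 2, n)  # Mbps (toy)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[-90.0, 15.0, 1, 60.0]]))   # forecast for one sample
```
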
Application of Geographic Information Systems and Remote Sensing in Military Operatio...
Ezra Chipatiso

January 26, 2024
Geographic Information Systems (GIS) and Remote Sensing have been considered significant in the military due to their inherently spatial nature. In this study, a descriptive-analytical method is used to illustrate the applications of GIS in military operations, drawing lessons from land-based military developments in selected studies. Recent military developments have seen various military institutions depend on reliable and accurate spatial mapping tools for Command, Control, Communication and Coordination in military operations. The study notes that high-resolution satellite data and/or drone technology, integrated with machine learning and Artificial Intelligence (AI), have been utilized in the military for a variety of applications, including cartography, terrain analysis, intelligence collection and dissemination, object recognition, safeguarding vital military installations, and historical reconstruction. The integration of GIS, machine learning, and AI is significant for military planning and deployment, as an understanding of the landscape is useful in determining strategic positions on the battleground in real time. The study recommends training military personnel in geospatial techniques and ensuring proper deployment for fruitful military operations.
Comprehensive Link-Level Simulator for Terahertz MIMO Integrated Sensing and Communic...
Hanchen Shi
Chuang Yang

and 2 more

January 26, 2024
Terahertz (THz) integrated sensing and communication (ISAC) with a multiple-input multiple-output (MIMO) architecture is recognized as a promising interdisciplinary technology for ultra-high-rate mobile communications, since such systems enable the narrow-beam tracking that is necessary in the THz band. In this work, a link-level simulator for THz MIMO ISAC in time-division duplex (TDD) operation is proposed for designing and analyzing mobile systems. Compared to the simulators in the literature, the proposed simulator is more practical and comprehensive, employing two-dimensional motion simulation instead of numerical evaluation and considering THz characteristics such as wideband echoes, multipath components, and molecular absorption. Specifically, the simulator supports the standard orthogonal frequency division multiplexing (OFDM) and discrete Fourier transform spread OFDM (DFT-s-OFDM) waveforms for sensing and communication simultaneously. Trade-offs between communication and sensing metrics required for waveform numerology design are investigated. In particular, by exploiting the TDD framework's integration capability, range-velocity-angle estimation with a virtual array and sensing-aided downlink spatial multiplexing are co-designed. Additionally, a user interface with detailed parameter configuration is introduced. Finally, we implement an urban vehicle-to-vehicle (V2V) application case to verify the simulator. The simulation results demonstrate the feasibility of the developed integrated architecture.
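
The sensing half of an OFDM ISAC link reduces, in its simplest form, to the classic symbol-division radar estimator: divide received symbols by transmitted ones, then transform across subcarriers for delay and across symbols for Doppler. The noise-free, single-target sketch below is a minimal illustration of that step, not the full simulator.

```python
# OFDM radar processing: element-wise equalization, then IFFT across
# subcarriers (range) and FFT across symbols (velocity). Single target.
import numpy as np

Nc, Ns = 256, 64                  # subcarriers, OFDM symbols
delay_bin, doppler_bin = 17, 5    # illustrative target position
k, m = np.arange(Nc)[:, None], np.arange(Ns)[None, :]
# Channel of one point target: linear phase over subcarriers and symbols.
F = (np.exp(-2j * np.pi * k * delay_bin / Nc) *
     np.exp(2j * np.pi * m * doppler_bin / Ns))

tx = np.exp(2j * np.pi * np.random.rand(Nc, Ns))   # unit-modulus symbols
rx = tx * F
G = rx / tx                                        # per-element channel estimate
rd = np.fft.fft(np.fft.ifft(G, axis=0), axis=1)    # range-Doppler map
print(np.unravel_index(np.abs(rd).argmax(), rd.shape))   # (17, 5)
```
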
Relay Attack Detection using OFDM Channel-Fingerprinting
Radi Abubaker

and 1 more

January 26, 2024
Decode-and-forward and amplify-and-forward relay attacks are a powerful tool for defeating challenge-response authentication protocols. Current solutions for detecting these relay attacks rely on round-trip-time distance bounding. Unfortunately, secure implementations of distance bounding require dedicated ultra-wideband hardware that provides only low data rates and operates at relatively short distances. In this paper, we propose two novel symmetric-key challenge-response authentication protocols: one detects decode-and-forward relay attacks, while the other prevents decode-and-forward attacks and detects amplify-and-forward attacks. Both protocols use the channel state information in a far-field communication system to perform the detection. The first protocol uses the correlation of the adjusted channel frequency response to detect decode-and-forward relay attacks. The second protocol prevents decode-and-forward relay attacks through the use of randomized pilots, and detects amplify-and-forward relay attacks by classifying the distribution of the channel frequency response induced by multiple relays. The protocols use orthogonal frequency division multiplexing to estimate the channel frequency response between the legitimate communicating parties and identify whether a relay attack is occurring in a physical-layer challenge-response authentication protocol. The proposed protocols can be deployed on many existing hardware platforms and can simultaneously support high data rates. To evaluate the performance of the protocols, MATLAB simulations are used to gather Monte Carlo results on the performance criteria.
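
The intuition behind the first protocol can be shown in a few lines: a relay turns the end-to-end channel into a cascade of hops, so the observed channel frequency response decorrelates from the direct one. The toy below uses random multipath channels and arbitrary noise levels; it illustrates the idea, not the proposed protocol.

```python
# Channel-fingerprint check: correlation of channel frequency responses (CFRs)
# stays near 1 over a direct link but drops when a relay cascades two hops.
import numpy as np

rng = np.random.default_rng(1)

def cfr(taps):
    # Channel frequency response of a random multipath channel (64 bins).
    h = (rng.normal(size=taps) + 1j * rng.normal(size=taps)) / np.sqrt(2)
    return np.fft.fft(h, 64)

direct = cfr(4)
relayed = direct * cfr(4)                 # cascade of two hops through a relay
noisy = lambda H: H + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))

def corr(a, b):
    return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

print("legit:", corr(noisy(direct), noisy(direct)))    # ~1.0
print("relay:", corr(noisy(direct), noisy(relayed)))   # noticeably lower
```
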
Proactive Obsolete Packet Management based analysis of Age of Information for LCFS He...
Y. Arun Kumar Reddy

and 1 more

January 26, 2024
This paper applies a novel approach to analyzing remote monitoring systems, using the Age of Information (AoI) as a metric to measure the timeliness of status updates. The study focuses on a single-source, single-destination queueing system with two parallel heterogeneous servers and no additional buffer, where the analysis of AoI or PAoI is difficult because of the out-of-order reception of packets. To analyze the performance of the system, we use the stochastic hybrid systems (SHS) approach. We evaluate the AoI and Peak AoI (PAoI) of the system under different queueing disciplines, including Last Come First Serve (LCFS) and LCFS with probabilistic routing. We use the Proactive Obsolete Packet Management (POPMAN) method to identify obsolete packets that can be discarded in advance, which saves server processing time and ensures that packets are received in order at the receiver. The study compares the performance of this system to a similar system with two homogeneous servers. The results show that the POPMAN method improves the AoI and PAoI of the LCFS queueing system, while the performance of LCFS with probabilistic routing under POPMAN is the same as under the traditional scheme. Additionally, the study analyzes various parameters, such as the probability of preempted packets, the probability of obsolete packets, the probability that a packet is informative or successfully delivered, and the optimal splitting probabilities for probabilistic routing.
Age of Information of an FCFS Queueing System in Heterogeneous Servers with Proactive...
Y. Arun Kumar Reddy

and 1 more

January 26, 2024
The Age of Information (AoI) is a metric used in remote monitoring systems to evaluate the timeliness of status updates received at the destination. In this paper, we study the AoI and Peak AoI (PAoI) in a queueing system with two parallel heterogeneous servers and no extra buffer. Analyzing the AoI or PAoI of a system with two parallel heterogeneous servers is challenging because of the out-of-order reception of packets. A stochastic hybrid systems (SHS) approach is used to analyze the performance of the system. We study the system's performance under two different queueing disciplines: First Come First Serve (FCFS) and FCFS with probabilistic routing. The findings are then compared to a similar system with two homogeneous servers. A novel methodology called Proactive Obsolete Packet Management (POPMAN) is proposed to proactively identify obsolete packets, thereby reducing server processing time and improving the AoI and PAoI of the system. The POPMAN technique is applied to both FCFS and FCFS with probabilistic routing. We observe that the proposed schemes improve on traditional methods and that packets are received in order. Our simulations are validated for various parameters, such as the packet-dropping probability, the probability of obsolete packets due to outdated information in the servers, the probability that a packet is informative or successfully delivered, and the optimal splitting probabilities for probabilistic routing.
Relative Freshness Stochastic Hybrid Systems Markov Chain (RF-SHS-MC) model for the s...
Y. Arun Kumar Reddy

and 1 more

February 14, 2024
The Age of Information (AoI) quantifies the freshness of status updates in a remote monitoring system. This paper introduces a new technique to calculate the average AoI efficiently using stochastic hybrid systems (SHS) analysis. The relative freshness of packets in a queueing system is used to define the discrete states of the SHS Markov chain. We validate the average AoI for M/M/1/1 and M/M/1/2 queueing systems under the First Come First Serve (FCFS) queueing discipline using this relative-freshness approach. Furthermore, we extend our analysis to dual-queue status update systems. We also validate the known results for the packet-dropping probability and the probability that a packet is informative or successfully delivered.
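
Closed-form AoI results such as those validated here are easy to sanity-check by event-driven simulation. The sketch below estimates the average AoI of an M/M/1/1 system that discards arrivals while the server is busy, by accumulating the area under the age sawtooth. The rates are arbitrary, and the paper's relative-freshness Markov chain is not reconstructed here.

```python
# Event-driven average-AoI estimate for M/M/1/1 with blocking: the age grows
# at slope 1 and resets to the system time of each delivered packet.
import numpy as np

rng = np.random.default_rng(0)
lam, mu, n_pkts = 1.0, 1.5, 200_000
t, busy_until = 0.0, 0.0
prev_dep, prev_age = 0.0, 0.0       # last departure time, age just after it
area = 0.0

for _ in range(n_pkts):
    t += rng.exponential(1.0 / lam)        # Poisson arrivals
    if t < busy_until:
        continue                           # server busy: arrival discarded
    dep = t + rng.exponential(1.0 / mu)    # departure of the accepted packet
    dt = dep - prev_dep                    # sawtooth segment length
    area += prev_age * dt + dt * dt / 2    # trapezoidal area under the age
    prev_dep, prev_age = dep, dep - t      # reset age to the system time
    busy_until = dep

print("simulated average AoI:", area / prev_dep)
```
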
Quantum-Inspired Differential Evolution with Decoding using Hashing for Efficient Use...

Marlom Bey

and 2 more

January 26, 2024
Modern apps require high computing resources for real-time data processing, allowing app users (AUs) to access real-time information. Edge computing (EC) provides dynamic computing resources to AUs for real-time data processing. However, edge servers (ESs) in specific areas can only serve a limited number of AUs due to resource and coverage constraints. Hence, the app-user allocation problem (AUAP) becomes challenging in the EC environment. In this paper, a quantum-inspired differential evolution algorithm (QDE-UA) is proposed for efficient user allocation in the EC environment. The quantum vector is designed to provide a complete solution to the AUAP. The fitness function considers factors such as the minimum number of ESs required, the user allocation rate (UAR), energy consumption, and load balance. Extensive simulations are performed, along with hypothesis-based statistical analyses (ANOVA, Friedman test), to show the significance of the proposed QDE-UA. The results indicate that QDE-UA outperforms existing strategies, with an average UAR improvement of 116.63%, a 77.35% reduction in energy consumption, and a 46.22% enhancement in load balance while utilizing 13.98% fewer ESs.
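
To make the evolutionary encoding concrete, the sketch below solves a tiny user-allocation instance with classical differential evolution from SciPy: one continuous gene per user decodes (here by rounding, rather than the paper's hashing-based decoding) to an edge-server index, and the fitness mixes energy cost, capacity violations, and load balance. All weights and sizes are invented, and the quantum-inspired operators are not reproduced.

```python
# Classical DE stand-in for quantum-inspired user allocation in edge computing.
import numpy as np
from scipy.optimize import differential_evolution

n_users, n_servers, capacity = 30, 5, 8
rng = np.random.default_rng(0)
cost = rng.uniform(1.0, 3.0, size=(n_users, n_servers))  # user-ES energy cost

def fitness(genes):
    alloc = np.clip(np.rint(genes), 0, n_servers - 1).astype(int)
    load = np.bincount(alloc, minlength=n_servers)
    over = np.maximum(load - capacity, 0).sum()          # capacity violations
    energy = cost[np.arange(n_users), alloc].sum()
    return energy + 100.0 * over + 5.0 * load.std()      # weighted objective

res = differential_evolution(fitness, [(0, n_servers - 1)] * n_users,
                             seed=0, maxiter=200)
print(round(res.fun, 2))
```
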
The State-of-the-Art and Promising Future of Blockchain Sharding
Qinglin Yang

and 8 more

January 26, 2024
Blockchain sharding is a significant technical branch of improving the scalability of blockchain systems. It is regarded as one of the potential solutions that can achieve on-chain scaling and significantly improve the scalability of blockchains without weakening their decentralization. To provide a reference and inspire participation from both the academic and industrial sectors in research on blockchain sharding, we walk through state-of-the-art studies on blockchain sharding published in the past three years. We also conduct experiments to show the performance of representative sharding protocols such as Monoxide, LBF, Metis, and BrokerChain. We finally envision the potential challenges and promising future of sharding techniques in light of the urgent demand for high throughput from emerging applications such as Web3, the Metaverse, and Decentralized Finance (DeFi). We hope that this article is helpful to researchers, engineers, and educators, and will inspire subsequent studies in the field of blockchain sharding.