AUTHOREA

2,168 Communication, Networking and Broadcast Technologies Preprints

Please note: These are preprints and have not been peer reviewed. Data may be preliminary.
Millimeter-Wave Urban Cellular Channel Characterization and Recipe for High-Precision...

Hibiki Tsukada

and 4 more

February 19, 2024
To design a reliable communication system utilizing millimeter-wave (mm-wave) technology, which is gaining popularity due to its ability to deliver multi-gigabit-per-second data rates, it is essential to consider the site-specific nature of mm-wave propagation. Conventional site-general stochastic channel models are often unsatisfactory for accurately reproducing the channel responses under specific usage scenarios or environments. For high-precision channel simulation that reflects site-specific characteristics, this paper proposes a channel model framework leveraging the widely accepted 3GPP map-based hybrid channel modeling approach and provides a detailed recipe for applying it to an actual scenario using several examples. First, an extensive measurement campaign was conducted in typical urban macro- and micro-cellular environments using an in-house dual-band (24/60 GHz) double-directional channel sounder. Subsequently, the mm-wave channel behavior was characterized, focusing on the differences between the two frequencies. Then, the site-specific large-scale and small-scale channel properties were parameterized. As an essential component for improving prediction accuracy, this paper proposes an exponential decay model for the power delay characteristics of non-line-of-sight clusters, whose powers are significantly overestimated by deterministic prediction tools. Finally, using the in-house channel model simulator (CPSQDSIM) developed for grid-wise channel data (PathGridData) generation, a significant improvement in prediction accuracy compared with the existing 3GPP map-based channel model was demonstrated.
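The exponential decay model at the heart of the recipe can be sketched in a few lines: NLOS cluster power falls off exponentially with excess delay, so in dB it decreases linearly. The reference power `p0_dB` and decay constant `gamma_ns` below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def nlos_cluster_power_dB(excess_delay_ns, p0_dB=-70.0, gamma_ns=60.0):
    """Exponential power decay for NLOS clusters vs. excess delay.

    P(tau) = P0 * exp(-tau / gamma), i.e. in dB:
    P_dB(tau) = P0_dB - 10*log10(e) * tau / gamma.
    p0_dB and gamma_ns are illustrative, not measured values.
    """
    return p0_dB - 10.0 * np.log10(np.e) * (excess_delay_ns / gamma_ns)

# Example: cluster powers at 0, 50, 100, and 200 ns excess delay
print(nlos_cluster_power_dB(np.array([0.0, 50.0, 100.0, 200.0])))
```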
An event stream architecture for the distributed inference execution of predictive mo...
Juan C. Dueñas

and 3 more

February 19, 2024
Predictive monitoring on distributed critical infrastructures (DCIs) is the ability to anticipate events that are likely to occur in the DCI before they actually appear, improving the response time needed to avoid critical incidents. Distributed across a region or country, DCIs such as smart grids or microgrids rely on IoT, edge-fog continuum computing, and the growing capabilities of distributed application architectures to collect, transport, and process data generated by the infrastructure. We present a model-agnostic distributed architecture for the inference execution of machine learning window-based prediction models in predictive monitoring applications used in this context. The architecture transports the events generated by the DCI using event streams that are processed by a hierarchy of nodes holding predictive models. It also handles the offloading of inferences from resource-scarce devices at lower levels to resourceful upper nodes, so that the timing requirements for issuing predictions before the predicted events occur are met.
A Collision-Free Information Freshness-Aware MAC Scheme for Congested Vehicular Ad-ho...

Mohsen Tajallifar

and 8 more

February 14, 2024
Sharing basic safety messages (BSMs) among connected vehicles (CVs) in a timely and reliable manner is of paramount importance in vehicular networks. When CVs are connected through ad hoc networks, the timely delivery of BSMs is very challenging due to the randomness in medium access control (MAC) and may lead to collisions, especially in crowded networks. Besides, although channel acquisition in conventional methods via transmission and reception of control signals results in collision-free message delivery, it adds a high overhead cost. In this paper, we propose an efficient MAC scheme that carefully addresses these issues by improving communication efficiency and reducing the signaling overhead. The proposed scheme dedicates each time slot to only one CV and is consequently collision-free. Since consecutive BSMs of a CV contain similar information, we adopt the age of information (AoI) as the performance metric. We derive mathematical expressions for the MAC delay and AoI of the collision-free scheme by proposing a two-dimensional Markov model. We compare the performance of the proposed scheme with the IEEE 802.11p standard and another low-complexity random scheme. AoI, delay, and collision rate are evaluated with the OPNET network simulator, which provides a realistic implementation scenario. Simulation results show that the collision-free scheme performs significantly better than IEEE 802.11p in highly congested networks. As an example, for a dense scenario where BSMs are generated every 10 ms, the AoI of the collision-free scheme is about 50 ms, while those of IEEE 802.11p and the random scheme are about 140 and 150 ms, respectively, which is considered too high for safety applications. Besides, the results show an almost perfect match between the mathematical derivations and the results obtained with OPNET.
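The AoI metric driving this design can be computed directly from generation and reception timestamps: the age grows linearly between deliveries and resets to the system delay at each one. A minimal sketch, independent of the paper's Markov-model derivation:

```python
import numpy as np

def average_aoi(gen_times, recv_times, horizon):
    """Time-average Age of Information from (generation, reception) pairs.

    Age resets to (recv - gen) at each delivery and grows linearly in
    between; integrating the sawtooth and dividing by the horizon gives
    the average AoI.
    """
    t, age, area = 0.0, 0.0, 0.0
    for g, r in sorted(zip(gen_times, recv_times), key=lambda p: p[1]):
        dt = r - t
        area += age * dt + 0.5 * dt**2   # area under the linear ramp
        t, age = r, r - g                # age resets to the system delay
    dt = horizon - t
    area += age * dt + 0.5 * dt**2
    return area / horizon

# BSMs generated every 10 ms and delivered with a 3 ms delay
gen = np.arange(0.0, 0.1, 0.01)
print(average_aoi(gen, gen + 0.003, horizon=0.1))  # ~0.008 s
```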
MiddleFog: A Middleware for Protocol Interoperability in Heterogeneous IoT Environmen...
Desiree dos Santos

and 5 more

February 14, 2024
The Internet of Things (IoT) enables billions of smart devices to capture, process, and transform data to improve decision-making. IoT demands a dependable mobile edge computing (MEC) ecosystem, which in turn requires an efficient and distributed architecture supporting multiple IoT communication protocols. Intelligent middleware is therefore needed to achieve efficiency, throughput, and reliability of data delivery across different protocols without interference from the local setup of the device. This paper proposes a modular and interoperable middleware called MiddleFog that dynamically selects the most appropriate communication protocol between MQTT and CoAP. The approach also minimizes communication limitations caused by latency, packet loss, and low network throughput between MEC and Cloud. Initial evaluations show a message loss rate lower than 25% for small messages and a performance improvement of around 48% for medium-sized message delivery.
Solving the unsolvable non-stationary M/E_k/1 queue's state variable open problem
Dr Ismail A Mageed

February 14, 2024
This paper is a continuation of my theory for solving the pointwise fluid-flow approximation model for time-varying queues. The long-standing simulative approach is replaced by an exact analytical solution obtained using a constant ratio β (Ismail's ratio). The stability dynamics of the time-varying M/E_k/1 queueing system are then examined numerically in relation to time, β, and the queueing parameters.
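For context, the classical pointwise-stationary baseline that such an exact solution replaces can be sketched with the Pollaczek-Khinchine formula for M/E_k/1 (Erlang-k service has squared coefficient of variation 1/k) evaluated with a time-varying arrival rate. This is the standard approximation, not the paper's β-based exact solution:

```python
import numpy as np

def me_k_1_mean_queue(lam, mu, k):
    """Stationary mean number in an M/E_k/1 queue (Pollaczek-Khinchine).

    L = rho + rho^2 (1 + 1/k) / (2 (1 - rho)), valid for rho = lam/mu < 1.
    """
    rho = lam / mu
    return rho + rho**2 * (1 + 1.0 / k) / (2 * (1 - rho))

# Pointwise-stationary approximation over a 24 h sinusoidal arrival rate
t = np.linspace(0, 24, 97)
lam_t = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24)
L_t = me_k_1_mean_queue(lam_t, mu=1.0, k=2)
print(L_t.max())   # peak mean queue length over the cycle
```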
Exploring the Potential of ESP8266: A Wireless Control Experiment
Paulo Ricardo

and 2 more

February 14, 2024
This paper details an experiment utilizing ESP8266 modules as servers to wirelessly control diverse electrical appliances in home automation. The experiment showcased the modules' capability to respond to commands issued via a web interface from mobile, desktop, and tablet platforms. While most of the experiment ran smoothly, occasional freezing and connectivity disruptions were observed. The paper summarizes the experiment's successes, discusses encountered challenges, and outlines a forward-looking perspective, including the integration of a custom PCB for enhanced system stability.
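A minimal MicroPython sketch of the kind of ESP8266 web server the experiment describes; the GPIO pin and Wi-Fi credentials are placeholders, not values from the paper:

```python
# MicroPython on ESP8266: a tiny HTTP server that toggles a relay pin.
import network, socket
from machine import Pin

relay = Pin(5, Pin.OUT)                     # GPIO5 is illustrative

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("SSID", "PASSWORD")            # placeholder credentials
while not wlan.isconnected():
    pass

srv = socket.socket()
srv.bind(("0.0.0.0", 80))
srv.listen(1)
while True:
    conn, _ = srv.accept()
    request = conn.recv(1024).decode()
    if "GET /on" in request:                # e.g. http://<ip>/on
        relay.value(1)
    elif "GET /off" in request:
        relay.value(0)
    conn.send(("HTTP/1.1 200 OK\r\n\r\nrelay=%d" % relay.value()).encode())
    conn.close()
```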
Soil Moisture Prediction with Attention-Enhanced Models: A Deep Learning Approach
Vlado Grubišić

and 5 more

February 14, 2024
This research explores the efficacy of machine learning and deep learning models in predicting soil moisture, a critical factor in optimizing agricultural irrigation systems. Utilizing data from a Vantage Vue weather station and Watermark 200SS soil moisture sensors, we conducted a comparative analysis of traditional models such as Random Forest and MLP against advanced deep learning models, particularly LSTMs and 1D Convolutional Neural Networks enhanced with attention mechanisms. The study reveals that attention-augmented models, especially the CONV1D+Attention model achieving an R² value of 0.51, excel at capturing the complex dynamics of soil moisture. These results underscore the potential of such models for handling complex time-series data in contexts like soil moisture levels influenced by weather conditions, offering significant insights for improved water management and sustainable agricultural practices globally.
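A compact sketch of a CONV1D+Attention regressor of the type compared in the study; the window length, feature count, and layer sizes are assumptions, not the paper's configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv1d_attention_model(window=24, n_features=6):
    """CONV1D+Attention sketch for windowed soil-moisture regression."""
    inp = layers.Input(shape=(window, n_features))
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)
    # Self-attention over the time axis lets the model weight the
    # weather-driven time steps that matter most for soil moisture.
    attn = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(attn)
    out = layers.Dense(1)(x)                 # predicted soil moisture
    return Model(inp, out)

model = conv1d_attention_model()
model.compile(optimizer="adam", loss="mse")
model.summary()
```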
Enhancing Human Life and Safety through Transparent and Predictable Artificial Intell...
Budee U Zaman

February 13, 2024
In the pursuit of enhancing human life and safety through artificial intelligence (AI), transparency and predictability emerge as crucial elements. The potential for self-awareness in AI, we argue, hinges on its autonomy in decision-making. To address this, we propose three fundamental laws of AI, emphasizing their practical implementability. These laws aim to establish a framework that ensures transparency, predictability, and independence in decision-making, ultimately contributing to the responsible development and deployment of artificial intelligence for the benefit of humanity.
E-CARGO-Based Team Multi-Role Assignment Problem with Role Dependency

Wenan Tan

and 8 more

February 13, 2024
With the rapid development of collaborative computing, the Environments-Classes, Agents, Roles, Groups, and Objects (E-CARGO) model has been applied as a technique of Role-Based Collaboration (RBC). Building on an in-depth study of E-CARGO-based Group Role Assignment (GRA) and extending the Group Multi-Role Assignment (GMRA) model, a Team Multi-Role Assignment with Role Dependency (TMRARD) model is proposed with a higher degree of generalizability. The model addresses how to effectively assign collaborative units (agents) to multiple roles so as to maximize group performance and synergy in the collaborative process, with consideration of role dependencies and common team goals. By analyzing the characteristics and elements of the E-CARGO model, the TMRARD model is formally specified based on E-CARGO. After analyzing the rationalization of the scale of role-dependent inputs, a Gurobi solution based on Mixed-Integer Linear Programming (MILP) has been proposed and developed with consideration of the complexity of role-dependency constraints. Simulation experiments have verified the effectiveness and robustness of this method, demonstrating high performance in large-scale and complex-constraint situations and providing more scientific and efficient decision-making support for the field of collaborative computing.
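The base assignment problem underlying TMRARD can be written as a small integer program. The sketch below formulates plain group multi-role assignment (the role-dependency constraints of TMRARD are omitted) and solves it with pulp's bundled CBC solver standing in for Gurobi; the qualification matrix and quotas are toy values:

```python
import pulp

Q = [[0.8, 0.3], [0.5, 0.9], [0.6, 0.7]]   # agent-role qualification matrix
L = [1, 2]                                  # agents required per role
La = [1, 1, 1]                              # max roles per agent

prob = pulp.LpProblem("GMRA", pulp.LpMaximize)
x = [[pulp.LpVariable(f"x_{a}_{r}", cat="Binary")
      for r in range(len(L))] for a in range(len(La))]
# Maximize total group qualification of the assignment
prob += pulp.lpSum(Q[a][r] * x[a][r]
                   for a in range(len(La)) for r in range(len(L)))
for r in range(len(L)):                     # fill each role's quota exactly
    prob += pulp.lpSum(x[a][r] for a in range(len(La))) == L[r]
for a in range(len(La)):                    # respect each agent's capacity
    prob += pulp.lpSum(x[a][r] for r in range(len(L))) <= La[a]
prob.solve()
print([[int(x[a][r].value()) for r in range(len(L))] for a in range(len(La))])
```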
Intelligent NOMA-Based Wireless Backhauling for IoT Applications without End-Device C...

A Ahmed

and 4 more

February 13, 2024
The article introduces an innovative wireless backhauling approach employing non-orthogonal multiple access (NOMA) and automatic repeat request (ARQ) mechanisms. In this scheme, power allocation follows a round-robin (RR) method, ensuring equitable performance among paired users. To address potential packet loss after ARQ, an intelligent packet repair technique is incorporated to recover the dropped packets. A key feature involves storing dropped data packets for subsequent processing before forwarding them to their respective IoT devices (IoDs). The proposed methodology hinges on recognizing that interference within a dropped packet may correspond to a packet retrievable in a forthcoming transmission, facilitating recovery through iterative successive interference cancellation (SIC). Significantly, the scheme enhances data reliability without necessitating an increase in the ARQ retransmission limit, which makes it particularly suited for certain Internet of Things (IoT) applications. Empirical results confirm a substantial success rate in recovering dropped packets. Notably, the iterative interference cancellation (IIC) technique demonstrated a noteworthy reduction in the packet drop rate (PDR) from 10⁻¹ to 10⁻³, a 100-fold improvement. This implies the successful recovery of 99% of the packets initially dropped in specific scenarios, showcasing the efficacy of the proposed approach.
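The repair idea can be illustrated with a toy event loop: failed superpositions are buffered, and whenever a packet is decoded later, any buffered superposition containing it can be cancelled and its partner recovered. This is purely conceptual; no channel or power model is included:

```python
buffer = []                 # failed receptions: (packet_a, packet_b) pairs
decoded = set()

def on_reception(pair, sic_ok):
    """pair = (near_packet, far_packet) superposed by NOMA."""
    if sic_ok:
        decoded.update(pair)
    else:
        buffer.append(pair)   # store the dropped superposition for repair
    repair()

def repair():
    """Cancel any stored superposition in which one side is now known."""
    progress = True
    while progress:
        progress = False
        for pair in list(buffer):
            if any(p in decoded for p in pair):
                decoded.update(pair)   # cancel the known side, decode the other
                buffer.remove(pair)
                progress = True        # a recovery may unlock further repairs

on_reception(("u1_pkt1", "u2_pkt1"), sic_ok=False)  # dropped, buffered
on_reception(("u1_pkt2", "u2_pkt1"), sic_ok=True)   # u2_pkt1 decoded later
print(sorted(decoded))   # cancelling u2_pkt1 also repaired u1_pkt1
```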
Modeling and Analysis of the Performance for CIS-based Bluetooth LE Audio
Zhongjiang Yan
Hao Xu

and 2 more

February 12, 2024
Wireless audio transmission has always been a focus of Bluetooth application scenarios. Future audio use cases, such as multi-streaming to assist stereo imaging experiences, broadcast audio sharing, and hearing aid support, have higher quality-of-service (QoS) requirements. LE Audio, based on Bluetooth Low Energy (BLE), is considered a replacement for the Bluetooth Classic audio standard in the next generation of audio applications. However, the performance of LE Audio based on connected isochronous streams (CIS) has not yet been fully analyzed. In this paper, we propose a mathematical model to evaluate CIS performance metrics such as packet loss rate (PLR), throughput, backlog, delay, and average power consumption. In addition, the feasibility of multi-hop transmission based on CIS is explored, and the model is extended to analyze end-to-end PLR and throughput. Finally, the accuracy of the proposed model is verified by simulation results, and the relationship between CIS parameters and performance is analyzed, providing guidance for parameter selection in LE Audio applications.
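Under an independence assumption, the multi-hop extension reduces to composing per-hop loss probabilities; a minimal sketch (the paper's full model additionally covers backlog, delay, and power):

```python
def hop_plr(p_pdu, retransmissions):
    """Per-hop PLR when a PDU may be retried: the packet is lost only
    if every attempt fails (independent-attempt assumption)."""
    return p_pdu ** (retransmissions + 1)

def end_to_end_plr(per_hop_plr):
    """End-to-end PLR over independent CIS hops: a packet survives
    only if every hop delivers it."""
    p_success = 1.0
    for p in per_hop_plr:
        p_success *= (1.0 - p)
    return 1.0 - p_success

# Two relay hops, raw PDU error rate 0.1, one retry per hop
print(end_to_end_plr([hop_plr(0.1, 1)] * 2))   # ~0.0199
```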
Facilitating URLLC vis-à-vis UAV-enabled relaying for MEC Systems in 6G Networks
Ali Ranjha

and 3 more

February 12, 2024
The futuristic sixth-generation (6G) networks will empower ultra-reliable and low-latency communications (URLLC), enabling a wide array of mission-critical applications such as mobile edge computing (MEC) systems, which are largely unsupported by fixed communication infrastructure. To remedy this issue, unmanned aerial vehicles (UAVs) have recently come to the limelight to facilitate MEC for Internet of Things (IoT) devices, as they provide desirable line-of-sight (LoS) communications compared to fixed terrestrial networks, thanks to their added flexibility and three-dimensional (3D) positioning. In this paper, we consider UAV-enabled relaying for MEC systems for uplink transmissions in 6G networks, and we aim to optimize mission completion time subject to resource-allocation constraints, including UAV transmit power, UAV CPU frequency, decoding error rate, blocklength, communication bandwidth, and task partitioning, as well as 3D UAV positioning. Moreover, to solve the non-convex optimization problem, we propose three different algorithms: successive convex approximation (SCA), an altered genetic algorithm (AGA), and smart exhaustive search (SES). Based on time complexity, execution time, and convergence analysis, we select AGA to solve the given optimization problem. Simulation results demonstrate that the proposed algorithm can successfully minimize the mission completion time, perform power allocation at the UAV side to mitigate information leakage and eavesdropping, and determine 3D UAV positioning, yielding better results than the fixed benchmark sub-methods. Lastly, subject to 3D UAV positioning, AGA can also effectively reduce the decoding error rate for supporting URLLC services.
StethoNet: Robust Breast Cancer Mammography Classification Framework
Charalampos Lamprou

and 5 more

February 12, 2024
Despite the emergence of numerous Deep Learning (DL) models for breast cancer detection via mammograms, there is a lack of evidence about their robustness on new, unseen mammograms. To fill this gap, we introduce StethoNet, a DL-based framework that consists of multiple trained Convolutional Neural Network (CNN) models for classifying benign and malignant tumors. StethoNet was trained on the Chinese Mammography Database (CMMD) and tested on unseen images from CMMD as well as on images from two independent datasets, namely the VinDr-Mammo and INbreast datasets. To mitigate domain-shift effects, we applied an effective entropy-based domain adaptation technique at the preprocessing stage. Furthermore, a Bayesian hyperparameter optimization scheme was implemented for StethoNet optimization. To ensure interpretable results that corroborate prior clinical knowledge, attention maps generated using Gradient-weighted Class Activation Mapping (Grad-CAM) were compared with regions of interest (ROIs) identified by radiologists. StethoNet achieved impressive area under the receiver operating characteristic curve (AUC) scores: 90.7% (88.6%-92.8%), 83.9% (76.0%-91.8%), and 85.7% (82.1%-89.4%) for the CMMD, INbreast, and VinDr-Mammo datasets, respectively. These results surpass the current state of the art and highlight the robustness and generalizability of StethoNet, scaffolding the integration of DL models into breast cancer mammography screening workflows. Our code is available at https://github.com/CharLamp10/breast cancer detection.git.
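Grad-CAM attention maps of the kind compared with radiologists' ROIs can be generated in a few lines of TensorFlow; the function below is generic and not tied to StethoNet's architecture:

```python
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name):
    """Class activation map from gradients of the top class score with
    respect to the last convolutional feature maps (standard Grad-CAM)."""
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[tf.newaxis, ...])
        top_class = int(tf.argmax(preds[0]))
        score = preds[:, top_class]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool the grads
    cam = tf.nn.relu(tf.reduce_sum(
        conv_out * weights[:, tf.newaxis, tf.newaxis, :], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8))[0].numpy()  # normalized heatmap
```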
A Review of the Factors Impacting the Optimal Placement of Data Centers
Viraj Nain

February 12, 2024
Our current digital landscape relies heavily on data centers to store and process information for online applications accessed by millions of users. Any failure of these operations significantly affects the productivity and critical operations of an organization. Additionally, data centers are high energy consumers. Climate change and the scarcity of energy resources due to political or local constraints emphasize the need to analyze the strategic placement of data centers in depth, in order to understand the factors contributing to risks and opportunities in data center placement.
DTN Demonstrations with ESA Ground Segment
Camillo Malnati

and 1 more

February 19, 2024
In this paper, we present the results of two DTN demonstration activities carried out in the ESA Ground Segment. The first demonstration was prepared with the OPS-SAT spacecraft to demonstrate a full DTN protocol stack, with CFDP, the Bundle Protocol, LTP, and the CCSDS Space Packet Protocol, and to show the ESA Ground Segment's Bundle Protocol implementation capabilities. The second demonstration was performed in collaboration with Morehead State University, NASA JPL, and D3TN, with the aim of showing the interoperability of DTN implementations across space agencies and external partners.
CSIPose: Unveiling Body Pose Using Commodity WiFi Devices Through the Wall
Yangyang Gu

and 7 more

February 12, 2024
The popularity of WiFi devices and the development of WiFi sensing have alerted people to the threat of WiFi sensing-based privacy leakage, especially the privacy of human poses. Existing work on human pose estimation is deployed in indoor scenarios or simple occlusion scenarios (e.g., a wooden screen), which are less threatening in attack settings. To reveal the risk of pose-privacy leakage from commodity WiFi devices, we propose CSIPose, a privacy-acquisition attack that passively estimates dynamic and static human poses in through-the-wall scenarios. We design a three-branch network based on knowledge distillation, an autoencoder, and self-attention mechanisms, so that video frames supervise CSI frames to generate human pose skeleton frames. Notably, we design AveCSI, a unified framework for the preprocessing and feature extraction of CSI data corresponding to dynamic and static poses. This framework averages CSI sequences to generate CSI frames, mitigating the instability of passively collected CSI data, and utilizes a self-attention mechanism to enhance key features. We evaluate the performance of CSIPose across different room layouts, subjects, devices, subject locations, and device locations, and the evaluation results demonstrate the generalizability of the system. Finally, we discuss measures to mitigate this attack.
Automatic Network Intrusion Detection System Using Machine Learning and Deep Learning
Mohammed Mynuddin
Sultan Uddin Khan

and 6 more

March 06, 2024
In recent years, the popularity of network intrusion detection systems (NIDS) has surged, driven by the widespread adoption of cloud technologies. Given the escalating network traffic and the continuous evolution of cyber threats, a highly efficient NIDS has become paramount for ensuring robust network security. Typically, intrusion detection systems either use pattern matching or leverage machine learning for anomaly detection. While pattern-matching approaches tend to suffer from a high false positive rate (FPR), machine learning-based systems, such as SVM and KNN, predict potential attacks by recognizing distinct features. However, these models often operate on a limited set of features, resulting in lower accuracy and higher FPR. In our research, we introduce a deep learning model that harnesses the strengths of a Convolutional Neural Network (CNN) combined with a Bidirectional LSTM (Bi-LSTM) to learn spatial and temporal data features. The model, evaluated using the NSL-KDD dataset, exhibits a high detection rate with a minimal false positive rate. To enhance accuracy, K-fold cross-validation was employed in training the model. This paper showcases the effectiveness of the CNN with Bi-LSTM algorithm in achieving superior performance across metrics such as accuracy, F1-score, precision, and recall. The binary classification model trained on the NSL-KDD dataset demonstrates outstanding performance, achieving a high accuracy of 99.5% after 10-fold cross-validation, with an average accuracy of 99.3%. The model exhibits a remarkable detection rate (0.994) and a low false positive rate (0.13). In the multiclass setting, the model maintains exceptional precision (99.25%), reaching a peak accuracy of 99.59% for k=10. Notably, the detection rate for k=10 is 99.43%, and the mean false positive rate is 0.214925.
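A minimal Keras sketch of a CNN feeding a Bi-LSTM for binary NSL-KDD classification; the 122-feature input assumes one-hot-encoded categorical fields, and the layer sizes are illustrative rather than the paper's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def cnn_bilstm_ids(n_features=122):
    """CNN + Bi-LSTM binary intrusion detector (illustrative sizes)."""
    inp = layers.Input(shape=(n_features, 1))
    x = layers.Conv1D(64, 3, activation="relu", padding="same")(inp)  # spatial features
    x = layers.MaxPooling1D(2)(x)
    x = layers.Bidirectional(layers.LSTM(64))(x)                      # temporal features
    out = layers.Dense(1, activation="sigmoid")(x)                    # attack vs. normal
    return Model(inp, out)

model = cnn_bilstm_ids()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# 10-fold cross-validation would wrap model.fit with
# sklearn.model_selection.StratifiedKFold over the NSL-KDD training split.
```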
At the Dawn of Generative AI Era: A Tutorial-cum-Survey on New Frontiers in 6G Wirele...
Abdulkadir Celik

and 1 more

February 12, 2024
As we transition from the 5G epoch, a new horizon beckons with the advent of 6G, seeking a profound fusion with novel communication paradigms and emerging technological trends and bringing once-futuristic visions to life along with added technical intricacies. Although analytical models lay the foundations and offer systematic insights, we have recently witnessed a noticeable surge in research suggesting that machine learning (ML) and artificial intelligence (AI) can efficiently deal with complex problems by complementing or replacing model-based approaches. The majority of data-driven wireless research leans heavily on discriminative AI (DAI), which requires vast real-world datasets. Unlike DAI, generative AI (GenAI) pertains to generative models (GMs) capable of discerning the underlying distribution, patterns, and features of the input data. This makes GenAI a crucial asset in the wireless domain, wherein real-world data is often scarce, incomplete, costly to acquire, and hard to model or comprehend. With these appealing attributes, GenAI can replace or supplement DAI methods in various capacities. Accordingly, this combined tutorial-survey paper commences with preliminaries of 6G and wireless intelligence by outlining candidate 6G applications and services, presenting a taxonomy of state-of-the-art DAI models, exemplifying prominent DAI use cases, and elucidating the multifaceted ways through which GenAI enhances DAI. Subsequently, we present a tutorial on GMs by spotlighting seminal examples such as generative adversarial networks, variational autoencoders, flow-based GMs, diffusion-based GMs, generative transformers, large language models, and autoregressive GMs, to name a few. Contrary to the prevailing belief that GenAI is a nascent trend, our exhaustive review of approximately 120 technical papers demonstrates the scope of research across core wireless research areas, including 1) physical layer design; 2) network optimization, organization, and management; 3) network traffic analytics; 4) cross-layer network security; and 5) localization and positioning. Furthermore, we outline the central role of GMs in pioneering areas of 6G network research, including semantic communications, integrated sensing and communications, THz communications, extremely large antenna arrays, near-field communications, digital twins, AI-generated content services, mobile edge computing and edge AI, adversarial ML, and trustworthy AI. Lastly, we shed light on the multifarious challenges ahead, suggesting potential strategies and promising remedies. Given its depth and breadth, we are confident that this tutorial-cum-survey will serve as a pivotal reference for researchers and professionals delving into this dynamic and promising domain.
A Comparison of Neural Network-Based Intrusion Detection Against Signature-Based Dete...
Max Schrötter

and 2 more

February 09, 2024
Over the last few years, a plethora of papers presenting machine learning-based approaches for intrusion detection have been published. However, the majority of those papers do not compare their results with a proper baseline of a signature-based intrusion detection system, thus violating good machine learning practices. In order to evaluate the pros and cons of the machine learning-based approach, we replicated a research study that uses a deep neural network model for intrusion detection. The results of our replication expose several systematic problems with the datasets and evaluation methods used. In our experiments, a signature-based intrusion detection system with a minimal setup was able to outperform the tested model even under small traffic changes, and testing the replicated neural network on a new dataset recorded in the same environment with the same attacks and the same tools showed that its accuracy dropped to 54%. Furthermore, the often-claimed advantage of being able to detect zero-day attacks could not be observed in our experiments.
A Mathematical Theory of Semantic Communication
Kai Niu

and 1 more

February 12, 2024
The year 1948 witnessed the historic moment of the birth of classic information theory (CIT). Guided by CIT, modern communication techniques have approached the theoretical limits, such as the entropy H(U), the channel capacity C = max_{p(x)} I(X;Y), and the rate-distortion function R(D) = min_{p(x̂|x): E[d(x,x̂)]≤D} I(X;X̂). Semantic communication paves a new direction for future communication techniques, whereas the guiding theory is missing. In this paper, we try to establish a systematic framework of semantic information theory (SIT). We investigate the behavior of semantic communication and find that synonymy is its basic feature, so we define the synonymous mapping between semantic information and syntactic information. Stemming from this core concept, we introduce the measures of semantic information, such as the semantic entropy H_s(Ũ), the up and down semantic mutual information I^s(X̃;Ỹ) and I_s(X̃;Ỹ), the semantic capacity C_s = max_{p(x)} I_s(X̃;Ỹ), and the semantic rate-distortion function R_s(D) = min_{p(x̂|x): E[d_s(x̃,x̂)]≤D} I_s(X̃;X̂). Furthermore, we prove three coding theorems of SIT using random coding and (jointly) typical decoding/encoding: the semantic source coding theorem, the semantic channel coding theorem, and the semantic rate-distortion coding theorem. We find that the limits of SIT are extended by synonymous mapping, that is, H_s(Ũ) ≤ H(U), C_s ≥ C, and R_s(D) ≤ R(D). All these results constitute the basis of semantic information theory. In addition, we discuss the semantic information measures in the continuous case. In particular, for the band-limited Gaussian channel, we obtain a new channel capacity formula, C_s = B log[S⁴(1 + P/(N₀B))], where S is the synonymous length. In summary, the theoretical framework of SIT proposed in this paper is a natural extension of CIT and may reveal great performance potential for future communication.
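In standard notation, the key relations in this abstract read as follows (a reconstruction from the plain-text formulas above; the logarithm base follows the abstract):

```latex
% Classical limits of CIT
H(U), \qquad
C = \max_{p(x)} I(X;Y), \qquad
R(D) = \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}\,d(x,\hat{x}) \le D} I(X;\hat{X})

% Semantic counterparts under a synonymous mapping
H_s(\tilde{U}) \le H(U), \qquad
C_s = \max_{p(x)} I_s(\tilde{X};\tilde{Y}) \ge C, \qquad
R_s(D) = \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}\,d_s(\tilde{x},\hat{x}) \le D} I_s(\tilde{X};\hat{X}) \le R(D)

% Band-limited Gaussian channel with synonymous length S
C_s = B \log\!\left[ S^4 \left( 1 + \frac{P}{N_0 B} \right) \right]
```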
Soft Knowledge-based Distilled Dehazing Networks
Le-Anh Tran

and 1 more

February 06, 2024
A two-stage knowledge transfer framework for distilling efficient dehazing networks is proposed in this paper. Recently, lightweight dehazing studies based on knowledge distillation have shown great promise. However, existing approaches have focused only on exploiting knowledge extracted from clean images (hard knowledge) while neglecting the concise knowledge encoded from hazy images (soft knowledge). Additionally, recent methods have emphasized process-oriented learning over response-oriented learning. Motivated by these observations, the proposed framework aptly exploits soft knowledge and response-oriented learning to produce improved dehazing models. A general encoder-decoder dehazing structure is utilized as the teacher network as well as a basis for constructing the student model, whose complexity is drastically reduced using a channel multiplier. A transmission-aware loss is adopted that leverages transmission information to enhance the network's generalization ability across different haze densities. The derived network, called the Soft knowledge-based Distilled Dehazing Network (SDDN), achieves a significant reduction in complexity while maintaining satisfactory performance and, in certain cases, even better generalization capability. Experiments on various benchmark datasets demonstrate that SDDN competes with prevailing dehazing approaches. Moreover, SDDN shows promising applicability to intelligent driving systems: when combined with YOLOv4, SDDN improves detection performance under hazy weather by 9.1% with only a negligible increase in the number of parameters (0.87%). The code of this work is publicly available at https://github.com/tranleanh/sddn.
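The response-oriented distillation objective can be sketched as a weighted sum of a soft term (matching the teacher's restored output) and a hard term (matching the clean ground truth); the paper's transmission-aware term is omitted here, and `alpha` is an illustrative weight:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, clean_gt, alpha=0.5):
    """Response-oriented distillation sketch for dehazing.

    The soft term transfers the teacher's response (soft knowledge at the
    output level); the hard term anchors the student to the clean image.
    """
    soft = F.l1_loss(student_out, teacher_out.detach())  # no grads to teacher
    hard = F.l1_loss(student_out, clean_gt)
    return alpha * soft + (1.0 - alpha) * hard

# Toy tensors standing in for batches of restored images
s = torch.rand(2, 3, 64, 64, requires_grad=True)
t = torch.rand(2, 3, 64, 64)
gt = torch.rand(2, 3, 64, 64)
distillation_loss(s, t, gt).backward()
```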
UB3: Fixed Budget Best Beam Identification in mmWave Massive MISO via Pure Exploratio...
Debamita Ghosh

and 2 more

February 02, 2024
One of the core problems in millimeter-wave (mmWave) massive multiple-input single-output (MISO) communication systems, which significantly affects the data rate, is the misalignment of the transmitter's beam direction towards the receiver. In this paper, we investigate strategies that identify the best beam within a fixed duration of time. To this end, we develop an algorithm, named Unimodal Bandit for Best Beam (UB3), that exploits the unimodal structure of the mean received signal strength as a function of the available beams and identifies the best beam within a fixed time duration using pure exploration strategies. We derive an upper bound on the probability of misidentifying the best beam and prove that it is of the order O(log₂K exp{−α_n A}), where K is the number of beams, A is a problem-dependent constant, and α_n is the number of pilots used in the channel estimation phase. In contrast, when the unimodal structure is not exploited, the error probability is of order O(log₂K exp{−α_n A/(K log K)}). Thus, by exploiting the unimodal structure, we achieve a much better error probability, which depends only logarithmically on K. We demonstrate that UB3 outperforms state-of-the-art algorithms through extensive simulations.
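How unimodality prunes the beam set can be seen in a simplified ternary-search-style eliminator; this is a sketch in the spirit of UB3, not the paper's exact algorithm or its guarantee:

```python
import numpy as np

def unimodal_best_beam(sample, n_beams, budget, probes_per_round=30):
    """Fixed-budget search exploiting a unimodal mean RSS profile.

    Each round samples two interior beams and, by unimodality, discards
    the outer third adjacent to the worse one.
    """
    lo, hi = 0, n_beams - 1
    while budget > 0 and hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        r1 = np.mean([sample(m1) for _ in range(probes_per_round)])
        r2 = np.mean([sample(m2) for _ in range(probes_per_round)])
        budget -= 2 * probes_per_round
        if r1 < r2:
            lo = m1 + 1        # the peak cannot lie at or left of m1
        else:
            hi = m2 - 1
    return max(range(lo, hi + 1),
               key=lambda b: np.mean([sample(b) for _ in range(10)]))

# Unimodal mean RSS peaked at beam 17, with Gaussian noise
rng = np.random.default_rng(0)
rss = lambda b: -abs(b - 17) + rng.normal(0, 0.5)
print(unimodal_best_beam(rss, n_beams=64, budget=600))   # -> 17
```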
A Decentralized Dynamic Relaying-Based Framework for Enhancing LoRa Networks Performa...
Hamza Haif

and 2 more

January 30, 2024
Long-Range (LoRa) technology holds tremendous potential for regulating and coordinating communication among Internet-of-Things (IoT) devices due to its low power consumption and cost-effectiveness. However, LoRa faces significant obstacles, such as a reduced coverage area, a high packet drop ratio (PDR), and an increased likelihood of collisions, all of which result in substandard data rates. In this paper, we present a novel approach that employs a relaying node capable of allocating resources dynamically based on signal parameters. In particular, the geometric placement of the relay node is determined by a genetic algorithm that maximizes the signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR) success probabilities. Using an equal-area-based (EAB) spreading factor (SF) distance allocation scheme, the coverage area is sliced into distinct regions in order to derive the success probabilities for the different communication stages. Furthermore, we present a frequency channel shuffling algorithm to prevent collisions between end devices (EDs) without increasing the complexity of the relaying nodes. Through extensive simulations, we demonstrate that our proposed scheme effectively expands the coverage area, conserves transmission resources, and enhances the system's throughput. Specifically, our approach extends the range by up to 40%, increases the throughput by up to 50% compared to conventional methods, and achieves a 40% increase in success probability. To validate the practicality of our approach, we implemented our algorithm in an active LoRa network utilizing an ESP32 LoRa SX1276 module, showcasing its compatibility with real-world scenarios.
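The EAB allocation is easy to reproduce: slicing a disc of radius R into K equal-area annuli puts ring k's outer edge at R·sqrt(k/K), so every spreading factor serves the same area and, under uniform device density, the same expected number of end devices. A sketch with an illustrative 6 km cell:

```python
import math

SFS = [7, 8, 9, 10, 11, 12]   # LoRa spreading factors, nearest ring first

def eab_sf_rings(coverage_radius_m):
    """Outer radii of K equal-area SF rings: r_k = R * sqrt(k / K)."""
    K = len(SFS)
    return [coverage_radius_m * math.sqrt((k + 1) / K) for k in range(K)]

def sf_for_distance(d, boundaries):
    """Assign the lowest SF whose ring contains the device."""
    for sf, r in zip(SFS, boundaries):
        if d <= r:
            return sf
    return SFS[-1]

rings = eab_sf_rings(6000.0)            # 6 km cell, illustrative
print([round(r) for r in rings])        # ~2449, 3464, 4243, 4899, 5477, 6000
print(sf_for_distance(3000.0, rings))   # -> 8
```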
Pragmatic and Symbiotic Quintuple Helix Model Mitigating Emerging Technologies Disrup...
Hadi Naghavi pour

and 5 more

February 06, 2024
Rapid changes arising from the technological revolution, mainly the advent of artificial intelligence, have been defining moments of the 21st century. Education and employment stand at the forefront of AI disruption; as a result, academia and industry, as the principal agents of this development, are deemed mutually responsible for mitigating risks. Nonetheless, triggered by massive technological, economic, and social development at an accelerating rate, the industry-academia gap in orientation and direction is widening, leading to detrimental disarray. As the disruptive impact has far-reaching consequences, the roles of government entities as the regulatory body and of civil society as the end users of these technologies, together with their impact on the environment, are relevant and imperative. The extensive work on the quintuple helix model of innovation has theorized innovation ecosystems and processes without addressing the pragmatic aspect of stimulating innovation for specific problems. This paper puts forward a vision for mitigating AI disruption by leveraging value creation and exchange, harnessing the symbiotic aspects of the helices, and considering the necessity of knowledge circulation to address environmental and social impact. Finally, it advocates a series of mitigation strategies through a collaborative council empowered by a human-centric ICT platform that can evolve into a dynamic policy-making mechanism.