Representation learning comprises a set of machine learning (ML) techniques that learn to extract a good representation of raw data. This representation is then used to train an ML model that predicts future states or values in support of decision making. Neural representation learning is the subset of representation learning techniques that uses deep learning for this task. Recently, neural representations have been learnt for text, images, logic gates and voice. For voice signals, it has been shown that implicit neural representations (INRs) with periodic (i.e. sinusoidal) activation functions can reconstruct signals at higher resolution than the state of the art, because they also capture high-frequency components of the signal well and can therefore detect sudden changes more accurately. While INRs are a recently emerging and highly active area of research, they have already been shown to be suitable for time-series imputation and forecasting as well.
Emerging AI-based orchestrators for 6G and beyond cellular networks make use of network metrics (i.e. time series) for efficient resource management and fault management. However, metric samples are sometimes irregular due to loss or jitter, causing typical AI models that rely on explicit representations to underperform. We therefore aim to study the potential of INRs to improve model performance for decision making in emerging wireless networks.