Data analytics for networks is the use of advanced techniques and tools to extract insights and knowledge from the large, complex datasets generated by network devices, applications, and services. It encompasses collecting, storing, processing, and analyzing large amounts of data to identify patterns, trends, and anomalies that carry valuable information for network operators. Leveraging data analytics, operators and researchers can make informed decisions about network planning, capacity management, service delivery, and customer experience. Data analytics also helps network operators detect and respond to security threats and attacks by analyzing network traffic, identifying abnormal behavior, and uncovering potential vulnerabilities. Overall, data analytics is a critical component of massive networks, turning massive datasets into improvements in network performance, efficiency, and security.
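To make the traffic-analysis step concrete, below is a minimal sketch of one common technique, a rolling z-score detector over a traffic-volume series. The window size, threshold, and synthetic data are illustrative assumptions, not part of the talk.

```python
# A minimal sketch of anomaly detection on a traffic-volume series using a
# rolling z-score. Window, threshold, and the synthetic data are illustrative.
import numpy as np

def rolling_zscore_anomalies(traffic, window=60, threshold=5.0):
    """Return indices where traffic deviates strongly from recent history."""
    traffic = np.asarray(traffic, dtype=float)
    anomalies = []
    for t in range(window, len(traffic)):
        history = traffic[t - window:t]          # recent baseline
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(traffic[t] - mu) / sigma > threshold:
            anomalies.append(t)
    return anomalies

# Synthetic per-minute byte counts with one injected burst.
rng = np.random.default_rng(0)
series = rng.normal(1000, 50, 500)
series[300] = 2000                               # simulated traffic spike
print(rolling_zscore_anomalies(series))          # -> [300], the injected spike
```

A production detector would of course handle seasonality, multiple metrics, and concept drift; the point here is only the pattern-then-deviation logic the abstract describes.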
Panos Pardalos
University of Florida
Panos Pardalos was born in Drosato (Mezilo), Argithea, Greece, in 1954 and graduated from Athens University (Department of Mathematics). He received his PhD in Computer and Information Sciences from the University of Minnesota. He is an Emeritus Distinguished Professor in the Department of Industrial and Systems Engineering at the University of Florida and an affiliated faculty member of the Biomedical Engineering and Computer & Information Science & Engineering departments. Since 2011 he has been the academic advisor of the LATNA laboratory at NRU HSE.
Panos Pardalos is a world-renowned leader in Global Optimization, Mathematical Modeling, Energy Systems, Financial Applications, and Data Sciences. He is a Fellow of AAAS, AAIA, AIMBE, EUROPT, and INFORMS and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, he received the 2013 EURO Gold Medal, bestowed by the Association of European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.”
Panos Pardalos was awarded a prestigious Humboldt Research Award (2018-2019). The Humboldt Research Award is granted in recognition of a researcher’s entire achievements to date: fundamental discoveries, new theories, and insights that have had a significant impact on their discipline.
Panos Pardalos is also a member of several Academies of Sciences and holds several honorary PhD degrees and affiliations. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the International Journal of Global Optimization, Computational Management Science, and Springer Nature Operations Research Forum. He has published over 600 journal papers and edited or authored over 200 books. He is one of the most cited authors in his field and has graduated 71 PhD students to date. Details can be found at www.ise.ufl.edu/pardalos
After more than two years of scanning the sky, the eROSITA X-ray telescope aboard the SRG orbital observatory has produced the best X-ray maps of the sky to date and discovered more than three million X-ray sources, of which about 20% are stars with active coronas in the Milky Way, while most of the rest are galaxies with active nuclei, quasars, and clusters of galaxies. eROSITA detected over a thousand sources that changed their luminosity by more than an order of magnitude, including about a hundred tidal disruption events; two of these are associated with IceCube neutrinos. SRG/eROSITA samples of quasars and galaxy clusters will make it possible to study the large-scale structure of the Universe at z ~ 1 and to measure its cosmological parameters. I will review some of the SRG/eROSITA results in the Eastern Galactic hemisphere.
Marat Gilfanov
https://iki.cosmos.ru/news/60-let-akademiku-maratu-ravilevichu-gilfanovu
Institute of Space Research RAS, Max Planck Institute for Astrophysics
Marat Gilfanov is an astrophysicist working at the interface of observational and theoretical astrophysics, focusing on a broad range of problems in high-energy astrophysics, extragalactic astronomy, and cosmology. He has made a number of widely known contributions to the astrophysics of relativistic compact objects, the physics of accretion and boundary layers, X-ray scaling relations for star-forming galaxies, the nature of the progenitors of SN Ia, and fluctuations of the cosmic X-ray background. Presently, the main focus of Marat’s research is the SRG/eROSITA X-ray all-sky survey. Using its data, he studies tidal disruption events, the growth of supermassive black holes, the large-scale structure of the Universe, and the use of all-sky survey data for cosmology. He is one of the creators of the X-ray all-sky map obtained by eROSITA in 2020. Marat Gilfanov holds the title of Professor of Astrophysics; he is a full member of the Russian Academy of Sciences and a member of Academia Europaea. He is a recipient of the COSPAR (Committee on Space Research) Yakov Zeldovich Medal for young scientists and of the Russian Academy of Sciences A.A. Belopolsky Prize in astrophysics.
A time series is a chronologically ordered sequence of real-valued data points that reflect a particular process or phenomenon. Time series are currently ubiquitous in various domains, including digital industry, personal health care, the Internet of Things, climate modeling, and others. Anomaly detection and event prediction in time series have become hot topics in these areas, especially for online processing. In this plenary talk, we will present our work on the above problems. We have developed two parallel algorithms, for a single GPU and for a high-performance cluster, that discover representative time series patterns called “snippets”. These patterns are then used in our deep learning models to detect anomalies and predict events in real time. We will also present real-world case studies to demonstrate the effectiveness of our approach.
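As a rough illustration of the idea of a representative pattern (not the authors’ parallel GPU/cluster algorithms, and using plain Euclidean distance where the published snippet definition relies on the MPdist measure), the following sketch splits a series into fixed-length segments and returns the one closest on average to all the others:

```python
# A simplified, serial stand-in for snippet discovery: segment the series
# into fixed-length windows and return the medoid segment, i.e. the most
# "typical" pattern. Segment length and the toy series are assumptions.
import numpy as np

def find_representative_segment(series, seg_len):
    series = np.asarray(series, dtype=float)
    n_segs = len(series) // seg_len
    segs = series[:n_segs * seg_len].reshape(n_segs, seg_len)
    # Pairwise Euclidean distances between all segments.
    dists = np.linalg.norm(segs[:, None, :] - segs[None, :, :], axis=2)
    medoid = dists.sum(axis=1).argmin()          # minimizes total distance
    return medoid, segs[medoid]

# Toy series: a repeating daily-like cycle plus noise.
rng = np.random.default_rng(1)
t = np.arange(1000)
series = np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.1, 1000)
idx, pattern = find_representative_segment(series, seg_len=100)
print("most representative segment starts at", idx * 100)
```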
Mikhail Zymbler
Doctor of Science (Physics and Mathematics)
South Ural State University, Chelyabinsk, Russia
Deputy Director of the Scientific and Educational Center “Artificial Intelligence and Quantum Technologies”
Research interests:
Machine Learning, time series mining, parallel algorithms, database management systems
Machine learning in high dimensions faces many specific problems (the curse of dimensionality) as well as benefits (the blessing of dimensionality). A source of many problems is the impossibility of restoring probability distributions even in moderately large dimensions. The notion of the dimensionality of data also needs to be refined, because even standard linear-algebra notions such as the rank of empirical data matrices become irrelevant in high dimensions. Solutions offered by AI in high dimensions are extremely unstable and vulnerable to many attacks, and a priori estimates of reliability are often impossible or impractical. In this situation, we propose to consider every high-dimensional machine learning task in the frame of the hypothetico-deductive method developed by Galileo for physics and analyzed in detail by Feynman in his “The Character of Physical Law”: “We never are definitely right, we can only be sure we are wrong… We are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress.” Instead of the classical engineering theory of reliability, we have to use the less definite approach of natural science. In this talk we review new notions of data dimensionality, methods and software for its evaluation, general theorems on the instability and vulnerability of high-dimensional AI decisions, and blessing-of-dimensionality effects in classification and in the organisation of memory. The general statements are illustrated by a series of examples from various areas: pattern recognition, medical diagnosis, and natural language processing.
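A minimal numerical illustration of one of the effects above (not taken from the talk): for i.i.d. points, the relative contrast between the nearest and farthest neighbour distances collapses as the dimension grows, so distance-based reasoning degrades.

```python
# Distance concentration: as dimension d grows, pairwise distances between
# i.i.d. points become nearly indistinguishable. Sample sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
for d in (2, 10, 100, 1000, 10000):
    points = rng.uniform(size=(200, d))
    # Distances from the first point to all others.
    dists = np.linalg.norm(points[1:] - points[0], axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:6d}  relative contrast={contrast:.3f}")
# The printed contrast shrinks toward 0 as d grows.
```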
Alexander Gorban
University of Leicester, UK
Alexander N. Gorban has held a Personal Chair in applied mathematics at the University of Leicester since 2004. He worked for the Russian Academy of Sciences, Siberian Branch, and for ETH Zurich, and was a visiting professor and scholar at the Clay Mathematics Institute (Cambridge, MA), IHES (Bures-sur-Yvette), the Courant Institute (New York), and the Isaac Newton Institute (Cambridge, UK). His main research interests are machine learning, data mining and model reduction problems, kinetics, and applied dynamics.
Ivan Tyukin
King’s College London, UK, and Centre for Artificial Intelligence, Skolkovo, Russia
Professor Tyukin is the Head of the Mathematical Data Science Group in the Department of Mathematics at King’s College London. After completing his PhD in 2001, he worked as a Research Scientist at the RIKEN Brain Science Institute, Japan. In 2006 he was awarded a DSc (Habilitation) degree, and in 2007 he became an RCUK Academic Fellow at the University of Leicester, where he was promoted to Lecturer in Applied Mathematics, Reader, and Professor in Applied Mathematics in 2012, 2014, and 2018, respectively. In 2022 he joined King's College London as a Professor of Mathematical Data Science and Modelling. He is an Editor of Communications in Nonlinear Science and Numerical Simulation.
In 2019-2021 he held an Adjunct Professorship at the Norwegian University of Science and Technology (NTNU). In 2021 he was awarded a UKRI Turing AI Acceleration Fellowship to work on the mathematics underpinning the development of robust, stable, and resilient AI, an area in which he remains active.
Graph databases have recently emerged as very useful tools for analysing and processing graph data. At the same time, relational databases have long had capabilities for working with graphs, and with the recent standardisation of a graph query language, graph extensions to SQL (SQL/PGQ) have been proposed. Studies have already emerged on how to implement the SQL/PGQ extensions efficiently over relational databases, and open prototype systems such as the DuckPGQ project have been built. Based on my experience at TigerGraph, a top company in the large-scale graph analytics space, I will discuss these developments with the goal of establishing a roadmap for a competitive graph implementation in relational systems.
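For readers unfamiliar with SQL/PGQ, the sketch below shows the general shape of a GRAPH_TABLE/MATCH query in the SQL:2023 style, next to the equivalent computation written out over plain tables. The graph, labels, and properties are hypothetical, and the DDL for declaring a property graph over relational tables varies by system.

```python
# A hedged sketch of the SQL/PGQ (SQL:2023) query shape discussed above.
# Graph name, labels, and properties are made up for illustration.
PGQ_QUERY = """
SELECT gt.src, gt.dst
FROM GRAPH_TABLE (social_graph
    MATCH (a IS person)-[k IS knows]->(b IS person)
    WHERE a.name = 'Alice'
    COLUMNS (a.name AS src, b.name AS dst)
) gt
"""

# What the MATCH clause computes, spelled out as a plain join over the
# underlying relational tables. The point of SQL/PGQ is that the engine can
# evaluate such patterns with graph-aware operators instead of generic joins.
person = {1: "Alice", 2: "Bob", 3: "Carol"}      # vertex table: id -> name
knows = [(1, 2), (1, 3), (2, 3)]                 # edge table: (src_id, dst_id)

rows = [(person[s], person[d]) for s, d in knows if person[s] == "Alice"]
print(rows)                                      # [('Alice', 'Bob'), ('Alice', 'Carol')]
```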
Pavel Velikhov
Pavel Velikhov is a researcher and developer in database management systems who has held leading roles in the development of commercial and open-source systems: Enosys, SciDB, GaussDB, and TigerGraph. Currently he is a lead developer of YDB, heading the analytics direction.
Modern information systems scale thanks to the parallelism of hardware and applications. However, the architecture of classical relational DBMSs was largely developed in the era of single-core CPUs. The talk examines the architecture of the new LINTER SOQOL DBMS and its performance on modern hardware.
The talk will be given in Russian.
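As a minimal illustration of the general principle at stake (not the actual SOQOL architecture, which the talk will cover), the sketch below partitions a single aggregation query across worker threads and merges the partial results; the table size and worker count are arbitrary.

```python
# Intra-query parallelism in miniature: a single aggregation (a sum over a
# column) is split into partitions scanned by separate threads, and the
# partial results are merged. Single-core engine designs leave this unused.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def partial_sum(chunk):
    # NumPy releases the GIL inside np.sum, so threads scan in parallel.
    return float(np.sum(chunk))

column = np.arange(10_000_000, dtype=np.float64)   # stand-in for a table column
chunks = np.array_split(column, 4)                 # one partition per core
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))
print(total == float(np.sum(column)))              # True
```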
Andrey Korotchenko
Andrey Korotchenko is the chief architect of the SoQoL DBMS and a graduate of Voronezh State University (VSU). He has more than 25 years of development experience at the scientific and production company RELEX, a Russian developer of database management systems with more than 30 years of history.
| Event | Date |
|---|---|
| Submission deadline for tutorials | June 3, 2024 |
| Submission deadline for papers | June 17, 2024 |
| Notification for the first round | August 12, 2024 |
| Deadline for revised versions of papers forwarded to the second round of reviewing | September 2, 2024 |
| Final notification of acceptance | September 16, 2024 |
| Deadline for camera-ready versions of accepted papers | September 16, 2024 |
| Conference | October 23-25, 2024 |