Artificial Intelligence (AI) and Machine Learning (ML) are a perfect match for 5G. While 5G offers the capabilities to support very high speeds and low latency (e.g., eMBB, URLLC), a massive number of devices (e.g., mMTC), and a heterogeneous mix of traffic types from a diverse and demanding suite of applications, AI/ML complements these capabilities by learning complex patterns from network data, enabling autonomous operation and transforming 5G into a scalable, data-driven, real-time network.

AI/ML is being used for 5G network planning, automation of network operations (e.g., provisioning, optimization, fault prediction, security, fraud detection), network slicing, reducing operating costs, and improving both quality of service and customer experience through chatbots, recommender systems, and techniques such as robotic process automation (RPA). Further, AI and ML are being used across all layers, from the disaggregated radio access layer (5G RAN), through integrated access and backhaul (IAB), to the distributed cloud layer (5G Edge/Core), to fine-tune performance.

In the 5G distributed cloud layer, AI and ML are being used for optimizing the use of system resources, autoscaling, anomaly detection, predictive analytics, prescriptive policies, and so on. The distributed cloud layer, in turn, provides acceleration technologies for AI/ML workloads to support federated and distributed learning.
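
To make one of these cloud-layer functions concrete, below is a minimal anomaly-detection sketch in Python; the KPI, window length, and threshold are illustrative assumptions rather than a production design. It flags points in a synthetic CPU-utilization series that deviate sharply from recent rolling statistics:

    import numpy as np

    rng = np.random.default_rng(3)
    kpi = rng.normal(50, 5, 500)   # synthetic CPU-utilization time series (%)
    kpi[400] = 95                  # injected anomaly

    window = 60                    # assumed rolling-window length
    for t in range(window, len(kpi)):
        mu, sd = kpi[t - window:t].mean(), kpi[t - window:t].std()
        if abs(kpi[t] - mu) > 4 * sd:          # assumed 4-sigma threshold
            print(f"anomaly at t={t}: {kpi[t]:.1f} vs recent mean {mu:.1f}")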

Beyond network operations, AI/ML is also being used for customer experience management and business support systems to support a multitude of emerging applications (e.g., AR/VR, Industrial IoT, autonomous vehicles, drones, Industry 4.0 initiatives, Smart Cities, Smart Ports).

In all of the above cases, aspects of data integrity, legal rights to data, data collection, data pipeline management, data lake design, and data science project cycles are considered. Aspects of model development, training, validation, deployment, and monitoring, as well as lifecycle management (including model libraries and in-service model updates and upgrades), are also considered.

Several learning approaches, such as supervised, unsupervised, reinforcement, federated, distributed, transfer, and deep learning, built on architectures such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), are used to train models for the target use cases.
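
As a deliberately simplified illustration of the supervised case, the sketch below trains a classifier to label traffic flows by service type; the feature space, class structure, and numbers are hypothetical stand-ins for what a real 5G data pipeline would extract:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Three assumed traffic classes in a toy feature space:
    # [mean packet size (bytes), mean inter-arrival time (ms)]
    means = np.array([[1200.0, 10.0],   # eMBB-like video
                      [200.0, 1.0],     # URLLC-like control
                      [100.0, 500.0]])  # mMTC-like telemetry
    X = np.vstack([m + rng.normal(0, [50.0, 0.2 * m[1]], (1000, 2)) for m in means])
    y = np.repeat([0, 1, 2], 1000)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", clf.score(X, y))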

Motivated by this progress, and to further advance these discussions, the AI/ML topical will bring together leading experts from telecom service providers, system OEMs, software service providers, silicon vendors, and open source network automation projects, as well as leading researchers from academia, to share their perspectives on the opportunities and challenges of operating 5G with AI/ML. It will provide a unique forum for practitioners and researchers to discuss recent developments, the evolving landscape of AI/ML technologies, deployment use cases in various 5G scenarios, and business benefits. Architects, developers, engineers, testers, and business leaders, as well as students and researchers from academia, will find it a valuable opportunity to hear from and network with experts and innovators from industry and academia.

Speakers’ Profiles:

Speaker: Mazin Gilbert, AT&T Labs
Title: The AI Revolution in a 5G World
Abstract: 5G is driving a transformation in how people live, work, and play. This transformation will result in radically new experiences in how we are entertained, manufacture products, receive healthcare, and communicate with each other. This presentation takes you on a journey to the year 2025 to provide a view of these experiences riding on the 5G network. Behind this emerging 5G world, AI is revolutionizing how we forecast, design, plan, and build the network, and how we optimize connectivity among people and “everything” through a dynamic, intelligent, and adaptive network. AI, empowered by machine learning and virtualization, is enabling an efficient, cost-effective, secure, and highly reliable network.

Bio: Dr. Mazin Gilbert is the Vice President of Advanced Technology and Systems at AT&T Labs. He leads AT&T’s research and advanced development of its network and access transformations. In this role, Mazin oversees advancements in artificial intelligence, software-defined networking and access, digital transformation, cloud technologies, open source software platforms, and big data. Mazin holds 176 U.S. patents in communication and multimedia processing and has published over 100 technical papers in human-machine communication. He is the author of the book “Artificial Neural Networks for Speech Analysis/Synthesis” (1992) and an editor of a recent book, “Artificial Intelligence for Autonomous Networks” (2018). With more than three decades of experience, Mazin previously worked at Bell Labs, the BBC, and British Telecom. He has also held academic positions at Rutgers University, Princeton University, and the University of Liverpool. He became an IEEE Fellow in 2012. Mazin earned bachelor’s and doctoral degrees, with first-class honors, in electrical engineering from the University of Liverpool. He also earned an MBA for Executives from the Wharton School of the University of Pennsylvania.

 

Speaker: Harish Viswanathan, Nokia Bell Labs
Title: Machine Learning Enhanced Wireless Communications: Opportunities for 5G and 6G
Abstract: The physical layer of cellular systems has been optimized over five generations of technology evolution through the application of deep knowledge of information and communication theory. Nevertheless, several recent studies applying deep learning techniques to wireless signal transmission and reception show potential for further improvement in spectral efficiency and/or reduction in receiver complexity. We discuss several AI/ML receiver concepts for 5G systems, such as random-access channel reception, channel estimation, and demapping, as well as a complete deep learning receiver, that show improvements over model-based receiver designs. Looking ahead toward a new 6G air-interface design, we discuss opportunities to optimize the transmitted signal using AI/ML.
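
For readers unfamiliar with learned demapping, the toy sketch below (our illustration, not the Bell Labs receiver design) trains a small neural network to recover symbol labels from noisy QPSK samples, a task classically handled by a log-likelihood demapper; the noise level and network size are assumptions:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    n = 20000
    bits = rng.integers(0, 2, (n, 2))
    # Gray-mapped QPSK symbols plus complex Gaussian noise (assumed std 0.3 per rail)
    symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    noisy = symbols + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    X = np.column_stack([noisy.real, noisy.imag])
    y = 2 * bits[:, 0] + bits[:, 1]          # 4-class symbol label

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    clf.fit(X[:15000], y[:15000])
    print("demapping accuracy:", clf.score(X[15000:], y[15000:]))
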
Bio: Harish Viswanathan is Head of the Radio Systems Research Group at Nokia Bell Labs. He leads an international team of researchers investigating various aspects of wireless communication systems. In his prior role as a CTO Partner, he was responsible for advising the Corporate CTO on Technology Strategy through in-depth analysis of emerging technology and market needs. Harish Viswanathan joined Bell Labs in 1997 and has worked on multiple antenna technology for cellular wireless networks, mobile network architecture, and IoT. He received the B. Tech. degree from the Department of Electrical Engineering, Indian Institute of Technology, Madras, India and the M.S. and Ph.D. degrees from the School of Electrical Engineering, Cornell University, Ithaca, NY. He holds more than 50 patents and has published more than 100 papers. He is a Fellow of the IEEE and a Bell Labs Fellow. He has served as an associate editor for the IEEE Communications Letters and as an adjunct faculty at Columbia University.

 

Speaker: Chandra R. Murthy, Indian Institute of Science, Bangalore
Title: Joint channel estimation and soft-symbol detection in massive MIMO systems with low-resolution ADCs
Abstract: In this talk, we present a variational Bayes algorithm for joint channel estimation and soft symbol decoding in an uplink massive multiple-input multiple-output (MIMO) receiver with low-resolution analog-to-digital converters (ADCs). The posterior beliefs obtained from the algorithm can easily be used to compute the bit log-likelihood ratios, which can be input to a channel decoder. We evaluate the symbol error probability and the normalized mean squared error of the channel estimates of the proposed algorithm using Monte Carlo simulations, and benchmark it against an unquantized variational Bayes algorithm with perfect and imperfect channel state information (CSI). We also show empirically that the perfect-CSI assumption made in several low-resolution-ADC massive MIMO papers greatly overestimates the performance of the system. This is joint work with Sai Subramanyam Thoota and Ramesh Annavajjala.
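
For intuition about the low-resolution regime (this is an illustrative Monte Carlo sketch with assumed dimensions and SNR, not the variational Bayes algorithm of the talk), one can simulate a one-bit-quantized massive MIMO uplink with a naive zero-forcing detector under perfect CSI and estimate its symbol error rate:

    import numpy as np

    rng = np.random.default_rng(0)
    n_rx, n_tx, snr_db, trials = 64, 4, 10, 2000   # assumed system parameters
    sigma = 10 ** (-snr_db / 20)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

    errors = 0
    for _ in range(trials):
        H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        s = qpsk[rng.integers(0, 4, n_tx)]
        noise = sigma * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)) / np.sqrt(2)
        y = H @ s + noise
        r = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)  # one-bit ADC per I/Q rail
        s_hat = np.linalg.pinv(H) @ r                              # naive zero-forcing on quantized samples
        detected = qpsk[np.argmin(np.abs(s_hat[:, None] - qpsk[None, :]), axis=1)]
        errors += int(np.count_nonzero(detected != s))

    print("symbol error rate:", errors / (trials * n_tx))

Even this crude detector exposes the qualitative cost of one-bit quantization; the algorithm presented in the talk additionally estimates the channel and produces the posterior beliefs needed for soft decoding.
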
Bio: Prof. Chandra R. Murthy is with the Electrical Communication Engineering Department, Indian Institute of Science, Bangalore. His research interests include signal processing, information theory, estimation theory, compressive sensing, and performance analysis and optimization of wireless systems.

 

Speaker: Joshua Ness, Verizon 5G Labs
Title: The New Ecosystem Orchestrators: 5G and the Rise of Collaborative Connected Solutions
Abstract:
A decade ago, advancements in device hardware drove innovations in wireless network infrastructure and cloud resource availability, and we have collectively been reaping the benefits ever since: the widespread adoption of virtual networking, spatial computing, and AI algorithms, applied across every industry, has redefined how we live. But the symbiosis of existing high-speed wireless connectivity and device capabilities is being upended by the arrival of 5G.

This advanced network-as-a-platform is poised to revolutionize how we interact with hardware (such as mobile phones and VR/AR devices), with significant implications for creators and technologists thinking about how the next ten years will be shaped and how companies will provide value to their customers. The fact is: 5G is here, and it will open the door to entire ecosystems of computing that use information from the world around us.
In this session, you will learn why life as we know it is about to change, how 5G and AI/ML will enable emerging technologies and entirely new ways for us to interact with each other, and how we can begin preparing for it now.
Bio:
Joshua Ness is a Senior Manager at Verizon 5G Labs in New York City. He partners with enterprises, startups, and academic teams to drive innovation around 5G and to co-create new 5G concepts that take advantage of complementary technologies like spatial computing, edge computing, and artificial intelligence. Joshua sits at the intersection of emerging-technology applications and the nitty-gritty under the hood that makes it all work, and he uses this vantage point to craft compelling stories that combine historical context with current and future technology advancements to educate, inspire, and connect the dots for individuals and businesses.

 

Speaker: Navid Abedini, Qualcomm
Title: Heterogeneous Densification of Future Networks
Abstract: To fully realize 5G's potential and enjoy its promised high data rates and unprecedented throughput, future networks must guarantee users reliable coverage and uniform performance anytime, anywhere. This can only be achieved through network densification, deploying orders of magnitude more access nodes than traditional networks. To strike a balance between the offered user experience and the cost of implementation and operation, future networks will comprise access nodes of different types and functionalities: macro cells, small cells, relay nodes (e.g., integrated access and backhaul (IAB) nodes), wired and wireless remote units, smart RF repeaters, intelligent reflecting surfaces, and so on. In this talk, we give an overview of recent standardization efforts for some of these access nodes and discuss how AI/ML may help manage future dense heterogeneous networks.
Bio: Navid Abedini is a Senior Staff Systems Engineer at Qualcomm Technologies Inc., where he works on the design and implementation of wireless systems. He has been involved in the design and standardization of various technologies, including LTE sidelink, C-V2X, NB-IoT, 5G NR, and NR-IAB (integrated access and backhaul). He received his Ph.D. in electrical and computer engineering from Texas A&M University in 2012 and is the recipient of the 2010 Capocelli Prize for the best paper at the IEEE Data Compression Conference. He holds more than 120 US and international patents and has more than 1000 pending patent applications.

 

Speaker: Seong-Hwan Kim, Xilinx
Title: 5G + AI/ML = Smart World, Addressing Real-World Problems with Smart Technology
Abstract: Smart technology is the result of fusing AI/ML with highly performant computing and video over an ultra-low-latency, high-bandwidth 5G network. From smartphones to Smart Cities and a Smart World, these technologies are making us safer and more productive. This talk will address how AI/ML is being applied to video in real time across a wide range of applications, from Smart Cities to Smart Retail and Smart Hospitals. In Smart Cities, live video feeds from traffic and city cameras, along with drones, are combined to address issues like access control, suspicious-vehicle detection, surveillance using face detection, crowd detection, and vehicle traffic detection. Applied to Smart Retail, it can detect missed scans during product checkout or tickets that have been swapped. In Smart Hospitals, it means in-bed patient monitoring that detects nearly a dozen conditions in which the patient has moved and is in distress, for example a seizure, or requires assistance and has simply waved a hand to get attention. Wireless 5G networks provide the fabric on which the video and the AI/ML-driven computations and results travel. With latency targets for these smart technologies in the tens of milliseconds, no other broadband network is up to the task. AI/ML, combined with 5G, is making the world safer and more productive.
Bio: Seong Kim is a Senior Director and Datacenter Architect, leading the datacenter architect and specialist team at Xilinx. His key focus areas include data center application acceleration and offload for compute, network, and storage platforms, as well as NFV acceleration solutions. His recent acceleration work spans machine learning, video transcoding, database acceleration, and smart NICs. Prior to his current role, he served as a system architect at AMD. He has more than 20 years of industry and research experience in networking, and holds a Ph.D. in Electrical Engineering from the State University of New York at Stony Brook and an M.B.A. from Lehigh University.

 

Speaker: Rajarajan Sivaraj, Mavenir
Title: Applying scalable and practical machine learning for real-time programmable optimization in 5G cellular networks
Abstract: In traditional mobile networks, the cellular Radio Access Network (RAN) and core network components interoperate on the basis of protocols and standards, rather than data and intelligence. With the momentum behind advanced ML/AI techniques, open-source technologies, SDN/NFV, and the network edge cloud, there has been growing interest in the telecom industry in moving from closed-source solutions on vendor-proprietary hardware to intelligent, data-driven, open solutions on third-party commercial off-the-shelf platforms. This has opened opportunities for multiple players in the telecom space, fostering an innovative and competitive third-party ecosystem. In the past, the closed architecture of cellular networks made it challenging to incorporate such solutions in operational mobile networks; with the advent of 5G, disaggregated and open RAN architectures, and edge-cloud APIs, there is now significant momentum behind building intelligent solutions on operational 5G networks.
In this talk, we will cover the application of scalable and practical ML/AI techniques in 5G cellular networks under two key aspects: (i) real-time optimization of the cellular RAN, and (ii) end-to-end network and application optimization. We leverage SDN/NFV, the O-RAN-architected RAN Intelligent Controller (RIC), and edge-cloud APIs to achieve this. In particular, we discuss how ML/AI can be used to predict and optimize RAN latency in real time for LTE-NR dual-connected 5G users by leveraging the RIC. We also discuss how to optimize IP packet sizes in the core network based on ML/AI-driven RAN latency prediction, improving the goodput and latency of the end-user application and the throughput of the network by leveraging network edge-cloud APIs. We further discuss the relevance of these techniques to network slicing for enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) applications.
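
As a hedged sketch of what the latency-prediction step might look like (the KPI features, synthetic labels, and model choice are our assumptions, not Mavenir's implementation), a regression model can be trained on per-UE RAN counters and its predictions handed to a RIC xApp that drives the optimization:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(42)
    n = 5000
    # Assumed per-UE RAN KPI features (stand-ins for real counters):
    # PRB utilization, CQI, HARQ retransmission rate, RLC buffer occupancy (KB)
    X = np.column_stack([
        rng.uniform(0, 1, n),
        rng.integers(1, 16, n).astype(float),
        rng.uniform(0, 0.3, n),
        rng.exponential(50, n),
    ])
    # Synthetic latency label (ms), loosely increasing with load and retransmissions
    y = 5 + 20 * X[:, 0] + 40 * X[:, 2] + 0.05 * X[:, 3] - 0.5 * X[:, 1] + rng.normal(0, 2, n)

    model = GradientBoostingRegressor(random_state=0).fit(X[:4000], y[:4000])
    pred = model.predict(X[4000:])
    print("mean absolute error (ms):", round(float(np.mean(np.abs(pred - y[4000:]))), 2))
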
Bio: Dr. Rajarajan Sivaraj is a Principal Technical Architect at Mavenir, where he focuses on RAN analytics and optimization using open RAN (O-RAN) and virtualized RAN (vRAN) technologies. His work centers on SDN/NFV-based RAN optimization using the O-RAN-architected RAN Intelligent Controller, disaggregating the RAN, and facilitating end-to-end intelligence in cellular networks using edge-cloud APIs. He is also experienced in evaluating the performance of next-generation mobile streaming applications over cellular networks, such as AR/VR and 360-degree video. Dr. Sivaraj completed his Ph.D. in Computer Science at the University of California, Davis in 2016 and has published his research in highly selective peer-reviewed conferences and journals. He holds granted and pending patents, has received best-paper nominations for his research, and has served on editorial boards, as a peer reviewer, and on technical program committees for a number of international conferences and journals. Dr. Sivaraj also actively contributes to standards bodies such as the O-RAN Alliance. Prior to joining Mavenir, he worked at AT&T Labs, Microsoft Research, Intel Labs, NEC Labs America, xRAN.org, Broadcom, and TCS Innovation Labs Bangalore, and he has rich international experience, having worked in five countries (USA, UK, Australia, France, and India). Dr. Sivaraj continues to work on exciting problems in the 5G and beyond-5G space and firmly believes that the blend of technology and telecom will be a major disruption, transforming the lives of billions of people through enhanced connectivity and enriched services.

 

Speaker: Wenting Sun, Ericsson
Title: Advanced AI Applications for 5G
Abstract:
This talk will cover several highly promising AI technologies applicable to 5G, including causal inference, graphs and graph machine learning, and transfer learning. Given the distributed nature of 5G and the availability of huge amounts of data, we are presented with a unique opportunity and challenge to build smarter systems that leverage AI. The talk illustrates the general principles and practical use cases of how these AI technologies can provide advantages in various 5G applications.
Bio:
Wenting Sun is a Senior Data Science Manager at Ericsson. She leads a team of data scientists and data engineers developing cutting-edge AI/ML applications in the telecommunications domain and drives some of Ericsson's AI-related open-source initiatives. Wenting has worked in AI/ML-related fields for the last 17 years and has more than ten peer-reviewed publications. She holds a bachelor's degree in electrical and electronics engineering and a Ph.D. in intelligent control.

 

Speaker: Zoran Kostic, Columbia University, NYC
Title: High-Bandwidth Low-Latency Applications using AI and Edge Computing Infrastructure – A Snapshot
Abstract: This talk reviews experiences from experimental studies of high-bandwidth, low-latency applications that utilize video-based deep learning. Using the example of the COSMOS smart-city intersection, we illustrate one approach to privacy preservation and show how it affects object-detection accuracy. We discuss the development toolchain, debugging, profiling, and evaluation of results, and provide some insight into the challenges that must still be overcome to support reliable low-latency, high-bandwidth applications.
Bio: Zoran Kostic completed his Ph.D. in Electrical Engineering at the University of Rochester and his Dipl. Ing. degree at the University of Novi Sad. He spent most of his career in industry, working in research, product development, and leadership positions. He is presently on the faculty of Electrical Engineering at Columbia University in New York City. Zoran's expertise spans wireless communications, signal processing, system-on-chip development, and applications of parallel computing.
His research addresses Internet of Things systems and physical data analytics, smart cities, and applications of deep learning for smart cities and in medical therapeutics. His work comprises a mix of research, system architecture and software/hardware development, which resulted in a notable publication record, three dozen patents, and critical contributions to successful products. He has experience in Intellectual Property consulting.

 

Speaker: Ken Birman, Cornell University
Title: Enabling Cloud Intelligence for the 5G Edge
Abstract:
We generally think of 5G purely as a telephony architecture, yet 5G will also open the door to “cloud edge” computing as a scalable technology for apps that run on mobile phones, enabling mobile devices to tap into real-time big data analytics and real-time machine intelligence. The challenge is that to use 5G with the cloud in this manner, we will need instant, correct reactions based on rapidly changing real-world data. In this talk, I'll describe recent work to make the cloud friendlier to demanding real-world applications. Cornell's Derecho and Cascade technologies, on which this talk is based, can be downloaded for free open-source use from GitHub.com/Derecho-Project.
Bio:
Ken Birman is the N. Rama Rao Professor of Computer Science at Cornell. An ACM Fellow and the winner of the IEEE Tsutomu Kanai Award, Ken has written three textbooks and published more than 150 papers in prestigious journals and conferences. Software he developed operated the New York Stock Exchange for more than a decade without trading disruptions, and it still plays central roles in the French air traffic control system and the US Navy AEGIS warship. Other technologies from his group found their way into IBM's WebSphere product, Amazon's EC2 and S3 systems, Microsoft's cluster management solutions, and the US Northeast bulk power grid. His Vsync system (vsync.codeplex.com) has become a widely used teaching tool for students learning to create secure, strongly consistent, and scalable cloud computing solutions. Derecho is intended for demanding settings such as the smart power grid, smart highways and homes, and scalable vision systems.

 

Speaker: Ricardo Queirós, Ericsson
Title: AI-powered radio access networks
Abstract:
Imagine the network efficiencies that could be gained if radio frequency bands and traffic were managed automatically through artificial intelligence: 5G coverage could be secured, and the user experience would never suffer. We now enable a unique means of optimizing radio access networks by adding AI across the RAN. Ericsson's AI functionalities are optimized for the RAN Compute architecture and have the advantage of running close to the radios, bringing feedback loops down to less than a few milliseconds.
Bio:
Ricardo Queirós is the Head of Product Security, Operations & Maintenance, and Transport in Business Unit Networks, leading a team of product managers and security specialists to drive technology strategies and product competitiveness. He is involved in the key strategic decisions that ensure best-in-class product security in the industry. Ricardo's work is an important part of helping operators automate RAN operations, boost network performance, and shorten time-to-market for new features by enabling advanced artificial intelligence and machine learning capabilities in radio access networks.
Ricardo has over 10 years of experience in the telecommunications sector, combining his business management and technical acumen in pre-sales, product and portfolio management. Previously, Ricardo worked as a Portfolio Manager responsible for defining the RAN portfolio strategies to ensure a profitable and competitive portfolio. He also managed customer and partner engagement to align business objectives with ecosystem development.
Ricardo joined Ericsson in 2006 and has since held many positions in the company, leading and managing telecom projects and product development strategies onsite in over 20 countries in Europe, Asia, Latin America and Africa.
Ricardo holds an MSc in Electrical Engineering (specialization in telecommunications) from the University of Coimbra, Portugal and an Executive MBA from ICADE Business School, Spain.

 

Speaker: Joe Krystofik, Fujitsu Network Communications
Title: Practical 5G Use Cases and Machine Learning Adoption
Abstract: Machine learning is proving to play a significant role across many facets of the 5G landscape. This presentation explores 5G use cases that convert the benefits of ML technology into real-world value. It also covers the underlying ML technology and the key role that data pipeline management plays in successfully deploying ML-based solutions.
Bio: Joe Krystofik is a Product Manager at Fujitsu Network Communications Inc., where he is responsible for leading AI & ML driven software automation. Prior to joining Fujitsu, he was a Network Planner at Verizon Communications where he focused on the technology evolution of the global IP/MPLS Core network. He holds a Bachelor of Science in Engineering from Trinity University in San Antonio and a Master of Science in Electrical Engineering from Cal Poly in San Luis Obispo.

 

Speaker: Henrik Rydén, Ericsson
Title: ML in 5G and beyond networks
Abstract: Machine learning (ML) as a tool for enabling automation in radio access networks (RANs) grows more important each year due to network densification and the growth in data consumption. Driven by improved processing, enhanced software techniques, and access to massive amounts of data, ML techniques promise to combine simplification with improved performance and efficiency. ML methods allow network operators to solve problems that are challenging for traditional algorithms, to optimize several variables jointly, and to optimize a sequence of decisions. To leverage the potential of ML in wireless communication systems, a deep understanding of existing systems is vital: combining ML competence with domain knowledge is crucial to solving the right problem in a simple and efficient way. This presentation will provide an overview of the challenges, learnings, and opportunities in introducing ML in 5G and beyond networks.
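
As a toy illustration of "optimizing a sequence of decisions" (an assumed example, not Ericsson's method), an epsilon-greedy agent can learn online which of a few candidate radio settings yields the best noisy throughput reward:

    import numpy as np

    rng = np.random.default_rng(5)
    true_mean = np.array([1.0, 1.5, 1.2])  # hidden mean throughput per setting (assumed)
    q, counts = np.zeros(3), np.zeros(3)

    for step in range(2000):
        # Explore with probability 0.1, otherwise exploit the current best estimate
        a = rng.integers(0, 3) if rng.random() < 0.1 else int(np.argmax(q))
        reward = true_mean[a] + rng.normal(0, 0.3)
        counts[a] += 1
        q[a] += (reward - q[a]) / counts[a]  # incremental mean update

    print("learned values:", np.round(q, 2), "-> best setting:", int(np.argmax(q)))
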
Bio: Henrik Rydén is a Senior Researcher in Radio at Ericsson Research in Kista, Sweden. He joined Ericsson in 2014 and currently works as a technical coordinator for a project targeting ML for radio networks. Henrik has co-authored several scientific papers and patents within the area of ML for radio access networks. In 2014, he received an M.Sc in Electrical Engineering from Linköping University, Sweden.

 

Speaker: Sayandev Mukherjee, CableLabs
Title: Distributed intelligence for dynamic resource allocation in 5G networks
Abstract:
In 5G networks, we expect resource allocation problems to be: (a) localized, because of small cells; (b) more numerous, because of denser cells and higher traffic; and (c) heterogeneous, because of the variety of applications and devices on such networks. Centralized resource allocation is a poor fit for such scenarios, and we argue in favor of distributed solutions to these allocation problems. Moreover, we want these solutions to be automated in order to scale to the demands of 5G networks. We propose market-based resource allocation solutions built on economic models of utility and show that they are distributed and lead to efficient allocation of resources while maintaining the quality of service perceived by both users and network operators. Such market-based solutions are automated by machine learning models that predict each user's resource demand, so that the predicted demand can be satisfied by resources the user acquires through a market transaction. We illustrate such intelligent market-based resource allocation for the case of a user supported by a cluster of base stations under coordinated multi-point (CoMP) transmission.
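
A minimal sketch of the flavor of such a scheme (the predictor, utility, and numbers are illustrative assumptions, not the speaker's algorithm): predict each user's demand from its history, then split a fixed resource budget by maximizing a weighted log-utility, whose optimum has a simple closed form:

    import numpy as np

    rng = np.random.default_rng(1)
    history = rng.exponential(10.0, (5, 20))  # recent demand of 5 users over 20 slots
    predicted = history.mean(axis=1)          # naive demand predictor: historical mean

    budget = 100.0                            # total resource units on offer
    weights = predicted / predicted.sum()
    # Maximizing sum_i w_i * log(x_i) subject to sum_i x_i = budget gives
    # the closed-form optimum x_i = w_i * budget (proportional-fair style).
    allocation = weights * budget
    for user, x in enumerate(allocation):
        print(f"user {user}: predicted demand {predicted[user]:.1f}, allocated {x:.1f}")
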
Bio:
Dr. Sayandev Mukherjee received his M.S. and Ph.D. degrees (both in Electrical Engineering) from Cornell University, Ithaca, NY in 1994 and 1997 respectively.  His graduate research work was on the training of deep learning models, then called “artificial neural networks.”  Since then, his career has combined the two threads of wireless communications research and machine learning at corporate research labs (Futurewei/Huawei R&D, 2018-2019; DOCOMO, 2010-2018; Bell Labs, 1996-2006), startups (SpiderCloud Wireless, 2008-2010), and industry (Marvell Semiconductor, 2006-2008).  He is currently a Principal Architect in the Next Generation Systems group at CableLabs, working on applications of AI/ML in both wired and wireless communication systems.   He is a Senior Member of the IEEE and the author of two research monographs: Analytical Modeling of Heterogeneous Cellular Networks (2014; Chinese edition forthcoming in 2020) and Stochastic Geometry Analysis of Cellular Networks (2018), both published by Cambridge University Press.

 

Speaker: Siddhartha Sen, Microsoft Research
Title: Responsible AI for Networked Systems
Abstract:
I will describe a responsible methodology for applying AI to networked systems that is minimally disruptive, synergistic with human solutions, and safe. First, I will develop a paradigm that combines reinforcement learning with the ability to ask counterfactual (“what if”) questions about a decision-making system, and show how to use it to exploit the natural information these systems emit. Then, I will describe an abstraction called a “safeguard” that protects an AI system from violating a safety specification, while allowing the system and the safeguard to co-evolve. We apply this methodology to several infrastructure systems in our Azure cloud and edge.
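
To make the counterfactual paradigm concrete, here is a toy inverse-propensity-scoring (IPS) sketch under assumed logging and target policies; the systems described in the talk are far richer:

    import numpy as np

    rng = np.random.default_rng(7)
    n_logs, n_actions = 10000, 3
    logged_actions = rng.integers(0, n_actions, n_logs)  # logging policy: uniform random
    logging_prob = 1.0 / n_actions
    reward = (logged_actions == 2).astype(float)         # in this toy world, action 2 is best

    def new_policy_prob(actions):
        # Hypothetical new policy: plays action 2 with prob 0.8, the others with 0.1 each
        return np.where(actions == 2, 0.8, 0.1)

    # IPS reweights logged rewards by how likely the new policy was to take each action,
    # estimating the new policy's value without ever deploying it.
    ips = np.mean(reward * new_policy_prob(logged_actions) / logging_prob)
    print("estimated reward of new policy:", round(float(ips), 3))  # close to the true value 0.8
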
Bio:
Siddhartha Sen is a Principal Researcher in the Microsoft Research New York City lab, and was previously a researcher in the MSR Silicon Valley lab. He uses data structures, algorithms, and machine learning to build more powerful distributed systems. His current mission is to use AI to optimize cloud infrastructure in a way that is minimally disruptive, synergistic with human solutions, and safe. Siddhartha received his BS degrees in computer science and mathematics and his MEng degree in computer science from MIT. From 2004 to 2007 he worked as a developer at Microsoft and built a network load balancer for Windows Server. He returned to academia and completed his PhD at Princeton University in 2013. Siddhartha received the inaugural Google Fellowship in Fault-Tolerant Computing in 2009, the best student paper award at PODC 2012, and the best paper award at ASPLOS 2017.

 

Workshop Co-Chairs:

Dr. Deepak Kataria
Chair, IEEE Princeton Central Jersey Section (Region 1)

Dr. Anwar Walid
Director of Network Intelligence and Distributed Systems Research, Nokia Bell Labs