David Moloney is Director of Machine Vision at Intel Corp and has worked in the semiconductor industry since qualifying with a BEng from DCU in 1985. He has a wealth of international experience, having worked for Infineon in Munich for five years and SGS-Thomson Microelectronics (STM) in Milan for four years.

In 1994 he returned from STM to lead the engineering team for the first product development at Parthus Technologies, where he was a key member of the management team and spearheaded the development of Parthus' Bluetooth technology. David left Parthus in 2003 to work towards his PhD at Trinity College Dublin and as an independent consultant for Frontier Silicon and Dublin City University. He co-founded Movidius as CTO in 2005; the company went on to pioneer low-power embedded vision and neural-network processing in edge devices before being acquired by Intel in November 2016. He received a PhD from Trinity College Dublin in 2010 for his research into high-performance computer architectures. His interests span SoC and embedded processor design, deep-learning hardware, computer vision, robotics and navigation. David is the co-inventor of 36 issued patents and co-author of 32 conference and journal papers.

Title: A giant leap for AI

Since AlexNet won the ImageNet challenge in 2012, the use of deep learning for image recognition, semantic segmentation, voice and NLP applications has proliferated with astonishing speed. Applications that were primarily cloud-based have now begun to devolve to the network edge, driven by the availability of low-cost, low-power edge inference devices such as the Movidius Myriad X. Edge devices, primarily CCTV cameras, now generate 2,500 petabytes of data per day, and that volume is growing. In 2019 edge inference will handle 50% of the total inference workload, eclipsing the cloud for the first time, and it will quickly accelerate to handle the vast bulk of inference in the coming years.

The market for space-based remote sensing applications is fundamentally limited by uplink and downlink bandwidth and by the onboard compute capability of satellites and spacecraft. This talk details how the compute capability of these platforms can be vastly increased by leveraging commercial off-the-shelf (COTS) system-on-chip (SoC) technology. The resulting orders-of-magnitude increase in processing power can then be applied to consuming data at the source rather than on the ground, allowing the deployment of value-added applications in space that consume a tiny fraction of the downlink bandwidth that would otherwise be required. The solution has the potential to revolutionise Earth Observation (EO) and other remote sensing applications, greatly reducing the time and cost of deploying new value-added services to space compared with the state of the art. We'll also discuss the first results on the radiation tolerance and power/performance of these COTS SoCs for space-based applications, and map the trajectory towards Low Earth Orbit (LEO) trials and the complete life-cycle for space-based Artificial Intelligence (AI) classifiers on orbital platforms and spacecraft.


Earl McCune is a Fellow of the IEEE and the author of two books with Cambridge University Press. He has 93 issued US patents and serves as Chair of the IEEE Standards working group on Energy Efficient Communication Hardware. He is a professor of Sustainable Wireless Systems at TU Delft and the CTO of Eridan Communications. He represents MTT and SSCS on two IEEE Initiatives: Sustainable ICT and Future Networks.



Title: A Review of Challenges and Testing Options to Help Ensure Successful Implementation of 5G New Radio Systems

5G heavily leverages LTE radio technology, so there are both old and new aspects to this New Radio system that present challenges to physical-layer designers. Major challenges include antenna coupling across arrays, transistor issues and needed measurements, amplifier issues and new characterizations, RF safety at mmWave frequencies, and thermal issues with arrays of radios. The talk discusses where these issues come from and the design problems they represent. New measurement techniques could help point the way toward successful solutions.

Miriam Leeser is Professor of Electrical and Computer Engineering at Northeastern University and currently a sabbatical visitor at Maynooth University. She has been doing research in hardware accelerators, including FPGAs and GPUs, for decades, and has done groundbreaking research in floating-point implementations, unsupervised learning, medical imaging, privacy-preserving data processing and wireless networking. She received her BS degree in Electrical Engineering from Cornell University, and Diploma and PhD degrees in Computer Science from Cambridge University in England. She has been a faculty member at Northeastern since 1996, where she heads the Reconfigurable Computing Laboratory and is a member of the Computer Engineering group. She is a senior member of ACM, IEEE and SWE, and the recipient of an NSF Young Investigator Award. Throughout her career she has been funded by both government agencies and companies, including DARPA, NSF, Google, MathWorks and Microsoft. She received the prestigious Fulbright Scholar Award in 2018.

Title: Billions and billions -- or what to do with all those transistors?

Modern computer chips can contain billions of transistors. There has been a revival in computer architecture as researchers investigate novel ways to take advantage of this capacity. I will give an overview of three major architectures, namely CPUs, GPUs and FPGAs, and discuss the differences in how they are organized. I will then discuss the differences and challenges from a user's perspective, and highlight the applications that do best on each of these hardware architectures.


Mauro Dragone is Assistant Professor at Heriot-Watt University, Edinburgh Centre for Robotics. He received a BS degree in Computer Science from Bologna University in 1999 and a PhD from University College Dublin in 2009. His research expertise includes cognitive robotics, human-robot interaction, multi-agent systems, software engineering and the Internet of Things. He led the EU project RUBICON, which added self-learning abilities to smart robotic environments. At Heriot-Watt, Dr. Dragone is a co-investigator on the UK projects ORCA-Hub (orcahub.org) and USMART (research.ncl.ac.uk/usmart), developing intelligent sensor and robot networks for monitoring underwater environments. He also set up the Robotic Assisted Living Testbed, a home-like environment for the co-design of innovative solutions for healthy ageing and assisted living applications.

Title: Internet of Robotic Things

The Internet of Robotic Things (IoRT) is an emerging paradigm that brings together autonomous robotic systems and the Internet of Things (IoT) vision of connected sensors and smart objects pervasively embedded in everyday environments. In my talk I will illustrate how this merger can enable novel applications in almost every sector where cooperation between robots and IoT technology can be imagined. I will discuss the associated technical and scientific challenges and show examples from projects that have already started to bridge the gaps between IoT and robotics R&D.
